image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"monitoring"
] | [
{
"code": "# free -lh\n total used free shared buff/cache available\nMem: 188Gi 187Gi 473Mi 56Mi 740Mi 868Mi\nLow: 188Gi 188Gi 473Mi\nHigh: 0B 0B 0B\nSwap: 191Gi 117Gi 74Gi\n\n\n------------------------------------------------------------------\nTop Memory Consuming Process Using ps command\n------------------------------------------------------------------\n PID PPID %MEM %CPU CMD\n 311 49145 97.8 498 mongod --config /etc/mongod.conf\n23818 23801 0.0 3.8 /bin/prometheus --config.file=/etc/prometheus/prometheus.yml\n23162 23145 0.0 8.4 /usr/bin/cadvisor -logtostderr\n25796 25793 0.0 0.4 postgres: checkpointer\n23501 23484 0.0 1.0 /postgres_exporter\n24490 24473 0.0 0.1 grafana-server --homepath=/usr/share/grafana --config=/etc/grafana/grafana.ini --packaging=docker cfg:default.log.mode=console\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 311 systemd+ 20 0 313.9g 184.6g 2432 S 151.7 97.9 26229:09 mongod\n23818 nobody 20 0 11.3g 150084 17988 S 20.7 0.1 8523:47 prometheus\n23162 root 20 0 12.7g 93948 5964 S 65.5 0.0 18702:22 cadvisor\noctopusrs0:PRIMARY> db.serverStatus().mem\n{\n \"bits\" : 64,\n \"resident\" : 189097,\n \"virtual\" : 321404,\n \"supported\" : true\n}\noctopusrs0:PRIMARY> db.serverStatus().tcmalloc.tcmalloc.formattedString\n------------------------------------------------\nMALLOC: 218206510816 (208097.9 MiB) Bytes in use by application\nMALLOC: + 96926863360 (92436.7 MiB) Bytes in page heap freelist\nMALLOC: + 3944588576 ( 3761.9 MiB) Bytes in central cache freelist\nMALLOC: + 134144 ( 0.1 MiB) Bytes in transfer cache freelist\nMALLOC: + 713330688 ( 680.3 MiB) Bytes in thread cache freelists\nMALLOC: + 1200750592 ( 1145.1 MiB) Bytes in malloc metadata\nMALLOC: ------------\nMALLOC: = 320992178176 (306122.0 MiB) Actual memory used (physical + swap)\nMALLOC: + 13979086848 (13331.5 MiB) Bytes released to OS (aka unmapped)\nMALLOC: ------------\nMALLOC: = 334971265024 (319453.5 MiB) Virtual address space used\nMALLOC:\nMALLOC: 9420092 Spans in use\nMALLOC: 234 Thread heaps in use\nMALLOC: 4096 Tcmalloc page size\n------------------------------------------------\nCall ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).\nBytes released to the OS take up virtual address space but no physical memory.\n",
"text": "I have a 5 note replicaset mongoDB - 1 primary, 3 secondaries and 1 arbiter.My Primary is very slow - each command from the shell takes a long time to return.Memory usage seems very high, and looks llike mongodb is consuming more than 50% of the RAM:Top:serverStatus memeory shows this:Actual memory used (306122.0 MiB) seems higher than server’s RAM (188Gi).\nHow does that makes sense, and how can I detrmine what is causing this high memory consumption?Thanks,\nTamar",
"username": "Tamar_Nirenberg"
},
{
"code": "",
"text": "forgot te mention that I am using mong version 4.2.3",
"username": "Tamar_Nirenberg"
},
{
"code": "",
"text": "Sizes:“dataSize” : 688.4161271536723,\n“indexes” : 177,\n“indexSize” : 108.41889953613281,",
"username": "Tamar_Nirenberg"
},
{
"code": "free",
"text": "Hi @Tamar_NirenbergActual memory used (306122.0 MiB) seems higher than server’s RAM (188Gi).I think it’s because it includes actual RAM + swap. It’s visible in free output that the system is using swap here. Up to 74GB of swap usage, it seems:Swap: 191Gi 117Gi 74GiTypically swap usage means that the server doesn’t have enough RAM to properly service the workload. It may be fine if this is not a regular occurrence, but if this is happening all the time, you might need to upgrade the server’s hardware. One silver lining is that the presence of swap allows the server to keep running. Without it, the OOMkiller will just kill the process with the highest memory usage in a not very nice manner using signal 9 (MongoDB, in this case), which is a much more disruptive process.You can check how much work the MongoDB server is doing by using mongostat command. If the output of mongostat indicates a heavily loaded server, you might need to increase the RAM of the server.Memory usage seems very high, and looks llike mongodb is consuming more than 50% of the RAM:It is entirely possible for MongoDB to use more than 50% of RAM. The main ~50% allocation is for WiredTiger use only, and MongoDB would need to allocate more RAM on top of this to service incoming connections (up to 1MB per connection), in-memory sorting, and other operational works.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi KevinI’m struggling with kind of the same situation but in a sharded cluster. Is it possible to limit the amount of RAM that mongo consumes?. I’d like to avoid a crash of one of my servers. This while we analyze mongostats and dicide to add more RAM.Thanks",
"username": "Oscar_Cervantes"
}
] | Mogno slow - very high memory consuption by Mongo | 2021-10-11T06:36:16.336Z | Mogno slow - very high memory consuption by Mongo | 8,167 |
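For Oscar’s follow-up about capping MongoDB’s memory, the usual lever is the WiredTiger cache size. The snippet below is a minimal, hedged mongosh sketch (the 8 GB figure is a placeholder, not from the thread) for inspecting the memory counters Kevin mentions and adjusting the cache at runtime; the same limit can be set persistently via storage.wiredTiger.engineConfig.cacheSizeGB in mongod.conf.

```js
// Inspect current memory usage and the configured WiredTiger cache ceiling
db.serverStatus().mem                                        // resident/virtual in MB
db.serverStatus().wiredTiger.cache["maximum bytes configured"]
db.serverStatus().connections                                // each connection adds overhead on top of the cache

// Illustrative only: lower the WiredTiger cache to ~8 GB at runtime.
// This caps the cache, not the whole process footprint - size it for your workload.
db.adminCommand({
  setParameter: 1,
  wiredTigerEngineRuntimeConfig: "cache_size=8G"
})
```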
null | [
"aggregation",
"ruby",
"mongoid-odm"
] | [
{
"code": "",
"text": "Is there a way to get which cluster serviced a mongo operation from MongoId or the mongo driver? I’m trying to update some instrumentation (NewRelic) that collects Mongo response times to have a per-cluster breakdown (currently it just gets an aggregate time for all Mongo operations). I’m looking for something along the lines of mongo returning some metadata along with the response that associates it with the cluster that serviced the operation.I’ve also already asked the NewRelic folks on this, currently waiting for a response. Asking here as well in case someone has any ideas.",
"username": "Neil_Ongkingco"
},
{
"code": "findaggregate",
"text": "Hi @Neil_Ongkingco,It might help if you could elaborate a bit on what you’re hoping to accomplish. When you issue a find or aggregate command via the driver it will just return a cursor, which doesn’t include any information as to where the operation was executed.",
"username": "alexbevi"
},
{
"code": "Mongo::Monitoring::Global.subscrbe( Mongo::Monitoring::COMMAND, MySubscriber.new)\n\nclass MySubscriber\n def started(event)\n cluster_id = get_cluster_id_from(event.address, event.database_name) #derive cluster from address and dbname\n #log metric using cluster id, event.command\n end\nend\n",
"text": "ich cluster serviced a mongo operation from MongoId or the mongo driver? I’m trying to update some instrumentation (NewRelic) that collects Mongo response times to have a per-cluster breakdown (currently it just gets an aggregate time for all Mongo operations). I’m looking for something along the lines of mongo returning some metadata along with the response that associates it with the cluster that serviced the operation.ts an aggregate time for all Mongo operations). I’m looking for something along the lines of mongo returning some metadata along with the response that associates it with the cluster that serviced the operation.I was looking to log metrics that track mongo operation response times per cluster. I’ve sorta figured something out using Mongo::Monitoring::Global.subscribe to subscribe to mongo driver commands and using event.address + event.database_name to figure out the cluster used by the operation. Psuedo code is something like this:Still working on get_cluster_id, but I think its doable using Mongoid::Config.clients along with event.address/database_name",
"username": "Neil_Ongkingco"
}
] | Getting cluster info on mongo operations (Ruby) | 2023-02-23T07:11:10.540Z | Getting cluster info on mongo operations (Ruby) | 1,085 |
null | [
"aggregation"
] | [
{
"code": "[\n {\n id: 1,\n name: 1,\n list: [\n {\n id: 11,\n name: 11,\n list: [\n [\n { id: 111, name: 111 },\n { id: 112, name: 112 }\n ]\n ]\n }\n ]\n },\n {\n id: 6,\n name: 6,\n list: [\n {\n id: 62,\n name: 12,\n list: [ [ { id: 111, name: 111 } ] ]\n }\n ]\n }\n]\n{\n $project: {\n id: 1, name: 1,\n list: {\n id: 1, name: 1,\n list: {\n $filter: {\n input: '$list.list',\n as: 'item',\n cond: { $eq: ['$$item.name', 111] }\n }\n }\n }\n }\n}\ndb.runoob.aggregate([{ $match: { $or: [{ 'name': 1, 'list.name': { $eq: 11 } }, { name: 6, 'list.name': { $eq: 12 } }] } }, { $project: { id:1, name: 1, 'list': { $filter: { input: '$list', as: 'item', cond: { $or: [{ $and: [{ $in: ['$$item.id', [11, 12]] }, {$eq: ['$$item.name', 11]}] }, { $and: [{ $in: ['$$item.id', [61, 62]] }, {$eq: ['$$item.name', 12]}] }] } } } } }, { $project: { id: 1, name: 1, list: { id: 1, name: 1, list: { $filter: { input: '$list.list', as: 'item1', cond: { $eq: ['$$item1.name', 111] } } } } } }])\n",
"text": "I have an array:I hope filter the second-level list array,use commandbut not get the expected result. The complete filter code is as follows:Please help me,thanks ",
"username": "weiming_zhou"
},
{
"code": "{ \"$project\" : {\n \"id\" : 1 , \n \"name\" : 1 ,\n \"list\" : { \"$map\" : {\n \"input\" : \"$list\" ,\n \"as\" : \"top_level_list_element\" ,\n \"in\" : {\n \"id\" : \"$$top_level_list_element.id\" ,\n \"name\" : \"$$top_level_list_element.name\" ,\n \"list\" : { \"$map\" : {\n \"input\" : \"$$top_level_list_element.list\" ,\n \"as\" : \"second_level_list_element\" ,\n \"in\" : { \"$filter\" : {\n \"input\" : \"$$second_level_list_element\" ,\n \"as\" : \"final_level_list_element\" ,\n \"cond\" : { \"$eq\" : [ \"$$final_level_list_element.name\" , 111 ] }\n } }\n } }\n }\n } }\n} }\n",
"text": "You are almost there. Thanks for the well formatted documents and code. However, different names for list, id and name at different level would make the solution easier to understand.What you are missing is a $map for the top level list. Then a 2nd $map for the list field that uses your $filter. Your final $project should look like:",
"username": "steevej"
},
{
"code": "db.collection.aggregate([\n {\n $match: {\n $or: [\n {\n \"name\": 1,\n \"list.name\": {\n $eq: 11\n }\n },\n {\n name: 6,\n \"list.name\": {\n $eq: 12\n }\n }\n ]\n }\n },\n {\n $project: {\n id: 1,\n name: 1,\n \"list\": {\n $filter: {\n input: \"$list\",\n as: \"item\",\n cond: {\n $or: [\n {\n $and: [\n {\n $in: [\n \"$$item.id\",\n [\n 11,\n 12\n ]\n ]\n },\n {\n $eq: [\n \"$$item.name\",\n 11\n ]\n }\n ]\n },\n {\n $and: [\n {\n $in: [\n \"$$item.id\",\n [\n 61,\n 62\n ]\n ]\n },\n {\n $eq: [\n \"$$item.name\",\n 12\n ]\n }\n ]\n }\n ]\n }\n }\n }\n }\n },\n {\n $project: {\n id: 1,\n name: 1,\n list: {\n $map: {\n input: \"$list\",\n as: \"item1\",\n in: {\n id: \"$$item1.id\",\n name: \"$$item1.name\",\n list: {\n $filter: {\n input: \"$$item1.list\",\n as: \"item2\",\n cond: {\n $eq: [\n \"$$item2.name\",\n 111\n ]\n }\n }\n }\n }\n },\n \n }\n }\n }\n])\ndb.collection.aggregate([\n {\n \"$addFields\": {\n \"list\": {\n \"$map\": {\n \"input\": \"$list\",\n \"as\": \"a1\",\n \"in\": {\n id: \"$$a1.id\",\n name: \"$$a1.name\",\n list: {\n \"$map\": {\n \"input\": \"$$a1.list\",\n \"as\": \"a2\",\n \"in\": {\n id: \"$$a2.id\",\n name: \"$$a2.name\",\n list: {\n \"$filter\": {\n \"input\": \"$$a2.list\",\n \"as\": \"a3\",\n \"cond\": {\n $eq: [\n \"$$a3.name\",\n 1111\n ]\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n },\n \n])\ndb.runoob.aggregate([\n { $unwind: '$list' },\n { $unwind: '$list.list' },\n { $unwind: '$list.list.list' },\n {\n $match: {\n $or: [\n {\n 'name': 1,\n 'list.name': { $eq: 11 },\n 'list.list.name': { $eq: 112 }\n },\n {\n name: 6,\n 'list.name': { $eq: 12 }\n }]\n }\n }]);\n",
"text": "@steevej Thanks a lot.Today I have tried to use three ways, one of thems is very similar to yours, the three codes are as follows:Because my array nesting level will be very deep, whether there are other better options?thanks again ",
"username": "weiming_zhou"
},
{
"code": " list: {\n $map: {\n input: \"$list\",\n as: \"item1\",\n in: {\n id: \"$item1.id\",\n name: \"$item1.name\",\n list: {\n $filter: {\n input: \"$item1.list\",\n { $unwind: '$list' },\n { $unwind: '$list.list' },\n { $unwind: '$list.list.list' },\n{ _id: ObjectId(\"63feb634f908e96a872caa32\"),\n id: 1,\n name: 1,\n list: \n { id: 11,\n name: 11,\n list: [ { id: 111, name: 111 }, { id: 112, name: 112 } ] } }\n{ _id: ObjectId(\"63feb634f908e96a872caa33\"),\n id: 6,\n name: 6,\n list: { id: 62, name: 12, list: [ { id: 111, name: 111 } ] } }\n[\n { $unwind: '$list' },\n { $unwind: '$list.list' },\n { $unwind: '$list.list' }\n{ _id: ObjectId(\"63feb634f908e96a872caa32\"),\n id: 1,\n name: 1,\n list: { id: 11, name: 11, list: { id: 111, name: 111 } } }\n{ _id: ObjectId(\"63feb634f908e96a872caa32\"),\n id: 1,\n name: 1,\n list: { id: 11, name: 11, list: { id: 112, name: 112 } } }\n{ _id: ObjectId(\"63feb634f908e96a872caa33\"),\n id: 6,\n name: 6,\n list: { id: 62, name: 12, list: { id: 111, name: 111 } } }\n",
"text": "Your code does not match the sample documents you shared.Your codeis missing the fact that list.list is not the list you want to filter. You want to filter each elements of list.list because it is a list of list.Your other code:is wrong in relation to the sample documents and produces no documents. You do not have 3 level of list named list. You have 2. The last level of list has not name as you can see when I perform the 2 first $unwind on your sample data.The following $unwind seriesproduce a better result on your sample documents with:can I use $group or other commands to restore the original structure?Yes you may but it is not recommended. I remember that @Asya_Kamsky wrote about it but I could not locate her post.Or do I finish restoring the structure in my application code?Yes you may but you will have a lot of duplicated data (from the top level to the second to last due to $unwind) to transfer.whether there are other better options?May be. I provided the best I know based on your sample documents.",
"username": "steevej"
},
{
"code": "{\n id: 1,\n name: 1,\n list: [\n {\n id: 11,\n name: 11,\n list: [\n {\n id: 111,\n name: 111,\n list: [\n {\n id: 1111,\n name: 1111\n },\n {\n id: 1112,\n name: 1112\n },\n {\n id: 1113,\n name: 1113\n },\n {\n id: 1114,\n name: 1114\n }\n ]\n },\n {\n id: 112,\n name: 112,\n list: [\n {\n id: 11121,\n name: 1121\n },\n {\n id: 1122,\n name: 1122\n },\n {\n id: 1123,\n name: 1123\n },\n {\n id: 1114,\n name: 1124\n }\n ]\n },\n ]\n },\n {\n id: 12,\n name: 12,\n list: [\n {\n id: 121,\n name: 121,\n list: [\n {\n id: 1211,\n name: 1211\n },\n {\n id: 1212,\n name: 1212\n },\n {\n id: 1213,\n name: 1213\n },\n {\n id: 1214,\n name: 1214\n }\n ]\n },\n {\n id: 122,\n name: 122,\n list: [\n {\n id: 12121,\n name: 1221\n },\n {\n id: 1222,\n name: 1222\n },\n {\n id: 1223,\n name: 1223\n },\n {\n id: 1214,\n name: 1224\n }\n ]\n },\n ]\n }\n ]\n },\n {\n id: 6,\n name: 6,\n list: [\n {\n id: 61,\n name: 11,\n list: [\n {\n id: 111,\n name: 111,\n list: [\n {\n id: 1111,\n name: 1111\n },\n {\n id: 1112,\n name: 1112\n },\n {\n id: 1113,\n name: 1113\n },\n {\n id: 1114,\n name: 1114\n }\n ]\n },\n ]\n },\n {\n id: 62,\n name: 12,\n list: [\n {\n id: 111,\n name: 111,\n list: [\n {\n id: 1111,\n name: 1111\n },\n {\n id: 1112,\n name: 1112\n },\n {\n id: 1113,\n name: 1113\n },\n {\n id: 1114,\n name: 1114\n }\n ]\n },\n ]\n }\n ]\n }\n",
"text": "I apologize for my mistake, because my subsequent tests use more complex data structures, and part of the code only performs simple tests on local data and specific operators to calculate my ideas.\nThe data as follows:@steevej Thank you very much for your patience and serious reply, I think this question has been perfectly answered, And let me learn a lot. ",
"username": "weiming_zhou"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Use aggregate filter multi-level nested array | 2023-02-28T13:12:03.082Z | Use aggregate filter multi-level nested array | 3,548 |
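Because every level in these documents repeats the same { id, name, list } shape, the nested $map/$filter expression steevej describes can also be generated instead of written out by hand, which helps with the "very deep nesting" concern raised above. The helper below is a hypothetical sketch (not from the thread); it builds the projection for any depth and is run here against the three-level sample data, keeping leaves whose name is 1111.

```js
// Hypothetical helper: build the nested $map/$filter expression for N levels.
function listExpr(input, levelsLeft, targetName) {
  if (levelsLeft === 1) {
    // innermost level: keep only the matching leaf objects
    return { $filter: { input, as: "leaf", cond: { $eq: ["$$leaf.name", targetName] } } };
  }
  const v = "lvl" + levelsLeft;
  return { $map: {
    input, as: v,
    in: {
      id: `$$${v}.id`,
      name: `$$${v}.name`,
      list: listExpr(`$$${v}.list`, levelsLeft - 1, targetName)
    }
  } };
}

// three nested list levels, filtering the leaves by name
db.collection.aggregate([
  { $project: { id: 1, name: 1, list: listExpr("$list", 3, 1111) } }
])
```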
null | [] | [
{
"code": "",
"text": "Hi, there:I couldn’t see my atlas page loaded (only got a blank page) after signing in with Okta from my company’s account. There’s no obvious error printed from the chrome console, and I’ve tried the followings, and none of them works:Anyone having the same issue? Anyway I can get support from the MongoDB team?",
"username": "Yu_Zhang"
},
{
"code": "",
"text": "Hi @Yu_Zhang - welcome to the community.The behaviour described does sound odd. Have you tried on a different machine with the same login? Additionally, is anyone else in the same Atlas organization experiencing the same issues?I couldn’t see my atlas page loaded (only got a blank page) after signing in with Okta from my company’s account.Just to also clarify the above, is your Atlas organization configured with federated authentication?Regards,\nJason",
"username": "Jason_Tran"
}
] | MongoDB Atlas webpage is blank after signing in | 2023-02-24T21:53:45.801Z | MongoDB Atlas webpage is blank after signing in | 664 |
null | [
"aggregation"
] | [
{
"code": "{\n \"uid\": 1056066,\n \"event_start_time\": 1677207512684,\n \"event_end_time\": 1677207512684,\n \"article_id\": 5760884\n}\n\n// 2\n{\n \"uid\": 1056066,\n \"event_start_time\": 1677210902918,\n \"event_end_time\": 1677210902918,\n \"article_id\": 5760884\n}\n\n// 3\n{\n \"uid\": 1056066,\n \"event_start_time\": 1677211072966,\n \"event_end_time\": 1677211072966,\n \"article_id\": 5763688\n}\n\n// 4\n{\n \"uid\": 1056066,\n \"event_start_time\": 1677217109856,\n \"event_end_time\": 1677217109856,\n \"article_id\": 5234061\n}\n\n// 5\n{\n \"uid\": 1056066,\n \"event_start_time\": 1677217227239,\n \"event_end_time\": 1677217227239,\n \"article_id\": 5376768\n}\n\n// 6\n{\n \"uid\": 1056066,\n \"event_start_time\": 1677217374833,\n \"event_end_time\": 1677217374833,\n \"article_id\": 4341130\n}\ndb.event_log.aggregate([\n\t{$match: {uid: 1056066}},\n\t{$project: {\n\t\t\t_id: false, \n\t\t\tuid: true, \n\t\t\tarticle_id: '$event_info.article_id', \n\t\t\tevent_start_time: true, event_end_time: true\n\t\t}\n\t},\n\t{$sort: {end_time: -1}}\n])\n{\n \"uid\": 1056066,\n \"event_start_time\": 1677210902918,\n \"event_end_time\": 1677210902918,\n \"article_id\": 5760884\n \"size\": 2\n}\n\n{\n \"uid\": 1056066,\n \"event_start_time\": 1677211072966,\n \"event_end_time\": 1677211072966,\n \"article_id\": 5763688\n \"size\": 1\n}\n\n{\n \"uid\": 1056066,\n \"event_start_time\": 1677217109856,\n \"event_end_time\": 1677217109856,\n \"article_id\": 5234061\n \"size\": 1\n}\n\n{\n \"uid\": 1056066,\n \"event_start_time\": 1677217227239,\n \"event_end_time\": 1677217227239,\n \"article_id\": 5376768\n \"size\": 1\n}\n\n{\n \"uid\": 1056066,\n \"event_start_time\": 1677217374833,\n \"event_end_time\": 1677217374833,\n \"article_id\": 4341130\n \"size\": 1\n}\n",
"text": "I get some data as blow:I want calculate the same article_id’s count as a field into the each doc, Then i want take the first doc which have the same article_id (by event_end_time desc)I’ve written those codes:this is goal data:Hope someone can help me. Thanks ",
"username": "Jamie.C"
},
{
"code": "db.event_log.aggregate([\n\t{$match: {uid: 1056066}},\n\t{$project: {\n\t\t\t_id: false, \n\t\t\tuid: true, \n\t\t\tarticle_id: '$event_info.article_id', \n\t\t\tevent_start_time: true, \n\t\t\tevent_end_time: true\n\t\t}\n\t},\n\t{$sort: {end_time: -1}},\n\t{\n\t\t$group: {\n\t\t\t_id: '$article_id',\n\t\t\ttotal: {$sum: 1},\n\t\t\tuid: {\n\t\t\t\t$last: '$$ROOT.uid'\n\t\t\t},\n\t\t\tarticle_id: {\n\t\t\t\t$last: '$$ROOT.article_id'\n\t\t\t},\n\t\t\tevent_start_time: {\n\t\t\t\t$last: '$$ROOT.event_start_time'\n\t\t\t},\n\t\t\tevent_end_time: {\n\t\t\t\t$last: '$$ROOT.event_end_time'\n\t\t\t}\n\t\t}\n\t}\n])\n_id total uid article_id event_start_time event_end_time \n4300316\t1\t1056066\t4300316\t1677467266155\t1677467266155\n5343571\t1\t1056066\t5343571\t1677228247711\t1677228247711\n3632323\t1\t1056066\t3632323\t1677467620237\t1677467620237\n5002174\t1\t1056066\t5002174\t1677639036940\t1677639036940\n2334732\t1\t1056066\t2334732\t1677652265659\t1677652265659\n5264876\t1\t1056066\t5264876\t1677652331254\t1677652331254\n5763688\t1\t1056066\t5763688\t1677211072966\t1677211072966\n5197303\t1\t1056066\t5197303\t1677224565511\t1677224565511\n4338164\t1\t1056066\t4338164\t1677639003483\t1677639003483\n4595799\t4\t1056066\t4595799\t1677570628426\t1677570628426\n5361738\t1\t1056066\t5361738\t1677463603175\t1677463603175\n5768443\t1\t1056066\t5768443\t1677637687937\t1677637687937\n5689085\t1\t1056066\t5689085\t1677462042636\t1677462042636\n4186307\t3\t1056066\t4186307\t1677227656475\t1677227656475\n669526\t1\t1056066\t669526\t1677463695072\t1677463695072\n5241195\t1\t1056066\t5241195\t1677549155401\t1677549155401\n5167535\t1\t1056066\t5167535\t1677462079466\t1677462079466\n5363806\t1\t1056066\t5363806\t1677217613371\t1677217613371\n5179192\t1\t1056066\t5179192\t1677467302756\t1677467302756\n5343345\t4\t1056066\t5343345\t1677570486194\t1677570486194\n",
"text": "Finally i’ve got the goal data i wanted, but the aggregate steps is seems to be so complex and redundance.so i wondering if there is a simply way to get this data There are codes:",
"username": "Jamie.C"
},
{
"code": "",
"text": "I see a bug in your code.Your $sort on end_time but end_time is not projected in the previous. So it means yous $sort on a non existing field.",
"username": "steevej"
},
{
"code": "",
"text": "Oh it’s a rename field from event_end_time i earlier defined, i mush forgotten to turn it back. It’s must be the reason the data is order by default. so i use the $last instead to $first\nThank u~",
"username": "Jamie.C"
}
] | How to calculate same doc's size then take the first one? | 2023-03-01T09:06:19.075Z | How to calculate same doc’s size then take the first one? | 339 |
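A tidier variant of the pipeline above, offered only as an untested sketch using the field and collection names from the thread: sort once by event_end_time descending, then let a single $group both count the duplicates and keep the newest document per article_id, and restore the document shape with $replaceRoot.

```js
db.event_log.aggregate([
  { $match: { uid: 1056066 } },
  { $sort: { event_end_time: -1 } },            // newest event first
  { $group: {
      _id: "$event_info.article_id",
      size: { $sum: 1 },                        // how many events share this article_id
      doc: { $first: "$$ROOT" }                 // the most recent one
  } },
  { $replaceRoot: { newRoot: { $mergeObjects: ["$doc", { size: "$size" }] } } }
])
```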
[
"dot-net",
"transactions"
] | [
{
"code": "",
"text": "We have upgraded to Latest Version of Realm .NET SDK in our Xamarin Forms App.\nReplaced RealmObject to IRealmObject interface and updated corresponding classes/methods.Everything was working fine with the older version but with upgrade we started getting below issuesBut now a new issue is coming which I am not able to understand how to fix as it was working fine in the previous version. Please let us know if we have to make any specific changes for the BackLinks.\nError/Exception we are getting is as below :\nErrors1584×2588 805 KB\nPlease let us know if any specific changes we have to make in the migration or provide us a link or document for migration guide.We have upgraded to 10.20.00 from 10.13.00 .",
"username": "Dharmendra_Kumar2"
},
{
"code": "Using backlinks is only possible for managed(persisted) objectsCannot modify managed objects outside of write transaction",
"text": "For the Using backlinks is only possible for managed(persisted) objects error, the issue is that you’re attempting to access a backlinks property of an unmanaged object. This has never been supported as unmanaged objects wouldn’t have anything in the database linking to them, but we wanted to make this explicit by throwing an exception. Can you clarify in what situations you’re attempting to access the backlinks property - it’s possible we were too restrictive with this change.Regarding Cannot modify managed objects outside of write transaction - when are you getting this error?",
"username": "nirinchev"
},
{
"code": "using var transaction = await GetTransaction();\nvar result = new SomeDTOClass{\nRealmObjectTypeClass1Property1 = SomeObjectMapper.Adapt<RealmObjectTypeClass1>> (InstanceOfDtoClass),\nRealmObjectTypeClass2Property2 = SomeObjectMapper.Adapt<RealmObjectTypeClass2>(InstanceOfDtoClass),\n RealmObjectTypeClass3Property3 = SomeObjectMapper.Adapt<RealmObjectTypeClass3>(InstanceOfDtoClass),\nRealmObjectTypeClass4Property4 = SomeObjectMapper.Adapt<RealmObjectTypeClass4>(InstanceOfDtoClass),\n}\ntransaction?.Commit();\n",
"text": "Using backlinks is only possible for managed(persisted) objects : We don’t have any unmanaged object. We have simple .net classes implementing IRealmObject interface and IEquality interface.Cannot modify managed objects outside of write transaction : This is coming when we are setting properties of class who implements IRealmObjects interface … Something like belowSo sometime it works fine and some time it rhows exception",
"username": "Dharmendra_Kumar2"
}
] | Using Backlinks is only possible for managed(persisted) objects | 2023-03-01T15:28:09.788Z | Using Backlinks is only possible for managed(persisted) objects | 821 |
null | [
"node-js",
"field-encryption"
] | [
{
"code": " const keyVaultDatabase = \"encryption\";\n const keyVaultCollection = \"__keyVault\";\n const keyVaultNamespace = `${keyVaultDatabase}.${keyVaultCollection}`;\n const keyVaultClient = new MongoClient(uri);\n await keyVaultClient.connect();\n const keyVaultDB = keyVaultClient.db(keyVaultDatabase);\n // Drop the Key Vault Collection in case you created this collection\n // in a previous run of this application.\n await keyVaultDB.dropDatabase(); // here*************\n // Drop the database storing your encrypted fields as all\n // the DEKs encrypting those fields were deleted in the preceding line.\n await keyVaultClient.db(\"medicalRecords\").dropDatabase(); // here*********\n",
"text": "reference link: docs-in-use-encryption-examples/make_data_key.js at main · mongodb-university/docs-in-use-encryption-examples · GitHubtrying to implement client side field level encryption , doubt in step while creating Data encryption key in which we are droping the data base\ngithub line 43 and 46 , in this post marked as //here************is it necessary to generate new key each time and should we drop the encrypted database, and why are we deleting medical records here, is it necessary to these steps each time server restarts\ncan’t we use the same data encryption key again and again storing it in some place ?Thank you",
"username": "haswanth_reddy"
},
{
"code": "",
"text": "Hello Haswanth_reddy and welcome!The createKey is a one time operations and you do not need to, nor should you, drop and recreate them. The encrypted data encryption keys will be stored in the keyVault and used for the encrypt/decrypt operations. Once you do the initial setup they keys are used from that point on.Cynthia",
"username": "Cynthia_Braund"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Data encryption key, should it be unique each time server is restarted ? and should we drop the database storing keyValut as shown in the below example? | 2023-02-28T19:44:26.528Z | Data encryption key, should it be unique each time server is restarted ? and should we drop the database storing keyValut as shown in the below example? | 978 |
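To reuse one data encryption key across restarts, as Cynthia describes, the key only has to be created once and then looked up from the key vault on every subsequent start. The snippet below is a hedged Node.js sketch of that "find-or-create" pattern; the key alt name "demo-data-key" and the local KMS provider are illustrative assumptions, not part of the tutorial code.

```js
const { MongoClient } = require("mongodb");
const { ClientEncryption } = require("mongodb-client-encryption");

// Assumed names for illustration only
const keyVaultNamespace = "encryption.__keyVault";
const keyAltName = "demo-data-key";

async function getOrCreateDataKey(uri, kmsProviders) {
  const client = new MongoClient(uri);
  await client.connect();

  // 1. Look for an existing DEK by its alternate name and reuse it
  const existing = await client
    .db("encryption")
    .collection("__keyVault")
    .findOne({ keyAltNames: keyAltName });
  if (existing) return existing._id;

  // 2. Only create a new DEK the very first time
  const encryption = new ClientEncryption(client, { keyVaultNamespace, kmsProviders });
  return encryption.createDataKey("local", { keyAltNames: [keyAltName] });
}
```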
null | [] | [
{
"code": "",
"text": "I was trying to continue learning about the database after I had done lessons and I had to redo the intro course and I set up a new cluster when I logged with the new cluster the lesson would not let me progress and I got this error:Incorrect solution 11/11Identified multiple Atlas orgs - 3 - Please create a new dedicated Atlas account for your MongoDB University learnings. Please visit ‘Creating and Deploying an Atlas Cluster’ lab in Lesson 2 of Unit 1, ‘Getting Started with MongoDB Atlas’, if you are unsure of the next steps.The only thing I do with this account because we have out own professional server. How can I correct this error and take the classes?",
"username": "Joe_Creaney"
},
{
"code": "",
"text": "Hi @Joe_Creaney,Welcome to the MongoDB Community forums Please create a new dedicated Atlas account for your MongoDB University learnings.We are aware of the issue and a response from the Education Team can be found here.As of now, we’ll suggest you create a dedicated Atlas account for MongoDB University.Let us know if you have any further questions.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hi @Joe_Creaney, There are two options here:",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi again,The answer Kushgara provided above was about “creating Search Index”. And from the details you have provided, I deduce other units also use Atlas CLI in the labs.I have, moments ago, tried the lab in the unit I mentioned and it has passed the first check with “1 project in the selected organization” and passed the second check after I moved a few other projects into that organization.So it is safe to assume this “create new account” problem is solved. However, be aware of possible clashes of projects (as stated in the linked answer) if you do not use Atlas CLI properly in the future with multiple organizations/projects/clusters.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi, I am also having similar issues, although I do not get any error message, but the default\n“Incorrect solution\nSomething went wrong while checking your solution”\nThis I also experienced and skipped in Unit 1, Lesson 2. Now, in lesson 3, I am asked to enter into the learning dedicated account, which I did create, with an authentication token. I follow the steps and am able to succeed. I am asked to go back into Atlas CLI and check my solution but the check fails. I have already terminated original Cluster0 and initiated another free, shared cluster with the desired/suggested name “myAtlasClusterEDU”, added the desired user “myAtlasDBUser” with admin permissions but the check still fails. I’ll open a new thread.",
"username": "matheus_malty"
},
{
"code": "",
"text": "This problem is still there (28 Feb 23). I tried 3 times and nothing seems to work, not even a new auth code or refresh/logout. Strangely enough, the first 2 lessons worked ok with the lab.",
"username": "Monika_K"
},
{
"code": "",
"text": "I wonder why.\nI passed the previous check (create a cluster\"myAtlasClusterEDU\" with a dedicated user “myAtlasDBUser”. But now, the next check wants me to create a database with two collections (“users” and “items”) and insert one document. After doing this, within the tab opened from within the CLI, I complete the task, return to the instruqtUI but the check fails saying “The expected document in the users collection has incorrect values. Please try again.”. Well, that’s wrong. It has the correct values. Can someone give a helping hand here?\ncheers",
"username": "matheus_malty"
},
{
"code": "",
"text": "Hi @matheus_malty,Welcome to the MongoDB Community forums Apologies for the late response.After doing this, within the tab opened from within the CLI, I complete the task, return to the instruqtUI but the check fails saying “The expected document in the users collection has incorrect valuesTo further understand the issue can you share the values you have inserted in the users collection and what command you ran to execute the operation?I follow the steps and am able to succeed. I am asked to go back into Atlas CLI and check my solution but the check fails. I have already terminated original Cluster0 and initiated another free, shared cluster with the desired/suggested name “myAtlasClusterEDU”, added the desired user “myAtlasDBUser” with admin permissions but the check still fails.Also, can you confirm if this issue has been resolved on your end? If not, please feel free to reach out to us.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hi @Kushagra_Kesav,Thanks for your kind answer.can you confirm if this issue has been resolved on your end?Yes, I had inserted two integers as string, not as int. Once this change was made, the check went through.Thanks for the support though.\nCheers,\nMalty",
"username": "matheus_malty"
},
{
"code": "",
"text": "@Monika_K , you should be seeing some error messages. please copy-paste the error to help us understand the problem.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Lab errors in Mongo University | 2023-01-31T19:00:26.714Z | Lab errors in Mongo University | 2,252 |
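For reference, the fix matheus describes (values inserted as strings instead of integers) comes down to the BSON type of the inserted fields. The mongosh snippet below is a hypothetical illustration only; the field names are made up and not the lab's actual ones, the point is just the string-vs-number distinction that the checker cares about.

```js
// Fails a type-sensitive check: numeric values stored as strings
db.users.insertOne({ name: "Ada", age: "34", points: "100" })

// Passes: numeric values stored as numbers
db.users.insertOne({ name: "Ada", age: 34, points: 100 })
```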
null | [
"queries",
"indexes"
] | [
{
"code": "db.test.find({b:2}, {a:1,_id:0}).sort({a:1})\"explainVersion\" : \"1\",\n\t\"queryPlanner\" : {\n\t\t\"namespace\" : \"test.test\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"b\" : {\n\t\t\t\t\"$eq\" : 2\n\t\t\t}\n\t\t},\n\t\t\"maxIndexedOrSolutionsReached\" : false,\n\t\t\"maxIndexedAndSolutionsReached\" : false,\n\t\t\"maxScansToExplodeReached\" : false,\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\"transformBy\" : {\n\t\t\t\t\"a\" : 1,\n\t\t\t\t\"_id\" : 0\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\"filter\" : {\n\t\t\t\t\t\"b\" : {\n\t\t\t\t\t\t\"$eq\" : 2\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\"a\" : 1,\n\t\t\t\t\t\t\"b\" : 1\n\t\t\t\t\t},\n\t\t\t\t\t\"indexName\" : \"a_1_b_1\",\n\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\"a\" : [ ],\n\t\t\t\t\t\t\"b\" : [ ]\n\t\t\t\t\t},\n\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\"a\" : [\n\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"b\" : [\n\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [ ]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 1,\n\t\t\"executionTimeMillis\" : 0,\n\t\t\"totalKeysExamined\" : 6,\n\t\t\"totalDocsExamined\" : 6,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\"nReturned\" : 1,\n\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\"works\" : 7,\n\t\t\t\"advanced\" : 1,\n\t\t\t\"needTime\" : 5,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 0,\n\t\t\t\"restoreState\" : 0,\n\t\t\t\"isEOF\" : 1,\n\t\t\t\"transformBy\" : {\n\t\t\t\t\"a\" : 1,\n\t\t\t\t\"_id\" : 0\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\"filter\" : {\n\t\t\t\t\t\"b\" : {\n\t\t\t\t\t\t\"$eq\" : 2\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"nReturned\" : 1,\n\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\"works\" : 7,\n\t\t\t\t\"advanced\" : 1,\n\t\t\t\t\"needTime\" : 5,\n\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\"saveState\" : 0,\n\t\t\t\t\"restoreState\" : 0,\n\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\"docsExamined\" : 6,\n\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\"nReturned\" : 6,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\"works\" : 7,\n\t\t\t\t\t\"advanced\" : 6,\n\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 0,\n\t\t\t\t\t\"restoreState\" : 0,\n\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\"a\" : 1,\n\t\t\t\t\t\t\"b\" : 1\n\t\t\t\t\t},\n\t\t\t\t\t\"indexName\" : \"a_1_b_1\",\n\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\"a\" : [ ],\n\t\t\t\t\t\t\"b\" : [ ]\n\t\t\t\t\t},\n\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\"a\" : [\n\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t],\n\t\t\t\t\t\t\"b\" : [\n\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"keysExamined\" : 6,\n\t\t\t\t\t\"seeks\" : 
1,\n\t\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\t\"dupsDropped\" : 0\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t},\n\t\"command\" : {\n\t\t\"find\" : \"test\",\n\t\t\"filter\" : {\n\t\t\t\"b\" : 2\n\t\t},\n\t\t\"sort\" : {\n\t\t\t\"a\" : 1\n\t\t},\n\t\t\"projection\" : {\n\t\t\t\"a\" : 1,\n\t\t\t\"_id\" : 0\n\t\t},\n\t\t\"$db\" : \"test\"\n\t},\n",
"text": "My collection has an index {a:1, b:1}, however the querydb.test.find({b:2}, {a:1,_id:0}).sort({a:1})is not coverd (totalDocsExamined is the total number of docs in this collection). below is execution plan output. I understand the ESR rule, but even in that case, both a and b are in the same index tree, and should be able to serve: scan first (sort for free) and filter.\nWhy the engine is instead fetching value of b from disk ?",
"username": "Kobe_W"
},
{
"code": "\"keyPattern\" : {\n\t\t\t\t\t\t\"a\" : 1,\n\t\t\t\t\t\t\"b\" : 1\n\t\t\t\t\t}\n",
"text": "By the ESR rule your key patterndoes not match the query. Your Equality {b:2} is on b while your Sort is on a and your key pattern list a before b.To make it covered, I think the index would then need to be {b:1,a:1}.",
"username": "steevej"
},
{
"code": "",
"text": "Yes i got that part. I use a:1 b:1 intentionally.Though the equal comes after sort, both fields are in the same index. In that case i suppose mongo should be able to fetch value of a directly from index. But why its using disk instead?Any reason behind this or its just not imolemented yet?",
"username": "Kobe_W"
},
{
"code": "",
"text": "But why its using disk instead?FETCH in explain plan does not mean read from disk. It means that the server cannot take the decision only with the index. The storage engine will read the document from disk only if it is not currently in RAM.Why? I am not too sure and you should not worry about why the server is doing the wrong thing with the wrong index.With the appropriate index b:1,a:1 you should get PROJECTION_COVERED with IXSCAN.",
"username": "steevej"
},
{
"code": "",
"text": "you should not worry about why the server is doing the wrong thing with the wrong index.i think this is the key point. Probably it’s just how the logic is defined.",
"username": "Kobe_W"
}
] | Why this query is not covered query? | 2023-03-01T06:43:17.087Z | Why this query is not covered query? | 899 |
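As a concrete check of steevej's suggestion, the untested sketch below (collection name taken from the thread) creates the {b:1, a:1} index and re-runs the query; with that key order the plan should show an IXSCAN feeding a PROJECTION_COVERED stage and totalDocsExamined: 0.

```js
// Index matching the ESR order for this query: Equality on b, Sort on a
db.test.createIndex({ b: 1, a: 1 })

// Same query as before; excluding _id lets the index alone answer it
db.test.find({ b: 2 }, { a: 1, _id: 0 })
  .sort({ a: 1 })
  .explain("executionStats")
// Expect: winningPlan IXSCAN -> PROJECTION_COVERED, totalDocsExamined: 0
```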
null | [
"queries"
] | [
{
"code": "Datecollection-Bcollection-Aarticleidarticeliddatecollection-B{\n\"_id\" : ObjectId(\"6368cef0cb0c042cbc5cc4e8\"),\n\"articleid\" : \"159448182\", \n \"type\" : \"online\",\n \"Date\":\"2023-01-01\"\n}\n {\n \"_id\" : ObjectId(\"34342dd123b0c042cbc5cc4e8\"),\n \"articleid\" : \"159448182\", \n \"guide\" : \"yes\",\n \"Date\":\"2023-04-01\"\n }\n",
"text": "I am trying to update a field name Date in collection-B from collection-A by comparing the articleid of both fields. if articelid id matches then the date will be inserted in the matching document of collection-B\nHow will I update the date field?Collection-ACollection-B",
"username": "Utsav_Upadhyay2"
},
{
"code": "collection-Bcollection-A?collection-Acollection-Barticleid",
"text": "Hello @Utsav_Upadhyay2 ,I am a bit confused with what you want to do with your data, can you please carify?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "collection - Bcollection - Acollection - Bcollection - A4.2.14collection - B",
"text": "I want to update existing date field of collection - B from collection - AI need to update date field from A to B but do not want to keep old date in the collection - B as it got updated from collection - AMongoDB version is 4.2.14Yes, if the articleid value is same then only it should update the date into collection - B",
"username": "Utsav_Upadhyay2"
},
{
"code": "const c_A = \"collection-A\" \nconst c_B = \"collection-B\"\nconst article_id = \"159448182\"\n\nconst match = { \"$match\" : { \n \"articleid\" : article_id\n} } \n\nconst lookup = { \"$lookup\" : {\n \"from\" : c_A ,\n \"localField\" : \"articleid\" ,\n \"foreignField\" : \"articleid\" ,\n \"as\" : \"Date\" ,\n \"pipeline\" : [\n { \"$project\" : { \"_id\" : 0 , \"Date\" : 1 } }\n ]\n} }\n\nconst match_found = { \"$match\" : {\n \"Date.0\" : { \"$exists\" : true }\n} }\n\nconst set = { \"$set\" : {\n \"Date\" : \"$Date.0.Date\"\n} }\n\nconst unwind = { \"$unwind\" : \"$Date\" }\n\nconst project = { \"$project\" : {\n \"_id\" : 1 ,\n \"Date\" : 1\n} }\n\nconst merge = { \"$merge\" : {\n \"into\" : c_B ,\n \"on\" : \"_id\"\n} }\n\n/* And the whole pipeline being */\npipeline = [ match , lookup , match_found , unwind, set , project , merge ] \n",
"text": "One way to do it is the following, assuming articleid is unique in collection-A.I really do not know if it is the best or if the performance are any good.What I like about the concept of doing it with merge rather than a $set update is that you can verify what will be updated by removing the $merge. This way the merge is like a commit. You can build your update incrementally and do the commit once you are happy.",
"username": "steevej"
},
{
"code": "articleid",
"text": "thanks for this answer, but what if I need to match all articleid not only one article id [ means whatever articleid of collection-A matches with collection-B then update the field pubdate in collection-B from A]? also can I use it with a date range in collection-A? if yes please give a small example.",
"username": "Utsav_Upadhyay2"
},
{
"code": "",
"text": "The match stage is optional and was posted to only update the sample document you provided.update the field pubdateThere is no field named pubdate in the sample documents you shared.I do not understand the following.can I use it with a date range in collection-A",
"username": "steevej"
},
{
"code": "Datedatedate",
"text": "Oh that was my mistake totally a typo, I mean to say what if I need to match data from collection-A to collection-B, within a date range field - Date",
"username": "Utsav_Upadhyay2"
},
{
"code": "const match_found = { \"$match\" : {\n \"Date.0\" : { \"$exists\" : true }\n} }\n",
"text": "The stagemakes sure that only B’s document for which an A’s document is found are updated.select data within a date range in collection-AYou may easily add a $match stage to the pipeline of inside the $lookup.",
"username": "steevej"
}
] | How to import a field from collection-A to Collection-B? | 2023-02-27T10:51:15.997Z | How to import a field from collection-A to Collection-B? | 505 |
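steevej's last reply says a date range can be applied by adding a $match inside the $lookup's pipeline. Below is a hedged sketch of that variation: it drops the initial single-articleid $match so every document in collection-B is considered, and the date boundaries are placeholder values, not from the thread. Note that combining localField/foreignField with a pipeline inside $lookup needs a newer server than the 4.2.14 mentioned above; on 4.2 the join condition would have to be expressed with let/$expr inside the pipeline instead.

```js
const lookup = { "$lookup": {
  "from": "collection-A",
  "localField": "articleid",
  "foreignField": "articleid",
  "as": "Date",
  "pipeline": [
    // restrict the joined A documents to a date range (placeholder bounds)
    { "$match": { "Date": { "$gte": "2023-01-01", "$lte": "2023-12-31" } } },
    { "$project": { "_id": 0, "Date": 1 } }
  ]
} }

// same remaining stages as in steevej's pipeline, minus the single-articleid $match
const pipeline = [ lookup, match_found, unwind, set, project, merge ]
db.getCollection("collection-B").aggregate(pipeline)
```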
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "I am trying to pass a field to the sort method via a parameter but it is not working.var sortBy = “deliveryDate”\ndb.orders.aggregate([{$match:{}}, {$sort: {sortBy: -1}}, {$limit: 10}])Note: sortBy can either be orderDate or deliveryDate.What an I doing wrong?",
"username": "Chris_Job1"
},
{
"code": "sortBy{ sortBy: -1 }[sortBy]var sortBy = “deliveryDate”\ndb.orders.aggregate([{$match:{}}, {$sort: {[sortBy]: -1}}, {$limit: 10}])\n",
"text": "Hello @Chris_Job1, Welcome back to the MongoDB community forum,I can see you have assigned the value in sortBy variable but not used it in the aggregation query, you have written the property name statically { sortBy: -1 }, and that will not refer to the variable,You can try this, wrapping property name in array blocks [sortBy],",
"username": "turivishal"
},
{
"code": "",
"text": "That’s what I missed!!\nThank you!!",
"username": "Chris_Job1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Passing a field value to the sort method via a function parameter | 2023-03-01T14:17:51.555Z | Passing a field value to the sort method via a function parameter | 462 |
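Building on turivishal's computed-property answer, here is a small hedged sketch: when the sort field comes from user input, it is worth validating it against an allowlist before splicing it into the $sort stage. The collection and field names are from the thread; the allowlist idea is an addition, not something the thread prescribes.

```js
// Allow only the two fields the application actually sorts on
const allowedSortFields = ["orderDate", "deliveryDate"];

function latestOrders(sortBy) {
  if (!allowedSortFields.includes(sortBy)) {
    throw new Error(`unsupported sort field: ${sortBy}`);
  }
  // computed property name [sortBy] becomes e.g. { deliveryDate: -1 }
  return db.orders.aggregate([
    { $match: {} },
    { $sort: { [sortBy]: -1 } },
    { $limit: 10 }
  ]).toArray();
}

latestOrders("deliveryDate");
```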
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "I have a MongoDB App Services that of course is connected to the MongoDB Atlas Cluster.In the App Services, we have defined a trigger that should be invoked when the status of an entity changes.The problem is that the trigger is only invoked when we go into the Atlas Cluster Collections and change the value of that property directly in the database.However, when the same property is changed within the app and synced to the App Services, the trigger is not invoked.What are we missing?",
"username": "Gagik_Kyurkchyan"
},
{
"code": "",
"text": "Hi, I suspect that the issue you are running into is in the triggers configuration. Most importantly the MatchExpression and/or the Operation Type’s.I would recommend selecting both Replace and Update as event types. Additionally, I think it is a good idea to remove the match expression from your trigger and print out the full event (use EJSON.Stringify()). I suspect you will see that perhaps the Change Event is not exactly how you are expecting it to look like and that the match expression is filtering it out.",
"username": "Tyler_Kaye"
},
{
"code": "EJSON.Stringify()",
"text": "@Tyler_Kaye thanks for getting back.I will try it out. Though, it wasn’t very clear where should I put the EJSON.Stringify(). Should I put it instead of the match expression? Or should I put it inside the function that handles the change event?",
"username": "Gagik_Kyurkchyan"
},
{
"code": "",
"text": "Sorry about the confusion. I think you should put it in the function that handles the change event. Then when it fires, youll see the full change event and perhaps notice why your configuration for the match expression is not working properly.Do you have a match expression by the way? If so, what is it?",
"username": "Tyler_Kaye"
},
{
"code": "{\"updateDescription.updatedFields\":{\"StatusValue\":{\"$numberInt\":\"1\"}}}\nStatusValue",
"text": "Thanks TylerYes, we do have a match expression. It’s set to this valueStatusValue is an integer property on the entity on which we have the trigger.Let me do the debugging and get back to you.",
"username": "Gagik_Kyurkchyan"
},
{
"code": "changeEvent{\n \"txnNumber\": 293,\n \"lsid\": {\n \"id\": {},\n \"uid\": {}\n },\n \"_id\": {\n \"_data\": \"8263FF723C000000092B022C01002B046E5A100418F05FBB91AA496C8332E47CDB539C1E463C5F6964003C64633463343130612D663337632D343161312D623766342D303539656237306131643866000004\"\n },\n \"operationType\": \"update\",\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1677685308,\n \"i\": 9\n }\n },\n \"ns\": {\n \"db\": \"msoisales-dev-eu2-realm\",\n \"coll\": \"Move\"\n },\n \"documentKey\": {\n \"_id\": \"dc4c410a-f37c-41a1-b7f4-059eb70a1d8f\"\n },\n \"updateDescription\": {\n \"updatedFields\": {\n \"StatusValue\": 1,\n \"Description\": \"\\r\\nThe move is booked\"\n },\n \"removedFields\": [],\n \"truncatedArrays\": []\n }\n}\n{\"updateDescription.updatedFields\":{\"StatusValue\": 1}}\n",
"text": "I’ve removed the match expression and the trigger started firing. I logged the changeEvent that’s coming to the trigger function and here’s itI’ve re-enabled the match expression:If we take a look at the JSON payload, the match expression should match the trigger, but unfortunately, it doesn’t. I see the same behaviour - when I change the field from the Atlas Collection directly, the trigger is firing. However, when I change the data from Realm app, it doesn’t fire for some reason.I’ve also compared the JSON payload in both of the cases, and they are matching. Thus, my logical assumption would be that the problem is not in the match expression itself.",
"username": "Gagik_Kyurkchyan"
},
{
"code": "\"\\r\\nThe move is booked\"StateValue",
"text": "I have found the issue, though I am not sure how I can resolve it.The difference between changing the field directly in the Atlas collection and the way the field is changed in the mobile app, is that in the database I change only that single field. In the mobile app, when I change the status to 1, I set the description of the move in the same transaction to \"\\r\\nThe move is booked\".When I changed the business logic on the mobile app to set only the StateValue the match expression started working.How can I update the match expression in a way, that it doesn’t care whether it was a single field that was updated or several of them? It should only care about the fact that the property I have specified has the value I have specified.",
"username": "Gagik_Kyurkchyan"
},
{
"code": "{\"updateDescription.updatedFields\":{\"StatusValue\": 1}}\n{\"updateDescription.updatedFields.StatusValue\": 1}\n",
"text": "I believe you will want this to be:The previous expression was looking for when the entire “updatedFields” was equal to the object \"{ StatucValue: 1 }, but as you pointed out, when multiple values are changed that will not match. The match expression above should be what you are looking for, but let me know if it works for you. Match expressions are just a MongoDB Query on the Change Event document, so this should find any event where the StatusValue field is one of the fields that is updated and it is updated to 1",
"username": "Tyler_Kaye"
},
{
"code": "{\"updateDescription.updatedFields.StatusValue\": 1}",
"text": "{\"updateDescription.updatedFields.StatusValue\": 1}It did the job! Thanks a lot @Tyler_Kaye",
"username": "Gagik_Kyurkchyan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB App Services trigger works only when changing data directly in the database | 2023-03-01T08:44:52.687Z | MongoDB App Services trigger works only when changing data directly in the database | 1,191 |
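Putting Tyler's two suggestions together, here is a hedged sketch of the trigger pieces discussed in the thread: the corrected match expression and a function body that logs the whole change event while debugging. The field name is the one from the thread; the function body itself is only illustrative, not the app's real logic.

```js
// Trigger match expression (with Update and Replace operation types enabled):
// { "updateDescription.updatedFields.StatusValue": 1 }

// Trigger function
exports = async function (changeEvent) {
  // Inspect the real event shape while debugging, as suggested above
  console.log(EJSON.stringify(changeEvent));

  const { updateDescription, documentKey } = changeEvent;
  if (updateDescription && updateDescription.updatedFields.StatusValue === 1) {
    // illustrative follow-up action; replace with the real business logic
    console.log(`Move ${documentKey._id} is now booked`);
  }
};
```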
null | [
"node-js",
"react-native"
] | [
{
"code": "",
"text": "Just checking, GitHub says Realm 10.24.0 is a prerelease version (Realm JavaScript v10.24.0). Is this true or a mistake? I’d assume it to be versioned 10.24.0-prerelease or similar if it was prerelease but maybe that’s an incorrect assumption?",
"username": "Liam_Jones"
},
{
"code": "",
"text": "Still need an answer on this one please!",
"username": "Liam_Jones"
},
{
"code": "alphabetarc",
"text": "Hello @Liam_Jones,Thank you for your question. Apologies for the delay in response.The pre-release versions are often named alpha, beta, or rc (release candidate). At this moment, this is an official release.Please feel free to ask if there are any follow-up questions.Cheers,\nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm 10.24.0 is a prerelease? | 2022-11-14T10:57:37.952Z | Realm 10.24.0 is a prerelease? | 1,600 |
[
"atlas-device-sync",
"react-native"
] | [
{
"code": " \"react\": \"18.1.0\",\n \"react-native\": \"0.70.6\",\n \"realm\": \"^11.3.1\"\nimport Realm from 'realm';\nimport EquipmentSchema from './schemas/Equipment';\n\n// Place Your RealmApp ID Here\nconst app = new Realm.App({id: 'app-pepe-abuci'});\n\nconst credentials = Realm.Credentials.anonymous(); // LoggingIn as Anonymous User.\n\nconst getRealm = async () => {\n // loggedIn as anonymous user\n try {\n const user = await app.logIn(credentials);\n console.log(app.currentUser);\n\n // MongoDB RealmConfiguration\n const configuration: Realm.ConfigurationWithSync = {\n schema: [EquipmentSchema], // add multiple schemas, comma seperated.\n sync: {\n user, // loggedIn User\n flexible: true,\n },\n };\n return Realm.open(configuration);\n } catch (err) {\n console.error('Failed to log in', err);\n }\n};\n\nexport default getRealm;\n LOG {} <- (current user log)\n LOG {\"errorCode\": 1, \"message\": \"SSL server certificate rejected\"}\n",
"text": "Hi everyone!I’m using this function to login into my device sync app in a simple react native appbut the app.currentUser log returns an empty object and when I go to my App users I see all the logins confirmed\n\nCaptura de pantalla 2022-12-28 a la(s) 11.17.442290×212 44.6 KB\nAlso, I receive this error code in my terminal",
"username": "Ezequiel_Leanes"
},
{
"code": "",
"text": "Hi Ezequiel.Did you ever figure this out?I’m running into a similar issue. I do have an anonymous user logged in, but when I go to open the realm, I’m getting the same error.–Kurt",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Hi Kurt,there was a configuration in my project that gave me this error. Please check your database access in Atlas. I deleted all the users and it starts working.Let me know if I can help you further",
"username": "Ezequiel_Leanes"
}
] | I can't login an user in my app but I see the users in the App services UI | 2022-12-28T14:20:40.823Z | I can’t login an user in my app but I see the users in the App services UI | 1,718 |
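While diagnosing errors like the "SSL server certificate rejected" message above, it can help to surface the sync client's own logging. The snippet below is a hedged sketch against the realm-js version used in the thread; the onError callback name and the log level are assumptions to verify against the SDK docs for your exact version, and EquipmentSchema is the schema imported in the original code.

```js
import Realm from 'realm';

const app = new Realm.App({id: 'app-pepe-abuci'});

// Verbose sync logs while debugging connection/certificate problems
Realm.App.Sync.setLogLevel(app, 'debug');

const configuration = {
  schema: [EquipmentSchema],
  sync: {
    user: app.currentUser,
    flexible: true,
    // assumption: newer realm-js versions expose an onError callback here
    onError: (_session, error) => console.log('sync error', error.message),
  },
};
```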
null | [
"aggregation",
"node-js"
] | [
{
"code": "",
"text": "Hi,\nI am getting an “MongoServerSelectionError: Server selection timed out after 30000 ms” while running qry.",
"username": "Suraj_Anand_Kupale"
},
{
"code": "MongoServerSelectionErrorserverSelectionTimeoutMS",
"text": "The MongoDB Node.js Driver will raise a MongoServerSelectionError typically when a timeout occurs during server selection.Each driver operation requires choosing of a healthy server satisfying the server selection criteria. If an appropriate server cannot be found within the server selection timeout (which is controlled via serverSelectionTimeoutMS and defaults to 30000), the driver will raise a server selection timeout error.Typically this error occurs when your client cannot connect to your cluster (ex: the application’s IP address is not in Atlas’ IP Access List), or if there is a network issue between the client and the server.",
"username": "alexbevi"
}
] | MongoServerSelectionError: Server selection timed out after 30000 | 2023-03-01T11:23:26.762Z | MongoServerSelectionError: Server selection timed out after 30000 | 3,368 |
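Following alexbevi's explanation, here is a hedged Node.js sketch for narrowing down a server selection timeout: fail fast with a shorter serverSelectionTimeoutMS and print the error reason, then check the Atlas IP Access List and the network path if the cluster still cannot be reached. The connection string is a placeholder.

```js
const { MongoClient } = require("mongodb");

// Placeholder URI - use your own Atlas connection string
const uri = "mongodb+srv://user:pass@cluster0.example.mongodb.net/test";

async function ping() {
  // Fail after 5s instead of the 30s default so misconfiguration shows up quickly
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });
  try {
    await client.connect();
    await client.db("admin").command({ ping: 1 });
    console.log("connected");
  } catch (err) {
    // Typically an unreachable cluster: IP not in the Atlas Access List, DNS, firewall...
    console.error("server selection failed:", err.message);
  } finally {
    await client.close();
  }
}

ping();
```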
null | [
"node-js"
] | [
{
"code": "const PROJECT_ID = '631b5cdefb...'\nconst APP_ID = '6331c3e8a3...'\nconst PUBLIC_API_KEY = 'dtn...'\nconst PRIVATE_API_KEY = '80792fcd-81e2...'\n\nconst API_BASE_URL = 'https://realm.mongodb.com/api/admin/v3.0'\nconst API_APP_BASE_URL = `${API_BASE_URL}/groups/${PROJECT_ID}/apps/${APP_ID}`\n\nasync function authenticate() {\n const result = await context.http.post({\n url: `${API_BASE_URL}/auth/providers/mongodb-cloud/login`,\n headers: {\n 'Content-Type': ['application/json'],\n Accept: ['application/json']\n },\n body: {\n username: PUBLIC_API_KEY,\n apiKey: PRIVATE_API_KEY\n },\n encodeBodyAsJSON: true\n })\n return EJSON.parse(result.body.text())\n}\n\nasync function listUsers(token) {\n const url = `${API_APP_BASE_URL}/users`\n console.log('log', '### listUsers url =', url)\n const response = await context.http.get({\n url: url,\n headers: {\n 'Content-Type': ['application/json'],\n Authorization: [`Bearer ${token}`]\n },\n encodeBodyAsJSON: true\n })\n return response\n}\n\nasync function createUser(token, email, password) {\n const url = `${API_APP_BASE_URL}/users`\n const response = await context.http.post({\n url: url,\n headers: {\n 'Content-Type': ['application/json'],\n Authorization: [`Bearer ${token}`]\n },\n body: {\n email: email,\n password: password\n },\n encodeBodyAsJSON: true\n })\n return response\n}\n\nasync function getUser(token, userId) {\n const url = `${API_APP_BASE_URL}/users/${userId}`\n console.log('log', '### getUser url =', url)\n const response = await context.http.get({\n url: url,\n headers: {\n 'Content-Type': ['application/json'],\n Authorization: [`Bearer ${token}`]\n },\n encodeBodyAsJSON: true\n })\n return response\n}\n\nasync function deleteUser(token, _id) {\n const url = `${API_APP_BASE_URL}/users/${_id}`\n console.log('log', '### deleteUser url =', url)\n const response = await context.http.delete({\n url: url,\n headers: { Authorization: [`Bearer ${token}`] }\n })\n return response\n}\n\nexports = async function() {\n const { access_token } = await authenticate()\n console.log('log', '### access_token =', access_token)\n {\n const response = await createUser(access_token, 'abcde1234', 'passwd-abcde1234')\n console.log('log', '### createUser response =', JSON.stringify(response))\n }\n {\n const response = await listUsers(access_token)\n console.log('log', '### listUsers response =', JSON.stringify(response))\n }\n {\n const response = await getUser(access_token, '63efc81da62aa1ff9ed9433b')\n console.log('log', '### getUser response =', JSON.stringify(response))\n }\n {\n const response = await deleteUser(access_token, '63efc81da62aa1ff9ed9433b')\n console.log('log', '### deleteUser response =', JSON.stringify(response))\n }\n}\nlog ### access_token = eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...\nlog ### createUser response = {\"status\":\"201 Created\",\"statusCode\":201,\"contentLength\":-1,\"headers\":{\"X-Envoy-Upstream-Service-Time\":[\"4929\"],\"Date\":[\"Sat, 18 Feb 2023 06:35:30 GMT\"],\"Content-Type\":[\"application/json\"],\"Strict-Transport-Security\":[\"max-age=31536000; includeSubdomains;\"],\"Vary\":[\"Origin\"],\"X-Appservices-Request-Id\":[\"63f071ad2640e1dc03da614d\"],\"X-Frame-Options\":[\"DENY\"],\"Server\":[\"mdbws\"]},\"body\":{}}\nlog ### listUsers url = https://realm.mongodb.com/api/admin/v3.0/groups/631b5cdefb.../apps/6331c3e8a3.../users\nlog ### listUsers response = {\"status\":\"200 OK\",\"statusCode\":200,\"contentLength\":-1,\"headers\":{\"Vary\":[\"Origin\"],\"Date\":[\"Sat, 18 Feb 2023 06:35:34 
GMT\"],\"X-Frame-Options\":[\"DENY\"],\"X-Envoy-Upstream-Service-Time\":[\"4034\"],\"Server\":[\"mdbws\"],\"Cache-Control\":[\"no-cache, no-store, must-revalidate\"],\"Content-Type\":[\"application/json\"],\"Strict-Transport-Security\":[\"max-age=31536000; includeSubdomains;\"],\"X-Appservices-Request-Id\":[\"63f071b288d4013bd709a327\"]},\"body\":{}}\nlog ### getUser url = https://realm.mongodb.com/api/admin/v3.0/groups/631b5cdefb.../apps/6331c3e8a3.../users/63efc81da62aa1ff9ed9433b\nlog ### getUser response = {\"status\":\"200 OK\",\"statusCode\":200,\"contentLength\":-1,\"headers\":{\"Cache-Control\":[\"no-cache, no-store, must-revalidate\"],\"Strict-Transport-Security\":[\"max-age=31536000; includeSubdomains;\"],\"Content-Type\":[\"application/json\"],\"Vary\":[\"Origin\"],\"X-Appservices-Request-Id\":[\"63f071b62640e1dc03da62e3\"],\"X-Frame-Options\":[\"DENY\"],\"Date\":[\"Sat, 18 Feb 2023 06:35:38 GMT\"],\"X-Envoy-Upstream-Service-Time\":[\"4035\"],\"Server\":[\"mdbws\"]},\"body\":{}}\nlog ### deleteUser url = https://realm.mongodb.com/api/admin/v3.0/groups/631b5cdefb.../apps/6331c3e8a3.../users/63efc81da62aa1ff9ed9433b\nlog ### deleteUser response = {\"status\":\"204 No Content\",\"statusCode\":204,\"contentLength\":0,\"headers\":{\"Date\":[\"Sat, 18 Feb 2023 06:35:45 GMT\"],\"X-Appservices-Request-Id\":[\"63f071ba88d4013bd709a481\"],\"X-Frame-Options\":[\"DENY\"],\"Vary\":[\"Origin\"],\"X-Envoy-Upstream-Service-Time\":[\"6942\"],\"Server\":[\"mdbws\"],\"Content-Encoding\":[\"gzip\"],\"Strict-Transport-Security\":[\"max-age=31536000; includeSubdomains;\"]}}\n",
"text": "I made the following test program with the function of app service.\n“createUser”, “listUsers”, “getUser”, and “deleteUser” are tested.The results of the execute are as followsWhen I checked the log, the status of “createUser” was 201 and the body was empty, but when I checked the UI, the user was created correctly.The status of “listUsers” is 200, but the body is empty, and you can see more than a dozen accounts in the UI.The status of “getUser” is also 200, but the body is empty, but the specified user ID is an ID created in advance by “createUser”, so it cannot be empty.The status of “deleteUser” is 204, and the user with the specified ID has not been deleted when checked on the UI.I am having trouble getting the expected results.\nIn particular, it is very difficult to implement because the user list cannot be obtained with “listUsers” and the target user cannot be deleted with “deleteUser”.\nCould you let me know if you have any advice?\nThank you very much.",
"username": "KENICHI_SHIMIZU"
},
{
"code": "",
"text": "Hi @KENICHI_SHIMIZU,Thanks for providing the code snippets and also redacting the sensitive info I made the following test program with the function of app service.\n“createUser”, “listUsers”, “getUser”, and “deleteUser” are tested.Just to clarify, would you be able to provide the following information / answers:I’m curious to understand if you plan on associating this function with a scheduled trigger or similar. In saying so, I look forward to hearing from you regarding the use case details.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you for your reply!\n11870×942 172 KB\nAbout the operation confirmation method,\nAll operations were performed on the WEB UI as shown in the attached screen.\nI ran it with the RUN button circled in red.\nIf it goes well, I’d like to use it in data APIs, triggers, etc.\nthank you.",
"username": "KENICHI_SHIMIZU"
},
{
"code": "listUserslog ### listUsers url = https://realm.mongodb.com/api/admin/v3.0/groups/<REDACTED/apps/<REDACTED>/users\nlog ### listUsers response = [{\"_id\":\"63fc16a0a297115076e8dbdd\",\"identities\":[{\"id\":\"63fc16a0a297115076e8dbd9\",\"provider_type\":\"local-userpass\",\"provider_id\":\"63fc16774a8677cf046792a5\"}],\"type\":\"normal\",\"creation_date\":1677465248,\"last_authentication_date\":0,\"disabled\":false,\"data\":{\"email\":\"[email protected]\"}},{\"_id\":\"63fec06dbbc0299919efdb72\",\"identities\":[{\"id\":\"63fec06dbbc0299919efdb68\",\"provider_type\":\"local-userpass\",\"provider_id\":\"63fc16774a8677cf046792a5\"}],\"type\":\"normal\",\"creation_date\":1677639789,\"last_authentication_date\":0,\"disabled\":false,\"data\":{\"email\":\"abcde1234\"}}]\nasync function listUsers(token) {\n const url = `${API_APP_BASE_URL}/users`\n console.log('log', '### listUsers url =', url)\n const response = await context.http.get({\n url: url,\n headers: {\n 'Content-Type': ['application/json'],\n Authorization: [`Bearer ${token}`]\n },\n encodeBodyAsJSON: true\n }).then(response => {\n const ejson_body = EJSON.parse(response.body.text());\n return ejson_body;\n })\n return response\n}\n.then()",
"text": "Thanks for confirming @KENICHI_SHIMIZU,I managed to replicate the same response you were encountering. For simplicity, i’ll be referencing the listUsers response in my test environment after making some slight changes:I hope the above response is similar to what you are after. To achieve this, I had made the following changes to the code you had provided:The addition of .then() (and it’s contents) may help explain the difference in responses. The following Send an HTTP Request documentation may be of use.Hope this helps.If not, please advise what the response you are expecting is.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks @Jason_Tran\nI was able to achieve my goal with the code you suggested.\nI can only thank you.\n ",
"username": "KENICHI_SHIMIZU"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Admin API is not functioning as expected | 2023-02-18T07:28:53.893Z | Admin API is not functioning as expected | 1,441 |
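The pattern in the accepted answer above generalizes to every Admin API call in this thread: context.http returns the raw HTTP response, and the JSON payload has to be read from response.body.text() and parsed with EJSON, otherwise the logged body shows up as {}. The sketch below is a minimal illustration under that assumption, reusing the constants from the original post; the helper name parseBody is made up for the example.

// Parse an Admin API response body inside an App Services function.
function parseBody(response) {
  return EJSON.parse(response.body.text())
}

async function listUsersParsed(token) {
  const response = await context.http.get({
    url: `${API_APP_BASE_URL}/users`,
    headers: {
      'Content-Type': ['application/json'],
      Authorization: [`Bearer ${token}`]
    }
  })
  // Returns the array of user documents instead of the raw response object.
  return parseBody(response)
}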
null | [
"java",
"transactions",
"spring-data-odm"
] | [
{
"code": "",
"text": "Hello I am developing a web service. In this process, I use mongodb atlas.\nI use spring boot framework and spring data mongoDB.I think, mongodb is working correctly but mongoDB atlas do not showing because of problems that I don’t know.If anyone knows the reason, could you tell me?Even now, MongoDB atlas shows that no database exists, but it is performing normally in spring code.",
"username": "QA_Jay"
},
{
"code": "",
"text": "Hi @QA_Jay - Welcome to the community!I’m afraid i’m not quite too sure what you mean by the following:But, that results not showing in mongoDB atlas.Do you have more information you could provide including the operation being performed as well as how you are verifying that the results are not showing up in Atlas? Example: Are you using the MongoDB Atlas Data explorer UI after inserting / updating some documents to monitor for the changes?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Did you tried to refresh the GUI?Atlas GUI or Compass does not automatically refresh, for good reasons, when the underlying data changes.",
"username": "steevej"
}
] | Mongodb atlas do not show collections but transaction is correctly done | 2023-02-28T08:01:32.041Z | Mongodb atlas do not show collections but transaction is correctly done | 1,150 |
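A quick way to rule out a refresh or wrong-database problem in cases like this is to check from mongosh which database and collection the Spring code actually wrote to. This is only a sketch, assuming a mongosh session opened with the same connection string the application uses; the database and collection names below are examples (if the Spring connection string does not name a database, writes may land in a default database such as test rather than the one being browsed in Atlas).

// List what is actually on the cluster, then count documents in the suspected target.
show dbs
use test                      // example: a default database when none is named in the URI
db.getCollectionNames()
db.users.countDocuments()     // "users" is an example collection name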
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "Hello All,I want to create an API which returns a collection of documents which is sorted according to particular condition for e.g. like any filter query or something like Documents of India location at top, then other documents later. I also want to integrate pagination for the same.\nIs there any solution or sample code for the same?\nThankyou in Advance!",
"username": "Ashutosh_Mishra1"
},
{
"code": "$cond",
"text": "Hi @Ashutosh_Mishra1,Do you have any sample documents to share as well as an expected output? This would help clarify what you’re after and how it can possibly be achieved.In addition to this, could you advise if you’re using an on-prem or Atlas deployment? Off the top of my head a few things that may help based off the information on this post:Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason,\nThankyou for the reply.\nI am trying to explain my use case here.Actually I have a database of let say 3000 users which are from different locations all around the globe.\nI want to get these data through a NodeJS API which returns data in sorted order (Not Filtered).\nSorting can be done in multiple ways likeSimilarly I want to create one more 4th filter which will be named as “Recommended”\n4. Recommendation filter returns the users in sorted order according to the Logged in users interests. Like a user of USA is logged in then, I want users of USA to come at top, then others at later. Similarly I want to set some priorities to some fields to get the data in that sorted order. Hope this use case helps you understand my need.P.s. I tried using $facet but that does’t seemed an optimized way to me as I am getting duplicate data many times, also I can’t face multiple priorities.Attaching a screenshot of sample code.\n\nimage685×844 21.4 KB\n",
"username": "Ashutosh_Mishra1"
},
{
"code": "match = { \"$match\" : {\n \"isVerified\" : true\n} }\nrelevance = { \"$set\" : {\n \"_tmp\" : {\n \"industry\" : { \"$cond\" : [\n { \"$in\" : { \"$industry\" , rindustry } } , 0 , 1\n ] } ,\n \"country\" : { \"$cond\" : [\n { \"$eq\" : { \"$location.country\" , rcountry } } , 0 , 1\n ] }\n }\n} }\nsort = { \"$sort\" : {\n \"_tmp.industry\" : 1 ,\n \"_tmp.country\" : 1 ,\n \"salary\" : -1\n} }\nunset = { \"$unset\" : \"_tmp\" }\npipeline = [ match , set , sort , unset ]\n",
"text": "The following often helps more that explications.sample documents to share as well as an expected outputTo bad you went the explications route. For example, is the field industry an array or a single value? Is the rindustry variable, an array or a single value?Check Run aggregation pipeline and sort results by a set of match priorities - #3 by steevejThe idea is to use $addFields/$set stage to set sort values for your relevance.You will start with a $match stage like:The second stage sets the relevance values.The you will $sort using the relevance value and your other sort criteria.The a little cosmetic $unset to remove the _tmp fields.The whole pipeline being:Please read Formatting code and log snippets in posts before posting new code or sample documents. It helps when we can just cut-n-paste your field names and/or your code.I am getting duplicate dataYour $facet matches seem to be mutually exclusive so I wonder how you can have duplicates.Since isVerified $match is the same in all facets, it should be move into its own $match before the $facet.",
"username": "steevej"
},
{
"code": "StartupSchema.statics.getRelevantStartups = async function () {\n const startups = await this.aggregate([\n {\n $match: {\n isVerified: true,\n },\n },\n\n {\n $set: {\n _tmp: {\n industry: {\n $cond: [{ $in: { $industry: ['Cosmetics'] } }, 0, 1],\n },\n country: {\n $cond: [\n { $eq: { '$location.country': 'India' } },\n 0,\n 1,\n ],\n },\n },\n },\n },\n {\n $sort: {\n '_tmp.industry': 1,\n '_tmp.country': 1,\n },\n },\n { $unset: '_tmp' },\n ])\n\n return startups\n}\n {\n $set: {\n _tmp: {\n industry: {\n $cond: [{ $in: { industry: ['Cosmetics'] } }, 0, 1],\n },\n country: {\n $cond: [{ $eq: { 'location.country': 'India' } }, 0, 1],\n },\n },\n },\n },\n",
"text": "Hi Steeve Juneau,\nI tried implementing with the code you provided, which is belowBut it gave me this error\n\nimage1033×296 13.5 KB\n\nThen I tried by removing this $ signThen it gave me a different error\n\nimage1115×170 8.51 KB\nCan you please help me out with this? It will be a great favour.\nThanks in Advance",
"username": "Ashutosh_Mishra1"
},
{
"code": "industry: {\n $cond: [{ $in: { $industry: ['Cosmetics'] } }, 0, 1],\n }\n\"industry\" : { \"$cond\" : [\n { \"$in\" : { \"$industry\" , rindustry } } , 0 , 1\n ] }\n\"industry\" : {\n \"$cond\" : [ { \"$in\" : { \"$industry\" , ['Cosmetics'] } } , 0 , 1 ]\n}\n",
"text": "I tried implementing with the code you providedI did not provideWhat I had waswhich if I reformat using your coding style and value for rindustry gives:So your syntax is subtly off compared to what I shared which might explain the errors your get.",
"username": "steevej"
}
] | Sort by Relevance | 2023-02-14T06:30:00.997Z | Sort by Relevance | 1,990
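For reference, the errors in this thread come from the expression syntax rather than from the overall approach: inside aggregation expressions, $in and $eq take a two-element array, not an object. A minimal corrected sketch of the relevance stage follows, assuming industry holds a single value and location.country holds the country name; the example values 'Cosmetics' and 'India' are taken from the posts above.

{
  $set: {
    _tmp: {
      industry: {
        // 0 when the document's industry is in the preferred list, 1 otherwise
        $cond: [{ $in: ['$industry', ['Cosmetics']] }, 0, 1]
      },
      country: {
        $cond: [{ $eq: ['$location.country', 'India'] }, 0, 1]
      }
    }
  }
}
// Followed, as described above, by { $sort: { '_tmp.industry': 1, '_tmp.country': 1 } } and { $unset: '_tmp' }.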
null | [
"aggregation"
] | [
{
"code": "",
"text": "I’m using geoNear to query users and sort them by location, but I need also to pull the users that have not shared their coordinates (preferably at the end of the array.) Is there a way to use geoNear or something similar to do this? I was thinking $match, $and, $geoNear, … but $geoNear needs to be at the beginning of the aggregation.",
"username": "stupendousweb"
},
{
"code": "",
"text": "Hi @stupendousweb and welcome to the MongoDB Community forum!!To understand the requirement in a better way, it would be helpful if you could help us with the below information:but I need also to pull the users that have not shared their coordinates (preferably at the end of the array.)Lastly, could you also help me understand how is the distinction being made for users who shared the location and who did not?Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
}
] | Including Documents with No Coordinates in $geoNear | 2023-02-27T22:29:13.052Z | Including Documents with No Coordinates in $geoNear | 424 |
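Since $geoNear has to be the first stage, one way to also return the users who never shared coordinates is to append them with a $unionWith on the same collection (available from MongoDB 4.4), so they come after the distance-sorted results. The following is only a sketch, assuming a users collection with a 2dsphere index on a location field; the field names and example coordinates are illustrative.

db.users.aggregate([
  {
    $geoNear: {
      near: { type: 'Point', coordinates: [-73.98, 40.75] },  // example query point
      distanceField: 'distance',
      key: 'location',
      spherical: true
    }
  },
  {
    // Append users without stored coordinates; they land at the end of the result.
    $unionWith: {
      coll: 'users',
      pipeline: [{ $match: { location: { $exists: false } } }]
    }
  }
])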
null | [] | [
{
"code": "const mongoose = require('mongoose');\n\nconst Ship = require('./shipModel');\n\nconst dateFormat = require('dateformat');\n\nconst reviewSchema = new mongoose.Schema(\n\n {\n\n review: {\n\n type: String,\n\n required: [true, 'Review cannot be empty!'],\n\n },\n\n rating: {\n\n type: Number,\n\n min: 1,\n\n max: 5,\n\n },\n\n ratingDining: {\n\n type: Number,\n\n min: 1,\n\n max: 5,\n\n },\n\n ratingCabin: {\n\n type: Number,\n\n min: 1,\n\n max: 5,\n\n },\n\n ratingKids: {\n\n type: Number,\n\n min: 1,\n\n max: 5,\n\n },\n\n ratingValue: {\n\n type: Number,\n\n min: 1,\n\n max: 5,\n\n },\n\n ratingEntertainment: {\n\n type: Number,\n\n min: 1,\n\n max: 5,\n\n },\n\n ratingValue: {\n\n type: Number,\n\n min: 1,\n\n max: 5,\n\n },\n\n cabinType: {\n\n type: String,\n\n },\n\n cabinNumber: {\n\n type: String,\n\n },\n\n sailDate: {\n\n type: String,\n\n },\n\n createdAt: {\n\n type: Date,\n\n default: Date.now,\n\n },\n\n displayDate: {\n\n type: String,\n\n },\n\n reviewLikes: {\n\n type: Number,\n\n default: 0,\n\n },\n\n ship: {\n\n type: mongoose.Schema.ObjectId,\n\n ref: 'Ship',\n\n required: [true, 'Review must belong to a ship.'],\n\n },\n\n user: {\n\n type: mongoose.Schema.ObjectId,\n\n ref: 'User',\n\n required: [true, 'Review must belong to a user.'],\n\n },\n\n },\n\n {\n\n toJSON: { virtuals: true },\n\n toObject: { virtuals: true },\n\n }\n\n);\n\nreviewSchema.index({ ship: 1, user: 1 }, { unique: true });\n\n//MIDDLEWARE\n\nreviewSchema.pre(/^find/, function (next) {\n\n // this.populate({\n\n // path: 'ship',\n\n // select: 'name',\n\n // }).populate({\n\n // path: 'user',\n\n // select: 'name photo',\n\n // });\n\n this.populate({\n\n path: 'user',\n\n select: 'photo name',\n\n });\n\n next();\n\n});\n\nreviewSchema.pre(/^find/, function (next) {\n\n this.populate({\n\n path: 'ship',\n\n select: 'shipName',\n\n });\n\n next();\n\n});\n\nreviewSchema.pre('save', async function () {\n\n const newDate = dateFormat(this.createdAt, 'mmmm dS, yyyy');\n\n this.displayDate = newDate;\n\n});\n\nreviewSchema.pre('save', async function () {\n\n const newDate = dateFormat(this.sailDate, 'mmmm, yyyy');\n\n this.sailDate = newDate;\n\n});\n\nreviewSchema.statics.calcAverageRatings = async function (shipId) {\n\n const stats = await this.aggregate([\n\n {\n\n $match: { ship: shipId },\n\n },\n\n {\n\n $group: {\n\n _id: '$ship',\n\n nRating: { $sum: 1 },\n\n avgRating: { $avg: '$rating' },\n\n avgRatingDining: { $avg: '$ratingDining' },\n\n avgRatingCabin: { $avg: '$ratingCabin' },\n\n avgRatingKids: { $avg: '$ratingKids' },\n\n avgRatingValue: { $avg: '$ratingValue' },\n\n avgRatingEnt: { $avg: '$ratingEntertainment' },\n\n },\n\n },\n\n ]);\n\n if (stats.length > 0) {\n\n await Ship.findByIdAndUpdate(shipId, {\n\n ratingsQuantity: stats[0].nRating,\n\n ratingsAverage: stats[0].avgRating.toFixed(1),\n\n ratingsAverageDining: stats[0].avgRatingCabin.toFixed(1),\n\n ratingsAverageCabin: stats[0].avgRatingCabin.toFixed(1),\n\n ratingsAverageKids: stats[0].avgRatingKids.toFixed(1),\n\n ratingsAverageValue: stats[0].avgRatingValue.toFixed(1),\n\n ratingsAverageEnt: stats[0].avgRatingEnt.toFixed(1),\n\n });\n\n } else {\n\n await Ship.findByIdAndUpdate(shipId, {\n\n ratingsQuantity: 0,\n\n ratingsAverage: 3,\n\n ratingsAverageDining: 3,\n\n ratingsAverageCabin: 3,\n\n ratingsAverageKids: 3,\n\n ratingsAverageValue: 3,\n\n ratingsAverageEnt: 3,\n\n });\n\n }\n\n};\n\nreviewSchema.post('save', function () {\n\n //\"this\" points to current review\n\n 
this.constructor.calcAverageRatings(this.ship);\n\n});\n\n// findByIdAndUpdate\n\n// findByIdAndDelete\n\nreviewSchema.pre(/^findOneAnd/, async function (next) {\n\n this.r = await this.findOne();\n\n //console.log(this.r);\n\n next();\n\n});\n\nreviewSchema.post(/^findOneAnd/, async function (next) {\n\n await this.r.constructor.calcAverageRatings(this.r.ship);\n\n});\n\nconst Review = mongoose.model('Review', reviewSchema);\n\nmodule.exports = Review;",
"text": "So I have a model in my DB called “Review.” I’m including the code for the model below. As you can see, there’s a field in each document called “reviewLikes.” My code is structured so that one someone clicks on a “like” button for a review, a PATCH request is sent to a controller, which triggers an update the “reviewLikes” field using findByIdAndUpdate. When this happens, the new number for “reviewLikes” is the only thing being passed in (along with the ID for the review, which is used to make sure the right document gets the new “like” number).The problem: When this all happens, there is aggregation middleware in the model that runs. The part that is causing me a headache is “reviewSchema.statics.calcAverageRatings” that is included below. As a result, the “reviewLikes” is correctly updated, however, the averages for the ship and the review count all get reset back to 3 and 0 (respectively).So, my question is, how can I prevent this block of code from executing if I am ONLY updating the “reviewLikes” field in a document? I have tried, unsuccessfully, to put a simple “if” statement outside of the block of code. It was something like if (this.rating) { the middleware } … but that did not work. Then I was going to try passing the entire review into the function instead of just reviewLikes, but then I realized that this wouldn’t stop the middleware from running… it would probably just generate funky changes to the counts each time the “like” button is clicked.Anyway, if you’re still with me, I appreciate you reading this. I’ve been stuck on this all weekend. Any help would be amazing. Thanks.",
"username": "Christopher_Clark"
},
{
"code": "revieLikesif/elseif/else",
"text": "Howdy Chirs! Thanks for pinging me about this question! Hmmm - sounds like you’ve got an interesting problem on your hands! It sounds like you are trying to update your revieLikes but the middleware is acting unexpectedly so your result is being reset. Programmatically, I don’t see any issues with using a if/else statement to control what code gets executed by the middleware. Can you elaborate on why it’s not working, so I can help troubleshoot? My guess would be that the aggregation is not returning an expected result, so the if/else statement is running the expected code.",
"username": "JoeKarlsson"
},
{
"code": " if (this._update.$set.reviewLikes) {\n console.log('All finished');\n } else {\n await this.r.constructor.calcAverageRatings(this.r.ship);\n }\n});",
"text": "I GOT IT! Thank you, this was helpful. I was putting the if/else in the wrong spot. I forgot that “this” inside of “statics” references the actual model. however, “this” inside of pre and post middleware references the actual object. When I console.log’d “this”, I was able to pinpoint where the info to be updated was. The solution, for me, looked like this:reviewSchema.post(/^findOneAnd/, async function (next) {",
"username": "Christopher_Clark"
}
] | How can I ignore a block of middleware in my model? | 2020-11-09T16:57:40.608Z | How can I ignore a block of middleware in my model? | 2,990
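A slightly more defensive variant of the accepted fix is to read the update through Mongoose's documented Query#getUpdate() helper instead of the private this._update property; it returns the same update document. This is only a sketch under that assumption, with a guard for updates that do not contain a $set block.

reviewSchema.post(/^findOneAnd/, async function () {
  const update = this.getUpdate() || {}
  // Skip the expensive recalculation when only "reviewLikes" is being changed.
  if (update.$set && update.$set.reviewLikes !== undefined) {
    return
  }
  await this.r.constructor.calcAverageRatings(this.r.ship)
})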
null | [
"aggregation",
"java",
"spring-data-odm"
] | [
{
"code": "{\n \"_id\" : ObjectId('...'), // This is generated automatically by MongoTemplate, I suppose\n \"key\" : 0,\n \"value\" : {\n (complex object)\n }\n \"timestamp\" : 1668584029237\n ...\n}\n List<AggregationOperation> aggregations = new ArrayList<>();\n\n ...\n\n aggregations.add(sort(Sort.by(DESC, \"timestamp\")));\n aggregations.add(group(\"key\").first(\"timestamp\").as(\"timestamp\"));\n \n final List<Map> result = mongoTemplate.aggregate(newAggregation(aggregations), tableName, Map.class).getMappedResults();\n",
"text": "I’m a complete beginner with MongoDB and we use MongoTemplate in our project.I have a collection with documents in this form:The tricky part for me is that we can have documents with the same “key” that are outdated (can be judged by the “timestamp”).I’d like to to query all documents in the database, but in the case of documents with the same “key”, then only the one with the latest timestamp (the greatest number) should be returned.So far I’ve written this:I know that I’m very close to the solution, but I cannot manage to get what I want by the group method Please help.Note: I need to return all fields expect “_id”",
"username": "Capitano_Giovarco"
},
{
"code": "db.collection.find({key : 0}).sort({timestamp : -1}).limit(1)\n",
"text": "Hi @Capitano_Giovarco ,You can use a simple find to query the data for a specific key, sort by timestamp (make sure it is indexed with -1 order) :\nExample:With index {key : , timestamp : -1} this will be efficient…Have I not understood you correctly?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi Pavel,If I understand correctly, your query returns one record for a specific key. However, this isn’t what I need.I don’t want to specify the key because the query should return all documents with all keys. Maybe I wasn’t clear enough, so I’ll share an example.If I have a collection with these documents{ “key” : 0, “timestamp” : 0 }\n{ “key” : 0, “timestamp” : 1 }\n{ “key” : 0, “timestamp” : 2 }\n{ “key” : 1, “timestamp” : 0 }\n{ “key” : 1, “timestamp” : 1 }\n{ “key” : 1, “timestamp” : 2 }I want a generic query that returns:{ “key” : 0, “timestamp” : 2 }\n{ “key” : 1, “timestamp” : 2 }because these documents are the latest in the collection for the corresponding keys. The query should be efficient because we have million of documents and the code should be as flexible as possible because in our code we have also flexible limit, skip, filtering, sorting etc. I just omitted the fluff that wasn’t needed So from my understanding I need to use Aggregation to do the above. However I’m no expect and therefore I’m stuck Also, MongoTemplate should be used because this is what we have in place.",
"username": "Capitano_Giovarco"
},
{
"code": "{label : 1, timestamp : -1}, {partialFilterExpression: {label: \"latest\"} }\n",
"text": "Hi @Capitano_Giovarco ,Ok now I understand.Going the aggregation route in my opinion would not be the optimal way.The optimal way is to have a label placed on the latest version of the key document and set/unset it as new data is generated.Then you can use a partial index set only on the latest data to support the query:And then query only {label : “latest”}Otherwise, you aggregation will need to involve sorting and grouping and pushing into arrays and replacing roots which is expensive.If you wish to go this route I can help you with a simple aggregation syntax and you will need to find some help with MongoTemplate specifically… I don’t think it should be that hard to portThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": " Aggregation aggregation = newAggregation(\n group(\"$key\")\n .max(\"$timestamp\").as(\"timestamp\")\n );\n",
"text": "Hi Pavel!Your method is good, but I think that this is not feasable in my case. Long story.Therefore let’s keep going even with Vanilla MongoDb. I think we can still understand each other!I wrote this today:This returns the key and the largest timestamp which is a good start. But the result does not include all the fields in the original document, in this case “value” is the most important.I’ve read that I need to use this “$$ROOT” thingy, but it’s still not clear how to use this.",
"username": "Capitano_Giovarco"
},
{
"code": " Aggregation aggregation = newAggregation(\n group(\"$key\")\n .max(\"$timestamp\").as(\"timestamp\")\n .first(\"$value\").as(\"value\")\n );\n",
"text": "Wait, I just noticed that this returns what I need:The only difference is that “key” is called “_id” in the output. Not sure if one can rename it.Is this an inefficient way of solving the problem?",
"username": "Capitano_Giovarco"
},
{
"code": "[{$sort : {timestamp : -1}},\n {$group : {_id : \"$key\",\n docs : {$push : \"$$ROOT\" }}},\n {$replaceRoot : {$first : \"$docs\"}}]\n",
"text": "Hi @Capitano_Giovarco ,Yes ok.Does that help?Thanks",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi Pavel!Thanks for the hint. Sorry for the late response. Just got back from holidays to work.I try to implement your query and eventually let you know ",
"username": "Capitano_Giovarco"
},
{
"code": "aggregations.add(\n group(\"$\" + Record.getKeyName())\n .max(\"$\" + Record.getTimestampName()).as(Record.getTimestampName())\n .push(\"$$ROOT\").as(\"docs\")\n );\n \n aggregations.add(\n replaceRoot().withValueOf(ObjectOperators.valueOf(\"$docs\").merge())\n );\n",
"text": "So, I couldn’t quite reproduce your query unfortunately, but I’ve found something that equally works (and is a lot faster than my previous implementation).",
"username": "Capitano_Giovarco"
},
{
"code": "",
"text": "Could this be improved somehow?",
"username": "Capitano_Giovarco"
},
{
"code": "",
"text": "Hi @Capitano_Giovarco ,Not sure, maybe you can share what is indexed?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Nothing is indexed by me How do I choose what fields should be indexed and what type these should have?",
"username": "Capitano_Giovarco"
},
{
"code": "",
"text": "Sounds like {key : 1 , timestamp : -1} is a good option.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "@Capitano_Giovarco , Hi there, I noticed you said about being new.I noticed you are struggling to learn in the last 2 weeks. So I am here to suggest something else: MongoDB University.If you haven’t done so, have a visit to improve your basic understanding (M001,M121) as well as higher management topics. It is from MongoDB and all courses are free. (for its new face and learning paths, use the “explore new university” button).Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.",
"username": "Yilmaz_Durmaz"
},
{
"code": "mongoTemplate.indexOps(tableName).ensureIndex(\n new Index().on(Record.getOffsetName(), Direction.DESC).unique()\n .on(Record.getKeyName(), Direction.ASC)\n);\n{\n \"aggregate\":\"__collection__\",\n \"pipeline\":[\n {\n \"$match\":{\n \"offset\":{\n \"$lte\":999999 (NOTE: NEEDED FOR SOME BUSINESS LOGIC)\n }\n }\n },\n {\n \"$sort\":{\n \"offset\":-1\n }\n },\n {\n \"$group\":{\n \"_id\":\"$key\",\n \"offset\":{\n \"$max\":\"$offset\"\n },\n \"docs\":{\n \"$first\":\"$$ROOT\"\n }\n }\n },\n {\n \"$replaceRoot\":{\n \"newRoot\":\"$docs\"\n }\n },\n {\n \"$project\":{\n \"key\":1,\n \"value\":1,\n \"_id\":0\n }\n },\n (NOTE: HERE YOU CAN ADD EXTRA SORTING OR FILTERING \n CONDITIONS DEPENDING ON THE REQUEST)\n {\n \"$skip\":0\n },\n {\n \"$limit\":101\n }\n ],\n \"allowDiskUse\":true\n}\n{\n \"$group\":{\n \"_id\":\"$key\",\n \"offset\":{\n \"$max\":\"$offset\"\n },\n \"docs\":{\n \"$first\":\"$$ROOT\"\n }\n }\n },\n {\n \"$replaceRoot\":{\n \"newRoot\":\"$docs\"\n }\n }\n",
"text": "Hi!I’m back after a while because we still experience performance struggles.We tried to add a composite index:but this decreases performance by 4% on average, compared to having a single index on { OFFSET : DESC }.I can paste an example of aggregation pipeline:I think that most of the time is spend here:This is needed to pick only the latest version of a message before applying filters and other sorting conditions.I’m wondering if this can be improved somehow ",
"username": "Capitano_Giovarco"
}
] | MongoDB query: select all documents with latest timestamp | 2022-11-17T13:31:47.525Z | MongoDB query: select all documents with latest timestamp | 7,344 |
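An alternative shape for the "latest document per key" part of the final pipeline, written in shell syntax: sorting on the same fields as a { key: 1, offset: -1 } compound index and taking $first of $$ROOT avoids pushing every document of a key into an array, and on recent server versions this $sort plus $group-$first pattern can be answered largely from the index. This is only a sketch with illustrative values; whether the optimizer actually picks a cheaper plan should be confirmed with explain() on the real data.

db.collection.aggregate([
  { $match: { offset: { $lte: 999999 } } },                   // same business filter as above
  { $sort: { key: 1, offset: -1 } },                          // matches the compound index order
  { $group: { _id: '$key', doc: { $first: '$$ROOT' } } },     // first doc per key = highest offset
  { $replaceRoot: { newRoot: '$doc' } },
  { $project: { key: 1, value: 1, _id: 0 } }
])
// Supporting index: db.collection.createIndex({ key: 1, offset: -1 })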
null | [
"aggregation"
] | [
{
"code": "",
"text": "I’m working with MongoDB 4.4.11 Enterprise edition for a pricing application, we have a collection which contains 36.5 million documents with 90 fields, average document size of 2.2 KB and storage size of 12.6 GB with 7 indexes around size of 7 GB.\nUse case:\nDropdown’s from the application UI for the selected criteria has to be populated from this collection (as it contains Master catalog data). Also this\ncollection is having combination of all the pricing availability data across the system, so in order to show VALID combiantions for the users to select we need\ndropdowns to be populated from this collection based on different filters dynamically based on user selection.Everytime we execute the query the response time is differing though the indexes are getting picked up, Please note the below query contains all the filters,\nbut there is a chance these filter criteria varies dynamically based on user selection (i.e. user can only apply accessTypeName,aspeedNumValue,countryId,btProductName (or) accessTypeName,countryId,btProductName (or) accessTypeName,popName,countryId,btProductName)We are facing the slowness in producing the resultsets, first hit as you see in below took 168 secs followed by 1.2 seconds and 0.2 seconds, Could someone\nhelp what are we missing here ?I have attached the following details\nQuery\nIndexes\nExplain Plans\nDocument modelquery:\ndb.btProductAccessMapping.explain(“executionStats”).aggregate([{\n“$match”: {\n“$and”: [{\n“accessTypeName”: “Ethernet”\n},\n{\n“aspeedNumValue”: 1000000\n},\n{\n“popName”: “Frankfurt Genfer Straße”\n},\n{\n“countryId”: “DE”\n},\n{\n“btProductName”: “BT iVPN”\n}\n]\n}\n},\n{\n“$group”: {\n“_id”: {\n“supplierName”: “$supplierName”,\n“supplierId”: “$supplierId”\n}\n}\n},\n{\n“$project”: {\n“_id”: 0,\n“supplierName”: “$_id.supplierName”,\n“supplierId”: “$_id.supplierId”\n}\n}, {\n“$sort”: {\n“supplierName”: 1\n}\n}\n])Please refere to the attached execution plan below in details:Execution 1:\nnReturned: 10,\nexecutionTimeMillisEstimate: 168293Execution 2:\t\nnReturned: 10,\nexecutionTimeMillisEstimate: 1209Execution 3:\nnReturned: 10,\nexecutionTimeMillisEstimate: 250Please refer to the attachment for more detials.Questionnaries:\nAre we missing any indexes ? In our usecase its dymanic filtering ?\nShould we create Mview with only the required fields ?\nI have tried creating Mview by grouping the required combinations and nesting the dataset, but while nesting it is exceeding the mongodb bsob storage limit ?\nif we re-create same collection as flattened with only limited fields(lets say only 50 fields) will yield better results ?",
"username": "Yokesh_Selvazhagan"
},
{
"code": "",
"text": "Index list:\n[\n{ v: 2, key: { _id: 1 }, name: ‘id’ },\n{\nv: 2,\nkey: {\naccessTypeName: 1,\ncpeAccessType: 1,\nsupportResilientPop: 1,\nbtProductName: 1,\nbtProductAvailabilityStatus: 1,\nipv6Enabled: 1,\nethernetPhaseAttribute: 1,\naspeedValue: 1,\naspeedUom: 1,\npspeedValue: 1,\npspeedUom: 1,\naspeedUpValue: 1,\naspeedUpUom: 1,\nserviceVariant: 1,\nsupplierId: 1,\nsupplierProductId: 1,\ninterfaceId: 1,\nframingId: 1,\nconnectorId: 1,\nacat: 1,\ncountryId: 1\n},\nname: ‘accessTypeName_1_cpeAccessType_1_supportResilientPop_1_btProductName_1_btProductAvailabilityStatus_1_ipv6Enabled_1_ethernetPhaseAttribute_1_aspeedValue_1_aspeedUom_1_pspeedValue_1_pspeedUom_1_aspeedUpValue_1_aspeedUpUom_1_serviceVariant_1_supplierId_1_supplierProductId_1_interfaceId_1_framingId_1_connectorId_1_acat_1_countryId_1’,\nbackground: true\n},\n{\nv: 2,\nkey: {\ncountryId: 1,\nbtProductId: 1,\npopId: 1,\nplatformId: 1,\nplatformName: 1,\naccessTypeId: 1,\naccessTypeName: 1,\nsupplierId: 1,\nsupplierProductId: 1,\naspeedValue: 1,\naspeedUom: 1,\naspeedUpValue: 1,\naspeedUpUom: 1,\npspeedValue: 1,\npspeedUom: 1,\npspeedUpValue: 1,\npspeedUpUom: 1,\nportTypeId: 1,\nlmpId: 1\n},\nname: ‘countryId_1_btProductId_1_popId_1_platformId_1_platformName_1_accessTypeId_1_accessTypeName_1_supplierId_1_supplierProductId_1_aspeedValue_1_aspeedUom_1_aspeedUpValue_1_aspeedUpUom_1_pspeedValue_1_pspeedUom_1_pspeedUpValue_1_pspeedUpUom_1_portTypeId_1_lmpId_1’\n},\n{\nv: 2,\nkey: {\ncountryId: 1,\ncountryName: 1,\nbtProductName: 1,\nbtProductDisplayName: 1,\nsupplierName: 1,\nsupplierProductName: 1,\naccessTypeName: 1,\nlmpName: 1,\nlmpConfigurationName: 1,\ninterfaceName: 1,\nframingName: 1,\nconnectorName: 1,\npopName: 1,\npopTypeName: 1\n},\nname: ‘USSQuickQuotingFilters’\n},\n{\nv: 2,\nkey: {\ncountryName: 1,\nbtProductDisplayName: 1,\nplatformName: 1,\nsupplierName: 1,\nsupplierProductName: 1,\naccessTypeGroup: 1,\naccessTypeName: 1,\nethernetPhaseAttribute: 1,\npopName: 1,\npopTypeName: 1,\nlmpName: 1,\nlmpConfigurationName: 1,\ninterfaceName: 1,\nframingName: 1,\nconnectorName: 1,\nportAvailabilityStatus: 1,\naccessAvailabilityStatus: 1,\norderingStatus: 1,\nproductAvailability: 1,\nphysicalLayer: 1,\ndeliveryMedium: 1,\nfaultrepair24x7Name: 1,\nstatus: 1,\naspeedNumValue: 1,\naspeedValue: 1,\naspeedUpNumValue: 1,\naspeedUpValue: 1,\npspeedNumValue: 1,\npspeedValue: 1,\npspeedUpNumValue: 1,\npspeedUpValue: 1,\n_id: 1\n},\nname: ‘master filter 1’\n},\n{\nv: 2,\nkey: {\ncountryName: 1,\nbtProductDisplayName: 1,\nbtProductAbbr: 1,\nserviceVariant: 1,\ncpeAccessType: 1,\nsupportResilientPop: 1,\nwanIpAddressType: 1,\nipv6Enabled: 1,\nacat: 1,\nspm: 1,\naccessLeadTime: 1,\naccessLeadTimeStatus: 1,\npspeedLeadTime: 1,\ncpeLeadTime: 1,\ncpeLeadTimeStatus: 1,\nminGuaranteedSpeedDown: 1,\nminGuaranteedSpeedUp: 1,\ncityName: 1,\nstandardFrameSize: 1,\ntimeToDeliver: 1,\nsupplierJitter: 1,\nsupplierLatency: 1,\ncuPairs: 1,\ncontentionRatio: 1,\nserviceLeadTimeCpe: 1,\nserLeadtimestatCpe: 1,\nserviceLeadTimeNocpe: 1,\nserLeadtimestatNocpe: 1,\nserviceId: 1,\nexception: 1,\ncomments: 1,\n_id: 1\n},\nname: ‘master filter 2’\n},\n{\nv: 2,\nkey: {\naccessTypeName: 1,\nbtProductName: 1,\nsupplierName: 1,\nsupplierProductName: 1,\nportTypeId: 1,\npopTypeId: 1,\naspeedNumValue: 1,\npspeedNumValue: 1,\naspeedUpNumValue: 1,\npspeedUpNumValue: 1,\ncountryId: 1\n},\nname: ‘Staffscreenfilter’\n}\n]",
"username": "Yokesh_Selvazhagan"
},
{
"code": "",
"text": "Explain Plans:execution 1:{\nstages: [\n{\n‘$cursor’: {\nqueryPlanner: {\nplannerVersion: 1,\nnamespace: ‘btProductAvailabilityDB_staging.btProductAccessMapping’,\nindexFilterSet: false,\nparsedQuery: {\n‘$and’: [\n{\naccessTypeName: {\n‘$eq’: ‘Ethernet’\n}\n},\n{\naspeedNumValue: {\n‘$eq’: 1000000\n}\n},\n{\nbtProductName: {\n‘$eq’: ‘BT iVPN’\n}\n},\n{\ncountryId: {\n‘$eq’: ‘DE’\n}\n},\n{\npopName: {\n‘$eq’: ‘Frankfurt Genfer Straße’\n}\n}\n]\n},\nqueryHash: ‘26F32A64’,\nplanCacheKey: ‘29241033’,\nwinningPlan: {\nstage: ‘PROJECTION_SIMPLE’,\ntransformBy: {\nsupplierId: 1,\nsupplierName: 1,\n_id: 0\n},\ninputStage: {\nstage: ‘FETCH’,\nfilter: {\naspeedNumValue: {\n‘$eq’: 1000000\n}\n},\ninputStage: {\nstage: ‘IXSCAN’,\nkeyPattern: {\ncountryId: 1,\ncountryName: 1,\nbtProductName: 1,\nbtProductDisplayName: 1,\nsupplierName: 1,\nsupplierProductName: 1,\naccessTypeName: 1,\nlmpName: 1,\nlmpConfigurationName: 1,\ninterfaceName: 1,\nframingName: 1,\nconnectorName: 1,\npopName: 1,\npopTypeName: 1\n},\nindexName: ‘USSQuickQuotingFilters’,\nisMultiKey: false,\nmultiKeyPaths: {\ncountryId: ,\ncountryName: ,\nbtProductName: ,\nbtProductDisplayName: ,\nsupplierName: ,\nsupplierProductName: ,\naccessTypeName: ,\nlmpName: ,\nlmpConfigurationName: ,\ninterfaceName: ,\nframingName: ,\nconnectorName: ,\npopName: ,\npopTypeName: \n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: ‘forward’,\nindexBounds: {\ncountryId: [\n‘[“DE”, “DE”]’\n],\ncountryName: [\n‘[MinKey, MaxKey]’\n],\nbtProductName: [\n‘[“BT iVPN”, “BT iVPN”]’\n],\nbtProductDisplayName: [\n‘[MinKey, MaxKey]’\n],\nsupplierName: [\n‘[MinKey, MaxKey]’\n],\nsupplierProductName: [\n‘[MinKey, MaxKey]’\n],\naccessTypeName: [\n‘[“Ethernet”, “Ethernet”]’\n],\nlmpName: [\n‘[MinKey, MaxKey]’\n],\nlmpConfigurationName: [\n‘[MinKey, MaxKey]’\n],\ninterfaceName: [\n‘[MinKey, MaxKey]’\n],\nframingName: [\n‘[MinKey, MaxKey]’\n],\nconnectorName: [\n‘[MinKey, MaxKey]’\n],\npopName: [\n‘[“Frankfurt Genfer Straße”, “Frankfurt Genfer Straße”]’\n],\npopTypeName: [\n‘[MinKey, MaxKey]’\n]\n}\n}\n}\n},\nrejectedPlans: [\n{\nstage: ‘PROJECTION_SIMPLE’,\ntransformBy: {\nsupplierId: 1,\nsupplierName: 1,\n_id: 0\n},\ninputStage: {\nstage: ‘FETCH’,\nfilter: {\n‘$and’: [\n{\naspeedNumValue: {\n‘$eq’: 1000000\n}\n},\n{\npopName: {\n‘$eq’: ‘Frankfurt Genfer Straße’\n}\n}\n]\n},\ninputStage: {\nstage: ‘IXSCAN’,\nkeyPattern: {\naccessTypeName: 1,\ncpeAccessType: 1,\nsupportResilientPop: 1,\nbtProductName: 1,\nbtProductAvailabilityStatus: 1,\nipv6Enabled: 1,\nethernetPhaseAttribute: 1,\naspeedValue: 1,\naspeedUom: 1,\npspeedValue: 1,\npspeedUom: 1,\naspeedUpValue: 1,\naspeedUpUom: 1,\nserviceVariant: 1,\nsupplierId: 1,\nsupplierProductId: 1,\ninterfaceId: 1,\nframingId: 1,\nconnectorId: 1,\nacat: 1,\ncountryId: 1\n},\nindexName: ‘accessTypeName_1_cpeAccessType_1_supportResilientPop_1_btProductName_1_btProductAvailabilityStatus_1_ipv6Enabled_1_ethernetPhaseAttribute_1_aspeedValue_1_aspeedUom_1_pspeedValue_1_pspeedUom_1_aspeedUpValue_1_aspeedUpUom_1_serviceVariant_1_supplierId_1_supplierProductId_1_interfaceId_1_framingId_1_connectorId_1_acat_1_countryId_1’,\nisMultiKey: false,\nmultiKeyPaths: {\naccessTypeName: ,\ncpeAccessType: ,\nsupportResilientPop: ,\nbtProductName: ,\nbtProductAvailabilityStatus: ,\nipv6Enabled: ,\nethernetPhaseAttribute: ,\naspeedValue: ,\naspeedUom: ,\npspeedValue: ,\npspeedUom: ,\naspeedUpValue: ,\naspeedUpUom: ,\nserviceVariant: ,\nsupplierId: ,\nsupplierProductId: ,\ninterfaceId: ,\nframingId: 
,\nconnectorId: ,\nacat: ,\ncountryId: \n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: ‘forward’,\nindexBounds: {\naccessTypeName: [\n‘[“Ethernet”, “Ethernet”]’\n],\ncpeAccessType: [\n‘[MinKey, MaxKey]’\n],\nsupportResilientPop: [\n‘[MinKey, MaxKey]’\n],\nbtProductName: [\n‘[“BT iVPN”, “BT iVPN”]’\n],\nbtProductAvailabilityStatus: [\n‘[MinKey, MaxKey]’\n],\nipv6Enabled: [\n‘[MinKey, MaxKey]’\n],\nethernetPhaseAttribute: [\n‘[MinKey, MaxKey]’\n],\naspeedValue: [\n‘[MinKey, MaxKey]’\n],\naspeedUom: [\n‘[MinKey, MaxKey]’\n],\npspeedValue: [\n‘[MinKey, MaxKey]’\n],\npspeedUom: [\n‘[MinKey, MaxKey]’\n],\naspeedUpValue: [\n‘[MinKey, MaxKey]’\n],\naspeedUpUom: [\n‘[MinKey, MaxKey]’\n],\nserviceVariant: [\n‘[MinKey, MaxKey]’\n],\nsupplierId: [\n‘[MinKey, MaxKey]’\n],\nsupplierProductId: [\n‘[MinKey, MaxKey]’\n],\ninterfaceId: [\n‘[MinKey, MaxKey]’\n],\nframingId: [\n‘[MinKey, MaxKey]’\n],\nconnectorId: [\n‘[MinKey, MaxKey]’\n],\nacat: [\n‘[MinKey, MaxKey]’\n],\ncountryId: [\n‘[“DE”, “DE”]’\n]\n}\n}\n}\n},\n{\nstage: ‘PROJECTION_SIMPLE’,\ntransformBy: {\nsupplierId: 1,\nsupplierName: 1,\n_id: 0\n},\ninputStage: {\nstage: ‘FETCH’,\nfilter: {\n‘$and’: [\n{\naspeedNumValue: {\n‘$eq’: 1000000\n}\n},\n{\nbtProductName: {\n‘$eq’: ‘BT iVPN’\n}\n},\n{\npopName: {\n‘$eq’: ‘Frankfurt Genfer Straße’\n}\n}\n]\n},\ninputStage: {\nstage: ‘IXSCAN’,\nkeyPattern: {\ncountryId: 1,\nbtProductId: 1,\npopId: 1,\nplatformId: 1,\nplatformName: 1,\naccessTypeId: 1,\naccessTypeName: 1,\nsupplierId: 1,\nsupplierProductId: 1,\naspeedValue: 1,\naspeedUom: 1,\naspeedUpValue: 1,\naspeedUpUom: 1,\npspeedValue: 1,\npspeedUom: 1,\npspeedUpValue: 1,\npspeedUpUom: 1,\nportTypeId: 1,\nlmpId: 1\n},\nindexName: ‘countryId_1_btProductId_1_popId_1_platformId_1_platformName_1_accessTypeId_1_accessTypeName_1_supplierId_1_supplierProductId_1_aspeedValue_1_aspeedUom_1_aspeedUpValue_1_aspeedUpUom_1_pspeedValue_1_pspeedUom_1_pspeedUpValue_1_pspeedUpUom_1_portTypeId_1_lmpId_1’,\nisMultiKey: false,\nmultiKeyPaths: {\ncountryId: ,\nbtProductId: ,\npopId: ,\nplatformId: ,\nplatformName: ,\naccessTypeId: ,\naccessTypeName: ,\nsupplierId: ,\nsupplierProductId: ,\naspeedValue: ,\naspeedUom: ,\naspeedUpValue: ,\naspeedUpUom: ,\npspeedValue: ,\npspeedUom: ,\npspeedUpValue: ,\npspeedUpUom: ,\nportTypeId: ,\nlmpId: \n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: ‘forward’,\nindexBounds: {\ncountryId: [\n‘[“DE”, “DE”]’\n],\nbtProductId: [\n‘[MinKey, MaxKey]’\n],\npopId: [\n‘[MinKey, MaxKey]’\n],\nplatformId: [\n‘[MinKey, MaxKey]’\n],\nplatformName: [\n‘[MinKey, MaxKey]’\n],\naccessTypeId: [\n‘[MinKey, MaxKey]’\n],\naccessTypeName: [\n‘[“Ethernet”, “Ethernet”]’\n],\nsupplierId: [\n‘[MinKey, MaxKey]’\n],\nsupplierProductId: [\n‘[MinKey, MaxKey]’\n],\naspeedValue: [\n‘[MinKey, MaxKey]’\n],\naspeedUom: [\n‘[MinKey, MaxKey]’\n],\naspeedUpValue: [\n‘[MinKey, MaxKey]’\n],\naspeedUpUom: [\n‘[MinKey, MaxKey]’\n],\npspeedValue: [\n‘[MinKey, MaxKey]’\n],\npspeedUom: [\n‘[MinKey, MaxKey]’\n],\npspeedUpValue: [\n‘[MinKey, MaxKey]’\n],\npspeedUpUom: [\n‘[MinKey, MaxKey]’\n],\nportTypeId: [\n‘[MinKey, MaxKey]’\n],\nlmpId: [\n‘[MinKey, MaxKey]’\n]\n}\n}\n}\n},\n{\nstage: ‘PROJECTION_SIMPLE’,\ntransformBy: {\nsupplierId: 1,\nsupplierName: 1,\n_id: 0\n},\ninputStage: {\nstage: ‘FETCH’,\nfilter: {\npopName: {\n‘$eq’: ‘Frankfurt Genfer Straße’\n}\n},\ninputStage: {\nstage: ‘IXSCAN’,\nkeyPattern: {\naccessTypeName: 1,\nbtProductName: 1,\nsupplierName: 1,\nsupplierProductName: 1,\nportTypeId: 
1,\npopTypeId: 1,\naspeedNumValue: 1,\npspeedNumValue: 1,\naspeedUpNumValue: 1,\npspeedUpNumValue: 1,\ncountryId: 1\n},\nindexName: ‘Staffscreenfilter’,\nisMultiKey: false,\nmultiKeyPaths: {\naccessTypeName: ,\nbtProductName: ,\nsupplierName: ,\nsupplierProductName: ,\nportTypeId: ,\npopTypeId: ,\naspeedNumValue: ,\npspeedNumValue: ,\naspeedUpNumValue: ,\npspeedUpNumValue: ,\ncountryId: \n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: ‘forward’,\nindexBounds: {\naccessTypeName: [\n‘[“Ethernet”, “Ethernet”]’\n],\nbtProductName: [\n‘[“BT iVPN”, “BT iVPN”]’\n],\nsupplierName: [\n‘[MinKey, MaxKey]’\n],\nsupplierProductName: [\n‘[MinKey, MaxKey]’\n],\nportTypeId: [\n‘[MinKey, MaxKey]’\n],\npopTypeId: [\n‘[MinKey, MaxKey]’\n],\naspeedNumValue: [\n‘[1000000, 1000000]’\n],\npspeedNumValue: [\n‘[MinKey, MaxKey]’\n],\naspeedUpNumValue: [\n‘[MinKey, MaxKey]’\n],\npspeedUpNumValue: [\n‘[MinKey, MaxKey]’\n],\ncountryId: [\n‘[“DE”, “DE”]’\n]\n}\n}\n}\n}\n]\n},\nexecutionStats: {\nexecutionSuccess: true,\nnReturned: 2026,\nexecutionTimeMillis: 247060,\ntotalKeysExamined: 27604,\ntotalDocsExamined: 26952,\nexecutionStages: {\nstage: ‘PROJECTION_SIMPLE’,\nnReturned: 2026,\nexecutionTimeMillisEstimate: 191238,\nworks: 27604,\nadvanced: 2026,\nneedTime: 25577,\nneedYield: 0,\nsaveState: 10031,\nrestoreState: 10031,\nisEOF: 1,\ntransformBy: {\nsupplierId: 1,\nsupplierName: 1,\n_id: 0\n},\ninputStage: {\nstage: ‘FETCH’,\nfilter: {\naspeedNumValue: {\n‘$eq’: 1000000\n}\n},\nnReturned: 2026,\nexecutionTimeMillisEstimate: 191228,\nworks: 27604,\nadvanced: 2026,\nneedTime: 25577,\nneedYield: 0,\nsaveState: 10031,\nrestoreState: 10031,\nisEOF: 1,\ndocsExamined: 26952,\nalreadyHasObj: 0,\ninputStage: {\nstage: ‘IXSCAN’,\nnReturned: 26952,\nexecutionTimeMillisEstimate: 505,\nworks: 27604,\nadvanced: 26952,\nneedTime: 651,\nneedYield: 0,\nsaveState: 10031,\nrestoreState: 10031,\nisEOF: 1,\nkeyPattern: {\ncountryId: 1,\ncountryName: 1,\nbtProductName: 1,\nbtProductDisplayName: 1,\nsupplierName: 1,\nsupplierProductName: 1,\naccessTypeName: 1,\nlmpName: 1,\nlmpConfigurationName: 1,\ninterfaceName: 1,\nframingName: 1,\nconnectorName: 1,\npopName: 1,\npopTypeName: 1\n},\nindexName: ‘USSQuickQuotingFilters’,\nisMultiKey: false,\nmultiKeyPaths: {\ncountryId: ,\ncountryName: ,\nbtProductName: ,\nbtProductDisplayName: ,\nsupplierName: ,\nsupplierProductName: ,\naccessTypeName: ,\nlmpName: ,\nlmpConfigurationName: ,\ninterfaceName: ,\nframingName: ,\nconnectorName: ,\npopName: ,\npopTypeName: \n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: ‘forward’,\nindexBounds: {\ncountryId: [\n‘[“DE”, “DE”]’\n],\ncountryName: [\n‘[MinKey, MaxKey]’\n],\nbtProductName: [\n‘[“BT iVPN”, “BT iVPN”]’\n],\nbtProductDisplayName: [\n‘[MinKey, MaxKey]’\n],\nsupplierName: [\n‘[MinKey, MaxKey]’\n],\nsupplierProductName: [\n‘[MinKey, MaxKey]’\n],\naccessTypeName: [\n‘[“Ethernet”, “Ethernet”]’\n],\nlmpName: [\n‘[MinKey, MaxKey]’\n],\nlmpConfigurationName: [\n‘[MinKey, MaxKey]’\n],\ninterfaceName: [\n‘[MinKey, MaxKey]’\n],\nframingName: [\n‘[MinKey, MaxKey]’\n],\nconnectorName: [\n‘[MinKey, MaxKey]’\n],\npopName: [\n‘[“Frankfurt Genfer Straße”, “Frankfurt Genfer Straße”]’\n],\npopTypeName: [\n‘[MinKey, MaxKey]’\n]\n},\nkeysExamined: 27604,\nseeks: 652,\ndupsTested: 0,\ndupsDropped: 0\n}\n}\n}\n}\n},\nnReturned: 2026,\nexecutionTimeMillisEstimate: 168293\n},\n{\n‘$group’: {\n_id: {\nsupplierName: ‘$supplierName’,\nsupplierId: ‘$supplierId’\n}\n},\nnReturned: 
10,\nexecutionTimeMillisEstimate: 168293\n},\n{\n‘$project’: {\nsupplierName: ‘$_id.supplierName’,\nsupplierId: ‘$_id.supplierId’,\n_id: false\n},\nnReturned: 10,\nexecutionTimeMillisEstimate: 168293\n},\n{\n‘$sort’: {\nsortKey: {\nsupplierName: 1\n}\n},\nnReturned: 10,\nexecutionTimeMillisEstimate: 168293\n}\n],\nserverInfo: {\nhost: ‘blp03537258’,\nport: 61901,\nversion: ‘4.4.11’,\ngitVersion: ‘b7530cacde8432d2f22ed506f258ff9c3b45c5e9’\n},\nok: 1,\n‘$clusterTime’: {\nclusterTime: Timestamp({ t: 1677649759, i: 1 }),\nsignature: {\nhash: Binary(Buffer.from(“89d0fc7a6e2eacb74e6448beef1c45c055fb4c63”, “hex”), 0),\nkeyId: 7163629481175810000\n}\n},\noperationTime: Timestamp({ t: 1677649759, i: 1 })\n}",
"username": "Yokesh_Selvazhagan"
},
{
"code": "",
"text": "execution 2:db.btProductAccessMapping.aggregate([{\n“$match”: {\n“$and”: [{\n“accessTypeName”: “Ethernet”\n},\n{\n“aspeedNumValue”: 1000000\n},\n{\n“popName”: “Frankfurt Genfer Straße”\n},\n{\n“countryId”: “DE”\n},\n{\n“btProductName”: “BT iVPN”\n}\n]\n}\n},\n{\n“$group”: {\n“_id”: {\n“supplierName”: “$supplierName”,\n“supplierId”: “$supplierId”\n}\n}\n},\n{\n“$project”: {\n“_id”: 0,\n“supplierName”: “$_id.supplierName”,\n“supplierId”: “$_id.supplierId”\n}\n}, {\n“$sort”: {\n“supplierName”: 1\n}\n}\n]).explain(“executionStats”)\n{\nstages: [\n{\n‘$cursor’: {\nqueryPlanner: {\nplannerVersion: 1,\nnamespace: ‘btProductAvailabilityDB_staging.btProductAccessMapping’,\nindexFilterSet: false,\nparsedQuery: {\n‘$and’: [\n{\naccessTypeName: {\n‘$eq’: ‘Ethernet’\n}\n},\n{\naspeedNumValue: {\n‘$eq’: 1000000\n}\n},\n{\nbtProductName: {\n‘$eq’: ‘BT iVPN’\n}\n},\n{\ncountryId: {\n‘$eq’: ‘DE’\n}\n},\n{\npopName: {\n‘$eq’: ‘Frankfurt Genfer Straße’\n}\n}\n]\n},\nqueryHash: ‘26F32A64’,\nplanCacheKey: ‘29241033’,\nwinningPlan: {\nstage: ‘PROJECTION_SIMPLE’,\ntransformBy: {\nsupplierId: 1,\nsupplierName: 1,\n_id: 0\n},\ninputStage: {\nstage: ‘FETCH’,\nfilter: {\naspeedNumValue: {\n‘$eq’: 1000000\n}\n},\ninputStage: {\nstage: ‘IXSCAN’,\nkeyPattern: {\ncountryId: 1,\ncountryName: 1,\nbtProductName: 1,\nbtProductDisplayName: 1,\nsupplierName: 1,\nsupplierProductName: 1,\naccessTypeName: 1,\nlmpName: 1,\nlmpConfigurationName: 1,\ninterfaceName: 1,\nframingName: 1,\nconnectorName: 1,\npopName: 1,\npopTypeName: 1\n},\nindexName: ‘USSQuickQuotingFilters’,\nisMultiKey: false,\nmultiKeyPaths: {\ncountryId: ,\ncountryName: ,\nbtProductName: ,\nbtProductDisplayName: ,\nsupplierName: ,\nsupplierProductName: ,\naccessTypeName: ,\nlmpName: ,\nlmpConfigurationName: ,\ninterfaceName: ,\nframingName: ,\nconnectorName: ,\npopName: ,\npopTypeName: \n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: ‘forward’,\nindexBounds: {\ncountryId: [\n‘[“DE”, “DE”]’\n],\ncountryName: [\n‘[MinKey, MaxKey]’\n],\nbtProductName: [\n‘[“BT iVPN”, “BT iVPN”]’\n],\nbtProductDisplayName: [\n‘[MinKey, MaxKey]’\n],\nsupplierName: [\n‘[MinKey, MaxKey]’\n],\nsupplierProductName: [\n‘[MinKey, MaxKey]’\n],\naccessTypeName: [\n‘[“Ethernet”, “Ethernet”]’\n],\nlmpName: [\n‘[MinKey, MaxKey]’\n],\nlmpConfigurationName: [\n‘[MinKey, MaxKey]’\n],\ninterfaceName: [\n‘[MinKey, MaxKey]’\n],\nframingName: [\n‘[MinKey, MaxKey]’\n],\nconnectorName: [\n‘[MinKey, MaxKey]’\n],\npopName: [\n‘[“Frankfurt Genfer Straße”, “Frankfurt Genfer Straße”]’\n],\npopTypeName: [\n‘[MinKey, MaxKey]’\n]\n}\n}\n}\n},\nrejectedPlans: [\n{\nstage: ‘PROJECTION_SIMPLE’,\ntransformBy: {\nsupplierId: 1,\nsupplierName: 1,\n_id: 0\n},\ninputStage: {\nstage: ‘FETCH’,\nfilter: {\n‘$and’: [\n{\naspeedNumValue: {\n‘$eq’: 1000000\n}\n},\n{\npopName: {\n‘$eq’: ‘Frankfurt Genfer Straße’\n}\n}\n]\n},\ninputStage: {\nstage: ‘IXSCAN’,\nkeyPattern: {\naccessTypeName: 1,\ncpeAccessType: 1,\nsupportResilientPop: 1,\nbtProductName: 1,\nbtProductAvailabilityStatus: 1,\nipv6Enabled: 1,\nethernetPhaseAttribute: 1,\naspeedValue: 1,\naspeedUom: 1,\npspeedValue: 1,\npspeedUom: 1,\naspeedUpValue: 1,\naspeedUpUom: 1,\nserviceVariant: 1,\nsupplierId: 1,\nsupplierProductId: 1,\ninterfaceId: 1,\nframingId: 1,\nconnectorId: 1,\nacat: 1,\ncountryId: 1\n},\nindexName: 
‘accessTypeName_1_cpeAccessType_1_supportResilientPop_1_btProductName_1_btProductAvailabilityStatus_1_ipv6Enabled_1_ethernetPhaseAttribute_1_aspeedValue_1_aspeedUom_1_pspeedValue_1_pspeedUom_1_aspeedUpValue_1_aspeedUpUom_1_serviceVariant_1_supplierId_1_supplierProductId_1_interfaceId_1_framingId_1_connectorId_1_acat_1_countryId_1’,\nisMultiKey: false,\nmultiKeyPaths: {\naccessTypeName: ,\ncpeAccessType: ,\nsupportResilientPop: ,\nbtProductName: ,\nbtProductAvailabilityStatus: ,\nipv6Enabled: ,\nethernetPhaseAttribute: ,\naspeedValue: ,\naspeedUom: ,\npspeedValue: ,\npspeedUom: ,\naspeedUpValue: ,\naspeedUpUom: ,\nserviceVariant: ,\nsupplierId: ,\nsupplierProductId: ,\ninterfaceId: ,\nframingId: ,\nconnectorId: ,\nacat: ,\ncountryId: \n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: ‘forward’,\nindexBounds: {\naccessTypeName: [\n‘[“Ethernet”, “Ethernet”]’\n],\ncpeAccessType: [\n‘[MinKey, MaxKey]’\n],\nsupportResilientPop: [\n‘[MinKey, MaxKey]’\n],\nbtProductName: [\n‘[“BT iVPN”, “BT iVPN”]’\n],\nbtProductAvailabilityStatus: [\n‘[MinKey, MaxKey]’\n],\nipv6Enabled: [\n‘[MinKey, MaxKey]’\n],\nethernetPhaseAttribute: [\n‘[MinKey, MaxKey]’\n],\naspeedValue: [\n‘[MinKey, MaxKey]’\n],\naspeedUom: [\n‘[MinKey, MaxKey]’\n],\npspeedValue: [\n‘[MinKey, MaxKey]’\n],\npspeedUom: [\n‘[MinKey, MaxKey]’\n],\naspeedUpValue: [\n‘[MinKey, MaxKey]’\n],\naspeedUpUom: [\n‘[MinKey, MaxKey]’\n],\nserviceVariant: [\n‘[MinKey, MaxKey]’\n],\nsupplierId: [\n‘[MinKey, MaxKey]’\n],\nsupplierProductId: [\n‘[MinKey, MaxKey]’\n],\ninterfaceId: [\n‘[MinKey, MaxKey]’\n],\nframingId: [\n‘[MinKey, MaxKey]’\n],\nconnectorId: [\n‘[MinKey, MaxKey]’\n],\nacat: [\n‘[MinKey, MaxKey]’\n],\ncountryId: [\n‘[“DE”, “DE”]’\n]\n}\n}\n}\n},\n{\nstage: ‘PROJECTION_SIMPLE’,\ntransformBy: {\nsupplierId: 1,\nsupplierName: 1,\n_id: 0\n},\ninputStage: {\nstage: ‘FETCH’,\nfilter: {\n‘$and’: [\n{\naspeedNumValue: {\n‘$eq’: 1000000\n}\n},\n{\nbtProductName: {\n‘$eq’: ‘BT iVPN’\n}\n},\n{\npopName: {\n‘$eq’: ‘Frankfurt Genfer Straße’\n}\n}\n]\n},\ninputStage: {\nstage: ‘IXSCAN’,\nkeyPattern: {\ncountryId: 1,\nbtProductId: 1,\npopId: 1,\nplatformId: 1,\nplatformName: 1,\naccessTypeId: 1,\naccessTypeName: 1,\nsupplierId: 1,\nsupplierProductId: 1,\naspeedValue: 1,\naspeedUom: 1,\naspeedUpValue: 1,\naspeedUpUom: 1,\npspeedValue: 1,\npspeedUom: 1,\npspeedUpValue: 1,\npspeedUpUom: 1,\nportTypeId: 1,\nlmpId: 1\n},\nindexName: ‘countryId_1_btProductId_1_popId_1_platformId_1_platformName_1_accessTypeId_1_accessTypeName_1_supplierId_1_supplierProductId_1_aspeedValue_1_aspeedUom_1_aspeedUpValue_1_aspeedUpUom_1_pspeedValue_1_pspeedUom_1_pspeedUpValue_1_pspeedUpUom_1_portTypeId_1_lmpId_1’,\nisMultiKey: false,\nmultiKeyPaths: {\ncountryId: ,\nbtProductId: ,\npopId: ,\nplatformId: ,\nplatformName: ,\naccessTypeId: ,\naccessTypeName: ,\nsupplierId: ,\nsupplierProductId: ,\naspeedValue: ,\naspeedUom: ,\naspeedUpValue: ,\naspeedUpUom: ,\npspeedValue: ,\npspeedUom: ,\npspeedUpValue: ,\npspeedUpUom: ,\nportTypeId: ,\nlmpId: \n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: ‘forward’,\nindexBounds: {\ncountryId: [\n‘[“DE”, “DE”]’\n],\nbtProductId: [\n‘[MinKey, MaxKey]’\n],\npopId: [\n‘[MinKey, MaxKey]’\n],\nplatformId: [\n‘[MinKey, MaxKey]’\n],\nplatformName: [\n‘[MinKey, MaxKey]’\n],\naccessTypeId: [\n‘[MinKey, MaxKey]’\n],\naccessTypeName: [\n‘[“Ethernet”, “Ethernet”]’\n],\nsupplierId: [\n‘[MinKey, MaxKey]’\n],\nsupplierProductId: [\n‘[MinKey, MaxKey]’\n],\naspeedValue: [\n‘[MinKey, 
MaxKey]’\n],\naspeedUom: [\n‘[MinKey, MaxKey]’\n],\naspeedUpValue: [\n‘[MinKey, MaxKey]’\n],\naspeedUpUom: [\n‘[MinKey, MaxKey]’\n],\npspeedValue: [\n‘[MinKey, MaxKey]’\n],\npspeedUom: [\n‘[MinKey, MaxKey]’\n],\npspeedUpValue: [\n‘[MinKey, MaxKey]’\n],\npspeedUpUom: [\n‘[MinKey, MaxKey]’\n],\nportTypeId: [\n‘[MinKey, MaxKey]’\n],\nlmpId: [\n‘[MinKey, MaxKey]’\n]\n}\n}\n}\n},\n{\nstage: ‘PROJECTION_SIMPLE’,\ntransformBy: {\nsupplierId: 1,\nsupplierName: 1,\n_id: 0\n},\ninputStage: {\nstage: ‘FETCH’,\nfilter: {\npopName: {\n‘$eq’: ‘Frankfurt Genfer Straße’\n}\n},\ninputStage: {\nstage: ‘IXSCAN’,\nkeyPattern: {\naccessTypeName: 1,\nbtProductName: 1,\nsupplierName: 1,\nsupplierProductName: 1,\nportTypeId: 1,\npopTypeId: 1,\naspeedNumValue: 1,\npspeedNumValue: 1,\naspeedUpNumValue: 1,\npspeedUpNumValue: 1,\ncountryId: 1\n},\nindexName: ‘Staffscreenfilter’,\nisMultiKey: false,\nmultiKeyPaths: {\naccessTypeName: ,\nbtProductName: ,\nsupplierName: ,\nsupplierProductName: ,\nportTypeId: ,\npopTypeId: ,\naspeedNumValue: ,\npspeedNumValue: ,\naspeedUpNumValue: ,\npspeedUpNumValue: ,\ncountryId: \n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: ‘forward’,\nindexBounds: {\naccessTypeName: [\n‘[“Ethernet”, “Ethernet”]’\n],\nbtProductName: [\n‘[“BT iVPN”, “BT iVPN”]’\n],\nsupplierName: [\n‘[MinKey, MaxKey]’\n],\nsupplierProductName: [\n‘[MinKey, MaxKey]’\n],\nportTypeId: [\n‘[MinKey, MaxKey]’\n],\npopTypeId: [\n‘[MinKey, MaxKey]’\n],\naspeedNumValue: [\n‘[1000000, 1000000]’\n],\npspeedNumValue: [\n‘[MinKey, MaxKey]’\n],\naspeedUpNumValue: [\n‘[MinKey, MaxKey]’\n],\npspeedUpNumValue: [\n‘[MinKey, MaxKey]’\n],\ncountryId: [\n‘[“DE”, “DE”]’\n]\n}\n}\n}\n}\n]\n},\nexecutionStats: {\nexecutionSuccess: true,\nnReturned: 2026,\nexecutionTimeMillis: 1922,\ntotalKeysExamined: 27604,\ntotalDocsExamined: 26952,\nexecutionStages: {\nstage: ‘PROJECTION_SIMPLE’,\nnReturned: 2026,\nexecutionTimeMillisEstimate: 1411,\nworks: 27604,\nadvanced: 2026,\nneedTime: 25577,\nneedYield: 0,\nsaveState: 125,\nrestoreState: 125,\nisEOF: 1,\ntransformBy: {\nsupplierId: 1,\nsupplierName: 1,\n_id: 0\n},\ninputStage: {\nstage: ‘FETCH’,\nfilter: {\naspeedNumValue: {\n‘$eq’: 1000000\n}\n},\nnReturned: 2026,\nexecutionTimeMillisEstimate: 1411,\nworks: 27604,\nadvanced: 2026,\nneedTime: 25577,\nneedYield: 0,\nsaveState: 125,\nrestoreState: 125,\nisEOF: 1,\ndocsExamined: 26952,\nalreadyHasObj: 0,\ninputStage: {\nstage: ‘IXSCAN’,\nnReturned: 26952,\nexecutionTimeMillisEstimate: 88,\nworks: 27604,\nadvanced: 26952,\nneedTime: 651,\nneedYield: 0,\nsaveState: 125,\nrestoreState: 125,\nisEOF: 1,\nkeyPattern: {\ncountryId: 1,\ncountryName: 1,\nbtProductName: 1,\nbtProductDisplayName: 1,\nsupplierName: 1,\nsupplierProductName: 1,\naccessTypeName: 1,\nlmpName: 1,\nlmpConfigurationName: 1,\ninterfaceName: 1,\nframingName: 1,\nconnectorName: 1,\npopName: 1,\npopTypeName: 1\n},\nindexName: ‘USSQuickQuotingFilters’,\nisMultiKey: false,\nmultiKeyPaths: {\ncountryId: ,\ncountryName: ,\nbtProductName: ,\nbtProductDisplayName: ,\nsupplierName: ,\nsupplierProductName: ,\naccessTypeName: ,\nlmpName: ,\nlmpConfigurationName: ,\ninterfaceName: ,\nframingName: ,\nconnectorName: ,\npopName: ,\npopTypeName: \n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: ‘forward’,\nindexBounds: {\ncountryId: [\n‘[“DE”, “DE”]’\n],\ncountryName: [\n‘[MinKey, MaxKey]’\n],\nbtProductName: [\n‘[“BT iVPN”, “BT iVPN”]’\n],\nbtProductDisplayName: [\n‘[MinKey, MaxKey]’\n],\nsupplierName: [\n‘[MinKey, 
MaxKey]’\n],\nsupplierProductName: [\n‘[MinKey, MaxKey]’\n],\naccessTypeName: [\n‘[“Ethernet”, “Ethernet”]’\n],\nlmpName: [\n‘[MinKey, MaxKey]’\n],\nlmpConfigurationName: [\n‘[MinKey, MaxKey]’\n],\ninterfaceName: [\n‘[MinKey, MaxKey]’\n],\nframingName: [\n‘[MinKey, MaxKey]’\n],\nconnectorName: [\n‘[MinKey, MaxKey]’\n],\npopName: [\n‘[“Frankfurt Genfer Straße”, “Frankfurt Genfer Straße”]’\n],\npopTypeName: [\n‘[MinKey, MaxKey]’\n]\n},\nkeysExamined: 27604,\nseeks: 652,\ndupsTested: 0,\ndupsDropped: 0\n}\n}\n}\n}\n},\nnReturned: 2026,\nexecutionTimeMillisEstimate: 1209\n},\n{\n‘$group’: {\n_id: {\nsupplierName: ‘$supplierName’,\nsupplierId: ‘$supplierId’\n}\n},\nnReturned: 10,\nexecutionTimeMillisEstimate: 1209\n},\n{\n‘$project’: {\nsupplierName: ‘$_id.supplierName’,\nsupplierId: ‘$_id.supplierId’,\n_id: false\n},\nnReturned: 10,\nexecutionTimeMillisEstimate: 1209\n},\n{\n‘$sort’: {\nsortKey: {\nsupplierName: 1\n}\n},\nnReturned: 10,\nexecutionTimeMillisEstimate: 1209\n}\n],\nserverInfo: {\nhost: ‘blp03537258’,\nport: 61901,\nversion: ‘4.4.11’,\ngitVersion: ‘b7530cacde8432d2f22ed506f258ff9c3b45c5e9’\n},\nok: 1,\n‘$clusterTime’: {\nclusterTime: Timestamp({ t: 1677650309, i: 1 }),\nsignature: {\nhash: Binary(Buffer.from(“2a91e8cdae5cce2b058b76d29d370d3ec487472d”, “hex”), 0),\nkeyId: 7163629481175810000\n}\n},\noperationTime: Timestamp({ t: 1677650309, i: 1 })\n}",
"username": "Yokesh_Selvazhagan"
},
{
"code": "",
"text": "Document model:\n{\n“_id”: {\n“$oid”: “63e3485d2e6a2c71dd04ee29”\n},\n“popId”: “081-00012”,\n“popName”: “Tokyo 2”,\n“popTypeId”: 1,\n“popTypeName”: “GPOP”,\n“legacyCountryId”: 13,\n“countryName”: “Japan”,\n“countryId”: “JP”,\n“cityName”: “Tokyo”,\n“btProductId”: 76,\n“btProductName”: “BT MPLS”,\n“btProductDisplayName”: “IP Connect global (EDCA)”,\n“btProductAbbr”: “MPLS”,\n“btProductAvailCd”: 1,\n“btProductAvailabilityStatus”: “Standard”,\n“specialBidcomments”: “NA”,\n“platformId”: 3,\n“platformName”: “MPLS - harmonized”,\n“supplierId”: 4,\n“supplierName”: “Colt”,\n“supplierProductId”: 694652,\n“supplierProductName”: “Colt Ethernet Hub and Spoke P2A”,\n“accessTypeGroup”: “Ethernet”,\n“accessTypeId”: 2,\n“accessTypeName”: “Ethernet”,\n“cpeAccessType”: “1 Gbps (Ethernet)”,\n“pspeedCode”: 2254,\n“pspeedValue”: “6”,\n“pspeedUom”: “Mbps”,\n“pspeedNumValue”: 6000,\n“pspeedUpCode”: 2254,\n“pspeedUpValue”: “6”,\n“pspeedUpUom”: “Mbps”,\n“pspeedUpNumValue”: 6000,\n“pspeedStatusCode”: 1,\n“portAvailabilityStatus”: “Available”,\n“aspeedCode”: 7017,\n“aspeedValue”: “60”,\n“aspeedUom”: “Mbps”,\n“aspeedNumValue”: 60000,\n“aspeedStatusCode”: 1,\n“accessAvailabilityStatus”: “Available”,\n“orderingStatus”: “Standard”,\n“aspeedUpCode”: 7017,\n“aspeedUpValue”: “60”,\n“aspeedUpUom”: “Mbps”,\n“aspeedUpNumValue”: 60000,\n“interfaceId”: 13,\n“interfaceName”: “1000Base-LX”,\n“framingId”: 2,\n“framingName”: “None (Clear)”,\n“connectorId”: 6,\n“connectorName”: “SC series”,\n“accessLeadTime”: null,\n“accessLeadTimeStatus”: null,\n“pspeedLeadTime”: 14,\n“cpeLeadTime”: null,\n“cpeLeadTimeStatus”: null,\n“serviceLeadTimeCpe”: “34”,\n“serLeadtimestatCpe”: “Estimate”,\n“serviceLeadTimeNocpe”: “34”,\n“serLeadtimestatNocpe”: “Estimate”,\n“supportResilientPop”: 0,\n“serviceId”: null,\n“portTypeId”: 23504,\n“btPackageId”: -1,\n“btPackageName”: null,\n“serviceVariant”: “Premium”,\n“deliveryMode”: “Fibre”,\n“ipv6Enabled”: true,\n“ethernetPhaseId”: 24,\n“ethernetPhaseAttribute”: “2b”,\n“customerLocationTypeId”: 4,\n“customerLocationTypeName”: “Off Net”,\n“faultrepair24x7Id”: 2,\n“faultrepair24x7Name”: “included”,\n“lmpId”: 2261,\n“lmpName”: “Colt”,\n“lmpConfigurationName”: “Colt(Off Net)”,\n“acat”: “AC1+”,\n“spm”: “Amber”,\n“productAvailability”: “Off Net”,\n“standardFrameSize”: 1600,\n“status”: “LIVE”,\n“physicalLayer”: “TBC”,\n“deliveryMedium”: “TBC”\n}",
"username": "Yokesh_Selvazhagan"
}
] | Aggregate query is taking long-time to fetch results from huge collection | 2023-03-01T06:18:25.870Z | Aggregate query is taking long-time to fetch results from huge collection | 438 |
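For reference, below is a minimal sketch (Node.js driver, TypeScript) of how a pipeline like the one in the explain output above can be profiled and backed by a more selective index. The pipeline and field names are reconstructed from the executionStats section; the suggested compound index is only an illustration of covering the equality predicates so far fewer documents need to be fetched, not a verified fix for this exact workload.

```ts
import { MongoClient } from "mongodb";

// Hypothetical connection string and namespace, for illustration only.
const client = new MongoClient("mongodb://localhost:27017");
const coll = client.db("quoting").collection("products");

async function main() {
  await client.connect();

  // An index that starts with the equality predicates (including aspeedNumValue)
  // keeps the residual FETCH filter from discarding ~25k of the ~27k keys
  // examined in the executionStats above.
  await coll.createIndex({
    countryId: 1,
    btProductName: 1,
    accessTypeName: 1,
    popName: 1,
    aspeedNumValue: 1,
    supplierId: 1,
    supplierName: 1,
  });

  // Pipeline reconstructed from the explain output above.
  const pipeline = [
    {
      $match: {
        countryId: "DE",
        btProductName: "BT iVPN",
        accessTypeName: "Ethernet",
        popName: "Frankfurt Genfer Straße",
        aspeedNumValue: 1000000,
      },
    },
    { $group: { _id: { supplierId: "$supplierId", supplierName: "$supplierName" } } },
    { $project: { supplierId: "$_id.supplierId", supplierName: "$_id.supplierName", _id: 0 } },
    { $sort: { supplierName: 1 } },
  ];

  // Compare totalKeysExamined / totalDocsExamined / nReturned before and after.
  const plan = await coll.aggregate(pipeline).explain("executionStats");
  console.dir(plan, { depth: 6 });

  await client.close();
}

main().catch(console.error);
```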
null | [
"sharding",
"capacity-planning"
] | [
{
"code": "maxSize",
"text": "Hello,\nusing MongoDB 6.0.4 with ranged-sharding.\nI’ve noticed that once the attached storage goes to 100% capacity, then mongo starts crashing with core dumps.\nTried using maxSize for the shard, but apparently it does not protect from exceeding this definition, but only as a hint for the balancer:\nhttps://www.mongodb.com/docs/manual/tutorial/manage-sharded-cluster-balancer/#change-the-maximum-storage-size-for-a-given-shardIs there a way to limit the capacity usage of mongo to not use all of the disk? or, prevent the core dump from happening in this case, like a graceful shut down?",
"username": "Oded_Raiches"
},
{
"code": "serverStatus",
"text": "ok, i searched but couldn’t get any info on setting limit on disk usage. But",
"username": "Kobe_W"
}
] | Mongod crashing with core dumps when storage is full | 2023-02-28T13:45:15.813Z | Mongod crashing with core dumps when storage is full | 904 |
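Since there is no server-side cap on how much disk mongod will consume, the practical mitigation is to alert well before the volume fills. Below is a small sketch along those lines, assuming the fsUsedSize/fsTotalSize fields reported by dbStats are available on your storage engine; the threshold and connection details are placeholders.

```ts
import { MongoClient } from "mongodb";

// Illustrative values; point this at one mongod per shard in a real setup.
const client = new MongoClient("mongodb://localhost:27017");
const ALERT_THRESHOLD = 0.8; // alert at 80% of filesystem capacity

async function checkDiskUsage() {
  await client.connect();
  // dbStats reports fsUsedSize / fsTotalSize for the filesystem holding dbPath.
  const stats = await client.db("admin").command({ dbStats: 1 });
  const used = stats.fsUsedSize as number;
  const total = stats.fsTotalSize as number;
  const ratio = used / total;

  if (ratio >= ALERT_THRESHOLD) {
    // Hook this into your alerting; the point is to act well before 100%,
    // since a full disk makes mongod abort rather than degrade gracefully.
    console.warn(`Disk ${Math.round(ratio * 100)}% full - free space or add capacity`);
  }
  await client.close();
}

checkDiskUsage().catch(console.error);
```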
null | [
"document-versioning"
] | [
{
"code": "",
"text": "We are planning to use MongoDB for a new project/product and one of the requirements is to have history/revisions of the documents for auditing purposes.After some research we found out that MongoDB supports Document Versioning, but it is always mentioned that there are limitations on the number of revisions and versioned documents and we couldn’t find any details about these limitations as what that “number” is.",
"username": "ZEYAD_BIN_KUWAIR"
},
{
"code": "",
"text": "I can’t recall that mongodb supports versioning as built-in fefature. Any official link?You may also want to check this.The Document Versioning Pattern - When history is important in a document",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hey @ZEYAD_BIN_KUWAIR,Welcome to the MongoDB Community Forums! After some research we found out that MongoDB supports Document Versioning…we couldn’t find any details about these limitations as what that “number” is.Can you please let us know which number you’re mentioning here? It would be great to link the source of your information as well for us from where you read MongoDB supports Document Versioning to better able to guide you here.As @Kobe_W has correctly pointed out, you can use MongoDB’s Document Versioning Pattern to build your data model so as to be easily able to support document versioning.Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
}
] | MongoDB document versioning limitations | 2023-02-21T09:23:35.367Z | MongoDB document versioning limitations | 1,077 |
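As the replies note, versioning in MongoDB is a schema design pattern rather than a server feature, so there is no built-in revision limit to look up; the usual constraints are the 16 MB document size limit and the storage the history collection consumes. Below is a minimal sketch of the Document Versioning Pattern mentioned above, with illustrative collection names and a transaction (which assumes a replica set or Atlas deployment).

```ts
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
const db = client.db("app");
const current = db.collection("policies");           // latest revision only
const revisions = db.collection("policies_history"); // full audit trail

// Update a document and archive the previous revision atomically.
// Transactions require a replica set (or Atlas); outside a transaction the
// same two writes still work, just without all-or-nothing semantics.
async function updateWithHistory(id: any, changes: Record<string, unknown>) {
  await client.connect();
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const existing = await current.findOne({ _id: id }, { session });
      if (existing) {
        const { _id, ...rest } = existing;
        // Keep the old version; the revision counter makes versions queryable.
        await revisions.insertOne({ documentId: _id, ...rest }, { session });
      }
      await current.updateOne(
        { _id: id },
        { $set: changes, $inc: { revision: 1 } },
        { session }
      );
    });
  } finally {
    await session.endSession();
  }
}
```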
null | [] | [
{
"code": "",
"text": "Hey Guys,I’ve only been working with Relational Databases for many years, so it’s a little hard for me to imagine how I would go about doing this properly:In the mflix movie example data, we will find the cast of the movie with Names as an Array.\nSo many movies may have the same name (but maybe its not the same guy). But what do i do when one of these people gets married?So: How do i rename the Name of the people?In relation-databases there would be a “cast”-Table and a “cast-movie”-Table.\nAnd i will just update the cast-Table Name and the new Name is in all movies.What is the exact Workflow here in Mongo?I am sure this question has been asked several times here, but unfortunately i have not found it.Thanks\npad",
"username": "Patrick_Peters"
},
{
"code": "db.movies.update(\n{ title : \"<movie name>\" , cast:\"<Actor's Name>\" },\n{ $set:{ \"field.$\" : \"<New Name>\"}\n})\nupdate",
"text": "Hey @Patrick_Peters,Welcome to the MongoDB Community Forums! So many movies may have the same name (but maybe its not the same guy).But what do i do when one of these people gets married?So: How do i rename the Name of the people?So you want to do an update to a name that appears across multiple movies or only for some movies since names can be the same for two different people as you mentioned?As per what you described, you can use update with aggregation pipeline operators. For example, the syntax to update a cast member’s name using $set:Similarly, you can use other aggregation operators attached in the documentation based on different use cases. Also note that in the update example above, the update will only be performed on a single document. You can use bulk operation if you want to change all documents containing the name to be changed.Additionally, since you are a beginner in MongoDB, I would highly recommend you check out our University Courses to help you get started in MongoDB and make you feel more familiar with the concepts. \nIntroduction to MongoDB\nMongoDB for SQL ProsPlease let us know if there are any doubts about this. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | (Newbie) How to update "cast" name in mflix movie example | 2023-02-24T06:45:17.100Z | (Newbie) How to update “cast” name in mflix movie example | 486 |
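To complement the single-document update shown in the reply, here is a hedged sketch of renaming a cast member across every movie in one statement using updateMany and the filtered positional operator. It assumes the sample_mflix schema where cast is an array of plain name strings; the names used are only examples.

```ts
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
const movies = client.db("sample_mflix").collection("movies");

// Rename a cast member in every movie that lists them. updateMany touches
// all matching documents, and arrayFilters targets only the matching
// array elements inside each document.
async function renameCastMember(oldName: string, newName: string) {
  await client.connect();
  const result = await movies.updateMany(
    { cast: oldName },
    { $set: { "cast.$[elem]": newName } },
    { arrayFilters: [{ elem: oldName }] }
  );
  console.log(`matched ${result.matchedCount}, modified ${result.modifiedCount}`);
  await client.close();
}

// Example values only.
renameCastMember("Natalie Hershlag", "Natalie Portman").catch(console.error);
```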
null | [] | [
{
"code": "Can't extract geo keys'Can't extract geo keys: { _id: ObjectId('63e519a25d8e3392b33c997e'), centroid: { type: \"Point\", coordinates: [ -1.20585, 50.800568 ] }, geometry: { type: \"Polygon\", coordinates: [ [ [ -1.208399, 50.800335 ], [ -1.208383, 50.800265 ], [ -1.208363, 50.800228 ], [ -1.208298, 50.800104 ], [ -1.208203, 50.799997 ], [ -1.208116, 50.799885 ], [ -1.20796, 50.799773 ], [ -1.207641, 50.799532 ], [ -1.207576, 50.799503 ], [ -1.207356, 50.799427 ], [ -1.207213, 50.799396 ], [ -1.207309, 50.801346 ], [ -1.207433, 50.801312 ], [ -1.207613, 50.801243 ], [ -1.207635, 50.801235 ], [ -1.208173, 50.800931 ], [ -1.20826, 50.800845 ], [ -1.208309, 50.800783 ], [ -1.208331, 50.800747 ], [ -1.208349, 50.800718 ], [ -1.208362, 50.800685 ], [ -1.208372, 50.80066 ], [ -1.208396, 50.800504 ], [ -1.208399, 50.800335 ] ] Edges 34 and 36 cross. Edge locations in degrees: [50.7999570, -1.2033620]-[50.7999550, -1.2033580] and [50.7999560, -1.2033600]-[50.8001420, -1.2031880]'\n",
"text": "I’ve been having an incredibly frustrating time uploading polygons to a collection with a 2dsphere index. I have been constantly running into Can't extract geo keys errors on seemingly ‘clean’ polygons and I just can’t understand why. The example below shows one:I have also uploaded the feature geoJSON to a Github Gist here. The marker you can see is one of the coordinates given in the ‘edge locations’ even though there isn’t even an edge over there!Apart from helping me understand this error, is there a way I can flag these errors before I try to bulk upload them to our database? It’s very time consuming to create and clean these polygons everytime and only to realise there is a problem when uploading.",
"username": "Ben_Said"
},
{
"code": "2dsphere",
"text": "Hi @Ben_Said and welcome to the MongoDB community forum!!Generally, the above error occurs when the coordinates are not properly defined while inserting the document into the collection.However, for further understanding, could you help with the details of the deployment like:Based on the above example shared, I tried to insert the document in my local environment with version 6.0.3.I tried to create 2dsphere index using the below command:test> db.sample.createIndex( { ‘location.geometry’: ‘2dsphere’})\nlocation.geometry_2dsphereand further tried to insert the example data you posted using InsertOne command and the data was inserted successfully.is there a way I can flag these errors before I try to bulk upload them to our database?There is no direct support from MongoDB for the above ask. But the other third party links like GeoJSON Viewer and Validator could be helpful for your case.Please note that, the above linked tool is not from MongoDB, therefore we cannot guarantee the tool’s correct validation for the GeoJSON in all cases.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
}
] | Errors when uploading geospatial data (polygons) | 2023-02-13T08:36:16.383Z | Errors when uploading geospatial data (polygons) | 665 |
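The server rejects polygons whose rings are unclosed, degenerate, or self-intersecting (“Edges ... cross”). A complete self-intersection test needs a geometry library, but a cheap pre-flight pass such as the sketch below (closed ring, minimum vertex count, no repeated consecutive points) catches many offenders before a bulk load. It is an illustrative helper, not an exhaustive validator for everything a 2dsphere index may reject.

```ts
type Position = [number, number];

interface PolygonGeometry {
  type: "Polygon";
  coordinates: Position[][];
}

// Lightweight sanity checks for a GeoJSON polygon before inserting into a
// collection with a 2dsphere index. This does NOT detect every case the
// server can reject (e.g. non-adjacent edge crossings), but it is cheap
// enough to run over a whole batch first.
function quickPolygonProblems(geom: PolygonGeometry): string[] {
  const problems: string[] = [];
  for (const ring of geom.coordinates) {
    if (ring.length < 4) {
      problems.push("ring has fewer than 4 positions");
      continue;
    }
    const [fx, fy] = ring[0];
    const [lx, ly] = ring[ring.length - 1];
    if (fx !== lx || fy !== ly) {
      problems.push("ring is not closed (first and last positions differ)");
    }
    for (let i = 1; i < ring.length; i++) {
      const [ax, ay] = ring[i - 1];
      const [bx, by] = ring[i];
      if (ax === bx && ay === by) {
        problems.push(`duplicate consecutive position at index ${i}`);
      }
    }
  }
  return problems;
}

// Usage: filter out features that report problems before calling insertMany.
```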
null | [] | [
{
"code": "",
"text": "Hi,\nI’m running MongoDB locally. According to my calculations, I should have max 300\n“current” connections to the db. But I see that I have north of 1.6k active “current” connections.\nWhat can be the reason for that?Thanks all,\nMoshe",
"username": "Moshe_G"
},
{
"code": "300secs_runningdb.currentOp()activetrue",
"text": "Hello @Moshe_G ,To understand your setup and use-case better, could you please provide more details, such as:I should have max 300 “current” connections to the db.I have north of 1.6k active “current” connections.There could be many reasons why you may be seeing high number of active connections than expected. Below are a few possibilities:A connection pool is a cache of open, ready-to-use database connections maintained by the driver. Your application can seamlessly get connections from the pool, perform operations, and return connections back to the pool. Connection pools are thread-safe. This can lead to more active connections but should not be an issue if sufficient hardware resources are available. This is because a single client can have multiple connections open at the same time. You can check if this is the case by looking at the number of connections with the help of output of the db.currentOp() command.In case your application uses long-running queries, they can keep connections open for a longer time, which can lead to more active connections. You can check if this is the case by looking at the secs_running field in the output of the db.currentOp() command, which shows the duration of the operation in seconds. MongoDB calculates this value by subtracting the current time from the start time of the operation. Only appears if the operation is running; i.e. if active is true.If your application has recently experienced an increase in traffic, it could be that your application is creating more connections to handle the increased load. Consider scaling up your hardware resources to handle the increased workload and smooth working of database.I would recommend you go through below links to learn more about connections in MongoDB and how you can monitor and tune self managed installations as per your application’s workload.Learn how to monitor a MongoDB instance and which metrics you should consider to optimize performance.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi @Tarun_Gaur Thank you,\nThe Version is 5.0 and its a stand-alone deployment.\nI use PyMongo 4.1.1 as a Driver.\nI have max 300 different physical computers,\neach one with thereon NIC,\nthat runs a python code that uses the same PyMongo client per machine.\nI see on the Performance tab on the Compass GUI that I can get north of 1.6 k connections.\nThank you\nMoshe",
"username": "Moshe_G"
},
{
"code": "waitQueueTimeoutMSConnectionFailurewaitQueueTimeoutMS",
"text": "Please refer to below Problem-Solution from Tuning Connection Pool settings:I have max 300 different physical computers,\neach one with thereon NIC,\nthat runs a python code that uses the same PyMongo client per machine.I use PyMongo 4.1.1 as a Driver.max_pool_size : The maximum allowable number of concurrent connections to each connected server. Requests to a server will block if there are maxPoolSize outstanding connections to the requested server. Defaults to 100. Cannot be 0. When a server’s pool has reached max_pool_size, operations for that server block waiting for a socket to be returned to the pool. If waitQueueTimeoutMS is set, a blocked operation will raise ConnectionFailure after a timeout. By default waitQueueTimeoutMS is not set.min_pool_size : The minimum required number of concurrent connections that the pool will maintain to each connected server. Default is 0.so, 300 clients * (max connection pool size default which is 100) == 30,000 connections max possible.The connection pool is automatically managed by pymongo itself, so it will create connections as required by the workload.As per documentation on Connection PoolA connection pool is a cache of open, ready-to-use database connections maintained by the driver. Your application can seamlessly get connections from the pool, perform operations, and return connections back to the pool. Connection pools are thread-safe.A connection pool helps reduce application latency and the number of times new connections are created. A connection pool creates connections at startup. Applications do not need to manually return connections to the pool. Instead, connections return to the pool automatically. Some connections are active and some are inactive but available. If your application requests a connection and there’s an available connection in the pool, a new connection does not need to be created.Note: To handle the increase in workload, sufficient hardware resources are required by the database instance also.",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Number of current connections to MongoDB | 2023-02-20T09:41:05.506Z | Number of current connections to MongoDB | 6,961 |
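The thread above is about PyMongo, but the arithmetic is the same in every driver: total connections is roughly the number of client processes multiplied by the pool size. Below is a sketch of the equivalent setup with the Node.js driver (kept in TypeScript for consistency with the other examples); the host name and pool figures are illustrative.

```ts
import { MongoClient } from "mongodb";

// One client per process, reused everywhere. Each client maintains its own
// pool, so total connections ~= processes x maxPoolSize (plus monitoring
// connections). With 300 machines, maxPoolSize: 5 caps the server at roughly
// 1,500 application connections instead of the 30,000 that the default pool
// size of 100 would allow.
const client = new MongoClient("mongodb://db-host.example.com:27017", {
  maxPoolSize: 5,
  minPoolSize: 0,
  waitQueueTimeoutMS: 10_000, // fail fast instead of queueing forever
});

export async function getDb() {
  // connect() is safe to call repeatedly; callers can share this client freely.
  await client.connect();
  return client.db("app");
}
```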
null | [
"queries",
"replication"
] | [
{
"code": "",
"text": "enable security in replication all node in recovering mode how reslove to normal mode in mongodb in windows server 2016",
"username": "Hemanth_Perepi_1"
},
{
"code": "RECOVERINGdb.adminCommand({getCmdLineOpts:1})mongod",
"text": "Hello @Hemanth_Perepi_1 ,enable security in replication all node in recovering mode how reslove to normal mode in mongodb in windows server 2016To understand your use case better, could you please share more details such as:Each member of a replica set has a state. To learn about different states of replica set please referRegards,\nTarun",
"username": "Tarun_Gaur"
}
] | Enable security in replication all node in recovering mode how reslove to normal mode in mongodb in windows server 2016 | 2023-02-23T09:18:32.940Z | Enable security in replication all node in recovering mode how reslove to normal mode in mongodb in windows server 2016 | 708 |
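As a starting point for the diagnosis requested above, here is a small sketch that lists each member’s state so you can confirm whether nodes really are stuck in RECOVERING after authentication was enabled. It uses the Node.js driver’s command interface; rs.status() in mongosh returns the same document. The credentials and host name are placeholders.

```ts
import { MongoClient } from "mongodb";

// Connect directly to one member (directConnection) since the set is unhealthy.
const client = new MongoClient(
  "mongodb://admin:password@node1.example.com:27017/?directConnection=true&authSource=admin"
);

async function showMemberStates() {
  await client.connect();
  const status = await client.db("admin").command({ replSetGetStatus: 1 });
  for (const m of status.members) {
    // Healthy sets show one PRIMARY and the rest SECONDARY; members stuck in
    // RECOVERING usually cannot catch up with the primary's oplog.
    console.log(`${m.name}\t${m.stateStr}\thealth=${m.health}`);
  }
  await client.close();
}

showMemberStates().catch(console.error);
```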
null | [
"swift"
] | [
{
"code": " do {\n try realm.write {\n realm.delete(Task(ownerId: \"asdf\")) // Throws of course\n }\n } catch let error as NSError {\n print(error.localizedDescription)\n }\n",
"text": "I’m not able to catch any major exceptions inside of Realm. Is this expected behavior? I can’t think of one exception that has gotten caught that I can remember. Thankfully permission ones seem where you write to a synced realm in a way you don’t have permission for do work fine. Here’s a code example:In this code I’d expect the write to fail and an error get caught and printed. Instead the app crashes due to an uncaught exception.*** Terminating app due to uncaught exception ‘RLMException’, reason: ‘Can only delete an object from the Realm it belongs to.’What can I do to fix this?What kind of exceptions do get caught?Thanks\n-Jon",
"username": "Jonathan_Czeck"
},
{
"code": "NSErrorblockErrorTypelet data = [\"field0\": \"someId\", \"field1\": \"abcd\"]\nrealm.create(Person.self, value: data, update: .error)\n",
"text": "Super great question and they answer may not be so obvious, but here’s an attempt at a high level answer.The Do-Catch syntax is designed to capture errors during program execution and help recover from them - and is really a Swift implementation.What’s happening here is a programmer error - attempting to delete a non-managed object from a managed Realm is a pre-condition error and really, not recoverable. Attempting it again will produce the same error. e.g. it’s an ObjC exception due to a coding error, so fix your codeThat being said the documentation (IMO) is pretty weak/vague on this; realm.write says things like:and trying to begin a write transaction on a Realm which is already in a write transaction will throw an exception.andAn NSError if the transaction could not be completed successfully. If block throws, the function throws the propagated ErrorType instead.But it’s not clear under what conditions it will throw (pretty much none) or that its “throwing” is not what many of us consider “throwing” in a Swfit context.So, in a nutshell, Realm doesn’t validate the quality of the data, just that its data.That can get you into potential trouble: Given a Person has one property “name”; trying to populate it with “data” containing “field0” and “field1” is a valid call but what’s contained in “data” is notBut, that will not “throw”, it will crash, so the code itself needs to be fixed. And really though, let’s do type safe calls as much as possible instead of relying on quoted strings.So, in a nutshell, Do-Catch doesn’t Do-Catch with Realm.",
"username": "Jay"
},
{
"code": "",
"text": "Thanks much, Jay, that makes sense.In the end I just want to react to errors correctly, so it would be helpful for the docs to list out what exceptions I should expect to catch and respond to. I’d like to get it right. ",
"username": "Jonathan_Czeck"
}
] | Impossible to catch Realm exceptions? | 2023-02-27T21:04:27.750Z | Impossible to catch Realm exceptions? | 970 |
[
"sharding",
"atlas-cluster"
] | [
{
"code": "",
"text": "Hi community, in the morning my cluster auto scale from a M10 to a M20, but one of my shards cannot start, and i see the next legend in the portal\nimage1673×116 6.07 KB\ni been like this for at less 3 or 4 hours, i don`t kwon if i have to preparate to change the cluster region or is just some temporal, has anyone else experienced something like this?",
"username": "Ricardo_Flores_Ramos"
},
{
"code": "",
"text": "Hey @Ricardo_Flores_Ramos,Could you contact the Atlas in-app chat support regarding this one? Please be sure to advise them of which cluster is receiving this message when trying to auto-scale to M20.Regards,\nJason",
"username": "Jason_Tran"
}
] | Error in auto scale process | 2023-02-28T23:14:33.602Z | Error in auto scale process | 642 |
|
null | [
"node-js",
"react-native",
"typescript"
] | [
{
"code": "models/index.tsimport { Task, Vote } from './tasks';\n\n// Create schema with ALL types, including types for embedded objects\nconst ALL_SCHEMA = [\n Task,\n Vote\n]\n\nexport const TaskRealmContext = createRealmContext({\n schema: ALL_SCHEMA\n});\nmodels/task.tsimport { Realm } from '@realm/react';\n\n// Embedded object I intend to put in arrays\nexport class Vote extends Realm.Object<Vote> {\n static embedded: boolean = true;\n userId?: string;\n priority: number;\n createdAt?: Date = new Date();\n}\n\n// Object intended to be Document\nexport class Task extends Realm.Object<Task> {\n _id: Realm.BSON.UUID = new Realm.BSON.UUID();\n body!: string;\n createdAt: Date = new Date();\n userId?: string\n editedAt?: Date;\n votes?: Vote[];\n test?: string[];\n\n static primaryKey = '_id';\n\n constructor(realm: Realm, {\n _id,\n body, \n userId,\n \n }: any) {\n const newTask: any = {\n _id,\n userId,\n body,\n }\n super(realm, newTask);\n }\n}\nTaskrealm.write(() => {\n const testTask: any= {\n body: 'This is a test',\n }\n // This successfully creates a new Task\n const newTask: Task = realm.create('Task', testTask);\n\n // This works and correctly edits the task\n newTask.body = 'This really is a test'\n\n // These following arrays do not result in anything\n newTask.votes = [] \n\n // At first I thought the issue was an array of objects, \n // but even arrays of strings do not get saved\n newTask.test = ['This', 'doesnt', 'do', 'anything']\n})\nconsole.log(newTaskrealm.write{\n \"_id\": \"some_random_uuid\",\n \"body\": \"This really is a test\",\n \"votes\": [],\n \"test\": ['This', 'doesnt', 'do', 'anything']\n}\nTaskconsole.log{\n \"_id\": \"some_random_uuid\",\n \"body\": \"This really is a test\",\n}\nrealm.writeArray<Vote>Exception in HostFunction: Failed to read Realm object constructor must of a 'schema property.'Vote[]An element access expression should take an argumentthingy[]",
"text": "Hello all, I’m having an issue where arrays refuse to save in my Realm database documents. I am using Realm with React Native, TypeScript.Consider this case:In models/index.ts:In models/task.ts:In some file where I create a Task:If I console.log(newTask) at the very end of the realm.write operation, I see the following result:However, if I query Realm for the same Task and console.log it, I see the following:In other words, it appears there is a silent failure in which all of the object manipulations appear to work within the realm.write operation, but once the object finally gets written to the database, the arrays do not get saved in the resultant document.I’ve tried a few things:I can get embedded objects to work, but for my current use case I desperately need an array. But my arrays are just refusing to get saved into my Realm database documents.Thus far, I have:Am I missing anything?Help!",
"username": "Alexander_Ye"
},
{
"code": "votes?: Vote[];\nvotes: Realm.List<Vote>\nnullabletasksType 'undefined[]' is missing the following properties from type \n'List<MessageReaction>': type, optional, toJSON, description, and 12 more.\n",
"text": "Okay so I have somewhat of a solution:In the schema, instead of:I have:The issues here are:But it works. It freakin works. So I’m not gonna complain and will take the red TypeScript underlines in stride.",
"username": "Alexander_Ye"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | [Realm React Native] Arrays Don't Save in Realm Database Documents | 2023-02-28T21:07:33.736Z | [Realm React Native] Arrays Don’t Save in Realm Database Documents | 1,524 |
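Below is a condensed sketch of the working schema described in the solution post, with the collection-valued fields declared as Realm.List rather than plain arrays. Class and property names mirror the original post; the non-null assertions are simply one way to quiet the TypeScript errors mentioned there, not an official recommendation.

```ts
import { Realm } from "@realm/react";

export class Vote extends Realm.Object<Vote> {
  static embedded = true;
  userId?: string;
  priority!: number;
  createdAt?: Date = new Date();
}

export class Task extends Realm.Object<Task> {
  _id: Realm.BSON.UUID = new Realm.BSON.UUID();
  body!: string;
  createdAt: Date = new Date();
  userId?: string;
  editedAt?: Date;
  // Declaring collections as Realm.List (not Vote[] / string[]) is what makes
  // writes to them persist; Realm manages the list instances itself, so you
  // append with push() inside a write transaction rather than assigning arrays.
  votes!: Realm.List<Vote>;
  test!: Realm.List<string>;

  static primaryKey = "_id";
}
```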
null | [
"queries",
"node-js",
"compass",
"mongodb-shell",
"typescript"
] | [
{
"code": "import * as mongoDB from \"mongodb\";\nimport { MongoClient } from 'mongodb';\nconst _ = require('lodash');\nconst fs=require('fs');\n\nexport async function connectToDatabase() {\n const collections = [ 'a', 'b', 'c' ];\n \n \n const client = new mongoDB.MongoClient(\"mongodb://127.0.0.1:27017/\");\n\n await client.connect();\n\n const databasesList = await client.db().admin().listDatabases();\n console.log(\"Databases:\");\n\n const poDb = _.find(databasesList.databases, function(db : any) { return db.name === 'mytestdb'; });\n if (poDb === undefined) {\n console.log('mytestdb database not found');\n await client.db(\"mytestdb\").createCollection(\"testcoll\");\n } else {\n console.log(` - ${poDb.name}`);\n }\n \n const dbObject = client.db('mytestdb'); \n\n _.forEach(collections,async function(value : string) { \n const collObject = await getCollectionObject(dbObject, value);\n\n const datafile = './initfiles/' + value + '.json';\n //console.log(`datafile ${datafile}`);\n\n const data= fs.readFileSync(datafile, 'utf8'); \n const inputData = mongoDB.BSON.deserialize(data); \n await collObject.insertMany(_.toArray(inputData));\n console.log(`loaded data into collection ${value}`); \n });\n}\n\nasync function getCollectionObject(dbObject: any, collectionName: string) {\n\n const collObject = dbObject.collection(collectionName);\n if (collObject === undefined || collObject.collectionName !== collectionName) {\n console.log(`Create Collection - ${collectionName}`);\n return await dbObject.createCollection(collectionName);\n } else {\n console.log(`Collection exists - ${collObject.collectionName}`);\n return collObject;\n }\n}\n",
"text": "Trying to create collection/database with my typescript. The API succeeds but I don’t see the new collections in compass or through mongosh. Where am I messing up?Sample code:",
"username": "Dev_Engine"
},
{
"code": "",
"text": "it looks like until a record is added, it doesn’t show up in the list whereas that is not the case when you do it from the compass.Hoping an expert can answer why that is the case.",
"username": "Dev_Engine"
}
] | Typescript - create db, collection and add json data to the collection | 2023-02-28T18:07:11.868Z | Typescript - create db, collection and add json data to the collection | 1,240 |
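A hedged sketch of the behaviour discussed above: db.collection() is only a lazy, client-side handle and never contacts the server, and a collection (and its database) only becomes visible to Compass or listDatabases once it has been created explicitly or has received its first write. The names below are placeholders.

```ts
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://127.0.0.1:27017");

async function main() {
  await client.connect();
  const db = client.db("mytestdb");

  // Explicit creation registers the collection on the server immediately...
  await db.createCollection("a");

  // ...while this one only comes into existence when the first write lands.
  await db.collection("b").insertOne({ seededAt: new Date() });

  // Verify what the server actually knows about, rather than trusting the
  // in-memory Collection handles returned by db.collection().
  const names = (await db.listCollections().toArray()).map((c) => c.name);
  console.log(names); // [ 'a', 'b' ]

  await client.close();
}

main().catch(console.error);
```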
[
"golang"
] | [
{
"code": "`{\n\t\"bsonType\": \"object\",\n\t\"encryptMetadata\": {\n\t\t\"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\",\n\t\t\"keyId\": \"/mykey\"\n\t},\n\t\"properties\": {\n\t\t\"config\": {\n\t\t\t\"encrypt\": {\n\t\t\t\t\"bsonType\": \"object\"\n\t\t\t}\n\t\t}\n\t}\n}`\n mongocryptd communication error: (Location51114) keyId pointer '/mykey' must point to a field that exists\n",
"text": "I’ve been trying to setup mongo client-side encryption (cse) in a golang application I am building. I’ve been following these guides to help me get started:Learn how to encrypt document fields client-side in Go with MongoDB client-side field level encryption (CSFLE).master/goMongoDB Client-Side Field Level Encryption Driver GuidesI’ve managed to create a key and write it to the KeyVaultNamespace. It shows up in mongodb under the key vault collection, and I can see that it has an altname of ‘mykey’. But with the following cse validation schema:I get errors like this:What am I doing wrong? I’ve followed the sample code above and have read through the official guide multiple times at this point. Any assistance would be helpful.",
"username": "Jonathan_Whitaker1"
},
{
"code": "",
"text": "Hey there, did you managed to get it running? I got the same error on my setup.",
"username": "Jakob_Blume"
},
{
"code": "",
"text": "the inserted document should have a field mykey with value containing the alt_name of DEK.",
"username": "Aakash_Bajaj"
}
] | Golang Client-side Encryption With Alt Key | 2021-12-17T16:04:59.707Z | Golang Client-side Encryption With Alt Key | 3,059 |
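To illustrate the last reply: with a JSON-pointer keyId such as "/mykey", the driver resolves the data key per document, so every document you insert must itself carry a mykey field whose value is a data key’s keyAltName (or its UUID). Below is a hedged sketch with the Node.js driver, assuming an auto-encrypting client has already been configured with the schema shown in the question.

```ts
import { MongoClient } from "mongodb";

// encryptedClient is assumed to be a MongoClient constructed with
// autoEncryption: { keyVaultNamespace, kmsProviders, schemaMap } matching
// the schema in the question (keyId pointer "/mykey").
declare const encryptedClient: MongoClient;

async function insertConfig() {
  const coll = encryptedClient.db("app").collection("devices");

  await coll.insertOne({
    // The "/mykey" pointer is resolved against this document, so the field
    // must exist and hold the data key's keyAltName (a string) or its UUID.
    mykey: "mykey",
    // This is the field the schema marks for encryption.
    config: { apiToken: "secret-value", retries: 3 },
  });
}
```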
|
null | [
"node-js",
"mongoose-odm",
"react-native"
] | [
{
"code": "",
"text": "I have a collection in which documents are written online when from our nodeJS backend using mongoose. In order to support offline mode, I am using realm to create documents locally in the react native app and store in realm which in turn syncs it with the Atlas database. But, now I want to remove created document from the local realm database as it is no longer of use. Also, I want to restrict the documents created from the backend to be written to the sync as it will unnecessarily increase my storage consumption.I have two solutions for the above:",
"username": "Aditya_Rathore"
},
{
"code": "",
"text": "I am not a node-js guy but I think the question needs some clarity as the expected outcome is unclear.The question states you have synced situation - data is written locally and synced to the server.You want to delete local data but then not delete it on the server? That goes against a synced database because they would then be out of sync.Then you’re wanting to create documents on the server but not have then sync locally - again, that would not be a synced database.Can you clarify your desired outcome a bit?",
"username": "Jay"
}
] | How to remove objects from local realm database after they have been written to the atlas database? | 2023-02-27T07:17:40.831Z | How to remove objects from local realm database after they have been written to the atlas database? | 1,085 |
null | [] | [
{
"code": "",
"text": "Hi Hanako, pleasure to meet you. We were looking at integrations between mongodb and Google bigquery. Do you have any experience in this?",
"username": "Parsa_Riahi"
},
{
"code": "",
"text": "Hi Parsa! Great question. Here’s a video that can help with that! Better Together: MongoDB Atlas + Google BigQuery Real-time Bi-directional Sync - YouTubePlease let me know if you have any follow up questions!-Hanako",
"username": "Hanako_Tonozuka"
},
{
"code": "",
"text": "Can you share the source code links of this demo?",
"username": "Srinivasan_Subramanian1"
},
{
"code": "",
"text": "Hi Srinivasan,\nYou can find the source codes for the demo here: https://github.com/snarvaez/Atlas-BQ__ODS-EDWPlease read the instructions and notes carefully.",
"username": "Hanako_Tonozuka"
},
{
"code": "",
"text": "Hi,We are a startup, just a few months old and just joined here (streamkap.com).We’ve built a real-time sync from Mongo to BigQuery if you’re interested in trialling it - it’s production ready.",
"username": "Ricky_Thomas"
},
{
"code": "",
"text": "airbyte is a solution I am testing for my startup, all seem to be going well now for past few days",
"username": "Hammed_bb"
},
{
"code": "",
"text": "We were looking at integrations between mongodb and Google bigquery. Do you have any experience in this?Hi Parsa,\nTo integrate MongoDB with Google BigQuery, there are a few options available:Overall, the choice of integration method will depend on your specific use case and requirements. It’s recommended to review the pros and cons of each option before deciding which method to use.Also you can contact us in case you need more info, just go to Softkit company website.",
"username": "Julia_Samofal"
}
] | Integrations between mongodb and Google bigquery | 2022-04-04T23:38:01.662Z | Integrations between mongodb and Google bigquery | 9,099 |
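Beyond the managed options listed above, the simplest roll-your-own real-time sync is a change stream consumer that forwards inserts and updates into a BigQuery table. The sketch below assumes the @google-cloud/bigquery client and a pre-created dataset and table, and it ignores batching, schema drift, and resume tokens that a production pipeline (or the tools mentioned earlier) would have to handle.

```ts
import { MongoClient } from "mongodb";
import { BigQuery } from "@google-cloud/bigquery";

// Placeholder connection strings and namespaces.
const mongo = new MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net");
const bigquery = new BigQuery(); // credentials via GOOGLE_APPLICATION_CREDENTIALS

async function run() {
  await mongo.connect();
  const orders = mongo.db("shop").collection("orders");
  const table = bigquery.dataset("shop").table("orders");

  // fullDocument: "updateLookup" returns the post-image for updates as well.
  const stream = orders.watch([], { fullDocument: "updateLookup" });

  for await (const change of stream) {
    if (change.operationType === "insert" || change.operationType === "update") {
      const doc = change.fullDocument!;
      // Streaming insert; a real pipeline would batch and deduplicate rows.
      await table.insert([{ ...doc, _id: String(doc._id) }]);
    }
  }
}

run().catch(console.error);
```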
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.2.24-rc2 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.23. The next stable release 4.2.24 will be a recommended upgrade for all 4.2 users.\nFixed in this release:",
"username": "James_Hippler"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.2.24-rc2 is released | 2023-02-23T03:50:09.487Z | MongoDB 4.2.24-rc2 is released | 1,096 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.4.19 is out and is ready for production deployment. This release contains only fixes since 4.4.18, and is a recommended upgrade for all 4.4 users.\nFixed in this release:",
"username": "James_Hippler"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.4.19 is released | 2023-02-28T17:12:02.560Z | MongoDB 4.4.19 is released | 1,315 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.2.24 is out and is ready for production deployment. This release contains only fixes since 4.2.23, and is a recommended upgrade for all 4.2 users.\nFixed in this release:",
"username": "James_Hippler"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.2.24 is released | 2023-02-28T16:54:46.839Z | MongoDB 4.2.24 is released | 1,513 |
null | [
"replication",
"kubernetes-operator"
] | [
{
"code": "",
"text": "I have successfully deployed a standalone and replica set instance on kubernetes. I have looked in the documentation for deployments and don’t see any information on adding ingresses so you can connect via an external network. Does anyone have any experience with this?",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "We have a few customers using Ingress, I spoke to one recently who’s using HA Proxy to good effect to use an internal CA for TLS within the cluster (which they’ve had to do as their main CA can’t sign cluster.local names) and then use their normal CA for an external name. HA Proxy is then terminating the TLS and reestablishing it on the internal side.",
"username": "Dan_Mckean"
},
{
"code": "",
"text": "Hi Dan,\nThanks for the reply I’ll look into HA Proxy. You said only a few customers use ingress, what do the other customers use to connect from outside the kubernetes cluster?",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Some use load balancers, others use nodeport. I can’t really say what the split is, but I’ve just today (totally by chance) had a chat around all this with one of our engineers and we’re in favour of using ingress. Done well it saves using nodeport to expose all the individual replica set members one by one, and it saves having N load balancer external IPs for each of the N replica set members. You can just route to the right one using SNI.That’s the guidance we’re likely to document at some point, but we’ll likely not get to that for a while due to other priorities.",
"username": "Dan_Mckean"
}
] | MongoDB Standalone Deployment on Kubernetes add ingress | 2023-02-27T21:51:09.774Z | MongoDB Standalone Deployment on Kubernetes add ingress | 1,039 |
null | [
"replication"
] | [
{
"code": "{\"t\":{\"$date\":\"2022-04-07T09:44:21.765+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55534\",\"uuid\":\"970ac484-45f4-4bfd-a1cc-4e59d33bbf53\",\"connectionId\":1610,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:21.765+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55536\",\"uuid\":\"b5b6b395-bba8-4b43-bb7d-75e6d5d44b03\",\"connectionId\":1611,\"connectionCount\":346}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:21.775+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1610\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55534\",\"uuid\":\"970ac484-45f4-4bfd-a1cc-4e59d33bbf53\",\"connectionId\":1610,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:21.776+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1611\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55536\",\"uuid\":\"b5b6b395-bba8-4b43-bb7d-75e6d5d44b03\",\"connectionId\":1611,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:22.265+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55540\",\"uuid\":\"837b55aa-4989-492e-93c4-f2945f33dd88\",\"connectionId\":1612,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:22.275+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1612\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55540\",\"uuid\":\"837b55aa-4989-492e-93c4-f2945f33dd88\",\"connectionId\":1612,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:22.766+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55542\",\"uuid\":\"ed0c5ad6-4e7f-4a4a-a892-7084a1c3b8b2\",\"connectionId\":1613,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:22.776+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1613\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55542\",\"uuid\":\"ed0c5ad6-4e7f-4a4a-a892-7084a1c3b8b2\",\"connectionId\":1613,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:23.265+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55544\",\"uuid\":\"e55da0f7-de29-4f79-9bbe-616606642263\",\"connectionId\":1614,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:23.276+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1614\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55544\",\"uuid\":\"e55da0f7-de29-4f79-9bbe-616606642263\",\"connectionId\":1614,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:23.766+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55546\",\"uuid\":\"2c4eff77-feb9-4291-a2b9-f8af4fd638f7\",\"connectionId\":1615,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:23.776+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1615\",\"msg\":\"Connection 
ended\",\"attr\":{\"remote\":\"127.0.0.1:55546\",\"uuid\":\"2c4eff77-feb9-4291-a2b9-f8af4fd638f7\",\"connectionId\":1615,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:24.265+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55548\",\"uuid\":\"d200a42f-f191-4d70-a31e-b675d0ee9741\",\"connectionId\":1616,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:24.275+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1616\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55548\",\"uuid\":\"d200a42f-f191-4d70-a31e-b675d0ee9741\",\"connectionId\":1616,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:36.766+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55550\",\"uuid\":\"bb55f2a2-eb52-4480-bd83-f60c1b4945b6\",\"connectionId\":1617,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:36.766+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55552\",\"uuid\":\"a9e7f258-6262-4289-8357-945599d225f2\",\"connectionId\":1618,\"connectionCount\":346}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:36.781+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1617\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55550\",\"uuid\":\"bb55f2a2-eb52-4480-bd83-f60c1b4945b6\",\"connectionId\":1617,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:36.781+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1618\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55552\",\"uuid\":\"a9e7f258-6262-4289-8357-945599d225f2\",\"connectionId\":1618,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:37.266+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55554\",\"uuid\":\"367f2901-cca2-4aca-9465-9cfb2bbf6785\",\"connectionId\":1619,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:37.275+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1619\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55554\",\"uuid\":\"367f2901-cca2-4aca-9465-9cfb2bbf6785\",\"connectionId\":1619,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:37.766+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55556\",\"uuid\":\"3dc4b466-5064-4e58-bab3-8ad00a2c878a\",\"connectionId\":1620,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:37.777+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1620\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55556\",\"uuid\":\"3dc4b466-5064-4e58-bab3-8ad00a2c878a\",\"connectionId\":1620,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:38.266+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55558\",\"uuid\":\"23a5d6eb-7c9b-4979-81a2-982969a8e5a5\",\"connectionId\":1621,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:38.277+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1621\",\"msg\":\"Connection 
ended\",\"attr\":{\"remote\":\"127.0.0.1:55558\",\"uuid\":\"23a5d6eb-7c9b-4979-81a2-982969a8e5a5\",\"connectionId\":1621,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:38.767+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55560\",\"uuid\":\"6fbf7f97-919f-4963-a43b-20a3f1f107e3\",\"connectionId\":1622,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:38.777+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1622\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55560\",\"uuid\":\"6fbf7f97-919f-4963-a43b-20a3f1f107e3\",\"connectionId\":1622,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:39.266+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55562\",\"uuid\":\"2916a03e-b693-43a9-93e3-735ced3d5088\",\"connectionId\":1623,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:39.276+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1623\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55562\",\"uuid\":\"2916a03e-b693-43a9-93e3-735ced3d5088\",\"connectionId\":1623,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:51.766+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55564\",\"uuid\":\"09ae77fc-0d26-44d9-ba17-7ee698decd67\",\"connectionId\":1624,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:51.767+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55566\",\"uuid\":\"44222b4d-2a1c-4bb1-a3e7-73c7f66e22af\",\"connectionId\":1625,\"connectionCount\":346}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:51.777+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1625\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55566\",\"uuid\":\"44222b4d-2a1c-4bb1-a3e7-73c7f66e22af\",\"connectionId\":1625,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:51.777+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1624\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55564\",\"uuid\":\"09ae77fc-0d26-44d9-ba17-7ee698decd67\",\"connectionId\":1624,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:52.267+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55568\",\"uuid\":\"5ce2a50e-d8e6-4667-aa5b-bd344958b91a\",\"connectionId\":1626,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:52.277+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1626\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55568\",\"uuid\":\"5ce2a50e-d8e6-4667-aa5b-bd344958b91a\",\"connectionId\":1626,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:52.766+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55570\",\"uuid\":\"4cd60f8b-296e-4f09-b7e3-14967b78c40c\",\"connectionId\":1627,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:52.778+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1627\",\"msg\":\"Connection 
ended\",\"attr\":{\"remote\":\"127.0.0.1:55570\",\"uuid\":\"4cd60f8b-296e-4f09-b7e3-14967b78c40c\",\"connectionId\":1627,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:53.267+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55572\",\"uuid\":\"29606858-4902-4288-a7b1-b753f28271e4\",\"connectionId\":1628,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:53.277+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1628\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55572\",\"uuid\":\"29606858-4902-4288-a7b1-b753f28271e4\",\"connectionId\":1628,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:53.767+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55574\",\"uuid\":\"caed7d75-c257-4779-83c8-6f1b58aab4d5\",\"connectionId\":1629,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:53.777+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1629\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55574\",\"uuid\":\"caed7d75-c257-4779-83c8-6f1b58aab4d5\",\"connectionId\":1629,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:54.267+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55576\",\"uuid\":\"c2c61cdf-6d5b-4f15-b051-cc29026fb022\",\"connectionId\":1630,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:44:54.276+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1630\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55576\",\"uuid\":\"c2c61cdf-6d5b-4f15-b051-cc29026fb022\",\"connectionId\":1630,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:06.766+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55578\",\"uuid\":\"0c877b90-1cc6-4b18-9e27-01cc8821d57c\",\"connectionId\":1631,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:06.766+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55580\",\"uuid\":\"88a12d48-86ce-4b9c-bb5c-9c3a77b3f4e9\",\"connectionId\":1632,\"connectionCount\":346}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:06.776+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1631\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55578\",\"uuid\":\"0c877b90-1cc6-4b18-9e27-01cc8821d57c\",\"connectionId\":1631,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:06.778+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1632\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55580\",\"uuid\":\"88a12d48-86ce-4b9c-bb5c-9c3a77b3f4e9\",\"connectionId\":1632,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:07.266+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55582\",\"uuid\":\"968c2f0e-5ef8-4967-ab71-a017a165c40d\",\"connectionId\":1633,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:07.277+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1633\",\"msg\":\"Connection 
ended\",\"attr\":{\"remote\":\"127.0.0.1:55582\",\"uuid\":\"968c2f0e-5ef8-4967-ab71-a017a165c40d\",\"connectionId\":1633,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:07.766+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55584\",\"uuid\":\"ab47954c-d4c4-487d-a554-309d7bb07f75\",\"connectionId\":1634,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:07.776+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1634\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55584\",\"uuid\":\"ab47954c-d4c4-487d-a554-309d7bb07f75\",\"connectionId\":1634,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:08.266+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55586\",\"uuid\":\"e7b5a901-87d1-4409-9b17-b41dec3ef079\",\"connectionId\":1635,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:08.275+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1635\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55586\",\"uuid\":\"e7b5a901-87d1-4409-9b17-b41dec3ef079\",\"connectionId\":1635,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:08.766+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55588\",\"uuid\":\"a7eb3eb3-67ec-4f44-9540-56c1e4c8a8f6\",\"connectionId\":1636,\"connectionCount\":345}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:08.777+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1636\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:55588\",\"uuid\":\"a7eb3eb3-67ec-4f44-9540-56c1e4c8a8f6\",\"connectionId\":1636,\"connectionCount\":344}}\n{\"t\":{\"$date\":\"2022-04-07T09:45:09.266+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:55590\",\"uuid\":\"853d1b22-de94-4488-9e34-58eae3d3ccce\",\"connectionId\":1637,\"connectionCount\":345}}\n",
"text": "I’ve configured the verbosity of our 5 member replica set to level 0, but our log files keep filling with thousands of noisy “connection ended” and “connection accepted” messages. They appear perfectly innocent, but the sheer amount of these logs really makes no sense.Is it really normal & intentional for MongoDB to log all these near-identical log messages, since apparently there is no recommended method to disable them? Our old production cluster running 3.x doesn’t do this.Do these log messages indicate a problem in out MongoDB setup?",
"username": "ilari"
},
{
"code": "",
"text": "Forgot to mention, we’re running MongoDB 5.0.6.",
"username": "ilari"
},
{
"code": "",
"text": "Check your verbosity level from db.getLoglevel…Are they at default value or higher\nYou can set verbosity at component levels like query,storage,network etcYou cannot eliminate those messages.You need it for debugging\nSome suggest start mongod with quiet option but not recommended for prod",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Verbosity is set to 0, with all the components set to -1 (inherit, as I understood).",
"username": "ilari"
},
{
"code": "0",
"text": "I am having the same issue. Log files grow huge with Connection ended, Connection accepted messages. Running MongoDB 5.0.7 and logLevel is set to 0. I know I can rotate/delete logs, but wow this seems highly unnecessary and a risk of blowing up free space.",
"username": "Jon_Spewak"
},
{
"code": "",
"text": "Has someone ever found a solution to this issue? We’re having the same when running Mongo on a Kubernetes cluster.\nWith both Mongo 5.0.13 and 6.0.4.",
"username": "Acmeno"
},
{
"code": "mongod",
"text": "Hi @AcmenoI would not say that this is an issue per se, since connection accepted event is useful for debugging in many cases, and supressing these informational events may make troubleshooting a lot harder in production environment.There’s a discussion about exactly this in SERVER-18339. In particular, this comment on the ticket is of interest: in most cases, a connection open event is not expected to be very very frequent, unless you have a lot of clients, or there is minimal connection reuse in the application code. The latter actually merits further examination, since creating a connection is a relatively expensive event. If you have a small number of application but a huge number of connection open events, your app may experience a higher than expected latency due to the way it creates connections.All official drivers implements a connection pool that should keep this connection open message to a minimum in most cases.Having said that, you can also pass the --quiet parameter into mongod to supress at least some of these messages. However be aware that this switch also supress other messages that may be important for troubleshooting.Best regards\nKevin",
"username": "kevinadi"
},
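For anyone following the advice in this thread, here is a small sketch of inspecting and adjusting log component verbosity at runtime through the driver (db.getLogComponents() and db.setLogLevel() in mongosh are the equivalents). Note that the connection accepted/ended lines are emitted at verbosity 0, so lowering component verbosity does not remove them; only the quiet option mentioned above does, at the cost of other potentially useful messages.

```ts
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");

async function inspectLogSettings() {
  await client.connect();
  const admin = client.db("admin");

  // Current per-component verbosity (equivalent to db.getLogComponents()).
  const { logComponentVerbosity } = await admin.command({
    getParameter: 1,
    logComponentVerbosity: 1,
  });
  console.log("network verbosity:", logComponentVerbosity.network.verbosity);

  // Verbosity can be raised or lowered per component at runtime, but
  // "Connection accepted/ended" lines are level-0 messages and will still
  // be logged unless mongod runs with the quiet option.
  await admin.command({
    setParameter: 1,
    logComponentVerbosity: { network: { verbosity: 0 } },
  });

  await client.close();
}

inspectLogSettings().catch(console.error);
```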
{
"code": "{\"t\":{\"$date\":\"2023-02-21T06:38:40.818+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.818+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.820+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.822+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.822+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.822+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.822+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.822+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"iotb-logging-mongo-7766dc78-s8pfz\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.822+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":20720, \"ctx\":\"initandlisten\",\"msg\":\"Memory available to mongo process is less than total system memory\",\"attr\":{\"availableMemSizeMB\":512,\"systemMemSizeMB\":96501}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.822+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.4\",\"gitVersion\":\"44ff59461c1353638a71e710f385a566bcd2f547\",\"openSSLVersion\":\"OpenSSL 3.0.2 15 Mar 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2204\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.822+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"22.04\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.822+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"net\":{\"bindIp\":\"*\"}}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.831+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data 
files\",\"attr\":{\"dbpath\":\"/data/db\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.831+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-02-21T06:38:40.834+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=256M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.337+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":1503}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.337+00:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.354+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.354+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22178, \"ctx\":\"initandlisten\",\"msg\":\"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.354+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22181, \"ctx\":\"initandlisten\",\"msg\":\"/sys/kernel/mm/transparent_hugepage/defrag is 'always'. 
We suggest setting it to 'never'\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.354+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":5123300, \"ctx\":\"initandlisten\",\"msg\":\"vm.max_map_count is too low\",\"attr\":{\"currentValue\":262144,\"recommendedMinimum\":1677720,\"maxConns\":838860},\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.367+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":13,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":13,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.367+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"5.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.367+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.368+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.369+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.381+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.381+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.383+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27017.sock\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.383+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"0.0.0.0\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:42.383+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:50.041+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52108\",\"uuid\":\"1d3aeef2-de04-4f91-b22d-6f86f4d0b281\",\"connectionId\":1,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:50.041+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1\",\"msg\":\"client 
metadata\",\"attr\":{\"remote\":\"127.0.0.1:52108\",\"client\":\"conn1\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:50.042+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52110\",\"uuid\":\"a5985ed3-adf0-44aa-9eea-b1d88efd593f\",\"connectionId\":2,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:50.042+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn2\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52110\",\"client\":\"conn2\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:50.055+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn2\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52110\",\"uuid\":\"a5985ed3-adf0-44aa-9eea-b1d88efd593f\",\"connectionId\":2,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:50.055+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52108\",\"uuid\":\"1d3aeef2-de04-4f91-b22d-6f86f4d0b281\",\"connectionId\":1,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:51.040+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52178\",\"uuid\":\"9232fd3d-08c2-4d5d-bca4-fe8f53f91210\",\"connectionId\":3,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:51.040+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn3\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52178\",\"client\":\"conn3\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:51.041+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52180\",\"uuid\":\"cf3dc14f-edf0-4c73-af0a-b901ab49a5e5\",\"connectionId\":4,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:51.041+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn4\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52180\",\"client\":\"conn4\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:51.130+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn3\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52178\",\"uuid\":\"9232fd3d-08c2-4d5d-bca4-fe8f53f91210\",\"connectionId\":3,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:51.130+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn4\",\"msg\":\"Connection 
ended\",\"attr\":{\"remote\":\"127.0.0.1:52180\",\"uuid\":\"cf3dc14f-edf0-4c73-af0a-b901ab49a5e5\",\"connectionId\":4,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:52.038+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52260\",\"uuid\":\"96c8d2f8-aea7-4c5b-89b8-66301de1136a\",\"connectionId\":5,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:52.039+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn5\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52260\",\"client\":\"conn5\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:52.039+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52262\",\"uuid\":\"9823e586-35e0-4006-9fd6-e85322c533bc\",\"connectionId\":6,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:52.040+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn6\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52262\",\"client\":\"conn6\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:52.054+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn6\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52262\",\"uuid\":\"9823e586-35e0-4006-9fd6-e85322c533bc\",\"connectionId\":6,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:52.054+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52260\",\"uuid\":\"96c8d2f8-aea7-4c5b-89b8-66301de1136a\",\"connectionId\":5,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:53.049+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52350\",\"uuid\":\"59147541-b12a-4852-adab-bbb50ed18e9b\",\"connectionId\":7,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:53.049+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn7\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52350\",\"client\":\"conn7\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:53.050+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52352\",\"uuid\":\"cb8a3d41-10a0-4884-9f12-1ea9eddebb2b\",\"connectionId\":8,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:53.050+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn8\",\"msg\":\"client 
metadata\",\"attr\":{\"remote\":\"127.0.0.1:52352\",\"client\":\"conn8\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:53.135+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn7\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52350\",\"uuid\":\"59147541-b12a-4852-adab-bbb50ed18e9b\",\"connectionId\":7,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:53.135+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn8\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52352\",\"uuid\":\"cb8a3d41-10a0-4884-9f12-1ea9eddebb2b\",\"connectionId\":8,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:54.050+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52430\",\"uuid\":\"2486c0b8-c169-419d-b165-4dc848d49faa\",\"connectionId\":9,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:54.050+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn9\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52430\",\"client\":\"conn9\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:54.051+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52432\",\"uuid\":\"7185eef6-ca72-43da-b194-9c04afa2081f\",\"connectionId\":10,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:54.051+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn10\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52432\",\"client\":\"conn10\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:54.064+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn9\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52430\",\"uuid\":\"2486c0b8-c169-419d-b165-4dc848d49faa\",\"connectionId\":9,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:54.064+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn10\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52432\",\"uuid\":\"7185eef6-ca72-43da-b194-9c04afa2081f\",\"connectionId\":10,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:55.056+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52508\",\"uuid\":\"3e75c8e8-558a-4324-ac0c-d476466b0f98\",\"connectionId\":11,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:55.056+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn11\",\"msg\":\"client 
metadata\",\"attr\":{\"remote\":\"127.0.0.1:52508\",\"client\":\"conn11\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:55.057+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52510\",\"uuid\":\"daf94b33-bdae-4d83-b39e-c62785dfffdc\",\"connectionId\":12,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:55.057+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn12\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52510\",\"client\":\"conn12\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:55.227+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn12\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52510\",\"uuid\":\"daf94b33-bdae-4d83-b39e-c62785dfffdc\",\"connectionId\":12,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:55.228+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn11\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52508\",\"uuid\":\"3e75c8e8-558a-4324-ac0c-d476466b0f98\",\"connectionId\":11,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:56.059+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52588\",\"uuid\":\"5e439ed2-bca8-4155-a5f4-f9b89d38bc35\",\"connectionId\":13,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:56.059+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn13\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52588\",\"client\":\"conn13\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:56.060+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52590\",\"uuid\":\"8ad9e3f1-425b-480a-a7f0-0d96525512f9\",\"connectionId\":14,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:56.060+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn14\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52590\",\"client\":\"conn14\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:56.073+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn14\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52590\",\"uuid\":\"8ad9e3f1-425b-480a-a7f0-0d96525512f9\",\"connectionId\":14,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:56.073+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn13\",\"msg\":\"Connection 
ended\",\"attr\":{\"remote\":\"127.0.0.1:52588\",\"uuid\":\"5e439ed2-bca8-4155-a5f4-f9b89d38bc35\",\"connectionId\":13,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:57.129+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52908\",\"uuid\":\"cd0eeb09-1601-4c7b-b7b2-a2e054320710\",\"connectionId\":15,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:57.129+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn15\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52908\",\"client\":\"conn15\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:57.130+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:52920\",\"uuid\":\"82149646-04ef-4cbd-827a-432feb55ded3\",\"connectionId\":16,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:57.130+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn16\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:52920\",\"client\":\"conn16\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:57.151+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn16\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52920\",\"uuid\":\"82149646-04ef-4cbd-827a-432feb55ded3\",\"connectionId\":16,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:57.151+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn15\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:52908\",\"uuid\":\"cd0eeb09-1601-4c7b-b7b2-a2e054320710\",\"connectionId\":15,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:58.060+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:53024\",\"uuid\":\"d16d4b55-bec3-42a5-8290-5163d9346ab9\",\"connectionId\":17,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2023-02-21T06:38:58.060+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn17\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:53024\",\"client\":\"conn17\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Linux\",\"architecture\":\"x86_64\",\"version\":\"3.10.0-1160.53.1.el7.x86_64\"},\"platform\":\"CPython 3.8.13.final.0\"}}}\n",
"text": "Hi @kevinadiProbably I should have mentioned that I’m getting these messages 2-3 times per second without having any clients (at least not to my knowledge) as soon as Mongo gets up on Kubernetes:When spinning up the same container outside Kubernetes this behaviour does not appear. Therefore I’m talking about an issue - most probably on my end .These connects also appear without exposing the container externally. Is there some mechanism within the container which continuously connects to mongod?",
"username": "Acmeno"
},
{
"code": "{\"remote\":\"127.0.0.1:53024\",\"client\":\"conn17\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"}\n",
"text": "I’m getting these messages 2-3 times per second without having any clients (at least not to my knowledge) as soon as Mongo gets up on Kubernetes:I don’t think that’s what I see from the logs.The incoming connections all have this signature:This means that some client is connecting from localhost (perhaps inside the Docker image) using Pymongo, so it’s likely to be an app. From the log you posted, it creates 17 connections within the span of 8 seconds, so you might want to check any Python script you have, and see if it’s coded properly and not trying to flood the database with unnecessary connections.As you mentioned that you don’t see this when starting it without kubernetes, then I think it’s about how the kubernetes deployment is set up, and other things that runs within that environment.Having said that, I believe this demonstrated the merit of having all these connection log lines recorded. Otherwise we’ll have no idea that something is wrong Best regards\nKevin",
"username": "kevinadi"
},
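A related, hedged tip (not from this thread): if the clients are under your control, tagging each PyMongo client with an appname makes these log lines much easier to attribute, because the name shows up in the server's client metadata entry. The URI and names below are placeholders.

```python
from pymongo import MongoClient

# "appname" is reported in the server's "client metadata" log line, e.g.
# "application": {"name": "liveness-probe"}, so each connecting script is identifiable.
probe_client = MongoClient(
    "mongodb://localhost:27017",
    appname="liveness-probe",
    serverSelectionTimeoutMS=2000,
)

def is_alive() -> bool:
    try:
        probe_client.admin.command("ping")
        return True
    except Exception:
        return False
```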
{
"code": "",
"text": "I have no clue why, but the following day(s) the questionable connects did not appear anymore… Many thanks for your help.",
"username": "Acmeno"
}
] | A lot of noise in the logs - "Connection ended" - "Connection accepted" | 2022-04-07T09:52:01.460Z | A lot of noise in the logs - “Connection ended” - “Connection accepted” | 8,013 |
null | [
"queries",
"swift"
] | [
{
"code": "final class O: Object {\n @Persisted(primaryKey: true) var id: Int\n @Persisted var name: String\n}\nlet strings: [String]\nlet unfilteredObjects = realm.objects(O.self)\n\nlet objects = strings.isEmpty ? unfilteredObjects : unfilteredObjects.where { query in\n let queryComponents = strings.map { string in query.name.ends(with: string) }\n return queryComponents[1...].reduce(queryComponents[0]) { $0 || $1 }\n}\nstringsreducelet objects = realm.objects(O.self).where { query in\n strings.map { string in query.name.ends(with: string) }.reduce(Query.false) { $0 || $1 }\n}\n",
"text": "Given a model…and a dynamic list of stringsI want to query for matching objects with names that end in any of the strings.WIth the existing API, I can do that like this:This is a little ugly since I have to special-case the situation where strings is empty.What I’d prefer is to have a way to seed reduce with a “false” query:Questions:",
"username": "hershberger"
},
{
"code": " //the names we want ending with 'y' or l'\nlet endStringArray = [\"y\", \"l\"]\n\n//array to store the predicates for each letter\nvar predicateArray = [NSPredicate]() \n\n//iterate through the endStringArray getting each ending letter and creating a predicate\n// to retrieve matches for each letter\nfor endString in endStringArray { \n let p = NSPredicate(format: \"name ENDSWITH %@\", endString)\n predicateArray.append(p) //store each predicate in an array\n}\n\n//create a compound predicate of all of the above predicates\nlet compoundPredicate = NSCompoundPredicate(orPredicateWithSubpredicates: predicateArray)\n\n//do the query\nlet results = realm.objects(PersonClass.self).filter(compoundPredicate)\n\n//output the result\nfor person in results {\n print(person.name)\n}\nJay\nCindy\nCarl\nLarry\n",
"text": "I vote for #3Is there some completely different approach that is better overall?Suppose we have some PersonClass objects stored in Realm with the following namesJay\nCindy\nLarry\nCarl\nLindaand we want to retrieve only Jay, Cindy, Larry and Carl (names ending in ‘y’ or ‘l’). Here ya go using predicates and a compound predicateand the outputBest to avoid high-level Swift function calls (map, reduce etc) when possible to avoid overloading the memory will large datasets. The above code avoids those entirely and is lazy-loading safe(r)I am sure I can come up of a more type-safe version using modern API calls if needed.",
"username": "Jay"
},
{
"code": "",
"text": "Best to avoid high-level Swift function calls (map, reduce etc) when possible to avoid overloading the memory will large datasets.In my example, would map & reduce have this effect? From what I can tell, they’re just used to build up a query, not to actually transform the result objects themselves.I am sure I can come up of a more type-safe version using modern API calls if needed.Yes, we’re very interested in a type-safe version.",
"username": "hershberger"
},
{
"code": "let objects = realm.objects(O.self).where { query in\n strings.map { string in query.name.ends(with: string) }.reduce(Query.false) { $0 || $1 }\n}\n",
"text": "In my example, would map & reduce have this effect?Yes. One of the best things about using Realm to back your apps is that Realm Results are lazily-loaded. Meaning that HUGE datasets can be comfortably navigated without having to worry about overloading the devices memory.In a nutshell, if your app stores information about wine, when the user selects to retrieve every Cabernet Sauvignon (which is a LOT), Realm will just breeze through it and run that query and the results will easily contain those Cabernets.However. if you use Swift High Order functions, like map, reduce, filter, compactMap etc. to work with those results, that laziness goes out the window and they are ALL loaded in, blowing up the device because there would be too many results to store in memory at one time.So - your results are use case dependent. We use Swift High Order functions to massage our Realm data all the time - BUT, you have to go into it knowing the potential size of your data.I think a good general rule of thumb is if you know the rough size of your data, it’s safe to use Swift functions. Otherwise, stick with Realm functions to massage the data.",
"username": "Jay"
},
{
"code": "let strings: [String] = ...\nrealm.objects(O.self).where { query in\n let queryComponents = strings.map { string in query.name.ends(with: string) }\n return queryComponents[1...].reduce(queryComponents[0]) { $0 || $1 }\n}\nwhere",
"text": "My understanding is that the closure passed to where is only used to construct a predicate. From that perspective, it doesn’t seem like using map or reduce within that closure would be the same as using map or reduce on the Results.",
"username": "hershberger"
},
{
"code": "",
"text": "You are absolutely correct!I didn’t look closely enought at the code and saw the map and reduce but those are being used on the strings array and queryComponents so those will not affect Realm.So just curious - what’s the use care for the code? Are you filtering realm for strings that match strings in the array?",
"username": "Jay"
},
{
"code": "",
"text": "So just curious - what’s the use care for the code? Are you filtering realm for strings that match strings in the array?That’s correct! I want to query realm for objects with names that end in any of the strings in the array.",
"username": "hershberger"
}
] | Realm Swift Query API - True & False as initial values for dynamically constructed queries | 2023-02-23T15:41:45.747Z | Realm Swift Query API - True & False as initial values for dynamically constructed queries | 962 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "We are using Realm in Xamarin Forms app on Windows 10 and recently we have upgraded to latest version and replaced the RealmObject class with IRealmObject Interface and made correspodning changes.\nRealmObject class overrides the Equal method which is not available in the auto generated classes of the models which implements the IRealmObject interface.Is it completly removed or we can find it somewhere in the auto generated code using some setting or any method? or we have to override it in our relam classes and provide our own implementation?it was something like thispublic override bool Equals(object ob){\nif(obj is null)\n{\nreturn false;\n}\nif(ReferenceEqual(this,obj))\n{\nreturn true;\n}\nif(obj is InvalidObject)\n{\nreturn !IsValid;\n}\nif(obj is not IRealmObjectBase iro)\n{\nreturn false;\n}\nreturn _accessor.Equals(iro.Accessor);\n}Can we achieve this with IRealmObject implementation?",
"username": "Dharmendra_Kumar2"
},
{
"code": "Equals",
"text": "Hi @Dharmendra_Kumar2, the Equals method is also generated by the Source Generator unless you already defined it yourself, so you don’t need to add it yourself if you don’t want to.",
"username": "papafe"
},
{
"code": "",
"text": "Got it. I just removed it and got the implementation of the same. Thank You",
"username": "Dharmendra_Kumar2"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | IRealmObject in .Net SDK | 2023-02-28T10:27:08.931Z | IRealmObject in .Net SDK | 596 |
null | [
"golang",
"atlas-search",
"migration"
] | [
{
"code": "",
"text": "Is there a way to create atlas search index using mongoDB shell methods like createIndexes which are used to create regular indexes? I want to create/update/maintain the index using golang-migrate package which supports the mongoDB shell methods.",
"username": "Priyanka_Kurkure"
},
{
"code": "",
"text": "Hi @Priyanka_Kurkure , work is underway to support this. You can vote on the feedback item here to stay up-to-date about progress!In the meantime, you can use the Atlas REST API or Atlas CLI to create search indexes programatically:",
"username": "amyjian"
}
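For reference, a hedged Python sketch of the Atlas Administration API route mentioned above (a Go program or a migration step could make the same HTTP call); the project ID, cluster name, API key pair and index definition are placeholders, and the endpoint path follows the v1.0 Atlas Search index API as documented at the time.

```python
import requests
from requests.auth import HTTPDigestAuth

GROUP_ID = "<project-id>"          # Atlas project (group) ID
CLUSTER = "<cluster-name>"
PUBLIC_KEY, PRIVATE_KEY = "<public-key>", "<private-key>"  # programmatic API key

url = (
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}"
    f"/clusters/{CLUSTER}/fts/indexes"
)

# Minimal dynamic-mapping index definition; adjust namespace and mappings.
index_definition = {
    "database": "sample_mflix",
    "collectionName": "movies",
    "name": "default",
    "mappings": {"dynamic": True},
}

resp = requests.post(url, json=index_definition,
                     auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY))
resp.raise_for_status()
print(resp.json())
```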
] | Create Atlas search index via golang-migrate package | 2023-02-28T06:39:03.319Z | Create Atlas search index via golang-migrate package | 1,105 |
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "I want to send to store message to a specific partition. But I can’t understand how they configured output.shcema.key in this example posted on Git of a MongoDB specialist. Can anyone help me understand how he mentioned output.schema.key as animal_type? Thanks a lot in advance",
"username": "Nikhil_Moudgil"
},
{
"code": "",
"text": "Can you paste the link to which example you are referring?",
"username": "Robert_Walters"
},
{
"code": "#start the demo environment\n\n#define your atlas conection string (optional)\nexport ATLAS_CONNECTION=\"mongodb+srv://YOUR CONNECTION STRING HERE\"\n\n#Start the containers, skip the configuration so we can step through it and explain the parameters\nsh start-demo.sh $ATLAS_CONNECTION skip\n\n#If you do not have the kafka installed locally, download from https://www.apache.org/dyn/closer.cgi?path=/kafka\n#there will be a bin folder that includes various kafka- utilities\n#create a topic with 3 partitions\n~/kafka/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 3 --topic demo1-3.Demo.pets\n\n#configure source connector to read from Demo.pets\ncurl -X POST -H \"Content-Type: application/json\" --data '\n {\"name\": \"mongo-source-key\",\n \"config\": {\n \"tasks.max\":\"1\",\n \"connector.class\":\"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"output.json.formatter\":\"com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson\",\n",
"text": "Its your article sir ",
"username": "Nikhil_Moudgil"
},
{
"code": "",
"text": "output.schema.key is the avro representation (not JSON schema) of the message event.",
"username": "Robert_Walters"
}
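To make that concrete, the demo pairs output.format.key=schema with an Avro key schema whose field name points at a path in the change event (fullDocument.animal_type in the referenced example); the resulting record key is what Kafka's default partitioner hashes, so documents with the same animal_type land on the same partition. A hedged Python sketch that registers such a source connector through the Kafka Connect REST API; the Connect URL, connection string and namespace are placeholder assumptions.

```python
import json
import requests

CONNECT_URL = "http://localhost:8083/connectors"  # Kafka Connect REST endpoint

# Avro schema for the record key; the dotted field name selects a value
# from the change event's fullDocument.
key_schema = {
    "type": "record",
    "name": "keySchema",
    "fields": [{"name": "fullDocument.animal_type", "type": "string"}],
}

connector = {
    "name": "mongo-source-keyed",
    "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb://mongo1:27017",
        "database": "Demo",
        "collection": "pets",
        "output.format.key": "schema",
        "output.schema.key": json.dumps(key_schema),
    },
}

resp = requests.post(CONNECT_URL, json=connector)
resp.raise_for_status()
print(resp.json())
```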
] | Need help with output.schema.key | 2023-01-31T09:45:10.078Z | Need help with output.schema.key | 1,159 |
null | [
"spark-connector"
] | [
{
"code": "def basicAverage(df): \n return df.groupby(window(col('timestamp'), \"1 hour\", \"5 minutes\"), col('stationcode')) \\\n .agg(avg('mechanical').alias('avg_mechanical'), avg('ebike').alias('avg_ebike'), avg('numdocksavailable').alias('avg_numdocksavailable'))\nqueryBasicAvg.writeStream.format('mongodb').queryName(\"basicAvg\") \\\n .option(\"checkpointLocation\", \"./tmp/pyspark7/\").option(\"forceDeleteTempCheckpointLocation\", \"true\") \\\n .option('spark.mongodb.connection.uri', 'mongodb://127.0.0.1') \\\n .option(\"spark.mongodb.database\", 'velibprj').option(\"spark.mongodb.collection\", 'stationsBasicAvg') \\\n .outputMode(\"append\").start()\n",
"text": "Hey,Using Spark Structured Streaming I am trying to sink streaming data to a MongoDB collection. The issue is that I am querying my data using a window as following:And it seems that mongodb spark connector cannot support a writeStream containing windowed data because when I run the script, my collection remains empty and no error shows up. I tried to delete the window option on my query and the sink worked like a charm.Here is my sink method:Any thought on how to solve this issue ?Thanks in advance",
"username": "Thezndo"
},
{
"code": "",
"text": "I solved the issue by using foreach sink method.",
"username": "Thezndo"
},
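For readers hitting the same wall, a hedged PySpark sketch of that foreachBatch approach, reusing the option names and namespace from the question (the checkpoint path and output mode are assumptions): each micro-batch of the windowed aggregation is written with the batch MongoDB writer, which avoids the streaming-sink limitation described above.

```python
def write_to_mongo(batch_df, batch_id):
    # Plain batch write per micro-batch; "mongodb" is the Spark connector 10.x format.
    (batch_df.write.format("mongodb")
        .mode("append")
        .option("spark.mongodb.connection.uri", "mongodb://127.0.0.1")
        .option("spark.mongodb.database", "velibprj")
        .option("spark.mongodb.collection", "stationsBasicAvg")
        .save())

# queryBasicAvg is the windowed DataFrame built in the question.
(queryBasicAvg.writeStream
    .queryName("basicAvg")
    .outputMode("update")            # "append" needs a watermark on the window column
    .option("checkpointLocation", "./tmp/pyspark_foreach/")
    .foreachBatch(write_to_mongo)
    .start())
```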
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Cannot sink Windowed queried streaming data w/ spark to MongoDB | 2023-02-22T10:17:03.760Z | Cannot sink Windowed queried streaming data w/ spark to MongoDB | 1,045 |
[
"aggregation",
"python",
"change-streams",
"spark-connector"
] | [
{
"code": "agregation.pipeline",
"text": "Hi Everyone,I’m currently trying to implement change stream in spark using Databricks.Unfortunately, I’m unable to read (Neither Batch nor Streaming) from MongoDB. Even though, I’m able to get collection’s schema.Background:Source: Azure Cosmos DB for MongoDB v4.0Databricks Environment:Runtime: 10.4.x-cpu-ml-scala2.12\nLibrary: org.mongodb.spark:mongo-spark-connector_2.12:10.1.1Documentation read (as new user, can’t paste more than 3 links):Change streams in Azure Cosmos DB’s API for MongoDBMongoDB | Databricks on AWSMongoDB Spark Connector - Read Config Options\nMongoDB Spark Connector - Read From Mongo\nMongoDB Spark Connector - Structured Streaming with MongoDBStreaming Data with Apache Spark and MongoDBAfter read and followed docs and tutorial, I’m still unable to read from MongoDB.Here, I get collection’s schemabase_read_config = {\n‘connection.uri’: mongo_endpoint,\n‘database’: mongo_database,\n‘collection’: mongo_collection\n}schema_df = spark.read.format(“mongodb”).options(**base_read_config).load()But if I tried to display batch data, I got the following errorcom.mongodb.spark.sql.connector.exceptions.MongoSparkException: Partitioning failed. Partitioner calling collStats command failed\nCaused by: com.mongodb.spark.sql.connector.exceptions.MongoSparkException: Partitioner calling collStats command failed\nCaused by: com.mongodb.MongoCommandException: Command failed with error 40324 (40324): ‘Unrecognized pipeline stage name: $collStats’ on server XXXX. The full response is {“ok”: 0.0, “errmsg”: “Unrecognized pipeline stage name: $collStats”, “code”: 40324, “codeName”: “40324”}Then, I tried to display streaming data (getting collection’s schema)\n\nimage1326×531 34.6 KB\nBut again, with an errorStream stopped…\norg.apache.spark.SparkException: Writing job aborted\nCaused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 8) (10.248.224.5 executor 0): com.mongodb.spark.sql.connector.exceptions.MongoSparkException: Could not create the change stream cursor.\nCaused by: com.mongodb.MongoCommandException: Command failed with error 2 (BadValue): ‘Change stream must be followed by a match and then a project stage’ on server XXXX. The full response is {“ok”: 0.0, “errmsg”: “Change stream must be followed by a match and then a project stage”, “code”: 2, “codeName”: “BadValue”}I’m not sure but I think the agregation.pipeline option is unable to identify the match and project stages in pipeline variable.I would really appreciate if someone can help me to identify what I’m doing wrong.Regards",
"username": "pmirabe"
},
{
"code": "",
"text": "The problem is you are using CosmosDB which isn’t the same as MongoDB i.e. collStats doesn’t exist in CosmosDB. Use MongoDB Atlas which is available in Azure it should be a direct replacement in your environment. It is available on the Azure Marketplace Microsoft Azure Marketplace",
"username": "Robert_Walters"
},
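For completeness, once the source is an actual MongoDB deployment (Atlas or self-managed), a streaming read looks roughly like the hedged sketch below; the URI and namespace are placeholders, schema_df is the batch read from the question, and the change-stream option name should be checked against the connector 10.x documentation.

```python
stream_df = (spark.readStream
    .format("mongodb")
    .schema(schema_df.schema)  # streaming reads need an explicit schema
    .option("spark.mongodb.connection.uri", "<atlas-connection-string>")
    .option("spark.mongodb.database", "mydb")
    .option("spark.mongodb.collection", "mycoll")
    .option("spark.mongodb.change.stream.publish.full.document.only", "true")
    .load())

query = (stream_df.writeStream
    .format("console")
    .option("checkpointLocation", "/tmp/mongo-change-stream/")
    .outputMode("append")
    .start())
```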
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Change stream must be followed by a match and then a project stage | 2023-02-24T14:48:30.238Z | Change stream must be followed by a match and then a project stage | 1,760 |
null | [
"c-driver"
] | [
{
"code": "",
"text": "Hello, I wanted to build the C driver v1.23.1 on Solaris Sparc64 and AIX64 platforms.\nIs it supported? Anyone had success.Thanks\nSanal",
"username": "Sanal_Vasudevan1"
},
{
"code": "",
"text": "The C driver is built on Debian for the sparc64 and 32/64 bit powerpc architectures, so the hardware side certainly works. As far as the Solaris and AIX OSs, that is not something that we test. If you happen to encounter issues with the vendor-provided C compiler, then you might consider using gcc or clang instead.",
"username": "Roberto_Sanchez"
}
] | Build the C driver v1.23.1 on Solaris Sparc64 and AIX 64 platforms | 2023-02-28T11:02:22.877Z | Build the C driver v1.23.1 on Solaris Sparc64 and AIX 64 platforms | 921 |
null | [
"php"
] | [
{
"code": "$manager = new MongoDB\\Driver\\Manager('mongodb://localhost:27017');\nvar_dump($manager);\n$servers = $manager->getServers();\nvar_dump($servers);´\n",
"text": "Hi,\nI installed everything from mongo db on ubuntu.\nEverything’s seems to work by I don’t find a good tutorial to start.My first lines are:Ok, because of lazy binding I receive a blank array, got it!But what do you ?Where to find a tutorial without MongoDB\\Client ?? This does not help anymore!Categories: Could not add “php 8.x” or similar.Best and thanks a lot in advance\nRobert",
"username": "Robert_Haupt"
},
{
"code": "ManagerClientext-mongodbmongodb/mongodb$client = new MongoDB\\Client('mongodb://localhost:27017');\n$collection = $client->selectCollection('dbName', 'collName');\n$collection->insertOne(['foo' => 'bar']);\nvar_dump($collection->findOne(['foo' => 'bar']));\ninsertOnefindOneinsertOne",
"text": "Hi Robert,our Drivers documentation would be a good place to start and contains a few tutorials on how to do things in PHP. Note that there are no specifics for installing on PHP 8.x, as the driver supports PHP 7.2-8.2 at this time.To clear up some confusion around the Manager and Client classes, our driver consists of two parts: the PHP extension (often referred to as PHPC, or ext-mongodb), which is a thin PECL extension to provide basic functionality. This is done to reduce the complexity of the PHP driver, as we’re able to leverage libmongoc to take care of things like server discovery, authentication, monitoring, etc.\nThe second part is the PHP library (PHPLIB, mongodb/mongodb on packagist), which provides the common drivers API that other drivers provide. In 99.9% of all cases, you’ll want to interact with that library, as that provides you with a high-level API. Once the library is installed, connecting to MongoDB and inserting data becomes a lot easier:This small snippet creates a connection, inserts a sample record into a collection via insertOne, then retrieves this record using the findOne.To answer your other questions:Out of curiosity and to potentially improve our documentation, did you follow any guides to get to the current point? I’m wondering because in our docs we always recommend people use the high-level library, since the PECL extension only provides a low-level interface not suited for everyday use.Thanks,\nAndreas",
"username": "Andreas_Braun"
},
{
"code": "apt update\napt install apache2\nadd-apt-repository ppa:ondrej/php\napt install php8.0\nservice apache2 restart\n\napt-get install php-pear\napt-get install php-dev (to avoid error \"phpize not found\")\napt install pecl\n\npecl install mongodb\napt install composer\nadded extension=mongodb.so to php.ini and it shows it with phpinfo();\n\ncd /var/www/html\ncomposer require mongodb/mongodb\nError from composer: \nProblem 1\n - Installation request for mongodb/mongodb ^1.15 -> satisfiable by mongodb/mongodb[1.15.0].\n - mongodb/mongodb 1.15.0 requires ext-mongodb ^1.15.0 -> the requested PHP extension mongodb is missing from your system.\n",
"text": "Hi Andreas,\nthanks for your long answer!\nI did start with a blank ubuntu 20.04 image:More or less the same when I install php8.2It does not know the class MongoDB\\Client Any other way to install mongodb with php8.2 ?Suggestion: apt install php8.2-mongodb would be nice And at the end: Working with the DB will be a pleasure, I still like it.\nIt’s fast and JSON is quite easy to handle so I’m looking forward to it!Another idea: At the end I will have a mongodb server on one of my instances.\nIs the basic stuff from pecl enough to connect to that local mongodb server or do I need also the library ?Best and again thanks in advance\nRobert",
"username": "Robert_Haupt"
},
{
"code": "php --ri mongodb",
"text": "The error from composer indicates that the extension is not added to the CLI environment. You can check whether it’s installed by running php --ri mongodb, which should show you information about the extension. Be aware that Ubuntu may have different php.ini files for the CLI and FPM SAPI.Theoretically speaking, the PECL extension is enough to connect to MongoDB, but it does not provide you with a high level API. For example, to insert a document you’ll have to manually create a bulk write, add the appropriate operations to it, and execute it on the manager or a previously selected server. I would always recommend using the PHP library along with the extension for a better development experience.",
"username": "Andreas_Braun"
},
{
"code": "<?php\n\necho \"Mongo DB Example<br>\";\n\n// run composer require mongodb/mongodb in your project folder before\nrequire(\"vendor/autoload.php\");\n\n// start mongod process on your instance first\n$client = new MongoDB\\Client('mongodb://localhost:27017');\n\n// select a database (will be created automatically if it not exists)\n$db = $client->test2;\necho \"Database test2 created/selected<br>\";\n\n// select a collection (will be created automatically if it not exists)\n$coll = $db->selectCollection(\"mycoll\");\necho \"Collection mycoll created/selected<br>\";\n\n// insert a datatow in the collection\n$coll->insertOne(['foo' => 'bar']);\n\n// search for the datarow\necho \"Result:<br>\";\nvar_dump($coll->findOne(['foo' => 'bar']));\n\n?>\n",
"text": "Hi Andreas,\ngot it finally to run with a help of a friend.\nI was not sure if a use a local file system or if I’m on my local server.There is the local server and the atlas and nothing more, right ?\nSo we need:Hereby is a little program that could be helpful at the beginning of your php tutorial.\nThe examples given are to specific in my opinion. If you are used to it they might be fine but to get in touch mongodb frustrated my a lot!At the end: Can you recommend any editor with mongodb php intellisense supported or installable ?Thanks for your help!Best Robert",
"username": "Robert_Haupt"
},
{
"code": "",
"text": "There is the local server and the atlas and nothing more, right ?You can deploy MongoDB on any system, just like you would on the local system. For production use you’d usually want something more resilient than a standalone: for operational resilience (read, protection against data loss) you’d create a replica set, and for scaling you’d create a sharded cluster. Atlas merely takes the task of managing your own servers off your hands.Can you recommend any editor with mongodb php intellisense supported or installableAny editor with code completion should be able to offer completion for the methods in the PHP library - the extension needs stub files for code completion to work (PhpStorm for example provides those). I personally use PhpStorm for my development needs and haven’t tried other editors for any significant work.",
"username": "Andreas_Braun"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Getting started with the new php 8 world | 2023-02-08T17:48:15.660Z | Getting started with the new php 8 world | 2,122 |
null | [
"replication",
"python"
] | [
{
"code": "readWriteAnyDatabasedbAdminAnyDatabaseclusterMonitorconfig = client.admin.command('replSetGetConfig') (Unauthorized) not authorized on admin to execute commandfrom pymongo import MongoClient\nclient = MongoClient('mongodb+srv://{dbUserName}:{dbPassword}@{cluster_url}0/{database}?retryWrites=true&w=majority')\nconfig = client.admin.command('replSetGetConfig')\nconfig['members'].append({\n '_id': 3,\n 'host': '{hostname}',\n 'priority': 0,\n 'hidden': True\n})\nclient.admin.command('replSetReconfig', config)\nreplSetReconfig",
"text": "Hi,I’m attempting to add an additional member to an existing replica set on one of our clusters. I’ve created a data base user that has readWriteAnyDatabase, dbAdminAnyDatabase & clusterMonitor. However, when I go to perform the below command in Pymongo…\nconfig = client.admin.command('replSetGetConfig') I receive the below error…\n (Unauthorized) not authorized on admin to execute command.The role clusterMonitor has been explicitly added to this user. According to the documentation, that should be enough access to run the replSetGetConfig command.Does anyone know what’s going wrong here?Full code…The connection to the DB works fine and i’m able to query results etc. I just can’t run the above queries on replSetReconfig.",
"username": "Paul_Chynoweth"
},
{
"code": ">>> client.admin.command('connectionStatus', showPrivileges=True)['authInfo']\n{'authenticatedUsers': [...], 'authenticatedUserRoles': [...], 'authenticatedUserPrivileges': [...]}\n",
"text": "Is your cluster self hosted or is it running on MongoDB Atlas? MongoDB Atlas does not allow adding replica set members in this way. Replica set config needs to happen through the MongoDB Atlas website UI (or Atlas Administration API).If this is self hosted then ensure your DB user is created in the admin database. You can verify the app is authenticated as the expected user by running the connectionStatus command:",
"username": "Shane"
},
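For the self-hosted case mentioned above, a hedged PyMongo sketch of the full reconfig flow (hostnames and credentials are placeholders; the user is assumed to be created in the admin database with the clusterManager role, which carries the replSetGetConfig/replSetReconfig privileges). Note that the config document is nested under the 'config' key and that the version field must be bumped, two details the snippet in the question skipped.

```python
from pymongo import MongoClient

# Run against the primary, authenticating to the admin database.
client = MongoClient(
    "mongodb://admin-user:secret@rs-member-1:27017/?authSource=admin&replicaSet=rs0"
)

config = client.admin.command("replSetGetConfig")["config"]

config["members"].append({
    "_id": max(m["_id"] for m in config["members"]) + 1,
    "host": "new-member.example.net:27017",
    "priority": 0,
    "hidden": True,
})
config["version"] += 1  # a reconfig must carry a higher config version

client.admin.command({"replSetReconfig": config})
```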
{
"code": "",
"text": "Hi Shane,We are using MongoDB Atlas - so that’s probably what is happening. I’ll investigate how to add replica sets through the UI or via the Atlas Admin API.Really appreciate the help!\nPaul",
"username": "Paul_Chynoweth"
},
{
"code": "",
"text": "Hi Shane,Looking through the documentation (Deploy a Replica Set — MongoDB Cloud Manager, https://www.mongodb.com/docs/manual/tutorial/expand-replica-set/) - it’s not clear to me how I can add an additional member to an existing replica set within the Atlas UI. Is it possible to add a member in this way? The steps in the first link appear to be outdated, or my version of Atlas is an older version, ‘Create New Replica Set’ is not an option for me.Thanks in advanceThanks,\nPaul",
"username": "Paul_Chynoweth"
}
] | Adding ReplicaSet with PyMongo | 2023-02-27T16:34:16.018Z | Adding ReplicaSet with PyMongo | 692 |
null | [
"c-driver"
] | [
{
"code": "",
"text": "I have a legacy code running on a Windows 32 bit process. I would like to publish data using libmongoc and bson stuff. I cannot find out how to compile for this target.\nIt looks like cmake build is limited to x64 platform (there is no [arch] option)\nI tryed to build inside x86_x64 Cross Tools Commands for VS2019 but this still generates 64 bits librairies and binaries. Did i miss something or is this target definitively deprecated ?Thanks !",
"username": "Matthieu_Moret"
},
{
"code": "cmake -G \"Visual Studio 16 2019\" -A Win32",
"text": "Hello @Matthieu_Moret, try configuring the driver with the cmake options cmake -G \"Visual Studio 16 2019\" -A Win32 noted here: Visual Studio 16 2019 — CMake 3.26.0-rc4 Documentation",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "Wonderful ! This works like a charm \nI think it would be interresting and time saving for people to speek about this option in :\nhttps://mongoc.org/libmongoc/current/installing.html\nMany thanks for your very fast reply ",
"username": "Matthieu_Moret"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Libmongoc and libbson driver Win32 Visual Studio compilation | 2023-02-27T17:17:23.205Z | Libmongoc and libbson driver Win32 Visual Studio compilation | 1,082 |
null | [
"data-modeling"
] | [
{
"code": "database = {\n\"users\": {\n \"UserA\": {\n \"_id\": \"jhas-d01j-ka23-909a\",\n \"name\": \"userA\",\n \"geo\": {\n \"lat\": \"\",\n \"log\": \"\",\n \"perimeter\": \"\"\n },\n \"session\": {\n \"lat\": \"\",\n \"log\": \"\"\n },\n \"users_accepted\": [\n \"j2jl-564s-po8a-oej2\",\n \"soo2-ap23-d003-dkk2\"\n\n ],\n \"users_rejected\": [\n \"jdhs-54sd-sdio-iuiu\",\n \"mbb0-12md-fl23-sdm2\",\n ],\n },\n \"UserB\": {...},\n \"UserC\": {...},\n \"UserD\": {...},\n \"UserE\": {...},\n \"UserF\": {...},\n \"UserG\": {...},\n\n},\ndatabase = {\n\"users\": {\n \"UserA\": {\n \"_id\": \"jhas-d01j-ka23-909a\",\n \"name\": \"userA\",\n \"geo\": {\n \"lat\": \"\",\n \"log\": \"\",\n \"perimeter\": \"\"\n },\n \"session\": {\n \"lat\": \"\",\n \"log\": \"\"\n },\n },\n \"UserB\": {...},\n \"UserC\": {...},\n \"UserD\": {...},\n \"UserE\": {...},\n \"UserF\": {...},\n \"UserG\": {...},\n\n\n},\n\"likes\": {\n \"id_27-82\" : {\n \"user_give_like\" : \"userB\",\n \"user_receive_like\" : \"userA\"\n },\n \"id_27-83\" : {\n \"user_give_like\" : \"userA\",\n \"user_receive_like\" : \"userC\"\n },\n},\n\"dislikes\": {\n \"id_23-82\" : {\n \"user_give_dislike\" : \"userA\",\n \"user_receive_dislike\" : \"userD\"\n },\n \"id_23-83\" : {\n \"user_give_dislike\" : \"userA\",\n \"user_receive_dislike\" : \"userE\"\n },\n}\n\"matches\": {\n \"match_id_1\": {\n \"user_1\": \"referece_user1\",\n \"user_2\": \"referece_user2\"\n },\n \"match_id_2\": {\n \"user_1\": \"referece_user3\",\n \"user_2\": \"referece_user4\"\n }\n}\n",
"text": "I’m trying to build a dating app, and for my backend, I’m using mongoDB. When it comes to the user’s collection, some relations are happening between documents of the same collection. For example, a user A can like, dislike, or may haven’t had the choice yet. A simple schema for this scenario is the following:}Here userA has a reference from the users it has seen and made a decision, and stores them either in “users_accepted” or “users_rejected”. If User C hasn’t been seen (either liked or disliked) by userA, then it is clear that it won’t appear in both of the arrays. However, these arrays are unbounded and may exceed the max size that a document can handle. One of the approaches may be to extract both of these arrays and create the following schema:}I need 4 basic queriesThe query 1. is fairly simple, just query the likes collection and get the users where “user_receive_like” is “userA”.Query 2. and 3. are used to get the users that userA has not seen yet, get the users that are not in query 2. or query 3.Finally query 4. may be another collectionIs this approach viable and efficient?",
"username": "David_Melendez"
},
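A hedged PyMongo sketch of how those queries could look against the second (separate collections) schema above; the connection string and database name are placeholders, and indexes on the user fields are assumed.

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["dating"]
user = "userA"

# Query 1: users who liked userA.
likers = [d["user_give_like"]
          for d in db.likes.find({"user_receive_like": user}, {"user_give_like": 1})]

# Queries 2 and 3: users userA has already liked or disliked...
seen = set(db.likes.distinct("user_receive_like", {"user_give_like": user}))
seen |= set(db.dislikes.distinct("user_receive_dislike", {"user_give_dislike": user}))

# ...so the unseen candidates are everyone else (excluding userA itself).
candidates = db.users.find({"name": {"$nin": list(seen) + [user]}})

# Query 4: userA's matches from the separate matches collection.
matches = db.matches.find({"$or": [{"user_1": user}, {"user_2": user}]})
```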
{
"code": "",
"text": "Hi David\nDid you find any solution for your query. If so, please add. I too have the same questions for my dating app",
"username": "Ananthi_R"
}
] | Datamodeling dating app plausibility | 2022-05-13T01:58:48.275Z | Datamodeling dating app plausibility | 1,947 |
null | [
"database-tools",
"backup",
"storage"
] | [
{
"code": "|2023-02-27T20:51:14.562+0000|index: &idx.IndexDocument{Options:primitive.M{name:rs_id_1, ns:rsid.rsid_metadata, v:1}, Key:primitive.D{primitive.E{Key:rs_id, Value:1}}, PartialFilterExpression:primitive.D(nil)}|\n|---|---|\n|2023-02-27T20:51:14.562+0000|no indexes to restore for collection rsid.rsid_metadata_bkp_1116|\n|2023-02-27T20:51:14.562+0000|no indexes to restore for collection rsid.samplestations|\n|2023-02-27T20:51:14.562+0000|no indexes to restore for collection rsid.bkp_sample_0807|\n|2023-02-27T20:51:14.562+0000|no indexes to restore for collection rsid.bkp_sample_1106|\n|2023-02-27T20:51:14.562+0000|restoring indexes for collection image.image from metadata|\n|2023-02-27T20:51:14.562+0000|index: &idx.IndexDocument{Options:primitive.M{name:image_id_1_resolution_1, ns:image.image, v:1}, Key:primitive.D{primitive.E{Key:image_id, Value:1}, primitive.E{Key:resolution, Value:1}}, PartialFilterExpression:primitive.D(nil)}|\n|2023-02-27T20:51:14.562+0000|no indexes to restore for collection rsid.samplestations_auto|\n|2023-02-27T20:51:14.562+0000|restoring indexes for collection image.cds_image from metadata|\n|2023-02-27T20:51:14.562+0000|index: &idx.IndexDocument{Options:primitive.M{name:cds_id_1, ns:image.cds_image, v:1}, Key:primitive.D{primitive.E{Key:cds_id, Value:1}}, PartialFilterExpression:primitive.D(nil)}|\n|2023-02-27T20:51:14.562+0000|index: &idx.IndexDocument{Options:primitive.M{expireAfterSeconds:3.1536e+07, name:createdAt_1, ns:image.cds_image, v:1}, Key:primitive.D{primitive.E{Key:createdAt, Value:1}}, PartialFilterExpression:primitive.D(nil)}|\n|2023-02-27T20:51:14.562+0000|no indexes to restore for collection rsid.rsid_metadata_sbt_model|\n|2023-02-27T20:51:14.562+0000|no indexes to restore for collection cayenne.tvstation|\n|2023-02-27T20:51:14.562+0000|restoring indexes for collection fingerprint.fingerprint from metadata|\n|2023-02-27T20:51:14.562+0000|index: &idx.IndexDocument{Options:primitive.M{name:fp_id_1, ns:fingerprint.fingerprint, v:1}, Key:primitive.D{primitive.E{Key:fp_id, Value:1}}, PartialFilterExpression:primitive.D(nil)}|\n{\"t\":{\"$date\":\"2023-02-28T02:05:36.204+00:00\"},\"s\":\"W\", \"c\":\"ACCESS\", \"id\":5626700, \"ctx\":\"conn41\",\"msg\":\"Client has attempted to reauthenticate as a single user\",\"attr\":{\"user\":{\"user\":\"__system\",\"db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-02-28T02:05:36.204+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn41\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.61.128.37:57644\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-02-28T02:05:44.078+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1677549944:78358][1524094:0x7f650f7dd700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 560123, snapshot max: 560123 snapshot count: 0, oldest timestamp: (1677549641, 1) , meta checkpoint timestamp: (1677549941, 1) base write gen: 10399739\"}}\n",
"text": "mongo --version\nMongoDB shell version v5.0.11\nBuild Info: {\n“version”: “5.0.11”,\n“gitVersion”: “d08c3c41c105cde798ca934e3ac3426ac11b57c3”,\n“openSSLVersion”: “OpenSSL 1.1.1f 31 Mar 2020”,\n“modules”: ,\n“allocator”: “tcmalloc”,\n“environment”: {\n“distmod”: “ubuntu2004”,\n“distarch”: “x86_64”,\n“target_arch”: “x86_64”\n}\n}We have taken backup using mongodump\nmongodump --host=myhost.com --port=1801 --out=/data/backupBackup was done successfully.Restore started more than 6 hrs ago and it looks like it restored all databases but mongorestore process still not finish. Process is running using nohup\nmongorestore --authenticationDatabase=“admin” --username=“myuser” --password=“mypassword” – /data/backup &nohup log has not been updated since last 5 hrs or solast few linesdate\nTue 28 Feb 2023 02:05:30 AM GMTmongodb log files does not show any progress about mongorestore, I see some checkpoint infoI do see\ndb.currentOp(), there is process name mongorestore and command is createindex but if index is getting created , isn’t mongo log file should have some progress info as well.Any idea how to check what is going on with mongorestore and how to check status of index getting created.",
"username": "Sanjay_Gupta"
},
{
"code": "",
"text": "db.currentOp(true).inprog.forEach(function(op){ if(op.msg!==undefined) print(op.msg) })\nIndex Build: draining writes received during build\nIndex Build: draining writes received during build\nIndex Build: draining writes received during buildLooks like one replica node was in recovering mode so I have dropped and recreating it. Will update about progress.",
"username": "Sanjay_Gupta"
},
{
"code": "",
"text": "That was my issue. One of my replica set went into recovering state and that’s what causing issues with mongo restore. After I removed replica set and added back again. It synced up in couple hrs and mongorestore also finished.",
"username": "Sanjay_Gupta"
},
{
"code": "",
"text": "I don’t get your solution. Can you elaborate more? why a recovering node will make the restore hang forever?",
"username": "Kobe_W"
},
{
"code": "",
"text": "It did cause issue.\nLook this issue. I got resolution and direction from here.\nhttps://jira.mongodb.org/browse/SERVER-72427?jql=labels%20%3D%20indexes",
"username": "Sanjay_Gupta"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb restore stuck | 2023-02-28T02:28:16.028Z | Mongodb restore stuck | 1,715 |
null | [] | [
{
"code": "resource \"mongodbatlas_project\" \"project\" {\n name = var.project_name\n org_id = var.org_id\n\n dynamic \"teams\" {\n for_each = var.teams\n\n content {\n team_id = mongodbatlas_teams.team[teams.key].team_id\n role_names = [teams.value.role]\n }\n }\n\n is_performance_advisor_enabled = var.is_performance_advisor_enabled\n}\n\nresource \"mongodbatlas_third_party_integration\" \"datadog_integration\" {\n count = var.enable_datadog_integration ? 1 : 0\n\n project_id = mongodbatlas_project.project.id\n type = \"DATADOG\"\n api_key = var.datadog_api_key\n region = var.datadog_region\n}\n\n# in tfvar file\nenable_datadog_integration = true\ndatadog_region = \"US\"\n\nINTEGRATION_FIELDS_INVALIDError: error creating third party integration POST https://cloud.mongodb.com/api/atlas/v1.0/groups/<GROUP_ID>/integrations/DATADOG: 400 (request \"INVALID_ENUM_VALUE\") An invalid enumeration value US1 was specified.\n",
"text": "Hi,I’m trying to enable Datadog third party integration in one of my MongoDB Atlas projects. My projects are managed using Terraform, therefore it is relying on the MongoDB Atlas API. When I’m performing Terraform apply it fails with below API error.Error:Error: error creating third party integration POST https://cloud.mongodb.com/api/atlas/v1.0/groups/<GROUP_ID>/integrations/DATADOG: 400 (request “INTEGRATION_FIELDS_INVALID”) At least one integration field is invalid.Relevant code block:Additionally the Datadog API key value is set as an environment variable and it contains the correct value.I’d appreciate any leads on the error code INTEGRATION_FIELDS_INVALID or on anything that I’ve mistaken in here.Refs:\nTerraform module doc: Terraform Registry\nAtlas doc: https://www.mongodb.com/docs/atlas/tutorial/third-party-service-integrations/PS. Atlas doc specifies US1, US3, US5, and EU1 as the values for Datadog regions and specifying them resulted in below error:",
"username": "Isuru_Siriwardana"
},
{
"code": "",
"text": "Hi, The Terraform documentation indicates that there are two valid options - EU or US.\n\nScreen Shot 2023-02-27 at 9.27.15 AM790×172 11.4 KB\n\nhttps://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/third_party_integration",
"username": "Tonya_Edmonds1"
},
{
"code": "US",
"text": "Hi Tonya, I initially tried US as mentioned in the Terraform documentation and it resulted in a similar error with an error message saying “Invalid value not included in the enumeration” or something similar to that.",
"username": "Isuru_Siriwardana"
}
] | INTEGRATION_FIELDS_INVALID error from Atlas API when trying to enable Datadog integration | 2023-02-22T11:31:24.228Z | INTEGRATION_FIELDS_INVALID error from Atlas API when trying to enable Datadog integration | 738 |
null | [] | [
{
"code": "",
"text": "I am trying to connect my GCP VPC to my MongoDB Cluster through network peering. But it is not working as I expected. I have implemented network peering in multiple environment and it was successful, but not in this specific GKE. The only difference I can see is this GKE does not support VPC native routing, it is routes based cluster. I think network peering is successful, so which IP should I whitelist in atlas?There are 3 routes mentioned as “rejected by peer configuration” in my VPC network peering tab in GCP VPC. These are static routes and these are the routes to my nodes. So I doubt this is the reason, is there any way to resolve this. Because I can’t enable GKE VPC native routing without deleting the cluster, which is impossible.",
"username": "sbn390"
},
{
"code": "",
"text": "Hey Subin,Reading through the VPC peering docs, it appears that:By default, VPC Network Peering with GKE is supported when used with IP aliases. If you don’t use IP aliases, you can export custom routes so that GKE containers are reachable from peered networks.Based on your description, it sounds like you are not able to leverage IP aliases?Best,\nChris",
"username": "Christopher_Shum"
},
{
"code": "",
"text": "If you don’t use IP aliases, you can export custom routes so that GKE containers are reachable from peered networks.I do export the custom routes, but it is shown as “rejected by peer”. I think it doesn’t work with routes based cluster. I can’t recreate the cluster since it is in a high availability state. If you have any workarounds, please do reply.",
"username": "sbn390"
}
] | Network Peering to GKE Routes Based Cluster | 2023-02-23T10:17:47.030Z | Network Peering to GKE Routes Based Cluster | 866 |
[
"next-js",
"data-api"
] | [
{
"code": "",
"text": "\nas above documentation introduced, Data api takes longer to complete than driver.",
"username": "11115_1"
},
{
"code": "",
"text": "Hi @11115_1 and welcome to the MongoDB community forum!!i wonder if Data api will faster than driver in the future.The Driver and the Data API are two different aspects when it comes to querying the data in MongoDB.\nThe MongoDB drivers generally resides on the servers, and act as the bridge in between the request and the database server. However, the Data API acts as an additional layer to the request being made.\nDespite the fact that the data API includes an extra compute and network layer, it provides straightforward API logic out of the box, as well as serverless functions and separate rules/schema validation. It also acts as a proxy, assisting in the management of networks at scale.Also, as mentioned in the documentation, for applications which require low latency, using drivers is the recommended method.\nDepending on your use case, you can choose in between using the drivers and the Atlas Data API.and if i wanna use vercel’s edge function to connect with mongodb, how could i do?If I understand the question correctly, you are looking to connect the Vercel’s edge function to the serverless Atlas. In that case, the blog post on Integrate MongoDB into Vercel Functions for the Serverless Experience might be a useful information to answer the question.Please let us know if my understanding is wrong here.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "i appreciate your reply! and i mentioned vercels’s function is edge function which is different from serverless function. and it runs on edge runtime not nodejs. i wanna check mongodb in nextjs’s middleware that runs on edge runtime. how could we connect mongo using edge function?this links will helpfull! \nedge runtime\nedge middleware\nthis is why i need mongodb support",
"username": "11115_1"
}
] | Will data api be faster in the future? | 2023-02-16T04:12:18.826Z | Will data api be faster in the future? | 1,582 |
|
null | [
"api"
] | [
{
"code": "",
"text": "How to get all pending users with Atlas Admin API? Using this method from the doc returns only 50 entries. There don’t seem to be pagination options like the ListUsers endpoint.How to get all pending users?",
"username": "Jean-Baptiste_Beau"
},
{
"code": "after_id'https://realm.mongodb.com/api/admin/v3.0/groups/{group_id}/apps/{app_id}/user_registrations/pending_users?after=63d748faee3c72db5f1bbc08'\nuser_id_iduser_idafter_id[\n{\"_id\":\"63d748fe5dee77476b4f70aa\",\"domain_id\":\"63d7447f78613e6d6ec96df0\",\"login_ids\":[{\"id_type\":\"email\",\"id\":\"pendinguser51\"}],\"user_id\":\"\"},\n{\"_id\":\"63d74902ee3c72db5f1bbd97\",\"domain_id\":\"63d7447f78613e6d6ec96df0\",\"login_ids\":[{\"id_type\":\"email\",\"id\":\"pendinguser52\"}],\"user_id\":\"\"},\n{\"_id\":\"63d749075970c7cd4fe73f21\",\"domain_id\":\"63d7447f78613e6d6ec96df0\",\"login_ids\":[{\"id_type\":\"email\",\"id\":\"pendinguser53\"}],\"user_id\":\"\"}\n]\nafter_id",
"text": "Hi @Jean-Baptiste_Beau,How to get all pending users with Atlas Admin API? Using this method from the doc returns only 50 entries. There don’t seem to be pagination options like the ListUsers endpoint.Thanks for raising this one. You can use the after query parameter with the _id value of the last pending user from the previous request. An example below:Note: the List Users API will utilise the user_id value, the List Pending Users API will require the _id value as pending users do not have a user_idI had 53 pending users in my test environment, after specifying the after query parameter with the _id value of 50th pending user, the next 3 are returned. Example response:I have created a request to have our documentation updated to note that the after query parameter can be specified using the _id value for pending users.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "after",
"text": "Update : The list pending users API documentation has been updated with the after parameter and what values to provide.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Get all pending users with Atlas Admin API? | 2023-01-23T17:35:30.470Z | Get all pending users with Atlas Admin API? | 1,317 |
null | [
"atlas",
"charts"
] | [
{
"code": "",
"text": "Hello there. Currently, I’m just playing with embedded Atlas charts.One simple example: I have an embedded chart size of 300x300 pixels. If my bar chart grows, can I add scroll bars to my chart and see the complete chart?Or what’s the best way to resolve it?",
"username": "Orlando_Herrera"
},
{
"code": "",
"text": "Hi @Orlando_Herrera. An embedded chart always scales to fit in an iframe. But it should be possible to use custom CSS to enable scrolling.Another option is using an embedded dashboard to maintain the size of a chart inside it and introduce scrolling using the fixed height and width options.",
"username": "Avinash_Prasad"
},
{
"code": "<Chart height={'200px'} width={'500px'} filter={{\"createdAt\": {$gte: new Date(MyDate1)}} chartId={'MyId'} /> \n",
"text": "Hi Avinash, thank you for your answer.Currently I’m using JavaScript SDK (not IFrame)\nBut if an embedded dashboard is used, what about filters?\nCan I use custom filters in the same way I use charts ? Example:",
"username": "Orlando_Herrera"
},
{
"code": "",
"text": "You have two options on using filtering inside a dashboard.We have just released dashboard filtering on embedded dashboards. So if all your charts in the dashboard use a single data source, then you can easily use dashboard filtering to filter all charts. See the attached gif to switch it on.\nYou can start using this feature in SDK from Monday onwards. We are updating the SDK on Monday.You can individually filter each chart in a dashboard by using the below method.\n\nimage1389×257 34.2 KB\n\nMore details here @mongodb-js/charts-embed-dom - npm (npmjs.com)Let me know if you face any more hurdles",
"username": "Avinash_Prasad"
},
{
"code": "",
"text": "@Orlando_Herrera We have updated the SDK today. You can use the embed dashboard filters in the UI and SDK now.",
"username": "Avinash_Prasad"
},
{
"code": "const MyDashboard = ({dashboardId, height, width}) =>{\n const date1 = new Date('2024-02-20')\n const sdk = new ChartsEmbedSDK({baseUrl: 'https://charts.mongodb.com/MybaseurlXXXX'});\n const chartDiv = useRef(null);\n const [rendered, setRendered] = useState(false);\n const [chart] = useState(\n sdk.createDashboard({\n dashboardId: dashboardId, \n height: height, \n width: width, \n filter: { \"createdAt\": {$gte: date1} },\n }));\n",
"text": "Hi @Avinash_Prasad !! thanks again for your answer.\nI wanna share with you, please tell me If I’m using correctly the “dashboard filters” (with JS SDK).Right, I allowed all fields:\nUntitled810×397 15.7 KB\nI’m working with React JS, this is my code, as you can see, I’m using “Filter: {}” optionSo, I can see my dashboard embedded in my webpage, but my Filter is not working, dashboard is showing me all the data without my date filter.Thanks in advance.",
"username": "Orlando_Herrera"
},
{
"code": "filter: {\"createdAt\": {$gte: new Date(date1)}}\n",
"text": "@Avinash_Prasad\nUpdate\nIt’s working right now.\nI just moved the “new Date”, I dont know why, but it works for me:",
"username": "Orlando_Herrera"
},
{
"code": "",
"text": "Glad you got it working",
"username": "Avinash_Prasad"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Are there scroll bars in embedded charts? | 2023-02-23T01:29:20.983Z | Are there scroll bars in embedded charts? | 1,286 |
[] | [
{
"code": "",
"text": "Hello! I am learning how to integrate MongoDB with NextJS and I can’t seem to connect to MongoDB when I render another page. Mongo connects on my index but not on my other pages.I followed the tutorial in the docs…the only thing that is different is that the catch section which I found out requires a return statement.No connection:\n\nRecipe1637×874 54.2 KB\n",
"username": "Christopher_Moua"
},
{
"code": "MongoDB connectiondb.jsimport { MongoClient } from 'mongodb'\n\nconst uri = process.env.MONGODB_URI\nconst options = {\n useUnifiedTopology: true,\n useNewUrlParser: true,\n}\n\nconst client = new MongoClient(uri, options)\n\nexport async function connectToDatabase() {\n await client.connect()\n return client.db(process.env.MONGODB_DB)\n}\ndbimport App from 'next/app'\nimport { connectToDatabase } from '../db' // call db function to your another page\n\nclass MyApp extends App {\n static async getInitialProps({ Component, ctx }) {\n const db = await connectToDatabase()\n const appProps = Component.getInitialProps\n ? await Component.getInitialProps(ctx)\n : {}\n\n return { ...appProps, db }\n }\n",
"text": "Hi @Christopher_Moua,Welcome to the MongoDB Community forums I suspect you might not be importing the MongoDB connection function to another page. Also, it seems that you’re using server-side props to determine the connection state. Note that using server-side props is not the right way to do this.I’ll suggest, you create one db.js file and write the code to connect to MongoDB, sharing the code snippet for your reference:And then import it to other pages to do the database call and connect to the db. I’m sharing the code snippet for reference:In this way, it will work as expected.I followed the tutorial in the docsCan you share the link to the documentation you followed? Meanwhile, you can check this tutorial on How to Integrate MongoDB Into Your Next.js App.I hope it helps!Thanks,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Connecting to MongoDB Atlas | 2023-02-14T12:22:02.990Z | Connecting to MongoDB Atlas | 476 |
|
null | [
"aggregation",
"queries",
"node-js",
"data-modeling"
] | [
{
"code": " TransferModel.aggregate([{ $group: { _id: null, \"TotalAmount\": { $sum: \"$amount\" } } }]){ \"_id\": null, \"TotalAmount\": 0 }",
"text": "I’m trying to sum up all amount field in my database with aggregate and sum like this : TransferModel.aggregate([{ $group: { _id: null, \"TotalAmount\": { $sum: \"$amount\" } } }])\nBut am getting 0 as the response like this : { \"_id\": null, \"TotalAmount\": 0 }\nPlease is there anything am doing wrong? I would really appreciate if anybody can help, Thanks",
"username": "Emmanuel_Oluwatimilehin"
},
{
"code": "amountawait TransferModel.countDocuments({})",
"text": "Hello @Emmanuel_Oluwatimilehin,Your query looks good, probably there are some minor issues, try to make sure the below things,",
"username": "turivishal"
},
{
"code": "amount",
"text": "In addition tothe amount property should have a numeric type valuemake sure that your property is really amount?It could be Amount since you seem to favour an upper case as the first letter since your collection is TransferModel and your result field is TotalAmount.Sharing some sample documents would be helpful.",
"username": "steevej"
}
] | Add all field (Amount) | 2023-02-27T15:56:16.277Z | Add all field (Amount) | 623 |
null | [
"queries",
"java",
"spring-data-odm"
] | [
{
"code": "db.fooCollection.find(\n {\"$or\": [{\"$and\": [{\"params\": {\"$elemMatch\": {\"name\": \"manufacturer\", \"val\": \"FooManufacturer\"}}}, {\"params\": {\"$elemMatch\": {\"name\": \"articleId\", \"val\": \"FOO-FOO-ARTICLE-ID-FOO\"}}}]}, {\"$and\": [{\"params\": {\"$elemMatch\": {\"name\": \"brand\", \"val\": \"FooManufacturer\"}}}, {\"params\": {\"$elemMatch\": {\"name\": \"articleId\", \"val\": \"FOO-FOO-ARTICLE-ID-FOO\"}}}]}]}\n ).collation({locale: 'en', strength: 1, alternate: \"shifted\", maxVariable: \"punct\"})\n .explain(\"executionStats\")\n{\"executionSuccess\": true, \"nReturned\": new NumberInt(\"1\"), \"executionTimeMillis\": new NumberInt(\"2\"), \"totalKeysExamined\": new NumberInt(\"1\"), \"totalDocsExamined\": new NumberInt(\"1\"), \"executionStages\": {\"stage\": \"SUBPLAN\", \"nReturned\": new NumberInt(\"1\"), \"executionTimeMillisEstimate\": new NumberInt(\"2\"), \"works\": new NumberInt(\"2\"), \"advanced\": new NumberInt(\"1\"), \"needTime\": new NumberInt(\"0\"), \"needYield\": new NumberInt(\"0\"), \"saveState\": new NumberInt(\"0\"), \"restoreState\": new NumberInt(\"0\"), \"isEOF\": new NumberInt(\"1\"), \"inputStage\": {\"stage\": \"FETCH\", \"filter\": {\"$or\": [{\"$and\": [{\"params\": {\"$elemMatch\": {\"$and\": [{\"name\": {\"$eq\": \"articleId\"}}, {\"val\": {\"$eq\": \"FOO-FOO-ARTICLE-ID-FOO\"}}]}}}, {\"params\": {\"$elemMatch\": {\"$and\": [{\"name\": {\"$eq\": \"brand\"}}, {\"val\": {\"$eq\":\"FooManufacturer\"}}]}}}]}, {\"$and\": [{\"params\": {\"$elemMatch\": {\"$and\": [{\"name\": {\"$eq\": \"articleId\"}}, {\"val\": {\"$eq\": \"FOO-FOO-ARTICLE-ID-FOO\"}}]}}}, {\"params\": {\"$elemMatch\": {\"$and\": [{\"name\": {\"$eq\": \"manufacturer\"}}, {\"val\": {\"$eq\": \"FooManufacturer\"}}]}}}]}]}, \"nReturned\": new NumberInt(\"1\"), \"executionTimeMillisEstimate\": new NumberInt(\"0\"), \"works\": new NumberInt(\"2\"), \"advanced\": new NumberInt(\"1\"), \"needTime\": new NumberInt(\"0\"), \"needYield\": new NumberInt(\"0\"), \"saveState\": new NumberInt(\"0\"), \"restoreState\": new NumberInt(\"0\"), \"isEOF\": new NumberInt(\"1\"), \"docsExamined\": new NumberInt(\"1\"), \"alreadyHasObj\": new NumberInt(\"0\"), \"inputStage\": {\"stage\": \"IXSCAN\", \"nReturned\": new NumberInt(\"1\"), \"executionTimeMillisEstimate\": new NumberInt(\"0\"), \"works\": new NumberInt(\"2\"), \"advanced\": new NumberInt(\"1\"), \"needTime\": new NumberInt(\"0\"), \"needYield\": new NumberInt(\"0\"), \"saveState\": new NumberInt(\"0\"), \"restoreState\": new NumberInt(\"0\"), \"isEOF\": new NumberInt(\"1\"), \"keyPattern\": {\"params.name\": new NumberInt(\"1\"), \"params.val\": new NumberInt(\"1\")}, \"indexName\": \"params.name_1_params.val_1\", \"collation\": {\"locale\": \"en\", \"caseLevel\": false, \"caseFirst\": \"off\", \"strength\": new NumberInt(\"1\"), \"numericOrdering\": false, \"alternate\": \"shifted\", \"maxVariable\": \"punct\", \"normalization\": false, \"backwards\": false, \"version\": \"57.1\"}, \"isMultiKey\": true, \"multiKeyPaths\": {\"params.name\": [\"params\"], \"params.val\": [\"params\"]}, \"isUnique\": false, \"isSparse\": false, \"isPartial\": false, \"indexVersion\": new NumberInt(\"2\"), \"direction\": \"forward\", \"indexBounds\": {\"params.name\": [\"[CollationKey(0xCOLLECTION_KEY), CollationKey(0xCOLLECTION_KEY)]\"], \"params.val\": [\"[CollationKey(0xCOLLECTION_KEY_2), CollationKey(0xCOLLECTION_KEY_2)]\"]}, \"keysExamined\": new NumberInt(\"1\"), \"seeks\": new NumberInt(\"1\"), \"dupsTested\": new NumberInt(\"1\"), 
\"dupsDropped\": new NumberInt(\"0\")}}}}\n \"planSummary\": \"IXSCAN { params.name: 1, params.val: 1 }, IXSCAN { params.name: 1, params.val: 1 }\", \"keysExamined\": 87182, \"docsExamined\": 87182, \"cursorExhausted\": true, \"numYields\": 484, \"nreturned\": 1,\n",
"text": "Hi,\nI have a problem with the functioning of the application related to the long response time to the find query.\nThis is quite strange because if I run the query myself (e.g. via the dataGrip / Intellij driver) the response time is about 500-700 ms, and when it is performed by an application that uses spring data mongo, the response time is from 33000 ms to 45000 ms. Not only the time is different - but verifying the query more specifically using.explain(“executionStats”) - the number of keysExamined.\nThis is the query:This is executionStats from manual query run:and this is interesting - from app logs (same query - same db, executed via spring)I cannot understand how the result of the same query is different in each case. I tried to use an older version of the mongo driver in dataGrip because I thought maybe it was outdated in the application - but it didn’t help. Still hand-made is precise and fast. Both queries (manual and in-app) using same index.",
"username": "dirtydb"
},
{
"code": "",
"text": "I would be surprised if the same query performed on the same data set on the same server but with different drivers will provide such a big difference in the number of keys examined and execution time.If the query is the same, the server should take the same logical steps. Something must be different. A subtle difference might make a big difference.It would be best to share the spring code for the query.",
"username": "steevej"
},
{
"code": "private suspend fun findByParams(parameters: Set < Map < ParamName, ParamValue >> ): CloseableIterator < ProductDocument > {\n val collation = Collation.of(\"en\").strength(1).alternate(\"shifted\").maxVariable(\"punct\")\n val query = Query().collation(collation)\n val criterias = parameters.map {\n set - > set.map {\n Criteria.where(\"parameters\").elemMatch(Criteria.where(\"name\").iseEqualTo(it.key.name).and(\"value\").isEqualTo(it.value.value))\n }\n }\n query.addCriteria(Criteria().orOperator(criterias.mapNotNull {\n Criteria().andOperator(it)\n }))\n return withContext(Dispatchers.IO) {\n mongoTemplate.stream(query, ProductDocument::class.java)\n }\n}\n",
"text": "This is spring code:The problem is strange because I manually run the query generated by this method - so I assume that the results should be identical.",
"username": "dirtydb"
},
{
"code": "{\"$elemMatch\": {\"name\": \"brand\", \"val\": \"FooManufacturer\"}}elemMatch(Criteria.where(\"name\").iseEqualTo(it.key.name).and(\"value\").isEqualTo(it.value.value))",
"text": "You do not seem to query the same fields.Manually you are doing $elemMatch on the fields name and val while your spring code performs the elemMatch of the fields name and value.Manual{\"$elemMatch\": {\"name\": \"brand\", \"val\": \"FooManufacturer\"}}SpringelemMatch(Criteria.where(\"name\").iseEqualTo(it.key.name).and(\"value\").isEqualTo(it.value.value))Your index being on params.name:1,params.val:1 is fully covers the fields of the manual query but does not covers the field of the spring query because params.value is not part of the index. That is the reason why more keys are examined.That confirms my suspicions that the query was not the same.It would appears that it is a simple typing error in your spring code; value rather than val.",
"username": "steevej"
}
] | Different query execution times and keys examined depending on the call method | 2023-02-24T20:29:08.945Z | Different query execution times and keys examined depending on the call method | 1,118 |
null | [
"queries",
"atlas-search",
"text-search"
] | [
{
"code": ".find(\n { $text: { $search: KEYWORD } },\n { tags: { $in: [KEYWORD] } }\n)\n",
"text": "Greetings,I’m looking to find the latest XX documents in a collection by 1) KEYWORD text search OR 2) tags (array) contains KEYWORD. Something like this (which doesn’t seem to work):So: Is it possible to find by text search and tags in the same query? If so, could someone point me toward the proper syntax?Related: Would it be more performant to just include the tags field in my text index and simply perform a single text search?Thanks!",
"username": "Jet_Hin"
},
{
"code": "find()atlas search.find(\n { $text: { $search: KEYWORD } },\n { tags: { $in: [KEYWORD] } }\n)\nqueryprojectiondb.collection.find()KEYWORD\"test\"DB> KEYWORD\ntest\nDB> db.testcoll.find({$text:{$search:KEYWORD},tags:{$in:[KEYWORD]}})\n[\n {\n _id: ObjectId(\"63edb5ad8f476e1a65817e7a\"),\n tags: 'test',\n sometextfield: 'test'\n }\n]\ndb.testcoll.find({$or:[{$text:{$search:KEYWORD}},{tags:{$in:['test']}}]})\n[\n {\n _id: ObjectId(\"63edb5ad8f476e1a65817e7a\"),\n tags: 'test',\n sometextfield: 'test'\n }\n]\n$or$or$text$or$text$or$textIXSCANFETCH$ortagsIXSCANFETCHdb.collection.explain(\"executionStats\").find()$or$in",
"text": "Hi @Jet_Hin,Since you’re using find(), I’m assuming you’re using Text Search Operators on a Self-Managed Deployments. However, I see that the question is tagged with atlas search. Note that Atlas Search requires the query to use the aggregation pipeline.Something like this (which doesn’t seem to work):What doesn’t work specifically? Are you getting any particular error?Based off your example, it seems you have placed one of the query operators as a projection. More information in the db.collection.find() documentation.In saying so, please see the similar example below where KEYWORD is \"test\":I’m looking to find the latest XX documents in a collection by 1) KEYWORD text search OR 2) tags (array) contains KEYWORD.However, I do note you have specified “OR” in your conditions. In this case, perhaps the below example would suit your use case:Please note the following:If $or includes a $text query, all clauses in the $or array must be supported by an index. This is because a $text query must use an index, and $or can only use indexes if all its clauses are supported by indexes. If the $text query cannot use an index, the query will return an error.Related: Would it be more performant to just include the tags field in my text index and simply perform a single text search?Might be worth testing this in your own environment since I have only tried this on my my test environment which consists of only a few documents but the results are as follows from my testing:For more information regarding the above, I would refer to the db.collection.explain(\"executionStats\").find() documentation.Some operator documentation references for your information:I would be curious to also know if you’re using an on-prem environment or Atlas deployment? If you’re using Atlas, you may wish to consider looking into using Atlas Search to see if it suits your use case and requirements.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "find()",
"text": "Hi Jason, thanks for getting back to me.Yes, I’m seeking to use find() in an Atlas Search collection.I have title and description fields which are indexed, but the tags field (array) is not included in the index.My basic example doesn’t throw an error, but it appears that only the text search is returning results – the tags query appears to be ignored.The $or query you suggested throws this error: “Failed to produce a solution for TEXT under OR - other non-TEXT clauses under OR have to be indexed as well.” I assume this is as per the documentation you referenced.Though it might be nice to combine the results from a text search of my indexed fields along with separate “in array” query, it seems it’s not possible. So I suppose the easiest solution would be for me to recreate the index and include the tags array for find purposes.Atlas search looks very cool, but is frankly overkill for my very basic application.Thanks again for your help, Jet",
"username": "Jet_Hin"
},
{
"code": "find()$search.find()\"text\"\"text\"$or",
"text": "Hi @Jet_Hin,Yes, I’m seeking to use find() in an Atlas Search collection.As noted before, Atlas Search’s $search stage requires the use of the aggregation pipeline (not .find()). I believe you have a \"text\" index on a collection which just so happens to be hosted in an Atlas Deployment as opposed to an “Atlas Search collection” . The \"text\" index is different to an Atlas Search Index.The $or query you suggested throws this error: “Failed to produce a solution for TEXT under OR - other non-TEXT clauses under OR have to be indexed as well.” I assume this is as per the documentation you referenced.Based of the error, you’ll probably need another index on the $or field as well (that is not the text index field).Though it might be nice to combine the results from a text search of my indexed fields along with separate “in array” query, it seems it’s not possible. So I suppose the easiest solution would be for me to recreate the index and include the tags array for find purposes.Are you able to provide some sample document(s) along with the expected output? Just trying to get an idea of what the document(s) look like and what the output you’re trying to achieve is.Atlas search looks very cool, but is frankly overkill for my very basic application.I’m curious to know why it would be overkill for your use case. If you have any these details I could possibly communicate these to the product team.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"xxx\"\n },\n \"title\": \"Sandy knocked them down. Nothing will make them leave.\",\n \"description\": \"The risk of hurricanes hitting New York and southern New England is definitely going up.\",\n \"tags\": [\n \"hurricanes\",\"hurricane sandy\",\"New York\",\"New England\"\n ]\n}\n$text: { $search: KEYWORD }tags: { $in: [KEYWORD] }$or$in",
"text": "Hi Jason,Here’s a sample document:The title and description fields are in a text index; the tags array is not included in the index. I’d like to return the X most recent documents where 1) $text: { $search: KEYWORD } OR 2) tags: { $in: [KEYWORD] }.I think the $or error is indicating that I need to include the tags field in my index (i.e. a text search doesn’t support a separate $in query.)My reasoning re: Mongo text vs search is that I just need to find records by simple term matching and return a list sorted by date. Relevance isn’t required.",
"username": "Jet_Hin"
},
{
"code": "$or$intext$orDB> db.collection.createIndex({title:\"text\",description:\"text\"})\ntitle_text_description_text\nDB> db.collection.find({$text:{$search:'hurricanes'}})\n[\n {\n title: 'Sandy knocked them down. Nothing will make them leave.',\n description: 'The risk of hurricanes hitting New York and southern New England is definitely going up.',\n tags: [ 'hurricanes', 'hurricane sandy', 'New York', 'New England' ]\n }\n]\n/// Performing a query with `$or` now with a text search:\nDB> db.collection.find({$or:[{$text:{$search:'hurricanes'}},{tags:{$in:['hurricanes']}}]})\nUncaught:\nMongoServerError: error processing query: ns=textdb.collectionTree: $or\n tags $eq \"hurricanes\"\n TEXT : query=hurricanes, language=english, caseSensitive=0, diacriticSensitive=0, tag=NULL\ntags$orDB> db.collection.createIndex({tags:1})\ntags_1\nDB> db.collection.find({$or:[{$text:{$search:'hurricanes'}},{tags:{$in:['hurricanes']}}]})\n[\n {\n title: 'Sandy knocked them down. Nothing will make them leave.',\n description: 'The risk of hurricanes hitting New York and southern New England is definitely going up.',\n tags: [ 'hurricanes', 'hurricane sandy', 'New York', 'New England' ]\n }\n]\ntags\"Dragon\"tags/// Note : I dropped all the previous indexes mentioned above\nDB> db.collection.createIndex({title:\"text\",description:\"text\",tags:\"text\"})\ntitle_text_description_text_tags_text\ndb.collection.find({$text:{$search:'Dragon'}})\n[\n {\n title: 'Sandy knocked them down. Nothing will make them leave.',\n description: 'The risk of hurricanes hitting New York and southern New England is definitely going up.',\n tags: [\n 'hurricanes',\n 'hurricane sandy',\n 'New York',\n 'New England',\n 'Dragon'\n ]\n }\n]\n",
"text": "I think the $or error is indicating that I need to include the tags field in my index (i.e. a text search doesn’t support a separate $in query.)I’m not entirely sure the error indicates that the tags field needs to be included in the text index (unless you are advising that it just needs to be indexed which I believe is the case). However, see my testing below.I reproduced the error below by creating a text index only on the 2 fields and then trying to perform the $or operation:After creating an index on tags and executing the same $or query above:As you have advised before, you could create the text index so that it also includes the tags field (I added the array element \"Dragon\" only to the tags field for demonstration):My reasoning re: Mongo text vs search is that I just need to find records by simple term matching and return a list sorted by date. Relevance isn’t required.Thanks for the clarification Jet ",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Find with Multiple Conditions: Text Search OR In Tag Array | 2023-02-14T16:23:15.037Z | Find with Multiple Conditions: Text Search OR In Tag Array | 4,542 |
null | [
"server"
] | [
{
"code": "Mar 20 19:20:54 <server name> systemd: Starting Apply the settings specified in cloud-config...\nMar 20 19:20:54 <server name> systemd: Started Permit User Sessions.\nMar 20 19:20:54 <server name> systemd: Started Job spooling tools.\nMar 20 19:20:54 <server name> sm-notify[2702]: Version 1.3.0 starting\nMar 20 19:20:54 <server name> ec2net: [ec2ifscan] Scanning for unconfigured interfaces\nMar 20 19:20:54 <server name> rsyslogd: [origin software=\"rsyslogd\" swVersion=\"8.24.0-57.amzn2.1\" x-pid=\"2699\" x-info=\"http://www.rsyslog.com\"] start\nMar 20 19:20:54 <server name> sshd: Could not load host key: /etc/ssh/ssh_host_dsa_key\nMar 20 19:20:54 <server name> systemd: Starting Wait for Plymouth Boot Screen to Quit...\nMar 20 19:20:54 <server name> systemd: Started Command Scheduler.\nMar 20 19:20:54 <server name> systemd: Starting Terminate Plymouth Boot Screen...\nMar 20 19:20:54 <server name> systemd: Started System Logging Service.\nMar 20 19:20:54 <server name> systemd: Started OpenSSH server daemon.\nMar 20 19:20:54 <server name> systemd: Started Finds and configures elastic network interfaces.\nMar 20 19:20:54 <server name> systemd: Started Notify NFS peers of a restart.\nMar 20 19:20:54 <server name> freshclam: ClamAV update process started at Sun Mar 20 19:20:54 2022\nMar 20 19:20:54 <server name> freshclam: daily.cld database is up-to-date (version: 26487, sigs: 1976398, f-level: 90, builder: raynman)\nMar 20 19:20:54 <server name> freshclam: main.cvd database is up-to-date (version: 62, sigs: 6647427, f-level: 90, builder: sigmgr)\nMar 20 19:20:54 <server name> freshclam: bytecode.cvd database is up-to-date (version: 333, sigs: 92, f-level: 63, builder: awillia2)\nMar 20 19:20:54 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 2190ms.\nMar 20 19:20:54 <server name> amazon-ssm-agent: Error occurred fetching the seelog config file path: open /etc/amazon/ssm/seelog.xml: no such file or directory\nMar 20 19:20:54 <server name> amazon-ssm-agent: Initializing new seelog logger\nMar 20 19:20:54 <server name> amazon-ssm-agent: New Seelog Logger Creation Complete\nMar 20 19:20:54 <server name> amazon-ssm-agent: 2022-03-20 19:20:54 INFO Agent will take identity from EC2\nMar 20 19:20:54 <server name> amazon-ssm-agent: 2022-03-20 19:20:54 INFO [amazon-ssm-agent] using named pipe channel for IPC\nMar 20 19:20:54 <server name> start-amazon-cloudwatch-agent: Valid Json input schema.\nMar 20 19:20:54 <server name> start-amazon-cloudwatch-agent: I! 
Detecting run_as_user...\nMar 20 19:20:54 <server name> systemd: Received SIGRTMIN+21 from PID 1970 (plymouthd).\nMar 20 19:20:54 <server name> systemd: Started Wait for Plymouth Boot Screen to Quit.\nMar 20 19:20:54 <server name> systemd: Started Terminate Plymouth Boot Screen.\nMar 20 19:20:54 <server name> systemd: Started Serial Getty on ttyS0.\nMar 20 19:20:54 <server name> systemd: Started Getty on tty1.\nMar 20 19:20:54 <server name> systemd: Reached target Login Prompts.\nMar 20 19:20:54 <server name> amazon-ssm-agent: 2022-03-20 19:20:54 INFO [amazon-ssm-agent] using named pipe channel for IPC\nMar 20 19:20:55 <server name> amazon-ssm-agent: 2022-03-20 19:20:54 INFO [amazon-ssm-agent] using named pipe channel for IPC\nMar 20 19:20:55 <server name> mongod: about to fork child process, waiting until server is ready for connections.\nMar 20 19:20:55 <server name> mongod: forked process: 2827\nMar 20 19:20:55 <server name> amazon-ssm-agent: 2022-03-20 19:20:54 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.0.1124.0\nMar 20 19:20:55 <server name> amazon-ssm-agent: 2022-03-20 19:20:54 INFO [amazon-ssm-agent] OS: linux, Arch: amd64\nMar 20 19:20:55 <server name> mongod: ERROR: child process failed, exited with 14\nMar 20 19:20:55 <server name> mongod: To see additional information in this output, start without the \"--fork\" option.\nMar 20 19:20:55 <server name> systemd: mongod.service: control process exited, code=exited status=14\nMar 20 19:20:55 <server name> systemd: Failed to start MongoDB Database Server.\nMar 20 19:20:55 <server name> systemd: Unit mongod.service entered failed state.\nMar 20 19:20:55 <server name> systemd: mongod.service failed.\nMar 20 19:20:55 <server name> systemd: Started EC2 Instance Connect Host Key Harvesting.\nMar 20 19:20:55 <server name> cloud-init: Cloud-init v. 19.3-45.amzn2 running 'modules:config' at Sun, 20 Mar 2022 13:50:55 +0000. Up 5.98 seconds.\nMar 20 19:20:55 <server name> systemd: Started Apply the settings specified in cloud-config.\nMar 20 19:20:55 <server name> systemd: Starting Initial hibernation setup job...\nMar 20 19:20:55 <server name> systemd: Starting Execute cloud user/final scripts...\nMar 20 19:20:55 <server name> hibinit-agent: Effective config: {'grub_update': True, 'swap_percentage': 100, 'log_to_syslog': True, 'touch_swap': False, 'state_dir': '/var/lib/hibinit-agent', 'swapoff': 'swapoff {swapfile}', 'mkswap': 'mkswap {swapfile}', 'swapon': 'swapon {swapfile}', 'swap_mb': 4000}\nMar 20 19:20:55 <server name> hibinit-agent: Requesting new IMDSv2 token.\nMar 20 19:20:55 <server name> hibinit-agent: Instance Launch has not enabled Hibernation Configured Flag. 
hibinit-agent exiting!!\nMar 20 19:20:55 <server name> systemd: Stopping ACPI Event Daemon...\nMar 20 19:20:55 <server name> amazon-ssm-agent: 2022-03-20 19:20:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process\nMar 20 19:20:55 <server name> amazon-ssm-agent: 2022-03-20 19:20:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2881) started\nMar 20 19:20:55 <server name> acpid: exiting\nMar 20 19:20:55 <server name> systemd: Stopped ACPI Event Daemon.\nMar 20 19:20:55 <server name> systemd: Starting ACPI Event Daemon...\nMar 20 19:20:55 <server name> systemd: Started ACPI Event Daemon.\nMar 20 19:20:55 <server name> systemd: Started Initial hibernation setup job.\nMar 20 19:20:55 <server name> acpid: starting up with netlink and the input layer\nMar 20 19:20:55 <server name> acpid: skipping incomplete file /etc/acpi/events/videoconf\nMar 20 19:20:55 <server name> acpid: 2 rules loaded\nMar 20 19:20:55 <server name> acpid: waiting for events: event logging is off\nMar 20 19:20:55 <server name> amazon-ssm-agent: 2022-03-20 19:20:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds\nMar 20 19:20:55 <server name> cloud-init: Cloud-init v. 19.3-45.amzn2 running 'modules:final' at Sun, 20 Mar 2022 13:50:55 +0000. Up 6.56 seconds.\nMar 20 19:20:55 <server name> cloud-init: Cloud-init v. 19.3-45.amzn2 finished at Sun, 20 Mar 2022 13:50:55 +0000. Datasource DataSourceEc2. Up 6.66 seconds\nMar 20 19:20:55 <server name> systemd: Started Execute cloud user/final scripts.\nMar 20 19:20:56 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 4350ms.\nMar 20 19:20:57 <server name> chronyd[2252]: Selected source 169.254.169.123\nMar 20 19:20:57 <server name> chronyd[2252]: System clock wrong by -1.132493 seconds\nMar 20 19:20:57 <server name> chronyd[2252]: System clock was stepped by -1.132493 seconds\nMar 20 19:20:57 <server name> systemd: Time has been changed\nMar 20 19:21:01 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 8680ms.\nMar 20 19:21:01 <server name> systemd: Started Dynamically Generate Message Of The Day.\nMar 20 19:21:01 <server name> systemd: Reached target Multi-User System.\nMar 20 19:21:01 <server name> systemd: Reached target Graphical Interface.\nMar 20 19:21:01 <server name> systemd: Starting Update UTMP about System Runlevel Changes...\nMar 20 19:21:01 <server name> systemd: Reached target Cloud-init target.\nMar 20 19:21:01 <server name> systemd: Started Update UTMP about System Runlevel Changes.\nMar 20 19:21:01 <server name> systemd: Startup finished in 1.360s (kernel) + 555ms (initrd) + 11.858s (userspace) = 13.774s.\nMar 20 19:21:09 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 17340ms.\nMar 20 19:21:27 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 34150ms.\nMar 20 19:22:01 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 71320ms.\nMar 20 19:23:12 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 126810ms.\nMar 20 19:25:19 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 115070ms.\nMar 20 19:27:14 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 130960ms.\nMar 20 19:29:25 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 119540ms.\nMar 20 19:31:25 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 111890ms.\nMar 20 19:33:17 <server name> dhclient[2506]: XMT: Solicit on 
eth0, interval 120670ms.\nMar 20 19:35:17 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 121540ms.\nMar 20 19:36:01 <server name> systemd: Starting Cleanup of Temporary Directories...\nMar 20 19:36:01 <server name> systemd: Started Cleanup of Temporary Directories.\nMar 20 19:37:19 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 123240ms.\nMar 20 19:39:22 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 111630ms.\nMar 20 19:41:14 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 110740ms.\nMar 20 19:43:05 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 130720ms.\nMar 20 19:45:16 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 126190ms.\nMar 20 19:47:22 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 124770ms.\nMar 20 19:47:27 <server name> dhclient[2459]: DHCPREQUEST on eth0 to 10.20.1.129 port 67 (xid=0x2311156a)\nMar 20 19:47:27 <server name> dhclient[2459]: DHCPACK from 10.20.1.129 (xid=0x2311156a)\nMar 20 19:47:27 <server name> NET: dhclient: Locked /run/dhclient/resolv.lock\nMar 20 19:47:27 <server name> dhclient[2459]: bound to 10.20.1.190 -- renewal in 1402 seconds.\nMar 20 19:47:27 <server name> ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 19:47:27 <server name> ec2net: [get_meta] Getting token for IMDSv2.\nMar 20 19:47:27 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/\nMar 20 19:47:27 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 19:47:27 <server name> ec2net: [remove_aliases] Removing aliases of eth0\nMar 20 19:49:27 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 129680ms.\nMar 20 19:51:36 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 127060ms.\nMar 20 19:53:44 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 114840ms.\nMar 20 19:55:38 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 124150ms.\nMar 20 19:57:43 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 122360ms.\nMar 20 19:59:45 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 111420ms.\nMar 20 20:01:37 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 128930ms.\nMar 20 20:03:46 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 128890ms.\nMar 20 20:05:55 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 118400ms.\nMar 20 20:07:53 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 111760ms.\nMar 20 20:09:45 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 110040ms.\nMar 20 20:10:49 <server name> dhclient[2459]: DHCPREQUEST on eth0 to 10.20.1.129 port 67 (xid=0x2311156a)\nMar 20 20:10:49 <server name> dhclient[2459]: DHCPACK from 10.20.1.129 (xid=0x2311156a)\nMar 20 20:10:49 <server name> NET: dhclient: Locked /run/dhclient/resolv.lock\nMar 20 20:10:49 <server name> dhclient[2459]: bound to 10.20.1.190 -- renewal in 1717 seconds.\nMar 20 20:10:49 <server name> ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 20:10:49 <server name> ec2net: [get_meta] Getting token for IMDSv2.\nMar 20 20:10:49 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/\nMar 20 20:10:49 <server name> ec2net: [get_meta] Trying to get 
http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 20:10:49 <server name> ec2net: [remove_aliases] Removing aliases of eth0\nMar 20 20:11:35 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 112130ms.\nMar 20 20:13:27 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 115010ms.\nMar 20 20:15:22 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 109640ms.\nMar 20 20:17:12 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 127400ms.\nMar 20 20:19:20 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 112300ms.\nMar 20 20:21:12 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 115360ms.\nMar 20 20:23:08 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 125340ms.\nMar 20 20:25:13 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 109060ms.\nMar 20 20:27:02 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 116040ms.\nMar 20 20:28:58 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 116480ms.\nMar 20 20:30:55 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 109210ms.\nMar 20 20:32:44 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 110630ms.\nMar 20 20:34:35 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 130880ms.\nMar 20 20:36:46 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 124400ms.\nMar 20 20:38:50 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 126600ms.\nMar 20 20:39:26 <server name> dhclient[2459]: DHCPREQUEST on eth0 to 10.20.1.129 port 67 (xid=0x2311156a)\nMar 20 20:39:26 <server name> dhclient[2459]: DHCPACK from 10.20.1.129 (xid=0x2311156a)\nMar 20 20:39:26 <server name> NET: dhclient: Locked /run/dhclient/resolv.lock\nMar 20 20:39:26 <server name> dhclient[2459]: bound to 10.20.1.190 -- renewal in 1732 seconds.\nMar 20 20:39:26 <server name> ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 20:39:26 <server name> ec2net: [get_meta] Getting token for IMDSv2.\nMar 20 20:39:26 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/\nMar 20 20:39:26 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 20:39:26 <server name> ec2net: [remove_aliases] Removing aliases of eth0\nMar 20 20:40:57 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 121960ms.\nMar 20 20:42:59 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 115080ms.\nMar 20 20:44:54 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 117460ms.\nMar 20 20:46:52 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 110100ms.\nMar 20 20:48:42 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 119000ms.\nMar 20 20:50:41 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 109880ms.\nMar 20 20:52:31 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 125580ms.\nMar 20 20:54:36 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 120480ms.\nMar 20 20:56:37 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 120590ms.\nMar 20 20:58:38 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 125930ms.\nMar 20 21:00:44 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 123850ms.\nMar 20 21:02:48 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 114420ms.\nMar 20 21:04:42 <server name> 
dhclient[2506]: XMT: Solicit on eth0, interval 109110ms.\nMar 20 21:06:31 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 119440ms.\nMar 20 21:08:18 <server name> dhclient[2459]: DHCPREQUEST on eth0 to 10.20.1.129 port 67 (xid=0x2311156a)\nMar 20 21:08:18 <server name> dhclient[2459]: DHCPACK from 10.20.1.129 (xid=0x2311156a)\nMar 20 21:08:18 <server name> NET: dhclient: Locked /run/dhclient/resolv.lock\nMar 20 21:08:18 <server name> dhclient[2459]: bound to 10.20.1.190 -- renewal in 1389 seconds.\nMar 20 21:08:18 <server name> ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 21:08:18 <server name> ec2net: [get_meta] Getting token for IMDSv2.\nMar 20 21:08:18 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/\nMar 20 21:08:18 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 21:08:18 <server name> ec2net: [remove_aliases] Removing aliases of eth0\nMar 20 21:08:31 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 119880ms.\nMar 20 21:10:31 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 127810ms.\nMar 20 21:12:39 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 116710ms.\nMar 20 21:14:35 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 114680ms.\nMar 20 21:16:30 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 123540ms.\nMar 20 21:18:34 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 110960ms.\nMar 20 21:20:25 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 111000ms.\nMar 20 21:20:53 <server name> freshclam: Received signal: wake up\nMar 20 21:20:53 <server name> freshclam: ClamAV update process started at Sun Mar 20 21:20:53 2022\nMar 20 21:20:53 <server name> freshclam: daily.cld database is up-to-date (version: 26487, sigs: 1976398, f-level: 90, builder: raynman)\nMar 20 21:20:53 <server name> freshclam: main.cvd database is up-to-date (version: 62, sigs: 6647427, f-level: 90, builder: sigmgr)\nMar 20 21:20:53 <server name> freshclam: bytecode.cvd database is up-to-date (version: 333, sigs: 92, f-level: 63, builder: awillia2)\nMar 20 21:22:16 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 108830ms.\nMar 20 21:24:05 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 115930ms.\nMar 20 21:26:01 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 131080ms.\nMar 20 21:28:12 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 125710ms.\nMar 20 21:30:18 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 129530ms.\nMar 20 21:31:27 <server name> dhclient[2459]: DHCPREQUEST on eth0 to 10.20.1.129 port 67 (xid=0x2311156a)\nMar 20 21:31:27 <server name> dhclient[2459]: DHCPACK from 10.20.1.129 (xid=0x2311156a)\nMar 20 21:31:27 <server name> NET: dhclient: Locked /run/dhclient/resolv.lock\nMar 20 21:31:27 <server name> dhclient[2459]: bound to 10.20.1.190 -- renewal in 1617 seconds.\nMar 20 21:31:27 <server name> ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 21:31:27 <server name> ec2net: [get_meta] Getting token for IMDSv2.\nMar 20 21:31:27 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/\nMar 20 21:31:27 <server name> ec2net: [get_meta] Trying to get 
http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 21:31:27 <server name> ec2net: [remove_aliases] Removing aliases of eth0\nMar 20 21:32:27 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 124090ms.\nMar 20 21:34:32 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 119920ms.\nMar 20 21:36:32 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 125130ms.\nMar 20 21:38:37 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 122640ms.\nMar 20 21:40:40 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 117340ms.\nMar 20 21:42:37 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 122730ms.\nMar 20 21:44:40 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 125290ms.\nMar 20 21:46:45 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 128080ms.\nMar 20 21:48:53 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 126010ms.\nMar 20 21:50:59 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 123310ms.\nMar 20 21:53:03 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 115810ms.\nMar 20 21:54:59 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 129510ms.\nMar 20 21:57:08 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 117380ms.\nMar 20 21:58:24 <server name> dhclient[2459]: DHCPREQUEST on eth0 to 10.20.1.129 port 67 (xid=0x2311156a)\nMar 20 21:58:24 <server name> dhclient[2459]: DHCPACK from 10.20.1.129 (xid=0x2311156a)\nMar 20 21:58:24 <server name> NET: dhclient: Locked /run/dhclient/resolv.lock\nMar 20 21:58:24 <server name> dhclient[2459]: bound to 10.20.1.190 -- renewal in 1483 seconds.\nMar 20 21:58:24 <server name> ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 21:58:24 <server name> ec2net: [get_meta] Getting token for IMDSv2.\nMar 20 21:58:24 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/\nMar 20 21:58:24 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 21:58:24 <server name> ec2net: [remove_aliases] Removing aliases of eth0\nMar 20 21:59:06 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 124190ms.\nMar 20 22:01:10 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 121610ms.\nMar 20 22:03:12 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 128530ms.\nMar 20 22:05:20 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 113500ms.\nMar 20 22:07:14 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 110850ms.\nMar 20 22:09:05 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 119750ms.\nMar 20 22:11:04 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 115580ms.\nMar 20 22:13:00 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 119820ms.\nMar 20 22:15:00 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 110960ms.\nMar 20 22:16:51 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 115550ms.\nMar 20 22:18:47 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 125880ms.\nMar 20 22:20:53 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 130070ms.\nMar 20 22:23:03 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 130550ms.\nMar 20 22:23:07 <server name> dhclient[2459]: DHCPREQUEST on eth0 to 10.20.1.129 port 67 (xid=0x2311156a)\nMar 20 22:23:07 
<server name> dhclient[2459]: DHCPACK from 10.20.1.129 (xid=0x2311156a)\nMar 20 22:23:07 <server name> NET: dhclient: Locked /run/dhclient/resolv.lock\nMar 20 22:23:07 <server name> dhclient[2459]: bound to 10.20.1.190 -- renewal in 1639 seconds.\nMar 20 22:23:07 <server name> ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 22:23:07 <server name> ec2net: [get_meta] Getting token for IMDSv2.\nMar 20 22:23:07 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/\nMar 20 22:23:07 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 22:23:07 <server name> ec2net: [remove_aliases] Removing aliases of eth0\nMar 20 22:25:13 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 113260ms.\nMar 20 22:27:07 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 109560ms.\nMar 20 22:28:56 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 128590ms.\nMar 20 22:31:05 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 129660ms.\nMar 20 22:33:15 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 127290ms.\nMar 20 22:35:22 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 131790ms.\nMar 20 22:37:34 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 114590ms.\nMar 20 22:39:29 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 111350ms.\nMar 20 22:41:20 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 115940ms.\nMar 20 22:43:16 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 111250ms.\nMar 20 22:45:07 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 125120ms.\nMar 20 22:47:13 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 112810ms.\nMar 20 22:49:06 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 113250ms.\nMar 20 22:50:26 <server name> dhclient[2459]: DHCPREQUEST on eth0 to 10.20.1.129 port 67 (xid=0x2311156a)\nMar 20 22:50:26 <server name> dhclient[2459]: DHCPACK from 10.20.1.129 (xid=0x2311156a)\nMar 20 22:50:26 <server name> NET: dhclient: Locked /run/dhclient/resolv.lock\nMar 20 22:50:26 <server name> dhclient[2459]: bound to 10.20.1.190 -- renewal in 1760 seconds.\nMar 20 22:50:26 <server name> ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 22:50:26 <server name> ec2net: [get_meta] Getting token for IMDSv2.\nMar 20 22:50:26 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/\nMar 20 22:50:26 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 22:50:26 <server name> ec2net: [remove_aliases] Removing aliases of eth0\nMar 20 22:50:59 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 114160ms.\nMar 20 22:52:53 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 123750ms.\nMar 20 22:54:57 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 129020ms.\nMar 20 22:57:06 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 112710ms.\nMar 20 22:58:59 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 127530ms.\nMar 20 23:01:06 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 108200ms.\nMar 20 23:02:55 <server name> dhclient[2506]: XMT: Solicit on eth0, 
interval 127380ms.\nMar 20 23:05:02 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 127270ms.\nMar 20 23:07:10 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 108210ms.\nMar 20 23:08:58 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 113690ms.\nMar 20 23:10:52 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 112070ms.\nMar 20 23:12:44 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 126420ms.\nMar 20 23:14:50 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 117390ms.\nMar 20 23:16:48 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 121520ms.\nMar 20 23:18:49 <server name> dhclient[2506]: XMT: Solicit on eth0, interval 124140ms.\nMar 20 23:19:46 <server name> dhclient[2459]: DHCPREQUEST on eth0 to 10.20.1.129 port 67 (xid=0x2311156a)\nMar 20 23:19:46 <server name> dhclient[2459]: DHCPACK from 10.20.1.129 (xid=0x2311156a)\nMar 20 23:19:46 <server name> NET: dhclient: Locked /run/dhclient/resolv.lock\nMar 20 23:19:46 <server name> dhclient[2459]: bound to 10.20.1.190 -- renewal in 1352 seconds.\nMar 20 23:19:46 <server name> ec2net: [get_meta] Querying IMDS for meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 23:19:46 <server name> ec2net: [get_meta] Getting token for IMDSv2.\nMar 20 23:19:46 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/\nMar 20 23:19:46 <server name> ec2net: [get_meta] Trying to get http://169.254.169.254/latest/meta-data/network/interfaces/macs/02:59:26:73:f3:ea/local-ipv4s\nMar 20 23:19:46 <server name> ec2net: [remove_aliases] Removing aliases of eth0\nMar 20 23:20:53 <server name> freshclam: Received signal: wake up\n\n\n\n\n\n**tried to delete lock and started, but shutting down after some time**",
"text": "",
"username": "Naveen_Kumar_Dasari"
},
{
"code": "mongod",
"text": "Hi @Naveen_Kumar_Dasari welcome to the community!I’m not sure we have all the information here. Perhaps you could supply:In order to get better engagement, I would suggest you to take a look at the guidelines posted in How to write a good post/questionBest regards\nKevin",
"username": "kevinadi"
}
] | Mongodb constantly crashing with mongod.service: control process exited, code=exited status=14 | 2023-02-24T16:49:41.411Z | Mongodb constantly crashing with mongod.service: control process exited, code=exited status=14 | 1,667 |
null | [] | [
{
"code": "Array [\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n undefined,\n]\n",
"text": "I keep on gettingwhile trying filter realm.objects , it turns realm.objects itself returns undefined",
"username": "Gbenga_Joseph"
},
{
"code": "realm.objectsJSON.stringify(…)",
"text": "Hi @Gbenga_Joseph,Can you please specify which SDK you’re using? A code snippet of what you’re trying to do would also help clarifying the scenario.In general, however, realm.objects returns something similar to a cursor: until you directly access its members, the objects themselves aren’t in memory.For example, a JSON.stringify(…) may help to force the objects in, for debugging purposes (don’t use it in the actual app, though - massive results of the queries are better left on-demand!).",
"username": "Paolo_Manna"
},
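A minimal sketch of the debugging approach described in the reply above. It is an illustration only, not code from the thread; the Task schema name simply mirrors the one used later in this discussion:

// Hypothetical debugging snippet: realm.objects() returns a lazy Results
// collection, so force its members out to inspect them in the console.
const tasks = realm.objects("Task");
console.log(tasks.length);                    // accessing members resolves them
console.log(JSON.stringify(tasks, null, 2));  // debugging only; avoid on large result sets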
{
"code": " (async () => {\n const realm = await Realm.open({\n path: \"myrealm\",\n // inMemory: true,\n schema: [TaskSchema],\n deleteRealmIfMigrationNeeded: true,\n });\n\n let task1, task2;\n realm.write(() => {\n task1 = realm.create(\"Task\", {\n _id: 1,\n name: \"go grocery shopping\",\n status: \"Open\",\n });\n task2 = realm.create(\"Task\", {\n _id: 2,\n name: \"go exercise\",\n status: \"Open\",\n });\n console.log(`created two tasks: ${task1.name} & ${task2.name}`);\n });\n",
"text": "Hi @Paolo_Manna,Thanks a lot for your reply, I am using SDK 44.It turned out that I was able to get the saved data but after a while realm just stopped working.I tried to reproduce the samples on the docs, i kept on getting undefined.\nThis sample:const TaskSchema = {\nname: “Task”,\nproperties: {\n_id: “int”,\nname: “string”,\nstatus: “string?”,\nowner_id: “string?”,\n},\nprimaryKey: “_id”,\n};})();What i got in the console was:\ncreated two tasks: undefined & undefined",
"username": "Gbenga_Joseph"
},
{
"code": "const Realm = require(\"realm\");\nconst { EJSON, ObjectId } = require('bson');\n\nconst TaskSchema = {\n name: \"Task\",\n properties: {\n _id: \"int\",\n name: \"string\",\n status: \"string?\",\n owner_id: \"string?\",\n },\n primaryKey: \"_id\"\n};\n\n\nconst schemaClasses = [TaskSchema];\nconst realmPath = './local/tasks.realm';\nconst realmCopyPath = './local/tasksCopy.realm';\nconst localConfig = { schemaVersion: 1, schema: schemaClasses, path: realmPath, deleteRealmIfMigrationNeeded: true };\nconst localCopyConfig = { schemaVersion: 1, schema: schemaClasses, path: realmCopyPath, deleteRealmIfMigrationNeeded: true };\n\nconst realm = new Realm(localConfig);\nconst realmCopy = new Realm(localCopyConfig);\n\nlet task;\nlet taskCopy;\n\nrealm.write(() => {\n task = realm.create(\"Task\", { _id: 1, name: \"go grocery shopping\", status: \"Open\" });\n});\n\nrealmCopy.write(() => {\n taskCopy = realmCopy.create(\"Task\", task);\n console.log(EJSON.stringify(taskCopy));\n taskCopy.name = \"go exercise\";\n console.log(EJSON.stringify(taskCopy));\n});\n\nsetTimeout(() => {\n if (realm) {\n realm.close();\n }\n if (realmCopy) {\n realmCopy.close();\n }\n\n process.exit(0);\n}, 2000);\n",
"text": "Hi @Gbenga_Joseph,In isolation, there’s nothing wrong in your code: I’ve tested a very similar one, reported below, and it works properlyCould you please test and verify?",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "hmmm. Thanks, @Paolo_Manna",
"username": "Gbenga_Joseph"
}
] | Realm.objects in expo returns undefined -please help | 2022-09-20T20:29:41.395Z | Realm.objects in expo returns undefined -please help | 1,166 |
null | [
"aggregation",
"queries",
"compass"
] | [
{
"code": "",
"text": "Hi,\nAm newbie to MongoDB. Can you help me pull all the documents that were recorded in a collection yesterday dynamically?Am trying this in MongoDB Compass, aggregation pipeline where am trying to retrieve all the documents that have date_added > yesterday and date_added < today (after truncating time).$and :\n[ {“date_added”: {“$gte”: [new Date((new Date().getTime() - (24 * 60 * 60 *1000)))]}},\n{“date_added”: {“$lt”: [{$dateToString: {format: “%d.%m.%Y”, date: (new Date()) } }]}}\n]Is this the right way to pull yesterdays data? or is there a better way to handle this?\nThanks in Advance.",
"username": "Vidya_Swar"
},
{
"code": "const startTime = new Date(new Date(new Date().setDate(new Date().getDate()-1)).setHours(00,00,00,00)).toISOString();\n\nconst endTime = new Date(new Date(new Date().setDate(new Date().getDate()-1)).setHours(23,59,59,999)).toISOString();\n\ndb.collection.aggregate([\n { \n $match: \n {\n \"date_added\": \n {\n $gte: startTime,\n $lte: endTime \n }\n }\n }\n])\n",
"text": "Hi @Vidya_Swar , welcome to the community.Have you tried this way, getting yesterday’s date and setting start time from midnight till 23:59 hrs.Hoping it’s useful.\nRegards",
"username": "R_V"
},
{
"code": "const startTime = new Date(new Date(new Date().setDate(new Date().getDate()-1)).setHours(0,0,0,0)).toISOString();\nconst endTime = new Date(new Date(new Date().setDate(new Date().getDate()-1)).setHours(23,59,59,999)).toISOString();\ndb.mycollection2.aggregate([\n { \n $match: \n {\n \"date_added\": \n {\n $gte: startTime,\n $lte: endTime \n }\n }\n }\n]);\n",
"text": "Thanks so much for your time @R_VI did a minor change to your code , but it neither returns any documents nor shows any errors.I am a SQL developer and was looking for a equivalent query to pull yesterdays records:\nSELECT * FROM t1\nWHERE date_added > CONVERT(DATE, GetDate()-1)\nAND date_added < CONVERT(DATE, GetDate())Appreciate any help or advice!",
"username": "Vidya_Swar"
},
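One likely reason the query above returned no documents: the earlier snippet builds startTime and endTime with toISOString(), so they are strings, and string values do not match documents whose date_added is stored as a BSON Date. A sketch of the same range query using Date objects instead (assuming date_added is stored as a Date):

// Sketch: keep Date objects so the comparison matches BSON Date values.
const end = new Date(new Date().setHours(0, 0, 0, 0));        // today 00:00 (local time)
const start = new Date(end.getTime() - 24 * 60 * 60 * 1000);  // yesterday 00:00

db.mycollection2.aggregate([
  { $match: { date_added: { $gte: start, $lt: end } } }
])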
{
"code": "db.mycollection2.aggregate([\n {\n $match: {\n $expr: {\n $and: [\n { $gte: [ \"$date_added\", { $toDate: { $subtract: [ ISODate(), 86400000 ] } } ] },\n { $lt: [ \"$date_added\", { $toDate: { $subtract: [ ISODate(), 0 ] } } ] }\n ]\n }\n }\n }\n])\n",
"text": "Hello @Vidya_Swar ,Welcome to The MongoDB Community Forums! Try using below code and update it as per your requirements.Note: Please test your aggregation pipeline as per your requirements and collection documents. Also, try testing the edge cases of your use-cases to make sure you don’t miss anything.Regards,\nTarun",
"username": "Tarun_Gaur"
},
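For completeness, on MongoDB 5.0+ the same idea can be written with $dateTrunc and $dateSubtract, which truncate to calendar days and are therefore closer to the SQL CONVERT(DATE, ...) in the question. This is only a sketch reusing the collection and field names from this thread; note that $dateTrunc works in UTC unless a timezone is supplied:

db.mycollection2.aggregate([
  {
    $match: {
      $expr: {
        $and: [
          // yesterday at 00:00
          { $gte: [ "$date_added",
                    { $dateTrunc: { date: { $dateSubtract: { startDate: "$$NOW", unit: "day", amount: 1 } }, unit: "day" } } ] },
          // today at 00:00
          { $lt: [ "$date_added", { $dateTrunc: { date: "$$NOW", unit: "day" } } ] }
        ]
      }
    }
  }
])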
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Pulling yesterdays documents based on date column | 2023-02-23T16:36:41.553Z | Pulling yesterdays documents based on date column | 1,973 |
null | [
"queries",
"crud"
] | [
{
"code": "",
"text": "SELECT * FROM Orders WHERE OrderID IN\n(SELECT OrderID FROM Employees WHERE EmployeeID = 5);",
"username": "Prabhudatta_Mishra"
},
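A rough sketch of how this kind of IN-subquery is commonly expressed in MongoDB, taking the collection and field names literally from the SQL above; the actual schema may differ, so treat this as an illustration rather than a definitive translation:

// Step 1: collect the OrderID values for EmployeeID 5 (the subquery).
const orderIds = db.Employees.distinct("OrderID", { EmployeeID: 5 });

// Step 2: fetch the matching orders (the outer SELECT).
db.Orders.find({ OrderID: { $in: orderIds } });

The same result can also be produced in a single aggregation with $lookup, but the two-step $in form mirrors the SQL subquery most directly.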
{
"code": "",
"text": "Hello @Prabhudatta_Mishra ,Welcome to The MongoDB Community Forums! To understand your use case better, could you please share below details:Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Please convert this SQL Query to mongo_query | 2023-02-27T12:38:13.121Z | Please convert this SQL Query to mongo_query | 539 |
null | [
"java",
"android"
] | [
{
"code": "",
"text": "i have android app and i want to store server data in mongodb database.the data is server ip address and port.i want only my app can access.the question is where should I start. I’m still learning to use mongodb",
"username": "Alhe_Mora"
},
{
"code": "",
"text": "@Alhe_Mora Hey there - you have a few different options available to you:",
"username": "Ian_Ward"
}
] | Connect android application to mongodb database | 2023-02-25T07:18:04.848Z | Connect android application to mongodb database | 2,738 |
[
"storage"
] | [
{
"code": "",
"text": "I have a production cluster hosted on AWS by Atlas, consisting of two shards and using M50 (General) instance type. The current configuration uses “400 IOPS, provisioned”. However, I want to change it to “3000 IOPS, non-provisioned”, as this is the baseline for the gp3 volume, which provides better performance compared to the current 400 IOPS.\n\nScreenshot from 2023-02-23 11-27-48840×187 14.7 KB\n\nMy concern is whether this change is safe to make, as I am not sure how it will affect the system’s uptime. According to my research, changing the EBS volume type from io1 to gp3 may require a rolling restart of cluster nodes, which could potentially cause some downtime.",
"username": "Abdul_Rauf"
},
{
"code": "",
"text": "Hey @Abdul_Rauf - Welcome to the community!I’d recommend contacting the Atlas in-app chat support to verify if this change requires any downtime.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "For the avoidance of doubt, one of the key design principles of MongoDB Atlas is that all changes to database clusters are made in a rolling manner preserving majority quorum (save for a momentary election of at the replica set level) and hence uptime throughout: This means that changing from provisioned IOPS to general storage is a no downtime operation assuming your application is engineered to withstand elections: If you believe your application is not resilient to replica set level elections then we recommend working to change by testing: we offer our chaos testing capabilities like the Test Failover and test regional outage capabilities for these reasons.",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Move AWS atlas cluster from provisioned to Non-provisioned IOPS without any downtime | 2023-02-25T12:05:12.947Z | Move AWS atlas cluster from provisioned to Non-provisioned IOPS without any downtime | 1,103 |
|
null | [
"chennai-mug"
] | [
{
"code": "",
"text": "Hello Everyone!!\nI am Jeyaraj, a Senior Software Engineer/Architect from Chennai, India. I am honored to have this opportunity to lead and contribute to the local MongoDB community.Here is a little intro about myself:\nI am a senior engineer/architect at ZoomInfo, I have a track record of building and delivering large-scale products.My expertise lies in database design, cloud computing, cloud security, and database performance optimizations, and specifically, I have strong experience MongoDB database.\nI have been part of two early-stage startups and two successful exits, where I played a key role in scaling them to success. I enjoy working in fast-paced environments and thrive on challenges that require innovative solutions.\nI played a key role in scaling the product from serving just 50 customers to handling over 10,000 customers, while also optimizing cloud costs and saving between $100K to $500K in expenses.As the Leader of the MongoDB User Group in Chennai, I am passionate about sharing my knowledge and collaborating with other experts in the field.It will be an honor for us if you join our Chennai MUG , so we can stay in touch !! Lots of special things are coming soon You can also find me on LinkedIn ",
"username": "jeyaraj"
},
{
"code": "",
"text": "Hey @jeyaraj,On behalf of the MongoDB community, I would like to extend a warm welcome to you as the Leader of the Chennai MUG. We are thrilled to have you on board and excited to see the contributions you will make to the community.It is fantastic to hear that you have played a key role in scaling successful startups and optimizing cloud costs while saving expenses. I am sure the community would love to hear about it in detail as well! ",
"username": "Harshit"
}
] | Hello Friends, This is Jeyaraj from Chennai | 2023-02-27T12:32:37.378Z | Hello Friends, This is Jeyaraj from Chennai | 1,191 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 5.0.15-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.14. The next stable release 5.0.15 will be a recommended upgrade for all 5.0 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "mongod --bind_ip 0.0.0.0 --port 27418 --logpath data/mongodb/general/logs/log.txt --dbpath data/mongodb/general/wiredTiger --directoryperdb --storageEngine wiredTiger --wiredTigerDirectoryForIndexes\n[1] 474068 illegal hardware instruction (core dumped) mongod --bind_ip 0.0.0.0 --port 27418 --logpath --dbpath --directoryperdb \n",
"text": "I already posted my problem with 4.4.19 on a Raspberry Pi 4 running Ubuntu 20.04 on the post related to the 4.4-version.Just now I attempted to upgrade the 4.4.18 version to 5.0.15 in order to get around the problem, yet 5.0.15 results in the same error:I had to revert to 4.4.18 to continue using MongoDB on this device.",
"username": "dfaust"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 5.0.15-rc0 is released | 2023-01-18T23:46:00.433Z | MongoDB 5.0.15-rc0 is released | 1,222 |
[] | [
{
"code": "",
"text": "I have a free tier atlas db and atlas app service. A few days ago I started getting many failure messages:TranslatorFatalError Error: recoverable event subscription error encountered: error getting new mongo client while creating pbs app translator: error connecting to MongoDB service cluster: failed to ping: connection() error occurred during connection handshake: remote error: tls: internal errorI tried disconnecting sync and restarting sync, but that didn’t help, I’m continuing to get failures.I haven’t made any code or db changes since the first week of this month (Feb 2023) and these errors started appearing about Feb 18.\nimage1976×2068 507 KB\n",
"username": "Alex_Tang1"
},
{
"code": "",
"text": "Hi, I was able to take a peek at the logs for your app using the request-ids in the image you provided. All of your errors stem from the connection to MongoDB having issues every once in a while, and when that happens sync will retry for about 1-2 hours before giving up, emailing you, and presenting you with this button in the UI.Unfortunately, this is why we reccomend starting on at least an M10 cluster for any production or pre-production app. The reason is that the shared tier clusters have limited observability (fewer metrics are visible in the UI), various forms of rate limiting occuring which could be the cause of your issues, and generally more suceptible to the noisy neighbor issue.Your best bet will be to terminate sync, upgrade to an M10, and re-enable sync. I suspect you will stop running into this issue entirely.I hope this helps, and let me know if you run into any issues after upgrading and I would be more than happy to continue looking into it (but I strongly suspect that you will not given this is almost always just a symtom of being in the shared/free tier).Best,\nTylerLinks:",
"username": "Tyler_Kaye"
}
] | TranslatorFatalError Error for last few days | 2023-02-24T23:28:38.033Z | TranslatorFatalError Error for last few days | 544 |
|
null | [
"sharding"
] | [
{
"code": "mongos --configdb config/<config-sevrer-ip>:27017 --bind_ip localhost,<mongos-server-ip>\n{\"t\":{\"$date\":\"2023-02-27T08:25:41.419Z\"},\"s\":\"W\", \"c\":\"SHARDING\", \"id\":24132, \"ctx\":\"-\",\"msg\":\"Running a sharded cluster with fewer than 3 config servers should only be done for testing purposes and is not recommended for production.\"}\n{\"t\":{\"$date\":\"2023-02-27T08:25:41.422+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-02-27T08:25:41.423+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-02-27T08:25:41.423+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-02-27T08:25:41.424+00:00\"},\"s\":\"I\", \"c\":\"HEALTH\", \"id\":5936503, \"ctx\":\"main\",\"msg\":\"Fault manager changed state \",\"attr\":{\"state\":\"StartupCheck\"}}\n{\"t\":{\"$date\":\"2023-02-27T08:25:41.426+00:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"main\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-02-27T08:25:41.426+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"mongosMain\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.4\",\"gitVersion\":\"44ff59461c1353638a71e710f385a566bcd2f547\",\"openSSLVersion\":\"OpenSSL 3.0.2 15 Mar 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"ubuntu2204\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-02-27T08:25:41.427+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"mongosMain\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"22.04\"}}}\n{\"t\":{\"$date\":\"2023-02-27T08:25:41.428+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"mongosMain\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"net\":{\"bindIp\":\"localhost,100.25.159.201\"},\"sharding\":{\"configDB\":\"config/54.157.187.130:27017\"}}}}\n{\"t\":{\"$date\":\"2023-02-27T08:25:41.429+00:00\"},\"s\":\"E\", \"c\":\"SHARDING\", \"id\":22856, \"ctx\":\"mongosMain\",\"msg\":\"Error setting up listener\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Cannot assign requested address\"}}}\n{\"t\":{\"$date\":\"2023-02-27T08:25:41.429+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4695701, \"ctx\":\"main\",\"msg\":\"Entering quiesce mode for mongos shutdown\",\"attr\":{\"quiesceTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-02-27T08:25:56.429+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4695702, \"ctx\":\"main\",\"msg\":\"Exiting quiesce mode for mongos shutdown\"}\n{\"t\":{\"$date\":\"2023-02-27T08:25:56.429+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4695300, \"ctx\":\"main\",\"msg\":\"Interrupted all currently running 
operations\",\"attr\":{\"opsKilled\":1}}\n{\"t\":{\"$date\":\"2023-02-27T08:25:59.429+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"main\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":48}}\n",
"text": "So, I was configuring mongodb sharding cluster on multiple servers. I’ve configured a config server and two shard servers. When I try configure mongos router on mongos server with command:I get an output like this:I’ve understood that my mongos router can’t assign requested address but I don’t know what I’m doing wrong. I’d be grateful if anyone can help me. I’ve configured the security groups so there is no issue regarding firewall and ports. But still it can’t establish a connection and connection is refused.HELP ME!!!",
"username": "19_231_Chirag_Sharma"
},
{
"code": "",
"text": "IP address you are passing for configdb parameter seems to be not correct\nDid you try with hostname?\nHow did you start your config servers?\nShow rs.status() of config servers",
"username": "Ramachandra_Tummala"
},
{
"code": "{ set: 'config', date: ISODate(\"2023-02-27T11:52:49.851Z\"), myState: 1, term: Long(\"11\"), syncSourceHost: '', syncSourceId: -1, configsvr: true, heartbeatIntervalMillis: Long(\"2000\"), majorityVoteCount: 1, writeMajorityCount: 1, votingMembersCount: 1, writableVotingMembersCount: 1, optimes: { lastCommittedOpTime: { ts: Timestamp({ t: 1677498768, i: 1 }), t: Long(\"11\") }, lastCommittedWallTime: ISODate(\"2023-02-27T11:52:48.993Z\"), readConcernMajorityOpTime: { ts: Timestamp({ t: 1677498768, i: 1 }), t: Long(\"11\") }, appliedOpTime: { ts: Timestamp({ t: 1677498768, i: 1 }), t: Long(\"11\") }, durableOpTime: { ts: Timestamp({ t: 1677498768, i: 1 }), t: Long(\"11\") }, lastAppliedWallTime: ISODate(\"2023-02-27T11:52:48.993Z\"), lastDurableWallTime: ISODate(\"2023-02-27T11:52:48.993Z\") }, lastStableRecoveryTimestamp: Timestamp({ t: 1677494104, i: 1 }), electionCandidateMetrics: { lastElectionReason: 'electionTimeout', lastElectionDate: ISODate(\"2023-02-27T11:52:28.973Z\"), electionTerm: Long(\"11\"), lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") }, lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1677494104, i: 1 }), t: Long(\"10\") }, numVotesNeeded: 1, priorityAtElection: 1, electionTimeoutMillis: Long(\"10000\"), newTermStartDate: ISODate(\"2023-02-27T11:52:28.977Z\"), wMajorityWriteAvailabilityDate: ISODate(\"2023-02-27T11:52:28.988Z\") }, members: [ { _id: 0, name: '127.0.0.1:27017', health: 1, state: 1, stateStr: 'PRIMARY', uptime: 22, optime: { ts: Timestamp({ t: 1677498768, i: 1 }), t: Long(\"11\") }, optimeDate: ISODate(\"2023-02-27T11:52:48.000Z\"), lastAppliedWallTime: ISODate(\"2023-02-27T11:52:48.993Z\"), lastDurableWallTime: ISODate(\"2023-02-27T11:52:48.993Z\"), syncSourceHost: '', syncSourceId: -1, infoMessage: '', electionTime: Timestamp({ t: 1677498748, i: 1 }), electionDate: ISODate(\"2023-02-27T11:52:28.000Z\"), configVersion: 1, configTerm: 11, self: true, lastHeartbeatMessage: '' } ], ok: 1, lastCommittedOpTime: Timestamp({ t: 1677498768, i: 1 }), '$clusterTime': { clusterTime: Timestamp({ t: 1677498768, i: 1 }), signature: { hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0), keyId: Long(\"0\") } }, operationTime: Timestamp({ t: 1677498768, i: 1 }) }",
"text": "sure\n{ set: 'config', date: ISODate(\"2023-02-27T11:52:49.851Z\"), myState: 1, term: Long(\"11\"), syncSourceHost: '', syncSourceId: -1, configsvr: true, heartbeatIntervalMillis: Long(\"2000\"), majorityVoteCount: 1, writeMajorityCount: 1, votingMembersCount: 1, writableVotingMembersCount: 1, optimes: { lastCommittedOpTime: { ts: Timestamp({ t: 1677498768, i: 1 }), t: Long(\"11\") }, lastCommittedWallTime: ISODate(\"2023-02-27T11:52:48.993Z\"), readConcernMajorityOpTime: { ts: Timestamp({ t: 1677498768, i: 1 }), t: Long(\"11\") }, appliedOpTime: { ts: Timestamp({ t: 1677498768, i: 1 }), t: Long(\"11\") }, durableOpTime: { ts: Timestamp({ t: 1677498768, i: 1 }), t: Long(\"11\") }, lastAppliedWallTime: ISODate(\"2023-02-27T11:52:48.993Z\"), lastDurableWallTime: ISODate(\"2023-02-27T11:52:48.993Z\") }, lastStableRecoveryTimestamp: Timestamp({ t: 1677494104, i: 1 }), electionCandidateMetrics: { lastElectionReason: 'electionTimeout', lastElectionDate: ISODate(\"2023-02-27T11:52:28.973Z\"), electionTerm: Long(\"11\"), lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") }, lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1677494104, i: 1 }), t: Long(\"10\") }, numVotesNeeded: 1, priorityAtElection: 1, electionTimeoutMillis: Long(\"10000\"), newTermStartDate: ISODate(\"2023-02-27T11:52:28.977Z\"), wMajorityWriteAvailabilityDate: ISODate(\"2023-02-27T11:52:28.988Z\") }, members: [ { _id: 0, name: '127.0.0.1:27017', health: 1, state: 1, stateStr: 'PRIMARY', uptime: 22, optime: { ts: Timestamp({ t: 1677498768, i: 1 }), t: Long(\"11\") }, optimeDate: ISODate(\"2023-02-27T11:52:48.000Z\"), lastAppliedWallTime: ISODate(\"2023-02-27T11:52:48.993Z\"), lastDurableWallTime: ISODate(\"2023-02-27T11:52:48.993Z\"), syncSourceHost: '', syncSourceId: -1, infoMessage: '', electionTime: Timestamp({ t: 1677498748, i: 1 }), electionDate: ISODate(\"2023-02-27T11:52:28.000Z\"), configVersion: 1, configTerm: 11, self: true, lastHeartbeatMessage: '' } ], ok: 1, lastCommittedOpTime: Timestamp({ t: 1677498768, i: 1 }), '$clusterTime': { clusterTime: Timestamp({ t: 1677498768, i: 1 }), signature: { hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0), keyId: Long(\"0\") } }, operationTime: Timestamp({ t: 1677498768, i: 1 }) }",
"username": "19_231_Chirag_Sharma"
},
{
"code": "",
"text": "Try with localhost instead of configserver ip address\n–configdb replsetname/localhost:port",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "No. It cannot be done because the config server is configured on another server and we can only reach it with public ip.",
"username": "19_231_Chirag_Sharma"
},
{
"code": "",
"text": "But your rs.status() of config server shows it is configured as localhost(127.0.0.1)\nConfigure it using hostname/ip and refer the same while starting mongos",
"username": "Ramachandra_Tummala"
},
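A sketch of what that suggestion could look like; the hostname here is a placeholder and is an assumption, the point being that the config server replica set member must be registered with an address the mongos host can resolve and reach, and --configdb must then use the same address:

// On the config server, in mongosh (placeholder hostname):
rs.initiate({
  _id: "config",
  configsvr: true,
  members: [ { _id: 0, host: "cfgsvr1.example.internal:27017" } ]
})
// mongos would then be started with:
//   --configdb config/cfgsvr1.example.internal:27017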
{
"code": "",
"text": "refer the same while starting mongosI tried that but it was not working then also.",
"username": "19_231_Chirag_Sharma"
}
] | Configuring mongodb sharding | 2023-02-27T08:37:12.960Z | Configuring mongodb sharding | 913 |
null | [
"golang",
"transactions"
] | [
{
"code": "",
"text": "I am running a cron whose code is written in golang, and i am using mongoDb as database There was 128GB Ram into my system in which DataBase is stored, and I am using different system for the code. The cron is running with 17000 merchants parallely, each merchant having different database, which means there was 17000 Db’s into system.Now I will tell you the scenario, When the cron Runs, there are approximately 10000 write/insert operations per seconds, which makes mongodb slow and it affects the performance of the mongodb as well as the overall cron. The write operations include Bulk Insert queries as well as single Insertion and moreover these queries are being executed concurrently for different merchants.To overcome this problem, I’m thinking to use Transactions for write operations, will it make an positive impact on the slow down of mongodb. Is there anything else which i can implement to improve the performance of mongoDb, that doesn’t slows it down and makes it faster than now.",
"username": "rohit_arora3"
},
{
"code": "",
"text": "I think that you have a bad case of massive number of collections.With 17000 databases, even with only 1 collection per database and only the default index on _id, you have at least 34000 files. With 2 collections, your are at 68000 files. Add an extra index per collection and you reach 136000 files. Ouch!The fact that you may have an unlimited number of databases/collections is like having the possibility to jump over a 136000 feet cliff. Both are possible but not none is a good idea most of the time.Transactions should make things slower since more resources are locked for a longer time.128G RAM is okay or not only if the working set fits. What is your data set size? Is your cron running on the same machine as mongod? Do you have a standalone or replica set? What is the size of the data you try to write concurrently? What is your physical storage?",
"username": "steevej"
}
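A rough sketch of the kind of consolidation the reply above hints at: one shared collection keyed by merchant instead of one database per merchant, with unordered bulk inserts. All collection, field, and value names here are made up for illustration:

// One shared collection rather than 17000 per-merchant databases.
db.orders.createIndex({ merchantId: 1, createdAt: 1 })

db.orders.insertMany(
  [
    { merchantId: "m-0001", createdAt: new Date(), total: 42 },
    { merchantId: "m-0002", createdAt: new Date(), total: 17 }
  ],
  { ordered: false }  // unordered inserts continue past individual errors and can be applied more efficiently
)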
] | MongoDb Slows Down, When mongoDb write operations are more than 10000/sec | 2023-02-27T05:58:01.909Z | MongoDb Slows Down, When mongoDb write operations are more than 10000/sec | 1,625 |
null | [
"queries",
"atlas-functions"
] | [
{
"code": "exports = async function(id) {\n\n const mongodb = context.services.get(\"mongodb-atlas\");\n const db = mongodb.db(\"testproducts\");\n const collection = db.collection(\"products\");\n\n return await collection.findOne({\"_id\": new BSON.ObjectId(id)});\n\n};\n BSON.ObjectId(\"12345.....efb\")});",
"text": "Here is my functionI am passing in the ID and when I return it I get “12345…efb” and its a 24-character string, if I manually input that string into where BSON.ObjectId(\"12345.....efb\")});\nI get the result back but passing it as a parameter its not working saying that ObjectId a strring of 12 bytes or 24 hex.",
"username": "Aneurin_Jones"
},
{
"code": "",
"text": "Please print out the value of id just before your findOne() and share the result.",
"username": "steevej"
},
{
"code": "exports = async function(id) //63f445264c0d37a80727b25c\n{\n\n const mongodb = context.services.get(\"mongodb-atlas\");\n const db = mongodb.db(\"testproducts\");\n const collection = db.collection(\"products\");\n \n return await collection.findOne({\"_id\": new BSON.ObjectId(id)});\n\n return await collection.findOne({\"_id\": new BSON.ObjectId(\"63f445264c0d37a80727b25c\")});\n};\n",
"text": "Before I have created the function where I made it just return id; so I know that the comment on the function the id is the value of the object, if I were to do this function as on the second return I get the object in the return however when I use the parameter I get the error as stated above.",
"username": "Aneurin_Jones"
},
{
"code": "",
"text": "Please print out the value of id and the value of typeof id before your findOne() and share the result.Please share the code that calls your function.",
"username": "steevej"
},
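A sketch of the kind of debug output being asked for above, applied to the Atlas Function from earlier in the thread; the String(id).trim() conversion is a defensive illustration only, not something confirmed by the thread:

exports = async function(id) {
  // Log the raw value and its type before using it.
  console.log("id =", JSON.stringify(id), "typeof id =", typeof id);

  const collection = context.services.get("mongodb-atlas")
    .db("testproducts")
    .collection("products");

  // Defensive conversion: strip stray whitespace before building the ObjectId.
  return await collection.findOne({ "_id": new BSON.ObjectId(String(id).trim()) });
};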
{
"code": "var strproductid = String(productid);\nconst resultOfCallFunction = await user.functions.getItemByID(strproductid);\n",
"text": "It was on the client side react-native",
"username": "Aneurin_Jones"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Get a item by _id the database by passing a parameter | 2023-02-24T16:26:47.529Z | Get a item by _id the database by passing a parameter | 1,662 |
null | [
"aggregation",
"indexes"
] | [
{
"code": "$or$or$or$or{date: 1, listField: 1}$match$ordb.my_collection.aggregate([\n {\n \"$match\": {\n \"date\": ...,\n \"$and\": [\n\t\t {\"$or\": [\n\t\t {\"listField\": \"field1_a\"}, \n\t\t {\"listField\": \"field1_b\"}\n\t\t ]}, \n\t\t {\"$or\": [\n\t\t {\"listField\": \"field2_a\"}, \n\t\t {\"listField\": \"field2_b\"}\n\t\t ]}\n\t\t ],\n }}\n])\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"listField\" : {\n\t\t\t\t\t\"$in\" : [ \"field2_a\", \"field2_b\" ]\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\"listField\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"date_1_listField_1\",\n\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\"listField\" : [ \"listField\" ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\"listField\" : [ \"[\\\"field1_a\\\", \\\"field1_a\\\"]\", \"[\\\"field2_b\\\", \\\"field2_b\\\"]\" ]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n$ordb.my_collection.aggregate([\n {\n \"$match\": {\n \"date\": ...,\n \"$and\": [\n\t\t {\"$or\": [\n\t\t {\"date\": ..., \"listField\": \"field1_a\"}, \n\t\t {\"date\": ..., \"listField\": \"field1_b\"}\n\t\t ]}, \n\t\t {\"$or\": [\n\t\t {\"date\": ..., \"listField\": \"field2_a\"}, \n\t\t {\"date\": ..., \"listField\": \"field2_b\"}\n\t\t ]}\n\t\t ],\n }}\n])\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"$or\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_a\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_b\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"OR\",\n\t\t\t\t\"inputStages\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\"listField\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1\",\n\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field1_a\\\", \\\"field1_b\\\"]\" 
]\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\"listField\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1\",\n\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field1_a\\\", \\\"field1_b\\\"]\" ]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\nlistFieldinputStageindexBounds",
"text": "Been recently reading the docs regarding the $or operator, and when testing matching indexes on the $or clause vs don’t doing it… And so far the results are that NOT matching an index within $or clause makes my queries to be faster, whereas using an index match within $or clauses makes the queries to be >4 seconds slower in comparison.Having this index:{date: 1, listField: 1}First, taking into account the following $match clause without matching indexes within $or:The resulting winning plan for this is:And now, matching indexes within the $or clause:The resulting winning plan is the following:Regarding the output of this last winning plan I’ve got two questions:",
"username": "eddy_turbox"
},
{
"code": "",
"text": "Hi Edgar,Can you share the following information please?Ronan",
"username": "Ronan_Merrick"
},
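A sketch of how the executionStats output being asked for can be captured, run once per formulation from the question; the pipeline placeholder is where each of the two $match documents would go:

db.my_collection.explain("executionStats").aggregate([
  { $match: { /* paste one of the two $match documents from the question here */ } }
])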
{
"code": "approvedDocdate: 1, listField: 1, approvedDoc: 1{\n\t\"explainVersion\" : \"1\",\n\t\"queryPlanner\" : {\n\t\t\"namespace\" : \"my_db.my_collection\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"$and\" : [\n\t\t\t\t{\n\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\"$in\" : [ \"field1_b\", \"field1_a\" ]\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\"$in\" : [ \"field2_b\", \"field2_a\" ]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"optimizedPipeline\" : true,\n\t\t\"maxIndexedOrSolutionsReached\" : false,\n\t\t\"maxIndexedAndSolutionsReached\" : false,\n\t\t\"maxScansToExplodeReached\" : false,\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"listField\" : {\n\t\t\t\t\t\"$in\" : [ \"field2_b\", \"field2_a\" ]\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\"listField\" : [ \"[\\\"field1_b\\\", \\\"field1_b\\\"]\", \"[\\\"field1_a\\\", \\\"field1_a\\\"]\" ],\n\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [ ]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 297825,\n\t\t\"executionTimeMillis\" : 3497,\n\t\t\"totalKeysExamined\" : 383780,\n\t\t\"totalDocsExamined\" : 383756,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"listField\" : {\n\t\t\t\t\t\"$in\" : [ \"field2_b\", \"field2_a\" ]\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : 297825,\n\t\t\t\"executionTimeMillisEstimate\" : 2195,\n\t\t\t\"works\" : 383780,\n\t\t\t\"advanced\" : 297825,\n\t\t\t\"needTime\" : 85954,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 431,\n\t\t\t\"restoreState\" : 431,\n\t\t\t\"isEOF\" : 1,\n\t\t\t\"docsExamined\" : 383756,\n\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"nReturned\" : 383756,\n\t\t\t\t\"executionTimeMillisEstimate\" : 317,\n\t\t\t\t\"works\" : 383780,\n\t\t\t\t\"advanced\" : 383756,\n\t\t\t\t\"needTime\" : 23,\n\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\"saveState\" : 431,\n\t\t\t\t\"restoreState\" : 431,\n\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : 
\"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\"listField\" : [ \"[\\\"field1_b\\\", \\\"field1_b\\\"]\", \"[\\\"field1_a\\\", \\\"field1_a\\\"]\" ],\n\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t},\n\t\t\t\t\"keysExamined\" : 383780,\n\t\t\t\t\"seeks\" : 24,\n\t\t\t\t\"dupsTested\" : 383756,\n\t\t\t\t\"dupsDropped\" : 0\n\t\t\t}\n\t\t},\n\t\t\"allPlansExecution\" : [ ]\n\t},\n\t\"command\" : {\n\t\t\"aggregate\" : \"my_collection\",\n\t\t\"pipeline\" : [\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\"$gte\" : ISODate(\"...\"),\n\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t},\n\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"listField\" : \"field1_a\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"listField\" : \"field1_b\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"listField\" : \"field2_a\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"listField\" : \"field2_b\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t],\n\t\t\"cursor\" : {\n\t\t\t\n\t\t},\n\t\t\"$db\" : \"my_db\"\n\t},\n\t\"serverInfo\" : {\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"5.0.6\"\n\t},\n\t\"serverParameters\" : {\n\t\t\"internalQueryFacetBufferSizeBytes\" : 104857600,\n\t\t\"internalQueryFacetMaxOutputDocSizeBytes\" : 104857600,\n\t\t\"internalLookupStageIntermediateDocumentMaxSizeBytes\" : 104857600,\n\t\t\"internalDocumentSourceGroupMaxMemoryBytes\" : 104857600,\n\t\t\"internalQueryMaxBlockingSortMemoryUsageBytes\" : 104857600,\n\t\t\"internalQueryProhibitBlockingMergeOnMongoS\" : 0,\n\t\t\"internalQueryMaxAddToSetBytes\" : 104857600,\n\t\t\"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : 104857600\n\t},\n\t\"ok\" : 1\n}\n{\n\t\"explainVersion\" : \"1\",\n\t\"queryPlanner\" : {\n\t\t\"namespace\" : \"my_db.my_collection\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"$and\" : [\n\t\t\t\t{\n\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field1_a\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field1_b\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_a\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : 
ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_b\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"optimizedPipeline\" : true,\n\t\t\"maxIndexedOrSolutionsReached\" : false,\n\t\t\"maxIndexedAndSolutionsReached\" : false,\n\t\t\"maxScansToExplodeReached\" : false,\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"$or\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_a\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_b\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"OR\",\n\t\t\t\t\"inputStages\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field1_a\\\", \\\"field1_a\\\"]\" ],\n\t\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"date\" : [ 
],\n\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field1_b\\\", \\\"field1_b\\\"]\" ],\n\t\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [\n\t\t\t{\n\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\"filter\" : {\n\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field1_a\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field1_b\"\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\"stage\" : \"OR\",\n\t\t\t\t\t\"inputStages\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field2_a\\\", \\\"field2_a\\\"]\" ],\n\t\t\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" 
],\n\t\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field2_b\\\", \\\"field2_b\\\"]\" ],\n\t\t\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\"filter\" : {\n\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field1_a\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field1_b\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_a\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_b\"\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t},\n\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t},\n\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\"listField\" : [ \"[MinKey, MaxKey]\" 
],\n\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 297825,\n\t\t\"executionTimeMillis\" : 7228,\n\t\t\"totalKeysExamined\" : 383788,\n\t\t\"totalDocsExamined\" : 383756,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"$or\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_a\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_b\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t\"nReturned\" : 297825,\n\t\t\t\"executionTimeMillisEstimate\" : 5667,\n\t\t\t\"works\" : 383788,\n\t\t\t\"advanced\" : 297825,\n\t\t\t\"needTime\" : 85962,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 549,\n\t\t\t\"restoreState\" : 549,\n\t\t\t\"isEOF\" : 1,\n\t\t\t\"docsExamined\" : 383756,\n\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"OR\",\n\t\t\t\t\"nReturned\" : 383756,\n\t\t\t\t\"executionTimeMillisEstimate\" : 489,\n\t\t\t\t\"works\" : 383788,\n\t\t\t\t\"advanced\" : 383756,\n\t\t\t\t\"needTime\" : 31,\n\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\"saveState\" : 549,\n\t\t\t\t\"restoreState\" : 549,\n\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\"dupsTested\" : 383756,\n\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\"inputStages\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"nReturned\" : 150432,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 218,\n\t\t\t\t\t\t\"works\" : 150448,\n\t\t\t\t\t\t\"advanced\" : 150432,\n\t\t\t\t\t\t\"needTime\" : 15,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 549,\n\t\t\t\t\t\t\"restoreState\" : 549,\n\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field1_a\\\", \\\"field1_a\\\"]\" ],\n\t\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"keysExamined\" : 150448,\n\t\t\t\t\t\t\"seeks\" : 16,\n\t\t\t\t\t\t\"dupsTested\" : 150432,\n\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\"indexDef\" : {\n\t\t\t\t\t\t\t\"indexName\" : 
\"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"direction\" : \"forward\"\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"nReturned\" : 233324,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 228,\n\t\t\t\t\t\t\"works\" : 233340,\n\t\t\t\t\t\t\"advanced\" : 233324,\n\t\t\t\t\t\t\"needTime\" : 15,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 549,\n\t\t\t\t\t\t\"restoreState\" : 549,\n\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field1_b\\\", \\\"field1_b\\\"]\" ],\n\t\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"keysExamined\" : 233340,\n\t\t\t\t\t\t\"seeks\" : 16,\n\t\t\t\t\t\t\"dupsTested\" : 233324,\n\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\"indexDef\" : {\n\t\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\"direction\" : \"forward\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t},\n\t\t\"allPlansExecution\" : [\n\t\t\t{\n\t\t\t\t\"nReturned\" : 101,\n\t\t\t\t\"executionTimeMillisEstimate\" : 101,\n\t\t\t\t\"totalKeysExamined\" : 258,\n\t\t\t\t\"totalDocsExamined\" : 258,\n\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_a\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$and\" : 
[\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_b\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"nReturned\" : 101,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 101,\n\t\t\t\t\t\"works\" : 258,\n\t\t\t\t\t\"advanced\" : 101,\n\t\t\t\t\t\"needTime\" : 157,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 19,\n\t\t\t\t\t\"restoreState\" : 19,\n\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\"docsExamined\" : 258,\n\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\"stage\" : \"OR\",\n\t\t\t\t\t\t\"nReturned\" : 258,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\"works\" : 258,\n\t\t\t\t\t\t\"advanced\" : 258,\n\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 19,\n\t\t\t\t\t\t\"restoreState\" : 19,\n\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\"dupsTested\" : 258,\n\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\"inputStages\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"nReturned\" : 258,\n\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\t\"works\" : 258,\n\t\t\t\t\t\t\t\t\"advanced\" : 258,\n\t\t\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\"saveState\" : 19,\n\t\t\t\t\t\t\t\t\"restoreState\" : 19,\n\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field1_a\\\", \\\"field1_a\\\"]\" ],\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"keysExamined\" : 258,\n\t\t\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\t\t\"dupsTested\" : 258,\n\t\t\t\t\t\t\t\t\"dupsDropped\" : 0\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\t\"works\" : 0,\n\t\t\t\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\"saveState\" : 19,\n\t\t\t\t\t\t\t\t\"restoreState\" : 19,\n\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : 
true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field1_b\\\", \\\"field1_b\\\"]\" ],\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"keysExamined\" : 0,\n\t\t\t\t\t\t\t\t\"seeks\" : 0,\n\t\t\t\t\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\t\t\t\t\"dupsDropped\" : 0\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"nReturned\" : 92,\n\t\t\t\t\"executionTimeMillisEstimate\" : 191,\n\t\t\t\t\"totalKeysExamined\" : 258,\n\t\t\t\t\"totalDocsExamined\" : 258,\n\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field1_a\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field1_b\"\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"nReturned\" : 92,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 191,\n\t\t\t\t\t\"works\" : 258,\n\t\t\t\t\t\"advanced\" : 92,\n\t\t\t\t\t\"needTime\" : 166,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 549,\n\t\t\t\t\t\"restoreState\" : 549,\n\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\"docsExamined\" : 258,\n\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\"stage\" : \"OR\",\n\t\t\t\t\t\t\"nReturned\" : 258,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\"works\" : 258,\n\t\t\t\t\t\t\"advanced\" : 258,\n\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 549,\n\t\t\t\t\t\t\"restoreState\" : 549,\n\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\"dupsTested\" : 258,\n\t\t\t\t\t\t\"dupsDropped\" : 0,\n\t\t\t\t\t\t\"inputStages\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"nReturned\" : 258,\n\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\t\"works\" : 258,\n\t\t\t\t\t\t\t\t\"advanced\" : 258,\n\t\t\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\"saveState\" : 549,\n\t\t\t\t\t\t\t\t\"restoreState\" : 549,\n\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\"keyPattern\" : 
{\n\t\t\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field2_a\\\", \\\"field2_a\\\"]\" ],\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"keysExamined\" : 258,\n\t\t\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\t\t\"dupsTested\" : 258,\n\t\t\t\t\t\t\t\t\"dupsDropped\" : 0\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\t\"works\" : 0,\n\t\t\t\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\"saveState\" : 549,\n\t\t\t\t\t\t\t\t\"restoreState\" : 549,\n\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\t\t\"listField\" : [ \"[\\\"field2_b\\\", \\\"field2_b\\\"]\" ],\n\t\t\t\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"keysExamined\" : 0,\n\t\t\t\t\t\t\t\t\"seeks\" : 0,\n\t\t\t\t\t\t\t\t\"dupsTested\" : 0,\n\t\t\t\t\t\t\t\t\"dupsDropped\" : 0\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"nReturned\" : 24,\n\t\t\t\t\"executionTimeMillisEstimate\" : 77,\n\t\t\t\t\"totalKeysExamined\" : 258,\n\t\t\t\t\"totalDocsExamined\" : 258,\n\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : 
\"field1_a\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field1_b\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_a\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"listField\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$eq\" : \"field2_b\"\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"nReturned\" : 24,\n\t\t\t\t\t\"executionTimeMillisEstimate\" : 77,\n\t\t\t\t\t\"works\" : 258,\n\t\t\t\t\t\"advanced\" : 24,\n\t\t\t\t\t\"needTime\" : 234,\n\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\"saveState\" : 549,\n\t\t\t\t\t\"restoreState\" : 549,\n\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\"docsExamined\" : 258,\n\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\"nReturned\" : 258,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\"works\" : 258,\n\t\t\t\t\t\t\"advanced\" : 258,\n\t\t\t\t\t\t\"needTime\" : 0,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 549,\n\t\t\t\t\t\t\"restoreState\" : 549,\n\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\t\t\"listField\" : 1,\n\t\t\t\t\t\t\t\"approvedDoc\" : 1\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"indexName\" : \"date_1_listField_1_approvedDoc_1\",\n\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\t\t\"listField\" : [ \"listField\" ],\n\t\t\t\t\t\t\t\"approvedDoc\" : [ ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\t\t\"listField\" 
: [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\t\t\"approvedDoc\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"keysExamined\" : 258,\n\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\"dupsTested\" : 258,\n\t\t\t\t\t\t\"dupsDropped\" : 0\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t]\n\t},\n\t\"command\" : {\n\t\t\"aggregate\" : \"my_collection\",\n\t\t\"pipeline\" : [\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\"$gte\" : ISODate(\"...\"),\n\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t},\n\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\"),\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"listField\" : \"field1_a\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\"),\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"listField\" : \"field1_b\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"$or\" : [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\"),\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"listField\" : \"field2_a\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"...\"),\n\t\t\t\t\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"listField\" : \"field2_b\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t],\n\t\t\"cursor\" : {\n\t\t\t\n\t\t},\n\t\t\"$db\" : \"my_db\"\n\t},\n\t\"serverInfo\" : {\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"5.0.6\"\n\t},\n\t\"serverParameters\" : {\n\t\t\"internalQueryFacetBufferSizeBytes\" : 104857600,\n\t\t\"internalQueryFacetMaxOutputDocSizeBytes\" : 104857600,\n\t\t\"internalLookupStageIntermediateDocumentMaxSizeBytes\" : 104857600,\n\t\t\"internalDocumentSourceGroupMaxMemoryBytes\" : 104857600,\n\t\t\"internalQueryMaxBlockingSortMemoryUsageBytes\" : 104857600,\n\t\t\"internalQueryProhibitBlockingMergeOnMongoS\" : 0,\n\t\t\"internalQueryMaxAddToSetBytes\" : 104857600,\n\t\t\"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : 104857600\n\t},\n\t\"ok\" : 1\n}\n",
"text": "Thanks for replying @Ronan_Merrick !I also forgot to include a later field in the index which is a boolean approvedDoc. My bad, including it now in the following plans. The index is: date: 1, listField: 1, approvedDoc: 1.Not maching indexes:Matching indexes within $or:",
"username": "eddy_turbox"
},
{
"code": "",
"text": "Hi Edgar,Thank you for providing this information. You appear to have obfuscated the date values in your query.Are the different portions of the query applying the same filtering criteria to date or different criteria, i.e. are the $lt and $gte values the same in all cases?Regards,Ronan",
"username": "Ronan_Merrick"
},
{
"code": "",
"text": "Oh yes, sorry for that, thought they wouldn’t add value to the debugging.And yes, they are indeed the same 9 days interval for all.",
"username": "eddy_turbox"
},
{
"code": "",
"text": "Hi @Ronan_Merrick , any more insights on this performance issue?Thanks!",
"username": "eddy_turbox"
},
{
"code": "",
"text": "Hi Edgar,Apologies for the delay in getting back to you.I am looking into this.Regards,Ronan",
"username": "Ronan_Merrick"
},
{
"code": "db.foo.find({$and: [{listField:{$in:[\"field1_a\", \"field1_b\"]}},{listField:{$in:[\"field2_a\", \"field2_b\"]}},{date:{$lt:ISODate(\"2023-02-11T14:41:35.803Z\"),$gte:ISODate(\"2023-02-08T14:41:35.803Z\")}}]}).explain(1)\n$and listField: 1, date: 1,approvedDoc: 1\n",
"text": "Hi Edgar,Thanks for your patience while I have been investigating.I simplified the query slightly:I have done some testing with your query and it appears that we will always FETCH for one of the $and branch values when the multikey index is a compound one as in your case. If the multikey index is not a compound index, we consider an alternative plan that performs 2 index scans and an `AND_SORTED stage. However in my testing, this was not the winning plan and there is no way to force this. This suggests that in my testing anyway this plan didn’t perform better than scanning for one value and then FETCHing for the other and this may not be the reason the query is not performing as you expect.I would like to note that the reason for the performance could be the index order. We recommend to follow the ESR rule when constructing your indexes, which places Equality fields first, then Sort fields and finally Range fields. In your query you are performing a range match on date but this is the first field in the index.Please try to reverse the order of the listField and date fields in the index and let us know if this improves the performance e.gPlease let me know how you get on with this.Regards,Ronan",
"username": "Ronan_Merrick"
},
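A minimal sketch of the change Ronan describes, using the collection name, field names and dates from his simplified example above (adjust to the real schema):

```js
// Equality field (listField) first, then the range field (date), per the ESR rule.
db.foo.createIndex({ listField: 1, date: 1, approvedDoc: 1 });

// Re-run the simplified query and compare the explain output against the old index.
db.foo.find({
  $and: [
    { listField: { $in: ["field1_a", "field1_b"] } },
    { listField: { $in: ["field2_a", "field2_b"] } },
    { date: { $lt: ISODate("2023-02-11T14:41:35.803Z"), $gte: ISODate("2023-02-08T14:41:35.803Z") } }
  ]
}).explain("executionStats");
```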
{
"code": "listFieldkey:value\"\"",
"text": "Thanks a lot for the investigation Ronan! Much appreciated.I’ve just performed those changes in the indexes without much difference … response times are mostly equal with no major advantage/disadvantage.I’m worried that it may be the listField itself that’s causing the times. Tried to apply the attribute pattern without much luck (mainly since I can’t really append date prior a wildcard and ended up retrieving way more documents that I wanted to https://jira.mongodb.org/browse/SERVER-48570), I appended all the attributes related to each of the documents in a list such as key:value, and in order to retrieve all the documents all those have an empty string \"\" to match all documents when no attributes for lookup are specified.Could this be a document issue more than a query one?Also, in case this may be the bottleneck are parallel aggregations something plausible? In order to launch n parallel aggregations into MongoDB and merge cursors altogether.",
"username": "eddy_turbox"
},
{
"code": "\"specs\": [\n { k: \"volume\", v: \"500\", u: \"ml\" },\n { k: \"volume\", v: \"12\", u: \"ounces\" }\n]\n{\"specs.k\":1,\"specs.v\":1}",
"text": "Hi Eddy,No problem at all.I would suggest to read this blog post from one of our colleagues about the attribute pattern.If I understand correctly, your problem is that you may not know the attribute names in advance and the considerations for Wildcard Indexes meant these weren’t a good fit for your use case. Please correct me if I misunderstand.If this is the case, you could use a key/value convention as suggested in our colleague’s blog post, for example:Then you only need to index {\"specs.k\":1,\"specs.v\":1} in this case.You would need to use $elemMatch to compound the bounds when matching on array items with this approach.Let me know if you see any difference with this approach.Regards,Ronan",
"username": "Ronan_Merrick"
},
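A hypothetical sketch of the key/value convention and the $elemMatch query Ronan mentions; the "specs" field, its values, and the collection name are illustrative only:

```js
// One index covers every attribute name/value pair.
db.products.createIndex({ "specs.k": 1, "specs.v": 1 });

// $elemMatch compounds the bounds, so k and v must match within the same array element.
db.products.find({
  specs: { $elemMatch: { k: "volume", v: "500" } }
}).explain("executionStats");
```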
{
"code": "listFieldfield1_akey1:value1listField: [\n { fieldKey: \"key1\", fieldValue: \"value1\" },\n { fieldKey: \"key2\", fieldValue: \"value2\" },\n { fieldKey: \"\", fieldValue: \"\"} // so all documents can be retrieved\n]\n{ date: 1, listField: 1, approvedDoc: 1 }{ listField: 1, date: 1, approvedDoc: 1 }db.foo.find(\n {\n $and: [\n { listField: { $in: [\"key1:value1\"] } },\n { date: { $lt:ISODate(\"2023-02-11T14:41:35.803Z\"), $gte:ISODate(\"2023-02-08T14:41:35.803Z\") } }]\n }).explain(1)\nexplain$elemMatchdb.foo.aggregate([\n {\n \"$match\": {\n \"date\": {\"$gte\": ISODate(\"2022-11-01T00:00:00Z\"), \"$lt\": ISODate(\"2022-11-09T00:00:00Z\")}, \n \"listField\": {$elemMatch: {\"fieldKey\": \"key1\", \"fieldValue\": \"value1\"}}\n }\n }]).explain(\"executionStats\")\nexplain$elemMatchkey1key2key1explain",
"text": "When applying the blog post you mentioned a few weeks ago I saw that wildcard indexes cannot be preceded or proceeded by any other field. With this I mean that applying the wildcard index I’m retrieving unbounded by date sets instead of going for a more specific subset given a date + listField criteria for example.It would be great if wildcard indexes could be mixed by preceding fields such as a date so the working set is narrowed even more, which is an enhancement proposed in the ticket before.Knowing that, I applied the attribute pattern in a different fashion making listField look like this - clarifying that field1_a = key1:value1, so that could be easily parsed into a key/value structure:Having as index the previous ones, that means: { date: 1, listField: 1, approvedDoc: 1 }, or { listField: 1, date: 1, approvedDoc: 1 } following the ESR convention you mentioned.Now, taking into account the previous query with the current approach:This will examine 150k index keys, having 1M documents. Time elapsed using explain: 2,5 seconds.Now, using the attribute pattern with the changes mentioned previously and using $elemMatch:This will examine 5.5M index keys, having 1M documents. Time elapsed using explain = 15 seconds.One thing that also worries me about this approach is that I loose the 1:1 link between keys and values, because with $elemMatch if values happen to have duplicated values for key1 and key2, in case I’m looking only for key1 related values it may result in erroneous results.Also, is explain the best way to determine if a query is better performant than another or is there any other better mechanism for local testing?",
"username": "eddy_turbox"
}
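On the last question, one hedged way to compare candidate pipelines locally, besides explain("executionStats"), is to time the full round trip of each variant in mongosh against a realistic data volume (pipelineA below is a placeholder for whichever variant is being tested):

```js
const t0 = Date.now();
db.my_collection.aggregate(pipelineA).toArray(); // drain the cursor so all batches are fetched
print(`pipeline A took ${Date.now() - t0} ms`);
```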
] | Why is performance worse when matching an index within an $or clause? | 2023-02-01T14:13:06.005Z | Why is performance worse when matching an index within an $or clause? | 1,345 |
null | [
"aggregation",
"queries"
] | [
{
"code": "$$NOW2023-01-29T00:30:48.370+00:00[\n {\n \"signedUpAt\": new Date(\"2023-01-29T00:30:48.370+00:00\"),\n \"loggedInAt\": new Date(\"2023-01-29T00:30:48.370+00:00\"),\n id: 1\n },\n {\n \"loggedInAt\": new Date(\"2023-01-29T00:30:48.370+00:00\"),\n \"signedUpAt\": new Date(\"2023-01-29T00:30:48.370+00:00\"),\n id: 2\n },\n {\n \"loggedInAt\": new Date(\"2023-01-29T00:30:48.370+00:00\"),\n \"signedUpAt\": new Date(\"2023-01-29T00:30:48.370+00:00\"),\n id: 3\n },\n {\n \"loggedInAt\": new Date(\"2023-01-27T00:30:48.370+00:00\"),\n \"signedUpAt\": new Date(\"2023-01-01T00:30:48.370+00:00\"),\n id: 4\n },\n {\n \"loggedInAt\": new Date(\"2023-01-00T00:30:48.370+00:00\"),\n \"signedUpAt\": new Date(\"2023-01-01T00:30:48.370+00:00\"),\n id: 5\n },\n {\n \"loggedInAt\": new Date(\"2023-01-01T00:30:48.370+00:00\"),\n \"signedUpAt\": new Date(\"2023-01-01T00:30:48.370+00:00\"),\n id: 6\n },\n {\n \"loggedInAt\": new Date(\"2022-01-01T00:30:48.370+00:00\"),\n \"signedUpAt\": new Date(\"2022-01-01T00:30:48.370+00:00\"),\n id: 7\n }\n]\n[\n {\n \"signedUpAt\": new Date(\"2023-01-29T00:30:48.370+00:00\"),\n \"loggedInAt\": new Date(\"2023-01-29T00:30:48.370+00:00\"),\n id: 1\n }, // signed up < 24hrs\n {\n \"loggedInAt\": new Date(\"2023-01-29T00:30:48.370+00:00\"),\n \"signedUpAt\": new Date(\"2023-01-29T00:30:48.370+00:00\"),\n id: 3\n }, // signed up < 24hrs\n {\n \"loggedInAt\": new Date(\"2023-01-27T00:30:48.370+00:00\"),\n \"signedUpAt\": new Date(\"2023-01-01T00:30:48.370+00:00\"),\n id: 4\n } // logged in last 7 days & signed up > 24 hrs ago\n]\n\ndb.collection.aggregate([\n {\n \"$addFields\": {\n \"randSortKey\": {\n \"$rand\": {}\n },\n signedUpHrs: {\n $dateDiff: {\n startDate: \"$signedUpAt\",\n endDate: new Date(\"2023-01-29T00:30:48.370+00:00\"),\n unit: \"hour\"\n }\n },\n loggedInDaysAgo: {\n $dateDiff: {\n startDate: \"$loggedInAt\",\n endDate: new Date(\"2023-01-29T00:30:48.370+00:00\"),\n unit: \"day\"\n }\n }\n }\n },\n {\n \"$addFields\": {\n partition: {\n $switch: {\n branches: [\n {\n case: {\n $lte: [\n \"$signedUpHrs\",\n 24\n ]\n },\n then: -1\n },\n {\n case: {\n $lte: [\n \"$loggedInDaysAgo\",\n 7\n ]\n },\n then: 1\n },\n {\n case: {\n $and: [\n {\n $lte: [\n \"$loggedInDaysAgo\",\n 15\n ]\n }\n ],\n \n },\n then: 2\n },\n {\n case: {\n $and: [\n {\n $lte: [\n \"$loggedInDaysAgo\",\n 30\n ]\n }\n ],\n \n },\n then: 3\n },\n \n ],\n default: 4\n }\n }\n }\n },\n {\n \"$setWindowFields\": {\n \"partitionBy\": \"$partition\",\n \"sortBy\": {\n \"randSortKey\": 1\n },\n \"output\": {\n \"rank\": {\n \"$rank\": {}\n },\n total: {\n $sum: 1\n }\n }\n }\n },\n {\n \"$match\": {\n $expr: {\n $or: [\n {\n $ne: [\n \"$partition\",\n -1\n ]\n },\n {\n $and: [\n {\n partition: -1\n },\n {\n $lte: [\n \"$rank\",\n 2\n ]\n }\n ]\n }\n ]\n }\n }\n },\n {\n \"$limit\": 4\n }\n])\n",
"text": "I have mongo record that looks like this\nand assume $$NOW is 2023-01-29T00:30:48.370+00:00I want to get maximum 3(or any number) results from the above data, with following priorityrandomly selected Users who signed up < 24 hrs ago, maximum of 2 recordsrandomly select remaining(3-users returned from condition 1) Users who logged in within the past 7 days and signed up >24 hours agorandomly select remaining(3-users returned from condition 1+ 4) Users who logged in within the past 8 to 15 daysAny other random user(first 3 conditions still dint return 3 results)condition 3 & 4 should only return data(or even run?) if condition 1 & 2 hasn’t returned 3 records yet.one possible expected resultI tried below queryIt gives me results.\nbut my question is, is there a performant way? would this query still be good if i have millions of rows? or this is the best possible way to achieve this?mongo playground: Mongo playground",
"username": "MAHENDRA_HEGDE"
},
{
"code": "",
"text": "Assuming you have the signed up and logged in indexed you can make your query very performant. First put in a match that selects users who logged in between 8 and 14 days from the current date or signed up less than 24 hours. That will be the IXSCAN for the aggregation. What follows will be much faster than going over the entire collection.",
"username": "Ilan_Toren"
}
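A rough sketch of Ilan's suggestion, reusing the field names and the $$NOW value from the question; the exact windows and index choices are assumptions, and note that a leading $match like this drops the final "any other random user" fallback, so it only fits if the earlier tiers normally fill the quota:

```js
// Candidate indexes so the leading $match can be served by index scans.
db.users.createIndex({ signedUpAt: 1 });
db.users.createIndex({ loggedInAt: 1 });

db.users.aggregate([
  {
    $match: {
      $or: [
        { signedUpAt: { $gte: ISODate("2023-01-28T00:30:48.370Z") } }, // signed up < 24 hrs ago
        { loggedInAt: { $gte: ISODate("2022-12-30T00:30:48.370Z") } }  // logged in within the last 30 days
      ]
    }
  }
  // ...followed by the $addFields / $switch / $setWindowFields stages from the question
]);
```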
] | Performant Mongo Query to get X users with specific criteria | 2023-01-29T09:29:18.366Z | Performant Mongo Query to get X users with specific criteria | 633 |
null | [
"golang"
] | [
{
"code": "",
"text": "Hello everyone,\nI am running a cron whose code is written in golang, and i am using mongoDb as database\nThere was 128GB Ram into my system in which DataBase is stored, and I am using different system for the code.\nAfter the startup of Mongodb, when the cron runs for the first time it takes so much time to load indexes to ram which effects the overall time of it.\nI have also added the indexes for some of the collections to improve the overall time of the query to execute, which does suits well when the Indexes are loaded to the RAM for that collection.Is there any way to Preload indexes of all databases collections in RAM at the time of startup of Mongodb, for some warmup of database, so that it doesn’t takes time for executing the queries from db.Can someone help me out to find the solution?",
"username": "rohit_arora3"
},
{
"code": "",
"text": "Maybe try running a “preload version of cron” before you start the “first cron” ?\nthe preload version only issues gets requests so that its related index will be cached in ram.RAM is limited so the cache for index data is normally LRU.",
"username": "Kobe_W"
},
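A hedged sketch of that warm-up idea in mongosh: for each collection, run a query that is covered by the index you want cached (a hint plus a projection of indexed fields only), so the index pages are pulled into memory before the real cron starts. The collection and index names below are placeholders:

```js
db.orders
  .find({}, { _id: 0, customerId: 1, createdAt: 1 }) // project only indexed fields => covered scan
  .hint("customerId_1_createdAt_1")                   // force the index you want to warm
  .itcount();                                         // iterate the whole cursor to touch every page
```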
{
"code": "",
"text": "I want some other alternative than this, like earlier there was a touch command which was valid in versions less than 4.2. Does mongodb has anything like this in recent versions?",
"username": "rohit_arora3"
}
] | Is there any way to Preload indexes to RAM on startup of mongodb | 2023-02-23T05:46:13.332Z | Is there any way to Preload indexes to RAM on startup of mongodb | 1,001 |
null | [
"aggregation",
"queries"
] | [
{
"code": "{\n \"myobject\":{\n \"type\":\"object\",\n \"properties\":{\n \"ticketing\":{\n \"type\":\"object\",\n \"title\":\"Ticketing\",\n \"description\":\"Ticketing\",\n \"properties\":{\n \"sth_history\":{\n \"type\":\"array\",\n \"title\":\"Season Ticket Holder Season History\",\n \"description\":\"Season Ticket Holder Season History\",\n \"items\":{\n \"type\":\"object\",\n \"properties\":{\n \"season\":{\n \"title\":\"Season: Season Ticket Holder\",\n \"description\":\"Purchased any type of package between 10-81 games\",\n \"type\":\"integer\",\n \"meta:type\":\"int\"\n }\n },\n \"meta:type\":\"object\"\n },\n \"meta:type\":\"array\"\n },\n \"stubhub_seller\":{\n \"type\":\"object\",\n \"title\":\"Ticketing Stubhub Seller\",\n \"description\":\"Ticketing Stubhub Seller\",\n \"properties\":{\n \"season_sales\":{\n \"type\":\"array\",\n \"title\":\"Ticketing Stubhub Seller Season Sales\",\n \"description\":\"Ticketing Stubhub Seller Season Sales\",\n \"items\":{\n \"type\":\"object\",\n \"properties\":{\n \"season\":{\n \"title\":\"Season: Stubhub Seller\",\n \"description\":\"Season: Stubhub Seller\",\n \"type\":\"integer\",\n \"meta:type\":\"int\"\n }\n },\n \"meta:type\":\"object\"\n },\n \"meta:type\":\"array\"\n },\n \"games_sold_total\":{\n \"title\":\"Total Stubhub Games Sold\",\n \"description\":\"Total Stubhub Games Sold\",\n \"type\":\"integer\",\n \"meta:type\":\"int\"\n }\n },\n \"meta:type\":\"object\"\n }\n },\n \"meta:type\":\"object\"\n },\n \"shop\":{\n \"type\":\"object\",\n \"title\":\"Shop\",\n \"description\":\"Shop\",\n \"properties\":{\n \"top_categories_current_season\":{\n \"type\":\"array\",\n \"title\":\"Top Shop Categories Current Season\",\n \"description\":\"Top Shop Categories Current Season\",\n \"items\":{\n \"type\":\"object\",\n \"properties\":{\n \"merch_category\":{\n \"title\":\"Top Shop Product Types Purchased This Season\",\n \"description\":\"Top Shop Product Types Purchased This Season\",\n \"type\":\"string\",\n \"meta:type\":\"string\"\n },\n \"spend\":{\n \"title\":\"Top Shop Product Types Spend Purchased This Season\",\n \"description\":\"Top Shop Product Types Spend Purchased This Season\",\n \"type\":\"number\",\n \"meta:type\":\"number\"\n }\n },\n \"meta:type\":\"object\"\n },\n \"meta:type\":\"array\"\n },\n \"top_categories_total\":{\n \"type\":\"array\",\n \"title\":\"Shop Top Categories Total\",\n \"description\":\"Shop Top Categories Total\",\n \"items\":{\n \"type\":\"object\",\n \"properties\":{\n \"merch_category\":{\n \"title\":\"Top 5 Shop Product Types Purchased Ever\",\n \"description\":\"Top 5 Shop Product Types Purchased Ever\",\n \"type\":\"string\",\n \"meta:type\":\"string\"\n },\n \"spend\":{\n \"title\":\"Top 5 Shop Product Types Spend Ever\",\n \"description\":\"Top 5 Shop Product Types Spend Ever\",\n \"type\":\"number\",\n \"meta:type\":\"number\"\n }\n },\n \"meta:type\":\"object\"\n },\n \"meta:type\":\"array\"\n }\n },\n \"meta:type\":\"object\"\n }\n },\n \"meta:type\":\"object\"\n }\n}\n",
"text": "@turivishal Can you please help me with this ? It is similar to the other problem you helped me with.Need to filter “title” and “description” from the documents. This is just at the first level, but i also need to solve it for the nested documents.",
"username": "waykarp"
},
{
"code": "$function",
"text": "Hello @waykarp,If it is a fixed structure then you can use the same approach as answered hereFirst of all, I would suggest you improve your schema design, because this kind of operation impacts the server and slow execution,I would suggest you do this kind of operation on the client side/front-end side.Currently, there is no straight way to do this in an aggregation query.If this is totally required in aggregation query then you can use javascript code in the $function operator, but make sure you read important notes in the document.",
"username": "turivishal"
},
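If the $function route (MongoDB 4.4+) is acceptable despite those caveats, a sketch of a recursive strip of "title" and "description" at every nesting level might look like this; it runs server-side JavaScript, so treat it as a last resort:

```js
db.collection.aggregate([
  {
    $addFields: {
      properties: {
        $function: {
          lang: "js",
          args: ["$properties"],
          body: function (root) {
            // Walk objects and arrays, dropping the two unwanted keys everywhere.
            function strip(node) {
              if (Array.isArray(node)) return node.map(strip);
              if (node !== null && typeof node === "object") {
                const out = {};
                for (const key of Object.keys(node)) {
                  if (key === "title" || key === "description") continue;
                  out[key] = strip(node[key]);
                }
                return out;
              }
              return node;
            }
            return strip(root);
          }
        }
      }
    }
  }
]);
```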
{
"code": "{\n $project: {\n \"type: 1,\n \"properties\": {\n $map: {\n input: { $objectToArray: \"$properties\" },\n in: {\n k: \"$$this.k\",\n v: {\n $switch: {\n branches: [\n {\n case: { $or: [{$and:[{ $eq: [\"$$this.v.type\", \"object\"]}]}]},\n then: { \n $map: { input: { $objectToArray: \"$$this.v.properties\" }, \n in: { k: \"$$this.k\", v: \"$$this.v\" }} \n }\n },\n {\n case: { $or: [{$and:[{ $eq: [\"$$this.v.type\", \"array\"] }]}]},\n then: { \n $map: { input: { $objectToArray: \"$$this.v.items.properties\" }, \n in: { k: \"$$this.k\", v: \"$$this.v\" }} \n }\n }\n ], default: \"$$this.v\"\n }\n }\n }\n }\n }\n }\n },\n\"myobject\":{\n \"type\":\"object\",\n \"properties\":{\n \"ticketing\":{\n \"type\":\"object\",\n \"title\":\"Ticketing\",\n \"description\":\"Ticketing\",\n \"properties\":{\n \"sth_history\":{\n \"type\":\"array\",\n \"title\":\"Season Ticket Holder Season History\",\n \"description\":\"Season Ticket Holder Season History\",\n \"items\":{\n \"type\":\"object\",\n \"properties\":{\n \"season\":{\n \"title\":\"Season: Season Ticket Holder\",\n \"description\":\"Purchased any type of package between 10-81 games\",\n \"type\":\"integer\",\n \"meta:type\":\"int\"\n }\n },\n \"meta:type\":\"object\"\n },\n \"meta:type\":\"array\"\n }\n }\n },\n \"abc\" : {\n \"title\" : \"Identifier\",\n \"type\":\"object\",\n \"description\" : \"Identity of the consumer\"\n }\n }\n}\n",
"text": "Yeah! I can understand the schema isn’t done the correct way. But, this is what we eventually have, and need to think for a workaround to deal with it.Using the pattern you shared in the previous post, i tried to build something on top of it. But, not able to get a solution for documents that are of type:“object” and some have nested “properties” but some don’t.Can you please recommend a way around it? If with “objectToArray”, i have a document with type=“object” and has “properties” do this and the document without “properties” do something else.",
"username": "waykarp"
},
{
"code": "{\n $project: {\n \"type: 1,\n \"properties\": {\n $map: {\n input: { $objectToArray: \"$properties\" },\n in: {\n k: \"$$this.k\",\n v: {\n $switch: {\n branches: [\n {\n case: { $or: [{$and:[{ $eq: [\"$$this.v.type\", \"object\"]}]}]},\n then: { \n $cond: [\n { $ifNull: [\"$$this.v.properties\", false] },\n { \n $map: { \n input: { $objectToArray: \"$$this.v.properties\" }, \n in: { \n k: \"$$this.k\", \n v: {\n $cond: [\n { $ifNull: [\"$$this.v.properties\", false] },\n { \n $map: { \n input: { $objectToArray: \"$$this.v.properties\" }, \n in: { \n k: \"$$this.k\", \n v: \"$$this.v\"\n }\n }\n },\n \"$$this.v\"\n ]\n } \n }\n }\n },\n \"$$this.v\"\n ]\n \n }\n },\n {\n case: { $or: [{$and:[{ $eq: [\"$$this.v.type\", \"array\"] }]}]},\n then: { \n $map: { input: { $objectToArray: \"$$this.v.items.properties\" }, \n in: { k: \"$$this.k\", v: \"$$this.v\" }} \n }\n },\n { case: { $gt: [ 0, 5 ] }, then: \"greater than\" }\n ], default: \"$$this.v\"\n }\n }\n }\n }\n }\n }\n },\n {\n $project: {\n \"properties.v.title\":0,\n \"properties.v.description\":0,\n \"properties.v.v.title\":0,\n \"properties.v.v.description\":0,\n \"properties.v.v.v.title\":0,\n \"properties.v.v.v.description\":0,\n }\n }, \n",
"text": "@turivishal I was able to get it working but running into challenge to convert it back to ObjectNeed help to convert it back to Object.",
"username": "waykarp"
}
] | Dynamically filter fields which might even be nested in a document - updated | 2023-02-26T18:25:28.629Z | Dynamically filter fields which might even be nested in a document - updated | 612 |
null | [] | [
{
"code": "",
"text": "The database trigger configured in the current atlas db project, earlier it used to work and now that it is appeared to be not working. The current database trigger link is this.https://realm.mongodb.com/groups/6217649ee89133230f422f4d/apps/62f2202fe46d0080933ac333/triggers/6309ed087de8a6e6fe95e7e0",
"username": "Sudheer_PM"
},
{
"code": "",
"text": "Hi Sudheer,This was posted some time ago so it is probably resolved by now and the trigger id provided no longer exists.\nIn the future please define what “not working” means.If the trigger was suspended please see the the article below which details what can cause suspension.Regards\nManny",
"username": "Mansoor_Omar"
}
] | Database trigger is not working but it is enabled | 2022-11-17T11:01:44.309Z | Database trigger is not working but it is enabled | 1,430 |
null | [] | [
{
"code": "",
"text": "Hello, I have a specific document stored in a Config collection that I would like to sync and access when the user first downloads my app. This is my expected flow…I would expect useObject to update when the sync finishes. But it doesn’t. Am I missing something here? I’m having to rely on waitForSynchronization() to know when the data is ready.Please let me know best practices for something like this.",
"username": "Mark_Vrahas"
},
{
"code": "",
"text": "I’ve got a minimum reproducible example hereMinimum reproducible example for a bug with @realm/react - GitHub - mvrahas/config-app-test: Minimum reproducible example for a bug with @realm/react",
"username": "Mark_Vrahas"
}
] | Update data after subscription finishes syncing | 2023-02-26T23:11:36.728Z | Update data after subscription finishes syncing | 569 |
null | [
"aggregation"
] | [
{
"code": "{\n \"_id\" : \"63dadc6753bfc516421c5958\",\n \"properties\" : [\n {\n \"k\" : \"abc:id\",\n \"v\" : [\n {\n\n },\n {\n \"k\" : \"type\",\n \"v\" : \"string\"\n },\n {\n\n }\n ]\n },\n {\n \"k\" : \"abc:authenticatedState\",\n \"v\" : [\n {\n\n },\n {\n \"k\" : \"type\",\n \"v\" : \"string\"\n },\n {\n \"k\" : \"default\",\n \"v\" : \"ambiguous\"\n },\n {\n \"k\" : \"enum\",\n \"v\" : [\n \"ambiguous\",\n \"authenticated\",\n \"loggedOut\"\n ]\n }\n ]\n },\n {\n \"k\" : \"abc:primary\",\n \"v\" : [\n {\n\n },\n {\n \"k\" : \"type\",\n \"v\" : \"boolean\"\n },\n {\n \"k\" : \"default\",\n \"v\" : false\n },\n {\n\n }\n ]\n }\n ]\n}\n\n",
"text": "I was able to remove “title” and “description” fields from the document using “objectToArray” and “$map” to filter.\nI needed help merging the document now and filtering the empty sub-document.",
"username": "waykarp"
},
{
"code": "cond : { \"$ne\" : [ \"$$this\" , { } }\n",
"text": "To get rid of the empty documents from an array you would use $filter withThe reverse operation of $objectToArray to merge back the k: and v: is $arrayToObject.",
"username": "steevej"
}
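Putting the two hints together against the k/v structure shown in the first post (a sketch; the field names come from that sample document):

```js
db.collection.aggregate([
  {
    $project: {
      properties: {
        $arrayToObject: {
          $map: {
            input: "$properties",
            as: "prop",
            in: {
              k: "$$prop.k",
              v: {
                $arrayToObject: {
                  $filter: {
                    input: "$$prop.v",
                    as: "attr",
                    cond: { $ne: ["$$attr", {}] } // drop the empty sub-documents
                  }
                }
              }
            }
          }
        }
      }
    }
  }
]);
```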
] | Need help to merge the documents after filtering | 2023-02-25T03:21:47.648Z | Need help to merge the documents after filtering | 428 |
null | [
"node-js",
"app-services-cli"
] | [
{
"code": "~ realm-cli -v\nnode:internal/child_process:413\n throw errnoException(err, 'spawn');\n ^\n\nError: spawn Unknown system error -86\n at ChildProcess.spawn (node:internal/child_process:413:11)\n at spawn (node:child_process:700:9)\n at Object.<anonymous> (/usr/local/lib/node_modules/mongodb-realm-cli/wrapper.js:22:22)\n at Module._compile (node:internal/modules/cjs/loader:1099:14)\n at Object.Module._extensions..js (node:internal/modules/cjs/loader:1153:10)\n at Module.load (node:internal/modules/cjs/loader:975:32)\n at Function.Module._load (node:internal/modules/cjs/loader:822:12)\n at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)\n at node:internal/main/run_main_module:17:47 {\n errno: -86,\n code: 'Unknown system error -86',\n syscall: 'spawn'\n}\n\nNode.js v17.9.0\n",
"text": "Hello!I just started my first project working with realm, but I am not getting very far at all. I installed the realm CLI (npm install -g mongodb-realm-cli), but I am unable to run it. I get this error anytime I try running the realm-cli, no matter the input:I saw that there was one other ticket open about the same case:Just like that user, I am also using an M1 mac, but his solution was to freshly setup his macbook again. My macbook however was just freshly setup, and yet I experience this problem anyway.Is this an M1 specific issue? Any idea how I can fix this?Thanks!",
"username": "Dominik_Antunovic"
},
{
"code": "",
"text": "any luck with this? same issue for us here. pretty disappointing given M1 Macs have been around for a while now.",
"username": "Angus_Johnston"
},
{
"code": "",
"text": "Sorry, no further luck with this yet. I spent 1-2 days trying different approaches, but I keep having this issue. This really is a bummer, there don’t seem to be many alternatives out there that are as service-complete as realm. If only it worked. ",
"username": "Dominik_Antunovic"
},
{
"code": "softwareupdate --install-rosetta",
"text": "Have you installed Rosetta on your system? You can try running softwareupdate --install-rosetta then trying again. I encountered the same problem when trying to run MongoMemoryServer and this fixed my problem.",
"username": "taekwon"
},
{
"code": "",
"text": "softwareupdate --install-rosetta100% the answer, big thanks Tae Kwon Kim!!",
"username": "Paul_Vu"
},
{
"code": "",
"text": "A little more context as to what the issue is to help anyone facing the same issue.If you’re using a M1 or M2 mac you’ll most likely run into this issue.Rosetta is required because many apps have not yet been updated to support Apple’s new M1 chip, which uses a different architecture than the Intel processors used in previous Macs. By installing Rosetta, these apps can still run on M1 Macs.Once the installation is complete, you can then run apps that were built for Intel-based Macs on your M1 Mac.",
"username": "Paul_Vu"
}
] | Realm-cli, "Unknown system error -86" | 2022-12-18T11:50:42.841Z | Realm-cli, “Unknown system error -86” | 2,961 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Ok… I’m a newbe when it comes to MongoDB but I have a question that need a little more info on than just typing in a Google Search…I am building a version of an E-Commerce platform… I need to allow store owners to upload images (max of 5) for a product image gallery…My question is what is the best way to store them… My initial thought is to store them in their own database external to MongoDB and just link to the image through Embedded docs in the Product collection.Is this the best way… Or is there other options for me to look at?",
"username": "David_Thompson"
},
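A sketch of the approach described in the post above (files kept in external storage, with only metadata and URLs embedded in the product document); the URLs and field names are illustrative:

```js
db.products.insertOne({
  name: "Example product",
  images: [
    { url: "https://cdn.example.com/p/123/front.jpg", alt: "Front view", order: 1 },
    { url: "https://cdn.example.com/p/123/back.jpg",  alt: "Back view",  order: 2 }
  ] // the 5-image limit would be enforced by application logic or schema validation
});
```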
{
"code": "",
"text": "Hello @David_Thompson, take a look at this post with discussion and information about storing images in MongoDB database:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Using an external database is a great choice as long as you’re able to keep track of the images with Embedded docs in a Product collection. It’ll be helpful to have a system to keep the files organized, so you can quickly and easily access them when needed. If you have the resources, you could set up a cloud storage system to help you store and organize your images in the cloud. However, if you’re looking for more customized solutions and high-quality e-commerce web development services, check out https://transformagency.com/ to automate every step of your digital commerce funnel and improve your platform performance. No matter what you decide, I’m sure you’ll come up with a great solution for your project.",
"username": "Jordan_Flex"
}
] | Storing Images for E-Commerce Application | 2021-07-20T02:12:45.261Z | Storing Images for E-Commerce Application | 11,673 |
null | [
"node-js"
] | [
{
"code": "model Product {\n id String @id\n name String\n color Color\n photos Photo[]\n}\n\nmodel Order {\n id String @id\n product Product @relation(fields: [productId], references: [id])\n productId String \n shippingAddress Address\n billingAddress Address?\n}\n\nenum Color {\n Red\n Green\n Blue\n}\n\ntype Photo {\n height Int\n width Int\n url String\n}\n\ntype Address {\n street String\n city String\n zip String\n}\nconst newOrder = await prisma.order.create({\n data: {\n // Relation (via reference ID)\n product: { connect: { id: 'some-object-id' } },\n color: 'Red',\n // Embedded document\n shippingAddress: {\n street: '1084 Candycane Lane',\n city: 'Silverlake',\n zip: '84323',\n },\n },\n})\n\nconst updatedOrder = await prisma.order.update({\n where: {\n id: 'some-object-id',\n },\n data: {\n shippingAddress: {\n // Update just the zip field\n update: {\n zip: '41232',\n },\n },\n },\n})\nprisma/prisma-examplesprisma db pullnpm install prismanpx prisma initschema.prismaschema.prismanpx prisma db pullnpx prisma generatePrismaClientconst prisma = new PrismaClient()",
"text": "Hey MongoDB community! I work at Prisma where we are building a new kind of ORM/ODM for the Node.js ecosystem (with a special focus on TypeScript).Our MongoDB connector is currently running in Preview and today we released support for embedded documents, making it pretty much feature-complete — so from now on we are looking for feedback that helps us stabilize the connector and iron out that last rough edges before we release it for production.Let me spare a few words on why a tool like Prisma would be useful when using MongoDB in a Node.js application.In my opinion, the biggest benefit Prisma provides is that it enforces a schema for the data that you store in your MongoDB database. You declare this schema using an intuitive and human-readable modeling language that looks as follows:Once you have your schema in place, Prisma will generate a type-safe database client (called Prisma Client) for you that is aware of your schema and provides powerful queries that are tailored to your schema. If a certain query is not available in the native Prisma Client API, you can also fallback to using raw MongoDB queries that can still be sent via Prisma Client.Here are a few example queries:You can find more API examples in our docs.Notice that all query results, even the ones where you retrieve only a subset of fields or include a relation (via a reference or an embedded document) will be strongly typed if you are using TypeScript. This means you will never accidentally access a field that wasn’t actually retrieved from the DB because the TypeScript compiler won’t allow you to do this.Prisma also provides full auto-completion for all of your queries (and naturally for accessing the data in your query results as well). This benefit comes even if you’re using plain JavaScript because modern code editors will still pick up Prisma Client’s generated types.You can follow our Getting started guide or check out the ready-to-run MongoDB example (in the prisma/prisma-examples repo) to get started.Prisma also works nicely with your existing MongoDB instance! We have an introspection feature (invoked via the prisma db pull command) that allows you to generate your Prisma models instead of typing them up manually, in short, the workflow to get started with your existing MongoDB looks as follows:The Preview of the MongoDB connector has been seeing great adoption already and people really seem to like it so far If you are currently a MongoDB user, we would love to hear your opinions on what we’ve built! Feel free to share your feedback with me here in the community, find me in our public Slack community or directly drop your thoughts in the open #mongodb channel there.",
"username": "Nikolas_Burk"
},
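For readers who want to try the introspection workflow above against an existing database, a minimal sketch (the model names assume the schema shown in the post, and the commands mirror the steps listed there):

```ts
// npm install prisma @prisma/client && npx prisma init
// npx prisma db pull && npx prisma generate
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function main() {
  // Fully typed result; `product` is included via the relation declared in the schema.
  const orders = await prisma.order.findMany({ include: { product: true } })
  console.log(orders)
}

main().finally(() => prisma.$disconnect())
```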
{
"code": "",
"text": "Hello @Nikolas_Burk, Welcome to the MongoDB Community forum!I just tried the Prisma just now to connect to MongoDB database and perform an insert and query on a single collection. I noted that the connection and querying works fine with a Standalone deployment - but the insert failed with an error. I Googled and found that with a replica-set the insert works fine; I tried that on an Atlas database and it did insert data without errors. ",
"username": "Prasad_Saya"
},
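Prasad's observation lines up with Prisma's documented requirement that the MongoDB connector talk to a replica set (it relies on transactions). For local testing, a single-node replica set is enough; a rough sketch:

```js
// Start mongod as a one-member replica set (paths/names are placeholders):
//   mongod --dbpath /data/db --replSet rs0
// Then, once, from mongosh:
rs.initiate();
```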
{
"code": "model Product {\n id String @id\n name String\n color Color\n photos Photo[]\n}\n\nmodel Order {\n id String @id\n product Product @relation(fields: [productId], references: [id])\n productId String \n shippingAddress Address\n billingAddress Address?\n}\n\nenum Color {\n Red\n Green\n Blue\n}\n\ntype Photo {\n height Int\n width Int\n url String\n}\n\ntype Address {\n street String\n city String\n zip String\n}\npopulate",
"text": "Yeah. That’s . I appreciate this much more than Mongoose which seems strife with inconsistencies and gothcas. This appears to be kind of like references and populate from Mongoose, no?I’m curious about how Prisma would work with the embedded subset pattern. So, instead of keeping a reference for all, we have a subset embedded. Case and point, a review is references a movie.Each movie embeds the first 10 reviews directly for easy access, and then we use references to go beyond the 10.",
"username": "Manav_Misra"
}
] | Prisma ORM/ODM v3.10.0 adds support for embedded documents | 2022-02-25T09:30:40.349Z | Prisma ORM/ODM v3.10.0 adds support for embedded documents | 9,352 |
null | [
"react-native",
"react-js"
] | [
{
"code": "const TaskSchema = {\n name: \"Task\",\n properties: {\n _id: \"int\",\n name: \"string\",\n status: \"string?\",\n },\n primaryKey: \"_id\",\n };\n // Open a local realm file with a particular path & predefined Car schema\n\nconst realmDb = async () => {\n console.log('REALMDB');\n try {\n const realm = await Realm.open({\n path: \"myrealm\",\n schema: [TaskSchema],\n });\n\n let task1, task2;\n // realm.write(() => {\n // task1 = realm.create('Task', {\n // _id: 8,\n // name: 'go grocery shopping',\n // status: 'Open',\n // });\n // console.log(`created two tasks: ${task1.name}`);\n // });\n\n const tasks = realm.objects(\"Task\");\n console.log(`The lists of tasks are: ${tasks.map((task) => task.name)}`);\n\n realm.close();\n\n } catch (err) {\n console.error(\"Failed to open the realm\", err.message);\n }\n}\n\nrealmDb()\n\n\nreturn (\n <SafeAreaView>\n </SafeAreaView>\n)\n",
"text": "Hi, im trying to implement Realm in ReactNative but only writes undefined objects.The code im using:import React from ‘react’\nimport { SafeAreaView } from ‘react-native’\nimport Realm from ‘realm’;export default function App() {}OUTPUT:\nThe lists of tasks are: ,",
"username": "Nedko"
},
{
"code": "",
"text": "Hi @Nedko,Were you able to resolve this by chance as i have similar issues.Thanks.",
"username": "Gbenga_Joseph"
}
] | Realm only writes undefined objects on ReactNative | 2022-09-10T13:32:41.817Z | Realm only writes undefined objects on ReactNative | 2,069 |
null | [
"aggregation"
] | [
{
"code": "{\n \"_id\" : \"63dadc6753bfc516421c5958\",\n \"properties\" : {\n \"abc:id\" : {\n \"title\" : \"Identifier\",\n \"type\" : \"string\",\n \"description\" : \"Identity of the consumer\"\n },\n \"abc:authenticatedState\" : {\n \"description\" : \"The state this identity is authenticated\",\n \"type\" : \"string\",\n \"default\" : \"ambiguous\",\n \"enum\" : [\n \"ambiguous\",\n \"authenticated\",\n \"loggedOut\"\n ]\n },\n \"abc:primary\" : {\n \"title\" : \"Primary\",\n \"type\" : \"boolean\",\n \"default\" : false,\n \"description\" : \"description\"\n }\n }\n}\n",
"text": "Need to filter “title” and “description” from the documents. This is just at the first level, but i also need to solve it for the nested documents.Output should not have “title” and “description” in its $project phase",
"username": "waykarp"
},
{
"code": "$objectToArray$arrayToObjectdb.collection.aggregate([\n {\n $project: {\n properties: {\n $objectToArray: \"$properties\"\n }\n }\n },\n {\n $project: {\n \"properties.v.description\": 0,\n \"properties.v.title\": 0\n }\n },\n {\n $project: {\n properties: {\n $arrayToObject: \"$properties\"\n }\n }\n }\n])\n",
"text": "Hello @waykarp, Welcome to the MongoDB community forum,You can try something like this,",
"username": "turivishal"
},
{
"code": " {\n $project: {\n _id: 1,\n properties: {\n $map: {\n input: \"$properties\",\n in: {\n k: \"$$this.k\",\n v: {\n $filter: {\n input: \"$$this.v\",\n cond: { $and: [{$ne: [\"$$this.k\", \"title\"]},{$ne: [\"$$this.k\", \"description\"]}] }\n }\n }\n }\n }\n }\n }\n }\n",
"text": "Thank you @turivishal.\nI was also able to get it working using the $map and $filter.",
"username": "waykarp"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Dynamically filter fields which might even be nested in a document | 2023-02-25T01:35:32.163Z | Dynamically filter fields which might even be nested in a document | 580 |
null | [
"performance",
"atlas-functions",
"serverless"
] | [
{
"code": "",
"text": "Hello I have an entire backend running on MongoDB App Service Functions and my deployment model is GLOBAL.Basically all my functions connects to my Atlas Cluster to read or write something.Would it be better if I change the deployment model to LOCAL??My non-informed guess is yes, because:\n— As all functions are r/w to the cluster anyway, so there is little improvement for the functions to be GLOBAL, as they will have to reach the Atlas cluster anyway;\n— I will gain on serverless warm up. Being LOCAL all calls will go to the same region, and I have increased chance the Lambda function is already warmed up;So I would love confirmation from an expert before doing the change.Notes:\n— Yes, I would choose the same LOCAL region as my Atlas cluster.\n— MongoDB now have the ability to change deployment model/region https://www.mongodb.com/docs/atlas/app-services/apps/change-deployment-models/#std-label-change-deployment-models",
"username": "andrefelipe"
},
{
"code": "",
"text": "Hi,My general take is that GLOBAL apps are best when you are trying to minimize the amount of distance between your end-users and App Services. Having your app be GLOBAL is preferred if your functions are just hitting an http endpoint, doing some logic and sending back a response, or doing a DB read on a secondary if you are using a multi-region cluster.However, if most of your app relies on making calls to your database which is in a single region (which is how most customers are set up), then it is likely that the LOCAL configuration is the better option for you since the latency between App Services and your Atlas Cluster will likely be more important than the latency between App Services and your end user.Another added benefit is that it will result in your connection count on your database being lower as all of your requests will be routed through a subset of the servers (which share connections).The one correction I would make to your response is that there will not be any gain in “warm up” because there is already a very low warm-up time due to the way our functions are architected. We do not have the same cold start problem that Lambda functions can run into.To confirm your notes:Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hello @Tyler_KayeThank you very much for the clear explanation. Extra nice to understand there is no warmup issues on MongoDB App Functions.I successfully migrated my app to LOCAL and did a test:\n— Indeed improved performance, the function call I was watching took ~1000ms on GLOBAL and ~600 on LOCAL deployment;\n— It was being called from Sao Paulo, Brazil and my App is on US-EAST-1. The function executes 3 DB calls (2 reads and 1 write).So I am thankful for the knowledge and the speedup!Please add that to your Documentation!\nWould be nice to know that beforehand.\nI have been on GLOBAL for 2 years now. Wish I’ve chosen LOCAL from the start.All my best!",
"username": "andrefelipe"
},
{
"code": "",
"text": "Felt the need to drop an extra comment, the speed improvement has improved amazingly throughout the entire app. Even GitHub auto-deployment.Happy for the service. Thanks.(please add to documentation and Atlas UI, the GLOBAL may be only for a few use cases, maybe the most is better with LOCAL)",
"username": "andrefelipe"
}
] | Performance advice for App Services deployment model, GLOBAL vs LOCAL | 2023-02-10T13:06:50.862Z | Performance advice for App Services deployment model, GLOBAL vs LOCAL | 1,233 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi,\nI am having a new requirement, I am having a collection and having a fields like dId, tmp, and createdOn,\nmy requirement is to get the cumulative tmp values which are greater than 10 for 24 hours with in last two days from the current date. For getting the last two days records from current date I am using the following aggregation[ { $match: { createdOn: { $gte: new Date(new Date().getTime() - 22460*60 * 1000) } } } ]For example i am having a data like\n/* 1 */\n{\n“_id” : ObjectId(“61e67d390fcdd25fa73aa080”),\n“dId” : “356849088494265”,\n“tmp” : 7.0,\n“createdOn” : ISODate(“2022-01-18T08:41:29.932Z”)\n}/* 2 */\n{\n“_id” : ObjectId(“61e67d390fcdd25fa73aa081”),\n“dId” : “356849088494265”,\n“tmp” : 25.9,\n“createdOn” : ISODate(“2022-01-18T08:42:29.953Z”)\n}/* 3 */\n{\n“_id” : ObjectId(“61e67d390fcdd25fa73aa082”),\n“dId” : “356849088494265”,\n“tmp” : 5,\n“createdOn” : ISODate(“2022-01-18T08:45:29.953Z”)\n}/* 4 */\n{\n“_id” : ObjectId(“61e67d390fcdd25fa73aa083”),\n“dId” : “356849088494266”,\n“tmp” : 26.0,\n“createdOn” : ISODate(“2022-01-18T08:41:29.953Z”)\n}/* 5 */\n{\n“_id” : ObjectId(“61e67d470fcdd25fa73aa085”),\n“dId” : “356849088494266”,\n“tmp” : 12.5,\n“createdOn” : ISODate(“2022-01-18T08:50:29.953Z”)\n}/* 6 */\n{\n“_id” : ObjectId(“61e67d470fcdd25fa73aa086”),\n“dId” : “356849088494266”,\n“tmp” : 25.9,\n“createdOn” : ISODate(“2022-01-18T08:55:29.953Z”)\n}/* 7 */\n{\n“_id” : ObjectId(“61e67d470fcdd25fa73aa087”),\n“dId” : “356849088494266”,\n“tmp” : 7.0,\n“createdOn” : ISODate(“2022-01-18T08:56:29.953Z”)\n}/* 8 */\n{\n“_id” : ObjectId(“61e67d470fcdd25fa73aa088”),\n“dId” : “356849088494267”,\n“tmp” : 26.0,\n“createdOn” : ISODate(“2022-01-18T09:41:29.953Z”)}/* 9 */\n{\n“_id” : ObjectId(“61e67d530fcdd25fa73aa08a”),\n“dId” : “356849088494267”,\n“tmp” : 26.0,\n“createdOn” : ISODate(“2022-01-18T09:55:29.953Z”)\n}/* 10 */\n{\n“_id” : ObjectId(“61e67d530fcdd25fa73aa08b”),\n“dId” : “356849088494267”,\n“tmp” : 2.0,\n“createdOn” : ISODate(“2022-01-18T09:56:29.953Z”)\n}Here i am providing 3 different dId’s, which are 356849088494265, 356849088494266 and 356849088494267, and providing their tmp values and their dates, whenever tmp value changes the new record will be inserted with their tmp value, and with createdOn date with time and its device Id which is dId.So here we can observe that for dId: 356849088494265 for record 2 the tmp value is 25.9 which is greater than 10 and its date and time is 2022-01-18T08:42:29.953Z. And In 3rd record its tmp value is 5 which is less than 10 and its date and time is 2022-01-18T08:45:29.953Z . the tmp value is greater than 10 for 8:42 to 08:45 right? i have to get that time duration.And for dId: 356849088494266 in 4th,5th and 6th records we can observe that its tmp value is greater than 10 and its date and time starts at 2022-01-18T08:41:29.953Z and ends at 2022-01-18T08:56:29.953Z, so tmp value is grater than 10 starts at 4th record and continuously grater than for 5th and 6th and tmp values gets less at 7th record which is at 2022-01-18T08:56:29.953Z so i have to get the time duration from 08:41 to 08:56 , And i have to sum that cumulative sums of time duration which we got for each dId, and if the sum of time duration is greater than or equals to 24 hours i have to get those records. Hope get my requirement . Its little bit of tricky, But i dont have any idea on aggreaton. Hence requesting you support on this. Any help on this highly appraciated.",
"username": "MERUGUPALA_RAMES"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and provide your sample documents in a format that we can cut-n-paste.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steevej,\nGreetings,As per the suggestion, providing the sample data.db.temperatures_cumulative.insertMany([{“_id”:ObjectId(“63fa1beeba44dd674dda511a”),“dId”:“356849088497870”,“tmp”:55.8,“createdOn”:ISODate(“2023-02-25T14:32:14.597Z”)},{“_id”:ObjectId(“63fa1beeba44dd674dda5118”),“dId”:“356849088497870”,“tmp”:6.4,“createdOn”:ISODate(“2023-02-25T14:32:14.596Z”)},{“_id”:ObjectId(“63fa1beeba44dd674dda5116”),“dId”:“356849088497870”,“tmp”:55.9,“createdOn”:ISODate(“2023-02-25T14:32:14.594Z”)},{“_id”:ObjectId(“63fa1beeba44dd674dda5114”),“dId”:“356849088497870”,“tmp”:6.4,“createdOn”:ISODate(“2023-02-25T14:32:14.593Z”)},{“_id”:ObjectId(“63fa1beeba44dd674dda5112”),“dId”:“356849088497870”,“tmp”:55.9,“createdOn”:ISODate(“2023-02-25T14:32:14.591Z”)},{“_id”:ObjectId(“63fa1beeba44dd674dda5110”),“dId”:“356849088497870”,“tmp”:6.4,“createdOn”:ISODate(“2023-02-25T14:32:14.588Z”)},{“_id”:ObjectId(“63fa1bedba44dd674dda50da”),“dId”:“865006041824062”,“tmp”:12.18,“createdOn”:ISODate(“2023-02-25T14:32:13.405Z”)},{“_id”:ObjectId(“63fa1bedba44dd674dda50d8”),“dId”:“865006041824062”,“tmp”:5.18,“createdOn”:ISODate(“2023-02-25T14:32:13.403Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda5070”),“dId”:“862818045314616”,“tmp”:13.12,“createdOn”:ISODate(“2023-02-25T14:32:11.539Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda506e”),“dId”:“862818045314616”,“tmp”:3.68,“createdOn”:ISODate(“2023-02-25T14:32:11.537Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda506c”),“dId”:“862818045314616”,“tmp”:13.12,“createdOn”:ISODate(“2023-02-25T14:32:11.536Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda506a”),“dId”:“862818045314616”,“tmp”:3.75,“createdOn”:ISODate(“2023-02-25T14:32:11.534Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda5068”),“dId”:“862818045314616”,“tmp”:13.25,“createdOn”:ISODate(“2023-02-25T14:32:11.533Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda5066”),“dId”:“862818045314616”,“tmp”:3.75,“createdOn”:ISODate(“2023-02-25T14:32:11.531Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda501a”),“dId”:“355026070101866”,“tmp”:28.1,“createdOn”:ISODate(“2023-02-25T14:32:09.918Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5018”),“dId”:“355026070101866”,“tmp”:2.6,“createdOn”:ISODate(“2023-02-25T14:32:09.916Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5016”),“dId”:“355026070101866”,“tmp”:28.1,“createdOn”:ISODate(“2023-02-25T14:32:09.915Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5014”),“dId”:“355026070101866”,“tmp”:2.9,“createdOn”:ISODate(“2023-02-25T14:32:09.913Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5012”),“dId”:“355026070101866”,“tmp”:28.1,“createdOn”:ISODate(“2023-02-25T14:32:09.912Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5010”),“dId”:“355026070101866”,“tmp”:3.3,“createdOn”:ISODate(“2023-02-25T14:32:09.910Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda500e”),“dId”:“355026070101866”,“tmp”:28.0,“createdOn”:ISODate(“2023-02-25T14:32:09.909Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda500a”),“dId”:“355026070101866”,“tmp”:3.8,“createdOn”:ISODate(“2023-02-25T14:32:09.907Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5008”),“dId”:“355026070101866”,“tmp”:28.0,“createdOn”:ISODate(“2023-02-25T14:32:09.906Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5006”),“dId”:“355026070101866”,“tmp”:3.8,“createdOn”:ISODate(“2023-02-25T14:32:09.905Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5004”),“dId”:“355026070101866”,“tmp”:28.0,“createdOn”:ISODate(“2023-02-25T14:32:09.903Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5002”),“dId”:“355026070101866”,“tmp”:4.1,“createdOn”:ISODate(“2023-02-25T14:32:09.900Z”)},{“_id”:ObjectId(“63fa1be86d8192257e1653b6”),“dId”:“862818045314616”,“tmp”:13.12,“createdOn”:ISODate(“2023-02-25T
14:32:08.154Z”)},{“_id”:ObjectId(“63fa1be86d8192257e1653b4”),“dId”:“862818045314616”,“tmp”:3.81,“createdOn”:ISODate(“2023-02-25T14:32:08.152Z”)},{“_id”:ObjectId(“63fa1be86d8192257e1653b2”),“dId”:“862818045314616”,“tmp”:13.06,“createdOn”:ISODate(“2023-02-25T14:32:08.151Z”)},{“_id”:ObjectId(“63fa1be86d8192257e1653b0”),“dId”:“862818045314616”,“tmp”:3.81,“createdOn”:ISODate(“2023-02-25T14:32:08.149Z”)},{“_id”:ObjectId(“63fa1be86d8192257e1653ae”),“dId”:“862818045314616”,“tmp”:13.18,“createdOn”:ISODate(“2023-02-25T14:32:08.148Z”)},{“_id”:ObjectId(“63fa1be86d8192257e1653ac”),“dId”:“862818045314616”,“tmp”:3.87,“createdOn”:ISODate(“2023-02-25T14:32:08.146Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4fb4”),“dId”:“356849088500509”,“tmp”:26.8,“createdOn”:ISODate(“2023-02-25T14:32:08.018Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4fb2”),“dId”:“356849088500509”,“tmp”:4.7,“createdOn”:ISODate(“2023-02-25T14:32:08.016Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4fb0”),“dId”:“356849088500509”,“tmp”:26.9,“createdOn”:ISODate(“2023-02-25T14:32:08.015Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4fae”),“dId”:“356849088500509”,“tmp”:4.9,“createdOn”:ISODate(“2023-02-25T14:32:08.014Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4fac”),“dId”:“356849088500509”,“tmp”:26.7,“createdOn”:ISODate(“2023-02-25T14:32:08.012Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4faa”),“dId”:“356849088500509”,“tmp”:5.0,“createdOn”:ISODate(“2023-02-25T14:32:08.011Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4fa8”),“dId”:“356849088500509”,“tmp”:26.2,“createdOn”:ISODate(“2023-02-25T14:32:08.010Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4fa6”),“dId”:“356849088500509”,“tmp”:5.1,“createdOn”:ISODate(“2023-02-25T14:32:08.008Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4fa4”),“dId”:“356849088500509”,“tmp”:26.2,“createdOn”:ISODate(“2023-02-25T14:32:08.007Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4fa2”),“dId”:“356849088500509”,“tmp”:5.1,“createdOn”:ISODate(“2023-02-25T14:32:08.005Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4fa0”),“dId”:“356849088500509”,“tmp”:25.3,“createdOn”:ISODate(“2023-02-25T14:32:08.004Z”)},{“_id”:ObjectId(“63fa1be8ba44dd674dda4f9e”),“dId”:“356849088500509”,“tmp”:5.1,“createdOn”:ISODate(“2023-02-25T14:32:08.001Z”)},{“_id”:ObjectId(“63fa1be6ed13fd701c7478bb”),“dId”:“862818041494719”,“tmp”:27.87,“createdOn”:ISODate(“2023-02-25T14:32:06.806Z”)},{“_id”:ObjectId(“63fa1be6ed13fd701c7478b9”),“dId”:“862818041494719”,“tmp”:3.37,“createdOn”:ISODate(“2023-02-25T14:32:06.804Z”)},{“_id”:ObjectId(“63fa1be66d8192257e165354”),“dId”:“865006041893653”,“tmp”:23.12,“createdOn”:ISODate(“2023-02-25T14:32:06.487Z”)},{“_id”:ObjectId(“63fa1be66d8192257e165352”),“dId”:“865006041893653”,“tmp”:5.37,“createdOn”:ISODate(“2023-02-25T14:32:06.485Z”)},{“_id”:ObjectId(“63fa1be56d8192257e16530b”),“dId”:“356849088499439”,“tmp”:31.0,“createdOn”:ISODate(“2023-02-25T14:32:05.121Z”)},{“_id”:ObjectId(“63fa1be56d8192257e165309”),“dId”:“356849088499439”,“tmp”:3.6,“createdOn”:ISODate(“2023-02-25T14:32:05.120Z”)},{“_id”:ObjectId(“63fa1deeba44dd674ddacd55”),“dId”:“869738067227335”,“tmp”:5.2,“createdOn”:ISODate(“2023-02-25T14:40:46.957Z”)},{“_id”:ObjectId(“63fa1deeba44dd674ddacd53”),“dId”:“869738067227335”,“tmp”:28.9,“createdOn”:ISODate(“2023-02-25T14:40:46.953Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7e5”),“dId”:“862818045355304”,“tmp”:25.75,“createdOn”:ISODate(“2023-02-25T14:40:46.872Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7e3”),“dId”:“862818045355304”,“tmp”:3.81,“createdOn”:ISODate(“2023-02-25T14:40:46.870Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7d8”),“dId”:“869738067228440”,“tmp”:3.3,“create
dOn”:ISODate(“2023-02-25T14:40:46.666Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7d6”),“dId”:“869738067228440”,“tmp”:21.8,“createdOn”:ISODate(“2023-02-25T14:40:46.662Z”)},{“_id”:ObjectId(“63fa1deeba44dd674ddacd3f”),“dId”:“862818045338052”,“tmp”:27.81,“createdOn”:ISODate(“2023-02-25T14:40:46.557Z”)},{“_id”:ObjectId(“63fa1deeba44dd674ddacd3d”),“dId”:“862818045338052”,“tmp”:5.43,“createdOn”:ISODate(“2023-02-25T14:40:46.554Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7ca”),“dId”:“862818045360171”,“tmp”:34.18,“createdOn”:ISODate(“2023-02-25T14:40:46.388Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7c8”),“dId”:“862818045360171”,“tmp”:-19.31,“createdOn”:ISODate(“2023-02-25T14:40:46.386Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7bb”),“dId”:“355026070205337”,“tmp”:30.9,“createdOn”:ISODate(“2023-02-25T14:40:46.084Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7b9”),“dId”:“355026070205337”,“tmp”:4.5,“createdOn”:ISODate(“2023-02-25T14:40:46.082Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7b7”),“dId”:“355026070205337”,“tmp”:-2.6,“createdOn”:ISODate(“2023-02-25T14:40:46.081Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7b5”),“dId”:“355026070205337”,“tmp”:30.9,“createdOn”:ISODate(“2023-02-25T14:40:46.079Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7b3”),“dId”:“355026070205337”,“tmp”:4.5,“createdOn”:ISODate(“2023-02-25T14:40:46.078Z”)},{“_id”:ObjectId(“63fa1deeed13fd701c74f7b1”),“dId”:“355026070205337”,“tmp”:-2.6,“createdOn”:ISODate(“2023-02-25T14:40:46.076Z”)},{“_id”:ObjectId(“63fa1dee6d8192257e16d4d3”),“dId”:“869247048620543”,“tmp”:23.0,“createdOn”:ISODate(“2023-02-25T14:40:46.066Z”)},{“_id”:ObjectId(“63fa1dee6d8192257e16d4d1”),“dId”:“869247048620543”,“tmp”:2.06,“createdOn”:ISODate(“2023-02-25T14:40:46.064Z”)},{“_id”:ObjectId(“63fa1ded6d8192257e16d4c4”),“dId”:“869738067477484”,“tmp”:6.9,“createdOn”:ISODate(“2023-02-25T14:40:45.903Z”)},{“_id”:ObjectId(“63fa1ded6d8192257e16d4c2”),“dId”:“869738067477484”,“tmp”:20.2,“createdOn”:ISODate(“2023-02-25T14:40:45.899Z”)},{“_id”:ObjectId(“63fa1dedba44dd674ddacd17”),“dId”:“869247048620071”,“tmp”:30.0,“createdOn”:ISODate(“2023-02-25T14:40:45.769Z”)},{“_id”:ObjectId(“63fa1dedba44dd674ddacd15”),“dId”:“869247048620071”,“tmp”:5.0,“createdOn”:ISODate(“2023-02-25T14:40:45.766Z”)},{“_id”:ObjectId(“63fa1dedba44dd674ddacd0b”),“dId”:“869247048676024”,“tmp”:30.06,“createdOn”:ISODate(“2023-02-25T14:40:45.637Z”)},{“_id”:ObjectId(“63fa1dedba44dd674ddacd09”),“dId”:“869247048676024”,“tmp”:4.0,“createdOn”:ISODate(“2023-02-25T14:40:45.634Z”)},{“_id”:ObjectId(“63fa1dedba44dd674ddacd06”),“dId”:“869247047758120”,“tmp”:30.43,“createdOn”:ISODate(“2023-02-25T14:40:45.611Z”)},{“_id”:ObjectId(“63fa1dedba44dd674ddacd04”),“dId”:“869247047758120”,“tmp”:6.56,“createdOn”:ISODate(“2023-02-25T14:40:45.609Z”)},{“_id”:ObjectId(“63fa1deded13fd701c74f798”),“dId”:“355026070123555”,“tmp”:5.0,“createdOn”:ISODate(“2023-02-25T14:40:45.504Z”)},{“_id”:ObjectId(“63fa1deded13fd701c74f796”),“dId”:“355026070123555”,“tmp”:4.9,“createdOn”:ISODate(“2023-02-25T14:40:45.502Z”)},{“_id”:ObjectId(“63fa1deded13fd701c74f794”),“dId”:“355026070123555”,“tmp”:4.8,“createdOn”:ISODate(“2023-02-25T14:40:45.501Z”)},{“_id”:ObjectId(“63fa1deded13fd701c74f792”),“dId”:“355026070123555”,“tmp”:4.7,“createdOn”:ISODate(“2023-02-25T14:40:45.499Z”)},{“_id”:ObjectId(“63fa1deded13fd701c74f790”),“dId”:“355026070123555”,“tmp”:4.7,“createdOn”:ISODate(“2023-02-25T14:40:45.498Z”)},{“_id”:ObjectId(“63fa1deded13fd701c74f78e”),“dId”:“355026070123555”,“tmp”:4.6,“createdOn”:ISODate(“2023-02-25T14:40:45.495Z”)},{“_id”:ObjectId(“63fa1ded6d8192257e16d4b1”),“dId”:“35684908850
9302”,“tmp”:24.6,“createdOn”:ISODate(“2023-02-25T14:40:45.485Z”)},{“_id”:ObjectId(“63fa1deded13fd701c74f789”),“dId”:“869616062916385”,“tmp”:3.9,“createdOn”:ISODate(“2023-02-25T14:40:45.457Z”)},{“_id”:ObjectId(“63fa1deded13fd701c74f787”),“dId”:“869616062916385”,“tmp”:26.2,“createdOn”:ISODate(“2023-02-25T14:40:45.453Z”)},{“_id”:ObjectId(“63fa1ded6d8192257e16d4aa”),“dId”:“355026070233834”,“tmp”:3.7,“createdOn”:ISODate(“2023-02-25T14:40:45.432Z”)},{“_id”:ObjectId(“63fa1ded6d8192257e16d4a7”),“dId”:“355026070233834”,“tmp”:3.7,“createdOn”:ISODate(“2023-02-25T14:40:45.430Z”)},{“_id”:ObjectId(“63fa1ded6d8192257e16d4a5”),“dId”:“355026070233834”,“tmp”:3.7,“createdOn”:ISODate(“2023-02-25T14:40:45.429Z”)},{“_id”:ObjectId(“63fa1ded6d8192257e16d4a3”),“dId”:“355026070233834”,“tmp”:3.7,“createdOn”:ISODate(“2023-02-25T14:40:45.427Z”)},{“_id”:ObjectId(“63fa1ded6d8192257e16d4a1”),“dId”:“355026070233834”,“tmp”:3.7,“createdOn”:ISODate(“2023-02-25T14:40:45.426Z”)},{“_id”:ObjectId(“63fa1ded6d8192257e16d49f”),“dId”:“355026070233834”,“tmp”:3.7,“createdOn”:ISODate(“2023-02-25T14:40:45.422Z”)},{“_id”:ObjectId(“63fa1ded6d8192257e16d49c”),“dId”:“869247048707381”,“tmp”:31.43,“createdOn”:ISODate(“2023-02-25T14:40:45.394Z”)},{“_id”:ObjectId(“63fa1ded6d8192257e16d49a”),“dId”:“869247048707381”,“tmp”:4.62,“createdOn”:ISODate(“2023-02-25T14:40:45.392Z”)},{“_id”:ObjectId(“63fa1dedba44dd674ddaccf3”),“dId”:“862818045365063”,“tmp”:27.18,“createdOn”:ISODate(“2023-02-25T14:40:45.189Z”)},{“_id”:ObjectId(“63fa1dedba44dd674ddaccf0”),“dId”:“862818045365063”,“tmp”:4.25,“createdOn”:ISODate(“2023-02-25T14:40:45.186Z”)},{“_id”:ObjectId(“63fa1deced13fd701c74f774”),“dId”:“869738067208400”,“tmp”:-16.2,“createdOn”:ISODate(“2023-02-25T14:40:44.954Z”)},{“_id”:ObjectId(“63fa1deced13fd701c74f770”),“dId”:“869738067208400”,“tmp”:-16.1,“createdOn”:ISODate(“2023-02-25T14:40:44.952Z”)},{“_id”:ObjectId(“63fa1deced13fd701c74f76e”),“dId”:“869738067208400”,“tmp”:35.5,“createdOn”:ISODate(“2023-02-25T14:40:44.950Z”)},{“_id”:ObjectId(“63fa1deced13fd701c74f76c”),“dId”:“869738067208400”,“tmp”:35.5,“createdOn”:ISODate(“2023-02-25T14:40:44.948Z”)},{“_id”:ObjectId(“63fa1decba44dd674ddacce7”),“dId”:“862818045335017”,“tmp”:30.81,“createdOn”:ISODate(“2023-02-25T14:40:44.932Z”)}])Note: I had provided current date data. but the requisting to get the two days data from current date.Expecting result is_id:ObjectId(“63fa1beeba44dd674dda511a”),\ndId:“356849088497870”,\ntmp:[23.0,12.0,15.0,18.0,10.0,]\ncreatedOn:[ISODate(“2023-02-25T14:32:14.593Z”),ISODate(“2023-02-25T14:40:14.593Z”),ISODate(“2023-02-25T14:52:14.593Z”)],\ncumulative_tmp_hours: 26Thanks & Regards,\nM. Ramesh.",
"username": "MERUGUPALA_RAMES"
},
{
"code": "",
"text": "It is not.Have you try to cut-n-paste what it is displayed in the rendered page? I get syntax error when I do because the quotes are all wrong. The quotes would be find if you had follow the link I supplied and did what is written. I know you did not follow the link, since I do not see a click count like we see when the link is clicked.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for the update Steevej,\nIt is worked for me, Following is the result set after cut and paste in my shell.mongos> db.temperatures_cumulative.insertMany([{“_id”:ObjectId(“63fa1beeba44dd674dda511a”),“dId”:“356849088497870”,“tmp”:55.8,“createdOn”:ISODate(“2023-02-25T14:32:14.597Z”)},{“_id”:ObjectId(“63fa1beeba44dd674dda5118”),“dId”:“356849088497870”,“tmp”:6.4,“createdOn”:ISODate(“2023-02-25T14:32:14.596Z”)},{“_id”:ObjectId(“63fa1beeba44dd674dda5116”),“dId”:“356849088497870”,“tmp”:55.9,“createdOn”:ISODate(“2023-02-25T14:32:14.594Z”)},{“_id”:ObjectId(“63fa1beeba44dd674dda5114”),“dId”:“356849088497870”,“tmp”:6.4,“createdOn”:ISODate(“2023-02-25T14:32:14.593Z”)},{“_id”:ObjectId(“63fa1beeba44dd674dda5112”),“dId”:“356849088497870”,“tmp”:55.9,“createdOn”:ISODate(“2023-02-25T14:32:14.591Z”)},{“_id”:ObjectId(“63fa1beeba44dd674dda5110”),“dId”:“356849088497870”,“tmp”:6.4,“createdOn”:ISODate(“2023-02-25T14:32:14.588Z”)},{“_id”:ObjectId(“63fa1bedba44dd674dda50da”),“dId”:“865006041824062”,“tmp”:12.18,“createdOn”:ISODate(“2023-02-25T14:32:13.405Z”)},{“_id”:ObjectId(“63fa1bedba44dd674dda50d8”),“dId”:“865006041824062”,“tmp”:5.18,“createdOn”:ISODate(“2023-02-25T14:32:13.403Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda5070”),“dId”:“862818045314616”,“tmp”:13.12,“createdOn”:ISODate(“2023-02-25T14:32:11.539Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda506e”),“dId”:“862818045314616”,“tmp”:3.68,“createdOn”:ISODate(“2023-02-25T14:32:11.537Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda506c”),“dId”:“862818045314616”,“tmp”:13.12,“createdOn”:ISODate(“2023-02-25T14:32:11.536Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda506a”),“dId”:“862818045314616”,“tmp”:3.75,“createdOn”:ISODate(“2023-02-25T14:32:11.534Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda5068”),“dId”:“862818045314616”,“tmp”:13.25,“createdOn”:ISODate(“2023-02-25T14:32:11.533Z”)},{“_id”:ObjectId(“63fa1bebba44dd674dda5066”),“dId”:“862818045314616”,“tmp”:3.75,“createdOn”:ISODate(“2023-02-25T14:32:11.531Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda501a”),“dId”:“355026070101866”,“tmp”:28.1,“createdOn”:ISODate(“2023-02-25T14:32:09.918Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5018”),“dId”:“355026070101866”,“tmp”:2.6,“createdOn”:ISODate(“2023-02-25T14:32:09.916Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5016”),“dId”:“355026070101866”,“tmp”:28.1,“createdOn”:ISODate(“2023-02-25T14:32:09.915Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5014”),“dId”:“355026070101866”,“tmp”:2.9,“createdOn”:ISODate(“2023-02-25T14:32:09.913Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5012”),“dId”:“355026070101866”,“tmp”:28.1,“createdOn”:ISODate(“2023-02-25T14:32:09.912Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5010”),“dId”:“355026070101866”,“tmp”:3.3,“createdOn”:ISODate(“2023-02-25T14:32:09.910Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda500e”),“dId”:“355026070101866”,“tmp”:28.0,“createdOn”:ISODate(“2023-02-25T14:32:09.909Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda500a”),“dId”:“355026070101866”,“tmp”:3.8,“createdOn”:ISODate(“2023-02-25T14:32:09.907Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5008”),“dId”:“355026070101866”,“tmp”:28.0,“createdOn”:ISODate(“2023-02-25T14:32:09.906Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5006”),“dId”:“355026070101866”,“tmp”:3.8,“createdOn”:ISODate(“2023-02-25T14:32:09.905Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5004”),“dId”:“355026070101866”,“tmp”:28.0,“createdOn”:ISODate(“2023-02-25T14:32:09.903Z”)},{“_id”:ObjectId(“63fa1be9ba44dd674dda5002”),“dId”:“355026070101866”,“tmp”:4.1,“createdOn”:ISODate(“2023-02-25T14:32:09.900Z”)},{“_id”:ObjectId(“63fa1be86d8192257e1653b6”),“dId”:“862818045314
616”,“tmp”:13.12,“createdOn”:ISODate(“2023-02-25T14:32:08.154Z”)},{“_id”:ObjectId(“63fa1be86d8192257e1653b4”),“dId”:“862818045314616”,“tmp”:3.81,“createdOn”:ISODate(“2023-02-25T14:32:08.152Z”)},{“_id”:ObjectId(“63fa1be86d8192257e1653b2”),“dId”:“862818045314616”,“tmp”:13.06,“createdOn”:ISODate(“2023-02-25T14:32:08.151Z”)},{“_id”:ObjectId(“63fa1be86d8192257e1653b0”),“dId”:“862818045314616”,“tmp”:3.81,“createdOn”:ISODate(“2023-02-25T14:32:08.149Z”)},{“_id”:ObjectId(“63fa1be86d8192257e1653ae”),“dId”:“862818045314616”,“tmp”:13.18,“createdOn”:ISODate(“2023-02-25T14:32:08.148Z”)}])\n{\n“acknowledged” : true,\n“insertedIds” : [\nObjectId(“63fa1beeba44dd674dda511a”),\nObjectId(“63fa1beeba44dd674dda5118”),\nObjectId(“63fa1beeba44dd674dda5116”),\nObjectId(“63fa1beeba44dd674dda5114”),\nObjectId(“63fa1beeba44dd674dda5112”),\nObjectId(“63fa1beeba44dd674dda5110”),\nObjectId(“63fa1bedba44dd674dda50da”),\nObjectId(“63fa1bedba44dd674dda50d8”),\nObjectId(“63fa1bebba44dd674dda5070”),\nObjectId(“63fa1bebba44dd674dda506e”),\nObjectId(“63fa1bebba44dd674dda506c”),\nObjectId(“63fa1bebba44dd674dda506a”),\nObjectId(“63fa1bebba44dd674dda5068”),\nObjectId(“63fa1bebba44dd674dda5066”),\nObjectId(“63fa1be9ba44dd674dda501a”),\nObjectId(“63fa1be9ba44dd674dda5018”),\nObjectId(“63fa1be9ba44dd674dda5016”),\nObjectId(“63fa1be9ba44dd674dda5014”),\nObjectId(“63fa1be9ba44dd674dda5012”),\nObjectId(“63fa1be9ba44dd674dda5010”),\nObjectId(“63fa1be9ba44dd674dda500e”),\nObjectId(“63fa1be9ba44dd674dda500a”),\nObjectId(“63fa1be9ba44dd674dda5008”),\nObjectId(“63fa1be9ba44dd674dda5006”),\nObjectId(“63fa1be9ba44dd674dda5004”),\nObjectId(“63fa1be9ba44dd674dda5002”),\nObjectId(“63fa1be86d8192257e1653b6”),\nObjectId(“63fa1be86d8192257e1653b4”),\nObjectId(“63fa1be86d8192257e1653b2”),\nObjectId(“63fa1be86d8192257e1653b0”),\nObjectId(“63fa1be86d8192257e1653ae”)\n]\n}Please don’t be wrong at me Steevej , i have clicked the link which you have provided as follows.\nStep 1: Placed my cursor on the link,\nStep 2: Right Click on the link,\nStep 3: After Right click i have selcted open link in new window option.May be that is the reson i hope you are not able to view the status, if i click directly on the link might be you can see the status.Regards,\nRamesh.",
"username": "MERUGUPALA_RAMES"
},
{
"code": "db.inventory.insertMany([\n { item: \"journal\", qty: 25, status: \"A\", size: { h: 14, w: 21, uom: \"cm\" }, tags: [ \"blank\", \"red\" ] },\n { item: \"notebook\", qty: 50, status: \"A\", size: { h: 8.5, w: 11, uom: \"in\" }, tags: [ \"red\", \"blank\" ] },\n { item: \"paper\", qty: 10, status: \"D\", size: { h: 8.5, w: 11, uom: \"in\" }, tags: [ \"red\", \"blank\", \"plain\" ] },\n { item: \"planner\", qty: 0, status: \"D\", size: { h: 22.85, w: 30, uom: \"cm\" }, tags: [ \"blank\", \"red\" ] },\n { item: \"postcard\", qty: 45, status: \"A\", size: { h: 10, w: 15.25, uom: \"cm\" }, tags: [ \"blue\" ] }\n]);\n",
"text": "i have clicked the linkThen you must have missed the 2nd point of the first list that says:Add triple backticks (```) before and after a snippet of code. This syntax (aka code fencing in GItHub-flavoured Markdown) will automatically detect language formatting and generally be an improvement over a straight copy & paste.You must also have missed the following part too:db.inventory.insertMany([ { item: “journal”, qty: 25, status: “A”, size: { h: 14, w: 21, uom: “cm” }, tags: [ “blank”, “red” ] }, { item: “notebook”, qty: 50, status: “A”, size: { h: 8.5, w: 11, uom: “in” }, tags: [ “red”, “blank” ] }, { item: “paper”, qty: 10, status: “D”, size: { h: 8.5, w: 11, uom: “in” }, tags: [ “red”, “blank”, “plain” ] }, { item: “planner”, qty: 0, status: “D”, size: { h: 22.85, w: 30, uom: “cm” }, tags: [ “blank”, “red” ] }, { item: “postcard”, qty: 45, status: “A”, size: { h: 10, w: 15.25, uom: “cm” }, tags: [ “blue” ] } ]);Where and how do take the source of cut-n-paste operation?",
"username": "steevej"
}
] | How to get the time duration of cumulative temperature values | 2023-02-25T05:22:14.608Z | How to get the time duration of cumulative temperature values | 1,603 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.4.19-rc2 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.18. The next stable release 4.4.19 will be a recommended upgrade for all 4.4 users.\nFixed in this release:",
"username": "James_Hippler"
},
{
"code": "mongod --bind_ip 0.0.0.0 --port 27418 --logpath /data/mongodb/general/logs/log.txt --dbpath /data/mongodb/general/wiredTiger --directoryperdb --storageEngine wiredTiger --wiredTigerDirectoryForIndexes\n\n[1] 19287 illegal hardware instruction (core dumped) mongod --bind_ip 0.0.0.0 --port 27418 --logpath --dbpath --directoryperdb\n",
"text": "Hi,apparently this RC is the final one which is live in the repo for a couple of days now.Tonight (2023-02-24 23:17:38 UTC) I updated all my machines, so 4.4.18 went to 4.4.19 and one is a Raspberry Pi 4 running Ubuntu 20.04 64bit, where the database won’t start:Reverting to 4.4.18 fixed the issue.All other databases are on amd64.",
"username": "dfaust"
},
{
"code": "pymongo.errors.NetworkTimeout",
"text": "Also on all of those other amd64 databases I’m now getting occasional pymongo.errors.NetworkTimeout errors, something which never happened before.",
"username": "dfaust"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.4.19-rc2 is released | 2023-02-23T03:55:58.820Z | MongoDB 4.4.19-rc2 is released | 1,237 |
null | [] | [
{
"code": "",
"text": "Continuing the discussion from How to get last 7 days records based on createdOn date field:Actually i have another field called “tmp”, so my requirement is to get the last 7 days records which is solved by @Jason_Tran , once again thanks a lot to @Jason_Tran , along with that i have to get the maximum tmp value in those last 7 days records, And currently we are using 4.4 version Any help on this will highly appreciated.Thanks & Regards,\nM. Ramesh.",
"username": "MERUGUPALA_RAMES"
},
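A minimal sketch (not part of the original thread) of the kind of aggregation described in the question above, assuming a collection named temperatures and the dId / tmp / createdOn fields the poster mentions; it keeps only the last 7 days of documents and takes the maximum tmp per device:

    db.temperatures.aggregate([
      // keep only documents created in the last 7 days
      { $match: { createdOn: { $gte: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000) } } },
      // one result per device with its maximum temperature in that window
      { $group: { _id: "$dId", maxTmp: { $max: "$tmp" } } }
    ])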
{
"code": "",
"text": "Please provide sample documents that we can cut-n-paste and expected results.",
"username": "steevej"
}
] | How to pull the records for those last 7 days records | 2023-02-22T20:42:06.536Z | How to pull the records for those last 7 days records | 483 |
null | [
"queries",
"golang"
] | [
{
"code": "",
"text": "Hello everyone,\nI am running a cron whose code is written in golang, and i am using mongoDb as database\nThere was 128GB Ram into my system in which DataBase is stored, and I am using different system for the code.\nThe cron is running with 17000 merchants parallely, each merchant having different database, which means there was 17000 Dbs into system,\nCron work is to send reminders based on the db data, and update data into db,\nnumbers of reminder can be variadic, let’s say for a merchant there may be 20 reminders to be sent, and for another merchant there was no reminder to be sent in a specific cron.\nAnd the time of cron interval is 15 minutes.Now I will tell you a scenario,\nfrom 17000 merchants, there was around 6000 merchants in which reminders to be sent through a cron,\nand in each merchant there was 5 reminders to be sent\nwhile sending 1 reminder there was average 20-30 db queries(including Get, insert, update) run.The issue is that, I have to save slow query logs, db logs for that db queries which makes db slow.\nWhen the cron get starts, the db queries ran fast, but after sometime it gets slowed down.Ram usage of mongoDb system goes to maximum of 85%\nand free memory space will be around 15GBAfter seeing the mongoDb logs, I have found that mongoDb stop executing queries for sometime, and after sometime it starts executing queries again.and I found some abnormal things in the log after that mongoDb stop works for a while:-“msg”:“Failed to gather storage statistics for slow operation”,“attr”:{“opId”:1533607,“error”:“lock acquire timeout”Checkpoint has been running for 165 seconds and wrote: 35000 pages (1160 MB)Can someone help me to find out why I am facing these issues?",
"username": "sahil_garg1"
},
{
"code": "",
"text": "The first looks like it fails to acquire a lock in order to gather storage statistics. I’m not sure if this slows down your cron or not.Checkpoint has been running for 165 seconds and wrote: 35000 pages (1160 MB)This indicates that you are doing a lot of writes (my guess, not 100% sure ), and the checkpoint takes a longer time to finish. Check Internals - Checkpoints and Journaling - #3 by Alexandre_Araujo for more info.As you have more merchants, i believe it will be difficult for your cron to catch up. (e.g. may run even longer than 15mins internal).",
"username": "Kobe_W"
},
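A minimal sketch, in mongosh, of how the write and checkpoint pressure described above could be inspected, and how the slow-operation threshold could be tuned so fewer operations are logged; the threshold value is illustrative and the exact statistic field names vary between MongoDB versions:

    // log only operations slower than 500 ms (value is illustrative)
    db.setProfilingLevel(0, { slowms: 500 })

    // inspect WiredTiger cache and checkpoint statistics
    const wt = db.serverStatus().wiredTiger
    printjson(wt.cache)
    printjson(wt.transaction)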
{
"code": "",
"text": "Hello @Kobe_W\nThanks for reply,\nI understand what you are referring to and I am looking into it further, what I am not able to understand is that the mongodb hangs for few seconds and hang time increases eventually when this error occurs. In the hang time it does not perform any operations like find query etc. Also, if I disable the slow query logging then this should not happen but when I disabled the slow query loging then this error did not appear but the mongodb hang was happening. As suggested in productions notes, I have moved journal and logs to another drive but no improvement in the performance.",
"username": "sahil_garg1"
}
] | mongoDb queries getting slow down with logs | 2023-02-22T11:34:30.289Z | mongoDb queries getting slow down with logs | 1,538 |
[
"node-js",
"atlas-cluster"
] | [
{
"code": "",
"text": "\nimage2601×630 202 KB\n\nim getting this error while connecting to the data base\nhere is my connection URL: mongodb+srv://shashankreddybanda:@cluster0.u59k3hy.mongodb.net/?retryWrites=true&w=majority\ni have opened the access to all IP addresses",
"username": "reddy_Shashank"
},
{
"code": "",
"text": "Can you connect from shell to your cluster?",
"username": "Ramachandra_Tummala"
}
] | Could not connect to db | 2023-02-25T00:16:59.254Z | Could not connect to db | 433 |
|
null | [
"server"
] | [
{
"code": "",
"text": "Hi,\nIs Mongodb 4.0 supported on Ubuntu 20.04 ?\nI could install Mongodb 4.0.27 version on Ubuntu 20.04 server. However, I am wondering if all the functionality of 4.0 is supported on 20.04.\nPlease confirm the same.Thanks,\nPMJ",
"username": "pmjanmatti"
},
{
"code": "",
"text": "4.4 is the minimum supported version on Ubuntu 20.04MongoDB will not have tested this combination of 4.0 and Ubuntu 20.04",
"username": "chris"
}
] | Ubuntu 20.04 support for Mongodb 4.0 | 2023-02-23T21:26:18.467Z | Ubuntu 20.04 support for Mongodb 4.0 | 1,120 |
null | [
"mongodb-shell",
"storage"
] | [
{
"code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIpAll: true\n tls:\n mode: requireTLS\n certificateKeyFile: /etc/ssl/mongo.pem\n allowInvalidCertificates: false\n allowInvalidHostnames: false\n allowConnectionsWithoutCertificates: false\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\nsecurity:\n authorization: enabled\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\nmongosh --tls --tlsAllowInvalidHostnames 1.2.3.4db.auth()mongodmongosh",
"text": "Hey there,I’ve been able to setup my MongoDB server on GCP and as far as I can tell it should be fully secured via a TLS certificate created via Let’s Encrypt & certbot but for some reason I’m still able to connect without supplying any TLS certificate…My /etc/mongod.conf file looks like thisI’ve got the security.authorization enabled, the tls.mode set to requireTLS, I’m explicitly disallowing invalid certificates, hostnames or connections without certificates yet I can still access my server both on the server itself and externally from my home network without any issues if I simply use the command mongosh --tls --tlsAllowInvalidHostnames 1.2.3.4What am I missing here? Why isn’t my server refusing these connections? I can at least confirm that I’m not able to do much until I run db.auth() to login to a user but still, I shouldn’t even be able to get connected without a certificate… How do I resolve this?My mongod version is v6.0.3 and the mongosh version is 1.6.1, I’m running a VM inside GCP with Debian GNU/Linux 11Greets,\nMiley",
"username": "Miley_Hollenberg"
},
{
"code": "CAFile: /etc/ssl/mongoCA.pemcat /etc/letsencrypt/live/[domain]/chain.pem >> /etc/ssl/mongoCA.crtMongoServerSelectionError: unable to get issuer certificate\nmongosh --tls --tlsCertificateKeyFile /etc/ssl/mongo.pem --tlsCertificateSelector\nMongoServerSelectionError: Hostname/IP does not match certificate's altnames: IP: 127.0.0.1 is not in the cert's list:\n--host [domain]",
"text": "Small update,I’ve added CAFile: /etc/ssl/mongoCA.pem to my conf file and this CA was generated viacat /etc/letsencrypt/live/[domain]/chain.pem >> /etc/ssl/mongoCA.crtaccording to this. It’s now at least rejecting the conections without any certificate but when I try to connect I get the following error:And when I try to connect with the commandI get the errorWhich doesn’t seem to get fixed with adding --host [domain] to the command, then it simply waits and closes the connection",
"username": "Miley_Hollenberg"
}
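A minimal sketch of the kind of client invocation being attempted in this thread, using the CA file and certificate paths from the posts above and connecting by hostname so the certificate's altnames can match; whether the Let's Encrypt server certificate is also acceptable as a client certificate depends on the CA configuration, and [domain] is the poster's placeholder:

    mongosh "mongodb://[domain]:27017" \
      --tls \
      --tlsCAFile /etc/ssl/mongoCA.pem \
      --tlsCertificateKeyFile /etc/ssl/mongo.pem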
] | requireTLS doesn't seem to actually require it | 2023-02-24T19:43:40.671Z | requireTLS doesn’t seem to actually require it | 1,076 |
null | [
"data-modeling",
"swift"
] | [
{
"code": "LinkingObjectsObjectletdynamic@Persistedvarvar@PersistedoriginPropertyclass User: Object {\n\t@Persisted(originProperty: \"users.owner\")\n\tvar ownedItems: LinkingObjects<Items>\n}\n\nclass Items: Object {\n\t@Persisted var users: Users\n\t\n\tclass Users: EmbeddedObject {\n\t\t@Persisted var owner: User?\n\t}\n}\nMutableSetLinkingObjectsclass ModelA: Object {\n\t@Persisted(originProperty: \"toOne\")\n\tvar toMany: LinkingObjects<ModelB>\n}\n\nclass ModelB: Object {\n\t@Persisted var toOne: ModelA?\n}\nclass ModelA: Object {\n\t@Persisted var toMany: MutableSet<ModelB>\n}\n",
"text": "I have a few questions/clarifications about LinkingObjects:LinkingObjects can only be used as a property on Object models. Properties of this type must be declared as let and cannot be dynamic.But when used with @Persisted it appears it must be defined as var. There are enough examples of this that I’m sure var must be the right way when using @Persisted, but it was a point of confusion in the docs that I wanted to doublecheck.Example:For example:as opposed toThanks for the assistance!",
"username": "Tom_J"
},
{
"code": "var@Persisted@Persisted(originProperty: \"dogList\") var linkedPersons: LinkingObjects<Person>originPropertyMutableSetLinkingObjectsclass ModelA: Object {\n\t@Persisted(originProperty: \"toOne\")\n\tvar toMany: LinkingObjects<ModelB>\n}\nclass Person: Object {\n @Persisted var dogList = RealmSwift.List<Dog>()\n}\n\nclass Dog: Object {\n @Persisted var dogName = \"spot\"\n}\nclass Dog: Object {\n @Persisted var dogName = \"spot\"\n @Persisted(originProperty: \"dogList\") var linkedPersons: LinkingObjects<Person>\n}\n",
"text": "I’m sure var must be the right way when using @Persistedyes! That paragraph is really tied back to @ObjC and legacy documentation. Var is the correct selection. If a PersonClass has a List of Dogs and you want to transverse the graph back to the Person From Dog:@Persisted(originProperty: \"dogList\") var linkedPersons: LinkingObjects<Person>Does originProperty for LinkingObjects support a key path to an embedded object, or must it be a top level property on the related object?I understand the question but that’s not the correct implementation. Let me explain.In your code example, Users is an Embedded object in Items, but it’s not an Embedded Object to Realm. All Realm objects - all - must be declared at the top level of the app, not inside another class. Including EmbeddedObjectsAlso, linking objects would not be used on an EmbeddedObject - as there is no reason to do that. Embedded objects are child objects of a specific parent and are not independently persisted - meaning to get to an embedded object, you need to go through the graph of the parent to the child. So if you want a specific child you would have to know the parent object.Perhaps you can can clarify that part of the question a bit if my explanation doesn’t answer it.For modeling a one-to-many relationship, should it be preferred to use MutableSet vs LinkingObjects?You would not generally use LinkingObjects in a one-to-many relationship in that capacity. Let me dive a bit into a LinkingObject use case for clarity.Relationships in databases can be Forward and Reverse and 1-1, 1-many, many-many. Forward takes you from a parent to a child object(s) and reverse transverses the graph from the child object back to the parent object.A forward relationship would be done with a List (one example)In the above example we have a forward one to many relationship from Person to their Dogs. But: What if you find a lost dog and want to know it’s owner? While that could be achieved with a query, a reverse relationship takes you from the Dog right to the Person (So List is Forward; Parent → Child and LinkingObjects is Reverse; Parent ← Child.There are interesting things about LinkingObjects:LinkingObjects is “computed” - e.g. the relationship between the Child and Parent is computed when you ask for it - there’s nothing stored on Disk that defines that relationship. Very unlike a List where you can actually see (in the Realm Browser) the dogs in a Persons dogList.LinkingObjects is actually a reverse many to many relationship! It can actually point back to multiple parents. That’s incredibly powerful - suppose there’s a married couple that both have ownership of a dog - so that dog appears in both of their dogList properites. Well, the dogs linkedPersons will contains BOTH of those owners. If you are only every going to have one owner, you can simply use linkedPersons.first to get the one personLinkedObjects relationships self-destruct. e.g. If a Dog is removed from a Persons dogList, the reverse relationship goes away as well (this is why I call it computed).*note the above is kinda at the 10,000’ levelWhew - hope that helps",
"username": "Jay"
},
{
"code": "AttachmentAttachmentLinkingObjectsclass Attachment: Object {\n\t// To-one relationships to all the different object types\n\t@Persisted var item: Item?\n\t@Persisted var list: List?\n\t@Persisted var person: Person?\n\t@Persisted var place: Place?\n\t// ...and there's a few more...\n}\n\nclass Person: Object {\n\t@Persisted(originProperty: \"person\") var attachments: LinkingObjects<Attachment>\n}\nLinkingObjectsoriginPropertypersonref.personclass Attachment: Object {\n\t// All relationships have been moved into `AttachmentRef` \n\t@Persisted var ref: AttachmentRef\n}\n\nclass AttachmentRef: EmbeddedObject {\n\t// To-one relationships to all the different object types\n\t@Persisted var item: Item?\n\t@Persisted var list: List?\n\t@Persisted var person: Person?\n\t@Persisted var place: Place?\n\t// ...and there's a few more...\n}\n\nclass Person: Object {\n\t@Persisted(originProperty: \"ref.person\") var attachments: LinkingObjects<Attachment>\n\t// Is ^this^ valid?\n}\n",
"text": "In your code example, Users is an Embedded object in Items, but it’s not an Embedded Object to Realm. All Realm objects - all - must be declared at the top level of the app, not inside another class. Including EmbeddedObjectsThanks for this clarification! I noticed this for Object, but I was hoping I could get away with it for modeling EmbeddedObject.I do have some more clarifications on #2 and #3. Here’s a more real-life use case that I hope will help explain both.I have an Attachment class that I want to persist as separate objects (not embedded for various reasons). These Attachments will be used with multiple other classes, but each Attachment will only be associated with one other object. Because these Attachments will be used across classes, I’d like to keep a reference to the “owning” object to easily understand the origin when viewing a list of attachments, and even though there will only be one related object, I’d prefer to have fully typed relationships.Those considerations led me to the following, with the relationships persisted on Attachment, and LinkingObjects providing convenient access to the list of attachments for each of the other objects:This use case is somewhat unusual, but it seems like a reasonable way to link the objects. Please let me know if there are any pitfalls I haven’t noticed!Then I was curious if I could organize the structure a bit more to hide the clutter of the myriad relationships, so I wondered if they could be isolated in an embedded object, and then if LinkingObjects would be able to find populate itself if originProperty was a multi-component keypath, person vs ref.person in this example.Does that explain what I’m trying to do?Thanks again for the detailed assistance!",
"username": "Tom_J"
},
{
"code": "AttachmentLinkingObjectsLinkingObjectsoriginPropertyref.personclass AttachmentRef: EmbeddedObject {\n @Persisted var person: Person?\n}\n\nclass Person: Object {\n @Persisted(originProperty: \"person\") var attachments: LinkingObjects<AttachmentRef>\n}\nclass AttachmentRef: EmbeddedObject {\n @Persisted var person: Person?\n @Persisted var parentAttachment: Attachment!\n}\nlet results = realm.objects(Person.self)\nfor person in results {\n print(person.name, person.attachments.first!.parentAttachment.some_attachement_property)\n}\n",
"text": "Additional explanation was perfect.Those considerations led me to the following, with the relationships persisted on Attachment, and LinkingObjects providing convenient access to the list of attachments for each of the other objects:Sure! You can do that. I am not aware of any pitfalls but the concept is the same - when a person is added to an attachment (forward) it will create a (computed) reverse link from the person back to the attachment.Then I was curious… LinkingObjects would be able to find populate itself if originProperty was a multi-component keypathIn a nutshell, the parent object has an embedded object, and that embedded object has a forward to a Person object, which has a reverse relationship to the embedded object.Really had to stretch my brain on that one but I don’t think that’s valid. The linking objects var is saying “Hey, link back to an Attachment object” and look for a property ref.person, but the Attachment object doesn’t have that property. e.g. ref.person resolves to a Person object, not an Attachment object.But…You could link back to the embedded object directly.the trouble there is the embedded object AttachmentRef has no idea who it’s parent is so if you need to go from Person to AttachmentRef to the Attachment, that won’t work.To fix, add a property to AttachementRef to refer to it’s parent Attachment.Then you could get data in both directions - going in reverse:and +1 for a super great question!",
"username": "Jay"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | LinkingObjects questions and clarifications | 2023-02-22T18:27:06.515Z | LinkingObjects questions and clarifications | 1,036 |
null | [
"node-js",
"data-modeling",
"mongoose-odm"
] | [
{
"code": "",
"text": "Hello! I’m building a Movie Review project that’s a combination of a social media / blog app. I plan on using a 3rd party movie API like TMBDI Api to allow users to search and find movies they want to review. I spent a few days researching and trying stuff but I’m not too sure how to go about building an efficient schema around that. I’m hoping to get some help/guidance on how to do so. I watched the series on best practice/anti-patterns but still need extra assistance.Basically, I want users to be able to write reviews for a movie that will show up down a timeline feed similar to any social media app like Twitter or Facebook. I’d like users to be able to like another user’s review and/or leave a comment. However when a user clicks the actual movie or the post the user made from the timeline it redirects them to that specific movie with all the data displayed from the tmbdi API and a collection of that specific movie with reviews from different users will appear along with any nested comments.I think I’d need a schema/document collection for:and I’m thinking I could probably combine 2,3 & 4 together when displaying the entire collection of every related review for a specific movie? Or is it better to create a separate schema/collection for that. I’m pretty new to mongodb/mongoose but I built small mini projects.Also: Would it be a lot to add a follower count?Sorry for the long message hope my question wasn’t too complicated. Thanks in advance!",
"username": "Ssjr"
},
{
"code": "",
"text": "Hi @Ssjr ,MongoDB has a sample IMDb database design , I think it might be a good start for you.I recommend you review this design of the collections and see if you can use the same.You can easily create a free Atlas cluster and load those sample data sets to play with and try out.In general you can have:Reviews collection:Movies collection:Comments collection:Let me know if you have follow up questions.Ty",
"username": "Pavel_Duchovny"
},
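A minimal, illustrative sketch (not taken from the sample data set) of what documents in the three collections mentioned above could look like; every field name here is an assumption:

    // movies: one document per film (metadata can be cached from the TMDB API)
    db.movies.insertOne({ _id: "tmdb-603", title: "The Matrix", year: 1999 })

    // reviews: one document per user review, referencing the movie and its author
    db.reviews.insertOne({
      movieId: "tmdb-603",
      userId: ObjectId(),
      rating: 9,
      text: "Loved it",
      likes: 0,
      createdAt: new Date()
    })

    // comments: one document per comment, referencing the review it belongs to
    db.comments.insertOne({
      reviewId: ObjectId(),   // placeholder; would be the _id of an existing review
      userId: ObjectId(),
      text: "Agreed!",
      createdAt: new Date()
    })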
{
"code": "",
"text": "is it possible to update this sample? this exercise, as well as a similar exercise with the airbnb data sample, both have a lot of out of date information and it’s really hard to get going",
"username": "Kristo"
},
{
"code": "",
"text": "Your survey ought to begin with a presentation, then a synopsis of the book/film, then your investigation lastly your decision.",
"username": "Sakhar_Saha"
}
] | What is the best Schema Design for a Movie Review App for storing Reviews, Likes, Comments, etc | 2022-08-26T10:50:04.616Z | What is the best Schema Design for a Movie Review App for storing Reviews, Likes, Comments, etc | 3,567 |
null | [] | [
{
"code": "",
"text": "Hi Team,\nCould anyone give some insight about having “enterprise data catalog” for metadata in MongoDB Atlas?\nany feature available?thank you,\nRajesh",
"username": "rajesh_b"
},
{
"code": "",
"text": "Rajesh,We have a number of partners that provide enterprise data catalog support for MongoDB. Are you asking about a list of vendors or are you looking for a feature within MongoDB Atlas that is an enterprise data catalog? If so what specifically are you trying to accomplish? Do you just want to enumerate the collection metadata or are you looking for lineage?Thanks,\nRob",
"username": "Robert_Walters"
}
] | MongoDb Atlas for Metadata management | 2023-02-20T19:03:44.027Z | MongoDb Atlas for Metadata management | 478 |
null | [
"app-services-user-auth",
"react-native"
] | [
{
"code": "",
"text": "I’m able to sign a user in successfully via Apple sign in. The logs show that “oauth2-apple” succeeded. However, I am unable to access Realm.User.profile, and when I check the App Users section in the Realm console, there’s a user with an “unknown” Name and the provider shows “Unknown, oauth2-apple” that matches the userId of the user that was successfully signed in via “oauth2-apple” in the logs .Are there additional steps for Apple sign-in to create a Realm user? Google sign-in automatically creates a Realm user with the user’s Google data, so I thought Apple sign-in might be similar.Thanks!",
"username": "Jerry_Wang"
},
{
"code": "",
"text": "@Jerry_Wang Can you post more code to show what you are doing? Although it is Google OAuth - this post may help you - Facebook + Google OAuth Issues? - #6 by Sumedha_Mehta1",
"username": "Ian_Ward"
},
{
"code": "let realmApp = new Realm.App({\n id: APP_ID,\n timeout: 10000,\n app: {\n name: 'default',\n version: '0',\n }\n});\nimport appleAuth, {\n AppleAuthCredentialState,\n AppleAuthRequestOperation,\n AppleAuthRequestScope,\n AppleButton,\n} from '@invertase/react-native-apple-authentication';\n\n<AppleButton\n onPress={async () => {\n const identityToken = await getAppleIdentityToken();\n const credential = Realm.Credentials.apple(identityToken);\n const user = await realmApp.logIn(credential);\n // user.profile is undefined <----------------------------------\n }}\n/>\n\nconst getAppleIdentityToken = async () => {\n const appleAuthRequestResponse = await appleAuth.performRequest({\n requestedOperation: AppleAuthRequestOperation.LOGIN,\n requestedScopes: [\n AppleAuthRequestScope.EMAIL,\n AppleAuthRequestScope.FULL_NAME,\n ],\n });\n\n const credentialState = await appleAuth.getCredentialStateForUser(\n appleAuthRequestResponse.user,\n );\n\n if (credentialState === AppleAuthCredentialState.AUTHORIZED) {\n return appleAuthRequestResponse.identityToken;\n } else {\n throw new Error('Credential state is not authorized.');\n }\n};\n",
"text": "Hi Ian,Thanks for getting back to me!Google OAuth works perfectly for me.I’m having trouble with Apple sign-in. Apple sign-in is able to log a person in, but it looks like Realm is creating an anonymous user for that person in the backend rather than a user with information populated from Apple. The provider information in the Realm console shows “Unknown, oauth2-apple”.I’m creating a Realm app with:Then, this is how I’m signing a user in through Apple:The request is successful according to the Realm logs:\nScreen Shot 2020-10-06 at 10.55.48 PM2390×110 22.1 KBBut the user shows up as “unknown” in the App Users section of the Realm console:\nScreen Shot 2020-10-06 at 10.57.18 PM3058×96 23.4 KBReact Native: 0.63.1\nRealm: 10.0.0-rc.1",
"username": "Jerry_Wang"
},
{
"code": "",
"text": "Hey Jerry,You’re right, this does seem a bit weird. Do you mind messaging me a link to your application so we can take a deeper look.cc: @Ian_Ward this looks like a different issue.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Hey Jerry -So actually, Apple does not give any user information intentionally. The workaround here would be to use Realm’s custom user data to populate any extra information about the user when they login via a RealmYou can do this after the user logs in from the client or via an authentication trigger (these do not support Apple OAuth yet but we’re expecting that should be fixed within the next release in ~2 weeks).",
"username": "Sumedha_Mehta1"
},
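A rough React Native sketch of that custom-user-data workaround, reusing the realmApp/appleAuth setup from the earlier posts. The "mongodb-atlas" service name, the "myapp.users" namespace and the field names are placeholders; they have to match whatever collection is configured as the app's custom user data collection:

import Realm from 'realm';
import appleAuth, {
  AppleAuthRequestOperation,
  AppleAuthRequestScope,
} from '@invertase/react-native-apple-authentication';

const realmApp = new Realm.App({ id: APP_ID }); // same app id as in the earlier snippet

const signInWithApple = async () => {
  const response = await appleAuth.performRequest({
    requestedOperation: AppleAuthRequestOperation.LOGIN,
    requestedScopes: [AppleAuthRequestScope.EMAIL, AppleAuthRequestScope.FULL_NAME],
  });

  const user = await realmApp.logIn(Realm.Credentials.apple(response.identityToken));

  // Apple only returns the full name on the very first authorization, so it may be null later.
  const name = response.fullName
    ? `${response.fullName.givenName || ''} ${response.fullName.familyName || ''}`.trim()
    : null;

  // Persist whatever we did get into the (placeholder) custom user data collection.
  const users = user.mongoClient('mongodb-atlas').db('myapp').collection('users');
  await users.updateOne(
    { userId: user.id },
    { $set: { email: response.email || null, ...(name ? { name } : {}) } },
    { upsert: true }
  );

  return user;
};

Custom user data also has to be enabled in the Realm UI and pointed at that collection (with userId as the user ID field) before it shows up on user.customData.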
{
"code": "",
"text": "Gotcha.Just out of curiosity, why does storing the Apple user information require an authentication trigger? Apple does receive the email and full name (or requested scopes) for the user the first time a user signs in via Apple (or the first time a user signs in after revoking Apple credentials for the app). The information is available in the identity token that’s passed into Realm.Credentials.apple. Are there any plans to store the user information in the identity token automatically in Realm.User.profile in the future?Again, thanks for the help! Can’t wait for the next release (:",
"username": "Jerry_Wang"
},
{
"code": "",
"text": "I mentioned using Authentication triggers as a way to populate Custom User Data as that is a standard pattern that developers use when developing with Realm.I believe the reason we chose not to populate name/email is that they can’t be trusted and the email address could sometimes be an alias - see article from Okta hereNote: Apple will send the user’s name and email in the form post response back to your redirect URL. You should not treat these values as authoritative, because like the OAuth Implicit flow, the data cannot be safely trusted at this point. Unfortunately Apple does not return the user’s name in the ID token where it would be safe to trust.If you have suggestions based on Apple’s current API that might improve the experience, feel free to drop a suggestion here so we can track collective feedback from the community. Hope this was helpful!",
"username": "Sumedha_Mehta1"
},
{
"code": "currentUser.profileuser.data.emailuser.data.name",
"text": "Does it mean that with Apple Auth the email and username are unavailable? Or it’s only in a currentUser.profile, while we still can get the user’s email and username in auth trigger via user.data.email and user.data.name?",
"username": "Stanislaw_Baranski"
},
{
"code": "user.profile.emailuser.profile.name",
"text": "I am using the same code as Jerry_Wang. I would like to know the e-mail and name provided by the user via “Sign in with Apple”. user.profile.email is populated, but user.profile.name is not. From the Apple Authentication both are available. Is there a way to get the name from the profile or store it in CustomData upon creation?",
"username": "Tim_Pintens"
}
] | User not properly created with Apple sign in for React Native Realm | 2020-10-05T01:22:48.465Z | User not properly created with Apple sign in for React Native Realm | 5,682 |
null | [] | [
{
"code": "",
"text": "Our team is currently exploring the process of migrating MongoDB to an ARM-based virtual machine (AWS Graviton) in order to reduce our cloud spending costs. However, since this is new to everyone on our team, we’re unsure about the migration process for ARM-based workloads. Therefore, we would greatly appreciate any assistance you can provide in guiding us through the necessary steps for migration.If you have any documentation that outlines the migration process for ARM-based workloads, please share it with us. We are eager to learn as much as we can about this process and make this transition as smooth as possible.Thank you in advance for your support and any advice you can offer.",
"username": "Navin_prasad"
},
{
"code": "",
"text": "you need to wait for some more time for this as we found some issues in graviton\nraise case with mongo support",
"username": "abinas_roy"
},
{
"code": "",
"text": "Hello Abinas Roy, Thank you for your response. Unfortunately, I couldn’t raise support with MongoDB.\nIs the issue being tracked somewhere else? If yes, could you please share the link with me?",
"username": "Navin_prasad"
},
{
"code": "rs.stepDown(300)votes:0, priority:0mongodump/mongorestore",
"text": "The approach I’d take with this is the following. This is an online minimal/no downtime approach.I have not tested this with members of differing CPU architectures, so testing in a non-prod environment would be recommended.If the database is small enough and/or a tolerance for downtime exists then a mongodump/mongorestore might suit you well enough.See also the replica set tutorials:",
"username": "chris"
}
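A rough mongosh sketch of that rolling, member-by-member replacement. The hostnames are placeholders and, as noted above, this has not been validated across mixed CPU architectures, so treat it as an outline rather than a tested runbook:

// 1. Add the new ARM member with no votes/priority so it cannot affect elections
//    while it performs its initial sync (run on the primary).
rs.add({ host: "arm-node-1.example.net:27017", priority: 0, votes: 0 })

// 2. Wait until the new member reports SECONDARY in rs.status().
rs.status().members.map(m => ({ name: m.name, state: m.stateStr }))

// 3. Once synced, give it normal voting rights and priority.
var cfg = rs.conf()
var m = cfg.members.find(x => x.host === "arm-node-1.example.net:27017")
m.votes = 1
m.priority = 1
rs.reconfig(cfg)

// 4. Retire one x86 member. If it happens to be the current primary, run
//    rs.stepDown(300) on it first so another member is elected, then remove it
//    from the new primary.
rs.remove("x86-node-1.example.net:27017")

// Repeat steps 1-4 for each data-bearing member until the whole set runs on ARM.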
] | Migration process of MongoDB from x86 intel to ARM based VM | 2023-02-20T12:54:57.836Z | Migration process of MongoDB from x86 intel to ARM based VM | 1,202 |
null | [
"python"
] | [
{
"code": " File \"C:\\Users\\****\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\__init__.py\", line 87, in <module>\n from pymongo import _csot\n File \"C:\\Users\\****\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\_csot.py\", line 22, in <module>\n from pymongo.write_concern import WriteConcern\n File \"C:\\Users\\****\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\write_concern.py\", line 19, in <module>\n from pymongo.errors import ConfigurationError\n File \"C:\\Users\\****\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pymongo\\errors.py\", line 18, in <module>\n from bson.errors import InvalidDocument\nModuleNotFoundError: No module named 'bson'\n",
"text": "Initially I was having issues with pymongo because of a conflict between the bson module on PyPI and the one that comes with pymongo. I addressed this issue by doing the following:pip uninstall bson\npip uninstall pymongo\npip install pymongoEverything is successful, no errors here.However, now I have a new issue:I’ve spent hours trying to figure this out, any help would be greatly appreciated.",
"username": "Anton_Abashkin"
},
{
"code": "",
"text": "I have the same problem. It was working in python 3.8 and it broke then I upgraded to python 3.10.6\nbson==0.5.10\npymongo==4.3.3I’ve tried reinstalling the packages and downgrading pip to 22.3.1",
"username": "Matti_Kotsalainen"
},
{
"code": "",
"text": "I managed to fix it. I’m not sure what I did but I installed a new pyenv environment and I started using GitHub - pyenv/pyenv-virtualenv: a pyenv plugin to manage virtualenv (a.k.a. python-virtualenv) -",
"username": "Matti_Kotsalainen"
},
{
"code": "",
"text": "Do not install the “bson” package from pypi. See the warning here: Installing / Upgrading — PyMongo 4.3.3 documentationDo not install the “bson” package from pypi. PyMongo comes with its own bson package; doing “pip install bson” or “easy_install bson” installs a third-party package that is incompatible with PyMongo.",
"username": "Shane"
},
{
"code": "pymongo==4.3.3\npymongo[zstd]\npymongo[srv]\n",
"text": "I install dependencies from a requirements file. bson is not listed there. These are our pymongo requirements:I got it working after lots of trial and error but I’m not sure what I did to fix it.",
"username": "Matti_Kotsalainen"
},
{
"code": "",
"text": "@shane 's answer is clear about the package. however, I want to add some other things.new versions of some libraries versus the version of python installation has always its problems. a new library version may not have compiled binaries for old python, or a new/old library may not be compiled yet for new python. In such cases, the installer tries to use a system C compiler (CPython at least) and compile the library for the python of the currently active environment.emphasis is on the “version” and “active”. You guys might be forgetting to change activate/switch environments at times, which gives different results every time you try.Or maybe, as the lines get longer, you might be ignoring/missing possible compile errors on the way. so the actual library you were expecting might not even be there. This is very common in windows systems as compilers are not easy to install/manage.Two things are important when you start having problems: make sure you are in the right environment, and check the installation result. A new/clear virtual environment with a suitable python version is mostly the best solution.PS: pymongo 4.3.3, currently latest, has wheels for python 3.7 and later for most distributions, and a compile should be triggered for previous python versions, whereas 4.2.0 does not have any for python 3.11 which also should trigger a compile.",
"username": "Yilmaz_Durmaz"
}
] | Pymongo ModuleNotFoundError: No module named 'bson' | 2023-01-14T09:33:51.341Z | Pymongo ModuleNotFoundError: No module named ‘bson’ | 4,687 |
null | [
"golang"
] | [
{
"code": "",
"text": "Hello, I understant connect will build a connection with database, return the connection to client, so the “client” here is similar to the concept “session” in globalsign/mgo, right? but what happens in client.Database(“db_name”), I know it will return a reference of the database, such reference is thread safe to use? what happens in db.Collection(“coll_name”)? the reference of collection returned is thread safe too?",
"username": "Zhihong_GUO"
},
{
"code": "session.Copy()session.Clone()mongo-go-driverClientDatabaseCollection",
"text": "Hi @Zhihong_GUO,so the “client” here is similar to the concept “session” in globalsign/mgo, right?Although there are similarities between them, they are not quite the same. For example, there is no methods such as session.Copy() or session.Clone(). With mongo-go-driver, you just dial one client which can be passed around between routines (each running whatever commands it needed), and the connection pooling is handled for you.Also, worth mentioning that there’s also Session in mongo-go-driver, which is an interface that represents a MongoDB logical session.I know it will return a reference of the database, such reference is thread safe to use? what happens in db.Collection(“coll_name”)?Client, Database and Collection are safe for concurrent use by multiple goroutines.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thank you Wan. So I can do something like that:type MyService struct {\ncoll *mongo.Collection //I will save the collect as a member\n}//PaginateDocs can be accessed by several “client” apps\nfunc (serv *MyService) PaginateDocs (id int, filter Filter, opt FindOptions, result interface{}) error {\nc, err := serv.coll.Find(context.TODO(), filter, opt) //so here I can use the member, without concern of thread safe\nerr = c.All(context.TODO(), &result)\nreturn err\n}Thank you for your answer.",
"username": "Zhihong_GUO"
},
{
"code": "",
"text": "Yes, you can pass mongo.Collection around in the application.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks a lot for the answer!",
"username": "Zhihong_GUO"
},
{
"code": "",
"text": "I am sharing db.Collection(“coll_name”) in my application and have a query (I think my query is on the same line of this thread “What happens behind collection in the Go driver”)",
"username": "Ajinkya_Rawankar"
},
{
"code": "",
"text": "it should only hold a reference to the connection (maybe indirectly) pool.\nConnections are shared by all queries, so whichever is available, the query will try using it.multiple queries are supposed to use different connections if one is available. (i’m not a mongo employee btw). this is so that requests don’t block each other. Just like there is normally only one active request on a single http connection. (no http pipelining).",
"username": "Kobe_W"
},
{
"code": "",
"text": "yes, makes sense, thanks a lot for the answer",
"username": "Ajinkya_Rawankar"
},
{
"code": "mongo.Connect()Connect()",
"text": "The mongo.Connect() method in the Go driver for MongoDB creates a new client to connect to a MongoDB server. When you call Connect() ,",
"username": "Karl_Shady"
}
] | What happens behind Connect, Database, Collection in the Go driver? | 2020-02-22T04:32:03.888Z | What happens behind Connect, Database, Collection in the Go driver? | 4,680 |