Columns: image_url (string), tags (list), discussion (list), title (string), created_at (string), fancy_title (string), views (int64)
null
[]
[ { "code": "cat /var/run/mongodb/mongod.pid 2>/dev/null", "text": "I configured logrotate, it’s seems to me that works but other the .gz files that contains the mongd.log there are files with 0 bytes in size of the past mongod.log files. What I have to do to have the compressed file fo the logs. Thank you in advance.-rw-r----- 1 mongod mongod 0 Sep 18 00:00 mongod.log.2023-09-18T00-00-02\n-rw-r----- 1 mongod mongod 0 Sep 19 00:00 mongod.log.2023-09-19T00-00-01\n-rw-r----- 1 mongod mongod 0 Sep 20 00:00 mongod.log.2023-09-20T00-00-01\n-rw-r----- 1 mongod mongod 0 Sep 22 00:00 mongod.log.2023-09-22T00-00-01\n-rw-r----- 1 mongod mongod 0 Sep 23 00:00 mongod.log.2023-09-23T00-00-02\n-rw-r----- 1 mongod mongod 0 Sep 24 00:00 mongod.log.2023-09-24T00-00-01\n-rw------- 1 mongod mongod 32992 Sep 25 00:00 mongod.log-20230925_00.gz\n-rw-r----- 1 mongod mongod 0 Sep 25 00:00 mongod.log.2023-09-25T00-00-01\n-rw------- 1 mongod mongod 33062 Sep 26 00:00 mongod.log-20230926_00.gz\n-rw-r----- 1 mongod mongod 0 Sep 26 00:00 mongod.log.2023-09-26T00-00-01\n-rw------- 1 mongod mongod 33001 Sep 27 00:00 mongod.log-20230928_00.gzsystemLog:\ndestination: file\nlogAppend: truepath: /var/log/mongodb/mongod.log\ntimeStampFormat: iso8601-utc/var/log/mongodb/mongod.log {\ndaily\nrotate 5\ndateext\ndateformat -%Y%m%d_%H\ncompress\ndelaycompress\nmissingok\ncreate640 mongod mongod\nsharedscripts\npostrotate\n/bin/kill -USR1 cat /var/run/mongodb/mongod.pid 2>/dev/null >/dev/null 2>&1\nendscript", "username": "Enrico_Bevilacqua1" }, { "code": "systemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n logRotate: reopen\nprocessManagement: \n pidFilePath: /var/run/mongodb/mongod.pid\n/var/log/mongodb/mongod.log {\n daily\n size 100M\n rotate 10\n missingok\n compress\n delaycompress\n notifempty\n create 640 mongod mongod\n sharedscripts\n postrotate\n /bin/kill -SIGUSR1 `cat /var/run/mongodb/mongod.pid 2>/dev/null` >/dev/null 2>&1\n endscript\n}\n", "text": "Hello everything is fine? I hope so You can try to do it like this:And logrotate.confThis is just an example and you can adjust size, retention, among other things however you prefer.", "username": "Samuel_84194" }, { "code": "", "text": "Hi @Enrico_Bevilacqua1,\nAs mentioned from @Samuel_84194, I think you need to add the parameter logRotate: reopen.From documentation:“You can also configure MongoDB to support the Linux/Unix logrotate utility by setting systemLog.logRotate or --logRotate to reopen. With reopen, mongod or mongos closes the log file, and then reopens a log file with the same name, expecting that another process renamed the file prior to rotation.”\nhttps://www.mongodb.com/docs/manual/tutorial/rotate-log-files/#:~:text=You%20can%20also%20configure%20MongoDB%20to%20support%20the%20Linux/Unix%20logrotate%20utility%20by%20setting%20systemLog.logRotate%20or%20--logRotate%20to%20reopen.%20With%20reopen%2C%20mongod%20or%20mongos%20closes%20the%20log%20file%2C%20and%20then%20reopens%20a%20log%20file%20with%20the%20same%20name%2C%20expecting%20that%20another%20process%20renamed%20the%20file%20prior%20to%20rotation.", "username": "Fabio_Ramohitaj" }, { "code": "cat /var/run/mongodb/mongod.pid 2>/dev/null", "text": "First of all I wanto to thank you for your reply.\nI’m sorry but the change proposed it didn’t work for me.\nFollowing the mongod.conf section about the system log, the list of the files that I got and the logroate.d config file. 
Where am I wrong?systemLog:\ndestination: file\nlogAppend: true\nlogRotate: reopen\npath: /var/log/mongodb/mongod.log\ntimeStampFormat: iso8601-utctotal 992\n-rw-r----- 1 mongod mongod 231214 Oct 5 08:52 mongod.log\n-rw-r----- 1 mongod mongod 0 Sep 30 00:00 mongod.log.2023-09-30T00-00-01\n-rw------- 1 mongod mongod 33606 Oct 1 00:00 mongod.log-20231001_00.gz\n-rw-r----- 1 mongod mongod 0 Oct 1 00:00 mongod.log.2023-10-01T00-00-01\n-rw------- 1 mongod mongod 33562 Oct 2 00:00 mongod.log-20231002_00.gz\n-rw-r----- 1 mongod mongod 0 Oct 2 00:00 mongod.log.2023-10-02T00-00-01\n-rw------- 1 mongod mongod 37762 Oct 3 00:00 mongod.log-20231003_00.gz\n-rw-r----- 1 mongod mongod 0 Oct 3 00:00 mongod.log.2023-10-03T00-00-01\n-rw------- 1 mongod mongod 37930 Oct 4 00:00 mongod.log-20231004_00.gz\n-rw-r----- 1 mongod mongod 626234 Oct 5 00:00 mongod.log-20231005_00daily\nrotate 5\ndateext\ndateformat -%Y%m%d_%H\ncompress\ndelaycompress\nmissingok\ncreate640 mongod mongod\nsharedscripts\npostrotate\n/bin/kill -USR1 cat /var/run/mongodb/mongod.pid 2>/dev/null >/dev/null 2>&1\nendscript", "username": "Enrico_Bevilacqua1" } ]
Multiple 0-byte log files from logrotate
2023-09-29T10:36:51.384Z
Multiple 0-byte log files from logrotate
286
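The 0-byte mongod.log.<ISO timestamp> files listed in the thread match the naming used by mongod's default rename-style rotation, so the combination usually wanted here is systemLog.logRotate: reopen (which only takes effect after mongod is restarted with the new config) together with a postrotate line whose command substitution is intact — in the configs quoted above the backticks around the cat appear to have been stripped in the paste. A hedged sketch only, keeping the poster's paths and rotation settings:

systemLog:
  destination: file
  logAppend: true
  logRotate: reopen
  path: /var/log/mongodb/mongod.log
  timeStampFormat: iso8601-utc

/var/log/mongodb/mongod.log {
    daily
    rotate 5
    compress
    delaycompress
    missingok
    notifempty
    create 640 mongod mongod
    sharedscripts
    postrotate
        /bin/kill -USR1 $(cat /var/run/mongodb/mongod.pid 2>/dev/null) >/dev/null 2>&1
    endscript
}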
https://www.mongodb.com/…c_2_1024x602.png
[ "graphql" ]
[ { "code": "", "text": "Hi Support,We have been using Mongo Atlas App Service as a GraphQL service to access MongoDB for the last 5 - 6 months and APIs were working fine but in the last 15 days we have been getting “OperationCanceled Error” in apis sometimes so can you please check why we are getting this error sometime?image1296×762 102 KBOur Org name is NeurologikThanks\nRaman", "username": "Raman_Kumar" }, { "code": "OperationCanceled", "text": "Hi @Raman_Kumar,The OperationCanceled error means typically what it says - the client, for whatever reason, has decided to cancel the request. This could have had different causes (explicit cancel, connection drop, …), but in general is not due to anything wrong on the backend side, it’s just an acknowledgment that the request was canceled.There are other errors in your logs, that seem to point to some overlapping of queries and mutations, you may want to look into those as well.Unfortunately, you’ve chosen to go for a single, API Key-based user, that, if clients are calling GraphQL directly, is both pretty bad for security (that type of API Key is meant to be used by servers, especially where mutations happen), and even worse for diagnostics, as it’s impossible to track whatever flow that particular client was following: if your app is indeed routed through a server you’ve control on, you may be in a better position to understand what happened there.", "username": "Paolo_Manna" } ]
OperationCanceled Error in Graphql APIs
2023-10-05T08:14:21.790Z
OperationCanceled Error in Graphql APIs
248
null
[ "aggregation", "queries" ]
[ { "code": "db.places.find( {\n loc: { $geoWithin: { $centerSphere: [ [ -88, 30 ], 10/3963.2 ] } }\n} )\n\nvar point = new GeoJsonPoint<GeoJson2DCoordinates>(); (X, Y Coords)\nvar results = await collection.Aggregate()\n .Search(Builders<MyDocument>.Search\n .Compound()\n .Filter(Builders<MyDocument>.Search.Equals(d => d.IsDeleted, false))\n .Must(Builders<MyDocument>.Search.GeoWithin(\n d => d.GeoCode, point)),\n indexName: MyCollectionName)\n .ToListAsync(cancellationToken)\n", "text": "Given I have a Search Index with a Geo Mapping, is it possible to use the search index to perform a GeoWithin with a CentreSphere query? I have a lat/lng and radius.I’ve seen I can do this on the main collection:But is this possible to query using the Search Index?Cheers", "username": "JerP" }, { "code": "", "text": "Added a 2dSpehere index to the collection, and used the $near operator on the collection with this index over the Search index", "username": "JerP" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Geo CentreSphere Query
2023-10-04T07:48:01.577Z
Geo CentreSphere Query
260
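The workaround JerP settled on (a 2dsphere index on the collection plus $near, instead of the Atlas Search geo mapping) might look like this in mongosh; the collection, field and coordinates come from the example in the thread, and the $maxDistance of 16093 meters (roughly 10 miles) is an assumption about the intended radius:

db.places.createIndex({ loc: "2dsphere" })
db.places.find({
  loc: {
    $near: {
      $geometry: { type: "Point", coordinates: [ -88, 30 ] },
      $maxDistance: 16093
    }
  }
})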
null
[ "aggregation" ]
[ { "code": "", "text": "Hi Team,We do not have backup , but particular collections few records was deleted by accidentlyIn Replication how do we recovering data and restore particular collection without any impact to collections ?", "username": "Srihari_Mamidala" }, { "code": "", "text": "any one can update me ?", "username": "Srihari_Mamidala" }, { "code": "", "text": "Hello, welcome to the MongoDB community.Ideally, you should always keep a backup of your data.You can try to recover using the oplog, if you have all the oplogs. Since you don’t have any backups, you need to have them all to maintain the integrity of your system.If you have it, you can use the following step by step:", "username": "Samuel_84194" }, { "code": "", "text": "Or if you have a large oplog window, you can recover using an aggregate, as explained in this topic:", "username": "Samuel_84194" } ]
How to do point-in-time recovery of data
2023-10-03T19:12:37.642Z
How to do point-in-time recovery of data
427
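If the oplog window still covers the accidental delete, the affected entries can at least be located before attempting the replay described in the linked posts. A sketch in mongosh, run against a replica-set member; mydb.mycoll is a placeholder namespace:

use local
db.oplog.rs.find(
  { op: "d", ns: "mydb.mycoll" },
  { ts: 1, wall: 1, o: 1 }
).sort({ $natural: -1 }).limit(10)

Delete entries only carry the _id of the removed document in the o field, so the full documents still have to come from an oplog replay or a backup.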
null
[ "swift" ]
[ { "code": "", "text": "If i follow the doc https://www.mongodb.com/docs/atlas/app-services/authentication/apple/ i get the error “Failed to log in: failed to lookup key for kid=xxxx” in xcode debug.Using this guide Synced Realm on iOS with SwiftUI using Sign-in with Apple for Authentication i use the SignInWithAppleButton button for generete the jwt token and it works.", "username": "Roberto_D_Isanto" }, { "code": "", "text": "Hi @Roberto_D_Isanto, did you fix this? I’m getting the same error.Thanks.", "username": "varyamereon" } ]
Sign in with Apple - Failed to log in: failed to lookup key for kid=xxxx - fixed
2022-09-30T22:34:48.292Z
Sign in with Apple - Failed to log in: failed to lookup key for kid=xxxx - fixed
1,714
null
[]
[ { "code": "{\n \"orderId\": \"12345678900000\",\n \"percentage\": 0.20,\n \"subtotalAmount\": 8000,\n \"serviceFeeAmount\": 1600\n }\n{ $toInt: { $multiply [ '$subtotalAmount', '$percentage'] } }", "text": "Hi there -\nI can’t seem to find the simple answer to this anywhere. Here’s an example of what Id like to achieve,\nGiven the following document:I would like the serviceFeeAmount to calculate it’s value\n{ $toInt: { $multiply [ '$subtotalAmount', '$percentage'] } }\nbased on the inserted subtotalAmount & percentage values.Thanks so much!", "username": "falisse_frazier" }, { "code": "percentagesubtotalAmountserviceFeeAmountpercentagesubtotalAmountserviceFeeAmountserviceFeeAmountsubTotalAmountpercentage", "text": "Hi @falisse_frazier - Welcome to the community.Sounds like you want this value calculated upon insert but please correct me if i’m wrong. If so, is there a reason you are not calculating this beforehand and just inserting the calculated value then? i.e., You have percentage and subtotalAmount values to be inserted, what is the reason for not calculating serviceFeeAmount and inserting all 3 of these fields in 1 go?However, you’ve also provided the sample document - I would like to clarify if this means that it is already inserted into your database (or at least the percentage and subtotalAmount) and you want to calculate serviceFeeAmount based off this already inserted document? i.e., you just want to update the document (that is already inserted) to have the serviceFeeAmount equal to the multiplication result of subTotalAmount and percentage.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "CREATE TABLE people (\n ...,\n height_cm numeric,\n height_in numeric GENERATED ALWAYS AS (height_cm / 2.54) STORED\n);\n", "text": "Hi there @Jason_Tran\nThe former is correct and Yes, for security purposes this calculation needs to happen inline. The code above is just a simple, generic example of what I’m trying to achieve. I am looking for an equivalent to Postgresql’s generated columns:A generated column is a special column that is always computed from other columns. Thus, it is for columns what a view is for tables. There are two kinds of generated columns: stored and virtual. A stored generated column is computed when it is written (inserted or updated) and occupies storage as if it were a normal column. A virtual generated column occupies no storage and is computed when it is read. Thus, a virtual generated column is similar to a view and a stored generated column is similar to a materialized view (except that it is always updated automatically).Are generated columns possible in MongoDB? 
Or would I have to create a view?\nThanks so much!", "username": "falisse_frazier" }, { "code": "exports = function(changeEvent) {\n const fullDocument = changeEvent.fullDocument;\n const collection = context.services.get(\"Cluster0\").db(\"scores\").collection(\"collection\");\n const update = [{\n \"$set\": {\n \"serviceFeeAmount\": { \n \"$multiply\" : [\"$subtotalAmount\", \"$percentage\"] \n }\n }\n }];\n collection.updateOne({'orderId':fullDocument.orderId}, update);\n};\nmongosh\"serviceFeeAmount\"DB> db.collection.find({})\n/// No documents as of yet\nDB> db.collection.insertOne({ \"orderId\": \"12345678900000\", \"percentage\": 0.20, \"subtotalAmount\": 8000 })\n{\n acknowledged: true,\n insertedId: ObjectId(\"642cec310e4251108de45c1b\")\n}\nDB> db.collection.insertOne({ \"orderId\": \"12345678000001\", \"percentage\": 0.50, \"subtotalAmount\": 10000})\n{\n acknowledged: true,\n insertedId: ObjectId(\"642cec4b0e4251108de45c1c\")\n}\n/// inserted 2 documents above WITHOUT specifying a \"serviceFeeAmount\"\nDB> db.collection.find({})\n[\n {\n _id: ObjectId(\"642cec310e4251108de45c1b\"),\n orderId: '12345678900000',\n percentage: 0.2,\n subtotalAmount: 8000,\n serviceFeeAmount: 1600 /// <--- Now has serviceFeeAmount field\n },\n {\n _id: ObjectId(\"642cec4b0e4251108de45c1c\"),\n orderId: '12345678000001',\n percentage: 0.5,\n subtotalAmount: 10000,\n serviceFeeAmount: 5000 /// <--- Now has serviceFeeAmount field\n }\n]\n", "text": "Thanks for clarifying Falisse. You could try creating a view but off the top of my head, if you’re using Atlas you can also consider testing Database Triggers to see if it suits your use case and requirements.I created a very simple trigger function to run upon insert for demonstration purposes:Using mongosh for the inserts (inserted documents don’t have a \"serviceFeeAmount\" field specified):I’ve only done brief testing for several minutes on this so you may want to go over the following documentation if you believe it might suit your use case:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "I want to add to Jason_Tran’s answer.Why store a value that is a direct computation of other fields? You may always produce this value using a projection or $set/$addFields. Storing it increases the storage size.One reason to store such a computed field would be to be able to query it efficiently using an index.Jason, I am not too familiar with Atlas triggers so I am wondering if doing a $set on a trigger would result in the document being written twice. Once for the initial insert and a second time for the update. With change stream, that would be the case but I am not too sure with triggers.", "username": "steevej" }, { "code": "", "text": "Good point Steve. In terms of the number of writes, I believe triggers work a similar manner to what you’ve described. I.e., A write for the insert and another for the update.", "username": "Jason_Tran" }, { "code": "", "text": "Thanks for the clarification.Personally, then if and only if it is required to store the computed field, I would avoid the double write and compute the field in my data insert API, with the caveat that any insert done without the API would not have the field. Perhaps with schema validation could prohibit that. 
But my first choice would be to not store the computed field and use $project to do it.", "username": "steevej" }, { "code": "{ eventType: insert, \n data: {\n incoming: document\n outgoing: null\n }\n}\n\n{ eventType: update, \n data: {\n incoming: document\n outgoing: document\n }\n}\n\n{ eventType: delete, \n data: {\n incoming: null\n outgoing: document\n }\n}\n", "text": "Thanks so much @Jason_Tran & @steevej!I had no idea MongoDB supports Triggers \nIf the specification is anything like RDB Triggers, you should be able to control which event type (Insert, Update or Delete) your trigger will handle. The trigger object would also have plenty of info to determine whether the field has already been set, e.g.As for my solution, we ended up transferring our “relational” data to Postgres rather than fighting to make it work in MongoDB. However, the majority of our variable data is still persisted to MongoDB. ", "username": "Falisse_Frazier1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it possible to add a computed field to an existing collection such that as values change for a record the field is updated?
2023-03-30T19:22:19.438Z
Is it possible to add a computed field to an existing collection such that as values change for a record the field is updated?
1,155
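A concrete form of the read-time approach steevej recommends (compute the fee with a projection instead of storing it) could look like this in mongosh; the collection name orders and the view name are placeholders:

db.orders.aggregate([
  { $project: {
      orderId: 1,
      percentage: 1,
      subtotalAmount: 1,
      serviceFeeAmount: { $toInt: { $multiply: [ "$subtotalAmount", "$percentage" ] } }
  } }
])

// or persist the same computation as a view
db.createView("ordersWithFee", "orders", [
  { $set: { serviceFeeAmount: { $toInt: { $multiply: [ "$subtotalAmount", "$percentage" ] } } } }
])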
null
[ "sharding" ]
[ { "code": "", "text": "Hello,I have created a sharded environment with 2 mongos. I am trying to use HAProxy as a load balancer for the mongos. When I try to connect using the loadBalanced=true flag I always get error.MongoCompatibilityError: Driver attempted to initialize in load balancing mode, but the server does not support this mode.Maybe I have missed some configurations or is it possible to use the loadBalanced=true flag when using HAProxy as load balancer for the mongos?", "username": "csac" }, { "code": "", "text": "I am trying to use HAProxy as a load balancer for the mongos.are you saying you put 2 mongos behind a HAProxy ?what’s your connection URI like?", "username": "Kobe_W" }, { "code": "", "text": "Yes. It is correct.\nMy connection URI is mongodb://[host]:27017/test?loadBalanced=true", "username": "csac" }, { "code": "=====================\nLoad Balancer Support\n=====================\n\n:Status: Accepted\n:Minimum Server Version: 5.0\n\n.. contents::\n\n--------\n\nAbstract\n========\n\nThis specification defines driver behaviour when connected to MongoDB services\nthrough a load balancer.\n\nMETA\n====\n\n", "text": "i don’t know how mongodb handles this options, but i got below links. Hope they can help:https://jira.mongodb.org/browse/SERVER-58502", "username": "Kobe_W" } ]
Load balancing mongos
2023-10-04T02:06:59.925Z
Load balancing mongos
304
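Since loadBalanced=true assumes the server side has been set up for the drivers' load-balancer mode, the more common pattern for self-managed mongos is to skip the proxy and list every mongos in the seed list, letting the driver spread operations across them itself. A sketch with placeholder host names:

mongodb://mongos1.example.net:27017,mongos2.example.net:27017/test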
https://www.mongodb.com/…_2_1024x573.jpeg
[ "jakarta-mug" ]
[ { "code": "Full-stack DeveloperFull-stack DeveloperJakarta MUG Leader", "text": "\nScreen Shot 2023-08-31 at 21.17.081724×966 202 KB\nGet ready for a fun and interactive way to build an investment app using mongodb.what will be there ?The session will brought by two full-stack developer ilham and severin which using methods of extreme programming as their daily basis on working at jenius.Don’t miss out on this incredible opportunity to learn and have some fun with other MongoDB users. Be sure to mark your calendars for September 7th, 2023, 7:00PM Onwards and we’ll see you there! \nTo RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are RSVPed. You need to be signed in to access the button.Event Type: Online\nLink(s):\nVideo Conferencing URLFull-stack Developer at JeniusFull-stack Developer at JeniusJakarta MUG Leader", "username": "Fajar_Abdul_Karim" }, { "code": "", "text": "Gentle Reminder: Jakarta MongoDB User Group Online Meetup (in Bahasa) is tomorrow at 07:00 PM.This is your chance to dive into the world of MongoDB Atlas and discover how it empowers you to build amazing investment applications. We are thrilled to have you join us.Zoom Link: Launch Meeting - ZoomLooking forward to seeing you all at the event!", "username": "Harshit" }, { "code": "", "text": "It’s not in english. At least mention Language.", "username": "Mohammad_Zuha_Khalid" }, { "code": "", "text": "thank you for the feedback, we already mention in-bahasa in the post Image. I will make it bigger next time and put it in the description.", "username": "Fajar_Abdul_Karim" }, { "code": "", "text": "WhatsApp Image 2023-10-04 at 20.31.181200×1600 239 KBthank you for being our speaker @severinnafa @ilham, hope you like the swagg", "username": "Fajar_Abdul_Karim" } ]
Jakarta MUG: Building Investment App with MongoDB Atlas Function
2023-08-27T05:28:13.576Z
Jakarta MUG: Building Investment App with MongoDB Atlas Function
1,737
null
[]
[ { "code": "[{\n \"description\": \"PC-13\",\n \"comments\": \"\",\n \"parts\": [\n {\n \"id\": \"PARTS-23aq\",\n \"type\": \"mouse\",\n \"description\": \"\"\n },\n {\n \"id\": \"PARTS-18be\",\n \"type\": \"keyboard\",\n \"description\": \"\"\n },\n {\n \"id\": \"PARTS-53ih\",\n \"type\": \"monitor\",\n \"description\": \"\"\n }\n ]\n}, {\n \"description\": \"PC-09\",\n \"comments\": \"\",\n \"parts\": [\n {\n \"id\": \"PARTS-31me\",\n \"type\": \"monitor\",\n \"description\": \"\"\n },\n {\n \"id\": \"PARTS-36hs\",\n \"type\": \"mouse\",\n \"description\": \"\"\n },\n {\n \"id\": \"PARTS-74bc\",\n \"type\": \"keyboard\",\n \"description\": \"\"\n }\n ]\n}, {\n \"description\": \"PC-49\",\n \"comments\": \"\",\n \"parts\": [\n {\n \"id\": \"PARTS-48up\",\n \"type\": \"monitor\",\n \"description\": \"\"\n },\n {\n \"id\": \"PARTS-90hz\",\n \"type\": \"mouse\",\n \"description\": \"\"\n },\n {\n \"id\": \"PARTS-14zg\",\n \"type\": \"keyboard\",\n \"description\": \"\"\n }\n ]\n}\n]\n{\n\t\"_id\": ObjectId(\"507f1f77bcf86cd799439011\"),\n\t\"audit_date\": ISODate(\"2023-10-03T08:00:00.000Z\"),\n\t\"status\": \"IN-PROGRESS\",\n\t\"audit_notes\": {\n\t\t\"comments\": \"IN-PROGRESS\",\n\t\t\"auditor\": \"Jane Doe\",\n\t\t\"result\": [{\n\t\t\t \"description\": \"PC-13\",\n\t\t\t \"comments\": \"\",\n\t\t\t \"audit_overall_status\": \"IN-PROGRESS\",\n\t\t\t \"parts\": [\n\t\t\t\t{\n\t\t\t\t \"id\": \"PARTS-23aq\",\n\t\t\t\t \"type\": \"mouse\",\n\t\t\t\t \"description\": \"\",\n\t\t\t\t \"audit_status\": \"PASSED\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t \"id\": \"PARTS-18be\",\n\t\t\t\t \"type\": \"keyboard\",\n\t\t\t\t \"description\": \"\",\n\t\t\t\t \"audit_status\": \"PASSED\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t \"id\": \"PARTS-53ih\",\n\t\t\t\t \"type\": \"monitor\",\n\t\t\t\t \"description\": \"\",\n\t\t\t\t \"audit_status\": \"PASSED\"\n\t\t\t\t}\n\t\t\t ]\n\t\t\t}, {\n\t\t\t \"description\": \"PC-09\",\n\t\t\t \"comments\": \"\",\n\t\t\t \"parts\": [\n\t\t\t\t{\n\t\t\t\t \"id\": \"PARTS-31me\",\n\t\t\t\t \"type\": \"monitor\",\n\t\t\t\t \"description\": \"\",\n\t\t\t\t \"audit_status\": \"IN-PROGRESS\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t \"id\": \"PARTS-36hs\",\n\t\t\t\t \"type\": \"mouse\",\n\t\t\t\t \"description\": \"\",\n\t\t\t\t \"audit_status\": \"FAILED\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t \"id\": \"PARTS-74bc\",\n\t\t\t\t \"type\": \"keyboard\",\n\t\t\t\t \"description\": \"\",\n\t\t\t\t \"audit_status\": \"PASSED\"\n\t\t\t\t}\n\t\t\t ]\n\t\t\t}, {\n\t\t\t \"description\": \"PC-49\",\n\t\t\t \"comments\": \"\",\n\t\t\t \"parts\": [\n\t\t\t\t{\n\t\t\t\t \"id\": \"PARTS-48up\",\n\t\t\t\t \"type\": \"monitor\",\n\t\t\t\t \"description\": \"\",\n\t\t\t\t \"audit_status\": \"IN-PROGRESS\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t \"id\": \"PARTS-90hz\",\n\t\t\t\t \"type\": \"mouse\",\n\t\t\t\t \"description\": \"\",\n\t\t\t\t \"audit_status\": \"IN-PROGRESS\"\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t \"id\": \"PARTS-14zg\",\n\t\t\t\t \"type\": \"keyboard\",\n\t\t\t\t \"description\": \"\",\n\t\t\t\t \"audit_status\": \"IN-PROGRESS\"\n\t\t\t\t}\n\t\t\t ]\n\t\t\t}\n\t\t\t]\n\t}\n}\n", "text": "Hi,\nI want to ask for help with how to do this in MongoDB.I created a collection named “resources” which contains the resources in my school like computers.\nI have created the following collection.Now, I wanted to execute an “audit” where I would check the status of my resources.\nSo basically, in my application administrator would verify each computer resource part if they are still in order. 
If they are still working, in failure, etc.I would like to create a document in a separate collection named “resources_audit” where I could transfer the content of one collection (resources) into another (resources_audit).\nIn that document, I would log the date of the audit, who did the audit, and the audit status for each resource.Similar to the one belowIs what I am thinking feasible? Or how could this be done?", "username": "Nel_Neliel" }, { "code": "db.collection.aggregate([\n {\n $group: {\n _id: null,\n allItems: {\n $push: \"$$ROOT\"\n }\n }\n }\n])\n", "text": "You could group, pushing all documents into a single field:Mongo playground: a simple sandbox to test and share MongoDB queries onlineObviously you can re-shape as needed before combining them. Combining multiple into one document, watch out for the document size limit of 16MB, but that’s still a lot of data you could fit in there.", "username": "John_Sewell" }, { "code": "", "text": "To add to @John_Sewell’s response there are two aggregation stages that can output to another collection they are $merge and $out and $mergeThe main difference between the two is that $out will completely overwrite the output collection and $merge will allow you to merge the aggregation output into the target collection.", "username": "chris" }, { "code": "{\n \"_id\": ObjectId(\"5a934e000102030405000000\"),\n \"comments\": \"\",\n \"description\": \"PC-13\",\n \"parts\": [\n {\n \"description\": \"\",\n \"id\": \"PARTS-23aq\",\n \"type\": \"mouse\"\n },\n {\n \"description\": \"\",\n \"id\": \"PARTS-18be\",\n \"type\": \"keyboard\"\n },\n {\n \"description\": \"\",\n \"id\": \"PARTS-53ih\",\n \"type\": \"monitor\"\n }\n ]\n },\n{\n\t\"_id\": ObjectId(\"5a934e000102030405000000\"),\n\t\"comments\": \"\",\n\t\"description\": \"PC-13\",\n\t\"parts\": [\n\t\t{\n\t\t\t\"description\": \"\",\n\t\t\t\"id\": \"PARTS-23aq\",\n\t\t\t\"type\": \"mouse\",\n\t\t\t\"audit_status\": \"FAILED\"\n\t\t},\n\t\t{\n\t\t\t\"description\": \"\",\n\t\t\t\"id\": \"PARTS-18be\",\n\t\t\t\"type\": \"keyboard\",\n\t\t\t\"audit_status\": \"FAILED\"\n\t\t},\n\t\t{\n\t\t\t\"description\": \"\",\n\t\t\t\"id\": \"PARTS-53ih\",\n\t\t\t\"type\": \"monitor\",\n\t\t\t\"audit_status\": \"FAILED\"\n\t\t}\n\t]\n }\n", "text": "First of all, I would like to say thanks to both of you for introducing me to the aggregation framework.\nI have only learned about it today and how powerful it could be.I do have a question though that I cannot figure out how to do this. I have my playground here https://www.mongoplayground.net/p/cgE58Ys1kbDI would like to know how can I insert a new field to the array that we have pushed?\nThe current output looks like this.But I wanted to add a default field like below and set it to FAILED.Not sure if this is feasible?Thank you.", "username": "Nel_Neliel" }, { "code": " {\n \"$addFields\": {\n \"overall_status\": \"IN-PROGRESS\",\n \"auditor\": \"Jane Doe\",\n \"audit_notes.parts.result\": \"FAILED\"\n }\n },\n", "text": "You can use the dot notation:Mongo playground: a simple sandbox to test and share MongoDB queries onlineThe aggregation framework is an awesome tool to slice and dice data, update it or all kinds of things!If you’ve not done so already, check out the MongoDB university courses, i.e.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. 
Start training with MongoDB University for free today.", "username": "John_Sewell" }, { "code": "db.collection.aggregate([\n {\n $group: {\n _id: null,\n allItems: {\n $push: \"$$ROOT\"\n }\n }\n }\n])\n", "text": "Hey,\nI was thinking along the lines about doing something about theBut alas, I am overthinking things and I just needed the dot notation on the $addfieldsThank you very much for your help and will take some of the MongoDB courses to learn more about MongoDB.", "username": "Nel_Neliel" }, { "code": "", "text": "Also check out:Learn about MongoDB Aggregations to develop effective and optimal data manipulation and analytics aggregation pipelines with this book, using the MongoDB Aggregation Framework (aggregate)Which is also available in print. Lucky me received a free copy recently.", "username": "chris" }, { "code": "", "text": "That from the mongo local event?", "username": "John_Sewell" }, { "code": "", "text": "Yes it was. Great event.", "username": "chris" }, { "code": "", "text": "I was in the London event but did hang about long enough to get a copy!Getting off topic!", "username": "John_Sewell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Transferring content of one collection into a document in a different collection
2023-10-03T10:51:47.893Z
Transferring content of one collection into a document in a different collection
360
null
[]
[ { "code": "", "text": "Good fellows I have a question that I would like to solve: We have a Tier M30 cluster with 130 GB storege with storage scaling activated.\nWell currently we have lowered the disk space to about 114.0Gb in the next few days we will continue lowering.\nIs it advisable to manually lower the GB of disk space? Will there be any kind of stop or affect the production startup?\nI also do not understand how my collections in total occupy 109.594GB and instead in the monitoring it continues putting Disk Usage 114.0 GB, where can I free space? Can it be configured to scale down as it does automatically when it needs to expand?Thank you very much are several questions but it is a topic that has me a little disorienting, thanks once more", "username": "Juan_Jose_Garcia_Gonzalez" }, { "code": "", "text": "Hi @Juan_Jose_Garcia_Gonzalez,\nI will try to help you!Doing a very simple calculation, 90% of 130Gb is 117Gb, so the next time when will scale the cluster storage.From the documentation:These three parameters should answer your question:I don’t seem to have found any warnings inherent in manually reducing storage, but I personally wouldn’t do it.Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Hi and welcome to the forums @Juan_Jose_Garcia_GonzalezI also do not understand how my collections in total occupy 109.594GB and instead in the monitoring it continues putting Disk Usage 114.0 GB, where can I free space? Can it be configured to scale down as it does automatically when it needs to expand?As items are removed from the collection the blocks are simply marked as free for reuse for future document updates or inserts. To recover any free space to the OS you would need to compact the collection.Directly connect to each secondary in turn and compact the collection(s) then use the “Test Failover” to switch Primary to another member and repeat the compact on this member.But @Fabio_Ramohitaj’s advice remains valid, you would be close to storage autoscaling threshold. Maybe the cost/benefit works out for you though!", "username": "chris" } ]
Question about disk space
2023-10-03T13:35:21.129Z
Question about disk space
244
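The compact step chris describes is issued while connected directly to a secondary, against the database that holds the collection, then repeated per collection and per member after failing over the primary; the collection name below is a placeholder:

db.runCommand({ compact: "myCollection" })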
null
[ "node-js" ]
[ { "code": "", "text": "I encounter this error message when I try to establish a connection, and my operating system is Windows. I have already tried reinstalling multiple times, but I can’t resolve it", "username": "Fabio_D_amato" }, { "code": "mongod.cfg", "text": "", "username": "Jack_Woehr" } ]
Connect ECONNREFUSED 127.0.0.1:27017?
2023-10-04T11:21:45.163Z
Connect ECONNREFUSED 127.0.0.1:27017?
280
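ECONNREFUSED on 127.0.0.1:27017 normally just means nothing is listening on that port. A hedged first check on Windows — the service name MongoDB matches the default MSI install but may differ:

net start MongoDB
netstat -ano | findstr :27017

and confirm the port and bind address in mongod.cfg:

net:
  port: 27017
  bindIp: 127.0.0.1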
null
[]
[ { "code": "", "text": "This is one of the objectives in the study guide …Can someone explain what is meant by “where it should be inserted if it does not exist”?Thanks!", "username": "Adam_N_A1" }, { "code": "", "text": "Hi Adam. We apologize for the delayed response. Please reference this document to get a better understanding on what we are asking in reference to objective 2.4. If you have any further questions, please reach out to [email protected]\nThank you!", "username": "Heather_Davis" } ]
Associate Developer Study Guide
2023-09-10T21:01:11.082Z
Associate Developer Study Guide
502
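The phrase "where it should be inserted if it does not exist" in that objective describes an upsert, which the linked updateOne page covers. A minimal mongosh sketch with a placeholder collection and fields:

db.inventory.updateOne(
  { sku: "ABC123" },        // filter
  { $set: { qty: 10 } },    // update to apply
  { upsert: true }          // insert a new document when no match exists
)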
null
[]
[ { "code": "", "text": "Hey there! I wanted to share my experience with a particular exam platform(Examity). Unfortunately, passing the exam on this platform doesn’t just cost you money, but also your valuable time. The language proficiency of the instructors isn’t up to par, making it quite challenging to understand them. Moreover, there seems to be a lack of respect in their communication. It appears to be a group of individuals who may not have received proper training, and the audio quality during sessions is subpar. It’s even noticeable that they might not have the means to afford better microphones", "username": "SOHAIB_MANAH" }, { "code": "", "text": "Hi Sohaib. I sincerely apologize for your experience with our proctoring service and will most definitely bring it to their attention. We happen to be meeting with them tomorrow so this is timely feedback. Meanwhile my team will work to make this issue right.", "username": "Carol_Dibert" }, { "code": "", "text": "Greetings, @Carol_Dibert! I’m absolutely thrilled to have received your response. This isn’t the first time I’ve received assistance from your team, and I’m genuinely proud to be a part of this wonderful community. Thanks", "username": "SOHAIB_MANAH" } ]
Do not use Examity to pass MongoDB exams
2023-10-04T14:58:42.268Z
Do not use Examity to pass MongoDB exams
367
null
[ "python" ]
[ { "code": "", "text": "Hello All,I am using MongoDB 2.6.2 and using python PyMongo to connect. I am getting error as\nError Server at 172.24.16.71:27017 reports wire version 2, but this version of PyMongo requires at least 6 (MongoDB 3.6).Can someone please help me to know how to access MongoDB 2.6.2 using python .Thanks,\nAsmita", "username": "asmita_magdum" }, { "code": "", "text": "Hi @asmita_magdum and welcome to the forums.MongoDB 2.6.2 is absolutely vintage and very definitely end of life. You should plan upgrading immediately.But to answer your question PyMongo 3.12.0 was the last version to support 2.6 PyMongo 4.0 dropped support for MongoDB 3.4 and earlier.", "username": "chris" } ]
Error Server at 172.24.16.71:27017 reports wire version 2, but this version of PyMongo requires at least 6 (MongoDB 3.6)
2023-10-04T10:49:32.683Z
Error Server at 172.24.16.71:27017 reports wire version 2, but this version of PyMongo requires at least 6 (MongoDB 3.6)
421
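Pinning the driver as chris suggests — only as a stopgap until the server is upgraded — would look roughly like this, with 3.12.3 being the last release of the 3.12 series:

python -m pip install "pymongo==3.12.3"
# or, less strictly
python -m pip install "pymongo<4"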
null
[ "replication", "sharding" ]
[ { "code": "mongos--tcpFastOpenServer", "text": "We have a sharded cluster where each shard has a 5-node replica set spread across 5 physical data centers. Our mongos instances are located on an app servers and can have 150ms latency because they are on the other side of the planet. I stumbled across --tcpFastOpenServer and thought it could significantly reduce connection setup latency while also reducing the number of needed pooled connections (since creating a new connection wouldn’t require a 3-way handshake).Has anyone here turned this setting on? If so, do you have any data or anecdotes of how well it worked? Anyone have any unexpected negative results? I haven’t seen much on the forums discussing this.", "username": "AmitG" }, { "code": "db.serverStatus() tcpFastOpen: {\n kernelSetting: Long(\"1\"),\n serverSupported: true,\n clientSupported: true,\n accepted: Long(\"0\")\n },\n", "text": "The documentation seems to indicate this is on by default if both the client and server support TFO. My db.serverStatus() output contains the following:Is there a way to see if TFO is actually already being used?", "username": "AmitG" }, { "code": "accepted: Long(\"0\")network.tcpFastOpen.acceptedmongodmongosmongod/mongos", "text": "accepted: Long(\"0\")This tells it’s not being used. From the documentation: network.tcpFastOpen.accepted indicates the total number of accepted incoming TFO connections to the mongod/mongos since the mongod/mongos last started.", "username": "denny99" } ]
Stability and impact of TCP Fast Open
2023-02-23T21:54:18.731Z
Stability and impact of TCP Fast Open
899
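The counter denny99 refers to lives under network in serverStatus, so the quick check from mongosh is:

db.serverStatus().network.tcpFastOpen
// e.g. { kernelSetting: Long("1"), serverSupported: true, clientSupported: true, accepted: Long("0") }
// accepted only increments once a client actually completes a TFO handshake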
null
[ "queries", "python" ]
[ { "code": "motor_asyncioresult = await collection.update_one({\"ssid\": ssid}, {\n \"$addToSet\": {\"routes\": route},\n \"$setOnInsert\": {\"timestamps.added\": datetime.utcnow()},\n \"$currentDate\": {\"timestamps.last_modified\": {\"$type\": \"date\"}}\n}, upsert=True)\n$currentDate$setOnInsert\"$setOnInsert\": {\"$currentDate\": {\"timestamps.added\": True}}datetime.utcnow()", "text": "Greetings, i have this database operation written in python using motor_asynciothe issue i have with this is that id like to also get $currentDate in $setOnInsert\nsadly doing \"$setOnInsert\": {\"$currentDate\": {\"timestamps.added\": True}} doesn’t really work.\nThis works fine as it is but im storing two different server times when using datetime.utcnow()I would be grateful for hints or help, i did look into the docs but didint really find a solution", "username": "y0nei" }, { "code": "", "text": "I see two options:Do either of these options work?", "username": "Shane" }, { "code": "\"$setOnInsert\": {\"timestamps.added\": datetime.utcnow()},\n\"$set\": {\"timestamps.last_modified\": datetime.utcnow()}\n", "text": "I went with option 2but im curious how would option 1 look, can you give an example?", "username": "y0nei" }, { "code": "", "text": "@Shane could you show us how option 1 would look?In a distributed system you really want to use the timestamp from the database, instead of each worker node as the clock could drift between those.", "username": "Linus_Unneback" } ]
Getting server time in $setOnInsert using $currentDate?
2023-05-03T11:30:35.736Z
Getting server time in $setOnInsert using $currentDate?
783
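Option 1 (take the timestamp from the server clock in a single round trip) can be written as an aggregation-pipeline update using the $$NOW variable (MongoDB 4.2+). A sketch in the same Motor style as the question; because $addToSet is not available inside pipeline updates, $setUnion stands in for it here:

result = await collection.update_one(
    {"ssid": ssid},
    [
        {
            "$set": {
                # emulate $addToSet for the routes array
                "routes": {"$setUnion": [{"$ifNull": ["$routes", []]}, [route]]},
                # keep the existing value after the first insert
                "timestamps.added": {"$ifNull": ["$timestamps.added", "$$NOW"]},
                "timestamps.last_modified": "$$NOW",
            }
        }
    ],
    upsert=True,
)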
null
[]
[ { "code": "", "text": "Hi,I scored 75.67% in total in the developer exam and it says I failed, wondering how is this possible?\nI got the exam interface and I passed all topics, I even score 100% in some of them. I don’t understand how you measure this pass/fail? Is it possible to share more information about it? Thanks.", "username": "Ana_Escobar_Llamazares1" }, { "code": "", "text": "Would like to know, too, if we have the total questions needed to pass and how many we hit would be very better than a percentage breakdown… if we hit 0% on one topic that have fewer questions (1 or 2, like Data Modeling for example) how is the impact on the general final percentage?On AWS says you need 750 to pass on an exam, why not do the same?", "username": "JoaoDias_N_A" }, { "code": "", "text": "Hello Ana & Joao. Thank you for your questions. There are many reasons we do not disclose a cut score to our examinees but here are two of the biggest factors we considered when choosing to disclose our cut score or not.Confidentiality and Fairness - Disclosing the cut score could potentially enable candidates to game the system by narrowly focusing on the exact passing score rather than genuinely mastering the material.Psychometrics - Determining a fair and valid cut score is a complex process that involves psychometric analysis. Psychometricians take into account factors like question difficulty, candidate performance, and exam reliability to establish a passing score. These calculations may not lend themselves to a simple, easily communicable number.I hope that gives you a better understanding of why we do not disclose the cut score.", "username": "Carol_Dibert" } ]
MongoDB Developer Exam results
2023-10-01T18:28:14.861Z
MongoDB Developer Exam results
386
null
[ "cxx" ]
[ { "code": " document\n << \"some_field_name\"\n << bsoncxx::types::b_binary{\n bsoncxx::binary_sub_type::k_binary,\n data_length,\n reinterpret_cast<const uint8_t*>(data_pointer)\n }\n document \n<< \"some_field\" \n<< bsoncxx::types::b_binary{\nbsoncxx::binary_sub_type::k_binary, data_size, (const uint8_t*)&data_pointer\n};\n", "text": "Hello there. First time posting, please be gentle.I’ve been tasked with lifting lots of code from libmongoc to mongocxx. A lot of things I can do manually and have helped me to learn the driver better, but I ran into something which I do not quite have a fix for.Within a function populating a new mongodb document, a function called “bson_new_from_data” is used to create a new subdocument and appending it. I have tried searching for an equal function in mongocxx/bsoncxx but to no avail. The suggestions I have so far:use the data buffer to construct a bsoncxx::document::view using this function:\nMongoDB C++ Driver: bsoncxx::document::view Class Referencemanually read the data buffer as binary and insert it (I’m using the stream syntax to build my documents):Will either of these work to replace the (much more elegant) bson_new_from_data function? Is there a difference in a binary that is bson and bsoncxx?", "username": "lnorth" }, { "code": "bsoncxx::document::view#include <mongocxx/instance.hpp>\n#include <bsoncxx/builder/basic/kvp.hpp>\n#include <bsoncxx/builder/basic/document.hpp>\n#include <bsoncxx/json.hpp>\n\n#include <cstdlib>\n#include <iostream>\n\nusing namespace bsoncxx::builder::basic;\nint main()\n{\n auto instance = mongocxx::instance();\n\n // `bsondata` represents the document { \"foo\" : \"bar\" }\n uint8_t bsondata[] = {\n 0x12, 0x00, 0x00, 0x00, 0x02, 0x66, 0x6f, 0x6f, 0x00, 0x04, 0x00, 0x00, 0x00, 0x62, 0x61, 0x72, 0x00, 0x00};\n auto doc = bsoncxx::document::view(bsondata, sizeof bsondata);\n // Append as a subdocument:\n auto doc2 = make_document(kvp(\"subdoc\", doc));\n std::cout << bsoncxx::to_json(doc2) << std::endl;\n // Prints `{ \"subdoc\" : { \"foo\" : \"bar\" } }`.\n}\n", "text": "Hi @lnorthWelcome to the MongoDB community!You should be able to achieve the desired outcome with the first option, ie. by creating a bsoncxx::document::view from the data and appending.", "username": "Rishabh_Bisht" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there a bson_new_from_data equivalent for bsoncxx (c++ driver)?
2023-10-04T13:10:24.071Z
Is there a bson_new_from_data equivalent for bsoncxx (c++ driver)?
225
null
[ "time-series" ]
[ { "code": "", "text": "Hi everyone,I’m currently working on a project where I have to use the new Time Series collection of MongoDB, which has been working so far, but I’m curious about one aspect of it.Currently I’m getting around a thousand new inserts every 10 minutes, but only around 5% of those inserts are actually different than their previous insert. Is it possible or practical within the Time Series collection to only insert the changed data, and not the unchanged data? I know that way of storing is possible, but how would one go over the querying and getting averages? Is there a way to fill in the “gaps”, which is essentially the document that comes before the requested timestamp, and all the timestamps in between.Let me know if someone already has figured this out, thanks in advance!", "username": "Rolf_Oldenkotte" }, { "code": "", "text": "Hi @Rolf_Oldenkotte ,It is possible to ingest only data when something changed, wether its worth doing that and how it complex the code and effort is up to the use case, therefore not sure its worth it,Newer versions of MongoDB like 6.0, offers aggregation operators that can fill timeseries gaps:Let me know if that helpsThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "To calculate averages or some similar quantity on a numerical time series with irregular timing, you can use standard numerical integration methods to calculate the sums, etc. that are needed - e.g. trapezoid rule or Simpson’s rule.You can also use interpolators of various kinds to fit irregularly spaced data points. Which type works best depends on what you know about your data.", "username": "Paul_Grimes" } ]
MongoDB Time Series only insert when changed
2022-09-09T17:55:47.848Z
MongoDB Time Series only insert when changed
1,843
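A sketch of the gap handling Pavel links to, densifying a 10-minute grid and carrying the last observed value forward with locf (MongoDB 6.0+); the readings collection and the ts/value/sensorId field names are placeholders:

db.readings.aggregate([
  { $densify: {
      field: "ts",
      partitionByFields: [ "sensorId" ],
      range: { step: 10, unit: "minute", bounds: "partition" }
  } },
  { $fill: {
      partitionBy: { sensorId: "$sensorId" },
      sortBy: { ts: 1 },
      output: { value: { method: "locf" } }
  } }
])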
null
[ "node-js", "mongoose-odm" ]
[ { "code": "", "text": "Whenever i start the nodejs server the backend and the mongodb is succesfully connected but when I send an api request to my backend from the frontend this error appears and server stops automatically. I am using a simple REST api written in express and reactjs in frontend.\nMongoServerError: invalid flag in regex options: $\nat Connection.onMessage (C:\\Users\\DELL\\Downloads\\SocialGuruji-master\\SocialGuruji-master\\api\\node_modules\\mongoose\\node_modules\\mongodb\\lib\\cmap\\connection.js:202:26)\nat MessageStream. (C:\\Users\\DELL\\Downloads\\SocialGuruji-master\\SocialGuruji-master\\api\\node_modules\\mongoose\\node_modules\\mongodb\\lib\\cmap\\connection.js:61:60)\nat MessageStream.emit (node:events:513:28)\nat processIncomingData (C:\\Users\\DELL\\Downloads\\SocialGuruji-master\\SocialGuruji-master\\api\\node_modules\\mongoose\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:124:16)\nat MessageStream._write (C:\\Users\\DELL\\Downloads\\SocialGuruji-master\\SocialGuruji-master\\api\\node_modules\\mongoose\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:33:9)\nat writeOrBuffer (node:internal/streams/writable:391:12)\nat _write (node:internal/streams/writable:332:10)\nat MessageStream.Writable.write (node:internal/streams/writable:336:10)\nat TLSSocket.ondata (node:internal/streams/readable:754:22)\nat TLSSocket.emit (node:events:513:28) {\nok: 0,\ncode: 51108,\ncodeName: ‘Location51108’,\n‘$clusterTime’: {\nclusterTime: Timestamp { low: 89, high: 1696426214, unsigned: true },\nsignature: {\nhash: Binary {\nsub_type: 0,\nbuffer: Buffer(20) [Uint8Array] [\n118, 51, 93, 94, 131, 119,\n194, 90, 29, 57, 213, 166,\n236, 177, 73, 209, 211, 120,\n84, 148\n],\nposition: 20\n},\nkeyId: Long { low: 17, high: 1684245988, unsigned: false }\n}\n},\noperationTime: Timestamp { low: 87, high: 1696426214, unsigned: true },\n[Symbol(errorLabels)]: Set(0) {}\n}", "username": "Guruji_Singh" }, { "code": "", "text": "Looks like your code is malforming a request to Mongo but without the code or query that’s being run not much else can be said.Also please see the welcome posts on how to format a message, see the buttons on the editor to format code so it’s more readable.", "username": "John_Sewell" } ]
Invalid flag in regex options: $
2023-10-04T13:52:33.292Z
Invalid flag in regex options: $
251
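Error code 51108 is raised by the server when the string handed to $options contains something that is not a valid regex flag (i, m, x, s); the message quoted above suggests a stray "$" ended up in the options string, most likely through string interpolation. A hedged sketch of the usual shape, with placeholder collection and field names:

// fails with Location51108: "$" is not a regex flag
db.users.find({ name: { $regex: searchTerm, $options: "$i" } })

// works: only flag characters in $options
db.users.find({ name: { $regex: searchTerm, $options: "i" } })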
null
[ "python" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"6308bbdefcc302677837d855\"\n },\n \"action\": \"xxxxxx\",\n \"user\": \"sandeep\",\n \"scheduled_time\": \"24-08-2022\",\n \"file_path\": \"c://asdhkjshd\",\n \"created_at\": \"22-08-2022\",\n \"updated_at\": \"22-08-2022\"\n}\n\n{\n \"_id\": {\n \"$oid\": \"6308bbdefcc302677837d855\"\n },\n \"action\": \"xxxxxx\",\n \"user\": \"akhil\",\n \"scheduled_time\": \"24-07-2022\",\n \"file_path\": \"c://asdhkjshd\",\n \"created_at\": \"22-07-2022\",\n \"updated_at\": \"22-07-2022\"\n}\n_id:\"6308bbdefcc302677837d855\"\nuser:\"akhil\"\nstatus:null\nold_data:null\nnew_data:2022-08-30T17:58:18.551+00:00\n", "text": "i have a collection called scheduled_tasks in my db.The collection looks like below:Now,i want to copy the documents in above collection to another collection-say planned_activities.I want to copy only the _id and user field from scheduled_tasks collection to my new collection ie planned_activities.Also in my new collection planned_activities ,apart from _id and user(which i got from scheduled_tasks collection),i want to add two new fields such as old_data and new_data.I want my planned_activities collection to be like belowHow do i achieve this now.I know how to copy from one collection to another,but i am unaware how to achieve it in my case", "username": "sai_sankalp" }, { "code": "$match$project$addFields$out$merge$out", "text": "You should be able to do this with the aggregation framework. You can $match (if necessary), then $project to get the fields from the current documents you want to use, $addFields to create the new fields and then finally $out to save the new collection.You could also look at $merge which is similar to $out but has more functionality.", "username": "Doug_Duncan" }, { "code": "$project$addFields$addFields$project$project$addFields", "text": " $project to get the fields from the current documents you want to use, $addFields to create the new fieldsHi @Doug_Duncan,Thanks for sharing a speedy solution!One further improvement: $addFields is equivalent to a $project stage specifying all fields in the input documents. Selection of existing fields and creation of new fields can be done within a $project stage, so an $addFields stage would not be needed for this use case.Regards,\nStennie", "username": "Stennie_X" }, { "code": "$project$addFields", "text": "Selection of existing fields and creation of new fields can be done within a $project stage, so an $addFields stage would not be needed for this use case.Of course you’re right @Stennie_X. 
Sometimes I don’t stop and think of the solution I propose, and would have compressed those stages into a single one when writing out the query in real time.", "username": "Doug_Duncan" }, { "code": "scheduled_migration.aggregate([\n {\"$project\": {\"_id\": 1, \"user\": 1, \"status\": \"New\", \"old_migrationtime\": None,\n \"new_migrationtime\": \"$scheduled_time\"}},\n {\"$merge\": {\"into\": \"audit_details\"}}\n_id:\"6308bbdefcc302677837d855\"\nuser:\"akhil\"\nstatus:null\nold_data:null\nnew_data:2022-08-30T17:58:18.551+00:00\n{\n \"_id\": {\n \"$oid\": \"6308bbdefcc302677837d855\"\n },\n \"new_migrationtime\": \"2022-09-26 16:26:00\",\n \"old_migrationtime\": \"2022-07-21 16:26:00\",\n \"status\": \"modified\",\n \"user\": \"xxxxx\",\n \"last_modified\": {\n \"$date\": {\n \"$numberLong\": \"1662039075400\"\n }\n }\n}\n", "text": "Hi @Doug_Duncan and @Stennie_X,\nThanks for the quick reply.\nI have tried the following and it worked:everything looks fine,but I want the order of the fields in the new collection audit_details to be like below:but my new collection looks like below:is there any way to achieve it in the order i require", "username": "sai_sankalp" }, { "code": "$project_iduserstatusold_migrationtimenew_migrationtime_idnew_migrationtimeold_migrationtimestatususerlast_modifiedaggregate()", "text": "is there any way to achieve it in the order i requireIf you want your fields in a certain order, then you need to $project those fields in that order. Currently you have: _id, user, status, old_migrationtime, new_migrationtime. Change that to be _id, new_migrationtime, old_migrationtime, status, user.As for the last_modified field, not sure how that got into the documents as it’s not part of the aggregate() call you provided.", "username": "Doug_Duncan" }, { "code": "", "text": "Hello,\nCan this be done using insertMany in batches? I am using the below script.\nI thought exporting any collection documents to another collection can be done easily. Could you please advice if I may overlooking something here.var docsToInsert = db.PMForms_local.find({ “exist”: “yes” }).toArray();\nvar batchSize = 20;\nvar successfulInserts = 0;for (var i = 0; i < docsToInsert.length; i += batchSize) {\nvar batch = docsToInsert.slice(i, i + batchSize);\ntry {\nvar result = db.collectionB.insertMany(batch);\nsuccessfulInserts += result.insertedCount;\n} catch (e) {\nprint(\"Error inserting batch starting at index \" + i + \": \" + e.message);\n}\n}", "username": "vibhuti_sharan" }, { "code": "", "text": "You’re pulling everything down with the .toArray call which is bad for large collections.If you’re just moving between collections in the same database or different on the same server then using the $out or $merge aggregation calls is much faster as it’ll all run server side.If you were taking this approach (and I’ve done something similar for moving data from relational servers to Mongo) then you want to tune the batch size, 20 is tiny, unless each document is 16MB.If you need to run with lots of data, then you’d want to get the iterator from the find call and loop through that, pushing an insertMany with the current batch as you go.", "username": "John_Sewell" } ]
Copy documents from one collection to another
2022-08-31T20:03:24.048Z
Copy documents from one collection to another
14,223
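The cursor-based variant John describes (iterate the find cursor instead of materialising everything with toArray, and flush larger batches) could be sketched like this in mongosh, reusing the collection names from the script above:

const batchSize = 1000;
let batch = [];
const cursor = db.PMForms_local.find({ exist: "yes" });
while (cursor.hasNext()) {
  batch.push(cursor.next());
  if (batch.length >= batchSize) {
    db.collectionB.insertMany(batch, { ordered: false });
    batch = [];
  }
}
if (batch.length > 0) {
  db.collectionB.insertMany(batch, { ordered: false });
}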
null
[ "security", "android" ]
[ { "code": "", "text": "I noticed when porting my app to another platform that the only thing I needed in the app to connect to my backend is the realm app ID. With only this ID, I could:So if malicious developers get this ID, what prevents them from creating another app, and conduct DoS attacks like:I would think that to connect to the backend, the app (iOS, Android, etc…) would need some kind of private key to make sure that it’s authorized to send requests, but that’s not the case.So, is there any mechanism in place to prevent that, and if not, how do you address this security issue?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "IP Access list, i guess.", "username": "Vegar_Vikan" }, { "code": "", "text": "From what I understand, IP Access list allows to enter a whitelist of IP address to connect to the backend. However, when distributing an app on some AppStore, each client will have a different IP and I don’t know it in advance. And even if I did, that doesn’t prevent a malicious user from making another app to connect to the backend with an accepted IP.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Having no real answer to this question worries me a bit. MongoDB dev team, anything to say?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "G’Day, @Jean-Baptiste_Beau,Thanks for raising your query.For the execution of any of these tasks on your application, you need to be authenticated and you need appropriate permissions to perform actions (the permissions you assign when you create the project).This also depends on how your Authentication Providers are configured: If Anonymous authentication is left on, then anyone can access your data unless you have sync permissions specified to control the access.For any changes to the app itself, requires database permissions to the user, and the ID on its own is not enough.Please refer to MongoDB Applications with Sync section on Application Security Documentation.I hope the provided information helps. Please feel free to ask if you have any more questions.Cheers. ", "username": "henna.s" }, { "code": "", "text": "Hi @henna.s,Thank you for your message. This still doesn’t answer the question though.Example 1. Creating new anonymous app users doesn’t require any permission. Therefore, one could easily create a script to create 1000 new users per second, which would quickly saturate the database. Is there any mechanism in place to prevent that?Example 2. Let’s say a malicious user creates an account in the app. This user could then create a script to add 100’000 objects per second to the database, which would also quickly break it. He could also make 1000 function calls per second. Is there any mechanism in place to prevent that?Thanks for clarifying that.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Hey, hopefully, I can shed a little light on this.If you are using anonymous authentication then you are correct that it would enable anyone to use your app-id to send requests to Realm, but that is kind of the point. If you want your application to allow users to make requests and navigate around on your page without setting up an account then by definition you are saying that any client should be able to send requests to Realm. 
This is also the reason that enabling anonymous authentication will give you a warning in the UI and think we document this.If you want more stringent limitations on who can access your service, then you should be using more strict authentication providers such as:As for your examples above, hopefully, the description here explains that there are reasons to avoid anonymous auth when you want to guard against attacks like the above, but we also have app-level limits in place to prevent too many requests from saturating an application in any given hour. These are internal and we get alerts when applications get close to the limit and we can raise them (and we do that for many production applications).We have discussed the idea of adding user-specific limits to realm requests, so I would be curious to hear if that is something you would be interested in and why? Additionally, what specific things would you want to be able to toggle? IE, would total requests per user be enough, or would you want to distinguish by service (graphql, sync, functions, etc)? Would you want to have specific limits for specific users and allow some users to eclipse those limits?The other security measure is that we have permissions to prevent users’ from performing actions that they should not be able to do, but I don’t think that fully solves your problem which is mostly about preventing request spamming.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi @Tyler_Kaye,Thank you for your detailed answer. The security on this subject is indeed what I thought it was.I think user-specific limits could partly address the concern. However, while anonymous authentication is enabled, a malicious user could simply create other accounts to make more requests. IP-specific limits could be used to prevent that.In the end, what I was looking for is something similar to what Firebase did with Firebase App Check:App Check helps protect your app from abuse by attesting that incoming traffic is coming from your app and blocking traffic without valid credentials.It’s a way to authenticate all requests as coming from the right client code. It would be an interesting feature to add to your roadmap.Cheers", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "The original poster is raising some valid points but the discussion went by the way of authentication. Assuming anonymous authentication is off and everyone has to log in appropriately. This does not change the fact that with sync on, a user has pretty much access to everything (I know flexible sync is coming, and I do see a lot of benefits there, but for my questions here, I am going to ignore that for now because it will still have holes)…Lets say I am making an app for timekeeping. I want other companies purchase my services. I can easily segment each company into thier own partition. In that partition, the company will add multiple users so they can punch in and punch out. Every user would have access to every record in that partition. Now, a disgruntled employee who may also know a bit of programing, could theoretically delete every single record in that portion just by simply knowing the app id and having a valid login. (This app would have a web based interface in addition to a mobile app and that app id would have to be exposed to the client unless I did everything server side which kinda defeats the purpose. 
Even if I didn’t use a web based interface, we are just hoping that the id is never exposed).How are companies today securing data in a realm sync app? Or is this a case of if you need to secure data, realm sync might not be the best option for you? I know big companies use realm sync, I just can’t get past how open the database is.One example I can think of is using triggers to help validate and/or reject changes (just showing how I am trying to think outside the box. I haven’t gone down the road to see if this is feasible or would cause performance issues if there were triggers on every write operation).", "username": "Robert_Charest" }, { "code": "", "text": "Hi, I think the situation you are describing is, in a lot of ways, one of the many reasons we created Flexible Sync which has:Partition-based sync has a fundamental limitation built into it: the partition.And because of that, you are correct that not all solutions fit into that limitation, which is why we removed it.\nFor people who are using this in production, one fairly common partition key is “user_id” so that you can isolate the permissions of a partition to only a specific user. So in your hypothetical case above, you would want to partition on a “compound” partition key which is something like { userIdCompanyId=“userd1_companyid1” }. This is a fairly common pattern we see in customers of partition-based sync, having one field be a combination of several other fields and that is what is the “partition”, and in that sense, the permissions are applied more granularly. However, this is obviously just masking the underlying problem in a lot of ways.Apologies if this does not answer your question. Partition-based sync is great and performant, but built into the entire system are limitations with what you can do with it from a schema modeling and permissions perspective. This makes it great if those limitations align with your application, but we know that people want to remove these restrictions and be able to let their application guide their\nschema and permissions model and not the other way around.", "username": "Tyler_Kaye" }, { "code": "", "text": "This is very helpful to know that I am at least understanding the concepts (and that I am not missing anything).Are there rough timelines as to when everything should be available for production use? If it is months, I can probably program knowing it’s coming, if we are looking a year or more, I would need to plan on something different.I am also very curious as to how operation and field based permissions can be applied in an offline first world. Should a commit fail because of security permission, what would the ramifications of that be?", "username": "Robert_Charest" }, { "code": "", "text": "Hi Robert,We hope to go into GA this summer, so hopefully, that aligns with your timeline.Before then, we hope to address the following:None of these should lead to any breaking changes ideally and we do not expect to break the API before the GA date. It is mostly fit and finish as well as removing some initial limitations we placed on the system.Your question about permissions is a very valid one and why we have a project called “compensating writes”. In an offline system, we cannot trust the SDK to synthesize writes that it has permissions to make.
Currently, when this happens this will result in an error being returned to the client, but the goal of Compensating Writes will be to (a) alert the SDK when this takes place and (b) unwind those changes safely while continuing to sync.Please let me know if you have any more questions. Your questions are very good, and I think in large point to why we wanted to build Flexible Sync.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Don’t mean to bring back an old post, but I see some parts of original post unanswered, specifically Jean-Baptiste’s comment.I noticed that the issue with authentication was addressed with Flexible Sync where there were document-level permissions which addresses the equation of who, but i don’t see anything addressed about the what. There’s a great article here that talks about the issues described in this thread. Why Does Your Mobile App Need an API Key?The ‘what’ in this case, an iOS mobile app can still be tampered with and thus the ruining the integrity of the app. Apple acknowledges this and has a service called App Attest that helps verify the authenticity of the app / device. Once verified, the app receives a JWT that it will include with every request to the backend. Every request is verified by the backend, otherwise it will not respond to the compromised app / device. Does any of the Atlas services like Device Sync or Functions support this?I believe this is the crux of Firebase App Check, it performs app attestations which addresses the what. The who is addressed by data access rules that I can see is already supported by Realm.", "username": "Milton" }, { "code": "", "text": "Hey Milton, thanks for following up.\nWe understand the concern that an attacker could tamper with a mobile app and then pretend, from another device or program, to be that app when performing requests to the app services backend.Atlas App Services does not currently support app attestation using these tools. However, it’s worth keeping in mind that these tools come with some caveats:Given this, I would rely on the who components of authentication (app service’s permissioning model) to prevent unauthorized access of data.", "username": "Sudarshan_Muralidhar" }, { "code": "", "text": "Hi,\nI think people in this thread are making a very valid point. And even though App Attest and App check might not be foolproof, they still add a lot of friction to the process which will at the very least discourage any malicious actor. As it stands right now, it is extremely easy to flood the api with 1000s of requests with just your app ID and a very very simple scripts. This might actually be a deal breaker for my team unfortunately. ", "username": "Ashish_Vaidya" }, { "code": "", "text": "Thanks for your feedback!\nI would suggest disabling anonymous auth, and instead requiring a stronger authentication mechanism to prevent abuse.\nWe will, however, note your feedback about App Attest / App Check as we plan future features.", "username": "Sudarshan_Muralidhar" } ]
What prevents another app to connect to my backend, if they have my MongoDB Realm App ID?
2022-03-16T10:48:25.957Z
What prevents another app to connect to my backend, if they have my MongoDB Realm App ID?
5,769
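A sketch related to the thread above ("What prevents another app to connect to my backend"): the replies point at enforcing ownership on the server rather than trusting anything the client sends. The Atlas App Services Function below illustrates that idea only — it stamps the calling user's id on every write, so a tampered client holding nothing but the App ID cannot write on behalf of someone else. The linked data source name "mongodb-atlas", the "appdb"/"notes" names, and the payload shape are assumptions for the example, not details from the thread; it also assumes the function runs with application (caller) authentication.

  // Illustrative Atlas App Services Function — names and payload shape are placeholders.
  exports = async function (payload) {
    // context.user is the authenticated caller, injected by App Services;
    // the client cannot spoof it.
    const caller = context.user;
    const notes = context.services
      .get("mongodb-atlas")      // assumption: default linked data source name
      .db("appdb")               // assumption: database name
      .collection("notes");      // assumption: collection name
    // Stamp ownership server-side; naive truncation just to bound input size.
    return notes.insertOne({
      owner_id: caller.id,
      text: String(payload.text).slice(0, 500)
    });
  };

Combined with rules that only let owner_id == %%user.id documents be read or written back, this is the usual server-side backstop for a leaked App ID.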
null
[ "node-js", "mongoose-odm" ]
[ { "code": "E11000 duplicate key error collection: mydb.mycollection index: xxxxxx.xxxxx_xx_1 dup key: { xxxxxx.xxxxx_xx: null]", "text": "Hi, Pardon me if this is not the right topic/subtopic to ask questions. I was trying to insert a new document through nodejs mongoose Model.save(). However, the first document works. When I try to insert another document. I get the following error. E11000 duplicate key error collection: mydb.mycollection index: xxxxxx.xxxxx_xx_1 dup key: { xxxxxx.xxxxx_xx: null]. I would appreciate if someone can help me out.", "username": "Ayush_N_A1" }, { "code": "nullnullnull", "text": "Hi,This means that you are passing null value for a field that should be unique.So, when you try to insert first document, it works because there are still no document with null value for that field. When you try to insert another document, first document already exists with null value for that field, so duplicated key error is thrown.", "username": "NeNaD" }, { "code": "", "text": "how do i solve the problem then?", "username": "Yitbarek_Gossaye" }, { "code": "", "text": "An answer was given above, if you have a similar issue, create a new topic with details of your issue and code etc.", "username": "John_Sewell" } ]
Duplicate Key Error trying to insert a new document through mongoose
2022-08-15T08:04:52.366Z
Duplicate Key Error trying to insert a new document through mongoose
5,019
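A sketch for the duplicate-key thread above: the E11000 appears because a unique index already holds one document whose indexed field is null/missing, and the second such document collides with it. If the field is genuinely optional, one common fix is to rebuild that index as a partial unique index so only documents that actually carry a value participate in the uniqueness check. The collection and field names below are copied from the shape of the error message, not the poster's real schema, and the $type value assumes the field is a string when present.

  // mongosh — substitute your real collection, index, and field names.
  db.mycollection.getIndexes()                      // 1) locate the unique index behind the E11000

  db.mycollection.dropIndex("xxxxxx.xxxxx_xx_1")    // 2) drop it ...
  db.mycollection.createIndex(                      //    ... and recreate it as a partial unique index:
    { "xxxxxx.xxxxx_xx": 1 },                       //    null/missing values are skipped, so many
    {                                               //    optional documents can coexist
      unique: true,
      partialFilterExpression: { "xxxxxx.xxxxx_xx": { $type: "string" } }
    }
  )

The alternative is to keep the plain unique index and make the application (or the Mongoose schema) always populate the field before saving.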
null
[ "queries" ]
[ { "code": "", "text": "Hello,I’m struggling with querying an array of nested arrays. I need a query that that will find all projects where at least one of skillsetCombinations’s has all skillsets in my searched combination, see playground Mongo playgroundExample:\nfind skillsetCombination [A,B,D] => result project1, project3\nfind skillsetCombination [A,B] => result project3\nfind skillsetCombination [A,B,D,E] => result project1, project2, project3Thanks, for any help\nRegards,\nIvan", "username": "Ivan_Vlcek" }, { "code": "result project1, project3", "text": "Hello @Ivan_Vlcek, Welcome to the MongoDB community forum,I can see the input and expected result but you have to explain it so others can understand why it should be the result.find skillsetCombination [A,B,D] => result project1, project3I am confused by your first query, all skills “A”, “B” and “D” are available in all 3 documents then why just result project1, project3?", "username": "turivishal" }, { "code": "", "text": "Hi, basically, I’m looking for project where project’s nested array is subset of my array (or equal to my array).Example explanation:\nfind skillsetCombination [A,B,D] =>\nproject1: [ABD] is subset of [ABD] - should be returned\nproject3: [AB] is subset of [ABD]) - should be returned\nproject2: none of [XYZ],[ABDE],[UVW] are subsets of [ABD] so this project shouldn’t be returned", "username": "Ivan_Vlcek" }, { "code": "", "text": "Any idea how do to it? I have working query with aggregation using NIN operator on inverted items. See, Mongo playground if you are looking for all projects with ABD I do NIN on all other letters.", "username": "Ivan_Vlcek" }, { "code": "", "text": "I was not able to come up with the exact combination but I have a feeling that a $reduce on a $map that uses $setIsSubset could be the key to the solution.", "username": "steevej" }, { "code": "db.projects.aggregate([\n {\n $addFields: {\n mapVal: {\n $map: {\n input: \"$skillsetCombinations\",\n as: \"skillsetCombinations\",\n in: {\n $setIsSubset: [\n \"$$skillsetCombinations\",\n [\n \"A\",\n \"B\"\n ]\n ]\n }\n }\n }\n }\n },\n {\n $addFields: {\n mapValResult: {\n $reduce: {\n input: \"$mapVal\",\n initialValue: false,\n in: {\n $or: [\n \"$$value\",\n \"$$this\"\n ]\n }\n }\n }\n }\n },\n {\n $match: {\n \"mapValResult\": true\n }\n },\n {\n $project: {\n \"mapVal\": 0,\n \"mapValResult\": 0\n }\n }\n])\n", "text": "As @steevej said, $setIsSubset seemed to work, I had a brief play this morning with an aggregate query:Mongo playground: a simple sandbox to test and share MongoDB queries onlineI’ve not tested performance etc, but the steps are:", "username": "John_Sewell" } ]
Query for array of nested arrays
2023-09-30T11:35:14.070Z
Query for array of nested arrays
293
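For the nested-array thread above, the accepted four-stage pipeline works; the same $map/$setIsSubset idea can also be folded into a single $match with $expr and $anyElementTrue, so no temporary fields need to be added and projected away. The searched combination ["A","B","D"] and the field names are taken from the thread; performance has not been compared, so treat this as an equivalent sketch rather than an optimisation.

  // mongosh — one-stage variant of the accepted answer.
  const wanted = ["A", "B", "D"];            // the combination being searched for
  db.projects.aggregate([
    {
      $match: {
        $expr: {
          // true when at least one inner combination is a subset of `wanted`
          $anyElementTrue: [
            {
              $map: {
                input: "$skillsetCombinations",
                as: "combo",
                in: { $setIsSubset: ["$$combo", wanted] }
              }
            }
          ]
        }
      }
    }
  ])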
https://www.mongodb.com/…f_2_1023x219.png
[ "node-js", "replication", "connecting", "atlas-cluster", "server" ]
[ { "code": "ERROR\tUnhandled Promise Rejection \t{\"errorType\":\"Runtime.UnhandledPromiseRejection\",\"errorMessage\":\"MongoServerSelectionError: Server selection timed out after 30000 ms\",\"reason\":{\"errorType\":\"MongoServerSelectionError\",\"errorMessage\":\"Server selection timed out after 30000 ms\",\"reason\":{\"type\":\"ReplicaSetNoPrimary\",\"servers\":{},\"stale\":false,\"compatible\":true,\"heartbeatFrequencyMS\":10000,\"localThresholdMS\":15,\"", "text": "Following up from my previous post.I’m still stuck with the occasional ReplicaSetNoPrimary errors. Quite rare but it does happen.ERROR\tUnhandled Promise Rejection \t{\"errorType\":\"Runtime.UnhandledPromiseRejection\",\"errorMessage\":\"MongoServerSelectionError: Server selection timed out after 30000 ms\",\"reason\":{\"errorType\":\"MongoServerSelectionError\",\"errorMessage\":\"Server selection timed out after 30000 ms\",\"reason\":{\"type\":\"ReplicaSetNoPrimary\",\"servers\":{},\"stale\":false,\"compatible\":true,\"heartbeatFrequencyMS\":10000,\"localThresholdMS\":15,\".This is despite upgrading to a dedicated M10 cluster. My application barely has any traffic so I’m confused why it sometimes can’t seem to connect?My setup is still the same:It’s frustrating not knowing why this is happening despite setting everything up correctly, and it’s connecting properly most of the time.Random, unpredictable errors of unknown cause are unsettling so if someone has insight, please share.", "username": "Pyra_Metrik" }, { "code": "", "text": "Hey @Pyra_Metrik,I’m still stuck with the occasional ReplicaSetNoPrimary errors. Quite rare but it does happen.There could be various reasons behind it, a few of them could be:Please refer to Test Primary Failover and Test Resilience to read more about it.In case you need further assistance, please share the org name of your cluster, so we can look into it or you can reach out to Atlas in-app chat support.The in-app chat support does not require any payment to use and can be found at the bottom right corner of the Atlas UI:Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi @Kushagra_KesavThanks for the reply. I conducted a Primary Failover Test in the Atlas UI, and my app worked fine during and after the test.So, this leaves us with intermittent network failures.\nIs there any way we can verify that the ReplicaSetNoPrimary errors are indeed from network failures, by checking logs somewhere (or something else)?And how exactly do I get the org name of my cluster?", "username": "Pyra_Metrik" }, { "code": "", "text": "Hey @Pyra_Metrik,Is there any way we can verify that the ReplicaSetNoPrimary errors are indeed from network failures, by checking logs somewhere (or something else)?The above details will help us to assist you better.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "mongodb+srv://${process.env.DB_USERNAME}:${process.env.DB_PASSWORD}@cluster0.nkmq1cz.mongodb.net/?retryWrites=true&w=majority", "text": "@Kushagra_Kesav", "username": "Pyra_Metrik" }, { "code": "", "text": "Hey @Pyra_Metrik,Thanks for sharing the details! The Mongo client is to connected to from a serverless Next.js API function.Just out of curiosity, I’m wondering if you are using Vercel. 
Could you please confirm it?Thanks,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "@Kushagra_Kesav yes, I am", "username": "Pyra_Metrik" }, { "code": "", "text": "Hi @Pyra_Metrik,If you’ve checked with the Atlas in-app chat support and they’ve advised no issues were identified on the Atlas cluster side at the time of the error messages, I would also recommend checking with Vercel support. There was another mention of this previously on this post as well.Depending on your cluster tier, you might be able to check the mongod logs to see the client metadata as well to determine if connection was ended from the application side possibly.You can perform the same troubleshooting step mentioned in my comment by connecting from a different client perhaps outside of Vercel for trying to narrow down what the issue could be.Regards,\nJason", "username": "Jason_Tran" }, { "code": "Automation Agent v13.4.2.8420 (git: <id>)\"}}}}\n{\"t\":{\"$date\":\"2023-09-28T16:25:39.414+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn115096\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"192.168.254.146:43258\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-09-28T16:25:39.415+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn115094\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":31692522}}\n{\"t\":{\"$date\":\"2023-09-28T16:25:39.415+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn115095\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"192.168.254.146:43252\",\"uuid\":\"ea3b6fab-f503-49f9-8af1-b71110d04158\",\"connectionId\":115095,\"connectionCount\":40}}\n", "text": "Hi @Jason_Tran thanks for sharing the tips + the other post. I’m in contact with Vercel community/support as well to solve the problem.I did check the logs for my cluster however, and I see this:Basically, it looks the authentication succeeded, then client disconnect immediately after, then it logged a “Connection ended” message.I don’t think this is expected behavior, that is, for a client to disconnect immediately after authenticated. Please confirm, and in the mean time, I’m debugging the problem on Vercel’s end.", "username": "Pyra_Metrik" }, { "code": "remote", "text": "Hi @Pyra_Metrik,Based off those logs, it doesn’t look like this is the vercel client. The logs seem to indicate that this is possibly from an internal mongodb / atlas agent. Are you able to find any regarding the vercel client? I believe the remote value should be the IP of the vercel application connecting.Regards,\nJason", "username": "Jason_Tran" } ]
No way to avoid ReplicaSetNoPrimary errors
2023-09-09T00:09:04.416Z
No way to avoid ReplicaSetNoPrimary errors
763
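For the ReplicaSetNoPrimary thread above: with Next.js API routes on Vercel, one common contributor to intermittent "servers: {}" selection timeouts is constructing a new MongoClient on every invocation instead of reusing one across warm invocations, so the driver's topology monitoring keeps being torn down and restarted. The module-scope caching pattern below is the widely used sketch (a variant ships in MongoDB's own Next.js examples); the env-var, database, and collection names are whatever your project uses, not values from the thread.

  // lib/mongodb.js — create the client once per runtime and reuse its promise.
  import { MongoClient } from "mongodb";

  const uri = process.env.MONGODB_URI;              // assumption: your connection-string env var

  let clientPromise;

  if (!global._mongoClientPromise) {
    const client = new MongoClient(uri);
    global._mongoClientPromise = client.connect();  // connect once; resolves to the connected client
  }
  clientPromise = global._mongoClientPromise;

  export default clientPromise;

  // usage in an API route:
  //   const client = await clientPromise;
  //   const docs = await client.db("mydb").collection("items").find().toArray();

This does not rule out genuine network blips between Vercel and Atlas, but it removes the cold-connection churn as a variable.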
null
[ "replication", "compass", "atlas-cluster" ]
[ { "code": "Health check StartupCheck threw an unhandled exception after 29999.7424ms\nSystem.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = ReadPreferenceServerSelector{ ReadPreference = { Mode : Primary } }, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : \"1\", ConnectionMode : \"ReplicaSet\", Type : \"ReplicaSet\", State : \"Disconnected\", Servers : [{ ServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/schedulerdevcluster-shard-00-00-pri.yei6y.mongodb.net:27017\" }\", EndPoint: \"Unspecified/schedulerdevcluster-shard-00-00-pri.yei6y.mongodb.net:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n ---> MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n ---> System.IO.EndOfStreamException: Attempted to read past the end of the stream.\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytes(Stream stream, Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(Int32 responseTo, CancellationToken cancellationToken)\n--- End of stack trace from previous location ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(Int32 responseTo, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.HelloHelper.GetResult(IConnection connection, CommandWireProtocol`1 helloProtocol, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.ConnectionInitializer.SendHello(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)\", LastHeartbeatTimestamp: \"2023-09-22T11:56:42.3033290Z\", LastUpdateTimestamp: \"2023-09-22T11:56:42.3033291Z\" }, { ServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/schedulerdevcluster-shard-00-01-pri.yei6y.mongodb.net:27017\" }\", EndPoint: \"Unspecified/schedulerdevcluster-shard-00-01-pri.yei6y.mongodb.net:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: 
\"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n ---> MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n ---> System.IO.EndOfStreamException: Attempted to read past the end of the stream.\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytes(Stream stream, Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(Int32 responseTo, CancellationToken cancellationToken)\n--- End of stack trace from previous location ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(Int32 responseTo, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.HelloHelper.GetResult(IConnection connection, CommandWireProtocol`1 helloProtocol, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.ConnectionInitializer.SendHello(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)\", LastHeartbeatTimestamp: \"2023-09-22T11:56:42.3861491Z\", LastUpdateTimestamp: \"2023-09-22T11:56:42.3861491Z\" }, { ServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/schedulerdevcluster-shard-00-02-pri.yei6y.mongodb.net:27017\" }\", EndPoint: \"Unspecified/schedulerdevcluster-shard-00-02-pri.yei6y.mongodb.net:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\n ---> MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n ---> System.IO.EndOfStreamException: Attempted to read past the end of the stream.\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytes(Stream stream, Byte[] buffer, Int32 offset, Int32 count, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(Int32 responseTo, CancellationToken cancellationToken)\n--- End of stack trace from previous location ---\n 
at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(Int32 responseTo, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.HelloHelper.GetResult(IConnection connection, CommandWireProtocol`1 helloProtocol, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.ConnectionInitializer.SendHello(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)\", LastHeartbeatTimestamp: \"2023-09-22T11:56:42.3905484Z\", LastUpdateTimestamp: \"2023-09-22T11:56:42.3905484Z\" }] }.\n at MongoDB.Driver.Core.Clusters.Cluster.ThrowTimeoutException(IServerSelector selector, ClusterDescription description)\n at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChangedHelper.HandleCompletedTask(Task completedTask)\n at MongoDB.Driver.Core.Clusters.Cluster.WaitForDescriptionChanged(IServerSelector selector, ClusterDescription description, Task descriptionChangedTask, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Clusters.Cluster.SelectServer(IServerSelector selector, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Clusters.IClusterExtensions.SelectServerAndPinIfNeeded(ICluster cluster, ICoreSessionHandle session, IServerSelector selector, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Bindings.ReadPreferenceBinding.GetReadChannelSource(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Bindings.ReadBindingHandle.GetReadChannelSource(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadContext.Initialize(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableReadContext.Create(IReadBinding binding, Boolean retryRequested, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.EstimatedDocumentCountOperation.ExecuteAsync(IReadBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteReadOperationAsync[TResult](IReadBinding binding, IReadOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteReadOperationAsync[TResult](IClientSessionHandle session, IReadOperation`1 operation, ReadPreference readPreference, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at TenantServices.Service.StartupHealthCheck.CheckHealthAsync(HealthCheckContext context, CancellationToken cancellationToken) in /src/TenantServices.Service/StartupHealthCheck.cs:line 29\n at 
Microsoft.Extensions.Diagnostics.HealthChecks.DefaultHealthCheckService.RunCheckAsync(HealthCheckRegistration registration, CancellationToken cancellationToken)\n", "text": "I’m trying to deploy an Azure Container App (managed Kubernetes) which accesses an Atlas Mongo database through a peering connection. The initial connection to the database fails after 30 seconds with a timeout. The full stack trace at the end of this post.The peer status is “Available”.The IP Address range of the Azure VNet is in the IP Access list and is Active.The IP Address range of the Atlas CIDR for the peer is in the IP Access list and is Active.I think the connection string is correct, it includes the “-pri” and, If I remove the “-pri”, I can use the connection string to connect from Compass on my machine.I don’t have a machine in the Azure VNet that I can test from.I have been through this: https://www.mongodb.com/docs/atlas/troubleshoot-connection/How do I diagnose this problem? Any suggestions are welcome! Thanks!Here is the full stack trace:", "username": "John_Vottero1" }, { "code": "", "text": "John have you tried the non-SRV (legacy) connection string? We’ve seen issues in the past where in certain Azure contexts (Windows specifically I believe) the SRV record length didn’t work with the DNS capabilities of that runtime: We’ve tried to escalate this with Microsoft for a fix in the past.", "username": "Andrew_Davidson" }, { "code": "", "text": "Thanks for the suggestion but, it did not resolve the issue.Also, when using a SRV connection string I can see from the error message that it was able to resolve the cluster name into a server name so DNS seems to be working.I’ve also tried granting 0.0.0.0/0 Network Access and I still have the problem.I suspect some sort of routing issue in the Azure Container Apps environment. I have opened a case with Azure and they are investigating.", "username": "John_Vottero1" }, { "code": " var settings = MongoClientSettings.FromConnectionString(connectionString);\n settings.UseTls = false;\n\n", "text": "My code was causing the problem. I was doing:Setting UseTls to false is required when using a local MongoDb container but caused the timeout when deployed to an Azure Container App. Changing false to true resolved the issue.", "username": "John_Vottero1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Timeout when Azure Container App tries to connect via peering
2023-09-22T12:57:39.568Z
Timeout when Azure Container App tries to connect via peering
451
null
[ "replication", "sharding", "mongodb-shell" ]
[ { "code": "mongocluster1mongocluster2mongocluster3configReplSetconfigReplSetmongocluster1mongocluster2mongocluster2vindb2pndb2daldb2daldb2mongosh[direct: other]rs.status()SECONDARYmongocluster2 [direct: other] test> rs.status()\n{\n set: 'mongocluster2',\n date: ISODate(\"2022-11-20T07:30:30.003Z\"),\n myState: 2,\n term: Long(\"66\"),\n syncSourceHost: 'pndb2:27017',\n syncSourceId: 7,\n heartbeatIntervalMillis: Long(\"2000\"),\n majorityVoteCount: 2,\n writeMajorityCount: 2,\n votingMembersCount: 2,\n writableVotingMembersCount: 2,\n optimes: {\n lastCommittedOpTime: { ts: Timestamp({ t: 1668929429, i: 1 }), t: Long(\"66\") },\n lastCommittedWallTime: ISODate(\"2022-11-20T07:30:29.566Z\"),\n readConcernMajorityOpTime: { ts: Timestamp({ t: 1668929429, i: 1 }), t: Long(\"66\") },\n appliedOpTime: { ts: Timestamp({ t: 1668929429, i: 1 }), t: Long(\"66\") },\n durableOpTime: { ts: Timestamp({ t: 1668929429, i: 1 }), t: Long(\"66\") },\n lastAppliedWallTime: ISODate(\"2022-11-20T07:30:29.566Z\"),\n lastDurableWallTime: ISODate(\"2022-11-20T07:30:29.566Z\")\n },\n lastStableRecoveryTimestamp: Timestamp({ t: 1668929239, i: 1 }),\n members: [\n {\n _id: 6,\n name: 'vindb2:27017',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 37,\n optime: { ts: Timestamp({ t: 1668929429, i: 1 }), t: Long(\"66\") },\n optimeDurable: { ts: Timestamp({ t: 1668929429, i: 1 }), t: Long(\"66\") },\n optimeDate: ISODate(\"2022-11-20T07:30:29.000Z\"),\n optimeDurableDate: ISODate(\"2022-11-20T07:30:29.000Z\"),\n lastAppliedWallTime: ISODate(\"2022-11-20T07:30:29.566Z\"),\n lastDurableWallTime: ISODate(\"2022-11-20T07:30:29.566Z\"),\n lastHeartbeat: ISODate(\"2022-11-20T07:30:29.881Z\"),\n lastHeartbeatRecv: ISODate(\"2022-11-20T07:30:28.785Z\"),\n pingMs: Long(\"29\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1668928629, i: 1 }),\n electionDate: ISODate(\"2022-11-20T07:17:09.000Z\"),\n configVersion: 27,\n configTerm: 66\n },\n {\n _id: 7,\n name: 'pndb2:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 37,\n optime: { ts: Timestamp({ t: 1668929429, i: 1 }), t: Long(\"66\") },\n optimeDurable: { ts: Timestamp({ t: 1668929429, i: 1 }), t: Long(\"66\") },\n optimeDate: ISODate(\"2022-11-20T07:30:29.000Z\"),\n optimeDurableDate: ISODate(\"2022-11-20T07:30:29.000Z\"),\n lastAppliedWallTime: ISODate(\"2022-11-20T07:30:29.566Z\"),\n lastDurableWallTime: ISODate(\"2022-11-20T07:30:29.566Z\"),\n lastHeartbeat: ISODate(\"2022-11-20T07:30:29.733Z\"),\n lastHeartbeatRecv: ISODate(\"2022-11-20T07:30:29.465Z\"),\n pingMs: Long(\"22\"),\n lastHeartbeatMessage: '',\n syncSourceHost: 'vindb2:27017',\n syncSourceId: 6,\n infoMessage: '',\n configVersion: 27,\n configTerm: 66\n },\n {\n _id: 11,\n name: 'daldb2:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 39,\n optime: { ts: Timestamp({ t: 1668929429, i: 1 }), t: Long(\"66\") },\n optimeDate: ISODate(\"2022-11-20T07:30:29.000Z\"),\n lastAppliedWallTime: ISODate(\"2022-11-20T07:30:29.566Z\"),\n lastDurableWallTime: ISODate(\"2022-11-20T07:30:29.566Z\"),\n syncSourceHost: 'pndb2:27017',\n syncSourceId: 7,\n infoMessage: '',\n configVersion: 27,\n configTerm: 66,\n self: true,\n lastHeartbeatMessage: ''\n }\n ],\n ok: 1,\n '$gleStats': {\n lastOpTime: Timestamp({ t: 0, i: 0 }),\n electionId: ObjectId(\"000000000000000000000000\")\n },\n lastCommittedOpTime: Timestamp({ t: 1668929429, i: 1 }),\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1668929429, i: 1 
}),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1668929429, i: 1 })\n}\n\"errmsg\" : \"Encountered non-retryable error during query :: caused by :: BSON field 'DatabaseVersion.timestamp' is missing but a required field\"\n[direct: other]rs.status()", "text": "I don’t have a lot of details of the problem since we had to revert our upgrade in production pretty quickly. But we have a sharded cluster with 3 shards (mongocluster1,mongocluster2,mongocluster3). Each shard is a 3-node RS. The config servers are a 3-node RS as well (configReplSet). The configReplSet and 2 of the shards (mongocluster1 and mongocluster2) seemed to upgrade fine. However, in our second shard (mongocluster2 which consists of vindb2, pndb2 and daldb2), daldb2 showed the following status in mongosh prompt. Notice that the state says [direct: other] however, rs.status() shows SECONDARY for the node. :I don’t have much to work on. The only other clue I saw was in some app logs that had an error like:Does anyone know why the prompt would show [direct: other] while rs.status() shows everything is fine?", "username": "AmitG" }, { "code": "$version.timestamp", "text": "While doing more research, I came across the following closed bug: https://jira.mongodb.org/browse/SERVER-68511Although the versions I’m running are supposedly the fixed, the comments about $version.timestamp and the behavior of the nodes not moving primaries correctly looks suspiciously close to the issue we were facing.", "username": "AmitG" }, { "code": "[direct: other][direct: other]", "text": "Hello @AmitG ,Typically the status [direct: other] means you are connected to a node that is not in the state of Primary, Secondary or Arbiter. There are other replica set member states other than those three, but most of the time they are transient and will settle themselves to either Primary, Secondary, or Arbiter. Nodes in this state cannot be queried, but their hosts lists are useful for discovering the current replica set configuration. For more information, please check below link on RSGhost and RSOther.https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst#rsghost-and-rsotherCould you please confirm if you are facing any issues apart from prompt showing [direct: other] or if things are working as expected?Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hi Tarun,While I was upgrading production, I saw this prompt (Even though rs.status() showed 1 primary and 2 secondary members). You can see the rs.status() in my post above to verify. It was not transient… The prompt was stuck in that state… I tried restarting the mongod service.While this was happening, I noticed a major outage in our production applications not being able to connect to our mongo sharded cluster. I had to revert to 5.0 quickly so I didn’t have time to gather more info. I did find the error message that was in the message above about “BSON field ‘DatabaseVersion.timestamp’ is missing but a required field” I think it may be related, but I’m not sure.We have done every major upgrade on MongoDB over the years without failure… This latest one has me hesitant to try again soon. 
That being said, I’m happy to try again if the MongoDB team wants to watch and assist on a weekend ", "username": "AmitG" }, { "code": "", "text": "Were you able to find a fix for this issue?", "username": "Todd_Vernick" } ]
Weird prompt status replicaset node after upgrade from 5.0.14 to 6.0.3
2022-11-20T16:19:46.172Z
Weird prompt status replicaset node after upgrade from 5.0.14 to 6.0.3
2,376
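For the "[direct: other]" prompt thread above, a read-only way to cross-check what the node reports — independently of how mongosh renders the prompt — is to compare the member states in rs.status() with the node's own hello response, which is what shells and drivers use to classify a member as primary/secondary/other. This is only a diagnostic sketch; it does not change any state.

  // mongosh, connected directly to the suspect node
  rs.status().members.map(m => ({ name: m.name, state: m.stateStr }))

  // what the node says about itself in the hello handshake
  const h = db.hello()
  printjson({ isWritablePrimary: h.isWritablePrimary, secondary: h.secondary, setName: h.setName })

If rs.status() says SECONDARY but hello reports neither primary nor secondary, the prompt is reflecting a transient or stuck member state rather than a display bug.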
null
[ "aggregation" ]
[ { "code": "", "text": "I am looking for examples where in aggregate function, we do a $match first, and then do a $lookup.E.g. I have 2 collections. Person and Payments. I want to list down all payments of a person having name as “Gaurav”, and payment date between “X Date range”So, we will do an aggregate Query on Payments with lookup on Person, and the output to have data from Payments with fields from Person too.A. Filter on Payments before Lookup to Person.\nb. Pipeline on matching name of Person\nc. Filter on joined query.Can someone share a scenario similar to this along with the sample aggregation query. It’s a bit urgent.", "username": "Gaurav_Vij" }, { "code": "", "text": "What have you tried so far?", "username": "John_Sewell" }, { "code": "", "text": "I have tried aggregation on payments, and then lookup with pipeline and filter.I am looking for samples where “match” is done before lookup and “match” is also done on lookup collection, and then a combined match after that.", "username": "Gaurav_Vij" }, { "code": "", "text": "You can just $match then $lookup, $unwind (or use $arrayElemAt) to get the lookup field not as an array and then a $match after as needed.What does your data and current pipeline look like? Put up a sample document here or create a mongo playground sketch with the details in it.\nHas lots of examples of its use.", "username": "John_Sewell" }, { "code": "", "text": "Here is the pipeline.[{$match: {\nStatus: ‘Paid’\n}}, {$lookup: {\nfrom: ‘crmTenant’,\nlocalField: ‘TenantId’,\nforeignField: ‘_id’,\nas: ‘TenantLookup’\n}}, {$match: {\n‘TenantLookup.PropertyNumber’: ‘MH12014’\n}}]What I want is to actually have the match on “crmTenant” i.e. PropertyNumber:MH12014 before doing the lookup stage too.Is it possible with mongodb?", "username": "Gaurav_Vij" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and format yours accordingly.Also provide sample documents from both collections. Sample resulting documents are also needed.", "username": "steevej" } ]
Queries with Lookup and Aggregation
2023-10-01T08:40:28.012Z
Queries with Lookup and Aggregation
370
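Since the lookup-and-aggregation thread above never got a worked query, here is a hedged sketch of the three filters the poster asked for, reusing the names that appear in the thread (Status, TenantId, crmTenant, PropertyNumber, 'MH12014'); the source collection name "payments" and the commented-out date field are assumptions. The key point is that the filter on the joined collection lives inside the $lookup pipeline, so crmTenant documents with the wrong PropertyNumber are discarded before the join result is materialised.

  // mongosh sketch — names not quoted in the thread are assumptions.
  db.payments.aggregate([
    // (a) filter Payments before the lookup
    { $match: { Status: "Paid" /*, PaymentDate: { $gte: startDate, $lt: endDate } */ } },

    // (b) lookup into crmTenant with its own $match inside the pipeline
    {
      $lookup: {
        from: "crmTenant",
        let: { tenantId: "$TenantId" },
        pipeline: [
          {
            $match: {
              $expr: { $eq: ["$_id", "$$tenantId"] },   // the join condition
              PropertyNumber: "MH12014"                 // pre-filter on the joined collection
            }
          },
          { $project: { PropertyNumber: 1 } }           // keep only what you need
        ],
        as: "TenantLookup"
      }
    },

    // (c) combined filter on the joined result: drop payments whose tenant didn't match
    { $match: { "TenantLookup.0": { $exists: true } } },
    { $unwind: "$TenantLookup" }
  ])

Filtering inside the sub-pipeline runs per joined tenant, so it is usually cheaper than joining everything and matching afterwards, provided crmTenant has useful indexes on _id and PropertyNumber.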
null
[ "aggregation" ]
[ { "code": "", "text": "Is there a way to do a conditional $lookup?I have records in a collection that have an optional (null) field. I’d rather not waste processing on the records for which the lookup field is null. Not to mention that, depending on indexing, doing a $lookup on a null field could still be expensive.", "username": "nefiga" }, { "code": "{\n $lookup:\n {\n from: <collection to join>,\n let: { <var_1>: <expression>, …, <var_n>: <expression> },\n pipeline: [ <pipeline to execute on the collection to join> ],\n as: <output array field>\n }\n}\n", "text": "With this $lookup syntax,you can specify condition(s) on the join in the pipeline. In your case, that would be a null check condition (in addition to the join conditions).", "username": "Prasad_Saya" }, { "code": "", "text": "Yes, the problem with the solution as you posted is this bug:\nhttps://jira.mongodb.org/browse/SERVER-40362So when you get to nulls on a lookup join, the query runs SLOWWWWW. Now, one thing I could do is an ifNull, and then change the null to be a value that I am certain won’t be in in the lookup collection… but this seems… jenky. So, was hoping for alternative solution.", "username": "nefiga" }, { "code": "$match$lookup{ $match: { someField: { $exists: true } } }$ifNullnulllet: { someField : '$someField' }let: { someField: { $ifNull: [\"$someField\", null ] } }", "text": "You can try using a $match before the $lookup, to exclude the documents with the non-existing field:{ $match: { someField: { $exists: true } } }The alternative workaround to use the $ifNull to substitute a null for non-existing field is fine too. Substitute let: { someField : '$someField' } with let: { someField: { $ifNull: [\"$someField\", null ] } }In applications it is not unusual to have unusual cases - data or logic. It just happens this case is a program shortcoming (and temporary only). It is not that bad to have a case by covering with an additional condition, if it helps running the program efficiently. Some documentation around this workaround helps for reference and apply the issue fix later on when available.", "username": "Prasad_Saya" }, { "code": "updatedFieldsByContractor.updatedValues.citiesdb.getCollection('contractors').find({ _id: ObjectId('htat document') })updatedFieldsByContractor.updatedValues.cities", "text": "Hello. This conditional lookup does not work for me\nMongoDB version: 5\nGUI: robo3t-snap\n\nimage983×726 69.6 KB\n\nupdatedFieldsByContractor.updatedValues.cities there is not in the original document. it just appears here because of that lookup. I mean when i run this command: db.getCollection('contractors').find({ _id: ObjectId('htat document') }) to get the same document it had not updatedFieldsByContractor.updatedValues.cities field. what should i do to lookup conventionally?", "username": "Kasir_Barati" }, { "code": "contractors$lookup$match$lookup", "text": "Hello @Kasir_Barati, you cannot refer the contractors collection document fields directly in the $lookup's pipeline $match stage. Please refer the above linked MongoDB documentation for the $lookup and find correct syntax and usage.", "username": "Prasad_Saya" }, { "code": "", "text": "@Kasir_Barati, was @Prasad_Saya’s answer help you solve your issue? If it has, please mark the post as the solution.", "username": "steevej" }, { "code": "", "text": "This is not my question. Therefore I cannot select it as the answer", "username": "Kasir_Barati" }, { "code": "", "text": "Here I am, a few years later. 
The original answer is not what I’m looking for. I don’t want to filter out results. I want the entire result set, but only do lookups on certain records.For example, collection fruit and child_fruit. If I do db.child_fruit.aggregate(pipeline), I only want child_fruit records to perform $lookup against fruit collection if child_fruit HAS a value for a field “fruit_name”. For example:##fruit collection\ndb.fruit.insert({name: ‘Orange’});##child_fruit collection\ndb.child_fruit.insert({name: ‘Something’, fruit_name: ‘Orange’});\ndb.child_fruit.insert({name: ‘Another’});db.child_fruit.aggregate([\n{$lookup: {\nfrom: ‘fruit’,\n…\n}}\n]);The problem is, given that I still want the “Another” record back in my final result set, I don’t believe there is a way to “ignore” the $lookup for child_fruit records without “fruit_name” field…In this small example it’s irrelevant, but when I’m querying millions of records, many of which do not have a fruit_name field, it makes a huge difference. I want to just “bypass” the lookup operation in this case for these records. Maybe @Asya_Kamsky has an idea, she is a genius.", "username": "nefiga" }, { "code": "$match : { \"fruit_name\" : { \"$exists\" : 1 } }$match : { \"fruit_name\" : { \"$exists\" : 0 } }", "text": "One idea that comes to mind is that you could use $facet.One will start with$match : { \"fruit_name\" : { \"$exists\" : 1 } }and do the $lookup while the other will start with$match : { \"fruit_name\" : { \"$exists\" : 0 } }The issue I see is that each facet is limited to 16Gb which may or may not be an issue.The other thing you could try is do 2 aggregations in a transaction. But I am not sure it is possible to do aggregations inside transactions.Why 2 normal aggregations do not work?Have you looked at the new $lookup variants with which you can have conditions and sub-pipeline?", "username": "steevej" }, { "code": "", "text": "Because the example is part of a much larger aggregation doing other pipeline operations. So running it as two separate aggregations is much less optimal. And yes, $facet hitting that limit would be an issue (it’s 16mb, not Gb… Gb would be great lol).But I appreciate the thought!It would be awesome if there was a way to only conditionally $lookup. Oh well!", "username": "nefiga" }, { "code": "$facet$lookup{$in:[]}{$set:{fruit_name:{$ifNull:[\"$fruit_name\", [] ]}}}$lookup[]localField", "text": "$facet is definitely not the way to go. However, if the field you are doing $lookup on is an empty array then it won’t match anything in the foreign collection (because array is used sort of as {$in:[]} which matches nothing.So the solution to your problem may just be to fill in something like: {$set:{fruit_name:{$ifNull:[\"$fruit_name\", [] ]}}} before the $lookup and the empty [] as localField won’t match anything, unlike null or missing field.Asya", "username": "Asya_Kamsky" }, { "code": "$unionWith", "text": "And since $unionWith was added in 4.4 you can also do two aggregations within the same aggregation pipeline, btw.Asya", "username": "Asya_Kamsky" }, { "code": "{$set:{fruit_name:{$ifNull:[\"$fruit_name\", [] ]}}}$lookup[]localField", "text": "So the solution to your problem may just be to fill in something like: {$set:{fruit_name:{$ifNull:[\"$fruit_name\", [] ]}}} before the $lookup and the empty [] as localField won’t match anything, unlike null or missing field.Awesome, thank you, I’ll give that a go. Isn’t this a more common occurrence? 
I would think that doing a $lookup against a related collection defined by an optional field happens quite a bit, so it seems to be there should be a more straightforward way to do this? Or am I just bonkers?Thanks again.", "username": "nefiga" }, { "code": "", "text": "i am stuck with same case where i am trying to do a lookup with same collection as ben but i have the fruit_name fields already as an empty array [].\nbut it doesnt returns the another documentmy aggregate query\ndb.child_fruits.aggregate([\n{ $match: {\n_id: ObjectId(‘624c577bc98e24f398b012ad’)\n}\n},\n{ $lookup: {\nfrom: ‘fruit’,\nlocalField: ‘fruit_name’,\nforeignField: ‘name’,\nas: ‘fruitDetails’\n}\n},\n{ $unwind: { path: ‘$fruitDetails’ } }\n])", "username": "Adi_Tya" }, { "code": "", "text": "{$set:{fruit_name:{$ifNull:[\"$fruit_name\", ]}}}Thank you. This helped", "username": "George_Thomas" }, { "code": "", "text": "Won’t this still run a lookup on the record with empty field even though it won’t match anything?", "username": "Viktoria_Terzieva" }, { "code": "$unionWithfruit_namefruit_namedb.child_fruit.aggregate([\n { $match: { fruit_name: null }},\n { $unionWith: {\n coll: \"child_fruit\",\n pipeline: [\n { $match: { fruit_name: { $ne: null }}},\n { $lookup: {\n from: \"fruit\",\n ...\n }}\n ]\n },\n ...\n]);\n", "text": "For anyone coming here, $unionWith is the way to go.\nTo use the original question’s context, in your aggregation, initially match the records where fruit_name is null, then union that with the records where fruit_name is defined, and perform the lookup operation within the unionWith pipeline. Something like this:", "username": "Eric_Ferreira" }, { "code": "{ $match: { fruit_name: { $ne: null }}},db.child_fruit.aggregate([\n {$lookup: {\n from: ‘fruit’,\n …\n }}\n]);\n$lookup: {\n ...,\n skip: <condition>\n}\n", "text": "It is definitely an ‘alternative’, but has many drawbacks.For example, { $match: { fruit_name: { $ne: null }}}, is going to be nonperformant even with an index, especially when “fruit_name” has many different values.I would say, GENERALLY speaking, doingis going to be more performant than your unionWith example. Obviously depends on a bunch of factors (collection sizes, index values, etc.). The originally proposed solution by Asya is still the best (using some alternative operator to “fake” a lookup that will cancel out the possibility of a null lookup).It would be awesome if the $lookup operator just supported some sort of skip field based on a conditional", "username": "nefiga" } ]
Conditional $lookup
2020-05-07T21:45:40.843Z
Conditional $lookup
35,504
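To make the $ifNull trick from the conditional-$lookup thread above concrete: substituting an empty array for a missing or null localField means the $lookup stage still runs, but those documents join to nothing, which sidesteps the slow null-join behaviour discussed in the thread while keeping records like "Another" in the result. Collection and field names are the fruit/child_fruit example from the thread itself.

  // mongosh sketch of the suggestion from the thread
  db.child_fruit.aggregate([
    // documents without fruit_name get an empty array, which matches no fruit documents
    { $set: { fruit_name: { $ifNull: ["$fruit_name", []] } } },
    {
      $lookup: {
        from: "fruit",
        localField: "fruit_name",
        foreignField: "name",
        as: "fruitDetails"
      }
    },
    // keep documents that had no fruit_name (their fruitDetails stays empty)
    { $unwind: { path: "$fruitDetails", preserveNullAndEmptyArrays: true } }
  ])

The preserveNullAndEmptyArrays option is also the fix for the later post in the thread where $unwind silently dropped the document with an empty lookup result.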
null
[ "replication", "java", "app-services-user-auth", "spring-data-odm" ]
[ { "code": "", "text": "We have mongo cluster which is a 3 node replicaset and we connect the cluster using Spring boot app. This cluster have a daily maintenance window of 20-30 sec during which one of the node goes down and it automatically comes up post that greenzone.We are using Mongodb java sync driver(Spring-data-mongodb) of version 4.6.1 and MongoDB cluster is having version - 5.0.14.In the logs, I could see it continously logs the error “Exception in monitor thread - Command failed with error 91 - The server is in Quiesce mode and will shutdown” for around 1000 times during this period (~30 sec). And after that it automatically connects back with the log - “Monitor thread successfully connected”.My Query is:", "username": "Deepak_Kumar18" }, { "code": "", "text": "In the logsis the log your app’s or mongodb node’s?during which one of the node goes downis the node shutdown ? or only network is down for this node (and mongodb is still running)?mongodb drivers and cluster nodes all have health check connections to other nodes (so that they know which node is down/up), not sure if the check messages you see come from those.", "username": "Kobe_W" }, { "code": "MongoClient mongoClient = MongoClients.create(MongoClientSettings.builder() \n\t\t\t\t\t\t\t\t\t\t\t\t\t.applyToClusterSettings(builder -> builder\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t.requiredReplicaSetName(replicaSet)\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t.hosts(Arrays.asList( \n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tnew ServerAddress(maasReplicaSetHost1, port), \n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tnew ServerAddress(maasReplicaSetHost2, port), \n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tnew ServerAddress(maasReplicaSetHost3, port))))\n", "text": "We are connecting through Spring Data MongoDB and so we are checking logs in our Spring boot app which connects to the 3 nodes cluster (replicaset). We have written configuration class to connect the cluster like below:This is a planned downtime happens everyday at same time. And whenever this happens we see this error. Not sure about N/W being down.", "username": "Deepak_Kumar18" }, { "code": "", "text": "so we are checking logs in our Spring boot app which connects to the 3 nodes cluster (replicaset)in that case, i’m guessing it’s the cluster monitoring thread (health check) in the driver code. You can check driver documentation and see if there’s any configuration to increase the check interval.", "username": "Kobe_W" } ]
MongoDB Command failed with error 91(shutDownInProgress)
2023-10-02T15:40:45.044Z
MongoDB Command failed with error 91(shutDownInProgress)
372
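For the error-91 thread above: the quiesce-mode messages during the maintenance window come from the driver's server-monitor threads and are expected while a node restarts; what matters for the application is that operations retry and that server selection can outlast the roughly 30-second window. Those knobs can be expressed as standard connection-string options (they map onto the same MongoClientSettings the thread builds in Java); the host names and exact values below are placeholders, not the poster's deployment.

  # illustrative connection string (placeholders, shown on one line in practice):
  #   serverSelectionTimeoutMS set longer than the ~30 s maintenance window
  #   heartbeatFrequencyMS controls how often the monitor re-checks each node when healthy
  mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0&retryWrites=true&retryReads=true&serverSelectionTimeoutMS=40000&heartbeatFrequencyMS=10000

These settings make the application ride through the window; they do not silence the monitor-thread log lines themselves, which can instead be filtered or logged at a lower level if they are noisy.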
null
[ "compass" ]
[ { "code": "", "text": "I have installed MongoDB on Windows Server 2019. It installs successfully and the service is running on 27017. The Compass icon is on the desktop, however when I try to open it nothing happens. I’ve tried both double-clicking the icon and going to the file location and running it from the command line, and it still the Compass application will not open. Is there some log file I can check to see what went wrong? Thanks.", "username": "Keith_Lynn" }, { "code": "", "text": "Did Compass install went fine?What is the version\nThere is no log for Compass\nNormally it should open Compass screen when you click on the icon\nDo you have shell installed on your pc?Can you connect to mongod\nShow us screenshots", "username": "Ramachandra_Tummala" }, { "code": "%USERPROFILE%\\AppData\\Local\\mongodb\\compass", "text": "There is, actually, a log file. It’s not yet mentioned in the docs but it’s there.The easiest way to access the log file is from Compass itself (Help > Open Log File) but this is not useful in this case.You should find your logs in %USERPROFILE%\\AppData\\Local\\mongodb\\compass.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "I’m having the same issue with Windows Server 2019. With a fresh install of MongoDB, Compass interface will not show. The process is running in the taskmgr but no GUI", "username": "Craig_Sills" }, { "code": "", "text": "Check these links", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Was Compass installed with the EXE? If yes, it’s a known issue that we are looking into. I would recommend using the MSI installer.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "I used mongoDB msi installer and compass got installed with it. mongodb-windows-x86_64-5.0.6-signed. Is that what you are referring to?", "username": "Craig_Sills" }, { "code": "", "text": "Ah, I see. Can you try reinstalling Compass using its own MSI? You can find it here: MongoDB Compass Download | MongoDB.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Hello.I uninstalled the previous version and installed the version you suggested and it worked fine. Thanks for taking the time to address this.Regards", "username": "Craig_Sills" }, { "code": "", "text": "i downloaded and installed mongodb compass but .exe file is not inside the bin folder and mongodb icon also not open in desktop i dont know how to open the campass local desktop. before i used mongodb once installed it will open automatically now campass is not opening please give some suggestion?", "username": "kalaiselvi_jayachandran" }, { "code": "", "text": "What is your os?\nYou should see executable file.May be you are checking under mongodb/bin\nCheck under Compass install directory\nOn Windows from programs you can find Compass\nor go to installed directory you will find Compass executable", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I tried it with MSI but after installation when I try opening the compass its not opening at all. How can i open it? Thanks!", "username": "Dick_Harry" }, { "code": "", "text": "What error are you getting?\nPl show with a screenshot\nDid you try from bin dir clicking on the executable", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I have installed Compass and when I try to open Compass by clicking on MongoDB Compass nothing happens my. I have tried opening it from the bin location as well; again nothing happens. I currently using windows 8.1. 
Do I need a newer version of windows?", "username": "Dick_Harry" }, { "code": "", "text": "Is your Windows 32bit or 64bit?\nMongodb supports only 64bit\nAs per download page Win64-bit 8.1+ version needed to install 1.10.1 version", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks for the reply! I have 64 bit windows 8.1 on my computer.", "username": "Dick_Harry" }, { "code": "", "text": "I too am using Windows 8.1.It seems there is no msi for downloading Compass at:MongoDB Compass, the GUI for MongoDB, is the easiest way to explore and manipulate your data. Download for free for dev environments.I went here and expanded the assets to find the .msi although I used the .exe file which worked fine:The GUI for MongoDB. Contribute to mongodb-js/compass development by creating an account on GitHub.I got Version 1.36.4 working although 1.37 may have worked (I want to get to work and I’m too lazy to mess with it right now).", "username": "thecyrusj13_N_A" }, { "code": "", "text": "I tried .exe and. zip as well but they didn’t too, so I upgraded my system instead.", "username": "Dick_Harry" }, { "code": "", "text": "I had the same problem with the last version 1.38.2. I’ve updated the from 1.37. The program does not run.\nLooking in the Application Logs of Windows: I’ve got Exception code: 0x80000003.\nI’ve tried to install it on Windows Server 2022. The same problem.\nThe only solution is to install again the 1.37 version.", "username": "Nicola_ZENI" }, { "code": "", "text": "Win11 x64 here:\nI tried upgrading from a very old version (1.25.0) to 1.39.0, but no Compass shows up after launching. 3x compass processes are launched, but they do nothing. A log file is created, but not closed.Happens regardless of .exe or .msi, but the 1.39.0-beta.2 worked here, at least in MSI.", "username": "Jan-Petter_Jensen" } ]
MongoDB Compass dialog not opening
2022-03-05T22:34:50.221Z
MongoDB Compass dialog not opening
18,659
null
[ "queries", "atlas-device-sync", "android", "kotlin", "flexible-sync" ]
[ { "code": "{\n \"title\": \"Reward\",\n \"type\": \"object\",\n \"required\": [\n \"_id\",\n \"coins\",\n \"description\",\n \"price\",\n \"subtitle\",\n \"title\",\n \"type\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"coins\": {\n \"bsonType\": \"long\"\n },\n \"description\": {\n \"bsonType\": \"string\"\n },\n \"price\": {\n \"bsonType\": \"double\"\n },\n \"subtitle\": {\n \"bsonType\": \"string\"\n },\n \"title\": {\n \"bsonType\": \"string\"\n },\n \"type\": {\n \"bsonType\": \"string\"\n }\n }\n}\nopen class Reward: RealmObject {\n @PrimaryKey\n var _id: ObjectId = ObjectId()\n var type: String = \"premium\"\n var title: String = \"\"\n var subtitle: String = \"\"\n var description: String = \"\"\n var price: Double = 0.0\n var coins: Int = 0\n}\nobject MongoDB : MongoRepository {\n private val app = App.create(APP_ID)\n private val user = app.currentUser\n private lateinit var realm: Realm\n\n init {\n configureTheRealm()\n }\n\n override fun configureTheRealm(): RequestState<Boolean> {\n return if (user != null) {\n try {\n val config = SyncConfiguration.Builder(\n user,\n setOf(Student::class, Reward::class)\n )\n .waitForInitialRemoteData(15.seconds)\n .initialSubscriptions { sub ->\n add(\n query = sub.query<Student>(query = \"ownerId == $0\", user.id),\n name = \"Student subscription\"\n )\n add(\n query = sub.query<Reward>(query = \"type == $0\", \"premium\"),\n name = \"Reward subscription\"\n )\n }\n .log(LogLevel.ALL)\n .build()\n realm = Realm.open(config)\n RequestState.Success(data = true)\n } catch (e: Exception) {\n RequestState.Error(message = e.message.toString())\n }\n } else {\n RequestState.Error(message = MONGO_USER_NULL_MESSAGE)\n }\n }\n\n @OptIn(ExperimentalFlexibleSyncApi::class)\n override suspend fun getRewards(): Flow<RequestState<List<Reward>>> {\n return if (user != null) {\n return try {\n realm.query<Reward>(query = \"type == $0\", \"premium\")\n .find()\n .subscribe(mode = WaitForSync.ALWAYS, timeout = 15.seconds)\n .asFlow()\n .map { RequestState.Success(data = it.list) }\n } catch (e: Exception) {\n flow { emit(RequestState.Error(e.message.toString())) }\n }\n } else {\n flow { emit(RequestState.Error(MONGO_USER_NULL_MESSAGE)) }\n }\n }\n\n}\n", "text": "I have a ‘Reward’ collection that I’m trying to fetch the data from. I want to get all documents from that collection in my android app, where I’m using a Device Sync. The problem is, I already have two documents there, but when I try to fetch the data, I get an empty list? Even though I don’t see any error in the logs. Strange. Plus I’ve enabled the rules so that everyone can read/write to it. And I’ve added ‘type’ field as a queriable field to the Device Sync as well. Btw I’m able to fetch the data from my other collection “Student” successfully.Development mode is enabled, this is the schema:Model class (Realm object):MongoDB data source:", "username": "111757" }, { "code": "", "text": "To fix this issue, I’ve had to create a function for adding a document in that collection, and afterwards I was able to fetch the data. Just if someone else have the same issue…", "username": "111757" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Device Sync returns an Empty List from a Collection - MongoDB Sync (Android's Kotlin SDK)
2023-10-02T06:41:41.440Z
Device Sync returns an Empty List from a Collection - MongoDB Sync (Android's Kotlin SDK)
293
null
[]
[ { "code": "", "text": "hi,with wix, its possible to add an external database (“add external collection”) by entering an endpoint url and configuration to set up an API key. However, I keep getting an “wde0116 no message from connector” error, no matter how i configure the API Key (APIkey, key, api-key al don’t work).Trying to solve it, in mongodb atlas, i created an API key under Data API, and under user settings I enabled custom user data, set the cluster/collection/database name, and entered the API ID in the user ID field.Still,i can’t configure it to work; is there anyone who knows how to set a wix <> mongodb connection directly? I can find some tools online (reshuffle-wix-connector for example) but no manuals to do it directly without any other tools or scripts. Seems to me that I’m doing something wrong? Thanks in advance.", "username": "Koen_Keunen" }, { "code": "", "text": "Hi @Koen_Keunen and welcome to the MongoDB Community forum!!Unfortunately there is no documentation available from us to connect Wix with MongoDB using the data API.\nBut, you can refer to the documentation on:Unfortunately I’m not familiar enough with Wix and how they connect to an external database to understand what went wrong with your setup, I would recommend you to visit the wix community forum for more detailed information.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This is going to be similar to how you setup the HTTP connection to Firehose.Of course not exactly, but similar.", "username": "Brock" }, { "code": "", "text": "Hi @Koen_KeunenThis documentation from Wix may help you.", "username": "Uelinton_Santos" } ]
Connecting to Atlas with Wix
2023-03-19T13:39:52.320Z
Connecting to Atlas with Wix
1,092
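The thread above never shows what a working Data API request looks like outside Wix, so here is a minimal, hedged Node.js (18+) sketch for checking that the API key and endpoint work at all before wiring them into an external-collection connector. The App ID, cluster name, database and collection names are placeholders, not values from the thread.

// Minimal Node.js (18+) sketch of an Atlas Data API v1 call.
// <app-id>, Cluster0, mydb and mycollection are placeholders — substitute your own.
const url = 'https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/findOne';

async function findOneViaDataApi() {
  const res = await fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // the header name expected by the Data API is exactly "api-key"
      'api-key': process.env.ATLAS_DATA_API_KEY,
    },
    body: JSON.stringify({
      dataSource: 'Cluster0',   // Atlas cluster name
      database: 'mydb',
      collection: 'mycollection',
      filter: { isActive: true },
    }),
  });
  return res.json();            // { document: {...} } or { document: null }
}

findOneViaDataApi().then(console.log).catch(console.error);

If a request like this succeeds from a terminal or a small script but the Wix connector still reports wde0116, the problem is likely in the connector configuration rather than in Atlas.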
null
[ "flutter" ]
[ { "code": "realm\n .query<JournalEntryEntity>('TRUEPREDICATE SORT(date DESC, time DESC)')\n .skip(10)\n .take(20)\n", "text": "Hi,Just updated to new 1.5.0 version with the improvement to .skip() method and noticed that paging stopped working. If I doThe first element or the result would be the element on index 0.\nIf I run the same query but without the .take(), than the result will be properly skipped.Is there a way to handle such pagination with new .skip() implementation?", "username": "Andrew_M1" }, { "code": "", "text": "This is a bug with the new skip implementation - I filed a bug report here: .skip doesn't work well with .take · Issue #1409 · realm/realm-dart · GitHub.", "username": "nirinchev" }, { "code": "", "text": "@nirinchev thanks! I wasn’t sure where to report this ", "username": "Andrew_M1" }, { "code": "", "text": "I opened a pr to fix this. Sorry for the oversight.", "username": "nielsenko" }, { "code": "", "text": "@Kasper_Overgard_Nielsen Awesome! Thank you for the fast fix!", "username": "Andrew_M1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
New .skip() implementation doesn't work with .take()
2023-10-02T11:09:38.386Z
New .skip() implementation doesn't work with .take()
345
null
[ "mongodb-shell" ]
[ { "code": "", "text": "I am using AWS linux 2023 , installed mongodb openssl… it was working fine. i was able to access it using mongosh and also able to access it through my website but suddenly it stopped.Current Mongosh Log ID: 651a68e1bd5d207b9792aaad Connecting to: mongodb://127.0.0.1:27017/directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.0.1 connect ECONNREFUSED 127.0.0.1:27017Current Scenerio TestedLoaded: loaded (/etc/systemd/system/mongod.service; enabled; preset: disab> Active: failed (Result: exit-code) since Mon 2023-10-02 10:24:00 UTC; 9s a> Duration: 1ms Process: 161134 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=e> Main PID: 161134 (code=exited, status=217/USER) C", "username": "Meenu_Dogra" }, { "code": "journalctl -u mongod.service", "text": "Hi @Meenu_Dogra , welcome to the forums.First check the output from journalctl -u mongod.service that may have more information as to why it failed.", "username": "chris" } ]
Suddenly MongoDB openssl stopped connecting - connect ECONNREFUSED 127.0.0.1:27017
2023-10-03T04:21:00.072Z
Suddenly MongoDB openssl stopped connecting - connect ECONNREFUSED 127.0.0.1:27017
247
null
[ "aggregation", "queries", "data-modeling" ]
[ { "code": "{\n _id: <ObjectID>\n user_id: <ObjectID>\n following_id: <ObjectID>\n timestamp: <datetime>\n status: <string>\n}\n{\n _id: <ObjectID>\n creator_id: <ObjectID>\n content: <string (to S3 or something)>\n ...\n}\n$lookup$match$lookup", "text": "Hello everyone,I am currently developing a social media app, and I’m getting very stuck on an efficient way to structure individual user’s social media feeds, sorted by most recently posted. Here are the relevant structures (both of which reference a users collection that isn’t shown here).followerspostsMy current solution (which I am aware is suboptimal), is to run an aggregation pipeline that consists of two stages: for each post $lookup all the followers of the creator, and then $match the relevant relationships for the viewing user. This seems very inefficient, as I am going to have to essentially do a $lookup on every post.I’ve read solutions about having a “fan-out on write”, where users have a “timeline” of sorts, and when users that they are following make a post, it gets pushed onto their timeline. The timeline would be capped as to not overflow the document size. This seems like a good possibility, but I’m very confused about the logistics:The data is structured in such a way that user’s with millions of followers would still maintain efficiency. It is really easy to query a user’s followers/following. But I’m still not sure, is there a completely different way I should be structuring this data?This seems like a problem that I’m sure many other people have run into, but I’m struggling to find answers to some of these questions. Any advice would be very much appreciated", "username": "Mike_Scornavacca" }, { "code": "", "text": "@Mike_Scornavacca First, thanks for sharing your proposed solution, despite appearing to be a great option to build a social media style feed with a non-relational database there does not seem to be much information regarding this approach. I would be curious to hear what someone at MongoDB thinks, but here are my thoughts.Answering your Two Specific QuestionsCan you elaborate on your use case here? Depending on need, the easiest solution is to just have the feed end. I am not sure if they do this anymore, but know Facebook did exactly that for a number of years, at some point there is just a message at the bottom of the feed that said “No more content available.” If you are trying to populate the feed with new posts, do new posts exist? Why weren’t they in the feed in the first place? Long story short, I think this is dependent on your applications need. If each document in the Timeline collection is unique to a user, I would imagine that each post inserted here would only take up a few KB of space, you can easily store hundreds/thousands of posts for a user’s timeline before getting close to the 16MB cap (not suggesting you need to fill each timeline to 16MB).I think there are a number of solutions here, dependent on your goals. The option I would prefer is to just sort the posts whenever you get the user’s timeline vs. inserting posts in the correct order anytime there is a new follower, multiple posts are made at the same time, etc.Separate Collections (Joining with Lookup)\nI am confident you are spot on here, using $lookup to join the Users and Posts collections will not work effectively as the number of documents in each collection grows. There are countless ‘problems’ shared across the Internet. 
Although it looks and is easy to implement, it is definitely not the right solution as $lookup would be used frequently for an application with a social media feed. I can imagine scenarios where running an aggregation pipeline can take several seconds (or even minutes) and the user just watches a loader spin. Obviously not ideal for a social media application.Fan-Out on Write\nThis approach should work great and after some research appears to be exactly what Twitter does. I really like this approach because it only requires a simple get request, you can set each Timeline _id to match the user’s id (indexed by default) and very effectively fetch the timeline document for the user. Loading the feed would be very quick.To consider with fan-out on write approach:Out of curiosity, have you started using the fan-out on write approach? How are you handling inserting posts to the timelines of all the user’s followers?", "username": "Jason_Tulloch" }, { "code": "", "text": "I’m sorry I missed this question the first time around. There is a reference implementation for social platform called Socialite we wrote back in 2014 - all the principles it demonstrates are still applicable. Take a look at its documentation here: GitHub - mongodb-labs/socialite: Social Data Reference Architecture - This Repository is NOT a supported MongoDB product and there are a few recordings talking about the various trade-offs and benchmarking though I’m not sure I was able to find all of them (original presentation was in three parts): Industries | MongoDB and I’m still looking for parts two (how to store user graph) and 3 (how to cache timeline efficiently).Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Hi all, I’d love it if you could help me too. I’m considering developing a service to gain followers on different social networks. This idea is very much in demand among social network users, and it will be my first project, so I will be glad to get some advice from experienced developers. I want to create a service like this: https://enforcesocial.com/buy-tiktok-folllowers , but better. What do you advise paying attention to when making such a service? What problems may arise? I will be glad to get your advice, and good luck to all of you in future projects.", "username": "Paul_Ruppert" }, { "code": "", "text": "When users scroll through their entire timeline, you can implement a pagination system. Load a batch of posts as they scroll, and fetch more when they reach the end. This way, you avoid reloading the entire timeline, which can be resource-intensive.", "username": "Verten_Saltan" } ]
Efficient Structure for Social Media Feeds (fan-out on write)?
2022-01-14T05:35:21.050Z
Efficient Structure for Social Media Feeds (fan-out on write)?
8,667
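A minimal mongosh-style sketch of the fan-out-on-write step discussed in the thread above. The collection names (posts, followers, timelines), the creator id, and the 500-post cap are assumptions for illustration; only the followers/posts field names come from the original question.

// Fan-out on write, mongosh-style sketch.
const creatorId = ObjectId('650000000000000000000001');   // hypothetical creator

const post = {
  _id: new ObjectId(),
  creator_id: creatorId,
  content: 'hello world',
  timestamp: new Date()
};
db.posts.insertOne(post);

// Everyone following the creator (schema from the original question).
const followerIds = db.followers
  .find({ following_id: creatorId }, { user_id: 1, _id: 0 })
  .toArray()
  .map(doc => doc.user_id);

// Prepend the post to each follower's timeline and keep the array bounded.
db.timelines.updateMany(
  { user_id: { $in: followerIds } },
  {
    $push: {
      posts: {
        $each: [post],
        $position: 0,   // newest first, so reads need no sort
        $slice: 500     // cap the embedded array well below the 16 MB limit
      }
    }
  }
);

With this shape, reading a feed becomes a single indexed fetch on timelines by user_id; scrolling past the capped window can fall back to querying the posts collection directly.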
null
[ "database-tools", "containers", "backup", "atlas-cli" ]
[ { "code": "", "text": "Hi,I am trying to build a Docker Image with atlas-cli and mongodump and mongorestore.\nIn my Dockerfile I am using debian:10-slim so I followed the instrcutions on the atlas cli page.\nIt became a bit tricky as I am working on a M1 Mac but the Container will run on a x86 machine.\nThe compatibility matrix ( https://www.mongodb.com/docs/atlas/cli/stable/compatibility/#std-label-compatibility-atlas-cli ) says Debian: x86/ARM so my thought was “I can try the image locally and if everything works fine I will push it to the CI/CD” but there is one problem…\nThe apt repository doesn’t have any atlas-cli binaries:\nMongoDB RepositoriesI think this should be fixed as using arm based machines is becoming more usual these days.", "username": "Sven_Meyer" }, { "code": "", "text": "Hi @Sven_Meyer ,Thank you for raising it to our attention. That’s a miss on our end and we will have it fixed.\nIn meantime, feel free to use the Debian builds from the Download Center.\nI’m also wondering if a vialble alternative for you would be using our official Docker image for Atlas CLI and then do Docker Compose? This way you could save some time with the ongoing maintenacne to keep Atlas CLI up to date.\nIn case you’re using GitHub Actions for your CI / CD we also have an official Atlas action that uses Atlas CLI.Thanks,\nJakub", "username": "Jakub_Lazinski" } ]
Atlas CLI not available on Debian Linux ARM64
2022-09-19T09:39:15.715Z
Atlas CLI not available on Debian Linux ARM64
2,048
https://www.mongodb.com/…5_2_1024x717.png
[]
[ { "code": "", "text": "Hi there!\nI configured SSO Federated Authentication from Okta regarding these instructions: Okta MongoDBI tried to log in and got this URL MongoDB Cloud - Error\nAnd error message: “Login Error”\n\nimage1339×938 32.4 KB\nWhere could I get more details or descriptions about this error? What happened?\nThanks for your help!", "username": "Maksym_Mykytyn" }, { "code": "", "text": "Hi @Maksym_Mykytyn I tried to log in and got this URL MongoDB Cloud - Error \nAnd error message: “Login Error”Where could I get more details or descriptions about this error? What happened?\nThanks for your help!If you’re unable to log in anymore I’d contact the Atlas in-app chat support regarding this one. They have further insight into your Atlas project so they might be able to provide some pointers.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran\nThank you for your reply and help!Now I can log in with the Bypass SAML URL.\nBut in-app support can’t help me with any Federation Management.I got SAML Response, decoded it from base64, and read XML. In XML all fields are correct.\nMaybe you know another way to see what precisely an error is during signing into Atlas from Okta?\n\nimage740×395 23 KB\n", "username": "Maksym_Mykytyn" }, { "code": "", "text": "I get the same error when I invite new users to Mongodb Atlas.\nSAML integration is configured with our google workspace.\nExisting accounts can login:There doesn’t seem to be a way for them to sign in via SSO.", "username": "Mark_O_Reilly" }, { "code": "", "text": "I’m experiencing the same scenario. Has anyone found the solution for this?\nThanks", "username": "Robert_Strajher" }, { "code": "", "text": "Have you tried unticking “Signed response” in your IdP configuration?", "username": "eMDe" }, { "code": "", "text": "In my case I hadn’t configured the attribute mapping for firstName and lastName.\nIt seems that MongoDB needs this information and if the mapping isn’t created SAML login will crash with okta/hooks/acserror.A better error message would’ve been great.", "username": "Mark_O_Reilly" } ]
MongoDB Atlas Okta Integration Login Error
2023-06-13T18:37:59.875Z
MongoDB Atlas Okta Integration Login Error
737
null
[ "atlas-cli" ]
[ { "code": " name: Install MongoDB Atlas CLI\n\non: [push]\n\njobs:\n install-cli:\n runs-on: ubuntu-latest\n\n steps:\n - name: Checkout code\n uses: actions/checkout@v2\n\n - name: Install MongoDB Atlas CLI\n run: |\n sudo apt-get install gnupg\n wget -qO - https://pgp.mongodb.com/server-6.0.asc | sudo apt-key add -\n echo \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list\n sudo apt-get update\n sudo apt-get install -y mongodb-atlas-cli\n atlas --version\n", "text": "I’m trying to develop a script that will use MongoDB Atlas CLI to do an automatic point-in-time recovery operation if db migrations fail.I intend to run this on Github actions. My question is therefore, how do I install the Atlas CLI on Github actions?Based on this page, the recommended way might be to follow the guidelines for ubuntu there, so something like this?Thanks in advance ", "username": "Alex_Bjorlig" }, { "code": "", "text": "Hi @Alex_Bjorlig ,Sorry for the late response here. Have you had a chance to try the official GitHub Action for Atlas CLI?Use AtlasCLI in your GitHub workflowThanks,\nJakub", "username": "Jakub_Lazinski" }, { "code": "", "text": "I will give it a try", "username": "Alex_Bjorlig" }, { "code": "", "text": "Thanks Alex,Once you get a chance to give it a try, I’d be very keen to hear what you think about it and what was your experience with it.Thanks,\nJakub", "username": "Jakub_Lazinski" } ]
What is the recommended way to install the Atlas CLI on github actions?
2023-08-11T13:58:44.358Z
What is the recommended way to install the Atlas CLI on github actions?
618
null
[ "node-js", "compass", "database-tools" ]
[ { "code": "{\n\"_id\": {\"$oid\": \"fa3e6fe626f94cdfe8558a6e\"},\n\"members1\": [\n{\"$oid:\": \"6d51c5346c5060a572c4cbc7\"},\n{\"$oid:\": \"fe8ce32ad418303ed38aec47\"},\n{\"$oid:\": \"9d4076461b20770443d46327\"},\n{\"$oid:\": \"28b730141139503b00128eb4\"}\n],\n\"members2\": {\n\"1\": {\"$oid\": \"6d51c5346c5060a572c4cbc7\"},\n\"2\": {\"$oid\": \"fe8ce32ad418303ed38aec47\"},\n\"3\": {\"$oid\": \"9d4076461b20770443d46327\"},\n\"4\": {\"$oid\": \"28b730141139503b00128eb4\"}\n},\n\"embed1\": {\"2\": {\"$oid\": \"fe8ce32ad418303ed38aec47\"},\n\"_id\": {\"$oid\": \"6d51c5346c5060a572c4cbc7\"}\n},\n\"embed2\": {\"type\": \"vessel-status\",\n\"lookupValue\": \"A\",\n\"description\": \"Active\",\n\"isActive\": true,\n\"isAshop\": true,\n\"isWcgop\": true,\n\"_id\": {\"$oid\": \"6d51c5346c5060a572c4cbc7\"}\n},\n\"parent\": {\"$oid\": \"d9630c927bc17266f4549426\"}\n}\nt> db.resources.findOne(ObjectId('fa3e6fe626f94cdfe855\n8a6e'))\n{\n _id: ObjectId(\"fa3e6fe626f94cdfe8558a6e\"), \n members1: [\n { '$oid:': '6d51c5346c5060a572c4cbc7' }, \n { '$oid:': 'fe8ce32ad418303ed38aec47' }, \n { '$oid:': '9d4076461b20770443d46327' }, \n { '$oid:': '28b730141139503b00128eb4' } \n ],\n members2: {\n '1': ObjectId(\"6d51c5346c5060a572c4cbc7\"),\n '2': ObjectId(\"fe8ce32ad418303ed38aec47\"),\n '3': ObjectId(\"9d4076461b20770443d46327\"),\n '4': ObjectId(\"28b730141139503b00128eb4\")\n },\n embed1: {\n '2': ObjectId(\"fe8ce32ad418303ed38aec47\"),\n _id: ObjectId(\"6d51c5346c5060a572c4cbc7\")\n },\n embed2: {\n type: 'vessel-status',\n lookupValue: 'A',\n description: 'Active',\n isActive: true,\n isAshop: true,\n isWcgop: true,\n _id: ObjectId(\"6d51c5346c5060a572c4cbc7\")\n },\n parent: ObjectId(\"d9630c927bc17266f4549426\")\n}\n\n", "text": "I have been wrestling with trying to import JSON – using mongoimport or the Javascript insert/replace calls – an array of mongo ObjectIds.According to Mongo docs, to specify a Mongo Id in JSON, one uses {“$oid”:XXXX}.The long and short of it is that if the data is in a JSON array, the mongo import – both command line and API – does not recognize {“$oid”:XXXX} and convert it to an Object Id.Below is a test case, using Compass as the viewer, and what I see is the array members are not recognized as ObjectIds. I have no idea if this is a bug, or a feature. It even manifests itself with JSON.stringify.Given a test case where we insert the JSON belowWhen we retrieve the data we get:", "username": "Stephen_Montsaroff_NOAA_Affiliate" }, { "code": "\"members1\": [\n{\"$oid:\": \"6d51c5346c5060a572c4cbc7\"},\n{\"$oid:\": \"fe8ce32ad418303ed38aec47\"},\n{\"$oid:\": \"9d4076461b20770443d46327\"},\n{\"$oid:\": \"28b730141139503b00128eb4\"}\n],\n\"members1\": [\n{\"$oid\": \"6d51c5346c5060a572c4cbc7\"},\n{\"$oid\": \"fe8ce32ad418303ed38aec47\"},\n{\"$oid\": \"9d4076461b20770443d46327\"},\n{\"$oid\": \"28b730141139503b00128eb4\"}\n],\n", "text": "Syntax error:vs:You have the “:” characher within the $oid prefix.", "username": "John_Sewell" } ]
Trouble importing JSON array of objectID {$oid:XXX}
2023-10-03T00:45:38.253Z
Trouble importing JSON array of objectID {$oid:XXX}
33,209
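After re-importing with the corrected $oid keys, a quick way to confirm the array members really became ObjectIds is an aggregation with $type. A hedged mongosh sketch, reusing the collection and field names from the thread:

// Verify the BSON types after re-importing (mongosh sketch).
db.resources.aggregate([
  { $match: { _id: ObjectId('fa3e6fe626f94cdfe8558a6e') } },
  { $project: {
      members1Types: { $map: { input: '$members1', in: { $type: '$$this' } } },
      parentType: { $type: '$parent' }
  } }
])
// Expect every entry to be "objectId" rather than "object" (the broken import)
// or "string".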
null
[ "replication", "database-tools", "backup" ]
[ { "code": "", "text": "I am trying to import an archive using mongorestore. I have a MongoDB Atlas replica set with 3 nodes. I tried “/writeConcern=majority” and even “/writeConcern=3” but I keep getting a “Replication Oplog Window has gone below 1 hours” email warning. I have found Alert Replication Oplog Window has gone below 1 hours which gives two options. One is exactly to utilize “writeConcern”, while the other being increasing the cluster “oplog size” (which is something I want to avoid).Why isn’t “writeConcern” working?Any help is appreciated.", "username": "AlexP" }, { "code": "", "text": "When you write lots data, your oplog window is going to drop. If it drops below your replication lag the node is going to enter recovering state. Check your metrics, how close did you come to a disaster? If you came close, next time you import a similar amount of data make sure you increase your oplog size. Do not say you don’t want to do it. You have to or you can import your data in smaller batches.", "username": "Daniele_Graziani1" } ]
Mongorestore: Replication Oplog Window has gone below 1 hours
2022-11-02T17:39:19.192Z
Mongorestore: Replication Oplog Window has gone below 1 hours
1,557
null
[ "node-js", "mongoose-odm", "mongodb-shell", "containers" ]
[ { "code": "mongomongodocker run -d \\\n --name mongo \\\n -p 27017:27017 \\\n -v /path/to/mongo:/data/db \\\n -e MONGO_INITDB_ROOT_USERNAME=root \\\n -e MONGO_INITDB_ROOT_PASSWORD=password \\\n mongo:latest\nmongomongoshmongosh --host 17.17.17.17. --port 27017 --username root --password password --authenticationDatabase admin\nrootmongodb://root:[email protected]:27017/test?authSource=admin\nmongodb://root:[email protected]:27017/admin?authSource=admin\nmongoshdocker run -d \\\n --name mongo \\\n -p 27017:27017 \\\n -v /path/to/mongo:/data/db \\\n mongo:latest\nmongosh", "text": "I am facing a weird problem regarding the mongo container. I have a server machine whose IP address is 17.17.17.17. In that machine I am trying to deploy my MongoDB, and I want it to be accessible from other machines. I have used the following command to deploy mongo and allowed the port 27017 in the firewall.The mongo container is running but when I try to access it from another machine either via mongosh or via a nodejs process (mongoose library) it is not accessible.I am using the following command to access mongodb deployed in this machine from other machine:This shows authentication error every time and in the server machine logs. It says it is not getting any user named root in the admin database.In the nodejs , I am using the following urls to connect to the mongodb:Again, it fails to connect to the mongodb.However, if I do not use any kind of authentication to deploy mongo db, it can be accessed both via mongosh and nodejs.For example if I use this:It is accessible both via mongosh and nodejs library from other machines.Can anyone tell me what am I doing wrong here?", "username": "Rifat_Rubayatul_Islam" }, { "code": "", "text": "If at any point the password for root changed on this database the provided credentials on the command line will no longer apply. Could that be the case for this database ?", "username": "chris" }, { "code": "", "text": "The password was never changed. But i did try the db without authentication at first. Then stopped and removed the container and created a new one with root password. Is that causing the issue? If yes, how should I fix it?", "username": "Rifat_Rubayatul_Islam" }, { "code": "db.getSiblingDB('admin').createUser(\n{\n user:\"root\",\n pwd:\"reallySecur3Passw0rd\",\n roles: [\"root\"]\n}\n", "text": "Yes that will be creating the issue.Start the database without authentication enabled. Connect and create the user:Then start the database with authentication enabled.", "username": "chris" }, { "code": "", "text": "It worked. Thanks a lot.", "username": "Rifat_Rubayatul_Islam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
AuthenticationFailed while trying to connect to mongo docker container
2023-10-02T13:31:43.189Z
AuthenticationFailed while trying to connect to mongo docker container
311
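Once the root user exists and the container has been restarted with authentication enabled, a minimal Node.js driver check looks like the sketch below. The host, port and password are the placeholder values from the thread; the same ?authSource=admin applies to a mongoose connection string as well.

// Minimal Node.js driver connectivity check (host/credentials are placeholders).
const { MongoClient } = require('mongodb');

const uri = 'mongodb://root:password@17.17.17.17:27017/?authSource=admin';

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  const result = await client.db('admin').command({ ping: 1 });
  console.log(result);   // { ok: 1 } once authentication succeeds
  await client.close();
}

main().catch(console.error);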
null
[ "aggregation", "queries" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"635286ea1c66064140400d67\"\n }, \n \"retailerRequestId\": 82, \n \"receive\": [\n {\n \"receivedQty\": 100,\n \"receivedDate\": \"2015-12-01 18:16:46\",\n \"receivedStatus\": {\n \"statusId\": 435,\n \"status\": \"RReceivedFull\"\n },\n \"tagItems\": [\n {\n \"tagSerialNumber\": \"137438955172\",\n \"status\": \"RReceivedFull\"\n },\n {\n \"tagSerialNumber\": \"137438955171\",\n \"status\": \"RReceivedFull\"\n },\n {\n \"tagSerialNumber\": \"137438955170\",\n \"status\": \"RReceivedFull\"\n },\n {\n \"tagSerialNumber\": \"137438955169\",\n \"status\": \"RReceivedFull\"\n },\n {\n \"tagSerialNumber\": \"137438955168\",\n \"status\": \"RReceivedFull\"\n }\n ]\n }\n ]\n}\n", "text": "Hello Everyone, In the below mentioned document: I would like to update the tagSerialNumber to int64.Please help me in updating the datatype", "username": "Amarendra_Krishna" }, { "code": "$mapreceive$mergeObjects$toLongtagSerialNumberdb.collection.updateMany(\n { \"receive.tagItems.tagSerialNumber\": { $type: \"string\" } },\n [{\n $set: {\n receive: {\n $map: {\n input: \"$receive\",\n in: {\n $mergeObjects: [\n \"$$this\",\n {\n tagItems: {\n $map: {\n input: \"$$this.tagItems\",\n in: {\n $mergeObjects: [\n \"$$this\",\n { tagSerialNumber: { $toLong: \"$$this.tagSerialNumber\" } }\n ]\n }\n }\n }\n }\n ]\n }\n }\n }\n }\n }]\n)\n", "text": "Hello @Amarendra_Krishna ,You can use an update with aggregation pipeline starting from MongoDB 4.2,PlaygroundWarning: Should test in development environment first before production.", "username": "turivishal" }, { "code": "", "text": "Thank you so much, this worked for me", "username": "Amarendra_Krishna" }, { "code": " dbTenant.getCollection(\"LoanRequest\").updateMany({}, [\n {\n $set: {\n Proposal: {\n $map: {\n input: \"$Proposal\",\n in: {\n $mergeObjects: [\n \"$$this\",\n {\n ProposalTaxes: {\n $map: {\n input: \"$$this.ProposalTaxes\",\n in: {\n $mergeObjects: [\n \"$$this\",\n { IOF: { $toInt: \"$$this.IOF\" } },\n ],\n },\n },\n },\n },\n ],\n },\n },\n },\n },\n },\n ]);\nFailed to execute script.\n\nError:\nWriteError({\n\t\"index\" : 0,\n\t\"code\" : 16883,\n\t\"errmsg\" : \"input to $map must be an array not object\",\n\t\"op\" : {\n\t\t\"q\" : {\n\t\t\t\n\t\t},\n\t\t\"u\" : [\n\t\t\t{\n\t\t\t\t\"$set\" : {\n\t\t\t\t\t\"Proposal\" : {\n\t\t\t\t\t\t\"$map\" : {\n\t\t\t\t\t\t\t\"input\" : \"$Proposal\",\n\t\t\t\t\t\t\t\"in\" : {\n\t\t\t\t\t\t\t\t\"$mergeObjects\" : [\n\t\t\t\t\t\t\t\t\t\"$$this\",\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"ProposalTaxes\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$map\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"input\" : \"$$this.ProposalTaxes\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"in\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$mergeObjects\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"$$this\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"IOF\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"$toInt\" : \"$$this.IOF\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t],\n\t\t\"multi\" : true,\n\t\t\"upsert\" : false\n\t}\n}) :\nWriteError({\n\t\"index\" : 0,\n\t\"code\" : 16883,\n\t\"errmsg\" : \"input to $map must be an array not object\",\n\t\"op\" : {\n\t\t\"q\" : {\n\t\t\t\n\t\t},\n\t\t\"u\" : [\n\t\t\t{\n\t\t\t\t\"$set\" : {\n\t\t\t\t\t\"Proposal\" : {\n\t\t\t\t\t\t\"$map\" : 
{\n\t\t\t\t\t\t\t\"input\" : \"$Proposal\",\n\t\t\t\t\t\t\t\"in\" : {\n\t\t\t\t\t\t\t\t\"$mergeObjects\" : [\n\t\t\t\t\t\t\t\t\t\"$$this\",\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"ProposalTaxes\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$map\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"input\" : \"$$this.ProposalTaxes\",\n\t\t\t\t\t\t\t\t\t\t\t\t\"in\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$mergeObjects\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"$$this\",\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"IOF\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"$toInt\" : \"$$this.IOF\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t],\n\t\t\"multi\" : true,\n\t\t\"upsert\" : false\n\t}\n})\nWriteError@src/mongo/shell/bulk_api.js:458:48\nmergeBatchResults@src/mongo/shell/bulk_api.js:855:49\nexecuteBatch@src/mongo/shell/bulk_api.js:919:13\nBulk/this.execute@src/mongo/shell/bulk_api.js:1163:21\nDBCollection.prototype.updateMany@src/mongo/shell/crud_api.js:690:17\n@(shell):4:3\nDBQuery.prototype.forEach@src/mongo/shell/query.js:494:9\n@(shell):1:1\n", "text": "can u help me? I’m trying to change double to decimal inside arrays, but for me it doesnt work :image1012×648 33.5 KB", "username": "Thais_Caldoncelli_Nogueira" }, { "code": "", "text": "Hello @Thais_Caldoncelli_Nogueira, Welcome to the MongoDB community forum,I would suggest you ask this in a new topic and mention the current topic link if this is related.", "username": "turivishal" } ]
Update the DataType for a field inside the Nested Array
2022-10-21T13:35:49.467Z
Update the DataType for a field inside the Nested Array
1,889
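The "input to $map must be an array not object" error in the last reply suggests that, in that collection, Proposal is an embedded document rather than an array, so the outer $map has nothing to iterate. A hedged variant of the same pipeline for that shape — field names and the dbTenant handle are taken from the reply, everything else is an assumption, and $toDecimal is used here because the stated goal was decimal:

// Hedged sketch for the case where Proposal is an embedded document
// and only ProposalTaxes is an array (shape inferred from the error, not confirmed).
dbTenant.getCollection('LoanRequest').updateMany(
  { 'Proposal.ProposalTaxes.IOF': { $type: 'double' } },
  [{
    $set: {
      'Proposal.ProposalTaxes': {
        $map: {
          input: '$Proposal.ProposalTaxes',
          in: { $mergeObjects: ['$$this', { IOF: { $toDecimal: '$$this.IOF' } }] }
        }
      }
    }
  }]
);

If the collection mixes both shapes (Proposal as an array in some documents and as an object in others), a $cond on $isArray would be needed to branch between the two forms.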
https://www.mongodb.com/…e492f76df97e.png
[ "compass" ]
[ { "code": "", "text": "I am having with MongoDB Compass installation. I had some older version installed on my Windows 10 laptop(64 bit). Compass prompted me that I can upgrade to a new version. I gave it the permission after which it stopped working(it wouldn’t open, it would just show a task in the Task Manager).I uninstalled it and tried installing the latest version 1.40. That also didn’t help. This time it wouldn’t even complete the installation. It would just show the installation window for a few seconds and close itself. However, it did reflect in Programs and Features.I tried installing 1.36. This time the installation progressed for a few minutes and gave me the following error.The issue is the path mentioned in the error, does not exist in the MongoDB Compass folder.\nPlease help .Thanks in advance\n", "username": "Amit_Rathod" }, { "code": "", "text": "Hi AmitWhen you say you tried 1.40, do you mean 1.40.0 or 1.40.2? Because 1.40.2 should have a fix for this specific error. It would be interesting to know if that works for you or not.", "username": "Le_Roux_Bodenstein" }, { "code": "", "text": "Actually, I have been trying to fix this, since last week. I tried 1.40.0 and also 1.40.2.Yes, It has been fixed in 1.40.2. Thanks.", "username": "Amit_Rathod" } ]
MongoDB Compass installation issue
2023-10-01T19:20:46.591Z
MongoDB Compass installation issue
313
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "How to update the email used by a user in Email/Password authentication?In my app I have a settings panel which allows the user to update his/her username, email address and password. I am able to change the email address stored in custom user data, but not the one used for authentication.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "I am also interested in this. If the user changes his email or simply wants to transfer his account to a new domain, changing the login email is necessary.", "username": "Marco_Ancona" }, { "code": "", "text": "Hi @Marco_Ancona and @Jean-Baptiste_Beau,This option is currently cannot be done via email/password Auth in Realm, the only way with this provider is create a new user and associate to user data.Having said that, you may consider using custom function authentication and store encrypted passwords in the Atlas deployment:\nhttps://docs.mongodb.com/realm/authentication/custom-functionHere you are in full control of the user credentials as long as the process return the same unique identifierEncryption can be done via util.cryptoThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "firebase.auth()\n.signInWithEmailAndPassword('[email protected]', 'super-secret-password')\n.then(function(userCredential) {\n userCredential.user.updateEmail('[email protected]')\n})\nutil.crypto", "text": "So we would need to implement our own email/password Auth via function authentication, just to enable a user to change their email address?! This seems to be a very foundational feature for any email authentication system. I expected something like the following to be available out of the box for Realm Auth Email/Password authentication:Since this is apparently still not available, I am considering to implement function authentication and am looking for a simple example in the documentation or elsewhere. util.crypto does provide SHA256 which could be used for password hashing.The example in the documentation just shows a simple username verification without password. Could you point to examples that also include password verification and possibly email verification?Has anyone in this thread actually implemented something like this using function authentication or would it be preferable to just use JWT authentication with a 3rd party provider like Auth0, which is more fully featured?", "username": "Christian_Wagner" }, { "code": "", "text": "Hi @Christian_Wagner ,I have used auth functions in different ways.In your case you can store the username and hashed password on this system auth collection allowing just the function of to access it.Hashing the password can be done in server or client side as you wish the idea is the query do an and between the username and password.Updating a user is done by just having a flow to update that collection.You can potentially use username and password regular auth , register the user under a new email and link the old and new identities.JWT is also a recommended provider and ita up to you if you think to go that path. 
We just suggest what are the alternatives…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "// the user data has a \"authId\" field to attach the Realm User ID to it, also the email too;\n// we need a on Login trigger that gets the user data by email and updates it with the new authId;\n\nexport async function changeEmail(email, password, newEmail) {\n isLoading.value = true\n let status = 'fail'\n let code = 0\n let error = ''\n email = String(email).toLowerCase()\n\n try {\n // re-login\n const credentials = Realm.Credentials.emailPassword(email, password)\n const user = await app.logIn(credentials)\n\n // create new account\n // confirmation email will be sent\n await app.emailPasswordAuth.registerUser(newEmail, password)\n\n // custom function to change user data to new email\n // the user data will have the new email, and still the old authId\n await user.functions.userChangeEmail(newEmail)\n // note this function must throw an Error if needed so this try block breaks\n\n // delete old account\n // WARNING if you have a Realm Trigger on user delete make sure it \n // doesn't mess with the user data. In my case the trigger checks for the\n // user email, so as it was just changed above, the user wouldn't be found, \n // and the code would exit without changing anything.\n await app.deleteUser(user)\n\n // clean local user data\n localStorage.clear()\n sessionStorage.clear()\n user.value = false\n userData.value = {}\n\n // IMPORTANT, this is our current state:\n // - the user is logged out;\n // - they will have to confirm their new email before login in;\n // - the user data is still attached to the old authId;\n // - we do have a login trigger that gets the user data by email and updates it\n // with the new authId, that's basically why this works;\n\n // at this point it was successful\n // notify user they need to confirm their email and re-login\n status = 'success'\n\n // SUMMARY\n // This is not the best experience, because the user will have to login again\n // and click the confirmation email before doing so. Also the confirmation email \n // may well have a message to welcome the user as it were new, which it isn't.\n }\n catch (err) {\n console.log('Failed to change email', err, err.statusCode, err.error, err.errorCode, err.message)\n\n code = +err.statusCode\n error = err.error\n }\n\n isLoading.value = false\n return { status, code, error }\n}\n", "text": "Hello @Pavel_Duchovny and MongoDB team,I think this shouldn’t be overlooked, to change a user email on the fly would be the best experience for the user.My understanding is that their Auth ID would retain the same, and we would need to listen to a trigger “on user change email” on our Realm app to change the user data’s email.Anyway, meanwhile a solution I found is to have sequence of 4 actions:I reckon this is not the best experience because the user will have to login again and click the confirmation email before doing so. Also the confirmation email may well have a message to welcome the user as it were new, which it isn’t.Here is my personal code in case helps anyone:If that is a bad practice, or if there are improvements to it, please let me know.", "username": "andrefelipe" }, { "code": "", "text": "Chiming in with my two cents: this is a feature that needs to be implemented!Additionally, if using email/password auth, the Mongo backend should really be treating the email as case insensitive. 
Via an oversight on my part - many of my first users created accounts with a capital first letter in their email because the iOS keyboard was trying to capitalize the first letter of the text input. I now have no recourse to adjust their email addresses, OR to lower case all input emails from the front end. I’m stuck.", "username": "Joe_Stella" }, { "code": "", "text": "I would also like to see this functionality implemented as a standard feature.", "username": "Dominik_Hait" }, { "code": "", "text": "Me too.I have a valid use-case, where some of our users create their account with their university email at first, then, after the years when they end their studies, then need to change over to another email.Since 2008, a reference platform for typographic and graphic posters.", "username": "andrefelipe" }, { "code": "", "text": "I also think this is a very needed feature. Sometimes users honestly mispell their username on signup. I know plenty of us have seen it!", "username": "Lukas_deConantseszn1" } ]
Update user email (Email/password authentication)
2020-09-11T11:55:46.864Z
Update user email (Email/password authentication)
8,726
null
[ "dot-net" ]
[ { "code": "public class SomeClass : ISomeInterface\n{\n\n}\n\npublic interface ISomeInterface\n{\n\n}\n\npublic class SomeClassSerializer : IBsonSerializer\n{\n public Type ValueType => typeof(SomeClass);\n\n public object Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n {\n return new SomeClass();\n }\n\n public void Serialize(BsonSerializationContext context, BsonSerializationArgs args, object value)\n {\n context.Writer.WriteStartDocument();\n context.Writer.WriteEndDocument();\n }\n}\n\n// The discriminator changes with or without the following line\nBsonSerializer.RegisterSerializer(typeof(SomeClass), new SomeClassSerializer());\n\nISomeInterface item = new SomeClass();\nBsonDocument doc = item.ToBsonDocument();\n\nConsole.WriteLine(doc[\"_t\"]);\nScalarDiscriminatorConventionHierarchicalDiscriminatorConventionBsonSerializer.RegisterDiscriminatorConvention(typeof(ISomeInterface), new ScalarDiscriminatorConvention(\"_t\"));\n", "text": "Hello mongo community,I have a question about some behavior I am seeing. Basically, the act of registering a custom BsonSerializer is changing discriminator behavior.For example, say I have this code:The behavior I am seeing is that if I include the line that registers the custom serializer, the discriminator looks like:\n“Some.Namespace.Path.SomeClass, Some.Namespace.Path”If I remove the line that registers the custom serializer, the discriminator looks like:\n“SomeClass”The mongo docs state thatI believe what is happening based on poking around in the C# driver code on github is that when a custom serializer is registered, the ObjectDiscriminatorConvention ends up being used instead of the default Scalar or Hierarchical, and this is what is setting the disriminator to use the full namespace path.My question is - is this intentional behavior by the mongo driver? I would like for the discriminator to remain as only “SomeClass”, as otherwise it makes the code more fragile (Ex. SomeClass may move namespaces/assembly in the future.), and I have been unable to find any documentation that states whether this is intentional or not.\nWhat is the mongo recommended way for me to preserve the default “SomeClass” discriminator value? 
Am I responsible for adding a line of code to force ISomeInterface to still use the ScalarDiscriminatorConvention?Or does this warrant raising a jira issue?", "username": "Mark_Fisher1" }, { "code": "public class SomeClass : ISomeInterface\n{\n\n}\n\npublic interface ISomeInterface\n{\n\n}\n\npublic class SomeClassSerializer : IBsonSerializer\n{\n public Type ValueType => typeof(SomeClass);\n\n public object Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n {\n return new SomeClass();\n }\n\n public void Serialize(BsonSerializationContext context, BsonSerializationArgs args, object value)\n {\n context.Writer.WriteStartDocument();\n \n var actualType = value.GetType();\n if (args.NominalType != actualType) {\n context.Writer.WriteName(TagForType); \n context.Writer.WriteString(actualType.Name);\n context.Writer.WriteName(TagForValue); \n context.Writer.WriteStartDocument();\n }\n\n // serialize \"value\" here...\n\n if (args.NominalType != actualType) {\n context.Writer.WriteEndDocument();\n }\n context.Writer.WriteEndDocument();\n }\n}\n\nBsonSerializer.RegisterSerializer(typeof(SomeClass), new SomeClassSerializer());\n\nISomeInterface item = new SomeClass();\nBsonDocument doc = item.ToBsonDocument();\n\nConsole.WriteLine(doc[\"_t\"]);\n", "text": "One update here - by stepping through the MongoDB.Bson code I found that I can make my custom serializer implement IBsonPolymorphicSerializer, which allows me to take control of assigning the discriminator to exacly what I want it to be inside the custom serializer. The example code from my original post would become:Is this the correct thing to do? I’m struggling to find any documentation related to IBsonPolymorphicSerializer in the serialization tutorial\nhttps://mongodb.github.io/mongo-csharp-driver/1.11/serialization/#write-a-custom-serializer", "username": "Mark_Fisher1" } ]
Registering a custom bson serializer changes the discriminator convention
2023-10-02T17:42:23.280Z
Registering a custom bson serializer changes the discriminator convention
300
null
[ "queries" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"6504d7f74b847364fc8d192b\"\n },\n \"id\": \"1001\",\n \"name\": \"News management\",\n \"departments\": [\n {\n \"id\": \"1001-001\",\n \"name\": \"Finance\",\n \"repo\": [\n {\n \"name\": \"Udemy\",\n \"url\": \"https://www.udemy.com/\",\n \"username\": \"[email protected]\",\n \"password\": \"udemy123\"\n },\n {\n \"name\": \"Google\",\n \"url\": \"https://www.google.com/\",\n \"username\": \"user123\",\n \"password\": \"googlepass\"\n },\n {\n \"name\": \"Amazon\",\n \"url\": \"https://www.amazon.com/\",\n \"username\": \"shopper99\",\n \"password\": \"amaz0n!\"\n },\n {\n \"name\": \"Netflix\",\n \"url\": \"https://www.netflix.com/\",\n \"username\": \"bingewatcher\",\n \"password\": \"netflixislife\"\n },\n {\n \"name\": \"Twitter\",\n \"url\": \"https://twitter.com/\",\n \"username\": \"tweetmaster\",\n \"password\": \"tweettweet\"\n }\n ],\n \"employees\": [\n \"ObjectId('65057518f0a44c810d387932')\"\n ]\n },\n {\n \"id\": \"1001-002\",\n \"name\": \"IT\",\n \"repo\": [\n {\n \"name\": \"Instagram\",\n \"url\": \"https://www.instagram.com/\",\n \"username\": \"instaaddict\",\n \"password\": \"insta1234\"\n },\n {\n \"name\": \"LinkedIn\",\n \"url\": \"https://www.linkedin.com/\",\n \"username\": \"connectme\",\n \"password\": \"linkedinpass\"\n },\n {\n \"name\": \"Facebook\",\n \"url\": \"https://www.facebook.com/\",\n \"username\": \"fbuser\",\n \"password\": \"fb12345\"\n },\n {\n \"name\": \"YouTube\",\n \"url\": \"https://www.youtube.com/\",\n \"username\": \"youtuber\",\n \"password\": \"youtube101\"\n }\n ],\n \"employees\": []\n },\n {\n \"id\": \"1001-003\",\n \"name\": \"Writing\",\n \"repo\": [\n {\n \"name\": \"Spotify\",\n \"url\": \"https://www.spotify.com/\",\n \"username\": \"musiclover\",\n \"password\": \"spotifytunes\"\n },\n {\n \"name\": \"Microsoft\",\n \"url\": \"https://www.microsoft.com/\",\n \"username\": \"msuser\",\n \"password\": \"microsoftpass\"\n },\n {\n \"name\": \"Apple\",\n \"url\": \"https://www.apple.com/\",\n \"username\": \"applefan\",\n \"password\": \"apple123\"\n }\n ],\n \"employees\": []\n } \n \n ],\n \"employees\": []\n}\n", "text": "Hi. I have the following data.I would like to push data to a repo array for a specific department, say “IT” with id “1001-002”. I am struggling to find a solution for this issue which seems to be very straightforward.My issue is very similar to this issue Pushing elements to nested array - Working with Data - MongoDB Developer Community Forums. However our data structure is slightly different, therefore the solution does not really work for my application.Your assistance will be greatly appreciated.", "username": "Karabo_Molemane" }, { "code": "", "text": "You should be able to use arrayfilters for this", "username": "John_Sewell" }, { "code": "\n const orgUnitId = req.body.ouId;\n const deptId = req.body.deptId;\n const name = req.body.name;\n const url = req.body.url;\n const username = req.body.username;\n const password = req.body.password;\n\n const result = await OrganisationalUnit.updateOne(\n {\n id: orgUnitId,\n },\n {\n $push: {\n \"departments.$[department].repo\": {\n name: name,\n url: url,\n username: username,\n password: password,\n },\n },\n },\n { arrayFilters: [{ \"department.id\": { $eq: deptId } }] }\n );\n\n", "text": "Thank you John,You are a lifesaver. The manual did help. 
Again thank you.Below is the code sample for those with a similar problem to mine.", "username": "Karabo_Molemane" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Updating nested array of objects within an array of objects
2023-09-30T13:01:23.605Z
Updating nested array of objects within an array of objects
267
null
[ "kolkata-mug" ]
[ { "code": "", "text": "Dive deep into the fusion of technology as we explore the intersections of MongoDB with some of the most cutting-edge technological advancements of our time. This event will elucidate how MongoDB seamlessly integrates with AWS to achieve scalable, efficient, and robust solutions. Discover the limitless possibilities when Augmented Reality taps into the power of MongoDB for dynamic data-driven experiences. Further, enrich your web applications by supercharging MongoDB using React Server Components. Join us for a day of learning, innovation, and hands-on sessions with industry experts!Register on meetup at : Empowering Modern Applications: Using MongoDB with AWS, AR, and React Components, Sat, Oct 7, 2023, 10:30 AM | MeetupEvent Type: In-Person\nLocation: Blob Studio , 7th floor, Yamuna Apartment, 86 Golaghata Road, Dakshindari, Ultadanga, West Bengal 700048", "username": "Sumanta_Mukhopadhyay" }, { "code": "", "text": "do you have zoom link or any other where we can attend this event online", "username": "Naga_Raju" }, { "code": "", "text": "It is in person event I will try but no commitments.", "username": "Sumanta_Mukhopadhyay" }, { "code": "", "text": "My only question - how come India gets the best events? Unfortunately, I live in North Carolina but would love to attend.", "username": "Richard_Krueger2" }, { "code": "", "text": "Hey @Richard_Krueger2,Welcome to the MongoDB Community Forums! Unfortunately, we don’t have any physical user group yet in North Carolina but we also do virtual user group events. You can join these here: Regional Virtual User GroupsOur physical user group events are run and organized by enthusiastic community members like @Sumanta_Mukhopadhyay If this is something that you feel you’d like to do, you too can start a user group in your city. You can read more about it here: Start a MongoDB User GroupRegards,\nSatyam", "username": "Satyam" } ]
Empowering Modern Applications: Leveraging MongoDB with AWS, Augmented Reality, and React Components
2023-09-25T09:52:21.911Z
Empowering Modern Applications: Leveraging MongoDB with AWS, Augmented Reality, and React Components
1,555
null
[]
[ { "code": "", "text": "Hello,I had a question regarding the eligible free certifications included in the Student pack: as the DBA course is withdrawing from the online university, will the newer “self-managed database admin” path be included in place ?\nOr is there no plan to replace the DBA course on the pack for now ?Thank you !", "username": "vincent38" }, { "code": "", "text": "Hi Vincent, welcome to the forums! Great question. Yes, the new Self-Managed DBA learning will be taking the place of the old DBA path.", "username": "Aiyana_McConnell" }, { "code": "", "text": "Wonderful !\nThank you for your reply ", "username": "vincent38" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Question regarding the withdrawal of the DBA path
2023-10-01T14:30:34.506Z
Question regarding the withdrawal of the DBA path
294
null
[ "atlas-cluster", "data-api", "atlas-cli" ]
[ { "code": "", "text": "Hey,is there any way to enable Data API via atlas cli or curl ?\nI want to automate the Atlas cluster creation process without touching UIArek", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "Asking the same question again and again (third time in this case) is not the best way to get a faster response.It slows down everyone because we have to read again and again the same.If someone knows the answer, you will be answered when they connect. Some do not connect everyday so showing a little bit of patience is welcome.", "username": "steevej" }, { "code": "", "text": "I asked in different groups. is it forbidden? maybe someone checks only specific groups", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "Most people that could answer probably read all the groups.Just in case I moved your post to the MongoDB Atlas group and added a few tags.", "username": "steevej" }, { "code": "", "text": "It is a topic for discussion. I only read selected groups, this is why I posted in a few different groups to increase my chances of an answer", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "anyway, I’m almost sure there is no way to enable Data API via cli or curl. I was also checking realm cli but no luck", "username": "Arkadiusz_Borucki" }, { "code": "", "text": "@Arkadiusz_Borucki which parts of the process are you hoping to get done via Data API?\nThe pure Atlas Cluster creation can be done via the Atlas CLI. If you’d like to stick to the Terminal for accessing the cluster, you can also use MongoDB Shell.", "username": "Jakub_Lazinski" } ]
Enable Data API via atlas cli or curl
2023-01-07T13:47:32.819Z
Enable Data API via atlas cli or curl
1,374
null
[ "aggregation", "node-js" ]
[ { "code": "{\n\t\t\"_id\": \"6512f6fc6120ccb5a1112aef\",\n\t\t\"postBody\": \"This is a post\",\n\t\t\"postHeading\": \"Hello \",\n\t\t\"views\": 0,\n\t\t\"images\": [],\n\t\t\"likes\": [],\n\t\t\"mode\": \"normal\",\n\t\t\"comments\": [\n\t\t\t{\n\t\t\t\t\"comment\": \"This is 123\",\n\t\t\t\t\"userId\": \"651129da4ab8bce719dc9b78\",\n\t\t\t\t\"replies\": [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"comment\": \"This is 123\",\n\t\t\t\t\t\t\"userId\": \"651129da4ab8bce719dc9b78\",\n\t\t\t\t\t\t\"_id\": \"6517c051e3c8018bfd19c9f3\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"comment\": \"This is 123\",\n\t\t\t\t\t\t\"userId\": \"651129da4ab8bce719dc9b78\",\n\t\t\t\t\t\t\"_id\": \"6517c059e3c8018bfd19c9f9\"\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"comment\": \"This is 123\",\n\t\t\t\t\t\t\"userId\": \"651129da4ab8bce719dc9b78\",\n\t\t\t\t\t\t\"_id\": \"6517c059e3c8018bfd19ca01\"\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\t\"_id\": \"6517bf501185a06202599943\"\n\t\t\t}\n\t\t],\n\t\t\"createdAt\": \"2023-09-26T15:21:32.994Z\",\n\t\t\"updatedAt\": \"2023-09-30T06:29:46.239Z\",\n\t\t\"user\": {\n\t\t\t\"_id\": \"651129da4ab8bce719dc9b78\",\n\t\t\t\"username\": \"hoanghieu\",\n\t\t\t\"email\": \"[email protected]\"\n\t\t},\n\t},\nconst aggregatePost = (\n await this.postModel.aggregate([\n {\n $match: { _id: postId },\n },\n {\n $addFields: {\n hasComment: {\n $size: {\n $filter: {\n input: '$comments',\n as: 'comment',\n cond: { $eq: ['$$comment._id', commentId] },\n },\n },\n },\n },\n },\n {\n $addFields: {\n comments: {\n $cond: {\n if: { $eq: ['$hasComment', 0] },\n then: {\n $concatArrays: [\n '$comments',\n [\n {\n userId: userId,\n comment: createCommentDto.commentBody,\n _id: createdCommentId,\n replies: [], // Initialize replies as an empty array for top-level comments\n },\n ],\n ],\n },\n else: {\n $map: {\n input: '$comments',\n as: 'comment',\n in: {\n $cond: {\n if: { $eq: ['$$comment._id', commentId] }, // Replace '$someCommentId' with the actual comment ID\n then: {\n // Add a reply to the matching comment\n $mergeObjects: [\n '$$comment',\n {\n replies: {\n $concatArrays: [\n '$$comment.replies',\n [\n {\n userId: userId,\n comment: createCommentDto.commentBody,\n _id: createdCommentId,\n },\n ],\n ],\n },\n },\n ],\n },\n else: '$$comment', // Keep the original comment if it doesn't match\n },\n },\n },\n },\n },\n },\n },\n },\n ])\n )[0];\nreturn this.postModel.findByIdAndUpdate(postId, post);\n", "text": "I have a project going on, and I keep thinking about using aggregate function to speed up the codes because mongodb still faster than javascript right ?Let’s say for 1 example:so this is my post in the posts table, Let’s say I want to add one comment in one comment’s replies, If I do javascript code only, I will have to use 1-2 for loops for it to work, now if I use the aggregate function to do so like thisif I do update like this, will codes run faster ? and is this worth it ? I think using aggregate function is simply to complicated, but if the speed wise is huge then I will continue using it. thank you !", "username": "BrangTo_ggez" }, { "code": "", "text": "I think you are in the best position to answer that. You have the code, you have the data. you simply test and time it and you will know.but as general rule, doing things on the server is faster", "username": "steevej" }, { "code": "", "text": "you mean in Nodejs or in mongodb will be faster ? the term “server” is kinda vague to be honest. 
But thanks, I guess I have to do it on my own then.", "username": "BrangTo_ggez" }, { "code": "", "text": "on the database server", "username": "steevej" } ]
Should I use aggregate function to change data like update, delete?
2023-09-30T06:56:39.855Z
Should I use aggregate function to change data like update, delete?
233
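For the nested-comment update discussed in the thread above, the work can stay on the database server, without the aggregate-then-findByIdAndUpdate round trip, by using a filtered positional update. A minimal sketch with the Node.js driver; the database/collection names and the `postId`, `commentId` and reply payload are placeholders, and error handling is omitted:

```javascript
// Sketch only: "blog"/"posts" and the ids are placeholders.
const { MongoClient } = require("mongodb");

async function addReply(uri, postId, commentId, reply) {
  const client = new MongoClient(uri);
  try {
    const posts = client.db("blog").collection("posts");
    // Push into the replies array of the one comment whose _id matches,
    // selected with an array filter, so the whole update runs server-side.
    return await posts.updateOne(
      { _id: postId },
      { $push: { "comments.$[c].replies": reply } },
      { arrayFilters: [{ "c._id": commentId }] }
    );
  } finally {
    await client.close();
  }
}
```

Whether this beats the aggregation-pipeline version is still worth timing against real data, as the replies suggest.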
null
[ "node-js", "mongoose-odm", "serverless" ]
[ { "code": "", "text": "Hi we are looking into migrating our current mongo logic which uses mongoose schemas and the driver to conduct CRUD functions. We run this on serverless functions with many microservices that connect to the db so we have been hitting many issues with connection pool max limits.We have decided to migrate to the Data API because it would resolve our connection issues. One issue we dont understand or can find documentation on is how do we leverage mongoose schema with the data api?", "username": "Shawn_Varughese" }, { "code": "", "text": "did you get a solution to this?", "username": "Abhinay_Pandey" } ]
Data API and mongoose schemas
2023-02-11T02:04:45.700Z
Data API and mongoose schemas
1,268
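The Data API itself knows nothing about mongoose schemas, so one workable pattern for the question above is to keep the mongoose schema purely as a client-side validator and send the plain document over HTTPS. A rough sketch, assuming Node 18+ (global fetch); the endpoint URL, API key and data source/database/collection names are placeholders to be taken from your own Atlas app settings:

```javascript
// Sketch only: DATA_API_URL, DATA_API_KEY and the names below are placeholders.
const mongoose = require("mongoose");

const userSchema = new mongoose.Schema({
  email: { type: String, required: true },
  name: String,
});
const User = mongoose.model("User", userSchema);

async function insertViaDataApi(payload) {
  // Validate locally with mongoose; no database connection is needed for this.
  const doc = new User(payload);
  await doc.validate(); // throws if the payload violates the schema

  const res = await fetch(`${process.env.DATA_API_URL}/action/insertOne`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.DATA_API_KEY,
    },
    body: JSON.stringify({
      dataSource: "Cluster0",
      database: "app",
      collection: "users",
      document: doc.toObject(),
    }),
  });
  return res.json();
}
```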
null
[ "aggregation", "queries", "data-modeling" ]
[ { "code": "{\n \"_id\": ObjectId('65110ce0d7c21aaca1d1sf33'),\n bookingId: \"1234\",\n pgId: ObjectId('65110ce0d7c21aaca1d1db33'),\n scheduledDate: ISODate(\"2023-09-16T05:30:00.000Z\")\n}\ndb.bookingOrders.find( { pgId: ObjectId('65110ce0d7c21aaca1d1db33') } ).sort({ scheduledDate : -1 }).explain(\"executionStats\")\n\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'foody.bookingOrders',\n indexFilterSet: false,\n parsedQuery: { pgId: { '$eq': ObjectId(\"65110ce0d7c21aaca1d1db33\") } },\n queryHash: '0D151AAD',\n planCacheKey: 'B4247D13',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'SORT',\n sortPattern: { scheduledDate: -1 },\n memLimit: 104857600,\n type: 'simple',\n inputStage: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { pgId: 1 },\n indexName: 'pgId_1',\n isMultiKey: false,\n multiKeyPaths: { pgId: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n pgId: [\n \"[ObjectId('65110ce0d7c21aaca1d1db33'), ObjectId('65110ce0d7c21aaca1d1db33')]\"\n ]\n }\n }\n }\n },\n rejectedPlans: []\n },\nexecutionStats: {\n executionSuccess: true,\n nReturned: 8910,\n executionTimeMillis: 46,\n totalKeysExamined: 8910,\n totalDocsExamined: 8910,\n executionStages: {\n stage: 'SORT',\n nReturned: 8910,\n executionTimeMillisEstimate: 9,\n works: 17822,\n advanced: 8910,\n needTime: 8911,\n needYield: 0,\n saveState: 17,\n restoreState: 17,\n isEOF: 1,\n sortPattern: { scheduledDate: -1 },\n memLimit: 104857600,\n type: 'simple',\n totalDataSizeSorted: 4535264,\n usedDisk: false,\n spills: 0,\n inputStage: {\n stage: 'FETCH',\n nReturned: 8910,\n executionTimeMillisEstimate: 9,\n works: 8911,\n advanced: 8910,\n needTime: 0,\n needYield: 0,\n saveState: 17,\n restoreState: 17,\n isEOF: 1,\n docsExamined: 8910,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 8910,\n executionTimeMillisEstimate: 0,\n works: 8911,\n advanced: 8910,\n needTime: 0,\n needYield: 0,\n saveState: 17,\n restoreState: 17,\n isEOF: 1,\n keyPattern: { pgId: 1 },\n indexName: 'pgId_1',\n isMultiKey: false,\n multiKeyPaths: { pgId: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n pgId: [\n \"[ObjectId('65110ce0d7c21aaca1d1db33'), ObjectId('65110ce0d7c21aaca1d1db33')]\"\n ]\n },\n keysExamined: 8910,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n }\n },\ndb.bookingOrders.find( { pgId: ObjectId('65110ce0d7c21aaca1d1db33') } ).sort({ scheduledDate : -1 }).explain(\"executionStats\")\n\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'foody.bookingOrders',\n indexFilterSet: false,\n parsedQuery: { pgId: { '$eq': ObjectId(\"65110ce0d7c21aaca1d1db33\") } },\n queryHash: '0D151AAD',\n planCacheKey: 'B4247D13',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'FETCH',\n filter: { pgId: { '$eq': ObjectId(\"65110ce0d7c21aaca1d1db33\") } },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { scheduledDate: -1 },\n indexName: 'scheduledDate_-1',\n isMultiKey: false,\n multiKeyPaths: { scheduledDate: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { scheduledDate: [ '[MaxKey, MinKey]' ] }\n }\n },\n rejectedPlans: [\n {\n stage: 'SORT',\n sortPattern: { scheduledDate: -1 
},\n memLimit: 104857600,\n type: 'simple',\n inputStage: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { pgId: 1 },\n indexName: 'pgId_1',\n isMultiKey: false,\n multiKeyPaths: { pgId: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n pgId: [\n \"[ObjectId('65110ce0d7c21aaca1d1db33'), ObjectId('65110ce0d7c21aaca1d1db33')]\"\n ]\n }\n }\n }\n }\n ]\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 8910,\n executionTimeMillis: 418,\n totalKeysExamined: 80000,\n totalDocsExamined: 80000,\n executionStages: {\n stage: 'FETCH',\n filter: { pgId: { '$eq': ObjectId(\"65110ce0d7c21aaca1d1db33\") } },\n nReturned: 8910,\n executionTimeMillisEstimate: 135,\n works: 80001,\n advanced: 8910,\n needTime: 71090,\n needYield: 0,\n saveState: 80,\n restoreState: 80,\n isEOF: 1,\n docsExamined: 80000,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 80000,\n executionTimeMillisEstimate: 55,\n works: 80001,\n advanced: 80000,\n needTime: 0,\n needYield: 0,\n saveState: 80,\n restoreState: 80,\n isEOF: 1,\n keyPattern: { scheduledDate: -1 },\n indexName: 'scheduledDate_-1',\n isMultiKey: false,\n multiKeyPaths: { scheduledDate: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { scheduledDate: [ '[MaxKey, MinKey]' ] },\n keysExamined: 80000,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n },\n command: {\n find: 'bookingOrders',\n filter: { pgId: ObjectId(\"65110ce0d7c21aaca1d1db33\") },\n sort: { scheduledDate: -1 },\n '$db': 'foody'\n },\n serverInfo: {\n host: 'DESKTOP-O4GVMQ6',\n port: 27017,\n version: '6.0.8',\n gitVersion: '3d84c0dd4e5d99be0d69003652313e7eaf4cdd74'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1\n}\n", "text": "HiI have small schema like thisI have index for\na. pgIdMy collection could have huge amount of documents.What is the best way to do this ?\nI have tried two ways\na. SORT stage being used in RAM memoryfind( { pgId: ObjectId(‘65110ce0d7c21aaca1d1db33’) } ).sort({ scheduledDate : -1 }).explain(“executionStats”)Here is the outputb. No SORT operation in RAM by using sort on index field. In this case scheduledDate ( For this i created index for scheduledDate)find( { pgId: ObjectId(‘65110ce0d7c21aaca1d1db33’) } ).sort({ scheduledDate : -1 }).explain(“executionStats”)Here is the outputEven though approach “a” has given less time, in future as data grows i am afraid SORT being done in RAM is not that great idea.Can someone explain why approach “b” is taking more time, can we not expect do sort only for find based result ? Meaning sort only subset of doc matching pgId ?Please suggest", "username": "Manjunath_k_s" }, { "code": "", "text": "Hello @Manjunath_k_s,You can use the compound index,Also refer to the ESR rule for better performance,", "username": "turivishal" }, { "code": "createIndex({ pgId : 1, scheduledDate : -1 })\n.find( { pgId: ObjectId('65110ce0d7c21aaca1d1db33') } ).explain(\"executionStats\")\n\nID Time in ms dataSetSize \n1. 
~21 ms 80K\n2. ~110 ms 190K\n{\n \"explainVersion\": \"1\",\n \"queryPlanner\": {\n \"namespace\": \"foody.bookingOrders\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"pgId\": {\n \"$eq\": \"65110ce0efa22fd55988e542\"\n }\n },\n \"queryHash\": \"767206C5\",\n \"planCacheKey\": \"7E0A1DA7\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"pgId\": 1,\n \"scheduledDate\": -1\n },\n \"indexName\": \"pgId_1_scheduledDate_-1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"pgId\": [],\n \"scheduledDate\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"pgId\": [\n \"[ObjectId('65110ce0efa22fd55988e542'), ObjectId('65110ce0efa22fd55988e542')]\"\n ],\n \"scheduledDate\": [\"[MaxKey, MinKey]\"]\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 21239,\n \"executionTimeMillis\": 115,\n \"totalKeysExamined\": 21239,\n \"totalDocsExamined\": 21239,\n \"executionStages\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 21239,\n \"executionTimeMillisEstimate\": 34,\n \"works\": 21240,\n \"advanced\": 21239,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 21,\n \"restoreState\": 21,\n \"isEOF\": 1,\n \"docsExamined\": 21239,\n \"alreadyHasObj\": 0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 21239,\n \"executionTimeMillisEstimate\": 7,\n \"works\": 21240,\n \"advanced\": 21239,\n \"needTime\": 0,\n \"needYield\": 0,\n \"saveState\": 21,\n \"restoreState\": 21,\n \"isEOF\": 1,\n \"keyPattern\": {\n \"pgId\": 1,\n \"scheduledDate\": -1\n },\n \"indexName\": \"pgId_1_scheduledDate_-1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"pgId\": [],\n \"scheduledDate\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"pgId\": [\n \"[ObjectId('65110ce0efa22fd55988e542'), ObjectId('65110ce0efa22fd55988e542')]\"\n ],\n \"scheduledDate\": [\"[MaxKey, MinKey]\"]\n },\n \"keysExamined\": 21239,\n \"seeks\": 1,\n \"dupsTested\": 0,\n \"dupsDropped\": 0\n }\n },\n \"allPlansExecution\": []\n },\n", "text": "Hi Vishal,Thanks for suggesting compound index. I could able to use it.However if we double the data set size and do the same query it is taking more timeSeems “totalKeysExamined” and “totalDocsExamined” directly proportional to time consumption.\nEven though it is indexed, why does it take so much time ? Can we optimize this ? Please help in correcting ?Here is executionStats", "username": "Manjunath_k_s" }, { "code": "", "text": "Hello @Manjunath_k_s,I don’t see a major difference in performance, because if you understand the execution speed does not matter only on indexing but also you need to ensure that your hardware resources (CPU, RAM, storage) are sufficient for your dataset size.", "username": "turivishal" }, { "code": "", "text": "Hi Vishal,I see your point. Just out of curiosity how did you say no major difference ? 
21ms vs 110 ms ?", "username": "Manjunath_k_s" }, { "code": "", "text": "Unless you have a covered index it’ll still need to read the documents to return the data as opposed to just using an in-memory index.\nFrom the looks of things you have fields not in the index that are in the document, hence needing to hit documents as opposed to just the index.", "username": "John_Sewell" }, { "code": "", "text": "Hi @Manjunath_k_s,I see your point. Just out of curiosity how did you say no major difference ? 21ms vs 110 ms ?We can’t measure the execution time by document counts, Second, possibly your first query result comes from the cache but the second query does not, the time will fluctuate if there are other queries that fit/overwrite in memory or cache depending on your hardware configurations. Third possibly the other 80k document sizes (bytes) are higher than the first 80k documents.Can’t predict the difference in milliseconds with the number of documents but if it is taking time in seconds then you have to worry about what is going on.", "username": "turivishal" }, { "code": "db.<col>.createIndex({ \"pgId\" : 1, \"email\" : 1, \"bookingStatus\" : 1, \"bookingType\" : 1, \"scheduledDate\" : -1})\n", "text": "Hi Vishal,Thanks. Since i am on the same topic and schema, i have another question.I have compound index like thisMy question isQuery 1 - find({ email : “[email protected]” })Query 2 - find({pgId: ObjectId(‘65110ce00f00ba098b296d8e’), email : “[email protected]” })If i query with fields pgId, email query execution is done by “IXSCAN” stage. However if i just query by email field alone, query plan indicates scan is done by “COLSCAN”.\nI expected a “IXSCAN” in the second query as well. Why is mongodb looking for collscan when “email” is prefix of compund index ?", "username": "Manjunath_k_s" }, { "code": "pgId", "text": "Hello @Manjunath_k_s,I think you missed to read the compound index documentation property, refer example provided in the documentation about prefixes,The order of the fields in the compound index matters, to support the index it requires the first field pgId in your query.", "username": "turivishal" } ]
Best way to sort by date after index on a field?
2023-09-28T12:25:55.847Z
Best way to sort by date after index on a field?
350
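Putting the replies above together, a compact mongosh check of the compound-index approach might look like the following; the ObjectId value is copied from the thread and should be replaced with your own:

```javascript
// The equality field (pgId) comes before the sort field (scheduledDate),
// following the ESR rule, so the sort can be answered from index order.
db.bookingOrders.createIndex({ pgId: 1, scheduledDate: -1 });

// explain() should now show an IXSCAN feeding FETCH, with no blocking SORT stage.
db.bookingOrders
  .find({ pgId: ObjectId("65110ce0d7c21aaca1d1db33") })
  .sort({ scheduledDate: -1 })
  .explain("executionStats");

// Caveat from the thread: a query on a non-prefix field alone (e.g. email)
// cannot use this index, because pgId is the leading field of the compound key.
```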
null
[ "aggregation", "mongodb-shell" ]
[ { "code": "print('Hello');\nload('./bar.js');\nprint(' World');\nnoSuchFunction();\n~ /usr/bin/mongo --quiet foo ./foo.js\nHello\n World\n2023-09-21T09:56:49.765+0200 E QUERY [thread1] [./bar.js:2:1] ReferenceError: noSuchFunction is not defined\nStack trace:\n@./bar.js:2:1\n@./foo.js:2:1\n----------\n2023-09-21T09:56:49.765+0200 E QUERY [thread1] Error: error loading js file: ./bar.js @./foo.js:2:1\nfailed to load: ./foo.js\n~ /usr/bin/mongosh --quiet foo ./foo.js\nHello\n World\nReferenceError: noSuchFunction is not defined\n--verbose/etc/mongosh.confmongosh:\n showStackTraces: true\n/etc/mongosh.conf", "text": "After having moved from the old mongo commandline to mongosh, I am struggling with the lack of info from debug output when scripts run into problems. For comparison with the old (i.e. 3.6ish) behaviour, consider the two following JS files:foo.js:bar.js:Running this on the legacy mongo 3.6 instance, I get some helpful info:In mongosh (1.10.1, running on MongoDB 6.0.8), debug output is somewhat on the meagre side:I have scanned the documentation, but I didn’t find anything referencing mongosh debug output. The --verbose-Flag doesn’t help either. Once you have to deal with some more complex scripts running generated aggregations, this is becoming a major pain. I am fairly certain that I’ve just overlooked something in the configuration, but I cannot for the life of me figure out what it could be.Is there any way to get the old behaviour back with mongosh, so a stack trace with filenames and line numbers is shown for errors? I already tried with /etc/mongosh.conf - this currently readsIf I add some error to /etc/mongosh.conf, mongosh complains, so I am fairly sure that this config is actually read. Still, no stack traces for me.Kind regardsMarkus", "username": "Markus_Wollny" }, { "code": "config[primary] test> config.get('showStackTraces') \nfalse /// <--- initially false\n[primary] test> noSuchFunction()\nReferenceError: noSuchFunction is not defined\n\n[primary] test> config.set('showStackTraces',true) /// <--- set to true\nSetting \"showStackTraces\" has been changed\n[primary] test> noSuchFunction()\nUncaught:\nReferenceError: noSuchFunction is not defined\n at REPL7:25:9\n at REPL7:39:5\n at REPL7:43:3\n at Script.runInContext (node:vm:141:12)\n at PrettyREPLServer.defaultEval (node:repl:574:29)\n at bound (node:domain:433:15)\n at PrettyREPLServer.runBound (node:domain:444:12)\n at /opt/homebrew/Cellar/mongosh/1.10.1/libexec/lib/node_modules/@mongosh/cli-repl/lib/async-repl.js:147:20\n at /opt/homebrew/Cellar/mongosh/1.10.1/libexec/lib/node_modules/@mongosh/cli-repl/lib/async-repl.js:167:20\n at node:internal/util:364:7\n[primary] test> \n", "text": "Hey @Markus_Wollny,Have you tried using the config API to see if it suits your use case / is useable in the meantime whilst I’m testing the config file setting? Updates made using the config API persist between sessions.I ran the following in my test environment:I’ll try with a config file and see if i’m getting similar behaviour to what you’ve detailed on the post. 
Can you advise what operating system you’ve tried this on as well?Thanks in advance,\nJason", "username": "Jason_Tran" }, { "code": "/etc/mongosh.confshowStackTracesrs0 [direct: primary] test> config.get('showStackTraces') ;\ntrue\nprint(\"showStackTraces is set to \" + config.get('showStackTraces'));# /usr/bin/mongosh --quiet foo ./foo.js\nHello\nshowStackTraces is set to true\n World\nReferenceError: noSuchFunction is not defined\n--quietrs0 [direct: primary] foo> load('./foo.js');\nHello\nshowStackTraces is set to true\n World\nUncaught:\nReferenceError: noSuchFunction is not defined\n at /home/foo/bar.js:26:9\n at async ShellEvaluator.innerEval (/tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:100:375625)\n at async ShellEvaluator.customEval (/tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:100:375764)\n at async MongoshNodeRepl.eval (/tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:22:137768)\n at async PrettyREPLServer.h.eval (/tmp/m/boxednode/mongosh/node-v16.20.1/out/Release/node:22:92116)\nload()", "text": "The behaviour seems to be different depending on how commands are being executed. Using the config-API, I can confirm, that the setting from /etc/mongosh.conf does in fact work, as the initial value for showStackTraces is already true:If I add print(\"showStackTraces is set to \" + config.get('showStackTraces')); to my test-script and run it, I still get no stacktrace in the output:Omitting the --quiet flag just adds the usual server info and startup warning, but alas, no stack trace.If I however run mongosh in interactive mode instead of passing a JS file on the commandline, the stack trace is working fine:So a viable workaround for the time being is to explicitly load() my scripts in an interactive shell, though for testing purposes, getting the stack trace when passing the script as commandline argument would really, really be helpful.Operating system is Debian 11.7, shell is bash.Kind regardsMarkus", "username": "Markus_Wollny" } ]
How can I get debug info when running mongosh script?
2023-09-21T08:34:10.984Z
How can I get debug info when running mongosh script?
443
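Until script mode prints the same detail as the interactive shell, one low-tech workaround (in addition to setting showStackTraces) is to catch and print the stack inside the entry script itself. A sketch, assuming the same foo.js/bar.js layout as in the thread:

```javascript
// foo.js — run as: mongosh --quiet foo ./foo.js
print("Hello");
try {
  load("./bar.js"); // bar.js may throw, e.g. by calling an undefined function
} catch (err) {
  // Print whatever stack information the error carries before failing.
  print(err && err.stack ? err.stack : String(err));
  throw err; // rethrow so mongosh still exits with a failure
}
print(" World");
```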
null
[ "compass", "installation", "upgrading" ]
[ { "code": "", "text": "Hi, updated my version of Compass to 1.40.0 yesterday, on windows 10 pro. Now the GUI simply does not load. I see 3 mongo processes running in task manager, but no GUI. I uninstalled, and even tried reinstalling the older version (1.39.3) but now its just not loading the GUI.VS Code plugin still connects, but I would like to use Compass still, and it was working until I installed the upgrade.", "username": "Matt_Christian" }, { "code": "", "text": "I found the answer in this thread - thanks @Daryl_Anderson . I also found that Compass would not open until I deleted the directory “%APPDATA%\\MongoDB Compass”. Note this worked but deleted all my connections and history etc, but wasnt a major problem for me.", "username": "Matt_Christian" }, { "code": "", "text": "Hi MattIs there any chance you still have your “%APPDATA%\\MongoDB Compass\\AppPreferences\\General.json” file? There might have been something in there that caused an error that wasn’t caught properly.Le Roux", "username": "Le_Roux_Bodenstein" }, { "code": "", "text": "%APPDATA%\\MongoDB CompassSorry @Le_Roux_Bodenstein I got rid of the files completely.", "username": "Matt_Christian" }, { "code": "", "text": "I have the same problem, I’ll keep the files backedup in case you need it but I’m defnitely following the advice to get Compass working. Let me know how i can send it to you.", "username": "Arno_Van_der_Walt" }, { "code": "", "text": "Hi ArnoOn windows the preferences file should be in AppData\\Roaming\\MongoDB Compass\\AppPreferences\\General.jsonOn Mac ~/Library/Application Support/MongoDB Compass/AppPreferences/Global.jsonand on Linux ~/.config/MongoDB Compass/AppPreferences/General.jsonYou can just open it in a text editor and copy/paste the text. Or have a look and see if you can spot if it is valid/invalid JSON.On which operating system are you and which version of Compass is this? We released version 1.40.2 that should work around the problem of corrupted preferences.", "username": "Le_Roux_Bodenstein" }, { "code": "", "text": "Oh and the newer version should also catch the error and show it in a dialog. It would be useful to get that error message as well.", "username": "Le_Roux_Bodenstein" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Compass GUI not loading after latest version update - win 10
2023-09-27T09:13:49.379Z
Compass GUI not loading after latest version update - win 10
583
null
[]
[ { "code": "https://cloud.mongodb.com/api/atlas/v1.0/groups/myporojectID/clusters/myclusterName{\n \"detail\": \"Unexpected error.\",\n \"error\": 500,\n \"errorCode\": \"UNEXPECTED_ERROR\",\n \"parameters\": [],\n \"reason\": \"Internal Server Error\"\n}\n{\n \"acceptDataRisksAndForceReplicaSetReconfig\": \"2019-08-24T14:15:22Z\",\n \"autoScaling\": {\n \"compute\": {\n \"enabled\": true,\n \"scaleDownEnabled\": true\n },\n \"diskGBEnabled\": false\n },\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"PRIMARY\"\n },\n \"clusterType\": \"REPLICASET\",\n \"diskSizeGB\": 32,\n \"encryptionAtRestProvider\": \"NONE\",\n \"labels\": [],\n \"mongoDBMajorVersion\": \"4.4\",\n \"name\": myclusterName,\n \"numShards\": 1,\n \"paused\": false,\n \"pitEnabled\": true,\n \"providerBackupEnabled\": true,\n \"providerSettings\": {\n \"providerName\": \"AZURE\",\n \"autoScaling\": {\n \"compute\": {\n \"maxInstanceSize\": \"M50\",\n \"minInstanceSize\": \"M10\"\n } \n },\n \"diskTypeName\": \"P4\",\n \"instanceSizeName\": \"M20\",\n \"regionName\": \"UK_SOUTH\"\n },\n \"replicationSpecs\": [\n {\n \"id\": XXX,\n \"numShards\": 1,\n \"regionsConfig\": {\n \"UK_SOUTH\": {\n \"analyticsNodes\": 0,\n \"electableNodes\": 3,\n \"priority\": 7,\n \"readOnlyNodes\": 0\n }\n },\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}\n", "text": "I’m trying to update my cluster to M20 tier via API call\nThe URL\nhttps://cloud.mongodb.com/api/atlas/v1.0/groups/myporojectID/clusters/myclusterName\nThe response is:The API key has been created, also access/permissions granted\nI’m using postman\nThe GET works fine\nA JSON file as a body:One more question: where I should find “id”. I did GET request and took the id from answer. Is it possible to find via UI (portal) ?", "username": "Andrii_Tkachuk" }, { "code": "https://cloud.mongodb.com/api/atlas/v1.0/groups/myporojectID/clusters/myclusterName{\n \"detail\": \"Unexpected error.\",\n \"error\": 500,\n \"errorCode\": \"UNEXPECTED_ERROR\",\n \"parameters\": [],\n \"reason\": \"Internal Server Error\"\n}\nhttps://cloud.mongodb.com/api/atlas/v2/groups/{groupId}/clusters/{clusterName}\nv2v1groupIdclusterName", "text": "Hi @Andrii_Tkachuk,I’m trying to update my cluster to M20 tier via API call\nThe URL\nhttps://cloud.mongodb.com/api/atlas/v1.0/groups/myporojectID/clusters/myclusterName\nThe response is:Can you try the update using the following Modify One Multi-Cloud Cluster from One Project request:Note: The difference relative to your initial endpoint is a v2 instead of v1Let me know if you’re still getting an error, if so, please send the full request details here. You can redact the groupId and clusterName before doing so.Regards,\nJason", "username": "Jason_Tran" }, { "code": "{\n \"detail\": \"Invalid accept header or version date.\",\n \"error\": 406,\n \"errorCode\": \"INVALID_VERSION_DATE\",\n \"parameters\": [],\n \"reason\": \"Not Acceptable\"\n}\n", "text": "HI @Jason_Tran, thanks for replying\nV2 (https://cloud.mongodb.com/api/atlas/v2/groups/myporojectID/clusters/myclusterName) returnedI’m using postman. The same result in python.\nThe full request in my first post", "username": "Andrii_Tkachuk" }, { "code": "idclusterNameidid", "text": "The full request in my first postAre you following the request sample (request) as opposed to the response sample? The body you’ve provided looks to be the response. 
Refer to : Modify One Multi-Cloud Cluster from One ProjectE.g (from the docs):\nPayload:\nimage976×738 32.7 KBResponse:\nimage962×856 46.4 KBI noticed your body you’ve provided in your initial response contains id, clusterName values which the payload doesn’t require. I think this also answers your question regarding the id. The id is in the response once the update request is sent.Regards,\nJason", "username": "Jason_Tran" }, { "code": "{\n \"acceptDataRisksAndForceReplicaSetReconfig\": \"2019-08-24T14:15:22Z\",\n \"backupEnabled\": false,\n \"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"PRIMARY\"\n },\n \"clusterType\": \"REPLICASET\",\n \"diskSizeGB\": 32,\n \"encryptionAtRestProvider\": \"NONE\",\n \"labels\": [],\n \"mongoDBMajorVersion\": \"4.4\",\n \"name\": \"myclustername\",\n \"paused\": false,\n \"pitEnabled\": true,\n \"replicationSpecs\": [\n {\n \"numShards\": 1,\n \"regionConfigs\": [\n {\n \"providerName\": \"AZURE\",\n \"regionName\": \"UK_SOUTH\",\n \"analyticsNodes\": 0,\n \"electableNodes\": 3,\n \"priority\": 7,\n \"readOnlyNodes\": 0,\n \"analyticsSpecs\": {\n \"instanceSize\": \"M20\"\n }\n }\n ],\n \"zoneName\": \"Zone 1\"\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"tags\": [],\n \"terminationProtectionEnabled\": false,\n \"versionReleaseSystem\": \"LTS\"\n}\nhttps://cloud.mongodb.com/api/atlas/v2/groups/myprojectId/clusters/myclustername{\n \"detail\": \"Invalid accept header or version date.\",\n \"error\": 406,\n \"errorCode\": \"INVALID_VERSION_DATE\",\n \"parameters\": [],\n \"reason\": \"Not Acceptable\"\n}\n", "text": "Hello @Jason_Tran\nPayloadThe URL\nhttps://cloud.mongodb.com/api/atlas/v2/groups/myprojectId/clusters/myclustername", "username": "Andrii_Tkachuk" }, { "code": "{\n \"detail\": \"Invalid accept header or version date.\",\n \"error\": 406,\n \"errorCode\": \"INVALID_VERSION_DATE\",\n \"parameters\": [],\n \"reason\": \"Not Acceptable\"\n}\nAcceptAccept*/*Acceptapplication/vnd.atlas.2023-02-01+jsonbody{\n \"backupEnabled\": true,\n \"clusterType\": \"REPLICASET\",\n \"diskSizeGB\": 30,\n \"mongoDBMajorVersion\": \"6.0\",\n \"name\": \"Cluster0\",\n \"replicationSpecs\": [\n {\n \"numShards\": 1,\n \"regionConfigs\": [\n {\n \"electableSpecs\": {\n \"diskIOPS\": 0,\n \"ebsVolumeType\": \"STANDARD\",\n \"instanceSize\": \"M20\",\n \"nodeCount\": 3\n },\n \"priority\": 7,\n \"providerName\": \"AWS\",\n \"regionName\": \"AP_SOUTHEAST_2\"\n }\n ]\n }\n ],\n \"rootCertType\": \"ISRGROOTX1\",\n \"terminationProtectionEnabled\": false\n}\n", "text": "Thanks @Andrii_Tkachuk,What’s the Accept value you’re using in Postman? I managed to get the same error in postman for my own test environment when using an Accept value of */*:\nimage1058×422 36.8 KBUpon changing the Accept value in postman to application/vnd.atlas.2023-02-01+json, the request went through.For what it’s worth, here is the body of the request I tested:Note: This is just a sample body that I had used for my test environment. I am not advising you to use this exact same request body.Regards,\nJason", "username": "Jason_Tran" }, { "code": "curl", "text": "I just thought too, you could try running curl as well to see if you get same error. It could be related to the postman setup as opposed to the body details. 
Not mandatory, but something that can also help troubleshoot.", "username": "Jason_Tran" }, { "code": "\"biConnector\": {\n \"enabled\": false,\n \"readPreference\": \"secondary\"\n},\n", "text": "Thank you @Jason_Tran\nThe solution works. I changed one of the headers to Accept: application/vnd.atlas.2023-02-01+json and removed the biConnector block shown above.\nThanks!", "username": "Andrii_Tkachuk" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Change the cluster tier. PATCH request
2023-09-27T12:57:19.084Z
Change the cluster tier. PATCH request
312
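A Node.js sketch of the request that finally worked in the thread above. `digestFetch` is a hypothetical helper standing in for any HTTP client that implements HTTP digest authentication (which the Atlas Admin API requires for programmatic API keys and plain fetch does not provide); the payload shape mirrors the sample posted above and should be adjusted to your own cluster:

```javascript
// Sketch only: GROUP_ID, CLUSTER and digestFetch are placeholders/assumptions.
const GROUP_ID = "<projectId>";
const CLUSTER = "<clusterName>";

async function resizeCluster(digestFetch) {
  const url = `https://cloud.mongodb.com/api/atlas/v2/groups/${GROUP_ID}/clusters/${CLUSTER}`;
  return digestFetch(url, {
    method: "PATCH",
    headers: {
      // The versioned Accept header is what resolved the 406 in this thread.
      Accept: "application/vnd.atlas.2023-02-01+json",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      replicationSpecs: [
        {
          regionConfigs: [
            {
              providerName: "AZURE",
              regionName: "UK_SOUTH",
              priority: 7,
              electableSpecs: { instanceSize: "M20", nodeCount: 3 },
            },
          ],
        },
      ],
    }),
  });
}
```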
null
[ "dot-net" ]
[ { "code": "", "text": "Hi everyone!\nIn my app I need to be sure that indexes had been created before collection operations. For example, I must be sure that ALL indexes had been created before Create document operation.\nSo my question is: can I get the situation when some indexes were created then command was performed then other indexes were created if indexes creation and my command are executed in parallel? I use C# driver", "username": "astakhova.ksen.762" }, { "code": "db.currentOp()", "text": "I don’t fully understand your question. But this explains how a new index is built on a non-empty collection.You can alsoUse db.currentOp() to monitor the progress of ongoing index builds.", "username": "Kobe_W" }, { "code": "Collection.Indexes.CreateManyAsync()Collection.Indexes.CreateManyAsync()", "text": "I mean that in the first thread I do indexes creation (they are background indexes actually if it makes sense) with command Collection.Indexes.CreateManyAsync() and in the other thread I do some operation using this collection, for example, document insertion. My collection is empty before both indexes creation and document insertion.Let’s consider the situation when we want to create 5 background indexes. We do Collection.Indexes.CreateManyAsync() and document insertion at the same time. Can this thing happen: 3 indexes were created then document was inserted then other 2 indexes were created?", "username": "astakhova.ksen.762" }, { "code": "", "text": "Out of interest why does it matter? In only doing inserts and index creation why does itmattet which fimishes first.", "username": "John_Sewell" }, { "code": "", "text": "If we have a unique index on some field and firstly we create index and secondly do 2 documents insertion having the same field then we get an error on the second insertion. This is desirable behaviour. But if we firstly do these docs insertion and then create indexes then we get an error on index creation and this isn’t desirable.", "username": "astakhova.ksen.762" } ]
Atomicity of indexes creation
2023-09-29T05:50:00.098Z
Atomicity of indexes creation
302
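The thread uses the C# driver, but the sequencing concern is driver-agnostic: index creation is not atomic with respect to concurrent writes, so the safe pattern is simply to await index creation before starting any inserts. A minimal sketch with the Node.js driver, with the database, collection and field names as placeholders:

```javascript
// Sketch only: await createIndexes before the first insert so the unique
// constraint is guaranteed to exist when writes begin.
const { MongoClient } = require("mongodb");

async function run(uri) {
  const client = new MongoClient(uri);
  try {
    const users = client.db("app").collection("users"); // placeholder names
    await users.createIndexes([{ key: { email: 1 }, unique: true, name: "uniq_email" }]);

    // If inserts ran concurrently with the index build, their ordering relative
    // to the individual index creations would not be guaranteed; after the await it is.
    await users.insertOne({ email: "a@example.com" });
    await users.insertOne({ email: "a@example.com" }); // now reliably fails with a duplicate-key error
  } finally {
    await client.close();
  }
}
```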
null
[ "compass", "connecting" ]
[ { "code": "", "text": "The status is showing as data server failed to start but its connecting from mongodb compass .Is there any way for fixing the status.", "username": "Ritismita_Rath" }, { "code": "", "text": "Good morning, welcome to the MongoDB community.I don’t know if I understood correctly, what status? Can you post screenshots and error messages if possible?", "username": "Samuel_84194" }, { "code": "", "text": "\nHi, this is the status of mongodb showing on putty.", "username": "Ritismita_Rath" }, { "code": "systemctl status mongod.service -lps -ef | grep -i mongo", "text": "But are you able to connect to Mongo based on what you say above? Are you sure this is the correct service?Can you givesystemctl status mongod.service -lAlso, get the last few lines from the MongoDB log and also do a ps -ef | grep -i mongo to confirm that there is no other service running.", "username": "Samuel_84194" }, { "code": "systemctl restart mongod.servicesystemctl status mongod.service -lps -ef | grep -i mongod", "text": "", "username": "amit_bhargav" } ]
Failed to start MongoDB Database Server. but connecting from mongodb compass
2023-09-20T11:39:41.017Z
Failed to start MongoDB Database Server. but connecting from mongodb compass
400
null
[ "replication", "sharding" ]
[ { "code": "`2023-09-30T08:32:31.177Z I COMMAND [conn343] Command on database admin timed out waiting for read concern to be satisfied. Command: { find: \"system.keys\", filter: { purpose: \"HMAC\", expiresAt: { $gt: Timestamp(1696062714, 2) } }, sort: { expiresAt: 1 }, readConcern: { level: \"majority\", afterOpTime: { ts: Timestamp(1696003640, 1), t: 109 } }, maxTimeMS: 30000, $readPreference: { mode: \"nearest\" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1696062714, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $configServerState: { opTime: { ts: Timestamp(1696003640, 1), t: 109 } }, $db: \"admin\" }. Info: ExceededTimeLimit: Error waiting for snapshot not less than { ts: Timestamp(1696003640, 1), t: 109 }, current relevant optime is { ts: Timestamp(1696062748, 1), t: 1 }. :: caused by :: operation exceeded time limit`\n \n\n`2023-09-30T08:32:31.177Z I COMMAND [conn343] command admin.$cmd command: find { find: \"system.keys\", filter: { purpose: \"HMAC\", expiresAt: { $gt: Timestamp(1696062714, 2) } }, sort: { expiresAt: 1 }, readConcern: { level: \"majority\", afterOpTime: { ts: Timestamp(1696003640, 1), t: 109 } }, maxTimeMS: 30000, $readPreference: { mode: \"nearest\" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1696062714, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $configServerState: { opTime: { ts: Timestamp(1696003640, 1), t: 109 } }, $db: \"admin\" } numYields:0 reslen:683 locks:{} protocol:op_msg 30009ms`\n`2023-09-30T08:32:41.445Z I COMMAND [conn339] Command on database config timed out waiting for read concern to be satisfied. Command: { find: \"collections\", filter: { _id: /^config\\./ }, readConcern: { level: \"majority\", afterOpTime: { ts: Timestamp(1696003640, 1), t: 109 } }, maxTimeMS: 30000, $readPreference: { mode: \"nearest\" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1696062730, 1), signature: { hash: BinData(0, E6E45379DFB713828A3A464463950B385DC87F98), keyId: 7284513431665770523 } }, $configServerState: { opTime: { ts: Timestamp(1696003640, 1), t: 109 } }, $db: \"config\" }. Info: ExceededTimeLimit: Error waiting for snapshot not less than { ts: Timestamp(1696003640, 1), t: 109 }, current relevant optime is { ts: Timestamp(1696062748, 1), t: 1 }. :: caused by :: operation exceeded time limit`\n2023-09-30T08:32:41.445Z I COMMAND [conn339] command config.$cmd command: find { find: \"collections\", filter: { _id: /^config\\./ }, readConcern: { level: \"majority\", afterOpTime: { ts: Timestamp(1696003640, 1), t: 109 } }, maxTimeMS: 30000, $readPreference: { mode: \"nearest\" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1696062730, 1), signature: { hash: BinData(0, E6E45379DFB713828A3A464463950B385DC87F98), keyId: 7284513431665770523 } }, $configServerState: { opTime: { ts: Timestamp(1696003640, 1), t: 109 } }, $db: \"config\" } numYields:0 reslen:683 locks:{} protocol:op_msg 30007ms\n", "text": "Hi fellow mongodb guru:I ran into an issue with a sharded cluster. Someone accidentially deleted the data from one of the sharded groups. Since it is a QA db, no big issue. I figured I can stop all mongos. deleted all the data in all data nodes in all sharded groups. Deleted all db files on the config server replicaset. Start from scratch.I re-initilized the config server replicaset. Did the same for each of the data node sharded replicasets. started mongos. 
and added the following shard config:sh.addShard("qa_group1/mongo-qa-vm1.fra1.framework:27018,mongo-qa-vm2.fra1.framework:27018")\nsh.addShard("qa_group2/mongo-qa-vm3.fra1.framework:27018,mongo-qa-vm3.fra1.framework:27018")For some reason, I continue to get the following errors. I think that because of the error, shard distribution is not working: data stays on only one shard group.So I want to see if anyone out there can help me troubleshoot what is going on. Thanks in advance.\nEric\nLogs from the config server primary (all nodes show the same error)", "username": "Eric_Wong" }, { "code": "", "text": "I finally found the root cause: a mongos that had been started by a user was still keeping a connection open to the config server.", "username": "Eric_Wong" } ]
Config server operation exceeded time limit
2023-09-30T09:43:03.514Z
Config server operation exceeded time limit
262
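Since the root cause above turned out to be a forgotten mongos still connected to the rebuilt config server replica set, one way to spot such leftovers is to list the client addresses of current operations on the config server primary. A mongosh sketch; field availability can vary by server version, so treat this as a starting point rather than a definitive check:

```javascript
// Run against the config server primary: collect distinct client addresses
// from currently running operations to find unexpected mongos connections.
const ops = db.currentOp(true).inprog || [];
const clients = [...new Set(ops.map((op) => op.client || op.client_s).filter(Boolean))];
printjson(clients);
```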
null
[ "aggregation", "queries", "transactions" ]
[ { "code": "\"permissions\": [\n {\n \"module\": \"transactions\",\n \"sub_module\": [\n {\n \"name\": \"health\",\n \"headers\": [\n {\n \"header_id\": \"651526995487452392fba692\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f13\"\n },\n {\n \"header_id\": \"651526a75487452392fba695\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f14\"\n },\n {\n \"header_id\": \"651526ac5487452392fba698\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f15\"\n },\n {\n \"header_id\": \"651526b25487452392fba69b\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f16\"\n },\n {\n \"header_id\": \"651526c85487452392fba69e\",\n \"status\": false,\n \"_id\": \"65153030866481f26ac30f17\"\n }\n ]\n }\n ]\n },\n {\n \"module\": \"dashbord\",\n \"sub_module\": [\n {\n \"name\": \"health\",\n \"headers\": [\n {\n \"header_id\": \"651526995487452392fba692\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f0d\"\n },\n {\n \"header_id\": \"651526a75487452392fba695\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f0e\"\n },\n {\n \"header_id\": \"651526ac5487452392fba698\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f0f\"\n },\n {\n \"header_id\": \"651526b25487452392fba69b\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f10\"\n },\n {\n \"header_id\": \"651526c85487452392fba69e\",\n \"status\": false,\n \"_id\": \"65153030866481f26ac30f11\"\n }\n ]\n },\n {\n \"name\": \"car\",\n \"headers\": [\n {\n \"header_id\": \"651526995487452392fba692\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f0d\"\n },\n {\n \"header_id\": \"651526a75487452392fba695\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f0e\"\n },\n {\n \"header_id\": \"651526ac5487452392fba698\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f0f\"\n },\n {\n \"header_id\": \"651526b25487452392fba69b\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f10\"\n },\n {\n \"header_id\": \"651526c85487452392fba69e\",\n \"status\": false,\n \"_id\": \"65153030866481f26ac30f11\"\n }\n ]\n }\n ]\n }\n ]\n````Preformatted text`\n\"permissions\": [\n {\n \"module\": \"transactions\",\n \"sub_module\": [\n {\n \"name\": \"health\",\n \"headers\": [\n {\n \"name\": \"view\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f13\"\n },\n {\n \"name\": \"add\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f14\"\n },\n {\n \"name\": \"edit\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f15\"\n },\n {\n \"header_id\": \"remove\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f16\"\n },\n {\n \"name\": \"download\",\n \"status\": false,\n \"_id\": \"65153030866481f26ac30f17\"\n }\n ]\n }\n ]\n },\n {\n \"module\": \"dashbord\",\n \"sub_module\": [\n {\n \"name\": \"health\",\n \"headers\": [\n {\n \"name\": \"view\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f0d\"\n },\n {\n \"name\": \"add\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f0e\"\n },\n {\n \"name\": \"edit\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f0f\"\n },\n {\n \"name\": \"remove\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f10\"\n },\n {\n \"header_id\": \"download\",\n \"status\": false,\n \"_id\": \"65153030866481f26ac30f11\"\n }\n ]\n },\n {\n \"name\": \"car\",\n \"headers\": [\n {\n \"name\": \"view\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f0d\"\n },\n {\n \"name\": \"add\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f0e\"\n },\n {\n \"name\": \"edit\",\n \"status\": true,\n \"_id\": 
\"65153030866481f26ac30f0f\"\n },\n {\n \"name\": \"remove\",\n \"status\": true,\n \"_id\": \"65153030866481f26ac30f10\"\n },\n {\n \"name\": \"download\",\n \"status\": false,\n \"_id\": \"65153030866481f26ac30f11\"\n }\n ]\n }\n ]\n }\n]\n````Preformatted text`\n", "text": "I have arrayi want like thisto fetch name of header use header_id from header table", "username": "Sumit_Kumar15" }, { "code": "", "text": "Have you tried $lookup?But you do one thing really wrong. You store you id as string rather than a real object id. This takes more space and you would need to convert to object id every time you lookup.", "username": "steevej" } ]
How to group this using aggregation
2023-09-28T16:57:29.226Z
How to group this using aggregation
313
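Following the $lookup suggestion, a mongosh sketch of resolving header names is below. The collection names (`roles` for the documents holding permissions, `headers` for the lookup table) and the final re-grouping step are assumptions; because header_id is stored as a string, it is converted with $toObjectId before the join:

```javascript
db.roles.aggregate([
  { $unwind: "$permissions" },
  { $unwind: "$permissions.sub_module" },
  { $unwind: "$permissions.sub_module.headers" },
  {
    $lookup: {
      from: "headers",
      let: { hid: { $toObjectId: "$permissions.sub_module.headers.header_id" } },
      pipeline: [{ $match: { $expr: { $eq: ["$_id", "$$hid"] } } }],
      as: "headerDoc",
    },
  },
  // Replace header_id with the looked-up name; headerDoc is an array, so take
  // the first element's name.
  {
    $set: {
      "permissions.sub_module.headers.name": { $first: "$headerDoc.name" },
    },
  },
  { $unset: ["headerDoc", "permissions.sub_module.headers.header_id"] },
  // A final $group/$push pass (omitted here) would rebuild the nested arrays.
]);
```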
null
[ "server" ]
[ { "code": "", "text": "mongod.service: main process exited, code=killed status=4/ILL centos 7, my virtual machine has centos 7 x86_64, please help", "username": "Noe_Cruz_Asturias" }, { "code": "", "text": "Hi @Noe_Cruz_AsturiasThis and similar have been answered many times please use the search and update this topic if you have further questions.", "username": "chris" } ]
Mongod.service: main process exited, code=killed status=4/ILL centos 7
2023-09-30T22:53:24.454Z
Mongod.service: main process exited, code=killed status=4/ILL centos 7
462
null
[ "aggregation" ]
[ { "code": "{ \"_id\" : 1, \"item\" : \"foo\", values: [ \"foo\", \"foo2\", \"foo3\"] }\n{ \"_id\" : 2, \"item\" : \"bar\", values: [ \"bar\", \"bar2\", \"bar3\"] }\n{ \"_id\" : 3, \"item\" : \"baz\", values: [ \"baz\", \"baz2\", \"baz3\"] }\nvalues[\n {$sort: {\"_id\": 1}},\n {$unwind: \"$values\"}\n]\n{ \"_id\" : 1, \"item\" : \"foo\", values: \"foo\" }\n{ \"_id\" : 1, \"item\" : \"foo\", values: \"foo2\" }\n{ \"_id\" : 1, \"item\" : \"foo\", values: \"foo3\" }\n{ \"_id\" : 2, \"item\" : \"bar\", values: \"bar\" }\n{ \"_id\" : 2, \"item\" : \"bar\", values: \"bar2\" }\n{ \"_id\" : 2, \"item\" : \"bar\", values: \"bar3\" }\n{ \"_id\" : 3, \"item\" : \"baz\", values: \"baz\" }\n{ \"_id\" : 3, \"item\" : \"baz\", values: \"baz2\" }\n{ \"_id\" : 3, \"item\" : \"baz\", values: \"baz3\" }\n", "text": "I am wandering whether using $unwind operator in aggregation pipeline for document with nested array will return the deconstructed documents in the same order as the order of the items in the array.\nExample:\nSuppose I have the following documentsI would like to use paging for all values in all documents in my application code. So, my idea is to use mongo aggregation framework to:So the question using the example described above is:Is it guaranteed that the following aggregation pipeline:will always result to the following documents with exactly the same order?:", "username": "karaimin" }, { "code": "$match[\n {$match: { <query> } }\n {$unwind: \"$values\"}\n {$sort: {\"_id\": 1}},\n]\n", "text": "Hello @karaimin welcome to the community!There is a chance that you get the same results but no guarantee. Personally I always use the sort after the unwind, ideal with an $match before the unwind to try to get as few documents as possible.The above will provide for sure the wanted result. However, I hope that one oft the MDB internals can elaborate onCheers,\nMichael", "username": "michael_hoeller" }, { "code": "valuesvalues\"allowDiskUse: true\"", "text": "Thank you @michael_hoeller for you response.\nHowever, maybe this is not an option for me since I expect the real sample to have a small number documents which I will $match by some query (this is why I am sorting them by _id) and large number of items in values array.\nAs I mentioned I want to use pagination for the items in the values attribute. So I need a consistent order of them. Sorting them is not an option since after $unwind I loose the DB Index and it will exceed the memory of 100MB available for the pipeline. 
I can use \"allowDiskUse: true\", but that is also not a good option because it will slow down queries a lot.", "username": "karaimin" }, { "code": "\"allowDiskUse: true\"includeArrayIndex{ \"_id\" : 1, \"arrayIndex\" : NumberLong(0), \"item\" : \"foo\", values: \"foo\" }\n{ \"_id\" : 1, \"arrayIndex\" : NumberLong(1), \"item\" : \"foo\", values: \"foo2\" }\n{ \"_id\" : 1, \"arrayIndex\" : NumberLong(2), \"item\" : \"foo\", values: \"foo3\" }\n{ \"_id\" : 2, \"arrayIndex\" : NumberLong(1), \"item\" : \"bar\", values: \"bar\" }\n{ \"_id\" : 2, \"arrayIndex\" : NumberLong(2), \"item\" : \"bar\", values: \"bar2\" }\n{ \"_id\" : 2, \"arrayIndex\" : NumberLong(3), \"item\" : \"bar\", values: \"bar3\" }\n// Page 1\ndb.users.find().limit (10)\n// Page 2\ndb.users.find().skip(10).limit(10)\n// Page 3\ndb.users.find().skip(20).limit(10)\ndb.users.find().skip(pagesize*(n-1)).limit(pagesize)\n// Page 1\ndb.users.find().limit(pageSize);\n// Find the id of the last document in this page\nlast_id = ...\n\n// Page 2\nusers = db.users.find({` `'_id'` `> last_id}). limit(10);\n// Update the last id with the id of the last document in this page\nlast_id = ...\n$slice", "text": "Hello @karaiminI can use \"allowDiskUse: true\" , but that is also not a good option because it will slow down queries a lot.Yes, that should be avoid when you are acting in a realtime app.One thought on this, large arrays are almost always difficult to handle. Maybe you can change your schema and use embedding for $values? This would open completely different options.You can check if a multikey Index on one or some fields in your array can help. In case you can find a good index you can walk along the index without the $unwindsince after $unwind I loose the DB IndexI am not sure what you mean. When you deconstruct an array with $unwind you will get an document per field and element. You can add the option includeArrayIndex to $unwind. This will add an field to the new document which just numbers the fields. This could also provide a path to go.In general, to retrieve page ‘n’ the code looks like this:As the size of your data increases, this approach has performance problems. The reason is that every time the query is executed, the full result set is built up, then the server has to walk from the beginning of the collection to the specified offset. As your offset increases, this process gets slower. Also, this process does not make efficient use of the indexes. So typically the ‘skip()’ and ‘limit()’ approach is useful when you have small data sets,The reason the previous approach does not scale well is the skip() command. Depending how you have build your _id you may can use the natural order in the stored data like a timestamp, or an index (pls s. above).This approach leverages the inherent order that exists in the “_id” field. Also, since the “_id” field is indexed by default, the performance of the find operation is very good.There are further options to do pagination:both I do not think will help here so I skip the examples.Cheers,\nMichael", "username": "michael_hoeller" }, { "code": "values", "text": "Hi @michael_hoellerI didn’t get your idea here:Maybe you can change your schema and use embedding for $values? 
This would open completely different options.If you mean changing my schema to looks similar to what $unwind does (reversing One-to-Many relation) and adding appropriate key index for each value will do the trick, but this schema also has a lot of concerns related to my other scenarios.I am not sure what you mean.I mean that now I may have a single key index on values field (used for queries), but after $unwind it is useless. So, next stage (which is $sort) won’t benefit from the sorted structure of the indexes in the B-Tree just like regular $sort operation on indexed field.My example is similar to the bucket pattern. The large document (with a too large array) is partitioned to some small documents (with smaller array size) in order to fit in the max document size.\nKeeping too many documents will lead to heavy update operations (sometimes including massive amount of documents ) which I try to avoid.", "username": "karaimin" }, { "code": "$unwind$sortallowDiskUse: true", "text": "The same question:Does $unwind keep the “unwound” documents in the same order as they were ordered in the source array?Need to know this as we would like to avoid an unneeded $sort that would require allowDiskUse: true (when the aggregation otherwise doesn’t). Couldn’t find anything about this in the docs.It seems potentially reasonable to assume that it would be in the same order, but can we be sure? @karaimin did you figure it out?", "username": "qtax" }, { "code": "", "text": "No @qtax, I didn’t find appropriate solution. I didn’t find any information in the official documentation and according to @michael_hoeller the order is not guaranteed, so I assume that I don’t have any evidence to rely on the returned order.\nI solve the problem with aggregation pipeline that sort the documents and give me the length of each nested array. After that with some application logic I am able to reduce the documents where my next page is. Then, I query only the filtered documents and this way the pagination is achieved. It is not so efficient like the assumption described in the first question but for now it is the best I have found.", "username": "karaimin" }, { "code": "arrayincludeArrayIndex", "text": "Hello @karaimin and @qtax,thanks for updating, I lost your message on my bookmarks.according to @Michael the order is not guaranteedThis was my experience in an previous customer project. The “unwound” documents are identical to the input document except for the value of the array field which now holds a value from the original array element. It seems that the documents are created by the sequence they occur in the array. I assume that one unwind can be idempotent but within the the flow of a process you might get side effects (e.g. empty array).\nIn case you need to keep the sequence you can add includeArrayIndex which will add a field to the unwound document which holds the position of the field in the previous array.To be absolutely sure what is correct I hope that @Asya_Kamsky can step in?regards,\nMichael", "username": "michael_hoeller" }, { "code": "includeArrayIndex{ \"_id\" : 1, \"item\" : \"foo\", values: [ \"foo\", \"foo2\", \"foo3\"] }\n{ \"_id\" : 1, \"item\" : \"foo\", values: \"foo2\", \"arrayIndex\" : NumberLong(1 }\n{ \"_id\" : 1, \"item\" : \"foo\", values: \"foo\", \"arrayIndex\" : NumberLong(0 }\n{ \"_id\" : 1, \"item\" : \"foo\", values: \"foo3\", \"arrayIndex\" : NumberLong(2 }\n", "text": "Hi @michael_hoellerI know the existence of includeArrayIndex option. 
However I would like to avoid application level sorting. So, saying in other words:Is there any chance that this:may be destructed to this.Take a look at the order (1 0 2)Having only the index won’t solve our problem. We need application level sorting if the above sequence is returned.\nBest Regards\nAleydin", "username": "karaimin" }, { "code": "", "text": "Hi @karaiminI understood your question, I have seen this happen in a project some month back. I did tests just for me, but I have not been able to recreate the issue. To be on the save side on customer projects, I did add an extra(?) sort. This is a good question! With the last message I “pinged” @Asya_Kamsky she probably can add the full insight.\nJust do not want to limt this to Asya she is often abroad, anyone else around who can add on this?Regards,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Ok, Thank you @michael_hoeller for all your support. I am looking forward of further updates on this topic. I will be glad to see official statement from the MongoDB staff.", "username": "karaimin" }, { "code": "", "text": "Hi there folks,Yes, the order of documents will be the same as the order of elements in the array and the order pre-unwind will be preserved.But using $skip and $limit for pagination is not really a good idea as it’s not very performant, especially once you move to sharding…Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Hello Asya,\nthanks for clearing that, lucky to save an extra sort in the future. Concerning pagination I provided some suggestions fairly at the beginning of the post.\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Thanks @Asya_Kamsky and @michael_hoeller for your time on revealing the mystery … and the suggestions about the pagination approach.Since, $unwind will be used, I won’t be able to benefit from an indexed field and $skip and $limit is my only option for pagination (with this data model). I am going to test it if it can suits our needs. 
Maybe, combination from both “$skip/$limit” for the first few pages and the approach I described above can be appropriate for me.Best Regards\nAleydin", "username": "karaimin" }, { "code": "$unwind", "text": "Yes, the order of documents will be the same as the order of elements in the array and the order pre-unwind will be preserved.Thanks for sharing this statement, and I gather you are a very reputable source, having worked on the aggregation framework itself.It seems a colleague of yours contradicted this statement in a different forum thread a few months later sayng “MongoDB only gurantee order of documents based on a sort operation” which leaves me unsure whether it would be safe to avoid a sort operation (for performance reasons) when performing $unwindIs there any up-to-date definititve documentation on what guarantees there are regarding array ordering during aggregation?", "username": "Benjamin_Lowe" }, { "code": "", "text": "There is no contradiction.The $unwind preserves the order of the documents it receives and the order of the array elements.To get a predictable order of documents you must $sort.That is if you $sort before $unwind, documents will be sorted after $unwind.", "username": "steevej" }, { "code": "", "text": "That was my understanding from this thread, but if someone asks “Does X operator preserve the order of an array?” and the response is that no operator other than sort guarantees order, it does somewhat suggest that $unravel might not preserve the intrinsitc order of the array.I agree though that it seems likely to be exactly as you described though, and $unravel does preserve order (but maybe $in does not!)", "username": "Benjamin_Lowe" } ]
MongoDB Aggregation - Does $unwind order documents the same way as the nested array order?
2020-07-28T08:35:13.188Z
MongoDB Aggregation - Does $unwind order documents the same way as the nested array order?
10,763
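A compact illustration of the guarantee Asya describes: sort the documents first, then $unwind, and the element order within each array is preserved. Collection and field names follow the example at the top of the thread; the skip/limit pagination shown still carries the scaling caveat mentioned above:

```javascript
const pageSize = 100;
const page = 3; // 1-based page number; placeholder values

db.items.aggregate([
  { $sort: { _id: 1 } },   // order the documents BEFORE deconstructing
  { $unwind: "$values" },  // element order inside each array is preserved
  { $skip: pageSize * (page - 1) },
  { $limit: pageSize },
]);
```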
null
[ "replication", "sharding" ]
[ { "code": "", "text": "Hi everyone!I have a three servers cluster in 3.4. Each server runs one MongoS, one MongoD for our single data replicaset, and one MongoD for the config server replicaset. This setup was designed a while ago when we thought we were going to need sharding, but we never actually enabled sharding on collections.I’d like to upgrade that MongoDB cluster to 3.6 (and more, up to the latest version). The documentation clearly states that the config server rs should be upgraded first, and then the data server and last MongoS. Except since those run on the same machine for me, the config server mongod and the data mongod use the same binary. And MongoS, while a different binary, comes from the same package. So I can’t really upgrade one and then the other (unless I keep a data MongoD running from a now removed binary, which doesn’t sound ideal).I can of course move the config server rs to 3 different servers before the upgrade, but I was wondering if there was a simpler way? The standard Ubuntu package installs the binary in a common path without the version in it, so I’m not sure if I can have co-existing setup of both Mongo 3.4 and 3.6.What do you think? Thanks!", "username": "Wenceslas_des_Deserts" }, { "code": "sh.stopBalancer()/path/to/your/mongod -f config_server_config_file.conf", "text": "Hi @Wenceslas_des_Deserts ,Welcome to the MongoDB Community Forums.I see that you’re mainly concerned about binaries being same for shard server and config server.Here is what you can do to manage this within your infrastructure and with the minimal efforts. Good thing is, you can have co-existing setup with the way described below.Before that, please note: For production setup, it is advisable to use different physical machines for each of mongods, be it shard server replica set or config server. Reason behind this is, if hardware failure occurs on the instance running primary nodes of both config server and shard server, there are chances you might end up with no active client server connection which would accept read, write requests.Feel free to shoot any questions you shall have in above setup or if you’re facing any issues.All the best!", "username": "viraj_thakrar" }, { "code": "", "text": "Hi,Thanks for the quick answer!I have two follow-up questions:", "username": "Wenceslas_des_Deserts" }, { "code": "", "text": "Yes, The binaries are self-contained and as far as you’ve supported package dependencies installed (which you will get error for if it’s not. For example, libcrypt.so or libssl.so which would be there in your disk already, but if it’s not there in /usr/lib/, you would just need to copy that file from the location and paste it to /usr/lib/),About the new primary being elected case, It will elect the new primary fine, but just think of a case when mongo client is making a query while election is happening, rare possibility of downtime but possible and you might want to avoid that in production.", "username": "viraj_thakrar" }, { "code": "", "text": "I hope you got the answer @Wenceslas_des_Deserts . If not, Feel free to let us know if you’ve any questions or facing any issues. Happy to help!", "username": "viraj_thakrar" }, { "code": "", "text": "Yes, thank you very much! I really appreciate the quality and quickness of your answers!", "username": "Wenceslas_des_Deserts" }, { "code": "", "text": "My pleasure @Wenceslas_des_Deserts", "username": "viraj_thakrar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
Cluster upgrade where config server and replica set are on the same machine
2023-09-25T12:20:54.439Z
Cluster upgrade where config server and replica set are on the same machine
369
null
[]
[ { "code": "{\n \"title\": \"AgendaItem\",\n \"bsonType\": \"object\",\n \"required\": [\"_id\", \"tenant\", \"date\"],\n \"properties\": {\n \"_id\": { \"bsonType\": \"objectId\" },\n \"tenant\": { \"bsonType\": \"objectId\" },\n \"date\": { \"bsonType\": \"string\" },\n \"attributes\": {\n \"bsonType\": \"object\",\n \"additionalProperties\": true\n }\n }\n}\n{\n \"_id\": \"xyz\",\n \"tenant\": \"t1\",\n \"date\": \"2020-01-01\",\n \"attributes\": {\n \"foo\": \"bar\"\n }\n}\n{\n \"_id\": \"xyz\",\n \"tenant\": \"t1\",\n \"date\": \"2020-01-01\",\n \"attributes\": {\n \"foo\": {\n \"bar\": \"baz\"\n }\n }\n}\n", "text": "I’m trying to implement Realm flexible sync, which works great when having a fixed schema. However, in my case, a part of the document is schema-less, and the exact structure is unpredictable and can differ per document instance.I tried to include an object property with “additionalProperties: true” to allow varying content (free structure), but this only works when putting primitive properties inside the object. When adding complex property types, like nested objects, the documents are being ignored (not read/synced) when using device sync.The schema looks like this:This document works (at least it is being read by the client):But this does not work (the document is not being read when starting sync):Is it expected behaviour that complex structures are not supported? Is there any other way to accomplish a flexible (partial) document structure? Thanks!", "username": "Joost_Farla" }, { "code": "attributes", "text": "Sync uses the schema to translate the MongoDB data to Realm objects, and vice versa. There’s not currently a way for it to have a “partial schema.”I’m not sure what SDK you’re using - have you considered making your attributes property use the Realm mixed data type? In the Swift SDK, this is AnyRealmValue, and it supports both primitives and objects. There are some limits - it doesn’t currently support EmbeddedObject or collection types, for example. But it offers some flexibility where the data may vary.", "username": "Dachary_Carey" }, { "code": "{\n \"_id\": \"xyz\",\n \"tenant\": \"t1\",\n \"date\": \"2020-01-01\",\n \"attributes\": {\n \"foo\": \"bar\"\n }\n}\n{\n \"title\": \"AgendaItem\",\n \"bsonType\": \"object\",\n \"required\": [\"_id\", \"tenant\", \"date\"],\n \"properties\": {\n \"_id\": { \"bsonType\": \"objectId\" },\n \"tenant\": { \"bsonType\": \"objectId\" },\n \"date\": { \"bsonType\": \"string\" },\n \"attributes\": { \"bsonType\": \"mixed\" }\n }\n}\n", "text": "Thanks for the quick answer!According to the docs, the “mixed” type indeed would support both primitives and objects (but not dictionaries). However, when setting the type for “attributes” to mixed, the following object is still not being synced:It is completely left out of the synced collection (received by the client). I am managing the schema server-side. This the schema being used for this example:What would be the right way to get this working? I am using the Node.js SDK.", "username": "Joost_Farla" } ]
Use device sync with schema-less document parts
2023-09-22T15:43:24.841Z
Use device sync with schema-less document parts
363
https://www.mongodb.com/…3_2_1024x218.png
[]
[ { "code": "", "text": "I have problem with Atlas registration in MongoDB university. I followed the steps and still the same error when I press check.Incorrect solution 4/4Please review your Atlas Authentication and retry, the ‘atlas auth register’ must completed successfully for you to continue. Please refer to the ‘Instructions’ section in right pane for more details.image1272×272 24.5 KB", "username": "Jakub_Hrebicek" }, { "code": "", "text": "I’m getting the same error.", "username": "Johan_Snitt" }, { "code": "", "text": "At the end I do not have the button “next” or “skip” as it was presented in video, but only button “check”", "username": "Jakub_Hrebicek" }, { "code": "", "text": "Do you have multiple organizations tied to your account? I think it’s trying to look at your first org even if you selected another in the previous step. My “atlas org page” link is not to the same org I selected.", "username": "Johan_Snitt" }, { "code": "", "text": "to be honest I am not sure. Where can I check it or change it?", "username": "Jakub_Hrebicek" }, { "code": "atlas auth registeratlas auth registerLog in Nowatlas auth loginatlas auth register", "text": "Hi,Thanks for reaching out.This error appears when no user is authenticated to the Atlas cli.When you run atlas auth register, you need to return to the lab terminal and follow the prompts to complete the authentication process. Then you can click check.If you already have an existing Atlas account and you ran atlas auth register, click Log in Now in the registration page instead. Alternatively, run atlas auth login instead of atlas auth register.If, despite this, the lab still fails, send an email to [email protected] for support from the MongoDB University team. Attach screenshots showing the terminal, i.e the commands and prompts as well as the error messages.thanks!", "username": "Davenson_Lombard" }, { "code": "", "text": "I followed all steps many times with the same result. I have already Atlas account, but it does not work. I contacted support.", "username": "Jakub_Hrebicek" } ]
Problem with Atlas registration in mongo university lab
2023-09-29T08:22:56.573Z
Problem with Atlas registration in mongo university lab
313
null
[ "queries", "compass" ]
[ { "code": " {\n \"_id\": ObjectId(\"636a6aa584d5f92f14f0c548\"),\n \"products\": [\n {\n \"quantity1\": '10 grams',\n \"quantity2\": '24 grams',\n \"user_id\": \"602cf72a3fcad3cc605b8d59\"\n },\n {\n \"quantity1\": '10 grams',\n \"quantity2\": null,\n \"user_id\": \"602cf72a3fcad3cc605b8d50\"\n }\n ]\n },\n // 2\n {\"_id\": ObjectId(\"602e443bacdd4184511d6e29\"),\n \"products\": [\n {\n \"quantity1\": 'null',\n \"quantity2\": 'null',\n \"user_id\": \"602cf72a3fcad3cc605b8d59\"\n },\n {\n \"quantity1\": 'null',\n \"quantity2\": 'null',\n \"user_id\": \"602cf72a3fcad3cc605b8d59\"\n },\n {\n \"quantity1\": 'null',\n \"quantity2\": 'null',\n \"user_id\": \"602cf72a3fcad3cc605b8d59\"\n }\n ]\n},\n// 3\n{\"_id\": ObjectId(\"60332242acdd4184511ed664\"),\n\"products\": [\n {\n \"quantity1\": 'null',\n \"quantity2\": 'null',\n \"user_id\": \"602cf72a3fcad3cc605b8d59\"\n },\n {\n \"quantity1\": null,\n \"quantity2\": 'null',\n \"user_id\": \"602cf72a3fcad3cc605b8d59\"\n },\n {\n \"user_id\": \"602cf72a3fcad3cc605b8d59\"\n }\n]\n}\n]```", "text": "Hello All,I need some help in writing a query where in one of our collections we have a field which is an array(products) and it holds an array of objects . In each of this object we have a field called quantity1 and quanity2 where I need to fetch all the documents which has all the objects within that array has quantity1 and quantity2 as null.Basically I need some thing exact opposite of what $nin can achieve.Note: $in doesnt work because it will return even if there is one or more objects that doesn not match filter. The units within these quanitites can be anything as this is something coming from scraped data.Example: For the below data the query should return only the 2nd document as its the only document which as all objects with quantity1 and quantity2 as null.", "username": "priyatham_ik" }, { "code": "", "text": "Does this do what you wanted?", "username": "John_Sewell" }, { "code": "", "text": "db.XYZ.find({$and:[{“products.0.quantity1”:{$all:[null]}},{“products.0.quantity2”:{$all:[null]}}]})I tried this way but its not working for my case . As per the example I shared it should be giving only the 2nd doc however this is giving all 3 @John_Sewell", "username": "priyatham_ik" }, { "code": "db.collection.find({\n products: {\n $all: [\n {\n \"$elemMatch\": {\n quantity1: null\n }\n },\n {\n \"$elemMatch\": {\n quantity2: null\n }\n }\n ]\n }\n})\n", "text": "Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "John_Sewell" }, { "code": "", "text": "This is still not working where if there is any one object which is satisfying its returning however I need only the ones which has all the objects with those fields as null @John_Sewell , Please check the attached monoglink to view the results.Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "priyatham_ik" }, { "code": "", "text": "You could probably switch it up and have a not over an or of them being populated. Im away from a computer today so cant try it at the moment.", "username": "John_Sewell" } ]
How to query an array of objects where I need those documents which have all the objects with a matching filter?
2023-09-29T19:16:18.761Z
How to query an array of objects where I need those documents which have all the objects with a matching filter?
297
null
[ "queries" ]
[ { "code": "UserSchema = {\n id: string;\n name: string;\n}\n\nTeamSchema = {\n ...\n members: UserSchema[]\n}\nteams.filtered(\"members.name CONTAINS $0\", \"somename\")members.id != '1'", "text": "Sorry if the title sounds confusing, it’s hard to describe my question in one phrase.I’m using RQL to filter my query, and with that I’m using dot notation to filter the query based on an list item that match certain condition. But I need to make sure that the item that passed the condition also passes another condition, and that’s where I’m not sure what to do.Example:Let’s suppose I have two schemas:Now I want to query Teams that has a member name that contains certain string.For that I can simply use dot notation and filter with: teams.filtered(\"members.name CONTAINS $0\", \"somename\").Until here it’s fine, but now I want to add one more condition, the member which his name contains the string, can’t have the id ‘1’.If I add members.id != '1' I don’t think it would work, since it would check in all members, not the one that contains the string in their name.Anyone knows if there’s anything to be done in this case?", "username": "Rossicler_Junior" }, { "code": "Teamsmemberssomename1somename1let results = realm.objects(PersonClass.self).where { $0.members.name.contains(\"somename\") && !$0.members.name.contains(\"1\") }!$0.members...!=", "text": "The question is a tad vague and the code provided doesn’t quite match the question.Now I want to query TeamsIs Teams a Realm List?a member nameI think that references members so it’s a UserSchema name?contains the string, can’t have the id ‘1’.And I think you’re asking about a substring query? Well, actually two substring queries? Wouldn’t it just be a matter of checking if the string contains somename and does not contain 1? If so, I think it’s just a syntax thing in your question.If so here’s a Swift solution which works for a quick test project I made - it returns only the member where the name contains somename and also does not have a 1. I know you’re working with RQL but the concept should apply to any SDKlet results = realm.objects(PersonClass.self).where { $0.members.name.contains(\"somename\") && !$0.members.name.contains(\"1\") }So use !$0.members... instead of !=", "username": "Jay" }, { "code": "Teams: [\n {\n id: 1,\n members: [\n { id: 1, name: \"John Doe\" },\n { id: 2, name: \"Todd\" }\n ]\n },\n {\n id: 2,\n members: [\n { id: 1, name: \"John Doe\" },\n { id: 3, name: \"John Senna\" }\n ]\n }\n]\nteams.filtered(\"members.name CONTAINS $0 AND members.id != 1\", \"John\")21members.id != 1", "text": "Thanks for your reply. I agree that the question and code looks confusing, sorry about that, I found it confusing to explain this specific question. Let me try to explain again with examples with data.Using the same schemas I provided, let’s say in my database I have the following data:Now let’s say I run the query teams.filtered(\"members.name CONTAINS $0 AND members.id != 1\", \"John\").What I expect as a result is to only get the team with id 2, because although the team with id 1 has a user with that name, the user that contains the name has the id that I want to exclude from my condition.And as far as I know, the actual result of this query would be an empty array. 
Since members.id != 1 would remove any team that has a member with id 1.", "username": "Rossicler_Junior" }, { "code": "members.name CONTAINS Johnmembers.id != 1", "text": "I see, so you want to query against two different properties.I think there may be a bit of a logic issue - remember that a query runs across all objects in the list.So taking the first partmembers.name CONTAINS JohnThat resolves to both team id 1 and team id 2 because they both have members in their members lists that contain a John.Then add in the second requirement:members.id != 1that returns the teams that do not have any members in their members list that contain 1 as their ID.Well, both teams have members that have an id of 1, so NO teams would be returned.", "username": "Jay" }, { "code": "", "text": "Yep, that’s exactly why that query won’t work for the behaviour I want. Like you said, each condition from my query will apply to all members, but I want to run “two conditions” for the same member, not any.Is there any solution for this? Or it’s not something that Realm supports?I guess if there was a way to run a subquery it would solve the problem, but I don’t think that’s possible.", "username": "Rossicler_Junior" }, { "code": "1{ id: 1, name: \"John Doe\" }team 1: { id: 2, name: \"Todd\" }team2: { id: 3, name: \"John Senna\" }", "text": "Yeah - totally get it.As I mentioned though - it’s a logic issue and not a matter of it’s supported by Realm. Based on the conditions, data and query, no database would have a match.Here’s what I meanalthough the team with id 1 has a user with that name, the user that contains the name has the id that I want to exclude from my condition.this parthas the id that I want to excludeIf you want to exclude any team that contains the user with that ID, you’re query does that because BOTH teams include a user with that ID, therefore, they are both excluded!The issue right now is we don’t know what the parameters are; what’s the criteria of returning team 2? Why are substring queries being used? e.g. This is an object{ id: 1, name: \"John Doe\" }Do you want a query that returns all teams that do not include that object?Let me ask this a different way; what are the requirements where team 2 would be returned and NOT team 1?The only different between them is that team 1 has a Toddteam 1: { id: 2, name: \"Todd\" }and team 2 has a John Sennateam2: { id: 3, name: \"John Senna\" }Is it because team 2 has an object count of 2 that contains the name John?", "username": "Jay" }, { "code": "{ id: 3, name: \"John Senna\" }", "text": "Do you want a query that returns all teams that do not include that object?No, that’s the behaviour from the query I sent, and that’s NOT what I want.Let me ask this a different way; what are the requirements where team 2 would be returned and NOT team 1?That’s because team 2 has another member that matches both conditions, name contains “John” and id != 1.Is it because team 2 has an object count of 2 that contains the name John?No, it’s not a matter of having a count of 2, it’s a matter of the member that got matched needs to satisfy both conditions.By saying that, the query (which I don’t know how to write to achieve this) would need match only member { id: 3, name: \"John Senna\" }, since this members matches both conditions (name contains “John” and id != 1).I hope that’s clearer now. 
So my question is which query would satisfy these conditions?", "username": "Rossicler_Junior" }, { "code": "(query).count > 0let results = realm.objects(Team.self).where { ($0.memberList.name.contains(\"John\") && !($0.memberList.id == 1) ).count > 0 }\n\nfor team in results {\n print(team.id)\n}\n2", "text": "I gotcha. My brain works in Swift so here’s the query (subquery actually) that will return team with id 2.We’re leveraging a subquery to iterate through the members collection property for company name and id on each object. Subqueries are defined by parens around the query and count (query).count > 0output is2", "username": "Jay" }, { "code": "teams.filtered(\"SUBQUERY(members, $member, $member.name CONTAINS[c] $0 AND $member.id != 1).@count > 0\", \"John\")", "text": "Oh ok, I see, that helps a lot. Now I’ll check the equivalent of that in RQL, but that’s a huge help, thanks very much for your time.EDIT:\nIn case anyone is interested in the solution for RQL, check this link about subqueries.\nThe query would look like:\nteams.filtered(\"SUBQUERY(members, $member, $member.name CONTAINS[c] $0 AND $member.id != 1).@count > 0\", \"John\")", "username": "Rossicler_Junior" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
RQL - How to use list filter and match two conditions in the same item inside the list
2023-09-28T21:02:11.060Z
RQL - How to use list filter and match two conditions in the same item inside the list
269
null
[ "flexible-sync" ]
[ { "code": "", "text": "I wanted to have two events - once when local database file is ready, and one is when it’s synced. I thought it would be easy just to raise the first before await realm.Subscriptions.WaitForSynchronizationAsync(); and one right after, but the thing is, both of them resolve even if I don’t have internet connection! Why would WaitForSynchronizationAsync resolve if it didn’t wait and why Subscriptions.State is Complete when without the connection there is no way to know whether it’s synced or no, is this by design?", "username": "Movsar_Bekaev" }, { "code": "", "text": "Hi. It’d be helpful if you:", "username": "Andrea_Catalini" }, { "code": " _app = App.Create(new AppConfiguration(myRealmAppId)\n {\n BaseFilePath = FileService.AppDataDirectory,\n });\n\n //_user = await _app.LogInAsync(Credentials.Anonymous());\n try\n {\n _user = await _app.LogInAsync(Credentials.EmailPassword(\"email\", \"pass\"));\n }\n catch (Exception ex)\n {\n throw;\n }\n\n _config = new FlexibleSyncConfiguration(_user, Path.Combine(FileService.AppDataDirectory, FileService.DatabaseName))\n {\n PopulateInitialSubscriptions = (realm) =>\n {\n ....\n realm.Subscriptions.Add(realm.All<Entities.Word>());\n }\n };\n DatabaseInitialized?.Invoke();\n await GetRealm().Subscriptions.WaitForSynchronizationAsync();\n DatabaseSynced?.Invoke();\n", "text": "Hello @Andrea_CataliniIt’s realm-dotnet:Here’s the code:By “local database ready” I meant opening and getting ready the .realm file.\nThe code above by design should raise the initialized event when the file is opened and ready and the second event when the file is synchronized with server, but the WaitForSynchronizationAsync resolves even if there is no network, which by my logic it shouldn’t, it should throw an error as it specifically says in its name to wait for synchronization, otherwise it imho it should be renamed to something like TryWaitForSync…", "username": "Movsar_Bekaev" }, { "code": "WaitForSynchronizationAsyncWaitForSynchronizationAsyncPopulateInitialSubscriptions_app = App.Create(new AppConfiguration(myRealmAppId)\n{\n BaseFilePath = FileService.AppDataDirectory,\n});\n\n_user = await _app.LogInAsync(Credentials.EmailPassword(\"email\", \"pass\"));\n\n_config = new FlexibleSyncConfiguration(_user, Path.Combine(FileService.AppDataDirectory, FileService.DatabaseName))\n{\n PopulateInitialSubscriptions = (realm) =>\n {\n ....\n realm.Subscriptions.Add(realm.All<Entities.Word>());\n }\n};\n\n// The process will complete when all the user's items have been downloaded.\nvar realm = await Realm.GetInstanceAsync(config);\nRealm.GetInstanceAsyncUser", "text": "Ok, now I can help you.About WaitForSynchronizationAsync returning even without internet connection sounds like a bug.\nIf you’re sure that you’re effectively testing without an internet connection, could you open an issue on our github repo with attached a repro project?\nThank you.As a side note, you don’t need WaitForSynchronizationAsync unless you’ve updated an active subscription.\nWhen you use PopulateInitialSubscriptions you’re actually bootstrapping a realm with an initial subscription. 
This means that when you open a synced realm the initial subscription is going to be honored by downloading the matching elements.\nSo you’d do something likeWhen Realm.GetInstanceAsync returns, the realm is synchronized and ready to use.If you were wondering what happens when you open a synced realm while offline I’ll just quote what’s in our docs as it’s concise and well written:To open a synced realm, you must have an authenticated User object. To obtain an initial User instance, you need to authenticate against the Atlas App Services backend, which requires the device to be online the first time a user logs in. Once initial authentication has occurred, you can retrieve an existing user while offline.Andrea", "username": "Andrea_Catalini" }, { "code": "", "text": "I wanted to have two events - once when local database file is ready, and one is when it’s synced. I thought it would be easy just to raise the first before await realm.Subscriptions.WaitForSynchronizationAsync(); and one right after, but the thing is, both of them resolve even if I don’t have internet connection! Why would WaitForSynchronizationAsync resolve if it didn’t wait and why Subscriptions.State is Complete when without the connection there is no way to know whether it’s synced or no, is this by design?It looks like the problem is caused by WaitForSynchronizationAsync() and Subscriptions.The presence of a state in your code might be related to the library’s design. These functions may resolve early, even without an internet connection, which might be a restriction or design choice of the library. Further examination, as well as maybe contacting the library’s maintainers, may reveal insights into this behavior.", "username": "jessy_khan" }, { "code": "", "text": "Thank you,\nfor sharing such good information. ", "username": "jessy_khan" } ]
Subscriptions.State resolves to Complete even when the network is down
2023-01-15T08:51:43.196Z
Subscriptions.State resolves to Complete even when the network is down
1,363
null
[ "python" ]
[ { "code": "", "text": "Hi, I am using Apache2 + Flask + PyMongo (4.3.3), there are memory leak observed. I tried Tracemalloc for memory usage snapshot and found top 2 memory consumer here which keep increasing:lib/python3.10/site-packages/pymongo/message.py:692: size=67.9 KiB (+67.9 KiB), count=1130 (+1130), average=62 Blib/python3.10/site-packages/bson/init .py:1122: size=42.6 KiB (+42.6 KiB), count=734 (+734), average=59 BIs it a known issue? Or possibly fixed in 4.5.0?Thank you very much!", "username": "Mary_Zhang1" }, { "code": "", "text": "Thanks for the report @Mary_Zhang1! I opened https://jira.mongodb.org/browse/PYTHON-3982 to investigate.", "username": "Steve_Silvester" }, { "code": "", "text": "Hi Steve, Thank you very much for help.", "username": "Mary_Zhang1" } ]
Possible memory leak in PyMongo 4.3.3?
2023-09-28T22:17:43.553Z
Possible memory leak in PyMongo 4.3.3?
307
null
[ "compass", "atlas-cluster" ]
[ { "code": "", "text": "Hi, I’m new here. I’m a student and I have MongoDB Atlas and MongoDB for VS Code installed on a mac. I’m having difficulty installing Compass and Shell. My instructor doesn’t have instructions for mac users. I’ve looked at several tutorials and gotten more confused.I’m using the username and password that I use for Atlas and to connect in VS Code. In VS Code I am \"currently connected to mongodb+srv://username/[email protected]. I can see “MongoDB Playground” but have been unable to use it.When I try and install Compass I get error message “bad auth:authentication failed”.Any help is appreciated. Thank you!This is so confusing as I can see the databases/collection in Atlas.", "username": "Jylian_Summers" }, { "code": "", "text": "Hi @Jylian_Summers,When I try and install Compass I get error message “bad auth:authentication failed”.This error generally indicates incorrect credentials being entered. You can try troubleshooting it by creating another test database user with no special characters in the password (only for troubleshooting purposes) to see if you’re able to log in via compass with those new credentials. More information on the Configure Database Users documentation which includes adding and modifying database users.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you. My password doesn’t have any special characters. I was going to uninstall but it seems I wouldn’t be able to create a new account for a few days and I’m currently taking a course that requires MongoDB.", "username": "Jylian_Summers" }, { "code": "", "text": "I was going to uninstall but it seems I wouldn’t be able to create a new account for a few daysI’m a bit confused regarding the above. Are you referring to a MongoDB Atlas user account? Or a Database User?I’ll add that the bad auth error being returned from compass relates to the database user’s credentials.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "I’m referring to MongoDB Atlas. Ugh, this is so confusing.", "username": "Jylian_Summers" }, { "code": "Database Access", "text": "Gotcha You won’t need to create a new Atlas account or change the Atlas account’s password. What I was suggesting to test is:However, I’m not sure of the exact details of the course you’re following. It could be a case where the course has a pre-created cluster with database user credentials they provide to connect to said cluster. If it’s a cluster provided by the course then please let me know with any further information regarding the course.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Yes, I’ve done that. It is a pre-created cluster, that I’ve been trying to use my credentials to connect to Compass.Thank you for your help and patience Jason.", "username": "Jylian_Summers" }, { "code": "", "text": "It is a pre-created cluster, that I’ve been trying to use my credentials to connect to Compass.Is this cluster under the course creator’s Atlas account or your own Atlas account?If it’s your own cluster and you’re using the same credentials, i would then double check the connection string is correct. From the image in step 3. in my above response, you’ll also see a “Connect” button to the cluster if it’s your own cluster. 
You can then follow the connect modal and grab the connection string to be used in MongoDB Compass.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "It’s from our instructor’s created course cluster “sandbox”.", "username": "Jylian_Summers" }, { "code": "", "text": "Do you have access to the Database Deployment for this “sandbox” cluster? I.e. can you see it when logged in the Atlas Web UI? If you can provide any screenshots, that would be helpful too (redact any personal or sensitive information before posting here).", "username": "Jason_Tran" }, { "code": "username/password\nusername:password\n", "text": "Thewith a slash inmongodb+srv://username/[email protected] wrong.It has to be", "username": "steevej" }, { "code": "", "text": "I tried that. And it doesn’t work. I am logged in to VS code with mongodb+srv://username/[email protected]", "username": "Jylian_Summers" }, { "code": "", "text": "The slash is wrong inusername/[email protected]", "username": "steevej" }, { "code": "", "text": "When I’m logged in, I see my name in the upper right corner and when I click on Database this screen is returned. When I click “Browse Collections” I can see the ones for my class.\nScreenshot 2023-09-25 at 3.47.36 PM1677×577 57.3 KB\n", "username": "Jylian_Summers" }, { "code": "", "text": "mongodb+srv://username:@sandbox.vd7qxrw.mongodb.net/ connected", "username": "Jylian_Summers" }, { "code": "", "text": "Great catch Steve. Missed my eye ", "username": "Jason_Tran" }, { "code": "", "text": "username/[email protected] Code shows:\nusername/[email protected]/ connected\nScreenshot 2023-09-25 at 4.11.56 PM796×781 78.1 KB\n", "username": "Jylian_Summers" }, { "code": "Username contains unescaped characters username/password\n", "text": "mongodb+srv://username/[email protected] is generating errorso I have a hard time believing that VS is showing you are connected. But if you are connected then the issue is resolved.", "username": "steevej" }, { "code": "", "text": "I’m trying to install Compass. Don’t I need to do that in addition to VS Code?", "username": "Jylian_Summers" }, { "code": "", "text": "One is not required for the other. I assume the course is trying to get you to do something in VS code and perhaps another in Compass but it’s hard to say without knowing the course / it’s content though this is probably moving away from the original issue. I believe once you’re able to connect to both then you can proceed with it’s content so we’ll focus on that.The original issue from what was raised in the post itself was the error regarding bad auth in compass – Can you confirm you were able to connect via compass or you’re still getting bad auth error from compass?", "username": "Jason_Tran" } ]
MongoDB Atlas - Compass - Shell
2023-09-25T22:02:41.935Z
MongoDB Atlas - Compass - Shell
458
null
[ "aggregation", "views" ]
[ { "code": "", "text": "I want to give a third party access to our data and plan to provide them with a read-only view on the data that masks or removes all sensitive fields. What makes this tricky is that I have two collections that are connected with each other via an “id” field, which also contains sensitive data.I can’t remove the “id” field because it is needed to combine the two collections with each other.My current idea is to use something like md5_hash to deterministically mask the “id” field. As I understand it, this would need to be a custom function in the aggregation pipeline which would be quite slow (if possible at all).Is there a better approach to deterministic masking in a view?", "username": "Erik_Weitnauer" }, { "code": "", "text": "Hi @Erik_Weitnauer,\nI’ll preface this by saying that I’ve never used it, but this looks like it might be useful to you from a quick reading:Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "How about redact?Actually, you want a 1:1 mapping but to mask the data with something you can tie back later?", "username": "John_Sewell" }, { "code": "", "text": "Hi Fabio, this looks very interesting! I don’t think it quite applies to my situation though, since I’d only want to use the encryption inside the view, which doesn’t seem possible.", "username": "Erik_Weitnauer" }, { "code": "", "text": "Yes, that’s it exactly! I want a 1:1 mapping that hides the personal information, but still allows me to tie things together.For example, I might use student IDs in several collections and I want to mask them, but still need them to be consistent (i.e., the same ID should lead to the same masked value, and different IDs should lead to different masked values). Applying a hashing function would work for that.My question is if there is an efficient way to do that as part of a view that I define.", "username": "Erik_Weitnauer" }, { "code": "", "text": "Why do you need to mask the StudentID? Is their actual Student ID? Could you use an alternative value to link the values, generate a new ID or something then it does not matter if it’s visible.We did try and do something like this recently, we wanted to mask something like locations but have them remain a 1:1 mapping. In the end we just pulled that data from the output completely as opposed to writing a mapping routine, so not much help for you there I’m afraid!I know we have other systems in-house that do this on the IBM systems when refreshing development environments so only key people can actually see the information, but that’s done via logic in stored procedures or cobol.The MD5 has would leave you open to the possibility of a hash collision I believe anyway (given that it’s a small chance!), could you add a new field to the system that does not contain private information?Sorry not more help, there is a similar thread here:", "username": "John_Sewell" } ]
How to deterministically mask a field with sensitive data in a view
2023-09-27T13:24:42.982Z
How to deterministically mask a field with sensitive data in a view
301
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "{\n _id: ObjectId(65141e3d8fc74027df89e79d)\n version: \"14c613f7-84f9-4df6-a9c2-8c8923b40c31\"\n name: \"decoder-facd32602f9faa54\",\n map:[\n {\n prefix: 101,\n name:\"Digital Input\",\n scale: 1,\n size: 1,\n rage: [0, 255],\n value_type: \"uint8\"\n },\n {\n prefix: 102,\n name:\"Digital Output\",\n scale: 1,\n size: 1,\n rage: [0, 255],\n value_type: \"uint8\"\n }\n ]\n}\nname: decoder-rename-update\nmap:[\n {\n prefix: 101,\n name:\"Digital Input UPDATE\",\n scale: 1,\n size: 1,\n rage: [0, 255],\n value_type: \"uint8\"\n },\n {\n prefix: 103,\n name:\"Analog Output\",\n scale: 1,\n size: 1,\n rage: [0, 255],\n value_type: \"uint8\"\n }\n]\n{\n _id: ObjectId(65141e3d8fc74027df89e79d)\n version: \"14c613f7-84f9-4df6-a9c2-8c8923b40c31\"\n name: \"decoder-rename-update\",\n map:[\n {\n prefix: 101,\n name:\"Digital Input\",\n scale: 1,\n size: 1,\n rage: [0, 255],\n value_type: \"uint8\"\n },\n {\n prefix: 102,\n name:\"Digital Output UPDATE\",\n scale: 1,\n size: 1,\n rage: [0, 255],\n value_type: \"uint8\"\n },\n {\n prefix: 103,\n name:\"Analog Output\",\n scale: 1,\n size: 1,\n rage: [0, 255],\n value_type: \"uint8\"\n }\n ]\n}\nconst decoder = await this.decoderModel.aggregate( [\n {\n $match: {\n _id\n }\n },\n {\n $set: { name }\n },\n {\n $project: {\n map: {\n $concatArrays: [\n {\n $map: {\n input: '$map',\n as: 'map',\n in: {\n cond: [\n {\n $in: [\n \"$$map.prefix\",\n map.map( d => d.prefix )\n ]\n },\n {\n $mergeObjects: [\n \"$$map\",\n {\n $arrayElemAt: [\n {\n $filter: {\n input: map,\n cond: {\n $eq: [\n `$$this.prefix`,\n `$$map.prefix`\n ]\n }\n }\n },\n 0\n ]\n }\n ]\n },\n `$$map`\n ]\n }\n }\n },\n {\n $filter: {\n input: map,\n cond: {\n $not: {\n $in: [\n `$$this.prefix`,\n `$$map.prefix`\n ]\n }\n }\n }\n }\n ]\n }\n }\n }\n ] );\nMongoServerError: Invalid $project :: caused by :: FieldPath field names may not start with '$'. Consider using $getField or $setField.\n at Connection.onMessage (E:\\Gitlab\\babylon-data-registery\\node_modules\\mongodb\\src\\cmap\\connection.ts:413:18)\n at MessageStream.<anonymous> (E:\\Gitlab\\babylon-data-registery\\node_modules\\mongodb\\src\\cmap\\connection.ts:243:56)\n at MessageStream.emit (node:events:513:28)\n at processIncomingData (E:\\Gitlab\\babylon-data-registery\\node_modules\\mongodb\\src\\cmap\\message_stream.ts:193:12)\n at MessageStream._write (E:\\Gitlab\\babylon-data-registery\\node_modules\\mongodb\\src\\cmap\\message_stream.ts:74:5)\n at writeOrBuffer (node:internal/streams/writable:392:12)\n at _write (node:internal/streams/writable:333:10)\n at MessageStream.Writable.write (node:internal/streams/writable:337:10)\n at TLSSocket.ondata (node:internal/streams/readable:766:22)\n at TLSSocket.emit (node:events:513:28)\n at addChunk (node:internal/streams/readable:324:12)\n at readableAddChunk (node:internal/streams/readable:297:9)\n at TLSSocket.Readable.push (node:internal/streams/readable:234:10)\n at TLSWrap.onStreamRead (node:internal/stream_base_commons:190:23)\nWaiting for the debugger to disconnect...\n", "text": "Hi! I am tring to create an aggregation query in order to update an array inside a document like so:Document:Update:Expected result:I have tried this approach:And I’ve received this error:What am I missing?", "username": "Andrei_Nechita" }, { "code": "const { _id, map, name } = update;\n const decoder = await this.decoderModel.findById( _id );\n\n if ( !decoder ) {\n throw new NotFoundException( \"Decoder not found\" );\n }\n\n decoder.name = name ? 
name : decoder.name;\n\n map && map.forEach( ( item, index ) => {\n const existingIndex = decoder.map.findIndex( ( d ) => d.prefix === item.prefix );\n\n console.log( existingIndex );\n if ( existingIndex !== -1 ) {\n decoder.map[ index ] = { ...decoder.map[ existingIndex ], ...item };\n this.logger.log( decoder.toObject().map[ index ] );\n } else {\n decoder.map.push( item );\n }\n } );\n\n this.logger.log( decoder.toObject() );\n\n const updateDecoder = await this.decoderModel.findByIdAndUpdate( _id, { ...decoder }, { new: true } );\n\n this.logger.log( updateDecoder.toObject() );\n return { message: \"Decoder update success\", decoder: updateDecoder };\n", "text": "I’ve achieved the desired result like so:Yet this is resoruce heavy depending on how many elements are sent in the map array. I would like to let the mongodb instance handle the mutations if possible. Does anyone have any ideea with the query?", "username": "Andrei_Nechita" } ]
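For the server-side variant asked about in the last message, the same merge can be expressed as an updateOne with an aggregation pipeline, so the document never round-trips through the application. A hedged sketch in Node/mongosh style, assuming updates holds the incoming array of { prefix, ... } patches, newName the new name and decoderId the _id; one likely culprit in the original aggregate() attempt is the bare cond key where a $cond operator was intended:

const prefixes = updates.map(u => u.prefix);

db.decoders.updateOne(
  { _id: decoderId },
  [
    { $set: { name: newName } },
    {
      $set: {
        map: {
          $concatArrays: [
            // 1) merge incoming patches into existing elements that share a prefix
            {
              $map: {
                input: "$map",
                as: "m",
                in: {
                  $cond: [
                    { $in: ["$$m.prefix", prefixes] },
                    {
                      $mergeObjects: [
                        "$$m",
                        {
                          $arrayElemAt: [
                            {
                              $filter: {
                                input: updates,
                                as: "u",
                                cond: { $eq: ["$$u.prefix", "$$m.prefix"] }
                              }
                            },
                            0
                          ]
                        }
                      ]
                    },
                    "$$m"
                  ]
                }
              }
            },
            // 2) append patches whose prefix does not exist in the document yet
            {
              $filter: {
                input: updates,
                as: "u",
                cond: { $not: [{ $in: ["$$u.prefix", "$map.prefix"] }] }
              }
            }
          ]
        }
      }
    }
  ]
);

Running it as findOneAndUpdate with returnDocument: 'after' (or { new: true } in Mongoose) would also hand back the merged document in the same round trip, matching what the application-side version above does.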
Mongo aggregation merge document nested array on update and return result
2023-09-29T11:31:59.225Z
Mongo aggregation merge document nested array on update and return result
247
null
[ "database-tools", "backup" ]
[ { "code": "\t\t\t\t{ \t\t\t\t\"v\" : 2, \t\t\t\t\"key\" : { \t\t\t\t\t\"lo_oat\" : 1 \t\t\t\t}, \t\t\t\t\"name\" : \"lo_oat\", \t\t\t\t\"background\" : \"true\", \t\t\t\t\"sparse\" : false, \t\t\t\t\"expireAfterSeconds\" : \"3600\", \t\t\t\t\"ns\" : \"something.inside.namespace\" \t\t\t}, 2023-09-29T11:08:02.231+0000\tFailed: something.inside.namespace: error creating indexes for something.inside.namespace: createIndex error: (CannotCreateIndex) TTL index 'expireAfterSeconds' option must be numeric, but received a type of 'string'. Index spec: { key: { lo_oat: 1 }, name: \"lo_oat_1\", background: \"true\", sparse: false, expireAfterSeconds: \"3600\", ns: \"something.inside.namespace\" } ", "text": "I wonder how it was even created on an existing collection but now I can’t seem to restore it.This is how i see it in original database\n\t\t\t\t{ \t\t\t\t\"v\" : 2, \t\t\t\t\"key\" : { \t\t\t\t\t\"lo_oat\" : 1 \t\t\t\t}, \t\t\t\t\"name\" : \"lo_oat\", \t\t\t\t\"background\" : \"true\", \t\t\t\t\"sparse\" : false, \t\t\t\t\"expireAfterSeconds\" : \"3600\", \t\t\t\t\"ns\" : \"something.inside.namespace\" \t\t\t}, \nError message from mongorestore.log says:\n2023-09-29T11:08:02.231+0000\tFailed: something.inside.namespace: error creating indexes for something.inside.namespace: createIndex error: (CannotCreateIndex) TTL index 'expireAfterSeconds' option must be numeric, but received a type of 'string'. Index spec: { key: { lo_oat: 1 }, name: \"lo_oat_1\", background: \"true\", sparse: false, expireAfterSeconds: \"3600\", ns: \"something.inside.namespace\" } \nCreated and restored on same version: v4.2.23", "username": "Tin_Cvitkovic" }, { "code": "", "text": "Has this database got a longer lineage? As recently as 4.0 an invalid TTL index could be created.The server log is probably full of TTLMonitor Errors.If it is not used for anything else drop it and create it anew. You could update the metadata in the dump too, when it is restored it will have the correct TTL.Note that if the TTL is invalid and there are many documents that are expired when the TTL job runs after correcting it you might see a lot of deletes/load on the server.", "username": "chris" } ]
Mongorestore fails on TTL Index Creation
2023-09-29T11:25:05.747Z
Mongorestore fails on TTL Index Creation
260
https://www.mongodb.com/…7da91f88644b.png
[ "ops-manager" ]
[ { "code": "", "text": "Ops Backup Daemon not recognizing Windows MongoDB Enterprise builds.can some one help to fix this issue. error1016×493 34.9 KB", "username": "Chalapathi_Raju_Nand" }, { "code": "", "text": "Any one can help on this ?", "username": "Chalapathi_Raju_Nand" }, { "code": "", "text": "Hi @Chalapathi_Raju_NandIf you are running MongoDB Enterprise, you should open a case with MongoDB Support, this is part of what you are paying for.", "username": "chris" }, { "code": "", "text": "Not i don’t have any entitlement with MongoDB. I am facing this error when I am on self-learning.", "username": "Chalapathi_Raju_Nand" }, { "code": "", "text": "please suggest me same issue I am getting …please tell anyone how to resolve it", "username": "Md_Azaz_Ahamad1" }, { "code": "", "text": "Hi @Md_Azaz_Ahamad1,\nAs mentioned from the error, you are not using an Enterprise version of MongoDB with the version higher than 4.2.Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "ok but I want know that I have already install ops manager community version and also enable replication with 3 instance …Now when I tried to enable continous backup but this is the error showingmongodb versions greater than 4.2.0+ must be enterprise build to enable backupNow please anyone tell me , I have to again install ops manager enterprise version,and also I have to remove ops manager community version which I have installed right now and then have to enable replication again …and after that I have to setup continous backup", "username": "Md_Azaz_Ahamad1" }, { "code": "", "text": "ops manager community versionNo such thing as Community Version Ops Manager. Ops Manager is included with an Enterprise Advanced subscription.mongodb versions greater than 4.2.0+ must be enterprise build to enable backupThe key word here is Enterprise. Ops Manager only support continuous backup with Enterprise Edition. So the mongo deployment will need to be upgraded to Enterprise.", "username": "chris" }, { "code": "", "text": "but how to do it please suggest some steps", "username": "Md_Azaz_Ahamad1" }, { "code": "-ent6.0.106.0.10-ent", "text": "It is the same as this version upgrade tutorial, but you select the same version with -entFor example, if you are running 6.0.10 you would select 6.0.10-ent and let Ops Manager do its thing.You should really open a support case as MongoDB support can guide you well with this.", "username": "chris" }, { "code": "", "text": "when I am doing so it is showing the following error<rs-az_5> [06:52:09.899] Plan execution failed on step Download as part of move Download : <rs-az_5> [06:52:09.899] Postcondition failed for step Download because [‘desiredState.FullVersion’ is not a member of ‘currentState.VersionsOnDisk’ (‘desiredState.FullVersion’={“trueName”:“6.0.10-ent”,“gitVersion”:“8e4b5670df9b9fe814e57cb5f3f8ee9407237b5a”,“modules”:[“enterprise”],“major”:6,“minor”:0,“patch”:10}, ‘currentState.VersionsOnDisk’=[{“trueName”:“6.0.10”,“gitVersion”:“8e4b5670df9b9fe814e57cb5f3f8ee9407237b5a”,“modules”:,“major”:6,“minor”:0,“patch”:10}])]. Outcome=3", "username": "Md_Azaz_Ahamad1" }, { "code": "", "text": "how can I solve??? please suggest something", "username": "Md_Azaz_Ahamad1" }, { "code": "", "text": "I’m going to restate my previous comment.You should really open a support case as MongoDB support can guide you well with this.It appears the agent is fail to download the target version.", "username": "chris" } ]
Ops manager Backup Daemon not recognizing Windows MongoDB Enterprise builds
2020-12-19T09:31:39.198Z
Ops manager Backup Daemon not recognizing Windows MongoDB Enterprise builds
2,777
null
[ "java" ]
[ { "code": "mongodb-driver-sync[org.mongodb.driver.client] MongoClient with metadata {\"driver\": {\"name\": \"mongo-java-driver|sync\", \"version\": \"4.10.2\"}, \"os\": {\"type\": \"Windows\", \"name\": \"Windows 10\", \"architecture\": \"amd64\", \"version\": \"10.0\"}, \"platform\": \"Java/Oracle Corporation/17.0.4.1+1-LTS-2\", \"application\": {\"name\": \"TwitchIntergration\"}} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@19321df, com.mongodb.Jep395RecordCodecProvider@4598ebab, com.mongodb.KotlinCodecProvider@1eefe84b]}, loggerSettings=LoggerSettings{maxDocumentLength=10}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='30000 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=1000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='TwitchIntergration', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, dnsClient=null, inetAddressResolver=null, contextProvider=null}\n[09:28:52 INFO]: [org.mongodb.driver.cluster] Cluster description not yet available. Waiting for 30000 ms before timing out\n[09:28:52 INFO]: [org.mongodb.driver.cluster] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=21, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=3236100}\n", "text": "I’m developing a Minecraft plugin, and I’m using MongoDB for the database I’m using version 4.10.2 of mongodb-driver-sync, and I want to suppress it as its quite big but everything I’ve tried hasn’t worked (tried using log back & slf4j)Here is the message that I would want to suppressPlease tell me if I’m in the wrong place", "username": "Chriss_Quartz" }, { "code": "", "text": "Hello @Chriss_Quartz ,Welcome to The MongoDB Community Forums! 
Can you please confirm additional details for me to understand your use case better?I want to suppress it as its quite big but everything I’ve tried hasn’t worked (tried using log back & slf4j)Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "I’ve decided to change databases to MySQL so imma close this", "username": "Chriss_Quartz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Long Message is sent in console
2023-09-22T23:43:20.158Z
Long Message is sent in console
357
null
[ "aggregation", "performance" ]
[ { "code": "", "text": "Hello, I have a scenario where $lookup would be preferential to embedding and came across this article in researching its efficacy. https://www.enterprisedb.com/blog/comparison-joins-mongodb-vs-postgresql It states that because postgres is build with iterative substitution. as well as merge join and hash join that it is 130 times faster at joining with damning timespans for queries to run. My question is simple, is this still the case given it being 2 years old? I am aware of some recent improvements with $lookup including the ability to have either collection sharded now so I suppose I am hoping the performance of the stage itself has been improved.EDIT: Set Slot-Based Query Execution seems to promise some performance improvement for $lookup but does it make a dent in the 130 times?", "username": "Ben_Gibbons" }, { "code": "", "text": "Landed here with the exact same question.\nGuess that the answer is, “Yes, still” then, from the fact of the lack of replies.", "username": "MBee" }, { "code": "", "text": "From the mongo .local London event last week there has been a lot of improvement on the $lookup operator, like orders of magnitude performance boost.\nI guess at the end of the day, try it with your workload, Mongo is not an relational database…if you need a relational database than choose one of them, if this type of operation is a key app path then you may need to model the documents differently.I believe you may need to be running V7 to take advantage of all the slot processing changes, only some operations made use of it in V6", "username": "John_Sewell" } ]
Is the performance of $lookup still 130 times worse than Postgres?
2022-10-05T16:43:47.831Z
Is the performance of $lookup still 130 times worse than Postgres?
3,013
null
[]
[ { "code": "", "text": "I distributed mongodb cluster. aws should also connect to mongodb through k8s and gcp should also connect to mongodb through k8s. At this time, aws and gcp were “vpc peering” with mongoDB, but only k8s of aws are connected to mongodb and not gcp k8s.What should I do?Connection via app is only available on aws and gcp gets a “ReplicaSetNoPrimary” error", "username": "Dan_Lee" }, { "code": "", "text": "Hi @Dan_Lee,Trying to understand / get some clarification on this one a bit more.Could you advise the following details?:Also, not sure if it may be relevant for this specific scenario without further information, but as per the Network Peering documentation:Atlas does not support Network Peering between clusters deployed in a single region on different cloud providers. For example, you cannot set up Network Peering between an Atlas cluster hosted in a single region on AWS and an application hosted in a single region on GCP.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "스크린샷 2023-09-29 오후 2.32.491410×964 105 KB", "username": "Dan_Lee" }, { "code": "", "text": "스크린샷 2023-09-27 오후 5.55.551740×483 67 KB", "username": "Dan_Lee" }, { "code": "", "text": "K8s clusters are operating in aws and gcp, respectively. That’s why I’m trying to connect through vpc peering.But if this is not possible, is there a way to satisfy the following three?", "username": "Dan_Lee" }, { "code": "", "text": "I wrote it down at the bottom", "username": "Dan_Lee" } ]
Multi-cloud VPC peering (AWS, GCP)
2023-09-28T12:47:59.213Z
Multi-cloud VPC peering (AWS, GCP)
221
null
[ "python", "graphql" ]
[ { "code": "\"job_list\": {\n \"job_1: {\n info: {\n \"a\":1,\n \"b\":2,\n }\n },\n \"job_2: {\n info: {\n \"p\":111,\n \"q\":12,\n }\n },\n \"job_3: {\n info: {\n \"x\":11,\n \"y\":22,\n }\n }\n}\n", "text": "Hello!\nI have a collection which stores python dict. I want to build a schema in app service for graphQL. My dict looks like this:Thanks", "username": "Utkarsh_Gupta4" }, { "code": "", "text": "This doc says that I can create dict type by not defining property field and setting additionalProperty field. But when I do that I get a warning “schema must have a “properties” field” and then I can’t access that particular attribute while querying in graphql", "username": "Utkarsh_Gupta4" }, { "code": " \"consumed\": {\n \"bsonType\": \"object\",\n \"additionalProperties\": {\n \"bsonType\": \"string\"\n }\n },\n", "text": "I have a simmiliar issue. I try to define dictionary in schema as it decribed in docs:but I get error: schema must have a “properties” field", "username": "Edgar_Jan" }, { "code": "{\n \"bsonType\": \"object\",\n \"title\": \"<Type Name>\",\n \"required\": [\"<Required Field Name>\", ...],\n \"properties\": {\n \"<Field Name>\": <Schema>\n }\n}\n", "text": "Hi Edgar,The schema format should look like this:It sounds like you’re missing the properties field as shown, Is this present?Regards\nManny", "username": "Mansoor_Omar" }, { "code": "\"test\": {\n \"bsonType\": \"object\",\n \"properties\": {}\n}\n\"test\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"test1\": {\n \"bsonType\": \"string\"\n }\n }\n}\nstring", "text": "Hi, well just adding properties like this:still gives error: schema must have a “properties” field\nadding a a field in properties works:but this will not give a dictionary. Desired result is a collection of dynamic and unique string keys paired with values of a given type as described in docs", "username": "Edgar_Jan" }, { "code": "", "text": "Hi Edgar,Could you share your app id?", "username": "Mansoor_Omar" }, { "code": "", "text": "Here:example-qjcctI have made a new app with separate cluster to demonstrate it. Technically I decided to change my data structure in a way that I no longer need this feature, but it still could be interesting to figure it out for a future reference.", "username": "Edgar_Jan" }, { "code": "{\n \"title\": \"Guest\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"createdAt\": {\n \"bsonType\": \"date\"\n },\n \"createdById\": {\n \"bsonType\": \"objectId\"\n },\n \"attributes\": {\n \"name\": \"attributes\",\n \"bsonType\": \"object\",\n \"additionalProperties\": true\n }\n },\n \"required\": [\n \"_id\",\n \"createdAt\",\n \"createdById\",\n ]\n}\n\"attributes\": {\n \"name\": \"attributes\",\n \"bsonType\": \"object\",\n \"properties\": {\n \"extraStuff\": {\n \"name\": \"extraStuff\",\n \"bsonType\": \"string\"\n }\n },\n \"additionalProperties\": true\n }\n", "text": "I’ve hit the same issueSchema looks like:This results in the GraphQL error against attributes: schema must have a “properties” field.If I modify the scheme to:Then I’ll get a sync error “object property “attributes” has invalid subschema: object cannot have additionalProperties in addition to existing properties”. 
Which makes sense but I don’t seem to be able to find a way to satisfy the GraphQL error… Edit: I will likely change the solution over to this suggestion from a similar thread: Schema not working as describe on documentation - #4 by Anuj_Garg", "username": "Graeme_Maciver1" }, { "code": "", "text": "@Graeme_Maciver1 Welcome to MongoDB Community. Yes. Dictionary property is not allowed in API Schema.", "username": "Anuj_Garg" } ]
Create schema for dictionary in app service
2023-08-07T02:44:26.403Z
Create schema for dictionary in app service
984
null
[ "queries", "dot-net" ]
[ { "code": "", "text": "Hi,We are comparing MongoDB vs our current SQL database solution.\nOne thing we noticed is that in the C# driver when we want to retrieve around 1 million documents it’s super slow versus de SQL version.Some background information: we have a collection containing 70 million documents. One query we are currently testing is to retrieve some documents based on a filter. When that result is retrieved, it takes over 1 minute before the list is filled. In SQL that same query and table takes 15 seconds to fill the same list.Is there a way to optimize this?", "username": "Joeri_Klomp" }, { "code": "", "text": "Have you performed your comparison on the same hardware?Have you performed your comparison on cold or hot server? Filling the cache with data from disk might be slow so if SQL was hot and Mongo is cold then you are comparing apples and oranges.Do you have indexes that support your query?I think that a use-case that needs to download 1 million documents/rows should be redesigned. For example, if you download 1 million documents to do some calculations, then you should redesign so that the calculations are done on the server with the aggregation framework.Note that being schema less mongo needs to send field names over the wire. With 1M documents you might have hit a bandwidth bottleneck. One way to optimize is to $project only the fields needed for the computation.Also blindly copy one SQL table to one MongoDB collection fails to tap into the flexible nature of JSON documents. For example, if rows/docs with dates, you could use the bucketing pattern and group all rows/docs with same date with a single document. This will reduce the total number of documents and data duplication. Your 1_000_000 docs might end up being only 100_000.That is it for now. I hope others will join the conversation.", "username": "steevej" }, { "code": "", "text": "Hi Steeve,Thanks for your inisghts. Let me indeed provide some more info:\nYes, comparison is done on the same hardware. Also on the same exact server so I don’t think hot or cold servers is the issue. We have indeed an index that supports the query, both on SQL as in Mongodb.I also think you’re right about the use case, that it’s bad design. In this case it was merely a benchmark.Hopefully others will chime in as well ", "username": "Joeri_Klomp" }, { "code": "", "text": "Hi there,\nI’m using c# too, and also doing a benchmark project on an isolated environment(yes, my laptop, i7 SSD)\nI replicated the same dataset on MS SQL Express and MongoDB Community 7.0, around 931k recorders(documents), and both indexed the properties that the query needed.In MongoDB Compass, the result is very very impressive, a filtered result with nearly 20k records and the explainer said it used 400ms, really cool. But in c# and Mongodb driver, the same query condition and back the same records, it took nearly 10 seconds to convert into a model array.Can anyone tell me what’s happened to the c# driver or how can I improve the reading speed?\nthanks!", "username": "James_Fan" }, { "code": "", "text": "Do you have the source code you’re running to do the comparison?What SQL flavour are you using? Maria or MS or Oracle or something else?/Edit I see on the follow up from James, he’s using SQL ExpressIn the C# code is it pulling ALL results back or first set of results? I assume that in Compass it’ll open a cursor and get the first set of results back, which could be quick. 
You could check the network usage between the two use-cases to verify that the same amount of data is flowing.", "username": "John_Sewell" }, { "code": "var collection = IntraMongoDB.GetCollection<RawBsonDocument>(\"br_Report\");\n filter.Add(\"dbname\", new BsonDocument()\n .Add(\"$in\", new BsonArray(new string[] { \"TMT\" }))\n);\n filter.Add(\"Data_year\", new BsonDocument()\n .Add(\"$in\", new BsonArray()\n .Add(\"2023\")\n )\n );\n var projection = Builders<RawBsonDocument>.Projection.Exclude(\"_id\");\n\n var rawBsonDoc = collection.Find(filter).Project<RawBsonDocument>(projection).ToList();\n", "text": "Hi John,\nThanks for your feedback,I’ve tested another type of object “RawBsonDocument” via seeing another forum article, and boom, the result was outstanding. it just took ~1100ms to get all the data (around 22k records). about 8~9 times faster than BsonDocument.\nBut all I need is my data model, so I used BsonSerializer.Deserialize to convert RawBsonDocument to my model, unfortunately, that’s very slow.\nHere’s the forum thread I mentioned.\nhttps://www.mongodb.com/community/forums/t/c-net-core-3-1-driver-2-11-1-slow-tolist-data-manifestation/8783/28So I think the bottleneck shouldn’t be the network cause I can fetch data in a very fast way, (and yes the MongoDB is located on “localhost”)\nMight it be the mapping or converter’s issue?", "username": "James_Fan" }, { "code": "", "text": "Depending on your model, could you create your own de-serialiser optimised to your model?", "username": "John_Sewell" }, { "code": "", "text": "Some questions:Do you really really need to download 22_000 documents and convert them?Are you sure that whatever you do with that many documents cannot be done with an appropriate aggregation pipeline?You only exclude the _id, are all other fields needed for what ever you do with these documents? Could you include only the fields that are needed? Even in SQL, doing select * from is wrong.", "username": "steevej" }, { "code": "", "text": "Good point…our model has 1700 fields…everything we do has a projection down to just what we’re dealing with. If we tried to deserialise a full document, that’s a serious overhead…when you may only need a handfull.", "username": "John_Sewell" }, { "code": "", "text": "Yes, maybe others don’t need such a huge amount of data.But my main point is, the driver seems need amount of time to do deserilization thing.However, in my case, we need to acquire this data to perform some post-calculation (pivot analysis) within our application.\nIf we want the flexibility of the pivot, the data should be almost raw data. Otherwise we need to prepare 10+ kinds of aggration pipe lines.*We’ve also grouped the data before putting it into MongoDB to reduce the dataset, but currently, these ~22k data points are all we need.", "username": "James_Fan" }, { "code": "", "text": "Since mongod is localhost on a laptop, I suspect that your laptop is memory starved at this point. I do not know much about C# but I suspect that RawBsonDocument is how the data is received from the server. 
When you start converting to your model then you might need twice the amount of memory.Do you have any metrics about CPU and RAM usage during the process?Rather than using high level API like ToList() you might one to try to convert to your model in a loop one document at a time making sure the RawBsonDocument object is deleted from the received list (to release memory) before converting the next one.", "username": "steevej" }, { "code": "", "text": "What does a document look like? Can you give a sample one?", "username": "John_Sewell" }, { "code": "", "text": "The most likely cause to this isn’t the driver itself, but instead is an issue of syntax and formatting.Make sure that you’re not building your data model the same as you would an SQL model, because I can tell you as someone who has built and constructed a 700TB blockchain node, that SQL doesn’t even hold a candle to the speed and performance of the MongoDB C# driver.Please send up your data model and how you have it laid out and I’m sure we can help you optimize it.SQL will never be even close to being as fast as JSON/BSON No-SQL…This is being dead honest, there’s nothing close. And this is just recently with 970 million transactions a minute across a multi sharded cluster in what’s already not the best idea to setup.I would encourage maybe checking out MongoDB University “MongoDB For SQL Developers” class, as that may highlight some issues you might not realize you’ve built.This also isn’t some fanboy thing either, I’m very agnostic of database admin and usages. But also make sure that your use case is even relevant to NoSQL DBs, as NoSQL DBs inherently are for horizontal, not vertical data model designs. So do make sure your use case is even relevant for using MongoDB for it and vice-versa.", "username": "Brock_Leonard" }, { "code": "", "text": "Hi Brock and John,Thanks for your advices, I’ll dive into the MongoDB University.\nBut as you said, RDB can never handle a mega dataset while MongoDB can, perhaps I didn’t find the right way to work with MongoDB yet.Cause my dataset contains sentive data, please give me some time to create a similar situation for simulation my issue. Thanks.", "username": "James_Fan" }, { "code": "", "text": "Hi John, I’ve created a sample project to demostrate the issue I encountered.\nPlease take a lookMongoDbBenchmark\nThe GenerateFakeData can generate 1M documents.\nAnd the MongoDbBenchmark measured two model, List and ListFYI", "username": "James_Fan" }, { "code": "", "text": "Thanks James, I’m away from home at the moment but shall take a look when Im back tomorrow.", "username": "John_Sewell" }, { "code": "", "text": "When I worked at MongoDB, just for general references, there were customers with hundreds of thousands of transactions a second just using the Atlas Functions and JavaScript….Just for reference of capabilities and some customers had even more than that. Just on Atlas Functions when you have third party intermediary services via middleware and proxies.It’s not like DragonflyDB where it’s a cached service design though, but in your use case it hasn’t even scratched surface level. So most commonly it may be an index or aggregation issue.And can be just how you modeled your data if that makes sense. 
I’ll take a look later at your sample project.I don’t advocate much of at all for the device sync product line, but the core product is extremely solid, solid enough 10% of its features can handle 90% of actual production.Full ACID support among many other things as well, which is why when I see complaints like this it’s usually just a training lesson.", "username": "Brock_Leonard" }, { "code": " .RuleFor(x => x.GrossProfitWoReturn_TMC_Spec, f => f.Random.Double(0, 2000));\n\n var faker = new Faker();\n\n var issueDate = GenIssueDate(faker);\n s.IssueDate = issueDate.IssueDate;\n s.IssueYear = issueDate.IssueYear;\n\n private static (DateTime IssueDate, int IssueYear, string IssueMonth, string IssueQuater) GenIssueDate(Faker faker)\n {\n var issueDate = faker.Date.Between(new DateTime(2012, 1, 1), DateTime.Now);\n\n int issueYear = issueDate.Year;\n string issueMonth = issueDate.Month.ToString().PadLeft(2, '0');\n string issueQuater = ConvertMonthToQuater(issueDate.Month);\n\n return (issueDate, issueYear, issueMonth, issueQuater);\n }\n\n private static (DateTime IssueDate, int IssueYear, string IssueMonth, string IssueQuater) GenIssueDate2()\n {\n int range = (DateTime.Now - new DateTime(2012, 1, 1)).Days;\n var issueDate = DateTime.Now.AddDays(_random.Next(range));\n\n int issueYear = issueDate.Year;\n string issueMonth = issueDate.Month.ToString().PadLeft(2, '0');\n string issueQuater = ConvertMonthToQuater(issueDate.Month);\n\n return (issueDate, issueYear, issueMonth, issueQuater);\n }\n\n private static (int Qty, decimal UnitCost, decimal Amount, decimal AmountLocalCurrency) GenSalesCost2()\n {\n var qty = _random.Next(2000);\n var unitCost = _random.Next(2000);\n var amount = qty * unitCost;\n var amountLocalCurrency = amount * (0.5m + (decimal)_random.NextDouble() * 30m);\n\n return (qty, unitCost, amount, amountLocalCurrency);\n }\n\n", "text": "Out of interest how long does the generation routine take on your machine? It’s taking a long time to run on my workstation with local mongo.Debugging it seems that nearly all of the time is the re-creation of the faker class within each function call.Not using the faker, takes about 10ms to create 100 data points, with faker takes about 500ms per 100.Pulling the faker class outside the loop and passing into the generation routines takes about the same (10ms) so spinning up a faker class is REALLY expensive:Example without faker class:Shall take a look at the retrieval code next.", "username": "John_Sewell" }, { "code": "", "text": "/Edit I re-did the test and the graph is now as expected, I had a limited collection for testing so once it hit the size of the smaller collection the graph flattened as it was processing the same data volume! 
My bad.I ran a quick check on scaling, this is doing a .ToList() so you’re pulling all the data back as opposed to streaming it and processing it as it comes in.If there is a large overhead on the converting to an in-memory object and you do a lot of processing of the object you probably want to be able to run them in parallel, so as data comes in (perhaps in batches) you pass them to another routine to process the data, while the system is preparing the next batch.I wanted to check if there was a straight performance hit or there was a scaling involved, seems that it’s a scalar performance (which to be honest is as expected):X-Axis is data volume and Y is time in ms to process.I’m tied up most of today on work deliveries but shall have a play and see what could be done to bring the three lines together.I guess another question is WHY are you pulling this data into the app, what are you going to do with it? I may have missed it earlier in the thread but what calculation did you want to do or procesing and then what do you want to do with it (i.e. chuck it back to Mongo, update documents with calculated fields or push to another system)?I’m not sure about some of the comments on performance on this thread, a variety of database engines can all easily deal with large data volumes, it’s what you want to do with it that’s the key and what you want the client to do.\nI work with a variety of platforms from Mongo to IBM i-series machines, the iSeries is a complete beast for transaction processing in volumes, as the banking and insurance world can attest to, processing all direct debits in the UK for a banks clients for example involves rather a large amount of transactions. As far as I’m aware most airlines still run on IBM hardware and that’s also a lot of data volumes.As has been mentioned, if you’re doing a grouping on this data, do it on the server and THEN pull back, there is no point pulling all the data into a .net model to then do a sum by year, when you can just do this on the server, were the data lives.If you’re doing something crazy on the client, then you may want to look at a view model, project the data down and then have a model that just represents what you want.Anyway, I’ll have a play more later when some time opens up.", "username": "John_Sewell" }, { "code": "vulbon:maindigitalanalysis:main", "text": "Using a cut down viewmodel to just return some of the data:\nimage752×452 23.5 KBWhich makes sense, it takes a while to take a document from the server, convert to a .net object with type-checking etc and then serve that up.It seems that the time difference between returning a model and a BSON document is on-par, it’s the conversion from RAW data to the BSON representation in .net is taking the time.I’ve not delved into the dotnet driver but the use cases seem to be if you want raw performance to plough through the data as fast as possible, get them as raw BSON objects and deal with that, if you are going to process them and pass to a strongly typed model in your app, then take the overhead of the conversion but just do this when you’re not pulling a million records from the server.Checking about a bit I found this:\nhttps://jira.mongodb.org/browse/CSHARP-666Which seems to allow to get the document as raw and then just deserialise the elements you needs as used, as opposed to on the whole document. This could be a halfway approach, grab the data as fast as you can and then process as and when, at which point you take the overhead. 
Note the details in that CR for how it de-serialises nested documents.image789×395 28.4 KB/Edit I create a pull request so you can see what I did to generate the above, I was trying to use generics to pass in different view models but I kept failing at syntax and didn’t have the time to work through it:Slight modifications for testing scenarios\nApologies for anyone who finds the code layout offensive ", "username": "John_Sewell" } ]
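For anyone landing here with the same symptom, the thread's findings reduce to one rule: only deserialize what you actually need. A minimal C# sketch of that idea, assuming a hypothetical slim model with Status and Amount fields (the collection name br_Report comes from the snippets above; the model itself is illustrative, not the poster's actual schema):

using System.Collections.Generic;
using MongoDB.Bson.Serialization.Attributes;
using MongoDB.Driver;

// Slim view model: only the fields the downstream pivot needs.
// BsonIgnoreExtraElements lets the driver skip every other field cheaply.
[BsonIgnoreExtraElements]
public class ReportSlim
{
    public string Status { get; set; }
    public double Amount { get; set; }
}

public static class SlimReader
{
    public static List<ReportSlim> Load(IMongoDatabase db)
    {
        var collection = db.GetCollection<ReportSlim>("br_Report");

        // Server-side projection: only these fields cross the wire,
        // so both bandwidth and BSON-to-POCO mapping work shrink.
        var projection = Builders<ReportSlim>.Projection
            .Include(x => x.Status)
            .Include(x => x.Amount)
            .Exclude("_id");

        return collection.Find(FilterDefinition<ReportSlim>.Empty)
                         .Project<ReportSlim>(projection)
                         .ToList();
    }
}

If the full documents really are required, iterating the cursor (ToEnumerable or ForEachAsync) instead of calling ToList at least overlaps deserialization with network transfer and keeps memory flat.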
Poor driver performance, how to optimize?
2021-08-26T12:11:18.589Z
Poor driver performance, how to optimize?
4,643
null
[ "replication", "mongodb-shell" ]
[ { "code": "rs.conf(): {\n _id : \"rs0\",\n members: [\n { _id: 0, host: 'mongodb1:27017', priority: 1, votes: 1},\n { _id: 1, host: 'mongodb2:27017', priority: 0, votes: 0 }\n ]\n}\ncfg = rs.conf()\ncfg.members[0].priority = 0\ncfg.members[0].votes = 0\ncfg.members[1].votes = 1\ncfg.members[1].priority = 1\nrs.reconfig(cfg, {force:true})\n", "text": "Hello,I have a two node P/S replica set with the following configuration. I have a constrained environment and I’m unable to deploy a third node - arbiter or secondary. I understand that manual failover is the only option for a two-node replica set. I also understand that this is not a recommended deployment model.The following configuration has node ‘mongodb1’ backing up to ‘mongodb2’. In the event that ‘mongodb1’ fails, I’d like to manually force ‘mongodb2’ to become the active primary.I have a couple of questions regarding this configuration.Q1) Is this a “safe” configuration in the sense that the replica set will properly backup data from primary to secondary with ‘mongodb2’ backing up ‘mongodb1’?Q2) Is the following command mongosh sequence a “safe” method initiating a manual failover to force ‘mongodb2’ to become primary? In particular, I’m wondering there are side effects to using {force: true} to update the replica set configuration on a secondary node?By “safe” I mean 1) no data corruption on primary/secondary and 2) no writes are lost while a primary node is ‘active’. (Clients will have to retry writes if there are no active primary nodes.)Best Regards,\nMatt", "username": "Matt_H" }, { "code": "", "text": "Q1) Is this a “safe” configuration in the sense that the replica set will properly backup data from primary to secondary with ‘mongodb2’ backing up ‘mongodb1’?i think this is mostly ok, if connection between the nodes is fast enough. otherwise some of the writes may not happen yet on node2, and the clients can see a rollback after manual failover.Q2) Is the following command mongosh sequence a “safe” method initiating a manual failover to force ‘mongodb2’ to become primary? In particular, I’m wondering there are side effects to using {force: true} to update the replica set configuration on a secondary node?i recall mongodb manual explains those concepts. I’m not able to give more information than what the doc says (i’m not a mongodb employee).", "username": "Kobe_W" }, { "code": "", "text": "Note that while 1 node is down, no writes will be able to be performed since you will not have a majority which is needed for a PRIMARY.", "username": "steevej" }, { "code": "", "text": "Is there a preferred/recommended configuration for a 2-node ReplicaSet and a preferred/recommended sequence to perform a manual failover from ‘Primary’ to ‘Secondary’?", "username": "Matt_H" }, { "code": "", "text": "no there is none, the recommendation is an odd number of members. with 2 you are already outside the recommendation.", "username": "steevej" } ]
Manual failover for two-node Replica Set
2023-09-25T20:07:36.136Z
Manual failover for two-node Replica Set
354
null
[ "swift" ]
[ { "code": "final class Foo: Object\n{\n @Persisted var kids: Map<String, Bar?>\n}\n\nfinal class Bar: Object\n{\n ...\n}\nBartry someRealm.write {\n someRealm.delete(someBar)\n}\nkidsFookidsListnilMapObject", "text": "Suppose I have this:And then I take a Bar object that is part of a Realm and I do this:What becomes of the kids map on the Foo parent? Is the Key removed from the map entirely (like if kids were a List)? Or does the key remain in the map and simply have a nil value now?It would be nice if the docs explicitly described this. The presence of an Optional in the value part of Map makes it unclear what happens on deletion when the value is an Object subclass.", "username": "Bryan_Jones" }, { "code": "f.kids[\"some_key\"] = barf.kids[\"some_key\"] = nilf.kids[\"some_key\"] = barrealm.delete(bar)for map in foo.kids {\n print(map.key, map.value)\n}\nsome_key nil", "text": "Great question!Two things we see when working with Maps:f.kids[\"some_key\"] = barand then later this is donef.kids[\"some_key\"] = nilthe entire map entry is removed.f.kids[\"some_key\"] = barand then bar is deletedrealm.delete(bar)the map remains, and the value is nilIf you were to iterate over the kids map and the bar object was deletedit will outputsome_key nilAnd yes! I agree - the documentation on Map is a bit “thin” to begin with. I suggest heading over to that documentation page and in the lower right corner, click the Share Feedback button and make a suggestion that the map section needs some love and additional/clear examples.“To remove a Map, set it to nil etc etc. Deleting an object a map points too, leaves the map in place with a value of nil.”", "username": "Jay" }, { "code": "[someKey: nil]Map[someKey: nil]", "text": "Thanks Jay. Please see this discussion: Docs: Specify Behavior of Map when an Object value is deleted · Issue #8379 · realm/realm-swift · GitHubIt appears that Map’s current behavior (leaving the key in-place when an object is deleted from the Realm) will result in data-loss when using Atlas Device Sync. Atlas does NOT keep the [someKey: nil] record on the MongoDB document. The web interface shows the property backing the Map as completely empty.If I deleted the local Realm file and re-synced from the cloud, I’d get back an empty Map, not my Map with [someKey: nil]I also can’t tell why Map behaves this way. If I directly set a value to nil, the associated key is nuked. But if the value gets set to nil indirectly (the object is deleted from Realm), the key remains with a nil value (but this state can’t be synced to Atlas)…it seems like an incongruent mess.", "username": "Bryan_Jones" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Effect of Deleting an Object in a Map?
2023-09-27T23:41:12.686Z
Effect of Deleting an Object in a Map?
294
null
[ "queries", "python" ]
[ { "code": "CursorNotFoundPyMongodef sleep_for_minutes(minutes_to_sleep):\n for i in range(minutes_to_sleep):\n print(f'{i} sleeping for 1 minute')\n time.sleep(60 * 1)\n \n\n# Iterate over all documents in the collection\nfor document in collection.find():\n print(f'{document} before sleeping')\n\n sleep_for_minutes(15)\n # even tried sleeping for 35 minutes but didn't help\n # sleeping for 45 mins worked (don't know why)\n\n print(f'{document} after sleeping')\nCursorNotFoundpymongo.errors.CursorNotFound: cursor id <something> not found, full error: {'ok': 0.0, 'errmsg': 'cursor id <something> not found', 'code': 43, 'codeName': 'CursorNotFound'}sleep", "text": "How do I reproduce CursorNotFound error due to 10 minutes of cursor inactivity and due to session idle timeout in 30 minutes using PyMongo tools?I am trying something like this :ContextAm iterating a very large Mongo collection. Each document of a collection is quite large as well. Tech stack : MongoEngine, DjangoMy production system is timing out due to CursorNotFound error.Error goes like this : pymongo.errors.CursorNotFound: cursor id <something> not found, full error: {'ok': 0.0, 'errmsg': 'cursor id <something> not found', 'code': 43, 'codeName': 'CursorNotFound'}As per my understanding, there can 2 possible reasons for the same:To fix and verify the fixes, I am trying to reproduce there errors on a local setup to fix the issue. I do this by using sleep methods.", "username": "shrey" }, { "code": "", "text": "The MongoDB server task that actually times out and expires the session only runs every 5 minutes by default: https://www.mongodb.com/docs/manual/reference/parameters/#mongodb-parameter-param.logicalSessionRefreshMillis. This helps explain why 35 minutes of inactivity did not reproduce the timeout but 45 minutes did (the task was probably running or about to run). To reproduce the issue you can sleep for 37+ minutes or decrease the logicalSessionRefreshMillis and localLogicalSessionTimeoutMinutes.", "username": "Shane" }, { "code": "docker run -d -p 27017:27017 --name mongodb-container mongo --setParameter cursorTimeoutMillis=10000 --setParameter logicalSessionRefreshMillis=10000\n# sleep is to emulate slow query\ncursor = collection.find(batch_size=1).where(\"sleep(59000) || true\")\n\nfor document in cursor:\n print(f'sleeping and processing document: {document}')\n sleep_for_minutes(3)\n print(f'sleeping done')\n", "text": "Thanks for the response Shane.\nI was able to reproduce the session idle timeout flow based on what you suggested.BUT, I am still not able to reproduce the cursor timeout (10 mins default) due to inactivity.Following are my test steps:Setting params in mongoMaking the actual queryThis works without erroring out as such.What exactly is the meaning of “cursor inactivity” or “cursor being idle”?\nI guess I might have got it wrong.", "username": "shrey" }, { "code": "", "text": "In 4.4.8+ MongoDB always sets the noCursorTimeout option for cursors that are created with a session: https://jira.mongodb.org/browse/SERVER-6036So there is no more 10 minute cursor timeout, only the 30 minute session timeout. When the session is expired the cursor(s) it created are closed as well.", "username": "Shane" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
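For later readers, a condensed sketch of the reproduction that follows from Shane's answer: shorten the server-side session timeouts (parameter names as discussed above, values illustrative) and sleep past them while a cursor is still open.

# Start a throwaway mongod with shortened session timeouts, e.g.:
#   mongod --dbpath /tmp/repro --port 27017 \
#     --setParameter localLogicalSessionTimeoutMinutes=2 \
#     --setParameter logicalSessionRefreshMillis=30000
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client.test.big_collection  # any collection with a few documents

cursor = coll.find(batch_size=1)   # tiny batches keep the cursor open on the server
for i, doc in enumerate(cursor):
    print(i, doc.get("_id"))
    if i == 0:
        # Sleep past the shortened session timeout; the next getMore
        # should then fail with CursorNotFound.
        time.sleep(3 * 60)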
Reproducing CursorNotFound error due to cursor inactivity and session idle timeout
2023-09-27T06:21:07.015Z
Reproducing CursorNotFound error due to cursor inactivity and session idle timeout
326
null
[]
[ { "code": "await MongoClient.connect(uri)MongoServerSelectionError: Client network socket disconnected before secure TLS connection was established at Timeout...Allow all trafficRoute only requests to private IPs through the VPC connector", "text": "I am trying to create MongoDB connection from google cloud functions.this is what I am doing to establish connection await MongoClient.connect(uri). I have already set up VPC peering and the status is available. I also added 10.128.0.0 to the whitelist. However, when I call the cloud function, I am gettingMongoServerSelectionError: Client network socket disconnected before secure TLS connection was established at Timeout...I also added vpc to cloud function that I am calling with Allow all traffic for ingress and Route only requests to private IPs through the VPC connector to egressWhat can I do to fix this issue?", "username": "developertk" }, { "code": "", "text": "Hey, welcome to the MongoDB community, it will be a pleasure to help you.Did you get the URI directly from the panel? Can you show how you put it together?", "username": "Samuel_84194" }, { "code": "", "text": "Hi, I am using the uri in form of\n‘mongodb+srv://username:password@mycluster/?retryWrites=true&w=majority’And I am able to connect to it locally", "username": "developertk" }, { "code": "", "text": "Basically, to use cloud function with Atlas it is necessary to have the vpc configured in Atlas (peering) and the release in the project’s IP access list.", "username": "Samuel_84194" } ]
MongoDB connection from cloud functions
2023-09-21T04:48:45.626Z
MongoDB connection from cloud functions
329
null
[ "server", "installation" ]
[ { "code": "[root@RKCOM ~]# systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: failed (Result: exit-code) since Tue 2023-09-26 18:49:48 IST; 1h 29min ago\n Docs: https://docs.mongodb.org/manual\n Process: 1945 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=14)\n Main PID: 1945 (code=exited, status=14)\nSep 26 18:49:45 RKCOM systemd[1]: Started MongoDB Database Server.\nSep 26 18:49:46 RKCOM mongod[1945]: {\"t\":{\"$date\":\"2023-09-26T13:19:46.626Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":7484500, \"ctx\":\"main\",\"msg\":\"Environment variable MONGODB_CONFIG_OVERRIDE_NOFOR>\nSep 26 18:49:48 RKCOM systemd[1]: mongod.service: Main process exited, code=exited, status=14/n/a\nSep 26 18:49:48 RKCOM systemd[1]: mongod.service: Failed with result 'exit-code'.\n", "text": "", "username": "Rakesh_Kubehera" }, { "code": "mongod.service: Main process exited, code=exited, status=14/n/a\"Main process exited, code=exited, status=14/n/a,\"", "text": "Hey @Rakesh_Kubehera,Welcome to the MongoDB Community!mongod.service: Main process exited, code=exited, status=14/n/aAs per the logs you shared, it states: \"Main process exited, code=exited, status=14/n/a,\" which indicates that the MongoDB process exited with a status code of 14. In MongoDB, exit code14 typically indicates an unrecoverable error or an uncaught exception.Could you share how you installed MongoDB, the version you are using, and whether you’re using Docker or similar? Also, please share the MongoDB configuration file. These details will help us to assist you better.Looking forward to your response.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hello,In parallel to what @Kushagra_Kesav said, I would validate that /tmp/mongodb-27017.sock exists and remove it. Then I would restart the service again.This could be a permissions issue in /var/lib/mongodb (datadir) or in the /tmp/mongodb-27017.sock file itself", "username": "Samuel_84194" }, { "code": "#mongod.conf\n\n#for documentation of all options, see:\n#http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongo\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true\n pidFilePath: /var/run/mongodb/mongod.pid\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n#security:\n#operationProfiling:\n#replication:\n#sharding:\n## Enterprise-Only Options\n#auditLog:\n#snmp:\n", "text": "/var/lib/mongodbHii Please find the conf file I have installed Mongo enterprise version 4.4 on centos8 linux", "username": "Rakesh_Kubehera" }, { "code": "", "text": "Hello, did you manage to validate what we said above?", "username": "Samuel_84194" } ]
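A short sketch of the checks suggested above, assuming the default RPM paths used in this config (dbPath /var/lib/mongo, logs under /var/log/mongodb); adjust the paths before running.

# remove a stale socket left behind by an unclean shutdown
ls -l /tmp/mongodb-27017.sock && sudo rm /tmp/mongodb-27017.sock

# data, log and pid directories must be owned by the mongod user
sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb /var/run/mongodb

sudo systemctl restart mongod
sudo tail -n 50 /var/log/mongodb/mongod.log   # the concrete reason behind exit status 14 is usually logged here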
MONGODB_CONFIG_OVERRIDE_NOFOR - Exit code14 - MongoDB server on centos8 Linux
2023-09-26T14:57:13.302Z
MONGODB_CONFIG_OVERRIDE_NOFOR - Exit code14 - MongoDB server on centos8 Linux
603
https://www.mongodb.com/…_2_1024x1003.png
[ "api" ]
[ { "code": "", "text": "Is there a away to set the “Termination Protection” and “Require Indexes for All Queries” settings via the API or in the atlas cli “create cluster” config?\nScreenshot from 2023-09-25 15-45-412100×2058 392 KB\n", "username": "Ruud_van_Buul" }, { "code": "atlas clusters create--enableTerminationProtectionatlas clusters advancedSettings update--disableTableScan", "text": "Hi @Ruud_van_Buul,Have you checked out the following to see if they’re what you’re after:Please test these out and let me know if it suits your requirements Regards,\nJason", "username": "Jason_Tran" }, { "code": "atlas cliterminationProtectionEnablednoTableScan", "text": "In terms of the Atlas Administration API, I believe the following documentation corresponds to the previously mentioned atlas cli commands (and the options mentioned):", "username": "Jason_Tran" }, { "code": "", "text": "Yes that’s exactly what I waws looking for.Thanks Jason!", "username": "Ruud_van_Buul" } ]
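Putting the two commands together, a sketch with placeholder cluster settings (the termination-protection and no-table-scan flags are the ones named in the links above; the provider/region/tier flags are the usual cluster-creation options and may need adjusting to your CLI version):

# create a cluster with termination protection enabled
atlas clusters create myCluster --provider AWS --region US_EAST_1 --tier M10 \
  --enableTerminationProtection

# require an index for every query (disable collection scans) on an existing cluster
atlas clusters advancedSettings update myCluster --disableTableScan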
Is there way to set termination protection and mandatory index settings through API
2023-09-25T21:49:56.730Z
Is there way to set termination protection and mandatory index settings through API
300
null
[ "ops-manager" ]
[ { "code": "curl --user '$(kubectl get secrets mongodb-ops-manager-admin-key -o jsonpath=\"{.data.publicKey}:{.data.privateKey}\")' --digest \\\n --header 'Accept: application/json' \\\n --include \\\n --request GET \"http://35.xxx.xxx.xxx/api/public/v1.0/orgs/\"\n", "text": "I’m running the MongoDB operator on Kubernetes, using externalConnectivity to get a service with an external IP for Ops Manager.I want to get my organization ID without using the GUI.I found this, but when I run the curl command it returns nothing and just hangs", "username": "Lorenzo_Carrascosa" }, { "code": "PRIVATE_KEY=$(kubectl get secrets mongodb-ops-manager-admin-key -o jsonpath=\"{.data.privateKey}\" | base64 --decode)\nPUBLIC_KEY=$(kubectl get secrets mongodb-ops-manager-admin-key -o jsonpath=\"{.data.publicKey}\" | base64 --decode)\ncurl --user \"$PUBLIC_KEY:$PRIVATE_KEY\" --digest \\\n --header 'Accept: application/json' \\\n --include \\\n --request GET \"http://35.xxx.xxx.xxx:8080/api/public/v1.0/orgs/\"\n", "text": "The secret is encoded in base64 in k8s, so I have to decode it\nand use port 8080", "username": "Lorenzo_Carrascosa" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting the OrgId without using the UI
2023-09-28T13:15:28.530Z
Getting the OrgId without using the UI
258
null
[ "python", "spark-connector" ]
[ { "code": "com.mongodb.MongoSocketOpenException: Exception opening socketSparkSessionDataFrameReaderSparkSessionSparkContextwait", "text": "Spark 3.3.0, mongodb Atlas 5.0.9, Spark connector tested with v3.x & v10.xI run a basic ETL job using Pyspark reading data from various MongoDB collections and writing them into my sink (BigQuery).Everything works fine using one or few collections from my database, but as soon as I’m looping across all my collections (almost 200), I’m still getting the same error after sometimes (and some successes writing data into BigQuery):com.mongodb.MongoSocketOpenException: Exception opening socketFor information, I’m using only one SparkSession for this job, and I loop across all the 200 collections updating my DataFrameReader's options (only the collection name) before loading data during each iteration.I was thinking that it was maybe a design issue coming from that, as I’m note sure if I have to recreate the SparkSession or the SparkContext for every collections ?Or maybe it could come from the high frequency of connection attempts to the database and I should slow down a bit the process during each iteration by introducing some manual wait ?What would be according to you the best way to correctly read the data from all my collections to prevent this kind of error ?", "username": "Clovis_Masson" }, { "code": "", "text": "Hi @Clovis_MassonI too running same set of issue on my ETL pipeline. Did you able to resolve it? Can you please share your workarounds or approach how you have handled it. This would be helpful for everyone who are facing the same issue.", "username": "Mani_Perumal" } ]
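For reference, a sketch of the loop shape that has worked for similar jobs: one SparkSession, a fresh read per collection, and a small backoff-and-retry around the load. Option names assume Spark Connector 10.x (format "mongodb"); the URI, database name and sink are placeholders.

import time
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("mongo-etl")
         .config("spark.mongodb.read.connection.uri", "mongodb+srv://user:pass@cluster/")
         .getOrCreate())

def read_collection(name, retries=3, backoff=10):
    for attempt in range(1, retries + 1):
        try:
            return (spark.read.format("mongodb")
                    .option("database", "mydb")
                    .option("collection", name)
                    .load())
        except Exception:
            if attempt == retries:
                raise
            time.sleep(backoff * attempt)   # crude backoff before retrying this collection

for name in collection_names:               # the ~200 collection names, however you list them
    df = read_collection(name)
    # write df to the sink (BigQuery, etc.) here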
"Exception opening socket" loading multiple collections
2022-08-01T07:46:25.725Z
&ldquo;Exception opening socket&rdquo; loading multiple collections
2,550
null
[ "aggregation" ]
[ { "code": "{\n \"array\": [\n { \"k\": \"A\", \"v\": 1},\n { \"k\": \"B\", \"v\": 2},\n { \"k\": \"C\", \"v\": 3}\n ]\n},\n{\n \"array\": [\n { \"k\": \"A\", \"v\": 3},\n { \"k\": \"B\", \"v\": 2},\n { \"k\": \"C\", \"v\": 1}\n ]\n}\n{ \"k\": \"A\", \"v\": 4}\n{ \"k\": \"B\", \"v\": 4}\n{ \"k\": \"C\", \"v\": 4}\n{\"$unwind\": \"$array\"},\n{\"$group\": {\n \"_id\": \"$k\",\n \"v\": {\n \"$sum\": \"$v\"\n }\n}\n", "text": "Hi, I have an aggregation pipeline that works on potentially many data and I was looking for how to make it faster.The format of the documents is this:With the execution of the pipeline I would like to obtain the sum of the “v” of each “k”:I wrote a classic pipeline with $unwind + $group, could I do better?I could use a $group + $project($reduce) to do the same thing ?Thanks", "username": "Giacomo_Benvenuti" }, { "code": "", "text": "Can you tell me more specifically what you are trying to do?\nAs I find this aggregation pipeline a little bit tough to optimize, one thing I would do is have a separate document with an array field", "username": "Anshul_Negi" }, { "code": "coll.aggregate([\n {\n $match: {\n ...\n }\n },\n {\n $group: {\n _id: null,\n total: {\n $accumulator: {\n init: function () {\n return { array: [] };\n },\n accumulate: function (state, array) {\n var result = {};\n\n state.ts.concat(ts).forEach(obj => {\n if (result.hasOwnProperty(obj.k)) {\n result[obj.k += obj.v;\n } else {\n result[obj.k] = obj.v;\n }\n });\n\n state.ts = Object.entries(result).map(([k, v]) => ({ k, v }));\n return state;\n },\n accumulateArgs: [\"$array\"],\n merge: function (state1, state2) {\n var result = {};\n\n state1.ts.concat(state2.ts).forEach(obj => {\n if (result.hasOwnProperty(obj.d)) {\n result[obj.k] += obj.v;\n } else {\n result[obj.k] = obj.v;\n }\n });\n\n state1.ts = Object.entries(result).map(([k, v]) => ({ k, v }));\n return state1;\n }\n }\n }\n }\n }\n])\n", "text": "Thanks for the reply.I was imagining using $reduce, but I don’t know how to write it.An example (wrong but to understand the pipeline output) would be a $accumulator:", "username": "Giacomo_Benvenuti" } ]
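For comparison, the classic form of the first approach with the array paths written out (after $unwind the values live under "array", so the group stage has to reference "$array.k" / "$array.v"); a sketch against the sample documents above:

db.collection.aggregate([
  { $unwind: "$array" },
  { $group: { _id: "$array.k", v: { $sum: "$array.v" } } },
  { $project: { _id: 0, k: "$_id", v: 1 } }
])

Whether this beats a single $group over whole arrays followed by a $reduce-style merge depends on the data; $unwind + $group is usually the simpler baseline to measure against, and if only part of the collection is needed, an initial $match usually matters more than the choice between the two.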
Merge array of objects
2023-09-27T15:48:33.520Z
Merge array of objects
205
null
[]
[ { "code": "", "text": "Hello, I’m new here. I have MongoDB 3.4.1 on Windows Server 2012. I need to upgrade Windows Server to 2019. Will MongoDB 3.4.1 work on this Windows Server version? Thank you", "username": "Ilya_Khilkevich" }, { "code": "", "text": "Hi @Ilya_Khilkevich,From the documentation:Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Thank you for your response. It means I need to upgrade from 3.4 to 4.4. Do you have any recommendations or steps for the best way to do it?", "username": "Ilya_Khilkevich" }, { "code": "", "text": "Hi @Ilya_Khilkevich,I’ll link you some of the answers I liked:Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Thank you for the links you shared", "username": "Ilya_Khilkevich" } ]
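The linked answers come down to upgrading one major version at a time; a sketch of the sequence (take a backup first and check driver compatibility at each hop; this is an outline, not a substitute for the release notes for each version):

// binaries: 3.4 -> 3.6 -> 4.0 -> 4.2 -> 4.4, restarting mongod at each step,
// then raise featureCompatibilityVersion before moving to the next hop:
db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })
db.adminCommand({ setFeatureCompatibilityVersion: "4.2" })
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })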
MongoDB 3.4.1 on Windows Server 2019
2023-09-27T18:13:44.670Z
MongoDB 3.4.1 on Windows Server 2019
323
null
[]
[ { "code": "", "text": "I was wondering why do I get charged for every hour, when I clearly don’t access data or do any operations on the cluster in the morning hours. It’s a bit misleading where it says “pay as you go” since the cluster is dormant in 3am for example.Do I have to pause the cluster so I don’t get charged, and if so is there a scheduler to pause it in certain hours?Thank you in advance", "username": "semperlabs" }, { "code": "paused", "text": "Hi @semperlabs,I was wondering why do I get charged for every hour, when I clearly don’t access data or do any operations on the cluster in the morning hours. It’s a bit misleading where it says “pay as you go” since the cluster is dormant in 3am for example.As per the billing page:Atlas:If you’re after an instance type where you are charged by usage then perhaps serverless instances may be something you can test to see if it suits your use case / requirements. There are also some limitations to consider.Do I have to pause the cluster so I don’t get charged, and if so is there a scheduler to pause it in certain hours?You will still be billed but at a lowered rate. For example from my test environment when going to pause the cluster:image1064×246 11.7 KBYou could possibly do some scheduling the MongoDB Atlas Administration API (check paused option).If you believe there is an error in billing or have any questions regarding your Atlas bill, please contact the Atlas in-app chat support team as they’ll have more insight into your Atlas account.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
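On the scheduling question, a hedged sketch of pausing a dedicated (M10+) cluster through the Atlas Administration API mentioned above; keys, project ID and cluster name are placeholders, and the same call with "paused": false resumes it:

curl --user "{PUBLIC_KEY}:{PRIVATE_KEY}" --digest \
  --header "Content-Type: application/json" \
  --request PATCH \
  "https://cloud.mongodb.com/api/atlas/v1.0/groups/{PROJECT_ID}/clusters/{CLUSTER_NAME}" \
  --data '{ "paused": true }'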
Is cluster active all the time?
2023-09-28T09:15:25.810Z
Is cluster active all the time?
202
null
[]
[ { "code": "", "text": "Why am I not able to load the sample data into my cluster?", "username": "dinesh_babu" }, { "code": "", "text": "Hi @dinesh_babu,Are you getting any errors when you try to load the sample data? Make sure you have enough space to load the data.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran Now I am able to load the sample data set by creating a new MongoDB account, thank you", "username": "dinesh_babu" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot able to load sample data
2023-09-28T09:43:05.588Z
Cannot able to load sample data
184
null
[ "connecting", "mongodb-shell" ]
[ { "code": "mongosh mongodb://<credential>@<hostname>:<port>/MongoServerSelectionError: connect ETIMEDOUT{\"t\":{\"$date\":\"2023-09-22T04:48:46.471Z\"},\"s\":\"W\",\"c\":\"DEVTOOLS-CONNECT\",\"id\":1000000034,\"ctx\":\"mongosh-connect\",\"msg\":\"Server heartbeat failure\",\"attr\":{\"connectionId\":\"<host>:<port>\",\"failure\":\"connect ETIMEDOUT <host>:<port>\",\"isFailFast\":false,\"isKnownServer\":true}}\n{\"t\":{\"$date\":\"2023-09-22T04:48:55.439Z\"},\"s\":\"W\",\"c\":\"DEVTOOLS-CONNECT\",\"id\":1000000034,\"ctx\":\"mongosh-connect\",\"msg\":\"Server heartbeat failure\",\"attr\":{\"connectionId\":\"<host>:<port>\",\"failure\":\"connection establishment was cancelled\",\"isFailFast\":false,\"isKnownServer\":true}}\n", "text": "I have installed and set up a DB deployment on a RedHat Linux VM. In the config file I already enable access from anywhere by setting bindIp to 0.0.0.0. when I try to connect the DB deployment using the VM by mongosh mongodb://<credential>@<hostname>:<port>/, it works fine. However, when I ran the same command on my window PowerShell, I got MongoServerSelectionError: connect ETIMEDOUT error. I am using SSL VPN because the port is only open inside certain network. Any suggestion on possible causes of the connection etimeout error or suggestion will be appreciated. Thank You!mongosh logs:", "username": "Samson" }, { "code": "", "text": "Sounds like you’re firewalled out. Almost certainly TCP/IP problems, whatever.", "username": "Jack_Woehr" }, { "code": "", "text": "can you please check if iptables/firewalls are blocking?", "username": "ROHIT_KHURANA" }, { "code": "", "text": "hi @ROHIT_KHURANA\n\nimage1405×129 5.46 KB\n\nis that what you mean? if I am not mistaken, the port should be listeningupdate:\n\nimage877×56 7.63 KB\n\nI tried talent on my window11 local machine. It seem like the connection failed on my local machine. The Chinese text is saying “cannot connect to host, port 3389 connection fail”", "username": "Samson" }, { "code": "systemctl disable firewalld netstat -tulpn", "text": "Here are some updates on what I have done based on the comment so far:I have tried all the solutions above but still failed to establish the connection. any more suggestions will be appreciated.", "username": "Samson" }, { "code": "", "text": "Are you sure the problem isn’t on the Windows side?\nBuilt-in Windows firewall?\nIs this a company-managed Windows PC?\nCheck with your IT department whether or not you can connect to arbitrary remote ports.", "username": "Jack_Woehr" }, { "code": "", "text": "@Jack_Woehr Thank you for your reply again. I am using my own PC. I believe the problem should be on the server side since I tried to establish a connection with Mongo Atlas and it works fine.", "username": "Samson" }, { "code": "", "text": "@Samson , it seems to me that there are small number of possibilties:All that remains, as far as I can guess, is that the SSL VPN you are on is firewalling you out of the MongoDB port on the Linux VM. Other than that, I’m currently out of guesses ", "username": "Jack_Woehr" }, { "code": "", "text": "@Jack_Woehr Thank you for summarizing all the possibilities. I will just mark your reply as the solution. In case someone has a similar problem, they can follow your suggestion.", "username": "Samson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Issue with connecting the db deployment remotely
2023-09-22T03:54:21.572Z
Issue with connecting the db deployment remotely
425
null
[ "node-js", "mongoose-odm" ]
[ { "code": "users.findOne()users.findOne()", "text": "i use cmd in terminal npm start and it start at port 5000 and also connect mongo db and when i brows localhost://5000 in browser, it open but when i sign up in it then in browser porcesssing only not any response.and when i see in terminal it show me error Express server is up and running on port 5000\nSuccessfully connected to db cars\nError in finding the user MongooseError: Operation users.findOne() buffering timed out after 10000ms\nat Timeout. (C:\\Users\\Irfan Ansari\\Desktop\\IssueTracker\\node_modules\\mongoose\\lib\\drivers\\node-mongodb-native\\collection.js:175:23)\nat listOnTimeout (node:internal/timers:564:17)\nat process.processTimers (node:internal/timers:507:7)\nError in finding the user MongooseError: Operation users.findOne() buffering timed out after 10000ms\nat Timeout. (C:\\Users\\Irfan Ansari\\Desktop\\IssueTracker\\node_modules\\mongoose\\lib\\drivers\\node-mongodb-native\\collection.js:175:23)\nat listOnTimeout (node:internal/timers:564:17)\nat process.processTimers (node:internal/timers:507:7)so pls solve it", "username": "Irfan_Ansari" }, { "code": "", "text": "C:\\Users\\Irfan Ansari>mongosh\nCurrent Mongosh Log ID: 63cbdffba0f2600a491b9ec3\nConnecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.2\nMongoServerSelectionError: Server selection timed out after 2000 msC:\\Users\\Irfan Ansari>", "username": "Irfan_Ansari" }, { "code": "", "text": "you can try this: Go to data services tab security on the left side and find network access inside that you choose to add the IP address and add 0.0.0.0/0 this IP address means you can access to database from any network address", "username": "52_Le_Qu_c_Thai" }, { "code": "users.findOne()", "text": "MongooseError: Operation users.findOne() buffering timed out after 10000ms\nat Timeout. (C:\\Users\\Hassan Dawood\\Documents\\commercehope-backend\\node_modules\\mongoose\\lib\\drivers\\node-mongodb-native\\collection.js:\n185:23)\nat listOnTimeout (node:internal/timers:569:17)\nat process.processTimers (node:internal/timers:512:7)", "username": "Hassan_Dawood" }, { "code": "", "text": "@Hassan_Dawood have solved the issue please", "username": "Anas_Backend_Engineer" } ]
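The two usual causes of this exact message are queries issued before mongoose.connect() has resolved (Mongoose buffers them and gives up after 10 s) and a local mongod that is not running at all, which is what the mongosh output above points to. A sketch of the connection pattern, with a placeholder URI and a hypothetical ./app module:

const mongoose = require("mongoose");
const app = require("./app"); // hypothetical Express app

async function start() {
  // fail fast instead of buffering model calls for 10 s
  await mongoose.connect("mongodb://127.0.0.1:27017/cars", {
    serverSelectionTimeoutMS: 5000,
  });
  console.log("Successfully connected to db cars");

  // only start handling requests once the connection is up
  app.listen(5000, () => console.log("Express server is up and running on port 5000"));
}

start().catch((err) => {
  console.error("Could not connect to MongoDB:", err.message);
  process.exit(1);
});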
Error in finding the user MongooseError: Operation `users.findOne()` buffering timed out after 10000ms
2023-01-21T12:48:09.835Z
Error in finding the user MongooseError: Operation `users.findOne()` buffering timed out after 10000ms
11,134
null
[ "aggregation", "crud" ]
[ { "code": "offersquantities._iduserIdarrayFiltersoffers{\n\t\"_id\" : ObjectId(\"61043db60d237d00049712ad\"),\n\t\"__v\" : 0,\n\t\"quantities\" : [\n\t\t{\n\t\t\t\"_id\" : ObjectId(\"61043db60d237d00049712ae\"),\n\t\t\t\"quantity\" : 500,\n\t\t\t\"aimedPrice\" : 3.25,\n\t\t\t\"offers\" : [ ]\n\t\t}\n\t],\n\t\"technology\" : [ ]\n}\noffersdb.positions.updateOne({\n \"_id\": ObjectId(\"61043db60d237d00049712ad\"),\n \"quantities._id\": ObjectId(\"61043db60d237d00049712ae\")\n}, {\n $addToSet: {\n \"quantities.$[qty].offers\": {\n userId: ObjectId('60d794ba17471000041a7ef2'),\n seen: false,\n offeredQuantityPrice: 45.5,\n realisticDeliveryTime: ISODate(\"2023-10-16T02:00:00.000+02:00\"),\n dateOffered: ISODate(\"2023-09-14T22:13:41.970+02:00\"),\n }\n }\n}, {\n arrayFilters: [\n { \"qty.offers.userId\": ObjectId('60d794ba17471000041a7ef2') }\n ],\n upsert: true\n})\n", "text": "I have this document and what I’m trying to do is to push new values into offers array, but to specific quantities._id. If it matches userId from arrayFilters it should just update that value in offers array.So here’s what I wrote, it does update, but it won’t create a new value in this offers array.", "username": "semperlabs" }, { "code": "", "text": "I have been thinking about this and was not able to come up with a better solution than the following:Use and ordered bulkWrite with 2 updateOne documents. The firstOne to handle the case where the new value has to be updated and the second updateOne to handle the case where the value needs to be added.", "username": "steevej" }, { "code": "userIdconst bulkOperations = [];\n\nbulkOperations.push({\n updateOne: {\n filter: {\n _id: ObjectId(positionId),\n 'quantities._id': ObjectId(quantityId),\n },\n update: {\n $pull: {\n 'quantities.$.offers': { userId: ObjectId(otherArgs.userId) },\n },\n },\n },\n});\n\nbulkOperations.push({\n updateOne: {\n filter: {\n _id: ObjectId(positionId),\n 'quantities._id': ObjectId(quantityId),\n },\n update: {\n $push: {\n 'quantities.$.offers': { ...otherArgs },\n },\n },\n },\n});\n\nawait Position.bulkWrite(bulkOperations, { ordered: true });\n", "text": "I actually did a similar thing, removed all values from the array and then insert a new one, since they have to be unique by userId", "username": "semperlabs" } ]
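For completeness, one possible shape of the ordered two-step bulkWrite described above: update the user's existing offer if there is one, otherwise push it. Untested sketch; positionId, quantityId, userId and newOffer stand for the values in the snippets above:

db.positions.bulkWrite([
  // 1. replace the user's existing offer, if present
  { updateOne: {
      filter: { _id: positionId, "quantities._id": quantityId },
      update: { $set: { "quantities.$[qty].offers.$[offer]": newOffer } },
      arrayFilters: [ { "qty._id": quantityId }, { "offer.userId": userId } ]
  } },
  // 2. push the offer only when that user has none in this quantity yet
  { updateOne: {
      filter: { _id: positionId,
                quantities: { $elemMatch: { _id: quantityId, "offers.userId": { $ne: userId } } } },
      update: { $push: { "quantities.$.offers": newOffer } }
  } }
], { ordered: true })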
How to update nested array using arrayFilters, but if it doesn't find a match it should insert new values
2023-09-20T08:55:59.428Z
How to update nested array using arrayFilters, but if it doesn&rsquo;t find a match it should insert new values
374
null
[ "node-js", "crud" ]
[ { "code": "", "text": "I have collection like this,\n{\n“_id”:“65141e29d2f4fcd9c160deet”,\n“CREATED_BY\":\"[email protected]”,\n“CONTACTS”:“”,\n“NOTES”:[\n{\n“DESCRIPTION”:“testing11”,\n“TYPE”:“IN”,\n“_id”:“65141e29d2f4fcd9c160d604”,\n“updatedAt”:\n“2023-09-27T12:21:05.198+00:00”,\n“createdAt”:“2023-09-27T12:21:05.198+00:00”\n}]\n}whenever I try to update the notes with _id createdAt and updatedAt is also getting updated.\nI use the below query to do that operation,db.mycollection.findOneAndUpdate({“_id”:“”, “NOTES._id”: NOTES[i]._id }, { $set: {“srType.$.NOTES”:obj} }, {new: true })", "username": "Rajalakshmi_R" }, { "code": "", "text": "I would be very surprised if that would be the case. Your sample document DOES NOT have a field named SR_NOTES. So you either redacted the query or the sample document. In this case, we really cannot help you because we have no way to know what part of the information you share is correct and which one is wrongly redacted.Please read Formatting code and log snippets in posts so that you documents and code can be used without further editing.", "username": "steevej" }, { "code": "", "text": "I have updated the code", "username": "Rajalakshmi_R" }, { "code": "", "text": "Thanks but it is still unusable without editing because it is still not formatted according to the link I provided.", "username": "steevej" }, { "code": "{\n\"_id\":\"65142cdf3978eb7d80bc965a\",\n\"ST_PARTY_SITE_ID\": 739731,\n \"ACTION\": \"submit\",\n \"NOTES\": [\n {\n \"TYPE\": \"INVOICE\",\n \"DESCRIPTION\": \"test 55454\",\n \n \"_id\": \"65143e3101b2a56e7e45022f\",\n \n \"updatedAt\":\"2023-09-27T15:21:03.815Z\",\n \"createdAt\":\"2023-09-27T15:21:03.815Z\"\n }\n ]\n}\ndb.mycollection.findOneAndUpdate({“_id”:“”, “NOTES._id”: NOTES[i]._id }, { $set: {“srType.$.NOTES”:obj} }, {new: true })\n", "text": "when I try to update the main collection, embedded documents behaves like it is getting inserted createdAt is always be same as updatedAt. but for me, createdAt should not get chnaged when we update the embedded document", "username": "Rajalakshmi_R" }, { "code": "{ $set: {“srType.$.NOTES”:obj} }", "text": "When you write{ $set: {“srType.$.NOTES”:obj} }what you actually asking is to replace the given array element with the object obj. So what ever fields and values the object obj has will end up stored in the document. You may use the dot notation to set the individuals fields you what. If $mergeObjects can be used in this context (something worth testing) then it would be less cumbersome than the dot notation.", "username": "steevej" } ]
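A sketch of the dot-notation approach described in the last reply, assuming only DESCRIPTION and TYPE of one note should change; createdAt is simply never written, and updatedAt is set explicitly (docId, noteId and the new values are placeholders):

db.mycollection.findOneAndUpdate(
  { _id: docId, "NOTES._id": noteId },
  { $set: {
      "NOTES.$.DESCRIPTION": newDescription,
      "NOTES.$.TYPE": newType,
      "NOTES.$.updatedAt": new Date()
  } },
  { new: true }  // as in the original mongoose-style call
)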
Timestamps in nested objects
2023-09-27T15:11:41.028Z
Timestamps in nested objects
308
null
[ "queries", "compass" ]
[ { "code": "\n {\n \"Status\": \"Completed\",\n \"Score\": [\n {\n \"score\": {\n \"$numberLong\": \"25\"\n },\n \"title\": \"Screen Test\",\n \"maxScore\": {\n \"$numberLong\": \"41\"\n }\n },\n {\n \"score\": [\n {\n \"score\": {\n \"$numberLong\": \"5\"\n },\n \"comments\": [\n {\n \"text\": \"Range of grammar structures\"\n },\n {\n \"text\": \"Errors\"\n },\n {\n \"text\": \"Range and control of vocabulary\"\n },\n {\n \"text\": \"Control of mechanics\"\n }\n ],\n \"maxPoints\": {\n \"$numberLong\": \"5\"\n },\n \"description\": \"Test\",\n \"_id\": \"60e061d1b1aab9d7e4be0525\",\n \"label\": \"Language\"\n },\n {\n \"score\": {\n \"$numberLong\": \"5\"\n },\n \"comments\": [\n {\n \"text\": \"Chat management\"\n },\n {\n \"text\": \"Coherence and cohesion\"\n }\n ],\n \"maxPoints\": {\n \"$numberLong\": \"5\"\n },\n \"description\": \"Test\",\n \"_id\": \"60e061d1b1aab9d7e4be0523\",\n \"label\": \"Discourse\"\n },\n {\n \"score\": {\n \"$numberLong\": \"5\"\n },\n \"comments\": [\n {\n \"text\": \"Positive relationship\"\n },\n {\n \"text\": \"Managing feelings\"\n },\n {\n \"text\": \"Age/culture/gender awareness\"\n },\n {\n \"text\": \"Interactive strategies\"\n }\n ],\n \"maxPoints\": {\n \"$numberLong\": \"5\"\n },\n \"description\": \"test\",\n \"_id\": \"60e51ae3c80f6a2528e0589d\",\n \"label\": \"Interpersonal\"\n },\n {\n \"score\": {\n \"$numberLong\": \"5\"\n },\n \"comments\": [\n {\n \"text\": \"Understanding and addressing purpose of task\"\n }\n ],\n \"maxPoints\": {\n \"$numberLong\": \"5\"\n },\n \"description\": \"test\",\n \"_id\": \"60e51ae3c80f6a2528e0589c\",\n \"label\": \"Solution\"\n }\n ],\n \"title\": \"Conduent Chat Assessment\"\n },\n {\n \"score\": [\n {\n \"score\": {\n \"$numberLong\": \"5\"\n },\n \"comments\": [],\n \"maxPoints\": {\n \"$numberLong\": \"5\"\n },\n \"description\": \"Pronunciation\",\n \"_id\": \"5bced1234f238d8eb40b84f5\",\n \"label\": \"Pronunciation\"\n },\n {\n \"score\": {\n \"$numberLong\": \"5\"\n },\n \"comments\": [],\n \"maxPoints\": {\n \"$numberLong\": \"5\"\n },\n \"_id\": \"5bced1234f238d8eb40b84f4\",\n \"label\": \"Language\"\n },\n {\n \"score\": {\n \"$numberLong\": \"5\"\n },\n \"comments\": [],\n \"maxPoints\": {\n \"$numberLong\": \"5\"\n },\n \"_id\": \"5bced1234f238d8eb40b84f3\",\n \"label\": \"Discourse\"\n },\n {\n \"score\": {\n \"$numberLong\": \"3\"\n },\n \"comments\": [],\n \"maxPoints\": {\n \"$numberLong\": \"5\"\n },\n \"_id\": \"5bced1234f238d8eb40b84f2\",\n \"label\": \"Interactive\"\n }\n ],\n \"title\": \"Voice Assessment\"\n }\n ],\n \"PDF\": \"\",\n \"Assessment Order ID\": \"12345\",\n \"Candidate Email Address\": \"[email protected]\",\n \"Assessment Name\": \"Screentest-Chat-VA\",\n \"Completed Date\": \"2023-09-22T20:08:00.725Z\"\n }\n \n {\nStatus: 'Completed',\nScore: [\n {\n score: 25,\n title: 'Screen Test',\n maxScore: 41\n },\n {\n score: [\n {\n score: 5,\n comments: [\n {\n text: 'Range of grammar structures'\n },\n {\n text: 'Errors'\n },\n {\n text: 'Range and control of vocabulary'\n },\n {\n text: 'Control of mechanics'\n }\n ],\n maxPoints: 5,\n description: 'Test',\n _id: '60e061d1b1aab9d7e4be0525',\n label: 'Language'\n },\n {\n score: 5,\n comments: [\n {\n text: 'Chat management'\n },\n {\n text: 'Coherence and cohesion'\n }\n ],\n maxPoints: 5,\n description: 'Test',\n _id: '60e061d1b1aab9d7e4be0523',\n label: 'Discourse'\n },\n {\n score: 5,\n comments: [\n {\n text: 'Positive relationship'\n },\n {\n text: 'Managing feelings'\n },\n {\n text: 'Age/culture/gender awareness'\n 
},\n {\n text: 'Interactive strategies'\n }\n ],\n maxPoints: 5,\n description: 'test',\n _id: '60e51ae3c80f6a2528e0589d',\n label: 'Interpersonal'\n },\n {\n score: 5,\n comments: [\n {\n text: 'Understanding and addressing purpose of task'\n }\n ],\n maxPoints: 5,\n description: 'test',\n _id: '60e51ae3c80f6a2528e0589c',\n label: 'Solution'\n }\n ],\n title: 'Conduent Chat Assessment'\n },\n {\n score: [\n {\n score: 5,\n comments: [],\n maxPoints: 5,\n description: 'Pronunciation',\n _id: '5bced1234f238d8eb40b84f5',\n label: 'Pronunciation'\n },\n {\n score: 5,\n comments: [],\n maxPoints: 5,\n _id: '5bced1234f238d8eb40b84f4',\n label: 'Language'\n },\n {\n score: 5,\n comments: [],\n maxPoints: 5,\n _id: '5bced1234f238d8eb40b84f3',\n label: 'Discourse'\n },\n {\n score: 3,\n comments: [],\n maxPoints: 5,\n _id: '5bced1234f238d8eb40b84f2',\n label: 'Interactive'\n }\n ],\n title: 'Voice Assessment'\n }\n],\nPDF: '',\n'Assessment Order ID': '12345',\n'Candidate Email Address': '[email protected]',\n'Assessment Name': 'Screentest-Chat-VA',\n'Completed Date': '2023-09-22T20:08:00.725Z'\n}\n}\n", "text": "Hi,\nI’m trying to retrieve JSON document from the mongodb using Compass. I queried using the “_id” and copied the document. Below is the document I see:My expectation is that I should not see $numberLong as I’ve not inserted those values. I tried the same using Mongoshell by running the command:\n.find({_id:“req-1695414301196-60dadd30-f6e0-4e73-b538-0ff68cd30dcb”});\nI got the below result:Result from mongshell appears fine. How could I get a similar result from compass? I’m on Mac and compass version is Version 1.39.4 (1.39.4) which is the latest.", "username": "Anil_Kuppa" }, { "code": "", "text": "When you do the export, have you tried changing the Advanced JSON format setting? Example:\nimage1118×838 68.1 KB\nRegards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Procedure noted here. However, for Relaxed Extended JSON the note on the docs state:This format is not recommended for data integrity.", "username": "Jason_Tran" }, { "code": "", "text": "Thanks for the reply. I’m not talking about export, but I’m copy pasting directly from Mongo Compass.\n\nimage2178×870 66.3 KB\n\nI’m copying the document using the icon and pasting it in text editor.", "username": "Anil_Kuppa" }, { "code": "abInt32Int64{\n \"_id\": {\n \"$oid\": \"6514dafb852b55cce0fb68c3\"\n },\n \"a\": 1,\n \"b\": {\n \"$numberLong\": \"1\"\n }\n}\nmongosh", "text": "Gotcha. Thanks for clarifying. I think it may have to do with the data type, for example:image2850×1164 233 KBField a and b with Int32 and Int64 respectively. When copying these and pasting onto a text editor:Perhaps using the embedded mongosh might be work for you? (as per screenshot above).Regards,\nJason", "username": "Jason_Tran" } ]
Returns "$numberLong" instead of number
2023-09-25T16:59:28.127Z
Returns &ldquo;$numberLong&rdquo; instead of number
339
null
[ "aggregation", "java" ]
[ { "code": "{\n\"field_1\" : \"value_1\"\n\"field_2\": null\n}\n{\n\"field_1\" : \"value_1\"\n\"field_2\": \"my_default_value\"\n}\n", "text": "How to set a default value to a projected field if it is null or field not exists in the mongo document using java?Document:Expected Result:", "username": "Vignesh_Paulraj" }, { "code": "<field>: <expression>Adds a new field or resets the value of an existing field.", "text": "Developers concerned about default values for fields use Mongoose.\nAs far as projecting a value to a field see the docs<field>: <expression>\nAdds a new field or resets the value of an existing field.", "username": "Jack_Woehr" } ]
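A hedged Java sketch of that idea using $ifNull in an aggregation projection, which covers both the null value and the missing field; the database, collection and connection string are placeholders, and the field names follow the example above:

import static com.mongodb.client.model.Aggregates.project;

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import java.util.Arrays;
import org.bson.Document;

public class DefaultValueProjection {
    public static void main(String[] args) {
        MongoCollection<Document> coll = MongoClients.create("mongodb://localhost:27017")
                .getDatabase("test").getCollection("docs");

        // $ifNull substitutes "my_default_value" when field_2 is null or absent
        Document fields = new Document("field_1", 1)
                .append("field_2", new Document("$ifNull",
                        Arrays.asList("$field_2", "my_default_value")));

        coll.aggregate(Arrays.asList(project(fields)))
            .forEach(doc -> System.out.println(doc.toJson()));
    }
}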
How to set a default value to a projected field if it is null or not exists in the mongo document using java?
2023-09-27T15:54:21.254Z
How to set a default value to a projected field if it is null or not exists in the mongo document using java?
372
null
[]
[ { "code": "", "text": "I am trying to login to MDBU, but the login page is not working.After entering my one-time code, it would normally log me in and redirect back to the last page I was onNow it just is a white screen. This might be an issue for all login types, but for me now it is for MDBU.please check and fix the issue", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Also, it does not log me in because if I try to go back to the page I was browsing, I still see no login. (there were times only the redirect was broken, not this time)", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hi @Yilmaz_Durmaz - Would you mind emailing [email protected] regarding this?I’ve tested the university link on a few browsers on my own end but was able to get through.Regards", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_TranBefore doing that, I tried it again. using the “login” page from MDBU is definitely broken. Or it is the login page that is broken if extra parameters are given.I used the direct link to login page from “Cloud: MongoDB Cloud” link. It has no redirection parameter and it worked. I visited MDBU but did not auto-detect my login. Then I used the sign-in button there, and finally detected login.Anyways, I will send an email to “Learn” but maybe you too let the login page team know about this incident.Thanks.", "username": "Yilmaz_Durmaz" } ]
Urgent attention needed; MDBU sign-in page does not work
2023-09-27T19:54:14.331Z
Urgent attention needed; MDBU sign-in page does not work
325
null
[ "node-js", "mongoose-odm" ]
[ { "code": "saslprep", "text": "When I am using MongoDB connection with mongoose getting this error. But when I am using the SHA-1 mechanism the error get vanishes. I have also installed the saslprep", "username": "Emon_Reza" }, { "code": "", "text": "We had this connection issue from our AWS lambda (nodeJS / typescript) after upgrading the mongo client driver to (“Version”: “5.7.0”) along with some changes to the ESBuild to reduce the package size. However, it’s a weird issue since we only had this issue in US region. In Europe and APAC regions, our lambda are working fine. After switching the lambda in US region to use passwordless authentication using AWS IAM role, this issue is resolved.Hope it helps,Regards,\nRay", "username": "Ray_Chew" } ]
(0 , deps_1.saslprep) is not a function
2023-05-09T12:02:46.365Z
(0 , deps_1.saslprep) is not a function
832
null
[]
[ { "code": "", "text": "In my project I have requirement of two separate database. Hence have create two separate realm database and I am inserting/updating/getting data by using Realm.getInstance(realmConfiguration). I have specified different models for both database but when I check the database in realm studio I can both databse schema have all the models which i have specified in the two different schema which 0 rows.For example; realm1 I have two model model1 and model2 similarly for real2 db I have two model model3 and model4 and data is being insert in the specific model and also i can easily access this data from different model. But when i open these db files in realm studio I can see.\nRealm1 db have 4 model model1, model2, model3 model4 and out of that model1 and model2 have data and model3 and model4 is empty.\nSimilarly, Realm 2 DB have 4 model model1, model2, model3 and model4, and in model3 and model4 have data while model1 and model2 is empty.So, i am bit confuse why these models are creating in each other with 0 data? ideally realm1 should only have 2 molde model1 and model2 and similarly realm 2 should only have 2 model model3 and model4.Can anyone guide me how I can fix this issue and make is separate for both the realm schema.", "username": "Akshay_Jani" }, { "code": "", "text": "There’s likely an error in the code causing that, or perhaps the order in which tasks are being done.As a test using Swift on a Mac, I created a fresh app with two Realm objects. Type0 and Type1.In the UI, there are two buttons, button0 creates a Realm called realm0.realm and button1 creates a Realm realm1.realm.In code realm0 is initted with a config that only specified the Type0 object, likewise, realm1 is initted with a config that only specifies Type1 object.Upon pressing button 0, a realm file is created (realm0.realm) and only contains the type0 object. Pressing button 1 creates a realm1.realm file and only contains type1 object.So, based on the description, I cannot duplicate the issue. Can you include some code that duplicates it?", "username": "Jay" }, { "code": " val builder = RealmConfiguration.Builder()\n builder.name(realm1)\n .schemaVersion(1.2.0)\n .allowQueriesOnUiThread(true)\n .allowWritesOnUiThread(true)\n .addModule(realmModule1())\n .migration(MyMigration())\n builder.encryptionKey(realmKeyByte)\n config1 = builder.build()\n Realm.setDefaultConfiguration(config1)\n Realm.compactRealm(config1)\n@RealmModule(\n library = true,\n classes = [\n model1::class,\n model2::class,\n ]\n)\nclass realmModule1\nval builder = RealmConfiguration.Builder()\n builder.name(realm2)\n .schemaVersion(1)\n .allowQueriesOnUiThread(true)\n .allowWritesOnUiThread(true)\n .addModule(realmModule2())\n builder.encryptionKey(realmKeyByte)\n config2 = builder.build()\n Realm.compactRealm(config2)\n@RealmModule(\n library = true,\n classes = [\n model3::class,\n model4::class,\n ]\n)\nclass realmModule2\n", "text": "Hi Jay,Below is my code. 
Please have a look:realm1 database:This is realmModule1 which added for realm1 databaserealm2 database:This is realmModule2 which added for realm1 databaseSo, here you noticed that realm1 database is my default realm database and realm2 is my secondary database which I will access by the Realm.getInstance(realmConfig).Please advice if there is any issue in above code", "username": "Akshay_Jani" }, { "code": "schemaVersion.schemaVersion(1.2.0)@RealmModule(library = false, classes=[Model1::class.java])\ndata class Module1(val someString: String) { \n //...\n}\n\n@RealmModule(library = false, classes=[Model2::class.java])\ndata class Module2(val someString: String) {\n //...\n}\n\nval configOne = RealmConfiguration.Builder()\n .name(\"first.realm\")\n .modules(Module1())\n .build()\n\nval configTwo = RealmConfiguration.Builder()\n .name(\"second.realm\")\n .modules(Module2())\n .build()\n\nval realm1 = Realm.getInstance(configOne)\nval realm2 = Realm.getInstance(configTwo)\n", "text": "Perhaps off topic, but I think schemaVersion requires an integer (?).schemaVersion(1.2.0)Also, I believe the default module includes all Realm objects defined in your application. If you want to use your own realms, you would define realm1 and realm2 and specify the objects they would contain.Something like this", "username": "Jay" } ]
Two realm database have same schema
2023-09-25T15:53:38.925Z
Two realm database have same schema
386