image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [] | [
{
"code": "",
"text": "Hey,\nI am trying to install mongo version4.0.28 on Centos9 community edition server.updated the mongodb-org.repo as below\n[mongodb-org-4.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-4.0.ascand using sudo dnf install mongo-org command to install mongo, and facing below errorMongoDB Repository 1.6 kB/s | 390 B 00:00\nErrors during downloading metadata for repository ‘mongodb-org-4.0’:Please help me with the issue.Thank You",
"username": "aditya_k2"
},
{
"code": "",
"text": "Hi @aditya_k2 and welcome in the MongoDB Community! MongoDB 4.0 reached end of life in April 2022 so my first advice would be to upgrade without waiting.See the supported platform for Centos:And the MongoDB version policy:MongoDB Legacy Support PolicyFrom what I see on the MongoDB Download website:\nimage920×653 30.1 KB\nLooks like Centos 9 isn’t supported for such an old version.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to install mongodb version 4.0.28 on centos9 | 2023-08-02T11:42:04.165Z | Unable to install mongodb version 4.0.28 on centos9 | 574 |
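For reference, a repo file for a release that does support RHEL/CentOS 9 follows the same pattern as the one quoted in the thread; this is a hedged sketch assuming MongoDB 6.0 as the target version, so verify the exact URLs against the official install docs before use:

```ini
# /etc/yum.repos.d/mongodb-org-6.0.repo  (sketch only; paths assume the 6.0 series on RHEL 9)
[mongodb-org-6.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/9/mongodb-org/6.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc
```

After adding the file, the install command would be `sudo dnf install -y mongodb-org`.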
null | [] | [
{
"code": "httpconst http = context.services.get(\"http\");Error: Cannot access member 'get' of undefined",
"text": "Hello,I cannot find any information anywhere on how to make an http call from within a MongoDB Atlas function. Here’s what I have tried so far:Nothing else comes to mind. So how to make an http request from a MongoDB Atlas function?",
"username": "Vladimir"
},
{
"code": "const http = context.services.get(\"http\");Error: Cannot access member 'get' of undefinedcontext.services.get",
"text": "Hi @Vladimir,const http = context.services.get(\"http\");\nThis has failed with an error: Error: Cannot access member 'get' of undefinedThe error is being returned because the context.services.get method is deprecated.Does the following work for you instead : https://www.mongodb.com/docs/atlas/app-services/functions/context/#std-label-context-httpRegards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you very much Jason!",
"username": "Vladimir"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to make an http call from an Atlas function? | 2023-07-07T15:21:33.855Z | How to make an http call from an Atlas function? | 506 |
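For anyone landing on this thread, a minimal sketch of the approach from the linked docs, using the built-in context.http client inside an Atlas Function (the URL is a placeholder):

```javascript
// Atlas Function sketch, assuming the built-in context.http client
exports = async function () {
  // GET request; context.http also exposes post, put, delete, etc.
  const response = await context.http.get({ url: "https://api.example.com/data" });

  // response.body is binary; decode the text and parse it as (E)JSON
  return EJSON.parse(response.body.text());
};
```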
null | [] | [
{
"code": "",
"text": "I followed the steps here to setup Unified AWS Access through AWS IAM authentication.However when I kickoff my lambda I get the following error:MongoError: Could not find user “arn:aws:sts::***:assumed-role/test-role/” for db “$external”It seems similar to this issue where its trying to use the STS role instead of the IAM role I registered.Any ideas what is wrong? I am using the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY for username and password, setting the authSource to $external and setting the authMechanism to MONGODB-AWS.Thanks",
"username": "Dev_Brett_Stearns"
},
{
"code": "",
"text": "Figured it out. I needed to manually add a database user for the IAM role, something that the documentation does not mention",
"username": "Dev_Brett_Stearns"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoError: Could not find user "arn:aws:sts::****:assumed-role/test-role/*" for db "$external" | 2023-07-31T08:07:03.577Z | MongoError: Could not find user “arn:aws:sts::****:assumed-role/test-role/*” for db “$external” | 594 |
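To make the resolution concrete: the database user in Atlas must be created for the IAM role's ARN (not the STS assumed-role ARN). A hedged Node.js sketch of the connection follows; the host name is a placeholder, and the credential-from-environment behaviour is the documented fallback when no username/password is given in the URI:

```javascript
// Node.js driver sketch; with MONGODB-AWS and no credentials in the URI, the driver
// reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_SESSION_TOKEN from the
// environment (e.g. the Lambda execution role)
const { MongoClient } = require("mongodb");

const uri =
  "mongodb+srv://cluster0.example.mongodb.net/" +
  "?authSource=%24external&authMechanism=MONGODB-AWS";

const client = new MongoClient(uri);
```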
null | [
"database-tools",
"backup"
] | [
{
"code": "mongodumpmongodump --uri=theuri -d=dbName -o=outputPath --gzip --numParallelCollections=10_id*.metadata.jsonE11000mongorestore_idE11000mongorestore --uri=\"\" --gzip --numParallelCollections 10 --numInsertionWorkersPerCollection 10 -d Name ./2023-08-01T03:27:19.666+0530 73379560 document(s) restored successfully. 0 document(s) failed to restore.",
"text": "I am using mongodump to back up the entire database, which is around 200 GB in size. My command looks like this: mongodump --uri=theuri -d=dbName -o=outputPath --gzip --numParallelCollections=10.After the dump is complete, I have a script that decompresses the gzipped files back to JSON format, removes the _id index from all *.metadata.json files, and then encodes them back to gz format. This process is necessary to address the E11000 error that occurs when I try to reimport using mongorestore. If I don’t remove the _id index, the restore process throws the E11000 error and only a few hundred thousand out of the 70 million documents are imported successfully.The restore process itself works fine for all collections, and all documents are reimported correctly without any failures. However, I have encountered an issue where the views I created in the original database are converted into collections after reimporting from the backup. I’m currently trying to understand why this is happening. I suspect it’s because I am changing the metadata.json? I only edit the indexes field of the metadata; Source: https://github.com/legendhimself/Mongo_RustBackup/blob/main/src/utils/mod.rs#L80;Here’s the command for restore\nmongorestore --uri=\"\" --gzip --numParallelCollections 10 --numInsertionWorkersPerCollection 10 -d Name ./\nOutput: 2023-08-01T03:27:19.666+0530 73379560 document(s) restored successfully. 0 document(s) failed to restore.",
"username": "Voxelli"
},
{
"code": "",
"text": "Views will not have UUID\nCheck this links.May help\nMongorestore preserveUUID\nhttps://jira.mongodb.org/browse/TOOLS-2531",
"username": "Ramachandra_Tummala"
},
{
"code": "--drop--preserveUUID--preserveUUID used but no UUID found in trades.metadata.json.gz, generating new UUID for Sofi-Test.trades_id_idE11000",
"text": "the attached link uses --drop and --preserveUUID\nUsing the above with the mongorestore command gives:\n--preserveUUID used but no UUID found in trades.metadata.json.gz, generating new UUID for Sofi-Test.tradesUUID here requires _id in the metadata right?\nBut I am removing _id to fix E11000 error.",
"username": "Voxelli"
}
] | Mongorestore restores view as collection from the backup done by mongodump | 2023-07-31T22:13:07.850Z | Mongorestore restores view as collection from the backup done by mongodump | 705 |
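A hedged sketch of the metadata edit described in the thread (file names are placeholders; a dump's metadata file stores index specs in an `indexes` array, and the default index is named `_id_`):

```bash
# strip the _id index spec from one collection's dump metadata, then re-gzip
# (only for files that actually contain an "indexes" array)
gunzip coll.metadata.json.gz
jq '.indexes |= map(select(.name != "_id_"))' coll.metadata.json > coll.metadata.json.tmp
mv coll.metadata.json.tmp coll.metadata.json
gzip coll.metadata.json
```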
null | [
"cxx"
] | [
{
"code": " if (_impl->uri_t == nullptr) {\n throw logic_error{error_code::k_invalid_uri, error.message};\n }\nuri::uri(bsoncxx::string::view_or_value uri_string) {\n bson_error_t error;\n\n _impl = stdx::make_unique<impl>(\n libmongoc::uri_new_with_error(uri_string.terminated().data(), &error));\n\n if (_impl->uri_t == nullptr) {\n throw logic_error{error_code::k_invalid_uri, error.message};\n }\n}\n",
"text": "To preface, my code has been working in linux for a while. My next task is to get our program working on windows now.I have installed manually compiled mongoc (v1.23.5) and mongocxx (v3.7.2), and gotten it building. When i run in debug mode, it works perfectly, but when i run my executable in release mode, it crashes when creating a uri.in the mongcxx lib, it crashes here:this is part of:error.message is ‘invalid utf-8 in uri’I have absolutely no idea why this is throwing an invalid uri error when it works perfectly fine in debug mode (and on linux). I have verified via debug that the string going into the function is good.Any help would be appreciated.After more research, it seems similar to https://www.mongodb.com/community/forums/t/mongo-cxx-driver-r3-6-0-using-std-string-on-mongocxx-uri-leading-to-crash/9582\nbut not quite the same, and still crashes after setting /MD",
"username": "Ian_Haber"
},
{
"code": "",
"text": "Hi @Ian_Haber ,I suspect this issue may instead be caused by the use of incompatible build configurations for the library vs. the test application. The build configuration for a Windows application must be consistent with the build configuration of all the libraries being linked against. To compile and link the test application using the Release configuration, it must link against the library which was also built using the Release configuration.Weird string crashes are the canonical symptom of such misconfigurations.",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "i built all of the mongo libraries in release. interestingly enough, the debug mode works, but release doesnt.",
"username": "Ian_Haber"
},
{
"code": "",
"text": "Okay, i’ve found the issue and fixed it.For future reference, building with visual studio seems to ignore -DCMAKE_BUILD_TYPE when building mongoc-driver and mongocxx-driver. you have to specify --config RelWithDebInfo as well when building them.",
"username": "Ian_Haber"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Crash in windows when creating uri::uri | 2023-08-01T20:28:29.759Z | Crash in windows when creating uri::uri | 481 |
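To make the fix from the thread concrete, a hedged sketch of the build commands: with the Visual Studio (multi-config) generator the configuration is chosen at build time via --config, not via CMAKE_BUILD_TYPE. The install prefix is a placeholder and other driver-specific options are omitted.

```bash
# configure, then build and install the chosen configuration explicitly
cmake -S . -B build -DCMAKE_INSTALL_PREFIX=C:/mongo-cxx-driver
cmake --build build --config RelWithDebInfo
cmake --install build --config RelWithDebInfo
```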
[
"aggregation",
"delhi-mug",
"bengaluru-mug",
"hyderabad-mug",
"mug-virtual-india"
] | [
{
"code": "MongoDB Enthusiast | AVP Technology @ FinarbMongoDB Champion | Founder @CosmoCloud MongoDB User Group Leader | Founder Webstring Services Software Engineer 3, MongoDBSolutions Architect, MongoDB",
"text": "\nvMUG-India1920×1080 222 KB\nJoin MongoDB India Virtual User Group on Saturday, July 15th for a day of learning, meeting other developers in the region, and taking back home some exciting MongoDB Swag!The event will focus on analyzing the sample_mflix dataset and follow up with interactive activities to learn how to build complex analytical queries with MongoDB’s Aggregation Framework and how to generate visualizations using MongoDB Charts.To RSVP - Please click on the “✓ RSVP” link at the top of this event page if you plan to attend. The link should change to a green highlighted button if you are going. You need to be signed in to access the button. We’ll start with an optional to attend overview session on MongoDB and Atlas App Services. Make sure you join at 10:30 AM to participate in the jumpstart session and get all the knowledge you need to attend the workshop later.We’ll begin the event with introductions and engaging ice-breakers. Then, we’ll explore the world of app-driven analytics and learn how to create complex analytical queries using MongoDB’s Aggregation Framework. Additionally, we’ll discover the process of generating visualizations through Charts.Once you’ve gained a solid understanding of the core concepts, we’ll provide opportunities to apply your newfound knowledge, participate in Breakout Room Activities and win exciting prizes This will give you the chance to craft complex aggregation queries, extract valuable insights, and potentially win some swag!We’ll also have a Hangout Room where one can join and network with other Attendees, Speakers, MongoDB Champions, Enthusiasts, MUG Leaders, and Staff Breakout Room Sessions:MongoDB VS Code Extension: Join Himanshu, Software Engineer 3 to see MongoDB for VS Code Extension in action! Learn how to navigate your data on MongoDB, build queries and aggregations and export them to popular programming languages, prototype with Playgrounds, and more.Get Introduced to MongoDB Atlas Search: With MongoDB Atlas Search, you can build a full-text search on top of your data in minutes. Stop wasting time on a separate search system alongside Atlas and start taking advantage of a seamless and scalable solution for building relevance-based application features. Join @viraj_thakrar’s session to get a quick introduction to its capabilities.MongoDB Enthusiast | AVP Technology @ Finarb–MongoDB Champion | Founder @CosmoCloud –MongoDB User Group Leader | Founder Webstring Services –\nimage512×512 93.5 KB\nSoftware Engineer 3, MongoDB–\nimage800×800 51.9 KB\nSolutions Architect, MongoDB",
"username": "shrey_batra"
},
{
"code": "",
"text": "This looks great - looking forward to it!",
"username": "Veronica_Cooley-Perry"
},
{
"code": "",
"text": "We just announced our first Builder’s Session -MongoDB VS Code Extension Demo: Join Himanshu, Software Engineer 3 @ MongoDB to see MongoDB for VS Code Extension in action! Learn how to navigate your data on MongoDB, build queries and aggregations and export them to popular programming languages, prototype with Playgrounds, and more.More sessions are to be announced soon!",
"username": "Harshit"
},
{
"code": "",
"text": "Wow! Excited to be part of this event!!",
"username": "Pranam_Bhat"
},
{
"code": "",
"text": "Hey folks! Super excited to announce our second session for the Builders’ Session! MongoDB Atlas Search Demo: Join Viraj Thakrar, Founder, Webstring Technologies, and our MongoDB User Group Leader to learn all about Atlas Search and see it live in action!",
"username": "Satyam"
},
{
"code": "",
"text": "Hello Everyone!Gentle Reminder: The event is tomorrow at 10:30 AM and we are thrilled to have you all join us.Zoom Link: Launch Meeting - ZoomWe want to make sure everyone has a fantastic time, so please join us at 10:30 AM to ensure you don’t miss any of the sessions. We can also have some time to chat before the talks begin.To make sure you are all set for the workshop and challenge, we recommend a few pre-requisites:Don’t worry if you are busy; we will provide time during the workshop to assist you with these steps.If you have any questions, please don’t hesitate to ask by replying to this thread! Looking forward to seeing you all at the event! ",
"username": "Harshit"
},
{
"code": "",
"text": "Hey guys!\nA gentle reminder - we are starting in 5 minutes. Look forward to seeing you! ",
"username": "Satyam"
},
{
"code": "",
"text": "Can we get the recording of the session? Missed it due to some personal issues",
"username": "Ankit_Sharma3"
},
{
"code": "",
"text": "Hey @Ankit_Sharma3 - We are processing the recordings - will post them as soon as possible and notify everyone! ",
"username": "Harshit"
},
{
"code": "",
"text": "@Harshit still waiting for recordings, It’s been 15+ days.\nAny update?",
"username": "Karnish_Master"
},
{
"code": "",
"text": "Hey @Karnish_Master and @Ankit_Sharma3,\nWe are still working on how those recordings will be edited and published. However, in the meantime, you could see this Zoom recording which has a lot of pre-event conversations as well which you will have to forward.Video Conferencing, Web Conferencing, Webinars, Screen Sharing - Zoom\nPasscode: 3y3?f%CF",
"username": "Harshit"
}
] | India vMUG: Building App-Driven Analytics with MongoDB Aggregation, VS Code Extension and more! | 2023-06-16T01:28:37.082Z | India vMUG: Building App-Driven Analytics with MongoDB Aggregation, VS Code Extension and more! | 3,991 |
|
null | [
"compass",
"database-tools"
] | [
{
"code": "",
"text": "I am trying to import a JSON file onto my MongoDB collection. The file has UTF-8 encoding, appears to have solid syntax and more importantly, has no issues being imported via Studio 3T. But when I try doing that on MongoDB Compass, I get the following error: Parser cannot parse input: expected a value… Any help would be appreciated. Cheers!",
"username": "Chandrashekhar_M"
},
{
"code": "",
"text": "The fact that Studio 3T does not complain does not mean that the file is correct.Try to open the file with firefox, its parser it quite strict and error messages are usually useful.The jq utility might also be useful.If the above fails you may share the file with us.",
"username": "steevej"
},
{
"code": "",
"text": "If you created the data using 3T export mechanism, you should be aware that it doesnt produce the correct json format but if you try to import that file, again using 3T, then you will have no problem. however, if you try to import it using Compass or any other gui to manage compass, you will most likely get the error you’re encountering right now. because the exported json file using 3T doesnt produce the correct format for json.",
"username": "Ali_Ziya_CEVIK"
}
] | MongoDB Compass : Trouble reading JSON file | 2023-04-20T11:04:43.206Z | MongoDB Compass : Trouble reading JSON file | 2,442 |
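A quick way to apply the jq suggestion from the thread; if the file parses, the command prints nothing before the echo and exits 0, otherwise jq reports the line and column of the first syntax error (file name is a placeholder):

```bash
jq empty data.json && echo "valid JSON"
```

If the file parses cleanly but Compass still rejects it, it may be a framing issue: Compass's JSON import generally expects either a single top-level array of documents or newline-delimited documents.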
null | [
"aggregation",
"data-modeling"
] | [
{
"code": "UserCollection : { _id :1, cart:[{ itemsId:10 }], zipcode:{ userZipCodeId: 0000 } }ItemsCollection: [{ _id: 10, title: itemstitle }] ZipCodeCollection:[{ _id: 0000, placeName: NY,serviceProviders:[100,101,102] }] Providers :[ { _id: 100, title: Happy Store},{ _id: 101, title: ABC Store}] \t\t\t User.aggregate([\n \t\t\t \t{ $match: { _id: req.user._id } },\n\n \t\t\t \t{\n \t\t\t \t\t$lookup:\n \t\t\t \t\t{\n\t\t \t\t\t \tfrom: 'ItemsCollection',\n\t\t \t\t\t \tlocalField: 'cart',\n\t\t \t\t\t \tforeignField: '_id',\n\t\t \t\t\t \tas : \"cartList\"\n\t \t\t\t\t},\n\t \t\t\t},\n\n\n\t\t\t\t {\n\n \t\t\t \t\t$lookup:\n \t\t\t \t\t{\n\t\t \t\t\t \tfrom: 'zipcode',\n\t\t \t\t\t \tlocalField: 'zipcode.userZipCodeId',\n\t\t \t\t\t \tforeignField: '_id',\n\t\t \t\t\t \tas : \"zipcodeList\"\n\n\t \t\t\t\t},\t \t\n\t \t\t\t},\n\n\t \t\t\t{\n \t\t\t \t\t$lookup:\n \t\t\t \t\t{\n\t\t \t\t\t \tfrom: 'providers',\n\t\t \t\t\t \tlocalField: 'zipcodeList.providers',\n\t\t \t\t\t \tforeignField: '_id',\n\t\t \t\t\t \tas : \"providersList\"\n\n\t \t\t\t\t},\t\t \t\t\t\t\n\t \t\t\t},\n\t\n\t\t\t\t{\"$match\":{\"providersList.status\": true }},\n\t\t\t\t{\n\t\t\t\t\t$project:\n\t\t\t\t\t{\n\t\t\t\t\t\t_id:1,\n\t\t\t\t\t\tpincode:1,\n\t\t\t\t\t\temail:1,\n\t\t\t\t\t\tname:1,\n\t\t\t\t\t\tcartList:1,\n\t\t\t\t\t\tzipcodeList:1,\n\t\t\t\t\t\tcartList:1\n\n\t\t\t\t\t}\n\t\t\t\t}\t\n",
"text": "I have 4 collections, I want data from this collection for my Cart PageHere is the model of my collections\nUserCollection : { _id :1, cart:[{ itemsId:10 }], zipcode:{ userZipCodeId: 0000 } }ItemsCollection: [{ _id: 10, title: itemstitle }] ZipCodeCollection:[{ _id: 0000, placeName: NY,serviceProviders:[100,101,102] }] Providers :[ { _id: 100, title: Happy Store},{ _id: 101, title: ABC Store}]On my cart page, I want user cart Items data, check user Zipcode and search from ZipCodeCollection which are the service providers in the zip code, then get the provider info from the Provider collectionhere is my codehere it’s working, but take more time ( more than seconds )to execute I am afraid about the performance and bad design, how to increase this code performance and design well",
"username": "david_jk"
},
{
"code": "$lookupUsersItems$loopupfindOne$lookup",
"text": "Hi @david_jk and welcome in the MongoDB Community !Congratulation for finding and trying to eradicate─before it actually happened─the MongoDB #1 trap !You actually answered your problem yourself: it’s a terrible data model and this will result in terrible performance.The only way to fix this is to fix the data model.I highly recommend that you take the time to read the above doc and especially try to check the 2 talks at the bottom + the white paper.Your current data model looks like an SQL design. In MongoDB, data that is read together, should be stored together so $lookup is actually a very last resort.In your case here, Users and Items are most probably the only 2 real collections. The 2 others should be embedded and the entries eventually be duplicated across the different documents.Also, there are a few design pattern that could be helpful.For this, I recommend you have a look to:And in your case, I think you could benefit from the Extended Reference Pattern:\nimage842×774 179 KB\nIn your user cart, you would store the list of items IDs and quantities, but probably also a few other fields like the price, label and description for example.\nSo for example, when you open the cart in your app, you don’t need to $loopup the items because all the information you need is already in your user cart items. If the user clicks on an item to get more details about it (length, height, weight, technical specs…), then you can fetch directly (findOne) the details of that item. In the end, you might not even need $lookup at all.A comment though… Usually the carts are stored in a separated collection so they can benefit from a TTL index (cart expiration 30 min).Nothing of what I have said here is the absolute and unique truth. It’s all about your use cases and queries. Find these first. They will help design your documents.A quick example. Let’s say I have a collection of books and authors.\nIf I’m a bookstore, then authors should be embedded in books because my queries are all manipulating books before anything else.\nIf I’m creating Wikipedia for famous authors. Then it’s the opposite. Books should be embedded in my authors documents because they are just a detail in this system.\nAnd depending on the cardinality of both, it could also be a valid choice to keep them separated and maintain mono or bi-directional references.Schema design is an art . The reward is premium performances.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks for the reply and the resources links. I am facing the same issue and will benefit from your answer.",
"username": "JulioW"
},
{
"code": "",
"text": "Hi @JulioW and welcome in the MongoDB Community !I’m glad that I was able to help you! \nIt’s the entire point of having a forum like this. We are building a giant knowledge base and hopefully by saving one, we can help many. Cheers, \nMaxime.",
"username": "MaBeuLux88"
}
] | Multiple lookup in aggregate | 2021-05-31T10:58:52.522Z | Multiple lookup in aggregate | 17,597 |
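As an illustration of the Extended Reference Pattern recommended in the thread, a hedged sketch of what a cart document could look like once the frequently-read item fields are copied in; all field names and values here are made up for the example:

```javascript
// carts collection: one document per user cart, item summary duplicated from ItemsCollection
{
  _id: ObjectId("..."),
  userId: 1,
  createdAt: ISODate("2021-05-31T10:00:00Z"),   // can back a TTL index for cart expiration
  zipcode: "10001",
  items: [
    {
      itemId: 10,           // keep the reference for a detail-page findOne
      title: "itemstitle",  // duplicated so the cart page needs no $lookup
      price: 19.99,
      quantity: 2
    }
  ]
}
```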
null | [
"database-tools"
] | [
{
"code": "",
"text": "Hello,There is a database that consists of thousands of collections, and we want to export and import those collections. Is there an easy way to do this?Thanks.",
"username": "Samet_Turgut"
},
{
"code": "",
"text": "Mongodump as opposed to mongoexport?Or a bit of batch file / shell script scripting, getting a list of collections and then running the export command over it.If you need to change the restore locations you could use nsFrom and nsTo to map the names to the new locations when using mongorestore:",
"username": "John_Sewell"
},
{
"code": "",
"text": "This will be between different MongoDB versions, so we cannot use mongodump and mongorestore.",
"username": "Samet_Turgut"
},
{
"code": "",
"text": "Ahh, the other option I’ve seen was MongoSync but the limitations on that are V6 according to the docs.Scripting this should be pretty trivial, either with a hard coded list (I love using Excel for this kind of thing) or a dynamic script to export and restore.You could also use an ETL tool with no transforms, or if you’ve Kafka, setup some pipelines but that’s getting a little exotic.If you just need a copy, you could stop the old server, copy all the data files to a new server, start it up and then upgrade to the desired version, but this is not feasable on a regular basis.What versions are you going from/to?",
"username": "John_Sewell"
},
{
"code": "",
"text": "It’s going to be from 3.x to 4.xI guess we can do this with Notepad++\nIt allows selecting multiple lines, so we can copy collection names and add them to --collection=collection_name and --out=output_name.json flags that way, but that will be thousands of mongoexport commands to be run. I was just searching if there is an easier way to do this.",
"username": "Samet_Turgut"
}
] | Mongoexport for Thousands of Collections | 2023-08-02T07:41:20.972Z | Mongoexport for Thousands of Collections | 433 |
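A hedged sketch of the scripting approach suggested in the thread, assuming a 3.x source where the legacy mongo shell is available and that both the shell and mongoexport accept a connection string (otherwise pass --host/--db/--username instead). URI and output path are placeholders, and the loop assumes collection names without whitespace:

```bash
#!/bin/sh
# export every collection of one database to its own JSON file
URI="mongodb://user:pass@source-host:27017/dbName"
OUT="./export"
mkdir -p "$OUT"

for c in $(mongo "$URI" --quiet --eval 'db.getCollectionNames().join("\n")'); do
  mongoexport --uri="$URI" --collection="$c" --out="$OUT/$c.json"
done
```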
null | [] | [
{
"code": "",
"text": "We now use MongoDB version 4.4 but wants to upgrade to first 5.0 and then 6.0.But how does that work with App Services (Realm), do we need to do any changes in that project as well?\nOr is it just plug-and-play?",
"username": "Viktor_Nilsson"
},
{
"code": "",
"text": "Generally speaking, things should just work fine if you are on a dedicated cluster and not on an NVME. If you have a production app/load and a support contract, you can open a proactive ticket to have someone available as you make the update. We generally have not seen any issues around upgrading though as the entire platform needs to handle the full spectrum of MongoDB versions.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Okay, thanks. Will start with upgrading our staging env then.",
"username": "Viktor_Nilsson"
}
] | Mongo Realm - Does upgrading MongoDB version require any changes? 4.4 --> 5.0 --> 6.0 | 2023-08-02T09:09:57.379Z | Mongo Realm - Does upgrading MongoDB version require any changes? 4.4 –> 5.0 –> 6.0 | 313 |
null | [] | [
{
"code": "",
"text": "Hi, I am using community edition of 4.4.18 Version. I am not finding mongoperf utility under bin directory. Is it available only for enterprise edition?",
"username": "Ramya_Navaneeth"
},
{
"code": "mongoperfmongoperf",
"text": "I think it is deprectaedMongoDB 4.0 removes the mongoperf binary.",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thanks Ramachandra . Is there any other command to measure disk I/O performance like mongoperf",
"username": "Ramya_Navaneeth"
}
] | Is mongoperf utility available for Enterprise edition only? | 2023-08-01T07:17:34.902Z | Is mongoperf utility available for Enterprise edition only? | 513 |
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "I’ve been searching through the forums and Google, but can’t seem to find a straight forward answer to this question:I have two databases in Atlas: one production and one testing. We’ve created a bunch of Atlas functions and HTTP endpoints on our testing database and everything is working well. Now, we want to transfer them all over to our production database to start deploying our new app. However, I can’t find an easy way to do this without copying and pasting every function and recreating them on our production database.What is the easiest way to transfer or copy functions, HTTP endpoints and triggers between databases in Atlas?",
"username": "Noora_Chahine"
},
{
"code": "",
"text": "Hey @Noora_Chahine,I have two databases in Atlas: one production and one testingJust for clarification regarding the “two databases” - Are you referring to two databases within the same cluster or are you referring to two clusters? If 2 clusters, then are both of your clusters in the same Project?, Or do they belong to different Projects within the same MongoDB Organisation?Note: An organization can contain multiple projects.If your clusters/databases are within the same Project, then it’s relatively straightforward to make the necessary changes. All you need to do is edit the function and update the database and collection names accordingly. This should allow you to proceed smoothly without any major hindrances. If you have trouble doing so, please let us know.Since HTTP Endpoints and Triggers utilize the Atlas function in the background, the process should be relatively seamless.Look forward to hearing from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "@Kushagra_Kesav Thanks for the reply, but unfortunately, they’re on two different clusters under two different projects. I’m guessing there’s no way to just export them and import them between projects? ",
"username": "Noora_Chahine"
},
{
"code": "",
"text": "Hey @Noora_Chahine,Yes, I guess you will need to manually transfer them over to a different project.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Copy functions to new database | 2023-07-27T17:06:23.746Z | Copy functions to new database | 632 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "I have records for members of a club. There is one record for each member and it contains fields for when they joined the club and when they left. I would like to create a chart that shows total membership over time. What’s the best way to do this?",
"username": "George_Richardson"
},
{
"code": "",
"text": "I found something similar here:\nandAnd the answer by @ Asya_Kamsky or\n@ steevej would do what you wanted.I took it apart slightly to re-work it for my understanding, but in effect you want to create a list of days that each member was active, you have their start and end date, so if member 1 was active from 1 Jan 2023 to 5 Jan 2023 you want to get a list of days:1 Jan 2023\n2 Jan 2023\n3 Jan 2023\n4 Jan 2023\n5 Jan 2023And if member 2 was active from the 3rd to 4th:\n3 Jan 2023\n4 Jan 2023Combining this data you get\n1 Jan 2023\n2 Jan 2023\n3 Jan 2023\n3 Jan 2023\n4 Jan 2023\n4 Jan 2023\n5 Jan 2023And grouping by date you get membership per day:\n1 Jan 2023 : 1\n2 Jan 2023 : 1\n3 Jan 2023 : 2\n4 Jan 2023 : 2\n5 Jan 2023 : 1The way this is done in the linked example was to use the $range and $map operators to build a list of days to add to each start date to generate a list of days they are active.\nIf you subtract the start from end, you get time active, which you can then convert to days, you then map over this, adding each to the start date to build an array of days that the user was a member.If you then $unwind the array and group all data up, you get a collated list of days that a user was active.Mongo playground: a simple sandbox to test and share MongoDB queries onlineYou can simplify the above, but I spread things out a bit to let me run it in stages to check each stage.So we start with:\nThen calculate the days\nNow the map and range operators blow the days into an array of days each user is active:\nThe $unwind turns this into a list of dates that a user was active:\n\nimage720×256 4.57 KB\nFinally we can group this up, counting how many users were active on each date:\nWith a lot of members who are members are active for a long time this could get big quickly, you could re-calculate on a periodic basis and exclude members who have left before the start calculation point to just calculate it weekly or something.Also I didn’t take into account partial days or limits on the end, so you would need to adjust for your requirements.You didn’t post actual documents so other things to take away from the other posts are to STORE DATES AS DATES! And when doing operations on dates, remember that you need to use a scalar to the period of interest, be it days, hours, minute, seconds or ms etc.",
"username": "John_Sewell"
},
{
"code": "",
"text": "@John_Sewell thanks for that - great answer and just what I needed.A couple of pointers for anyone else attempting the same:",
"username": "George_Richardson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Producing a chart of total membership over time given joining an leaving dates | 2023-08-01T13:53:07.417Z | Producing a chart of total membership over time given joining an leaving dates | 728 |
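For readers wanting a concrete starting point, a hedged sketch of the pipeline described in the answer above, assuming MongoDB 5.0+ (for $dateDiff/$dateAdd) and date fields named joined and left that are stored as real dates; both field names are assumptions, and partial days and still-active members are not handled:

```javascript
db.members.aggregate([
  // build one array element per day of membership, inclusive of the join day
  { $set: { activeDay: { $map: {
      input: { $range: [0, { $add: [
        { $dateDiff: { startDate: "$joined", endDate: "$left", unit: "day" } }, 1 ] } ] },
      as: "d",
      in: { $dateAdd: { startDate: "$joined", unit: "day", amount: "$$d" } }
  } } } },
  { $unwind: "$activeDay" },                                // one document per member per day
  { $group: { _id: "$activeDay", members: { $sum: 1 } } },  // members active on each day
  { $sort: { _id: 1 } }
])
```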
null | [
"connector-for-bi"
] | [
{
"code": "systemLog:\n logAppend: false\n path: \"/var/log/mongosqld/mongosqld.log\"\n verbosity: 2\n\nsecurity:\n enabled: true\n\nmongodb:\n net:\n uri: \"192.168.0.101:8635\"\n auth:\n username: \"username I have added\"\n password: \"password I have added\"\n source: admin\n mechanism: \"SCRAM-SHA-1\"\n\nsecurity:\n enabled: true\n defaultSource: \"admin\"\n\nnet:\n bindIp: 127.0.0.1\n port: 3307\n ssl:\n mode: \"disabled\"\nsystemLog:\n path: '/var/log/mongosqld/mongosqld.log'\n quiet: false\n verbosity: 2\n logRotate: \"rename\"\n\nprocessManagement:\n service:\n name: mongosqld\n displayName: mongosqld\n description: \"BI Connector SQL proxy server\"\n2023-07-24T19:47:16.103+0400 I CONTROL [initandlisten] mongosqld starting: version=v2.14.8 pid=55062 host=mongodb-test\n2023-07-24T19:47:16.103+0400 I CONTROL [initandlisten] git version: a72154240816a45fd921fe5712dbb290aabd31ed\n2023-07-24T19:47:16.103+0400 I CONTROL [initandlisten] OpenSSL version OpenSSL 1.1.1f 31 Mar 2020 (built with OpenSSL 1.1.1f 31 Mar 2020)\n2023-07-24T19:47:16.103+0400 I CONTROL [initandlisten] options: {}\n2023-07-24T19:47:16.103+0400 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for mongosqld.\n2023-07-24T19:47:16.103+0400 I CONTROL [initandlisten]\n2023-07-24T19:47:16.105+0400 I NETWORK [initandlisten] waiting for connections at 127.0.0.1:3307\n2023-07-24T19:47:16.105+0400 I NETWORK [initandlisten] waiting for connections at /tmp/mysql.sock\n2023-07-24T19:47:21.106+0400 E NETWORK [initandlisten] unable to load MongoDB information: failed to create admin session for loading server cluster information: unable to execute command: server selection error: context deadline exceeded, current topology: { Type: Unknown, Servers: [{ Addr: localhost:27017, Type: Unknown, Average RTT: 0, Last error: connection() error occured during connection handshake: dial tcp 127.0.0.1:27017: connect: connection refused }, ] }\n",
"text": "Dear Team, we have a mongo database service purchased from G42 cloud name DDS, however now we are trying to connect database with tableau but we unable to do that due to number of error however now we have install BI connector over a linux machine under the same vpc and I am able to connect database with cli but with BI connector configuration file we are getting issue below is the configuration I have used and error error ",
"username": "Rahul_Sharma12"
},
{
"code": "",
"text": "Dear Team,Kindly assist us on this.",
"username": "Rahul_Sharma12"
},
{
"code": "",
"text": "Hey @Rahul_Sharma12,we have a mongo database service purchased from G42 cloud name DDSG42 cloud name DDS is not a MongoDB product. Please see their documentation for more on this subject.Using any MongoDB official tools on a non-genuine MongoDB server is not supported. Even though there are no visible issues, their correctness cannot be guaranteed.If you need a cloud-hosted database, I would encourage you to have a look at MongoDB Atlas, as this is created and supported by MongoDB. By opting for MongoDB Atlas, you can continue to leverage the official tools you are already familiar with .Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Mongodb BI connector error | 2023-07-24T15:48:34.277Z | Mongodb BI connector error | 682 |
null | [
"sharding",
"transactions"
] | [
{
"code": "WiredTigerHS.wt320G\t/data/WiredTigerHS.wt\nmongodb-sh1-3\n280G\t/data/WiredTigerHS.wt\nmongodb-sh2-1\n1.1T\t/data/WiredTigerHS.wt\nmongodb-sh2-2\n1.1T\t/data/WiredTigerHS.wt\nmongodb-sh2-3\n993G\t/data/WiredTigerHS.wt\nmongodb-sh3-1\n454G\t/data/WiredTigerHS.wt\nmongodb-sh1-2 \n307G\t/data/WiredTigerHS.wt\nmongodb-sh3-2\n442G\t/data/WiredTigerHS.wt\nmongodb-sh3-3\n600G\t/data/WiredTigerHS.wt\nminSnapshotHistoryWindowInSeconds",
"text": "Hi there.\nMongodb v5.0.18 production db (has been gradually updated from v4.0).\n750Gb sharded collection on 3 replicasets.Some time after the start of the reshardCollection process , a constant growth in WiredTigerHS.wt size begins.\nAfter a 24h of the process (~70-75% of progress), this size reaches more than 1 TB on one of the shards. Had to cancel process due to running out of space.minSnapshotHistoryWindowInSeconds parameter is set to 300sec (default).During the resharding, our service is still working, but less active than usual (users have been warned).\nIs it normal for the transaction history file to grow?\nIs there any way to solve this without resorting to increasing the disk size?\nWe do not use Point-in-Time backups, so the size of the stored transactions history is not important.\nWe are unable to update the mongodb version at the moment.Couldn’t find a similar situation anywhere.",
"username": "Oleg_Kolobov"
},
{
"code": "",
"text": "it’s internal to wiredtiger logic, not a mongodb employee so i can’t help. But check this:",
"username": "Kobe_W"
},
{
"code": "",
"text": "thanks, i read this topic. There is a different situation and it could not be solved: the database was restored from a snapshot.\nIt’s just that if this is a regular situation, it is not clear why the documentation does not indicate this - it only requires 1.2x the size of the collection free disk space on each shard member.",
"username": "Oleg_Kolobov"
}
] | File WiredTigerHS.wt size uncontrolled growth during resharding big collection | 2023-08-01T12:02:17.049Z | File WiredTigerHS.wt size uncontrolled growth during resharding big collection | 688 |
null | [
"aggregation"
] | [
{
"code": "db.Clans.aggregate(\n{\n \"$project\": {\n \"faction\": 1,\n \"region\": 1,\n \"memberCount\": 1,\n \"level\": 1\n }\n},\n{\n \"$group\": { \n \"_id\": {\n \"faction\": \"$faction\", \n \"region\": \"$region\"\n },\n \"clanCount\": { \"$sum\": 1 },\n \"memberCount\": { \"$sum\": \"$memberCount\" },\n \"averageLevel\": { \"$avg\": \"$level\" },\n \"averageMemberCount\": { \"$avg\": \"$memberCount\" },\n \"level1Clans\": {\n \"$sum\": {\n \"$switch\": {\n \"branches\": [\n { \"case\": { \"$eq\": [ \"$level\", 1 ] }, \"then\": 1 }\n ],\n \"default\": 0\n }\n }\n },\n \"level2Clans\": {\n \"$sum\": {\n \"$switch\": {\n \"branches\": [\n { \"case\": { \"$eq\": [ \"$level\", 2 ] }, \"then\": 1 }\n ],\n \"default\": 0\n }\n }\n },\n \"level3Clans\": {\n \"$sum\": {\n \"$switch\": {\n \"branches\": [\n { \"case\": { \"$eq\": [ \"$level\", 3 ] }, \"then\": 1 }\n ],\n \"default\": 0\n }\n }\n }\n }\n},\n{\n \"$project\": {\n \"faction\": \"$_id.faction\",\n \"region\": \"$_id.region\",\n \"clanCount\": \"$clanCount\",\n \"memberCount\": \"$memberCount\",\n \"level\": \"$averageLevel\",\n \"avgMemberCount\": \"$averageMemberCount\",\n \"clansPerLevel\": {\n \"1\": \"$level1Clans\",\n \"2\": \"$level2Clans\",\n \"3\": \"$level3Clans\",\n }\n }\n}\n)\n",
"text": "Hey all. I am trying to write an aggregation for statistics of Clan entities, where each stat is grouped by Region and Faction of that Clan entity. Additionally, each clan has a level, so I want to count how many clans in each region+faction grouping have each level.\nSo far I build an aggregation like this (slightly simplified for brevity):This aggregation works and gives expected data, but it is pretty verbose - and will become even more verbose if a clan can have more than 3 levels (currently it can’t, but it might in future).I was wondering if it can be made better. I tried googling and trying out stuff like $map and what not, but nothing worked so far - the aggregation I posted above is so far the only one that worked.",
"username": "Piotr_Brycko"
},
{
"code": "\"_id\" : { \"faction\" : \"$faction\" , \"region\" : \"$region\" , \"level\" : \"$level\" } ,\n\"levelClans\" : { \"$sum\" : 1 }\n\"_id\" : { \"faction\" : \"$_id.faction\" , \"region\" : \"$_id.region\" } ,\n\"clansPerLevel\" : {\n \"$push\" : { \"level\" : \"$_id.level\" , \"clans\" : \"$levelClans\" }\n}\n",
"text": "One idea that comes to mind is that you can try to do a first $group withThe a second $group withYou might need to change where and how you get your $avg.",
"username": "steevej"
},
{
"code": "",
"text": "Sorry for late reply, got distracted by other things.Thank you, this works! As you mentioned, I had to move average to a separate query and then merge the results in app memory, but it’s a query that runs 2-3 times a hour so it’s not a huge problem.",
"username": "Piotr_Brycko"
}
] | Count per value in existing grouping | 2023-01-25T11:00:40.499Z | Count per value in existing grouping | 501 |
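Putting the two $group stages from the accepted suggestion together, a hedged sketch of the full pipeline (the per-group average level would still need its own pass, as noted in the thread):

```javascript
db.Clans.aggregate([
  { $group: {
      _id: { faction: "$faction", region: "$region", level: "$level" },
      levelClans: { $sum: 1 },
      memberCount: { $sum: "$memberCount" }
  } },
  { $group: {
      _id: { faction: "$_id.faction", region: "$_id.region" },
      clanCount: { $sum: "$levelClans" },
      memberCount: { $sum: "$memberCount" },
      clansPerLevel: { $push: { level: "$_id.level", clans: "$levelClans" } }
  } }
])
```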
null | [] | [
{
"code": "",
"text": "Hello,I have a Relational Migrator project running locally that was working correctly until I clicked “Refresh Schema” in the “Manage Relational Model” window. Now, all the tables from the source database appear empty. The tables in the main diagram that used to have column names are now blank. I can’t see any of the table filters that had been created either.I looked in the diagnostic file that Relational Migrator generated and I can still see my previous tables, table filters, columns, etc. so I’m hoping that all my data hasn’t been lost. Has anyone experienced something like this?",
"username": "Cameron_McNair"
},
{
"code": "",
"text": "Hey @Cameron_McNair,I have a Relational Migrator project running locally that was working correctly until I clicked “Refresh Schema” in the “Manage Relational Model” window. Now, all the tables from the source database appear empty.Just to clarify a few details here, when you note the tables appear empty - Are you seeing the table names but no fields at all or you cannot see the table(s) at all? If you could send a screenshot (redacting any personal or sensitive information) to highlight this, that would help.Would you be also to provide the exact steps taken to try reproduce this behaviour?I looked in the diagnostic file that Relational Migrator generated and I can still see my previous tables, table filters, columns, etc. so I’m hoping that all my data hasn’t been lost.My assumption here is that you’ve yet to begin the migration but were preparing for it but please correct me if I am wrong. In saying so, when you state “Data hasn’t been lost” are you referring to the table filters / columns / mappings?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_Tran - thanks for looking into this.Are you seeing the table names but no fields at all or you cannot see the table(s) at all?I do see the table names but none of the fields. Here’s a screenshot.\n\nimage777×401 6.7 KB\nI think I figured out what causes the issue and it should be easy to replicate. Start with a working project connected to a SQL Server database. The database and all tables included in the migration should have CDC enabled. Then add a new table that does not have CDC enabled. It will initially show up correctly in the main diagram pane, but if you click “Manage” in the schema model pane then “Save” or “Refresh Schema” everything goes blank. It still doesn’t work even after enabling CDC for the new table or removing it from the migration.when you state “Data hasn’t been lost” are you referring to the table filters / columns / mappings?Yes that’s correct, I meant the mappings.Thanks again,\nCameron",
"username": "Cameron_McNair"
},
{
"code": "",
"text": "I do see the table names but none of the fields. Here’s a screenshot.@Cameron_McNair - Thanks for providing the screenshot and steps to reproduce it. The team have attempted to follow the steps but were not able to achieve the behaviour demonstrated in the screenshot. Any chance you could do a video recording of this behaviour? It’s possible that it is a bug but we will need to confirm this first. Happy to receive this as a DM as well (If you have any console logs, please send this via DM).Additionally to try help narrow down what is causing this, can you specify what versions of SQL Server and Relational Migrator you’re running?Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "integratedSecurity=true",
"text": "Thanks for providing the logs and video @Cameron_McNair,The team were able to reproduce the missing fields as per your screenshot and video and confirmed it was a bug. They’ll work on a fix but in the meantime you can check out the following workaround:If you connect via JDBC URL instead of the full form, you can enter a username and password, even if you have integratedSecurity=true in the URL. The username/password is ignored, but just having it there is enough to prevent the bug.Additionally, if possible, try avoid using that option in the meantime.Appreciate your patience.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Relational Migrator - all fields and mappings disappeared | 2023-07-27T20:53:15.675Z | Relational Migrator - all fields and mappings disappeared | 585 |
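For anyone applying the workaround above, a hedged example of the SQL Server JDBC URL form; host, port and database name are placeholders, and the username/password fields can be filled with any value since integrated security ignores them:

```
jdbc:sqlserver://myhost:1433;databaseName=mydb;integratedSecurity=true;encrypt=true;trustServerCertificate=true
```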
null | [
"replication"
] | [
{
"code": "",
"text": "hi,\nI have a replica set version 3.6.63 and almost every evening around 21:00 the server goes down, when I open the log file I see this message:2023-07-30T21:06:48.847+0300 I COMMAND [conn14689698] command local.oplog.rs command: find { find: “oplog.rs”, filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, maxTimeMS: 3000, $readPreference: { mode: “secondaryPreferred” }, $db: “local” } planSummary: COLLSCAN exception: operation exceeded time limit code:ExceededTimeLimit numYields:0 reslen:246 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_query 3417msHas anyone encountered this phenomenon? Why does the system scan the file oplog.rs?",
"username": "Amit_Faibish"
},
{
"code": "",
"text": "Try setting higher timeout.db.adminCommand( { setParameter: 1, maxTransactionLockRequestTimeoutMillis: 60000 });\nYour error contains COLLSCAN, which is a collection scan, meaning the server must read all the documents in the collection to answer the query. This is typically the cause of a slow query. You can follow these guidelines to create indexes which can improve query performance.Hope this helps!",
"username": "Vishal_Alhat"
},
{
"code": "",
"text": "OK. I know this well…\nThe question was why was LOCAL’s collection even scanned?\nIs this an automatic operation?",
"username": "Amit_Faibish"
}
] | Connection to MongoDB is unavailable | 2023-08-01T06:37:05.058Z | Connection to MongoDB is unavailable | 621 |
null | [
"replication"
] | [
{
"code": "2023-08-01T06:09:50.152+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to mhost1.foobar.net:270172023-08-01T06:09:50.152+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to mhost2.foobar.net:27017\"I REPL [repl-writer-worker-13] applied op: CRUD\"",
"text": "hi - we have an 8TB cluster, one primary, and normally two secondaries.After an unfortunate series of events, we had a primary which was the only member of it’s replica set, while the secondaries were offline rebuilding indices.Eventually, we observed that one of the secondaries appeared to have recovered, as it was responding on 27017 with no errors of note in it’s logs. The logs had many entries of this form:2023-08-01T06:09:50.152+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to mhost1.foobar.net:27017\n2023-08-01T06:09:50.152+0000 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to mhost2.foobar.net:27017Then, the secondary was added back back to the replica set with no voting and no priority. After that, we obseved that the primary had returned to a state where it appears to be processing op log entries from 2 days ago, with many entries of this form:\n\"I REPL [repl-writer-worker-13] applied op: CRUD\"While the primary is doing that , it’s not accepting connections on 27017. (And meantime, the secondary was failing to establish a connection to the primary, so at this point we’ve shut it down).Is this expected behavior? Is it possible that even though the secondary had no priority and no voting power, it nevertheless caused the primary to being behaving like a secondary?",
"username": "Deborah_Briggs"
},
{
"code": "",
"text": "even though the secondary had no priority and no voting power, it nevertheless caused the primary to being behaving like a secondary?i don’t quite get this. the info seems to say that the primary instance is replicating data to the newly added secondary (vote 0 and pri 0). What you mean by “the primay … like a secondary” ?",
"username": "Kobe_W"
}
] | Mongo 4.2 primary unexpectedly processing ops log entries | 2023-08-01T17:59:52.401Z | Mongo 4.2 primary unexpectedly processing ops log entries | 555 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "{\ncol1_id: \"123\",\ncol2_id: \"abc\",\n<metadata_CONT'D>\n}\n$lookup: {\n \"from\": \"col2\",\n \"let\": {\n \"id\": \"$col2_id\",\n <other variables>\n },\n \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"search_index\",\n \"compound\": {\n \"must\": [\n {\n \"phrase\": {\n \"path\": \"col2_id\",\n \"query\": \"$$id\"\n }\n },\n <other search conditions>\n ],\n }\n }\n },\n ],\n \"as\": \"col2_result\"\n }\n",
"text": "I have a search bar that needs to perform a conditional search where some of the conditions depend on another collection’s value. I have text input so I need to use the search index, rather then connecting via foreign/local id — since $search needs to happen as the first stage in the lookup pipeline.From the initial collection, I find all the documents that satisfy the initial conditions, then perform a lookup to the other collection based on those results.Let’s say this is what a doc looks like just before lookup:Using the id that connects us to the other collection,When I deliberately replace “$$id” with “abc”, the search and lookup work…so it’s just the $$let_variable that isn’t working. I’ve scoured the internet but google refuses to relinquish its secrets. I’m sure this is answered somewhere but I couldn’t find it in community, either, so can someone answer my question:Is this possible? — Using a let variable in the $search of a lookup?I’ve tried messing with the variable names and single $ as opposed to double $$, and random things like that, but nothing is working, so I’m led to believe it simply doesn’t work this way because of how it compiles when you run the query and it evaluates the searches and whatnot.Anyway, any help would be appreciated.At this point, I’m thinking of just getting all the results from the first bit of the query, then running a forEach and doing an aggregation on col2 with the doc.col2_id!",
"username": "Charlie_Buyas"
},
{
"code": "",
"text": "Hi @Charlie_Buyas I believe the behaviour you’ve described is detailed in the following SERVER ticket : https://jira.mongodb.org/browse/SERVER-71036My understanding is that at this stage the variables are not being resolved. I suggest monitoring this ticket for any changes but I believe the workaround you’ve described should work also.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you for your reply. At least I confirmed my suspicions — it’s not just me. Thanks for linking the ticket, I’ll keep an eye on it.",
"username": "Charlie_Buyas"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How can I utilize a 'let' variable in the $search operator of a lookup? | 2023-08-02T01:35:00.304Z | How can I utilize a ‘let’ variable in the $search operator of a lookup? | 623 |
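A hedged sketch of the per-document workaround mentioned in the thread (Node.js driver; collection and field names follow the example in the question):

```javascript
// run the $search lookup manually for each result of the first query
const col1Docs = await db.collection("col1").find({ /* initial conditions */ }).toArray();

for (const doc of col1Docs) {
  doc.col2_result = await db.collection("col2").aggregate([
    { $search: {
        index: "search_index",
        compound: { must: [
          { phrase: { path: "col2_id", query: doc.col2_id } }
          // ...other search conditions
        ] }
    } }
  ]).toArray();
}
```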
null | [
"atlas"
] | [
{
"code": "",
"text": "I am currently developing / testing my APP on MongoDB’s free-tier Atlas. I am the only user of my APP, so I do not know how it will handle real traffic. So a proxy way to answer this question is to know how much traffic the free tier of Atlas receives.",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "M0M2/M5Network",
"text": "Hey @Big_Cat_Public_Safety_Act,I do not know how it will handle real trafficMay I ask what specifically you mean by “real traffic”? If you are referring to the Data Transfer limit, then you can see the documentation here for more details.this question is to know how much traffic the free tier of Atlas receives.Instead of focusing on Atlas’ overall free tier limits, it might be helpful to consider monitoring your own app’s expected usage and growth. Understanding the kind of traffic reads/writes, and data storage you anticipate over time will give you a better idea of when you might need to upgrade from the free tier to avoid hitting the Atlas free tier limits.Additionally, the Metrics view of an M0 free cluster or M2/M5 shared cluster displays only the following metrics:Perhaps the Network metric may be something you wish to monitor for this particular use case. For more information on certain metrics, please refer to the Review Available Metrics documentation.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | How much traffic does MongoDB's free tier Atlas receive? | 2023-07-29T17:28:11.416Z | How much traffic does MongoDB’s free tier Atlas receive? | 605 |
null | [
"queries",
"node-js",
"crud"
] | [
{
"code": "const result = await db.updateOne(\n { userId: \"100\", \"guilds.$.guildId\": \"1\" },\n { \n $set: { \n \"guilds.xp\": 5,\n },\n },\n );\n",
"text": "I have been trying to figure this out for over two days now, and just do not understand the error (or lack of) that Mongo is giving me. I am using Node.js.Here’s the query code:On printing the result, I can see the modifiedCount is 0 but the matchedCount is 1.There is only one record in the database:\nuserId: “100”,\nguilds: [\nguildId: “1”,\nlevel: 1,\nxp: 1,\n]Why does this not work?",
"username": "Calebt"
},
{
"code": "updateOne(\n { userId: \"100\", guilds: { $elemMatch: { guildId: { $eq: '1' } } } },\n { \n $set: { \n \"guilds.$.xp\": 5,\n },\n },\n );\n",
"text": "Just to note, changing the query as follows:Does update the database, however it deletes the array and replaces it with an empty Object named guilds",
"username": "Calebt"
},
{
"code": "${ userId: \"100\", \"guilds.$.guildId\": \"1\" },{\n \"userId\": \"100\",\n \"guilds\": [\n {\n \"guildId\": \"1\",\n \"level\": 1,\n \"xp\": 1\n }\n ]\n}\nguilds.xpuserIdguildId.updateOne(\n { \n \"userId\": \"100\", \n \"guilds.guildId\": \"1\" \n },\n {\n \"$set\": {\n \"guilds.$.xp\": 5\n } \n }\n)\n { userId: \"100\", guilds: { $elemMatch: { guildId: { $eq: '1' } } } },\n",
"text": "Hello @Calebt, Welcome to the MongoDB community forum,First, you can not use $ in the query part, so this is an invalid query,{ userId: \"100\", \"guilds.$.guildId\": \"1\" },Second, your document does not look correct/valid JSON,userId: “100”,\nguilds: [\nguildId: “1”,\nlevel: 1,\nxp: 1,\n]I am just predicting the below document you have in your database,You wanted to update the first matching element’s guilds.xp to 5 if userId is “100” and guildId is “1”, Your update query would be,PlaygroundThis is also a valid syntax to check the condition,You need to take care of the below point and check by yourself,",
"username": "turivishal"
},
{
"code": "updateOne(\n { _id: user._id },\n { $set: { \n \"guilds.xp\": 5,\n },\n },\n );\n\"guilds.$.xp\": 5\"guilds.0.xp\": 5\"guilds.xp\": 5",
"text": "Thanks for the reply.I tried this but it throws an error (can’t create field ‘guildId’ in element).updateOne(\n{\n“userId”: “100”,\n“guilds.guildId”: “1”\n},\n{\n“$set”: {\n“guilds.$.xp”: 5\n}\n}\n)I have also tried the following:With the above query, I have also tried using\n\"guilds.$.xp\": 5\"guilds.0.xp\": 5\"guilds.xp\": 5all produce the same result of matching one record but 0 being updated. I have tried using quotation marks around $set, there is no difference here.",
"username": "Calebt"
},
{
"code": "",
"text": "After three days of trying to solve this, I am giving up on trying to use the mongoose schema to do this. I am now getting an error saying \"Cannot create field ‘guildId’ in element. So for anyone else reading this, give up now and save yourself three days.I created a new function that connects to the database using the MongoDB library, and update the database that way.",
"username": "Calebt"
}
] | updateOne matches but doesn't update | 2023-08-01T01:40:06.508Z | updateOne matches but doesn’t update | 794 |
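For future readers, an alternative to the positional-$ approach discussed in this thread is a filtered positional update; this is a hedged sketch using the plain Node.js driver, with the same example values as the question:

```javascript
// update xp only in the array element whose guildId matches
const result = await db.collection("users").updateOne(
  { userId: "100" },
  { $set: { "guilds.$[g].xp": 5 } },
  { arrayFilters: [{ "g.guildId": "1" }] }
);
console.log(result.matchedCount, result.modifiedCount);
```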
null | [
"security"
] | [
{
"code": "",
"text": "The documentation indicates that one way to secure custom user data is by denying access to the collection to all users and then use a system function to manage custom user data on behalf of users.This is what I would like to do, but I’m just a little unsure about how this should work. My initial thought was that I should set a denyAllAccess role on the custom user data collection. However, if I do this, won’t that also deny the system function from accessing the collection or do functions that have their Authentication set to System just ignore the roles entirely?",
"username": "Wilber_Olive"
},
{
"code": "",
"text": "Hi Wilber,The system auth function will ignore the rule as mentioned here.A system function runs as the system user instead of a specific application user. System functions have full access to MongoDB CRUD and Aggregation APIs and bypass all rules and schema validation.Regards",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Securing Custom User Data | 2023-07-31T20:59:17.881Z | Securing Custom User Data | 513 |
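To make the pattern from the thread concrete, a hedged sketch of a system-authentication Atlas Function that updates the caller's custom user data on their behalf; the database, collection and field names are assumptions:

```javascript
// Atlas Function configured with "System" authentication, so collection rules are bypassed
exports = async function (fieldsToSet) {
  const coll = context.services
    .get("mongodb-atlas")            // the linked cluster's service name (often "mongodb-atlas")
    .db("myapp")
    .collection("custom_user_data");

  // only ever touch the document belonging to the calling user
  return coll.updateOne(
    { userId: context.user.id },
    { $set: fieldsToSet },
    { upsert: true }
  );
};
```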
null | [
"atlas-search"
] | [
{
"code": "authorauthor = Lionel Messi\"Lio\"\"Me\"\"ssi\"\"Lio\"author = Delio Valdezauthor = Lionel Messi",
"text": "Let’s say I have an author field and I want to return any document partially containing the search input as a match.So, for example, if I have author = Lionel Messi, either \"Lio\", \"Me\" or \"ssi\" should return documents having that value.Now, suppose I search \"Lio\", and there are also some documents where author = Delio Valdez. I want those to be returned also (“lio” is a partial match), but the ones with author = Lionel Messi should have a higher score in this case, given that the match is at the beginning of the string.What would be the best way to accomplish this in terms of index definition and search configuration?",
"username": "German_Medaglia"
},
{
"code": "autocomplete",
"text": "Hi @German_Medaglia ! Have you seen the Partial Match tutorial? I would recommend looking into the autocomplete field mapping and operator.",
"username": "amyjian"
},
{
"code": "autocomplete \"author\": {\n \"analyzer\": \"lucene.standard\",\n \"foldDiacritics\": false,\n \"maxGrams\": 7,\n \"minGrams\": 2,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n\"lio\"author = Lionel Messiauthor = Delio Valdez",
"text": "Hi @amyjian! Thanks for your quick reply. Yes, I’ve seen that tutorial and also though autocomplete is the best option here. But I couldn’t figure out how to specify different tokenizations for one single field, and assign a different score boost depending on the position of the match.And, for example, if I have this on my index definition:then when searching \"lio\", I will get documents with author = Lionel Messi but not those with author = Delio Valdez.",
"username": "German_Medaglia"
},
{
"code": "edgeGram[li, lio, lion, lione, lionel, lionel[SPACE]]author = Lionel Messi[de, del, deli, delio, delio[SPACE] , delio V]author = DelionGram[de, del, deli, delio, delio[SPACE] , delio V, el, eli, elio, elio[SPACE], elio V, elio Va, li, lio, lio[SPACE], lio V, lio Va, lio Val, io, io[SPACE], ..., va, val, vald, valde, valez, al, ald, ...., ld, lde, ..., de, dez, ez]nGram",
"text": "Since you are using the edgeGram tokenization strategy, Atlas Search creates tokens from your documents from left-to-right, with a minimum of 2 characters and a maximum of 7 characters.For “Lionel Messi”, the token outputs would be: [li, lio, lion, lione, lionel, lionel[SPACE]]. Since the search term “lio” matches one of the token outputs, the document with author = Lionel Messi is returned.\nSimilarly, “Delio Valdez” will be tokenized from left-to-right to generate the following output tokens: [de, del, deli, delio, delio[SPACE] , delio V]. Since the search term “lio” does not match any of the output tokens, the document with author = Delio Valdez is not returned.To achieve the experience you are describing, you can use the nGram tokenization strategy, which would create the following tokens for “Delio Valdez”: [de, del, deli, delio, delio[SPACE] , delio V, el, eli, elio, elio[SPACE], elio V, elio Va, li, lio, lio[SPACE], lio V, lio Va, lio Val, io, io[SPACE], ..., va, val, vald, valde, valez, al, ald, ...., ld, lde, ..., de, dez, ez]. As you can see, a search for “lio” would match the “lio” token generated by Atlas Search for this document and it would be returned in the query results.It should be noted using the nGram tokenization strategy significantly increases the number of tokens generated and stored in your Atlas Search index, subsequently increasing the size of your search index.",
"username": "amyjian"
},
{
"code": "edgeGramnGram\"lio\"edgeGramnGram",
"text": "Yes yes, I already know how edgeGram and nGram work. What I need is a combination of both, that’s what I’m asking for.When I search for \"lio\", I need both documents where author is Lionel Messi and documents where author is Delio Valdez to be retrieved as results.And the problem is that when using edgeGram I only get results for Lio Messi, and when using nGram only for Delio Valdez.I think probably wildcards with a keyword analyzer would be a better approach for this use case.",
"username": "German_Medaglia"
},
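One possible (unverified) way to get both behaviours is to index the field twice: an nGram autocomplete mapping so any partial match is returned, plus a string mapping with a lowercase keyword custom analyzer used only to boost begins-with matches. The index name, collection name and boost value below are assumptions; treat this as a sketch to experiment with rather than a confirmed recipe.

```javascript
// Sketch of a possible index definition (names assumed):
const indexDefinition = {
  analyzers: [{
    name: "lowercaseKeyword",
    tokenizer: { type: "keyword" },
    tokenFilters: [{ type: "lowercase" }]
  }],
  mappings: {
    dynamic: false,
    fields: {
      author: [
        { type: "autocomplete", tokenization: "nGram", minGrams: 2, maxGrams: 7 },
        { type: "string", analyzer: "lowercaseKeyword" }
      ]
    }
  }
};

// Sketch of the query: the must clause returns any partial match,
// the should clause boosts documents whose author starts with the input.
db.articles.aggregate([
  { $search: {
      index: "authorSearch",
      compound: {
        must: [{ autocomplete: { query: "lio", path: "author" } }],
        should: [{ wildcard: {
            query: "lio*",
            path: "author",
            allowAnalyzedField: true,
            score: { boost: { value: 3 } }
        } }]
      }
  } },
  { $limit: 10 }
])
```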
{
"code": "",
"text": "Subject: Need Assistance with Phone Number Search in MongoDBHey @German_Medaglia,I hope you’re doing well. I came across your post on the forum and it seems like we have a similar use case. I’m dealing with a “phone” field of string type in my MongoDB documents and would like to implement a search functionality for phone numbers.For Example, I want to perform a search for the digits “987” and retrieve all the documents that contain this sequence. However, I also want to rank the results in a way that gives higher scores to documents where “987” appears at the beginning, followed by occurrences at the end, and then finally occurrences in the middle.I’ve been trying to implement this functionality, but I haven’t succeeded so far. Could you please share any insights or solutions you might have for achieving this kind of search behavior in MongoDB?Thank you in advance for your help!Best regards,",
"username": "Harsh_Taliwal"
},
{
"code": "{\n \"phone\": [\n {\n \"type\": \"autocomplete\",\n \"tokenization\": \"nGram\",\n \"minGrams\": 2,\n \"maxGrams\": 5,\n }.\n {\n \"type\": \"string\"\n }\n ]\n}\n",
"text": "Hi @Harsh_Taliwal , sorry for the delayed response! Can you try adding a “string” field mapping to “phone”? The field mapping for “phone” would look something like this",
"username": "amyjian"
}
] | Different scores depending on partial match position | 2023-03-07T16:51:06.390Z | Different scores depending on partial match position | 1,264 |
null | [
"time-series"
] | [
{
"code": "densifydaymonthunittimezonestepEurope/Rome{\n \"ts\" : ISODate(\"2000-12-31T23:00:00.000+0000\"),\n \"meta\" : {\n \"device\" : \"custom\",\n \"series\" : \"custom:1\"\n },\n \"v\" : 100.84432598221046\n}\n{\n \"ts\" : ISODate(\"2001-06-30T22:00:00.000+0000\"),\n \"meta\" : {\n \"device\" : \"custom\",\n \"series\" : \"custom:1\"\n },\n \"v\" : 100.14378032120374,\n}\ndensify{\n \"ts\" : ISODate(\"2000-12-31T23:00:00.000+0000\"),\n \"meta\" : {\n \"device\" : \"custom\",\n \"series\" : \"custom:1\"\n },\n \"v\" : 100.84432598221046\n}\n{\n \"ts\" : ISODate(\"2001-01-31T23:00:00.000+0000\")\n}\n{\n \"ts\" : ISODate(\"2001-02-28T23:00:00.000+0000\")\n}\n{\n \"ts\" : ISODate(\"2001-03-28T23:00:00.000+0000\")\n}\n{\n \"ts\" : ISODate(\"2001-04-28T23:00:00.000+0000\")\n}\n{\n \"ts\" : ISODate(\"2001-05-28T23:00:00.000+0000\")\n}\n{\n \"ts\" : ISODate(\"2001-06-28T23:00:00.000+0000\")\n}\n{\n \"ts\" : ISODate(\"2001-06-30T22:00:00.000+0000\"),\n \"meta\" : {\n \"device\" : \"custom\",\n \"series\" : \"custom:1\"\n },\n \"v\" : 100.14378032120374,\n}\n{\n \"ts\" : ISODate(\"2001-01-31T23:00:00.000+0000\"),\n \"meta\" : {\n \"device\" : \"custom\",\n \"series\" : \"custom:1\"\n },\n \"v\" : 100.50917810392826,\n}\n{\n \"ts\" : ISODate(\"2001-02-28T23:00:00.000+0000\"),\n \"meta\" : {\n \"device\" : \"custom\",\n \"series\" : \"custom:1\"\n },\n \"v\" : 100.84507500021624,\n}\n{\n \"ts\" : ISODate(\"2001-03-31T22:00:00.000+0000\"),\n \"meta\" : {\n \"device\" : \"custom\",\n \"series\" : \"custom:1\"\n },\n \"v\" : 100.51826982331077,\n}\ndensify{\n \"ts\" : ISODate(\"2001-01-31T23:00:00.000+0000\"),\n \"meta\" : {\n \"device\" : \"custom\",\n \"series\" : \"custom:1\"\n },\n \"v\" : 100.50917810392826,\n}\n{\n \"ts\" : ISODate(\"2001-02-28T23:00:00.000+0000\"),\n \"meta\" : {\n \"device\" : \"custom\",\n \"series\" : \"custom:1\"\n },\n \"v\" : 100.84507500021624,\n}\n{\n \"ts\" : ISODate(\"2001-03-28T23:00:00.000+0000\")\n}\n{\n \"ts\" : ISODate(\"2001-03-31T22:00:00.000+0000\"),\n \"meta\" : {\n \"device\" : \"custom\",\n \"series\" : \"custom:1\"\n },\n \"v\" : 100.51826982331077,\n}\n",
"text": "If I apply the densify function using day or month as a range unit on a period that contains daylight saving adjustment, Mongo returns to me sample with ugly dates, because it is not able to apply timezone with the densify step.Example, if I have 2 samples like this (base location is Europe/Rome):when I apply densify the result is:you can see it starts to use day 28!!!If I have samples like this:and I apply densify, the result is:you can see Mongo creates a document that is not usefull",
"username": "Maurizio_Merli"
},
{
"code": "",
"text": "I am facing the same issue when trying to densify my data on the date field… have you found a solution to this problem? I have tried several approaches, but the underlying issue is that densify is not aware of the timezone, so it will never “catch” the right bucket when the date range spans across a DST change.",
"username": "Tiziano_Pigliucci"
}
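Until $densify accepts a timezone, one workaround (a sketch under assumptions, not an official fix; collection name assumed) is to densify on a wall-clock copy of the timestamp: convert each UTC instant to Europe/Rome local time stored as if it were UTC, run $densify on that field, then convert the generated points back. Times around the autumn fall-back hour are ambiguous, so this is only safe for day/month-sized steps.

```javascript
db.samples.aggregate([
  // 1. Re-interpret each UTC instant as Europe/Rome wall-clock time (stored as UTC).
  { $set: { localTs: { $dateFromString: { dateString: {
      $dateToString: { date: "$ts", format: "%Y-%m-%dT%H:%M:%S", timezone: "Europe/Rome" }
  } } } } },
  // 2. Densify on the wall-clock field; month steps now land on local month boundaries.
  { $densify: { field: "localTs",
                range: { step: 1, unit: "month", bounds: "full" } } },
  // 3. Convert the wall-clock values back to real UTC instants.
  { $set: { ts: { $dateFromString: {
      dateString: { $dateToString: { date: "$localTs", format: "%Y-%m-%dT%H:%M:%S" } },
      timezone: "Europe/Rome"
  } } } }
])
```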
] | BUG: densify and daylight saving | 2022-12-30T12:08:02.192Z | BUG: densify and daylight saving | 1,580 |
null | [
"node-js"
] | [
{
"code": "RealmApp.logIn",
"text": "Dear MongoDB community,I am currently working with the MongoDB Web SDK and have been using the RealmApp.logIn method to authenticate users in my application. While exploring the documentation and resources available, I noticed that there is no specific information about the generation of the “deviceId” field when using this method.My question pertains to the following:Thank you very much for your time and assistance!",
"username": "Artjom_Valdas"
},
{
"code": "",
"text": "Hello @Artjom_ValdasWould you be able to share our details of why you would like to use deviceID in your application?",
"username": "Sergey_Gerasimenko"
}
] | RealmApp.logIn: How is the "deviceId" generated and can it be modified later? | 2023-07-21T13:37:20.510Z | RealmApp.logIn: How is the “deviceId” generated and can it be modified later? | 438 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "While working with BsonString as identificator I noticed that mongodb orders them with respect to register but C# driver does not. For example, mongodb sorts BsonString( “a”), BsonString( “B”) in order “B” then “a”, while driver sorts them in order BsonString( “a”) then BsonString( “B”). Is it bug or feature? ",
"username": "astakhova.ksen.762"
},
{
"code": "using System;\n\nvar items = new[]{ \"a\", \"B\", \"b\", \"A\" };\n\nArray.Sort(items, StringComparer.CurrentCulture); // default, sort based on current culture\nConsole.WriteLine(string.Join(\", \", items)); // a, A, b, B\n\nArray.Sort(items, StringComparer.Ordinal); // sort by underlying numeric representation\nConsole.WriteLine(string.Join(\", \", items)); // A, B, a, b\nmongoshEnterprise test> db.items.find({}).sort({v:1}) // ordinal sort\n[\n { _id: ObjectId(\"64bfee537bb12babe73bf8cb\"), v: 'A' },\n { _id: ObjectId(\"64bfee537bb12babe73bf8c9\"), v: 'B' },\n { _id: ObjectId(\"64bfee537bb12babe73bf8c8\"), v: 'a' },\n { _id: ObjectId(\"64bfee537bb12babe73bf8ca\"), v: 'b' }\n]\nEnterprise test> db.items.createIndex({v:1}, {collation: { locale: \"en_US\" }})\nv_1\nEnterprise test> db.items.find({}).sort({v:1}).collation({locale: \"en_US\"})\n[\n { _id: ObjectId(\"64bfee537bb12babe73bf8c8\"), v: 'a' },\n { _id: ObjectId(\"64bfee537bb12babe73bf8cb\"), v: 'A' },\n { _id: ObjectId(\"64bfee537bb12babe73bf8ca\"), v: 'b' },\n { _id: ObjectId(\"64bfee537bb12babe73bf8c9\"), v: 'B' }\n]\n",
"text": "Hi, @astakhova.ksen.762,Welcome to the MongoDB Community Forums. I understand that you have a question about differing sort behaviour between MongoDB and the MongoDB .NET/C# Driver.I wouldn’t say this is a feature but more a consequence of different defaults. C# defaults to sorting strings based on current culture whereas MongoDB defaults to an ordinal sort.MongoDB defaults to an ordinal sort by default. In mongosh:You can change this by creating an index with collation and specifying the collation for the field:Hopefully the differing behaviour makes sense now.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Thank you!\nDoes driver have a comparer that considers collation or its absence? It would be nice feature-request if the answer is no",
"username": "astakhova.ksen.762"
},
{
"code": "",
"text": "Hello, @James_Kovacs!\nI forgot to mention you in the previous message so I suppose you haven’t seen it yet",
"username": "astakhova.ksen.762"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | BsonString ordering in C#-driver differs from mongodb BsonString ordering | 2023-07-25T12:01:57.132Z | BsonString ordering in C#-driver differs from mongodb BsonString ordering | 501 |
null | [
"queries",
"node-js"
] | [
{
"code": "",
"text": "Hi! First post here!I need help with how to query a .find() in my nodeJs backend\nI search for project_x and intent_y and sort responseTime where i want to get back the document after if a ‘conversation_id’ is the same as the one im searching for.And where can I find some good reading / videos about learning more advanced queries? ",
"username": "Niklas_Soderberg"
},
{
"code": "",
"text": "Please publish sample documents from your collections.Course at https://university.mongodb.com/ helped me a lot.Read Formatting code and log snippets in posts before posting your documents so that we can cut-n-paste them easily in our system.",
"username": "steevej"
},
{
"code": "_id: 61f06a83afbf51c80b9a9b4b\nproject_id: \"sorteringshatt-dgwo\"\nname: \"2022-01-25 22:23:41.315399\"\ninputContexts: Array\noriginalRequestSource: \"SORTERINGSHATT_MAI\"\nv2Response: Object\nresponseTime: \"2022-01-25 22:24:19.697221\"\nlogType: \"TESTING\"\nupdated: false\n__v: 0\nconst history = await History\n .find({ project_id: project_id })\n .find({ updated: false })\n .sort({ responseTime: 1 });\n",
"text": "Thanks.where name is current conversation start, so next document will have the same name if a user talked to chatbot more.It looks like this now, so i get ALOT of returns, then i start checking for if(i and i+1) is the same and do logic on the server…",
"username": "Niklas_Soderberg"
},
{
"code": "const history = await History\n .find({ project_id: project_id })\n .find({ 'v2Response.queryResult.intent.displayName': intent_name })\n .find({ updated: false })\n .sort({ responseTime: 1 });\n",
"text": "So i would like something like:find right project, then right intent, then get back next document in responseTime if it also has the same name (not intentName).",
"username": "Niklas_Soderberg"
},
{
"code": "await History\n .find( { project_id: project_id ,\n 'v2Response.queryResult.intent.displayName': intent_name ,\n updated: false } )\n .sort( ... ) ;\n",
"text": "You sample document is not valid JSON that we can cut-n-paste.The major issue with your query is the chaining of find() calls.You need to put all conditions in one find() call like:",
"username": "steevej"
},
{
"code": "",
"text": "Got it, thanks!So right now with your help im finding the startpoint(s), now I need to check if next document has the same name value and then only get those. If possible.",
"username": "Niklas_Soderberg"
},
{
"code": "nameHistory.find({\n \"project_id\": project_id,\n \"v2Response.queryResult.intent.displayName\": intent_name,\n \"updated\": false,\n \"name\": name\n}).sort({\n \"responseTime\": 1\n})\n",
"text": "Hi @Niklas_Soderberg,If you want to get all the documents with same name value, you can just filter by name in addition.Working exampleMongo playground: a simple sandbox to test and share MongoDB queries online",
"username": "NeNaD"
},
{
"code": "",
"text": "Hi! Its kinda hard this query i think.I need to find current project and intent and sort on responseTime\nThats the startpoint. It will be lots of documents. I AM HERE.\nBut I want nothing of that.\nI want the document after that (only one for each of the above), sorted on responseTime, if it exists with the same name. This only cares about if its the next one and if name is the same.\nThat can also be alot of documents.",
"username": "Niklas_Soderberg"
},
{
"code": "",
"text": "You won’t be able to do that with a simple find() query.You will need to use the aggregation framework using $push within a $group or $setWindowFields.But withoutsample documents from your collections\n…\nthat we can cut-n-paste them easily in our systemit is hard to help further.",
"username": "steevej"
},
{
"code": "{\"_id\":{\"$oid\":\"622f2cd4075c226bde0ba966\"},\"project_id\":\"sorteringshatt-dgwo\",\"name\":\"2022-03-14 12:53:52.712155\",\"inputContexts\":[],\"originalRequestSource\":\"SORTERINGSHATT_MAI\",\"v2Response\":{\"responseId\":\"d317ddda-fbe5-4511-bf2b-0a985a61ecf1-53cb9be6\",\"queryResult\":{\"queryText\":\"phase_1_name_mai\",\"allRequiredParamsPresent\":true,\"fulfillmentText\":\"Hej, Välkommen till denna upplevelse. Tillsammans ska vi undersöka ditt innersta väsen. Mitt namn är MAI, vad heter du?\",\"fulfillmentMessages\":[{\"text\":{\"text\":[\"Hej, Välkommen till denna upplevelse. Tillsammans ska vi undersöka ditt innersta väsen. Mitt namn är MAI, vad heter du?\"]},\"platform\":0}],\"outputContexts\":[{\"name\":\"projects/sorteringshatt-dgwo/agent/sessions/2022-03-14 12:53:52.712155/contexts/phase1intro-followup\",\"lifespanCount\":2},{\"name\":\"projects/sorteringshatt-dgwo/agent/sessions/2022-03-14 12:53:52.712155/contexts/phase1name-followup\",\"lifespanCount\":2}],\"intent\":{\"name\":\"projects/sorteringshatt-dgwo/agent/intents/47119b7f-827c-4f4f-a304-5bbf23d53f39\",\"displayName\":\"Phase 1: Intro MAI\",\"webhookState\":0,\"priority\":0,\"isFallback\":false,\"mlDisabled\":false,\"liveAgentHandoff\":false,\"endInteraction\":false,\"inputContextNames\":[],\"events\":[],\"trainingPhrases\":[],\"action\":\"\",\"outputContexts\":[],\"resetContexts\":false,\"parameters\":[],\"messages\":[],\"defaultResponsePlatforms\":[],\"rootFollowupIntentName\":\"\",\"parentFollowupIntentName\":\"\",\"followupIntentInfo\":[]},\"intentDetectionConfidence\":1,\"languageCode\":\"sv\",\"speechRecognitionConfidence\":0,\"action\":\"\",\"webhookSource\":\"\"},\"webhookStatus\":{\"code\":0,\"message\":\"\",\"details\":[]},\"outputAudioConfig\":{\"audioEncoding\":1,\"sampleRateHertz\":44100,\"synthesizeSpeechConfig\":{\"speakingRate\":1,\"voice\":{\"name\":\"sv-SE-Wavenet-D\",\"ssmlGender\":0},\"pitch\":0,\"volumeGainDb\":0,\"effectsProfileId\":[]}},\"outputAudio\":\"\"},\"responseTime\":\"2022-03-14 12:53:56.138199\",\"logType\":\"PRODUCTION\",\"updated\":false,\"__v\":0}\n{\"_id\":{\"$oid\":\"622f2ce0075c226bde0ba96a\"},\"project_id\":\"sorteringshatt-dgwo\",\"name\":\"2022-03-14 12:53:52.712155\",\"inputContexts\":[],\"originalRequestSource\":\"SORTERINGSHATT_MAI\",\"v2Response\":{\"responseId\":\"1faf96ba-7fb1-4741-87a1-d67f4ccaeefe-53cb9be6\",\"queryResult\":{\"queryText\":\"Fredrik\",\"action\":\"Phase1Intro.Phase1Intro-custom\",\"parameters\":{\"person\":{\"name\":\"Fredrik\"}},\"allRequiredParamsPresent\":true,\"fulfillmentText\":\"Hejsan Fredrik. Angenämt! Framför oss har vi ett litet personlighetstest för att avgöra vilket värdeord du tillhör. Testet kommer ta några minuter att genomföra.\",\"fulfillmentMessages\":[{\"text\":{\"text\":[\"Hejsan Fredrik. Angenämt! Framför oss har vi ett litet personlighetstest för att avgöra vilket värdeord du tillhör. 
Testet kommer ta några minuter att genomföra.\"]},\"platform\":0}],\"outputContexts\":[{\"name\":\"projects/sorteringshatt-dgwo/agent/sessions/2022-03-14 12:53:52.712155/contexts/phase1name-followup\",\"lifespanCount\":1,\"parameters\":{\"person.original\":\"Fredrik\",\"person\":{\"name\":\"Fredrik\"}}},{\"name\":\"projects/sorteringshatt-dgwo/agent/sessions/2022-03-14 12:53:52.712155/contexts/phase1intro-followup\",\"lifespanCount\":1,\"parameters\":{\"person\":{\"name\":\"Fredrik\"},\"person.original\":\"Fredrik\"}}],\"intent\":{\"name\":\"projects/sorteringshatt-dgwo/agent/intents/63425b74-2641-498e-8435-23b9b462439b\",\"displayName\":\"Phase 1: Intro - myNameIs\",\"endInteraction\":true,\"webhookState\":0,\"priority\":0,\"isFallback\":false,\"mlDisabled\":false,\"liveAgentHandoff\":false,\"inputContextNames\":[],\"events\":[],\"trainingPhrases\":[],\"action\":\"\",\"outputContexts\":[],\"resetContexts\":false,\"parameters\":[],\"messages\":[],\"defaultResponsePlatforms\":[],\"rootFollowupIntentName\":\"\",\"parentFollowupIntentName\":\"\",\"followupIntentInfo\":[]},\"intentDetectionConfidence\":1,\"diagnosticInfo\":{\"end_conversation\":true},\"languageCode\":\"sv\",\"speechRecognitionConfidence\":0,\"webhookSource\":\"\"},\"webhookStatus\":{\"code\":0,\"message\":\"\",\"details\":[]},\"outputAudioConfig\":{\"audioEncoding\":1,\"sampleRateHertz\":44100,\"synthesizeSpeechConfig\":{\"speakingRate\":1,\"voice\":{\"name\":\"sv-SE-Wavenet-D\",\"ssmlGender\":0},\"pitch\":0,\"volumeGainDb\":0,\"effectsProfileId\":[]}},\"outputAudio\":\"\"},\"responseTime\":\"2022-03-14 12:54:08.498651\",\"logType\":\"PRODUCTION\",\"updated\":false,\"__v\":0}\n{\"_id\":{\"$oid\":\"6228bb45075c226bde0ba1d1\"},\"project_id\":\"sorteringshatt-dgwo\",\"name\":\"2022-03-09 15:34:42.939426\",\"inputContexts\":[],\"originalRequestSource\":\"SORTERINGSHATT_KAI\",\"v2Response\":{\"responseId\":\"b907a90a-1432-4fd1-b146-3a92e2a7e163-53cb9be6\",\"queryResult\":{\"queryText\":\"phase_2_perfectsaturday\",\"allRequiredParamsPresent\":true,\"fulfillmentText\":\"Hur skulle du beskriva den perfekta lördagen?\",\"fulfillmentMessages\":[{\"text\":{\"text\":[\"Hur skulle du beskriva den perfekta lördagen?\"]},\"platform\":0}],\"outputContexts\":[{\"name\":\"projects/sorteringshatt-dgwo/agent/sessions/2022-03-09 15:34:42.939426/contexts/phase2simplequestion3-perfectsaturday-followup\",\"lifespanCount\":2}],\"intent\":{\"name\":\"projects/sorteringshatt-dgwo/agent/intents/239938ed-0d10-429b-9802-ae868dc37a09\",\"displayName\":\"Phase 2: Simple Question 3 - Perfect Saturday\",\"webhookState\":0,\"priority\":0,\"isFallback\":false,\"mlDisabled\":false,\"liveAgentHandoff\":false,\"endInteraction\":false,\"inputContextNames\":[],\"events\":[],\"trainingPhrases\":[],\"action\":\"\",\"outputContexts\":[],\"resetContexts\":false,\"parameters\":[],\"messages\":[],\"defaultResponsePlatforms\":[],\"rootFollowupIntentName\":\"\",\"parentFollowupIntentName\":\"\",\"followupIntentInfo\":[]},\"intentDetectionConfidence\":1,\"languageCode\":\"sv\",\"speechRecognitionConfidence\":0,\"action\":\"\",\"cancelsSlotFilling\":false,\"webhookSource\":\"\"},\"webhookStatus\":{\"code\":0,\"message\":\"\",\"details\":[]},\"outputAudioConfig\":{\"audioEncoding\":1,\"sampleRateHertz\":44100,\"synthesizeSpeechConfig\":{\"speakingRate\":1,\"voice\":{\"name\":\"sv-SE-Wavenet-E\",\"ssmlGender\":0},\"pitch\":0,\"volumeGainDb\":0,\"effectsProfileId\":[]}},\"outputAudio\":\"\"},\"responseTime\":\"2022-03-09 
15:35:49.121453\",\"logType\":\"PRODUCTION\",\"updated\":false,\"__v\":0}\n{\"_id\":{\"$oid\":\"6228bb4d075c226bde0ba1d3\"},\"project_id\":\"sorteringshatt-dgwo\",\"name\":\"2022-03-09 15:34:42.939426\",\"inputContexts\":[],\"originalRequestSource\":\"SORTERINGSHATT_KAI\",\"v2Response\":{\"responseId\":\"413e5d0b-fdfd-4a3b-8b4b-544e07417032-53cb9be6\",\"queryResult\":{\"queryText\":\"utan planer\",\"action\":\"Phase2SimpleQuestion3-PerfectSaturday.Phase2SimpleQuestion3-PerfectSaturday-fallback\",\"allRequiredParamsPresent\":true,\"fulfillmentText\":\"Vilken lördag!\",\"fulfillmentMessages\":[{\"text\":{\"text\":[\"Vilken lördag!\"]},\"platform\":0}],\"outputContexts\":[{\"name\":\"projects/sorteringshatt-dgwo/agent/sessions/2022-03-09 15:34:42.939426/contexts/phase2simplequestion3-perfectsaturday-followup\",\"lifespanCount\":1},{\"name\":\"projects/sorteringshatt-dgwo/agent/sessions/2022-03-09 15:34:42.939426/contexts/__system_counters__\",\"lifespanCount\":1,\"parameters\":{\"no-input\":0,\"no-match\":1}}],\"intent\":{\"name\":\"projects/sorteringshatt-dgwo/agent/intents/d1eda7c3-cf96-459c-abab-b937690aa911\",\"displayName\":\"Phase 2: Simple Question 3 - Perfect Saturday - fallback\",\"isFallback\":true,\"endInteraction\":true,\"webhookState\":0,\"priority\":0,\"mlDisabled\":false,\"liveAgentHandoff\":false,\"inputContextNames\":[],\"events\":[],\"trainingPhrases\":[],\"action\":\"\",\"outputContexts\":[],\"resetContexts\":false,\"parameters\":[],\"messages\":[],\"defaultResponsePlatforms\":[],\"rootFollowupIntentName\":\"\",\"parentFollowupIntentName\":\"\",\"followupIntentInfo\":[]},\"intentDetectionConfidence\":1,\"diagnosticInfo\":{\"end_conversation\":true},\"languageCode\":\"sv\",\"speechRecognitionConfidence\":0,\"cancelsSlotFilling\":false,\"webhookSource\":\"\"},\"webhookStatus\":{\"code\":0,\"message\":\"\",\"details\":[]},\"outputAudioConfig\":{\"audioEncoding\":1,\"sampleRateHertz\":44100,\"synthesizeSpeechConfig\":{\"speakingRate\":1,\"voice\":{\"name\":\"sv-SE-Wavenet-E\",\"ssmlGender\":0},\"pitch\":0,\"volumeGainDb\":0,\"effectsProfileId\":[]}},\"outputAudio\":\"\"},\"responseTime\":\"2022-03-09 15:35:57.307794\",\"logType\":\"PRODUCTION\",\"updated\":false,\"__v\":0}\n{\"_id\":{\"$oid\":\"62289ea1075c226bde0ba1bb\"},\"project_id\":\"sorteringshatt-dgwo\",\"name\":\"2022-03-09 13:31:16.723215\",\"inputContexts\":[],\"originalRequestSource\":\"SORTERINGSHATT_KAI\",\"v2Response\":{\"responseId\":\"2d9530a4-31ba-4ab3-a6b7-f101618b909a-53cb9be6\",\"queryResult\":{\"queryText\":\"phase_3_outro\",\"allRequiredParamsPresent\":true,\"fulfillmentText\":\"Jag tycker att vi tar och går vidare till den sista fasen av testet!\",\"fulfillmentMessages\":[{\"text\":{\"text\":[\"Jag tycker att vi tar och går vidare till den sista fasen av testet!\"]},\"platform\":0}],\"intent\":{\"name\":\"projects/sorteringshatt-dgwo/agent/intents/9f93a04f-4868-45d5-8824-dae8b81e3d7e\",\"displayName\":\"Phase 3: 
Outro\",\"endInteraction\":true,\"webhookState\":0,\"priority\":0,\"isFallback\":false,\"mlDisabled\":false,\"liveAgentHandoff\":false,\"inputContextNames\":[],\"events\":[],\"trainingPhrases\":[],\"action\":\"\",\"outputContexts\":[],\"resetContexts\":false,\"parameters\":[],\"messages\":[],\"defaultResponsePlatforms\":[],\"rootFollowupIntentName\":\"\",\"parentFollowupIntentName\":\"\",\"followupIntentInfo\":[]},\"intentDetectionConfidence\":1,\"diagnosticInfo\":{\"end_conversation\":true},\"languageCode\":\"sv\",\"speechRecognitionConfidence\":0,\"action\":\"\",\"cancelsSlotFilling\":false,\"webhookSource\":\"\",\"outputContexts\":[]},\"webhookStatus\":{\"code\":0,\"message\":\"\",\"details\":[]},\"outputAudioConfig\":{\"audioEncoding\":1,\"sampleRateHertz\":44100,\"synthesizeSpeechConfig\":{\"speakingRate\":1,\"voice\":{\"name\":\"sv-SE-Wavenet-E\",\"ssmlGender\":0},\"pitch\":0,\"volumeGainDb\":0,\"effectsProfileId\":[]}},\"outputAudio\":\"\"},\"responseTime\":\"2022-03-09 13:33:36.821703\",\"logType\":\"PRODUCTION\",\"updated\":false,\"__v\":0}\n",
"text": "Thanks for letting me know what i need to learn. I will add some samples here.For this five documents, two should be returned.I hope I did it correctly.",
"username": "Niklas_Soderberg"
},
{
"code": "",
"text": "Check out this series of blog post as well (see at the bottom of the page the list of blog posts). Great way to learn by doing.Learn how to execute the CRUD (create, read, update, and delete) operations in MongoDB using Node.js in this step-by-step tutorial.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "For this five documents, two should be returned.Which two of the five? And for which query?",
"username": "steevej"
},
{
"code": "",
"text": "typo, only one should be returned!await History\n.find( { project_id: “sorteringshatt-dgwo” ,\n‘v2Response.queryResult.intent.displayName’: “phase_1_name_mai” ,\nupdated: false } )\n.sort( … ) ;So here my startpoint / query is #1 and #2 is the one I want back. Since it has the same “name” property and is first in line on the “responseTime” property.",
"username": "Niklas_Soderberg"
},
{
"code": "{ _id: ObjectId(\"622f2cd4075c226bde0ba966\"),\n v2Response: { queryResult: { intent: { displayName: 'Phase 1: Intro MAI' } } } }\n{ _id: ObjectId(\"622f2ce0075c226bde0ba96a\"),\n v2Response: { queryResult: { intent: { displayName: 'Phase 1: Intro - myNameIs' } } } }\n{ _id: ObjectId(\"6228bb45075c226bde0ba1d1\"),\n v2Response: { queryResult: { intent: { displayName: 'Phase 2: Simple Question 3 - Perfect Saturday' } } } }\n{ _id: ObjectId(\"6228bb4d075c226bde0ba1d3\"),\n v2Response: { queryResult: { intent: { displayName: 'Phase 2: Simple Question 3 - Perfect Saturday - fallback' } } } }\n{ _id: ObjectId(\"62289ea1075c226bde0ba1bb\"),\n v2Response: { queryResult: { intent: { displayName: 'Phase 3: Outro' } } } }\n",
"text": "I was able to import your documents.But none match your query.await History\n.find( { project_id: “sorteringshatt-dgwo” ,\n‘v2Response.queryResult.intent.displayName’: “phase_1_name_mai” ,\nupdated: false } )\n.sort( … ) ;Always use triple back ticks when publishing code as mentioned in one of the link above. HTML changes the quotes to fancy back and forward single and double quotes and we cannot cut-n-paste it without editing.\nBut none match your query.*Here are the v2Response…displayName of your documents:",
"username": "steevej"
},
{
"code": "",
"text": "I think it helps if I paint a picture of what I need and why.We have created a chatbot that stands at some hotels in Stockholm. They are kinda stupid but thats why we are trying with methods like this to make it better.\nI’ve made an app that find missMatches and sort them in amount so if alot of people ask where u can find a good restaurant and the chatbot goes to fallback we see that and can prioritize that answer before others.When i search for a project_id and then an intent, the point is to have that as a start-point and then look at all the next documents that also had that intent in the same conversation. I can now do this, but it is kinda hard on the server to look for everything (and then check for intent, and then check for if next blabla is the same conversation, and do that for everything). Maybe because its a free atlas cluster? Anyway i made a new index 2 days ago sorting project then intents and some magic happened. From 20-30 seconds to 0.5-2 seconds responses.My query is not complete, its what I use right now because I don’t know better, and that gives me the start-point. After that i use my server to find the answers i need.So the perfect query would need the projectId and intent, then give me back the documents that has the same projectId and the first name sorted in responseTime (name is conversation, kinda strange but whatever)Kinda hard! I think. Im learning as fast as i can! ",
"username": "Niklas_Soderberg"
},
{
"code": "",
"text": "Here is an idea using the aggregation framework.You start with a $match stage using the query you have in your find().Then you do $graphLookup from the same collection startingWith:$name, with a connectFromField:name and connectToField:name and maxDepth:0. You also want to restrictSearchWithMatch with an expression that ensure you do not pickup your starting point.After the $graphLookup, you end up with an array of 1 element that should be what you want. You could then use $replaceRoot to only output that resulting document.",
"username": "steevej"
},
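A sketch of that idea against the sample documents above (collection name "histories" assumed from the Mongoose model; the intent value is taken from the listed displayName values). Since restrictSearchWithMatch cannot reference fields of the starting document, this version filters out the starting point after the lookup instead:

```javascript
db.histories.aggregate([
  // the original find() conditions
  { $match: {
      project_id: "sorteringshatt-dgwo",
      "v2Response.queryResult.intent.displayName": "Phase 1: Intro MAI",
      updated: false
  } },
  // pull in every document of the same conversation (same "name")
  { $graphLookup: {
      from: "histories",
      startWith: "$name",
      connectFromField: "name",
      connectToField: "name",
      maxDepth: 0,
      as: "sameConversation"
  } },
  { $unwind: "$sameConversation" },
  // keep only documents that come after the starting point
  { $match: { $expr: { $gt: ["$sameConversation.responseTime", "$responseTime"] } } },
  { $sort: { "sameConversation.responseTime": 1 } },
  // one "next" document per starting document
  { $group: { _id: "$_id", next: { $first: "$sameConversation" } } },
  { $replaceRoot: { newRoot: "$next" } }
])
```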
{
"code": "",
"text": "Thank you, that is probably the correct answer. We will upgrade, I noticed $graphLookup was not available on free atlas tier.",
"username": "Niklas_Soderberg"
},
{
"code": "",
"text": "We will upgrade, I noticed $graphLookup was not available on free atlas tier.I would be surprised if that is the case. Any links to documentation to that effect?",
"username": "steevej"
},
{
"code": "MongoServerError: $$graphLookup is not allowed in this atlas tier\n at lotsOfText {\n ok: 0,\n code: 8000,\n codeName: 'AtlasError'\n}\n",
"text": "This is the console.logged catch on the serverbut yes, you might be correct again. I might just have written the query wrong on my testQuery I did here.",
"username": "Niklas_Soderberg"
},
{
"code": "",
"text": "I might just have written the query wrong on my testQueryThe best way to find out is to share the query with us.",
"username": "steevej"
}
] | Need help with how to query a .find() in my nodeJs backend | 2022-03-14T12:25:11.609Z | Need help with how to query a .find() in my nodeJs backend | 8,718 |
null | [
"compass"
] | [
{
"code": "",
"text": "Is it dangerous if credentials for connecting Compass to Atlas are seen on Github? I have to share my project there anyway. Is it possible for someone else to use them somehow?",
"username": "Valyo_Gennoff"
},
{
"code": "",
"text": "Hey @Valyo_Gennoff,Welcome to the MongoDB Community!It is advisable that you don’t push any type of credentials on the GitHub as it could be accessed by anyone on the internet if the repository is public.Yet, you can prevent someone from accessing the database by whitelisting only the IP addresses which have database access authority. This is an extra layer of security. However, we still recommend you avoid sharing the credentials in the public domain to prevent misuse.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
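A common way to keep the Compass/driver connection string out of a public repository is to read it from an environment variable and git-ignore the file that defines it. A minimal Node.js sketch (variable and file names are just conventions, not requirements):

```javascript
// .env (git-ignored):  MONGODB_URI=mongodb+srv://user:<password>@cluster.../mydb
require("dotenv").config();            // npm install dotenv
const { MongoClient } = require("mongodb");

const uri = process.env.MONGODB_URI;   // never hard-code this in committed source
const client = new MongoClient(uri);
```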
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Exposing credentials | 2023-08-01T14:13:24.997Z | Exposing credentials | 374 |
null | [
"crud",
"time-series"
] | [
{
"code": "db.my_collection.deleteMany({_id: {$in: <my_ids_list>}}) \n",
"text": "Hello everyone,I just installed MongoDB version 7.0 on my machine (MacOS with Apple M1) , because I wanted to test some features for time series collections.I have a time series collection for which I want to delete a list of documents based on their ids.\nFor my understanding this should be possible with version 7.0, since most limitation on deletes operations have been deleted, as explained in the changelog.The operation I would like to execute is something likeUnfortunately this is still not permitted and the operation fails, returning the error “Cannot perform an update or delete on a time-series collection when querying on a field that is not the metaField ‘meta’”I would like to understand if I’m doing something wrong or if there is something not perfect with Mongo 7.0 since it’s still in release-candidate status.Thanks in advance",
"username": "Vincenzo_Martello"
},
{
"code": "featureCompatibilityVersion",
"text": "I solved the issue, now the deletion works well.\nThe problem was due to the fact that I had featureCompatibilityVersion set to 6.0, locking all the new Mongo 7.0 features",
"username": "Vincenzo_Martello"
},
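For reference, the check and the upgrade can be done from mongosh roughly like this (the confirm flag is required when setting the value on 7.0):

```javascript
// See which FCV the deployment is currently running with
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// Raise it so the 7.0 time series delete improvements are enabled
db.adminCommand({ setFeatureCompatibilityVersion: "7.0", confirm: true })
```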
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Time Series collection limitation still present in version 7.0 | 2023-07-31T15:25:49.809Z | Time Series collection limitation still present in version 7.0 | 561 |
[
"node-js",
"connecting",
"atlas-cluster"
] | [
{
"code": "",
"text": "I am getting this error can anyone help?Error: querySrv ETIMEOUT _mongodb._tcp.cluster0.11oatej.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:251:17) {\nerrno: undefined,\ncode: ‘ETIMEOUT’,\nsyscall: ‘querySrv’,\nhostname: ‘_mongodb._tcp.cluster0.11oatej.mongodb.net’\n}\nHere is the screenshot of that\n\nimage1484×908 58.2 KB\n",
"username": "Harshil_K_Dangar_20BECE30024"
},
{
"code": "Error: querySrv ETIMEOUT _mongodb._tcp.cluster0.11oatej.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:251:17) {\nerrno: undefined,\ncode: ‘ETIMEOUT’,\nsyscall: ‘querySrv’,\nhostname: ‘_mongodb._tcp.cluster0.11oatej.mongodb.net’\n}\n8.8.8.88.8.4.4",
"text": "Hey @Harshil_K_Dangar_20BECE30024,Welcome to the MongoDB Community!It seems like the DNS issue, try using Google’s DNS 8.8.8.8 and 8.8.4.4. Please refer to the Public DNS for more details.Apart from this, please refer to this post and try using the connection string from the connection modal that includes all three hostnames instead of the SRV record.If it returns a different error, please share that error message here.In addition to the above, I would recommend also checking out the Atlas Troubleshoot Connection Issues documentation.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
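For illustration, this is roughly what the two connection string styles look like in a Node.js app; the shard hostnames and replica set name below are placeholders and must be copied from your own cluster's Connect dialog:

```javascript
// SRV form: needs a successful DNS SRV lookup, which is what times out above.
// const uri = "mongodb+srv://user:<password>@cluster0.11oatej.mongodb.net/test";

// Standard form (placeholders!): lists all three hosts explicitly, no SRV lookup needed.
const uri =
  "mongodb://user:<password>@cluster0-shard-00-00.example.mongodb.net:27017," +
  "cluster0-shard-00-01.example.mongodb.net:27017," +
  "cluster0-shard-00-02.example.mongodb.net:27017" +
  "/test?ssl=true&replicaSet=atlas-abc123-shard-0&authSource=admin&retryWrites=true&w=majority";

const { MongoClient } = require("mongodb");
const client = new MongoClient(uri);
```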
{
"code": "",
"text": "solved the error by downgrading the node version as you can see below and everything is working fine\n\nimage978×727 41.1 KB\n",
"username": "Harshil_K_Dangar_20BECE30024"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Error while connecting to the monogdb | 2023-07-21T06:31:58.719Z | Error while connecting to the monogdb | 745 |
|
null | [
"aggregation",
"android",
"kotlin"
] | [
{
"code": "",
"text": "Hello, everyone! Apologies if this is yet another discussion of the topic, but all threads I found are relatively outdated, and it looks like many things have changed.I am researching a solution for adding sync functionality to an existing app that exists independently on iOS and Android. The app has ~1m users and is currently fully offline. Android app is built on Sqlite + Room, iOS uses Realm offline. My first choice was Firebase, but then I downloaded Realm’s Kotlin template app and was very pleased by how it uses modern API’s like coroutines, flows, and how fast it works offline. That kind of made me look deeper into Realm and frankly, now I’m torn apart Realm has very strong points for first-class offline support. That is exactly what I’m looking for. I want the app to continue being fully functional offline, with sync being only available to users who have upgraded to paid tier app. Firebase-only solution would force me to always have an anonymous user, which means that even my free users would incur potentially significant costs. I want to understand how easy it would be to avoid with Realm. Do I understand correctly that I will need to keep a separate Realm for free users, and then make it synced when they update/create account? What if I want to only sync some data for users, and keep some on the device? Would it be possible to have two realms side by side simultaneously?One significant issue that puts me off using Realm was that adding realm to my app has added ~7mb apk size for each ABI split. That would basically double my app size and I pride myself on having a small APK. Is there any way to reduce this, are there any plans for reducing the size of bundled native libraries?Another thing is I’m looking to offload some of the duplicate code for the app to the server, specifically daily aggregation of statistics for each user (it’s an education app, and every time user answers a question, I store an entry and then I aggregate them and also create daily exercises for users based on their answers). I am quite familiar with GCP ecosystem but don’t know much about MongoDB ecosystem. Would that be possible to do fully on the server? Do I understand correctly that Atlas Functions can help me with this? Would it be possible to verify purchases on the backend using both Google Play and App Store APIs?I’m also considering building the entire data layer in apps using KMM if I go Realm way, for which Firebase doesn’t seem to have an out of the box support, but wondering how truly production ready that is for iOS + Android?Apologies for so many questions, just want to gain a full understanding before committing. ",
"username": "TheHiddenDuck"
},
{
"code": "",
"text": "Hi, first off, I am glad you have been having a great experience using Realm and Atlas Device Sync so far. Ill try to answer your questions in the order you asked them.Do I understand correctly that I will need to keep a separate Realm for free users, and then make it synced when they update/create account? What if I want to only sync some data for users, and keep some on the device? Would it be possible to have two realms side by side simultaneously?Yes, we have many users that want to sync some data but not sync other data and just let it live locally on the device. The best way to do this is to just open a local-only realm and a sync-only realm (just initialize 2 different realms). As for the “upgrade”, you are correct that it will involve taking the objects in the local realm and inserting them into the new “synced” realm. This should be relatively easy with the Kotlin SDK. Additionally, I think you can use the copyToRealm() function to help with some of the logic.One significant issue that puts me off using Realm was that adding realm to my app has added ~7mb apk size for each ABI split. That would basically double my app size and I pride myself on having a small APK. Is there any way to reduce this, are there any plans for reducing the size of bundled native libraries?I do not have the full information on this, but I will ask a member of the Kotlin team to respond to this. I do know that sometimes the issue here is in the differences between the production library and the debug library (where the latter is much larger), but I will let the Kotlin team respond to this one.Would that be possible to do fully on the server? Do I understand correctly that Atlas Functions can help me with this?Yes, you can use Atlas Functions (https://www.mongodb.com/docs/atlas/app-services/functions/) to deduplicate some logic and have them be run in a serverless environment where you just pay for use. You can call functions directly from the SDK (https://www.mongodb.com/docs/realm/sdk/kotlin/app-services/call-function/). Additionally, you can run functions in reaction to operations on your collections (see Database Triggers) or on a CRON schedule (Scheduled Triggers).Please let me know if you have any other questions!\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hello. Thank you for your detailed response.I’d like to clarify that I was talking specifically about the release APK, which was also minified with R8. Debug APK was significantly larger indeed.One thing that confused me about Functions was this quote from the documentation here: “Common use cases include low-latency, short-running tasks”. What does short-running mean in this case? I imagine running some data aggregation task for each user daily will not be that short-running and that of course also depends on the amount of users.Thanks,\nAlex.",
"username": "TheHiddenDuck"
},
{
"code": "",
"text": "Hi, like I said, I will let the APK size questions be answered by the Realm SDK team who I have sent this to.As for your question on functions, I think the idea here is that Atlas Functions functions are designed to be short-running tasks (do some database calls, send some HTTP requests, hit a Twilio API, etc). For this reason, function execution is limited to around 2 minutes I believe.I think this quote from the docs is meant to distinguish us from another service like Lambda which is designed for these kinds of longer-running scripts/actions. Our functions are best used to replace application logic that you would otherwise write and manage in your own backend.Does that answer your question?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Yes, thank you, that is very useful.I just clarified about the release build as that could be helpful info for the SDK team. Looking forward to hearing back from them ",
"username": "TheHiddenDuck"
},
{
"code": "",
"text": "Hi, just a quick clarification on my end. Functions are limited to 4 minutes: https://www.mongodb.com/docs/atlas/app-services/functions/#constraints",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi @TheHiddenDuckRegarding the size of the APK, it is true that the current size of the library is bigger than we would like and the reason is our native code.There are a couple of reasons for this:So we have some ideas for shaving down the size, but realistically I wouldn’t expect us to get below 5MB for arm64. I realize this might not be what you wanted to hear and ultimately the tradeoff is up to you. There are advantages to shipping the native code though:I hope this answers your questions?Best,\nChristian (Kotlin Team Lead)",
"username": "Christian_Melchior"
},
{
"code": "",
"text": "Ups missed this:I’m also considering building the entire data layer in apps using KMM if I go Realm way, for which Firebase doesn’t seem to have an out of the box support, but wondering how truly production ready that is for iOS + Android?Yes, KMM (iOS + Android) support should be just as stable and production ready as Android support alone. It is the same code.",
"username": "ChristanMelchior"
},
{
"code": "",
"text": "I’m also considering building the entire data layer in apps using KMMNice to see one more KMM enthusiast here. Sharing a few of my repo’s for your referenceApp for managing session queries in real time using KMM with MongoDB & Realm. - GitHub - mongodb-developer/Conference-Queries-App: App for managing session queries in real time using KMM with ...Demo application for conference management, built using Flexible Sync and Realm. - GitHub - mongodb-developer/mongo-conference: Demo application for conference management, built using Flexible Syn...An app that allow to pick jobs based on user location in real time using Flexible Sync and Realm. - GitHub - mongodb-developer/Job-Tracker: An app that allow to pick jobs based on user location in ...",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Hi Christian! Thank you for the detailed response. Of course, I understand it’s challenging to reduce the native code size, but happy to hear that it is on your radar and that improvements are being made! Even 1-2 mb is a significant cut!",
"username": "TheHiddenDuck"
},
{
"code": "",
"text": "Nice to see one more KMM enthusiast here. Sharing a few of my repo’s for your referencNice! Thank you for sharing! ",
"username": "TheHiddenDuck"
},
{
"code": "// Create a local, unsynced Realm for free users\nvar freeRealmConfig = new RealmConfiguration { IsReadOnly = false };\nvar freeRealm = Realm.GetInstance(freeRealmConfig);\n\n// Create a synced Realm for paid users\nvar user = await AuthenticatePaidUser(); // authenticate the user however you choose\nvar syncConfig = new SyncConfiguration(user, realmServerUrl);\nvar paidRealm = await Realm.GetInstanceAsync(syncConfig);\n// Create a local, unsynced Realm for all users\nvar localConfig = new RealmConfiguration { IsReadOnly = false };\nvar localRealm = Realm.GetInstance(localConfig);\n\n// Create a synced Realm for paid users with a partition key of \"paid\"\nvar user = await AuthenticatePaidUser();\nvar syncConfig = new SyncConfiguration(user, realmServerUrl) { PartitionValue = \"paid\" };\nvar paidRealm = await Realm.GetInstanceAsync(syncConfig);\n\n// Create a synced Realm for some data with a partition key of \"shared\"\nvar sharedSyncConfig = new SyncConfiguration(user, realmServerUrl) { PartitionValue = \"shared\" };\nvar sharedRealm = await Realm.GetInstanceAsync(sharedSyncConfig);\n// In your project file, define the ABIs you support\n<ItemGroup>\n <SupportedAbis Include=\"armeabi-v7a\" />\n <SupportedAbis Include=\"arm64-v8a\" />\n <SupportedAbis Include=\"x86\" />\n <SupportedAbis Include=\"x86_64\" />\n</ItemGroup>\n\n// In your AndroidManifest.xml file, enable APK splits\n<manifest ...>\n <dist:module dist:onDemand=\"true\" />\n <dist:split dist:abi=\"armeabi-v7a\" />\n <dist:split dist:abi=\"arm64-v8a\" />\n <dist:split dist:abi=\"x86\" />\n <dist:split dist:abi=\"x86_64\" />\n</manifest>\n",
"text": "Hello @TheHiddenDuck great seeing you! lol You know me by my other username in the other place.I commonly go back and forth between both Firebase and Realm, being formerly a Realm TSE I can every interview ends up going in the direction of questions you ask. I’m not only going to answer your questions with the typical marketing “yeah you can, yada yada” but I’m actually going to give you examples.Anyways, I actually get this question a ton in interviews of Realm vs Firebase, and here’s my take on this and in relation to your concerns:Sure, I’d be happy to provide examples in C#!First, regarding Realm’s offline support, you can definitely use Realm to create a fully functional offline app that only syncs data for paid users. To do this, you can create two separate Realms - one for free users and one for paid users - and only sync the paid user’s Realm. Here’s an example of how you could do this:With this setup, the freeRealm will only exist on the user’s device and won’t be synced to the server. The paidRealm, on the other hand, will be synced to the server and only be available to authenticated paid users.If you want to only sync some data for users and keep some on the device, you can use Realm’s partitioning feature to create separate partitions for synced and unsynced data. Here’s an example:With this setup, the localRealm will only exist on the user’s device and won’t be synced to the server. The paidRealm will be synced to the server and only contain data with a partition key of “paid”, which can be used to store data that only paid users should have access to. The sharedRealm will also be synced to the server, but contain data with a partition key of “shared”, which can be used to store data that should be available to all users.Regarding the APK size increase when adding Realm to your app, there are a few things you can do to reduce the size. First, you can use APK splits to only include the native libraries for the ABIs that your app supports. Here’s an example:By using APK splits, you can significantly reduce the size of your app by only including the native libraries for the ABIs that your app actually supports.Another thing you can do is to leverage Realm’s built-in partitioning feature to separate data between free and paid users. Partitioning allows you to logically separate data in a Realm database and control access to it based on a partition key. In your case, you could use a partition key to separate data for free and paid users, and then only synchronize the data for paid users.Regarding the size of the bundled native libraries, Realm does offer a feature called “fat APK” splitting, which can significantly reduce the size of the APK. This feature splits the native libraries into multiple APKs, one per CPU architecture, and downloads only the required APK at runtime. This way, users only download the native libraries that they need, instead of downloading everything. You can read more about fat APK splitting in the Realm documentation.As for offloading some of the duplicate code to the server, MongoDB Atlas Functions can indeed help you with that. Atlas Functions are serverless functions that allow you to run JavaScript code directly in MongoDB Atlas, using triggers like database events, HTTP requests, or scheduled intervals. You can use Atlas Functions to implement your daily aggregation of statistics and exercises, as well as to verify purchases using Google Play and App Store APIs. 
You can read more about Atlas Functions in the MongoDB documentation.Finally, building the entire data layer in apps using KMM is definitely a viable option, especially if you are already considering using Kotlin and Realm. KMM allows you to share business logic and data models across multiple platforms, while still using native UI components and frameworks. You can read more about KMM in the Kotlin documentation. Keep in mind, however, that KMM is still a relatively new technology, and you may encounter some limitations or issues as you develop your app.Regarding your question about verifying purchases on the backend using both Google Play and App Store APIs, it is definitely possible to do so. Both Google Play and App Store provide APIs that allow you to verify purchases made by users in your app. With Google Play, you can use the Google Play Developer API to verify purchases, while with App Store, you can use the StoreKit API.As for your question about KMM, while KMM is a relatively new technology, it has been gaining popularity and adoption in the mobile development community. KMM provides a way to share code between iOS and Android, allowing for faster development, easier maintenance, and more consistent behavior across platforms. While it may not have out-of-the-box support for Firebase, there are ways to integrate Firebase with KMM, such as using a common Kotlin module. However, as with any new technology, there may be limitations or issues that you may encounter during development.Overall, it seems like you have a lot of options to consider for adding sync functionality to your app. Firebase and Realm both offer strong offline support, but come with their own tradeoffs and considerations. If you are already familiar with the GCP ecosystem, using MongoDB may require a bit of a learning curve, but it could provide a way to offload some of the duplicate code to the server. Similarly, using KMM may require some additional setup and configuration, but it could provide a way to share code between platforms and speed up development. Ultimately, the best solution will depend on your specific needs and requirements, so it may be helpful to experiment with different options and see which one works best for your use case.In conclusion, both Firebase and Realm offer strong offline support for mobile apps, but they have different strengths and limitations.Firebase offers a comprehensive suite of tools and services for app development, including real-time database, authentication, cloud messaging, and more. It provides a straightforward way to add sync functionality to your app, but it may incur costs for anonymous users and may not be as fast as Realm when working offline.Realm, on the other hand, provides a first-class offline experience and modern APIs such as coroutines and flows. It allows you to keep your app fully functional offline and sync data only for paid users. However, adding Realm to your app can significantly increase the APK size, and it may require more effort to set up compared to Firebase.If you are familiar with GCP and want to offload some of the duplicate code to the server, MongoDB Atlas Functions can be a good option to consider. It allows you to run serverless functions on Atlas to perform data aggregation and other tasks.When it comes to building the entire data layer using KMM, it can be a viable option if you are comfortable with Kotlin and want to share code between iOS and Android. 
Keep in mind, however, that KMM is still a relatively new technology, and you may encounter some limitations or issues as you develop your app.Ultimately, the best solution will depend on your specific needs and requirements, so it may be helpful to experiment with different options and see which one works best for your use case.Anything else, feel free to ask.",
"username": "Brock"
},
{
"code": "// Create a local, unsynced Realm for free users\nval freeRealmConfig = RealmConfiguration.Builder()\n.readOnly(false)\n.build()\nval freeRealm = Realm.getInstance(freeRealmConfig)\n\n// Create a synced Realm for paid users\nval user = authenticatePaidUser() // authenticate the user however you choose\nval syncConfig = SyncConfiguration.Builder(user, realmServerUrl)\n.build()\nval paidRealm = Realm.getInstanceAsync(syncConfig).await()\n// Create a local, unsynced Realm for all users\nval localConfig = RealmConfiguration.Builder()\n.readOnly(false)\n.build()\nval localRealm = Realm.getInstance(localConfig)\n\n// Create a synced Realm for paid users with a partition key of \"paid\"\nval user = authenticatePaidUser()\nval syncConfig = SyncConfiguration.Builder(user, realmServerUrl)\n.partitionValue(\"paid\")\n.build()\nval paidRealm = Realm.getInstanceAsync(syncConfig)\n\n// Create a synced Realm for some data with a partition key of \"shared\"\nval sharedSyncConfig = SyncConfiguration.Builder(user, realmServerUrl)\n.partitionValue(\"shared\")\n.build()\nval sharedRealm = Realm.getInstanceAsync(sharedSyncConfig)\[email protected] {\n defaultConfig {\n ndk {\n abiFilters 'armeabi-v7a', 'arm64-v8a'\n }\n }\n}\narmeabi-v7aarm64-v8a// Create a local, unsynced Realm for free users\nconst freeRealmConfig = new Realm.Configuration({\nreadOnly: false,\n});\nconst freeRealm = new Realm(freeRealmConfig);\n\n// Create a synced Realm for paid users\nconst user = authenticatePaidUser(); // authenticate the user however you choose\nconst syncConfig = new Realm.Sync.Configuration({\nuser: user,\nserver: realmServerUrl,\n});\nconst paidRealm = await Realm.open(syncConfig);\n\n// Create a local, unsynced Realm for all users\nconst localConfig = new Realm.Configuration({\nreadOnly: false,\n});\nconst localRealm = new Realm(localConfig);\n\n// Create a synced Realm for paid users with a partition key of \"paid\"\nconst user = authenticatePaidUser();\nconst syncConfig = new Realm.Sync.Configuration({\nuser: user,\nserver: realmServerUrl,\npartitionValue: \"paid\",\n});\nconst paidRealm = await Realm.open(syncConfig);\n\n// Create a synced Realm for some data with a partition key of \"shared\"\nconst sharedSyncConfig = new Realm.Sync.Configuration({\nuser: user,\nserver: realmServerUrl,\npartitionValue: \"shared\",\n});\nconst sharedRealm = await Realm.open(sharedSyncConfig);\nbuild.gradleandroidandroid {\n ...\n defaultConfig {\n ...\n ndk {\n abiFilters 'armeabi-v7a', 'arm64-v8a', 'x86', 'x86_64'\n }\n }\n}\nbuild.gradledependenciesdependencies {\n ...\n implementation \"io.realm:realm-android-library:${realmVersion}\"\n implementation \"io.realm:realm-annotations:${realmVersion}\"\n implementation \"io.realm:realm-gradle-plugin:${realmVersion}\"\n implementation \"io.realm:realm-sync:${realmVersion}\"\n implementation \"io.realm:realm-kotlin:${realmVersion}\"\n}\ngradle.propertiesandroid.useAndroidX=true\nandroid.enableJetifier=true\nrealmVersion=10.7.2\nrealmVersion",
"text": "@TheHiddenDuckI rewrote sample codes in Kotlin for you:And getting more into your Kotlin Specific cases:Yes, Realm adds significant size to the app, especially when multiple ABI splits are needed. However, there are a few ways to reduce the size of bundled native libraries:Enable Proguard or R8 obfuscation and minification, which can remove unused code and shrink the size of the library.Use the @Keep annotation on classes and methods that you want to keep in the code, which prevents them from being removed during the build process.Use ABI filters to only include the native libraries for the specific ABIs that your app is targeting. For example, if your app only targets ARM and ARM64, you can exclude the x86 and x86_64 libraries to reduce the size.Here’s an example of how to use ABI filters in your build.gradle file:By only including the armeabi-v7a and arm64-v8a ABIs, you can significantly reduce the size of the bundled native libraries.Overall, while Realm does add some additional size to your app, it’s important to weigh the benefits of its offline-first features against the impact on APK size. But needless to say you’re not the only person to have problems with how big Realm is lol.Also @TheHiddenDuck This is how you can do it in React.Native instead, I’m not sure what type of app you’re making, but React.Native is probably the easiest for apps if you don’t have to use a lot of device hardware features.Also I do want to make it clear, React Native is flat out the easiest to use to reduce the size of Realm.Here’s an example of how to reduce the size of the bundled native libraries for Realm in React Native:For iOS:For Android:Make sure to replace realmVersion with the version of Realm you’re using.These changes should help reduce the size of the bundled native libraries for Realm in your React Native app.",
"username": "Brock"
},
{
"code": "",
"text": "Hello! Thank you for your detailed responses; that is very useful! Although I’m a bit intrigued now, what that other place and username could be ",
"username": "TheHiddenDuck"
}
] | Realm Sync vs Firebase in 2023 | 2023-04-09T14:08:59.096Z | Realm Sync vs Firebase in 2023 | 1,749 |
null | [
"node-js"
] | [
{
"code": "",
"text": "We observed mongo db was slow when there is delete/cleanup of documents. Total number doc was 20M.\nIs there configuration/setting to make read operation to be timeout quickly in production environment.\nusing 3.5.9 version.Can you suggest the better approach for cleanup when db is used by customers.Thanks",
"username": "rajesh_kumar10"
},
{
"code": "",
"text": "Hello @rajesh_kumar10When you say version 3.5.9 what are you referring to?I am not aware of a MongoDB release 3.5.9 (if there was, it would be well past end-of-life at this point) do you mean the Node driver?That said, it is rather hard to answer without more details, but a few ideas to consider:What sort of query are you using to delete?Do you have an index on whatever field you are using to delete the documents? If you don’t MongoDB will need to do a collection scan, and that is very slow.You could look into using a Bulk operation like Bulk.find.remove()",
"username": "Justin_Jenkins"
},
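To illustrate the advice above (index the field you delete by, and delete in controlled batches), here is a hedged mongosh sketch; the collection name, field name, retention window and batch size are assumptions, not details from this thread:

// Index the field used by the cleanup filter so the delete does not collection-scan.
db.events.createIndex({ createdAt: 1 })

// Delete in small batches so a long-running cleanup does not starve customer reads.
const cutoff = new Date(Date.now() - 90 * 24 * 3600 * 1000)   // assumed 90-day retention
let deleted = 0
do {
  const ids = db.events
    .find({ createdAt: { $lt: cutoff } }, { _id: 1 })  // uses the createdAt index
    .limit(1000)
    .toArray()
    .map(doc => doc._id)
  deleted = ids.length === 0
    ? 0
    : db.events.deleteMany({ _id: { $in: ids } }).deletedCount
} while (deleted > 0)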
{
"code": "",
"text": "3.5.9 is node driver. Thanks for you suggestion, i will look into that",
"username": "rajesh_kumar10"
}
] | Mongodb server becomes slow when cleanup of documents | 2023-01-20T13:46:25.798Z | Mongodb server becomes slow when cleanup of documents | 1,386 |
null | [
"react-native"
] | [
{
"code": "",
"text": "I am using realm for offline implementation. I have stuck since one week regarding below issue. If suppose two users update the same record during offline, One of the user gets online and sync the data to the realm server. When the second user gets online his data also sync into the server that means override the first user data instead of checking whether the data changed or not in the realm server. Can any one help here?",
"username": "Raghavender_Balasani"
},
{
"code": "",
"text": "Hi, please see this documentation page for our conflict resolution algorithm: https://www.mongodb.com/docs/atlas/app-services/sync/details/conflict-resolution/#custom-conflict-resolutionFor simple fields we employ a “last writer wins” policy. If you require custom conflict resolution for some fields, we advise re-thinking the data model. For example:Let me know if this solves your issue? If not, please let me know what you would expect to be able to do in the above situation.Best,\nTyler",
"username": "Tyler_Kaye"
},
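The concrete examples behind "For example:" above did not survive in this archive, so here is an editorial illustration (not the original list) of the kind of remodelling meant, written for React Native / Realm JS; the schema and field names are made up:

// Instead of two offline users overwriting one numeric field (last-writer-wins),
// model each change as an insert that merges cleanly, e.g. a list of adjustments.
const ItemSchema = {
  name: "Item",
  primaryKey: "_id",
  properties: {
    _id: "objectId",
    name: "string",
    adjustments: "int[]",   // each device appends its own delta; appends from
                            // different devices are both kept when sync merges
  },
};

// Reading the effective value: sum the adjustments.
function currentTotal(item) {
  let total = 0;
  for (const delta of item.adjustments) total += delta;
  return total;
}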
{
"code": "",
"text": "Can you give some example snippet?",
"username": "Raghavender_Balasani"
},
{
"code": "",
"text": "Do you have more details about what exactly you are trying to do? See here for how to use a dictionary: https://www.mongodb.com/docs/realm/sdk/react-native/model-data/data-types/dictionaries/ and here for how to use a list https://www.mongodb.com/docs/realm/sdk/react-native/model-data/data-types/collections/",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Your answers not cleared",
"username": "Raghavender_Balasani"
},
{
"code": "",
"text": "Hi, I would love to help out, but can you please provide more information about what is unclear or what exactly you would like to do?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "I need to check the collection values before syncing to realm server. I am using react native SDK.",
"username": "Raghavender_Balasani"
},
{
"code": "",
"text": "There is no way to check the values specifically before it is synced. You can however pause and un-pause the sync session if you have to do so and capture the information about the pre and post value for the field in question. Additionally, you can use things like listeners to be notified when sync changes the value of a field: https://www.mongodb.com/docs/realm/sdk/react-native/react-to-changes/",
"username": "Tyler_Kaye"
},
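A hedged Realm JS (React Native) sketch of the two ideas above: pausing/resuming the sync session around your own checks, and listening for changes that sync applies. The "Task" model and variable names are placeholders:

// Pause sync while you inspect or reconcile local values yourself.
const session = realm.syncSession;
session.pause();               // nothing is uploaded or downloaded while paused
// ... compare local values against whatever you treat as the source of truth ...
session.resume();              // let sync continue and apply its merge rules

// Be notified when sync (or a local write) changes an object's fields.
const task = realm.objectForPrimaryKey("Task", someId);
task.addListener((obj, changes) => {
  changes.changedProperties.forEach((prop) => {
    console.log(`Property ${prop} changed to ${obj[prop]}`);
  });
});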
{
"code": "",
"text": "Okay Thanks for response, Can you give some code snippet checking the changes with server data during pause and un pause sync?",
"username": "Raghavender_Balasani"
},
{
"code": "",
"text": "During offline, when pause the sync and try to reload the app. It’s giving error and crashing the appError: no internal fieldThis error is located at:\nin AppSync (created by AppWrapperSync)\nin Unknown (created by AppWrapperSync)\nin UserProvider (created by AppWrapperSync)\nin AuthOperationProvider (created by AppProvider)\nin AppProvider (created by AppWrapperSync)\nin RCTView (created by View)\nin View (created by AppWrapperSync)\nin AppWrapperSync (created by App)\nin App\nin RCTView (created by View)\nin View (created by AppContainer)\nin RCTView (created by View)\nin View (created by AppContainer)\nin AppContainer\nin realApp(RootComponent), js engine: hermes\nERROR [Error: no internal field]",
"username": "Raghavender_Balasani"
}
] | Using flexible sync check the data before syncing to Realm using react native | 2023-07-25T12:58:06.550Z | Using flexible sync check the data before syncing to Realm using react native | 662 |
null | [
"replication",
"connecting",
"containers"
] | [
{
"code": "FROM mongo:latest\nENV TZ=\"Europe/Budapest\"\nENV TIME_ZONE=\"Europe/Budapest\"\nRUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && dpkg-reconfigure -f noninteractive tzdata\nRUN echo \"rs.initiate({'_id':'rs0', members: [{'_id':1, 'host':'127.0.0.1:27017'}]});\" > \"/docker-entrypoint-initdb.d/init_replicaset.js\"\nRUN echo \"********************\" > \"/tmp/key.file\"\nCOPY ./mongo_init.js /docker-entrypoint-initdb.d/init_users.js\nRUN chmod 600 /tmp/key.file\nRUN chown 999:999 /tmp/key.file\nversion: '3.7'\nservices:\n mongodb:\n image: mongo6.0.5.t\n container_name: mongo\n hostname: mongo\n environment:\n - TZ=Europe/Budapest\n env_file:\n - ./mongo.env\n ports:\n - 27017:27017\n command: mongod --replSet \"rs0\" --bind_ip_all --keyFile /tmp/key.file\nvolumes:\n mongodb_data:\nversion: '3.7'\nservices:\n mongodb:\n image: mongo6.0.5.t\n container_name: mongo\n hostname: mongo\n environment:\n - TZ=Europe/Budapest\n env_file:\n - ./mongo.env\n volumes:\n - mongodb_data:/data/db\n ports:\n - 27017:27017\n healthcheck:\n test: test $$(echo \"rs.initiate().ok || rs.status().ok\" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1\n interval: 10s\n start_period: 10s\n command: \"mongod --bind_ip_all --replSet rs0 --keyFile /tmp/key.file\"\nvolumes:\n mongodb_data:\n",
"text": "I tried to make a docker container that runs a Mongo DB 6.0.5 as a single-node replication set.\nBut every time when I tried, the container stopped, and I got an error: MongoServerError: This node was not started with replication enabled.DockerFile:Dokcer-compose:I tried the old one which is good with 4.4 or 5.0:Same result.Does anyone have a working docker definition with a replica set?",
"username": "Hegyi_Gergely"
},
{
"code": "image: mongo6.0.5.tservices:\n mongo1:\n hostname: mongo1\n image: mongo\n expose:\n - 27017\n ports:\n - 30001:27017 \n restart: always\n command: mongod --replSet my-mongo-set\n mongo2:\n hostname: mongo2\n image: mongo\n expose:\n - 27017\n ports:\n - 30002:27017\n restart: always\n command: mongod --replSet my-mongo-set\n mongo3:\n hostname: mongo3\n image: mongo\n expose:\n - 27017\n ports:\n - 30003:27017\n restart: always\n command: mongod --replSet my-mongo-set\n\n mongoinit:\n image: mongo\n # this container will exit after executing the command\n restart: \"no\"\n depends_on:\n - mongo1\n - mongo2\n - mongo3\n command: >\n mongo --host mongo1:27017 --eval \n '\n db = (new Mongo(\"localhost:27017\")).getDB(\"test\");\n config = {\n \"_id\" : \"my-mongo-set\",\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"mongo1:27017\"\n },\n {\n \"_id\" : 1,\n \"host\" : \"mongo2:27017\"\n },\n {\n \"_id\" : 2,\n \"host\" : \"mongo3:27017\"\n }\n ]\n };\n rs.initiate(config);\n '\naasawari.sahasrabuddhe@M-C02DV42LML85 ~ % docker ps -a\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n478b7e4f2dc1 mongo \"docker-entrypoint.s…\" 15 seconds ago Exited (127) 12 seconds ago aasawarisahasrabuddhe-mongoinit-1\n761b79c62ef0 mongo \"docker-entrypoint.s…\" 15 seconds ago Up 13 seconds 0.0.0.0:30003->27017/tcp aasawarisahasrabuddhe-mongo3-1\n24c7cdbfa412 mongo \"docker-entrypoint.s…\" 15 seconds ago Up 13 seconds 0.0.0.0:30002->27017/tcp aasawarisahasrabuddhe-mongo2-1\n0de0fec70540 mongo \"docker-entrypoint.s…\" 15 seconds ago Up 13 seconds 0.0.0.0:30001->27017/tcp aasawarisahasrabuddhe-mongo1-1\naasawari.sahasrabuddhe@M-C02DV42LML85 ~ %\n",
"text": "Hi @Hegyi_Gergely and welcome to the MongoDB community forum!!image: mongo6.0.5.tFirstly, the image mentioned above does not seem valid MongoDB image. Can you confirm if this is a custom image created for the application.\nHowever, please note that, there are two images:Does anyone have a working docker definition with a replica set?I tried to deploy a 3 node replica set using the docker-compose using the latest MongoDB image, and I was successfully able to run the deployment.This is how my docker compose.yaml looks like:Let us know if you have any further questions.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "FROM mongo:6.0.5\nENV TZ=\"Europe/Budapest\"\nENV TIME_ZONE=\"Europe/Budapest\"\nRUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && dpkg-reconfigure -f noninteractive tzdata\nRUN echo \"rs.initiate({'_id':'rs0', members: [{'_id':1, 'host':'127.0.0.1:27017'}]});\" > \"/docker-entrypoint-initdb.d/init_replicaset.js\"\nRUN echo \"xxxxxxxxxxxxxxxx\" > \"/tmp/key.file\"\nCOPY ./init_replicaset.js /docker-entrypoint-initdb.d/init_replicaset.js\nCOPY ./mongo_init.js /docker-entrypoint-initdb.d/init_users.js\nRUN chmod 600 /tmp/key.file\nRUN chown 999:999 /tmp/key.file\n\nCMD [\"mongod\", \"--replSet\", \"rs0\", \"--bind_ip_all\", \"--keyFile\", \"/tmp/key.file\"]\n",
"text": "Thanks for your answer!\nI 'll try this wayThe mongo6.0.5.t was a custom test Image:",
"username": "Hegyi_Gergely"
},
{
"code": "",
"text": "Dear Aasawari!Sorry, but there are more problems with this example:\nmongoinit: /usr/local/bin/docker-entrypoint.sh: line 420: exec: mongo: not found\nThe mongo replaced to mongosh in V6. Are you sure to use MongoDB V6?\nyou should update the lates mongo image:\ndocker pull mongomongo1…3:\n“id”:4939300, “ctx”:“monitoring-keys-for-HMAC”,“msg”:“Failed to refresh key cache”,“attr”:{“error”:“NotYetInitialized: Cannot use non-local read concern until replica set is finished initializing.”,“nextWakeupMillis”:3600}}",
"username": "Hegyi_Gergely"
},
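A hedged adaptation of the earlier init container for MongoDB 6+, where the legacy mongo shell has been replaced by mongosh; the host names and replica set name simply follow the earlier example in this thread:

  mongoinit:
    image: mongo:6
    restart: "no"
    depends_on:
      - mongo1
      - mongo2
      - mongo3
    command: >
      mongosh --host mongo1:27017 --eval
      '
      rs.initiate({
        _id: "my-mongo-set",
        members: [
          { _id: 0, host: "mongo1:27017" },
          { _id: 1, host: "mongo2:27017" },
          { _id: 2, host: "mongo3:27017" }
        ]
      });
      '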
{
"code": " mongodb:\n container_name: mongodb\n image: mongo:latest\n expose:\n - 27017\n ports:\n - \"27017:27017\"\n volumes:\n - ./data/mongodb:/data/db\n - ./scripts/mongodb/rs-initiate.js:/docker-entrypoint-initdb.d/rs-initiate.js\n command: [\"--replSet\", \"dbrs\", \"--bind_ip_all\"]\nrs.initiate();\n",
"text": "Hi Gergely!In docker-compose:in /scripts/mongodb/rs-initiate.js:",
"username": "Szanto_Zoltan"
}
] | V 6 docker one node replica set | 2023-03-25T21:32:04.550Z | V 6 docker one node replica set | 4,384 |
null | [] | [
{
"code": "",
"text": "Hi,i have one query that have execution time different… 49761 millisecond or 4658 millisecondi have activated profiler and this is that i viewprofiler.json (183.3 KB)both query use the same indexi know that probably i have to change the query for optimize, but i don’t understand because the execution time is real different!Thank you",
"username": "ilmagowalter"
},
{
"code": " \"numYield\": 2396 --> When time was \"millis\": 4658\n \"numYield\": 26383 --> When time was \"millis\": 49761\n",
"text": "Hello @ilmagowalter ,Upon checking the logs from Profiler, it looks identical only difference it shows is inAs per the Database Profilersystem.profile.numYieldThe number of times the operation yielded to allow other operations to complete. Typically, operations yield when they need access to data that MongoDB has not yet fully read into memory. This allows other operations that have data in memory to complete while MongoDB reads in data for the yielding operation. For more information, see the FAQ on when operations yield.Did your first run of the query took more time to return results than the second run?If yes, then I think when you first ran the query it performed the operations and certain things are stored in cache for future use and when you re-ran the query it was faster due to the cache. Also, you can check if there were other operations already running while you ran the query first time and by the time you ran the query second time, the other operations were already complete.I would recommend you to refer below documents for achieving better performanceRegards,\nTarun",
"username": "Tarun_Gaur"
},
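A small hedged mongosh sketch of the cache effect described above, comparing a cold first run with a warm second run of the same query; the collection and filter are placeholders:

// The first run typically pulls data from disk into the cache; the second run is
// served from cache, so executionTimeMillis (and yields) are usually much lower.
const run = () =>
  db.orders.find({ status: "A" })
    .explain("executionStats")
    .executionStats.executionTimeMillis;

const firstRunMillis = run();    // cold: data not yet in memory
const secondRunMillis = run();   // warm: data already cached
print({ firstRunMillis, secondRunMillis });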
{
"code": "",
"text": "Yes the first execution was more slow than second execution.Thank you",
"username": "ilmagowalter"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | [PERFORMANCE] same query, different execution time | 2023-07-27T10:40:24.469Z | [PERFORMANCE] same query, different execution time | 331 |
null | [] | [
{
"code": "",
"text": "I have a collection with user data with emails. Is it possible to use MongoDB Atlas Search to search through them?Example:\[email protected]\[email protected]\[email protected] for:\n‘user’ - I want to get all three\n‘gmail’ - last two\nand so on.Previously I was using regex, but it seems to start having issues with the number of documents in the database. Is there any better way to do this?",
"username": "Mykhailo_Fomenko"
},
{
"code": "[ {\n _id: ObjectId(\"64c759c53f1a52e9fd5053b8\"),\n email: '[email protected]'\n},\n{\n _id: ObjectId(\"64c759da3f1a52e9fd5053b9\"),\n email: '[email protected]'\n},\n{\n _id: ObjectId(\"64c759f33f1a52e9fd5053ba\"),\n email: '[email protected]'\n} ]\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"email\": {\n \"tokenization\": \"nGram\",\n \"type\": \"autocomplete\"\n }\n }\n }\n}\ndb.collection.aggregate( [ {\n $search: {\n autocomplete: {\n path: \"email\",\n query: \"gmail\",\n }\n }\n }])\n{\n _id: ObjectId(\"64c759da3f1a52e9fd5053b9\"),\n email: '[email protected]'\n}\n{\n _id: ObjectId(\"64c759f33f1a52e9fd5053ba\"),\n email: '[email protected]'\n}\ndb.collection.aggregate( [ {\n $search: {\n autocomplete: {\n path: \"email\",\n query: \"user\",\n }\n }\n }])\n{\n _id: ObjectId(\"64c759da3f1a52e9fd5053b9\"),\n email: '[email protected]'\n}\n{\n _id: ObjectId(\"64c759f33f1a52e9fd5053ba\"),\n email: '[email protected]'\n}\n{\n _id: ObjectId(\"64c759c53f1a52e9fd5053b8\"),\n email: '[email protected]'\n}\nautocomplete",
"text": "Hello @Mykhailo_Fomenko ,Welcome to The MongoDB Community Forums! MongoDB Atlas search provides a seamless and scalable solution for creating relevance-based application features through full-text search capabilities. It eliminates the necessity of running a separate search system alongside your database.For your use-case, I added below 3 documents in my collectionCreated a search index via Atlas UI, kept most settings as default but turned off Dynamic mapping and added a field mapping on my field ‘email’ with Data type as Autocomplete which performs a search for a word or phrase that contains a sequence of characters from an incomplete input string, below is my index definitionRan below query to check if any email field contains gmailOutputRan below query to check if any email field contains userOutputPlease note that the above examples I have posted is based off your provided search terms and sample email strings. If you have any other examples / search terms that you believe the autocomplete operator may not work for then please let me know.You may also wish to look at the following topic which may be of use to you too:Let me know if this helps or if you have any more queries, would be happy to help you! Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Can Atlas Search be used for exact substring search in a single-word field (like email)? | 2023-07-24T08:21:12.166Z | Can Atlas Search be used for exact substring search in a single-word field (like email)? | 581 |
null | [
"serverless"
] | [
{
"code": "",
"text": "Hello everyone,I am currently working on a project where I’m using Spring WebFlux and MongoDB Reactive, with MongoDB Atlas Serverless as my database. I’ve encountered an issue where my application seems to experience occasional disconnections from the MongoDB Atlas Serverless instance. My suspicion is that the server might be closing the connection after a period of inactivity.Does anyone have experience with this kind of issue? Does MongoDB Atlas Serverless close connections after a certain idle time? And if so, what would be the best way to handle these disconnections?I’m currently exploring the idea of implementing a reconnection logic within the application, but I’m not sure if this is the best solution. Could anyone recommend a more appropriate approach to tackle this situation or perhaps share how you’ve solved a similar issue in the past?Any insights, suggestions, or references would be greatly appreciated. Thank you in advance!Best regards,Frank Lin",
"username": "Lin_Frank"
},
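One commonly tried mitigation (a hedged sketch, not an official recommendation) is to let the driver retire idle pooled connections before the server side drops them, using standard connection string options; the host, credentials and values below are placeholders:

# application.properties (Spring Boot) - placeholder URI with pool-tuning options
spring.data.mongodb.uri=mongodb+srv://user:[email protected]/mydb?maxIdleTimeMS=60000&minPoolSize=0&retryWrites=true&w=majority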
{
"code": "",
"text": "Same issue, searching for a solution",
"username": "Danushka_Herath"
}
] | Connection Handling in WebFlux with MongoDB Reactive on MongoDB Atlas Serverless | 2023-07-16T13:45:25.758Z | Connection Handling in WebFlux with MongoDB Reactive on MongoDB Atlas Serverless | 719 |
null | [
"queries",
"swift",
"atlas-device-sync",
"flexible-sync"
] | [
{
"code": "",
"text": "With flexible sync linked objects do not get downloaded to the client when an object gets downloaded. That, and another use case of our app lead to the following question:How would one best get objects based on the results of a subscription?\nSo let’s say I have a collection “Assignment” where each assignment contains an “ownerId” and a “foreignUserId” (or a linked User object)\nThese Assignments are the source of truth and the only place where the client can find what other users to sync. So the client has a subscription on these Assignments where he gets all the assignments with his id as “ownerId”. But now the clients needs the actual other users he has an assigment for, so he would need a different subscription on Users, based on all the \"foreignUserId\"s retrieved from the first subscription. How could that be done with the Swift SDK?Of course this could be solved easily by having a field “owner” inside a User and syncing directly the Users. But in our app, a User can have multiple \"owner\"s and each “Assignment” contains also additional information, different for each pair of “owner” and “foreignUser”. Additionally, we would like to separate the source of truth for the relations of these users from the actual users, since not every user should see the assigments of other users.A different example where this could be useful is when you have employer and notes. lets say you query for the employers based on some of their properties. Now you want to sync all notes that are “createdBy” these employers. So you would also need a subscription on “Notes” based on the results of a “Employer” subscription.This seems like a very usual use-case, yet I could not find anything in the swift SDK docs.Thank you for your help!P.S: if that was not understandable, or more information is needed I can always elaborate…cc @Ian_Ward @Tyler_Kaye",
"username": "David_Kessler"
},
{
"code": "",
"text": "I have thought about using “realm.objects(Example.self).observe” but quite unsure if there is no more straight forward way to do this.“getting objects based on the results of a subscription” is necessary whenever linked objects do not contain themselves the same properties that were used to query the “parent” objects, which in my opinion is the case most of the time… I feel like I am missing something.I suppose one could also use onComplete, or Async/await as described herebut then again I am not sure if it makes sense to add a new subscription for each “parent” object as there might be hundreds",
"username": "David_Kessler"
},
{
"code": "onComplete",
"text": "As I thought, onComplete is only run when subscriptions are newly set or removed, but not if the actual data being synced changes. So I still do not have an idea how to setup subscriptions that rely on results of other subscriptions.",
"username": "David_Kessler"
},
{
"code": "",
"text": "Hi, sorry for the delayed response. Do you mind sharing a little JSON snippet of your data for the two collections and what you are trying to do? Having a bit of a difficult time understanding what you are describing.",
"username": "Tyler_Kaye"
},
{
"code": "regionregionnotesemployeeclass Employee: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id = UUID().uuidString\n @Persisted var region: String\n}\n\nclass Note: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id = UUID().uuidString\n @Persisted var text: String?\n @Persisted var creatorId: String? // this is needed, because flexible sync does not allow queries on embedded objects (?)\n @Persisted var creator: Employee?\n // putting region in here as well is not an option\n}\n",
"text": "Thanks for your response @Tyler_Kaye. Sure I can elaborate. Let’s take the second example I stated above:\nThere are employees and notes. Then let’s say a manager has a few employees assigned to him, so he has a query for employees, based on properties of the employee (e.g. region).\nHe also needs all the notes of the employees that he gets as a result of his query, but the note object itself does not contain the necessary fields (region) that he used to query the employees. So the subscription on notes objects would have to be based on the results of the employee subscription. For example a subscription that is effectively “give me every Note that has creatorId == (id of one of my employees)”\nHere would be example swift models:",
"username": "David_Kessler"
},
{
"code": "",
"text": "Just in case my answer went unnoticed, I’ll bump this question, as it is still open.\n@Tyler_Kaye",
"username": "David_Kessler"
},
{
"code": "creator IN {\"list\", \"of\", \"employee\" \"ids\"}",
"text": "Hi! Apologies on the delay again, will try to not let it become a pattern. I think that right now you are correct that this is a slightly awkward experience for you. We have two projects that have already started work that may be of help to you:I suspect that 1 will make your experience more pleasant in making this query, but am curious for your thoughts on the two projects / solutions.Thanks,\nTyler",
"username": "Tyler_Kaye"
},
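A hedged Swift sketch of idea (2) above: observe the Employee results and rebuild the Note subscription from their ids whenever they change. The model and field names come from the earlier example in this thread; the exact wiring is illustrative, not official guidance:

// Keep the returned token alive for as long as you want the updates to run.
let employees = realm.objects(Employee.self).where { $0.region == "west" }  // assumed region filter
let token = employees.observe { _ in
    let ids = Array(employees.map(\._id))
    let subscriptions = realm.subscriptions
    subscriptions.update {
        subscriptions.remove(named: "employee-notes")
        subscriptions.append(QuerySubscription<Note>(name: "employee-notes") {
            $0.creatorId.in(ids)   // "give me every Note created by one of my employees"
        })
    }
}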
{
"code": "",
"text": "What’s the roadmap for the linked objects? It would literally remove the consideration of ever using any other database for mobile",
"username": "Tim_Tati"
},
{
"code": "",
"text": "Hi, automatically including linked objects in the result of a subscription is unlikely to happen in the near term. The reasons for this are:I hope this explains our choice. I would be happy to hear if you have any additional feedback about this.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hello Tyler,Thank you for your response.Well, firstly the performance loss sounds awful so I understand the hesitation. But it might definitely be useful even in a very simple capacity. For example, I use flexible sync to sync all recent Posts but the comments are not embedded so I have to add a new subscription every time somebody opens a post to make sure the right comments are syncing. That process would be way simpler if I could just add a subscription for all comments in the initial config by just adding a predicate that all the linked comments should be synced along with the posts with the same permissions as the parent object.Side note:\n2) Is there a way to do this without using linking objects? I there a way I can somehow always have access to a dynamic list of all the POSTIDS so I can maintain a subscription for the comments using those as keys? I know realm swift SDK is updated very frequently but the docs are somewhat lagging or confusing sometimes. So if there’s a way to do that. That would really help me out. Thank you!",
"username": "Tim_Tati"
}
] | Flexible sync, subscription based on other subscriptions results | 2022-07-16T11:03:01.374Z | Flexible sync, subscription based on other subscriptions results | 3,629 |
null | [
"app-services-user-auth"
] | [
{
"code": "\"%oidToString\": \"_id\"{\n \"_id\": { \"$oid\": \"63ed2e4fb7f367c92578e526\" },\n \"user_id\": \"63ed2dbe5960df2af7fd216e\",\n \"name\": \"Fred\"\n}\n{\n \"_id\": { \"$oid\": \"63ed2dbe5960df2af7fd216e\" },\n \"name\": \"Fred\"\n}\n",
"text": "When setting up Cutom User Data, according to the documentation the User Id Field in the user’s custom data document must be of type string.I’m just wondering why this cannot be an ObjectId instead or if there is anyway for it to be an ObjectId? If the app services backend can’t handle an ObjectId for some reason, then it would be good if we could enter something like \"%oidToString\": \"_id\" as the User Id Field for example.The reason I ask is because the custom user data document already contains an _id field that can easily be used as the user’s id if it is set at creation. This then prevents having to store a second id for the user. The documentation makes it clear to keep the custom user data document as small as possible as it is encoded into the access token. However, forcing a separate string id causes the document to be bigger than it needs to be.For example, instead of this:We could just have this (almost have the size):",
"username": "Wilber_Olive"
},
{
"code": "",
"text": "Hello! As of our most recent release on July 26 2023, we now support both string and ObjectID types in the User ID field in custom user data documents. We’re working to update our documentation at the moment but we do support the functionality you mentioned in your example.",
"username": "Gabby_Asuncion"
},
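For reference, a hedged sketch of the relevant custom user data configuration in App Services; the database and collection names are placeholders, and per the reply above the field pointed to by user_id_field may now hold either a string or an ObjectId:

{
  "enabled": true,
  "mongo_service_name": "mongodb-atlas",
  "database_name": "app",
  "collection_name": "users",
  "user_id_field": "_id"
}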
{
"code": "",
"text": "Ah yes so it does now. It did not work for me at first, but after trying again now, it appears to be working.",
"username": "Wilber_Olive"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using an ObjectId as the User Id Field | 2023-07-31T18:36:13.496Z | Using an ObjectId as the User Id Field | 600 |
null | [
"java",
"spring-data-odm"
] | [
{
"code": "",
"text": "I have a Spring boot application configured wih MongoDB.Everything worked just fine when working with mongodb on localhost.The problem is when i try to connect to mongodb on remote server i get “Exception opening socket” and “connect timed out”.\nI already changed bind_ip = 127.0.0.1 to bind_ip = 0.0.0.0.\nThis is the config in my application.properties file:\nspring.data.mongodb.uri=mongodb://user:password@remote_url/database_name",
"username": "youssef_boudaya"
},
{
"code": "",
"text": "Is it a replica or a standalone DB?\nWhat does you remote url look likeSample from mongo doc\nspring.data.mongodb.uri=mongodb://user:[email protected]:12345,mongo2.example.com:23456/test",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "how can i identify if it’s a replica or a standalone ?",
"username": "youssef_boudaya"
},
{
"code": "",
"text": "Can you access your remote db from shell?You can use rs.status(),rs.conf() to know about your setup",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "rs.status\nfunction() {\nreturn db._adminCommand(“replSetGetStatus”);\n}\nrs.conf()\nuncaught exception: Error: Could not retrieve replica set config: {\n“ok” : 0,\n“errmsg” : “not running with --replSet”,\n“code” : 76,\n“codeName” : “NoReplicationEnabled”\n} :\nrs.conf@src/mongo/shell/utils.js:1599:11\n@(shell):1:1",
"username": "youssef_boudaya"
},
{
"code": "",
"text": "Hi, did you ever solve the problem? I have the same issue. How did you fix it?",
"username": "Kieyan_Mamiche"
},
{
"code": "",
"text": "Are you using correct host & port?\nIs your mongodb running with auth?\nIs it standalone or replica\nCheck this link for various connect strings",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Sorry for taking so long to respond. If I am using a URI, then I thought I’m not supposed to configure a host and port on the application,properties file? No it is not running with auth. Essentially, I have successfully connected to MongoDB Compass with no problems, but when I try to set up the applications.properties file to connect to MongoDB Atlas, I have errors. I get the exact same error: “Exception opening socket” and “connect timed out”. Do you have any advice on how to configure a Java Springboot application to connect to MongoDB Atlas if I have been trying the URI method, and I’ve enabled all IP’s to be able to access my database.",
"username": "Kieyan_Mamiche"
},
{
"code": "",
"text": "Can you connect by shell?\nWhat type of connect string are you using in URI?\nSRV or long form(old style)\nWith long form you have to use port\nCheck our forum threads.Many examples given for application.properties",
"username": "Ramachandra_Tummala"
},
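Hedged examples of the two connection string forms mentioned above, as they would appear in application.properties; hosts, credentials and database names are placeholders:

# SRV form (Atlas-style): no port, the driver discovers hosts via DNS
spring.data.mongodb.uri=mongodb+srv://user:[email protected]/mydb?retryWrites=true&w=majority

# Long form (old style): list each host with its port explicitly
spring.data.mongodb.uri=mongodb://user:pass@host1.example.com:27017,host2.example.com:27017/mydb?replicaSet=rs0&authSource=admin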
{
"code": "",
"text": "Hello, thanks for your help. I finally figured it out. The wifi was the problem, and it works on my personal mobile hotspot.",
"username": "Kieyan_Mamiche"
},
{
"code": "",
"text": "Hi , sorry to post the subject again but I have a problem with the host name ex: cluster0.e4xb04k.mongodb.netI’m using spring boot to connect, and the error says hostname not found?",
"username": "Gino_Allison"
}
] | How to connect to mongodb on remote server from spring boot? | 2021-05-28T15:42:59.279Z | How to connect to mongodb on remote server from spring boot? | 11,678 |
null | [] | [
{
"code": "",
"text": "Half a day ago, I had no problem. Then, I reinstalled my computer and haven’t yet installed a few of my fav extensions (that do not auto-sync) on the browser. So I am not sure if it was always like this.Right now, I am having a hard time browsing the forum. Loading speed is pretty well as usual, but scrolling is a pain.All other, including other MongoDB sites, are lightning fast but here I am experiencing maybe a 10 times slower scrolling. (again not the loading)If you had a change today, please revert it back and fix whatever you have added. Else you need a thorough inspection of Forums.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi @Yilmaz_Durmaz I am not experiencing any issues with the forums and scrolling currently. I have been active on the forums throughout the day (US time) without issue. Are you still having problems?",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "I have restarted the browser and pc. I can’t remember how fast it was before. At least, it is now much faster and more browsable without pain.But still a bit sluggish compared to SO for example. Scrolling SO homepage a few times longer than here takes half the time.I can’t say what was the cause for slugging, yet I might add this gave me an another sight on the design. It may still be sluggish on low-core slow-memory machines (mine is 10y old but 8GB with 8-core, so still fast) as not all of us have high resources. It is worth having an optimizing look at the Forum’s codes.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi @Yilmaz_Durmaz,Thank you for your feedback, as always!We have a few monitoring solutions in place and I don’t see any specific alerts or other reports in the same time period. Some more localised issues can be difficult to track down, but we’re definitely interested in trying to improve the user experience.Can you share more details including:Are you still seeing slowness as compared to before reinstalling your computer?One common slowdown for a new O/S install (particularly with many existing files) is O/S indexing files for search (eg Search indexing for Windows or Spotlight Search for macOS). The initial O/S indexing build can take many hours depending on the number of files to index and system resources.Other possible reason for slowdowns could include:Our community platforms team has some ongoing projects to improve user experience including performance and design, but we’re still a relatively small team compared to our backlog of tasks :).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_XI really can’t remember (fish memory) how was it before the system was re-installed. But after the installation, it sure was like playing a 60fps action game at 3-5 fps. And I can’t tell either what has changed in one reboot but the comparison was the same: the Forum was slow yet other pages were functioning normally.PS: the link opens a preview video at a reduced size of 360p. details are not seen much but the scrolling difference is clear.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "@Stennie_X I had another test about the slow experience.I had to reinstall (again) my system for an account sync problem; easier than trying to fix This time I had no extensions installed. Yet again, other sites were fine while the Forum was still slow.After a few minutes, I noticed the pages were partly fast. Then I opened the browser’s task manager and saw a high CPU usage.The usual workings of web pages are to “load and keep” on page load or after scroll, but it seems it is like “load then forget and reload” for every scroll action.Although it was initially worse just like I described in the first post, it gradually become faster to scroll.This new video shows an increase in CPU usage while scrolling: mongodb forums high cpu usage.webm - Google DriveThese high levels are the culprit behind. It lowers the browsing experience on mid-to-low-end devices.I hope web devs will find the actual code causing this and fix it.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I just created similar topic. It’s because they’re using a 5000x10000px PNG in the background Looks like these backend guys have no idea how the frontend works.",
"username": "Bear_Town"
},
{
"code": "",
"text": "It’s because they’re using a 5000x10000px PNG in the backgroundWow, I had never imagined this would be the culprit.I have identified at least one of these image files: dark-bg.png (4098×10726) (mdb-community.s3.amazonaws.com)In a 10s page performance test, almost 50% was spent in the page painting stage. I have added it to my blocked URL list and cleared the cache for the forums. now pages flow blazing fast compared to the previous state.Files with 500kb sizes are generally just big in disk size, but this guy is in png format and can occupy a lot of CPU resources to display. and the forum pages try to refresh for every scroll event.@Stennie_X, can you please warn the responsible team? (by the way, did the forums disable the tagging/viewing MongoDB team members?)",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "@Stennie_X, can you please warn the responsible team? (by the way, did the forums disable the tagging/viewing MongoDB team members?)Unfortunately Stennie is no longer with MongoDB. It sounds like he’s taking time to travel, be with family/friends and basically just enjoy life, before starting up his next career adventure. He’ll definitely be missed around these parts.",
"username": "Doug_Duncan"
}
] | Forum Pages becomes very sluggish to scroll | 2022-09-08T22:17:03.487Z | Forum Pages becomes very sluggish to scroll | 2,922 |
[] | [
{
"code": "use tutorialspoint\n\ndb.address_home.insertOne({ \"_id\": ObjectId(\"534009e4d852427820000002\"), \"building\": \"22 A, Indiana Apt\", \"pincode\": 123456, \"city\": \"Los Angeles\", \"state\": \"California\" })\n\ndb.users.insertOne( { \"_id\": ObjectId(\"53402597d852426020000002\"), \"address\": { \"$ref\": \"address_home\", \"$id\": ObjectId(\"534009e4d852427820000002\"), \"$db\": \"tutorialspoint\" }, \"contact\": \"987654321\", \"dob\": \"01-01-1991\", \"name\": \"Tom Benzamin\" })\n\nvar user = db.users.findOne({\"name\":\"Tom Benzamin\"})\nvar dbRef = user.address\ndb[dbRef.$ref].findOne({\"_id\":(dbRef.$id)})\n",
"text": "\nqiDsj737×284 6.44 KB\nAs seen in this image when I try to retrieve address details from collection ‘address_home’ using DB Reference in MongoDB, it returns null value. Anybody know what is the problem with this code?I tried the code in the image. I want it to print data from the ‘address_home’ collection in place of the ‘DBRef()’ data on users table.and finally,but, this command returns null as seen in the image.I tried this code from here:MongoDB Database References - As seen in the last chapter of MongoDB relationships, to implement a normalized database structure in MongoDB, we use the concept of Referenced Relationships also referred to as Manual References in which we manually...",
"username": "Muhammed_Saajid"
},
{
"code": "$lookup$graphLookup$lookupordersdb.orders.insertMany( [ \n{ \"_id\" : 1, \"item\" : \"almonds\", \"price\" : 12, \"quantity\" : 2 }, \n{ \"_id\" : 2, \"item\" : \"pecans\", \"price\" : 20, \"quantity\" : 1 }, \n{ \"_id\" : 3 }\n] )\ninventorydb.inventory.insertMany( [ \n{ \"_id\" : 1, \"sku\" : \"almonds\", \"description\": \"product 1\", \"instock\" : 120 }, \n{ \"_id\" : 2, \"sku\" : \"bread\", \"description\": \"product 2\", \"instock\" : 80 }, \n{ \"_id\" : 3, \"sku\" : \"cashews\", \"description\": \"product 3\", \"instock\" : 60 }, \n{ \"_id\" : 4, \"sku\" : \"pecans\", \"description\": \"product 4\", \"instock\" : 70 }, \n{ \"_id\" : 5, \"sku\": null, \"description\": \"Incomplete\" }, \n{ \"_id\" : 6 }\n] )\nordersordersinventoryitemordersskuinventorydb.orders.aggregate( [ \n{ \n $lookup: \n { \n from: \"inventory\", \n localField: \"item\", \n foreignField: \"sku\", \n as: \"inventory_docs\" \n } \n }\n] )\n{ \n \"_id\" : 1, \n \"item\" : \"almonds\", \n \"price\" : 12, \n \"quantity\" : 2, \n \"inventory_docs\" : [ \n { \"_id\" : 1, \"sku\" : \"almonds\", \"description\" : \"product 1\", \"instock\" : 120 } \n ]\n}\n{ \n \"_id\" : 2, \n \"item\" : \"pecans\", \n \"price\" : 20, \n \"quantity\" : 1, \n \"inventory_docs\" : [ \n { \"_id\" : 4, \"sku\" : \"pecans\", \"description\" : \"product 4\", \"instock\" : 70 } \n ]\n}\n{ \n \"_id\" : 3, \n \"inventory_docs\" : [ \n { \"_id\" : 5, \"sku\" : null, \"description\" : \"Incomplete\" }, \n { \"_id\" : 6 } \n ]\n}\n",
"text": "Hello @Muhammed_Saajid ,Welcome to The MongoDB Community Forums! I noticed that you have not had a response to this topic yet, were you able to find a solution?\nIf not, then I would suggest you to use $lookup for this. As per documentation on Database ReferencesThis page outlines alternative procedures that predate the $lookup and $graphLookup pipeline stages.$lookup - Performs a left outer join to a collection in the same database to filter in documents from the “joined” collection for processing. Below is an example for the same.Create a collection orders with these documents:Create another collection inventory with these documents:The following aggregation operation on the orders collection joins the documents from orders with the documents from the inventory collection using the fields item from the orders collection and the sku field from the inventory collection:The operation returns these documents:Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "No, I got solution for DBRef. It was a mistake which was caused by an outdated syntax",
"username": "Muhammed_Saajid"
}
] | DBRef returns null value | 2023-06-29T07:17:32.812Z | DBRef returns null value | 393 |
|
null | [
"replication"
] | [
{
"code": "{\"t\":{\"$date\":\"2021-08-02T21:29:52.720+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22572, \"ctx\":\"MirrorMaestro\",\"msg\":\"Dropping all pooled connections\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\",\"error\":\"ShutdownInProgress: Pool for mongo02.abc.com:27017 has expired.\"}}\n{\"t\":{\"$date\":\"2021-08-02Txx:xx:xx.720+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22572, \"ctx\":\"MirrorMaestro\",\"msg\":\"Dropping all pooled connections\",\"attr\":{\"hostAndPort\":\"mongo03.abc.com:27017\",\"error\":\"ShutdownInProgress: Pool for mongo03.abc.com:27017 has expired.\"}}\n\n",
"text": "Hello everyone,In a 3 node MongoDB Replicaset, the below error messages occur repeatedly only on the Primary node (master). The error messages show up twice/thrice every hour.I tried checking the logs of mongo02 and mongo03. But, there are no error messages there.This is a normal replicaset with default configuration. No arbiters etc. This issue happens on all clusters that we have.I googled this error message and read that… this may be because of glibc versions >= 2.27. We are using CentOS and glibc version is 2.17. So, this may not be the cause.The replicasets are on top of VMs in Azure and they talk internally via private IPs and the VMs are in same subnet. So, no firewall or security rule issue here.If anyone faced similar issue in the past, please provide your inputs. Thanks in advance !!",
"username": "Manu"
},
{
"code": "",
"text": "bring this topic to the top for visibility",
"username": "Manu"
},
{
"code": "{\"t\":{\"$date\":\"2021-08-06T11:34:22.860+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22578, \"ctx\":\"MirrorMaestro\",\"msg\":\"Updating pool controller\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\",\"poolState\":\"{ requests: 0, ready: 1, pending: 0, active: 0, isExpired: true }\"}}\n{\"t\":{\"$date\":\"2021-08-06T11:34:22.860+00:00\"},\"s\":\"D2\", \"c\":\"CONNPOOL\", \"id\":22571, \"ctx\":\"MirrorMaestro\",\"msg\":\"Delistinng connection pool\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\"}}\n{\"t\":{\"$date\":\"2021-08-06T11:34:22.860+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22572, \"ctx\":\"MirrorMaestro\",\"msg\":\"Dropping all pooled connections\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\",\"error\":\"ShutdownInProgress: Pool for mongo02.abc.com:27017 has expired.\"}}\n{\"t\":{\"$date\":\"2021-08-06T11:34:22.869+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22570, \"ctx\":\"MirrorMaestro\",\"msg\":\"Triggered refresh timeout\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\"}}\n{\"t\":{\"$date\":\"2021-08-06T11:34:22.869+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22579, \"ctx\":\"MirrorMaestro\",\"msg\":\"Pool is dead\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\"}}\n{\"t\":{\"$date\":\"2021-08-06T11:34:22.933+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22578, \"ctx\":\"ReplNetwork\",\"msg\":\"Updating pool controller\",\"attr\":{\"hostAndPort\":\"mongo03.abc.com:27017\",\"poolState\":\"{ requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\"}}\n{\"t\":{\"$date\":\"2021-08-06T11:34:22.933+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22575, \"ctx\":\"ReplNetwork\",\"msg\":\"Comparing connection state to controls\",\"attr\":{\"hostAndPort\":\"mongo03.abc.com:27017\",\"poolControls\":\"{ maxPending: 2, target: 1, }\"}}\n{\"t\":{\"$date\":\"2021-08-06T11:34:22.933+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22578, \"ctx\":\"ReplNetwork\",\"msg\":\"Updating pool controller\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\",\"poolState\":\"{ requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\"}}\n{\"t\":{\"$date\":\"2021-08-06T11:34:22.933+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22575, \"ctx\":\"ReplNetwork\",\"msg\":\"Comparing connection state to controls\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\",\"poolControls\":\"{ maxPending: 2, target: 1, }\"}}\n\n{\"t\":{\"$date\":\"2021-08-06T12:03:14.953+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22578, \"ctx\":\"ReplNetwork\",\"msg\":\"Updating pool controller\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\",\"poolState\":\"{ requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:14.953+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22575, \"ctx\":\"ReplNetwork\",\"msg\":\"Comparing connection state to controls\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\",\"poolControls\":\"{ maxPending: 2, target: 1, }\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:14.953+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22578, \"ctx\":\"ReplNetwork\",\"msg\":\"Updating pool controller\",\"attr\":{\"hostAndPort\":\"mongo03.abc.com:27017\",\"poolState\":\"{ requests: 0, ready: 1, pending: 0, active: 0, isExpired: false }\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:14.953+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22575, \"ctx\":\"ReplNetwork\",\"msg\":\"Comparing connection state to 
controls\",\"attr\":{\"hostAndPort\":\"mongo03.abc.com:27017\",\"poolControls\":\"{ maxPending: 2, target: 1, }\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.061+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22559, \"ctx\":\"ReplCoord-99839\",\"msg\":\"Using existing idle connection\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.061+00:00\"},\"s\":\"D2\", \"c\":\"ASIO\", \"id\":4646300, \"ctx\":\"ReplCoord-99839\",\"msg\":\"Sending request\",\"attr\":{\"requestId\":17203576,\"target\":\"mongo02.abc.com:27017\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.061+00:00\"},\"s\":\"D2\", \"c\":\"ASIO\", \"id\":4630601, \"ctx\":\"ReplCoord-99839\",\"msg\":\"Request acquired a connection\",\"attr\":{\"requestId\":17203576,\"target\":\"mongo02.abc.com:27017\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.061+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22578, \"ctx\":\"ReplNetwork\",\"msg\":\"Updating pool controller\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\",\"poolState\":\"{ requests: 0, ready: 0, pending: 0, active: 1, isExpired: false }\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.061+00:00\"},\"s\":\"D3\", \"c\":\"NETWORK\", \"id\":22925, \"ctx\":\"ReplCoord-99839\",\"msg\":\"Compressing message\",\"attr\":{\"compressor\":\"snappy\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.061+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22575, \"ctx\":\"ReplNetwork\",\"msg\":\"Comparing connection state to controls\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\",\"poolControls\":\"{ maxPending: 2, target: 1, }\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.061+00:00\"},\"s\":\"D3\", \"c\":\"EXECUTOR\", \"id\":23107, \"ctx\":\"ReplCoord-99839\",\"msg\":\"Not reaping this thread\",\"attr\":{\"nextThreadRetirementDate\":{\"$date\":\"2021-08-06T12:03:32.799Z\"}}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.063+00:00\"},\"s\":\"D3\", \"c\":\"REPL\", \"id\":21296, \"ctx\":\"WTJournalFlusher\",\"msg\":\"Setting oplog truncate after point\",\"attr\":{\"oplogTruncateAfterPoint\":{\"\":{\"$timestamp\":{\"t\":1628251392,\"i\":1}}}}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.063+00:00\"},\"s\":\"D3\", \"c\":\"STORAGE\", \"id\":22414, \"ctx\":\"WTJournalFlusher\",\"msg\":\"WT begin_transaction\",\"attr\":{\"snapshotId\":868767029,\"readSource\":\"kUnset\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.063+00:00\"},\"s\":\"D3\", \"c\":\"STORAGE\", \"id\":22413, \"ctx\":\"WTJournalFlusher\",\"msg\":\"WT rollback_transaction for snapshot id {getSnapshotId_toNumber}\",\"attr\":{\"getSnapshotId_toNumber\":868767035}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.063+00:00\"},\"s\":\"D3\", \"c\":\"STORAGE\", \"id\":22414, \"ctx\":\"WTJournalFlusher\",\"msg\":\"WT begin_transaction\",\"attr\":{\"snapshotId\":868767035,\"readSource\":\"kUnset\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.063+00:00\"},\"s\":\"D3\", \"c\":\"STORAGE\", \"id\":22413, \"ctx\":\"WTJournalFlusher\",\"msg\":\"WT rollback_transaction for snapshot id {getSnapshotId_toNumber}\",\"attr\":{\"getSnapshotId_toNumber\":868767036}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.063+00:00\"},\"s\":\"D4\", \"c\":\"STORAGE\", \"id\":22419, \"ctx\":\"WTJournalFlusher\",\"msg\":\"flushed journal\"}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.063+00:00\"},\"s\":\"D3\", \"c\":\"NETWORK\", \"id\":22927, \"ctx\":\"ReplNetwork\",\"msg\":\"Decompressing message\",\"attr\":{\"compressor\":\"snappy\"}}\n{\"t\":{\"$date\":\"2021-08-06T12:03:15.063+00:00\"},\"s\":\"D4\", \"c\":\"CONNPOOL\", \"id\":22569, 
\"ctx\":\"ReplNetwork\",\"msg\":\"Returning ready connection\",\"attr\":{\"hostAndPort\":\"mongo02.abc.com:27017\"}}\n",
"text": "Is everyone getting the error Dropping all pooled connections recurrently in your MongoDB replicaset configurations ?If so, can I ignore this error ?I am not sure about the root cause of this error. But, I have enabled log-level of 5 and found the below information. For some reason, the connection pool used by mongo01 VM to connect to mongo02 and mongo03 expire and this is causing the above error message.",
"username": "Manu"
},
{
"code": "{\"t\":{\"$date\":\"2021-09-22T14:31:27.530-04:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"MirrorMaestro\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"my_replicate_mongo_server:27017\"}}\n.... other log entries ...\n{\"t\":{\"$date\":\"2021-09-22T14:36:27.643-04:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22572, \"ctx\":\"MirrorMaestro\",\"msg\":\"Dropping all pooled connections\",\"attr\":{\"hostAndPort\":\"my_replicate_mongo_server:27017\",\"error\":\"ShutdownInProgress: Pool for my_replicate_mongo_server:27017 has expired.\"}}",
"text": "I’m seeing it too on a 3 node system. Version 4.4.6. Doesn’t seem to be causing any problems. Replication is fine.",
"username": "Benjamin_Slade"
},
{
"code": "",
"text": "I know its been a while since this was posted, but I am facing exact same issue. Getting connection pool expired error when a write operation is attempted on the primary node. Was anyone able to resolve this issue? @Benjamin_Slade @Manu",
"username": "Divisha_Gupta"
},
{
"code": "",
"text": "I’m facing the same issue, did you manage to fix it ?",
"username": "Ahmed_Asim"
}
] | MongoDB replicaset | version: 4.4 | "Dropping all pooled connections" to secondaries | 2021-08-03T12:15:49.856Z | MongoDB replicaset | version: 4.4 | “Dropping all pooled connections” to secondaries | 9,158 |
[
"aggregation"
] | [
{
"code": "",
"text": "Hello everyone,I’m looking for a solution to convert a UUID to a string using projection. I’ve tried many different ways, but none of them are working.\nimage1090×233 21.5 KB\nThe stranger thing is that Metabase displays the ID correctly, but I cannot manipulate this data because it’s not in string format.Have you any idea ?Thanks a lotBenjamin",
"username": "Benjamin_THOMAS"
},
{
"code": "",
"text": "View on metabase : \nimage370×506 17.2 KB\n",
"username": "Benjamin_THOMAS"
}
] | Convert UUID to string, in projection | 2023-07-31T13:22:43.413Z | Convert UUID to string, in projection | 610 |
|
null | [
"queries",
"indexes"
] | [
{
"code": ".createIndex({ a: 1, b: 1, c: 1 }).find({ a: 'test' }).explain()IXSCANkeysExamined.find({ a: 'test', c: 'test' }).explain()keysExaminedIXSCANkeysExamined",
"text": "Im a bit confused on the behavior of compound index using prefix\nlet say my compound index is .createIndex({ a: 1, b: 1, c: 1 })\nIf my query is .find({ a: 'test' }).explain() it will give me on the IXSCAN stage a keysExamined let say 100\nBut if my query is this .find({ a: 'test', c: 'test' }).explain() Im expecting to get the same number of keysExamined because my query is not a prefix of my index\nWhat I got on my 2nd query on IXSCAN stage was a lower number of keysExaminedCan you help me understand what was happened?",
"username": "Phillip_Causing"
},
{
"code": "keysExamineddb.test.insertMany([ {a:\"test\", b:\"test\", c:\"test\"}, {a:\"test\", b:\"test\", c:\"test\"}, {a:\"test\", b:\"test\", c:\"test\"}, {a:\"test\", b:\"test\", c:\"test\"}, {a:\"test\", b:\"test\", c:\"test\"}, {a:\"test\", b:\"test\", c:\"test\"}, {a:\"foo\", b:\"bar\", c:\"yo\"} ])db.test.createIndex({ a: 1, b: 1, c: 1 })db.test.find({ a: 'test' }).explain(\"executionStats\")\"keysExamined\" : 6db.test.find({ a: 'test', c: 'test' }).explain(\"executionStats\")\"keysExamined\" : 6",
"text": "Hey there,I’ve tried to reproduce the described behavior but I’m getting the same amount of keysExamined for both queries:db.test.insertMany([ {a:\"test\", b:\"test\", c:\"test\"}, {a:\"test\", b:\"test\", c:\"test\"}, {a:\"test\", b:\"test\", c:\"test\"}, {a:\"test\", b:\"test\", c:\"test\"}, {a:\"test\", b:\"test\", c:\"test\"}, {a:\"test\", b:\"test\", c:\"test\"}, {a:\"foo\", b:\"bar\", c:\"yo\"} ])db.test.createIndex({ a: 1, b: 1, c: 1 })db.test.find({ a: 'test' }).explain(\"executionStats\") ==> \"keysExamined\" : 6db.test.find({ a: 'test', c: 'test' }).explain(\"executionStats\") ==> \"keysExamined\" : 6Could you please share your code so that I can have a look?",
"username": "Carl_Champain"
},
{
"code": "db.getCollection(\"my-collection\").createIndex(\n {\n type: 1,\n 'details.category': 1,\n 'details.description': 1,\n 'details,overview': 1,\n 'details.reference': 1\n })\ndb.getCollection(\"my-collection\").find({\n type: \"core.module.safetyplan\",\n}).explain(\"executionStats\")\nexecutionStats{\n \"executionSuccess\": true,\n \"nReturned\": 302.0,\n \"executionTimeMillis\": 0.0,\n \"totalKeysExamined\": 302.0,\n \"totalDocsExamined\": 302.0,\n \"executionStages\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 302.0,\n \"executionTimeMillisEstimate\": 1.0,\n \"works\": 303.0,\n \"advanced\": 302.0,\n \"needTime\": 0.0,\n \"needYield\": 0.0,\n \"saveState\": 0.0,\n \"restoreState\": 0.0,\n \"isEOF\": 1.0,\n \"docsExamined\": 302.0,\n \"alreadyHasObj\": 0.0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 302.0,\n \"executionTimeMillisEstimate\": 1.0,\n \"works\": 303.0,\n \"advanced\": 302.0,\n \"needTime\": 0.0,\n \"needYield\": 0.0,\n \"saveState\": 0.0,\n \"restoreState\": 0.0,\n \"isEOF\": 1.0,\n \"keyPattern\": {\n \"type\": 1.0,\n \"details.category\": 1.0,\n \"details.description\": 1.0,\n \"details,overview\": 1.0,\n \"details.reference\": 1.0\n },\n \"indexName\": \"type_1_details.category_1_details.description_1_details,overview_1_details.reference_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"type\": [],\n \"details.category\": [],\n \"details.description\": [],\n \"details,overview\": [],\n \"details.reference\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2.0,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"type\": [\n \"[\\\"core.module.safetyplan\\\", \\\"core.module.safetyplan\\\"]\"\n ],\n \"details.category\": [\n \"[MinKey, MaxKey]\"\n ],\n \"details.description\": [\n \"[MinKey, MaxKey]\"\n ],\n \"details,overview\": [\n \"[MinKey, MaxKey]\"\n ],\n \"details.reference\": [\n \"[MinKey, MaxKey]\"\n ]\n },\n \"keysExamined\": 302.0,\n \"seeks\": 1.0,\n \"dupsTested\": 0.0,\n \"dupsDropped\": 0.0\n }\n }\n}\ndb.getCollection(\"my-collection\").find({\n type: \"core.module.safetyplan\",\n 'details.description': \"This is long description.\"\n}).explain(\"executionStats\")\nexecutionStats{\n \"executionSuccess\": true,\n \"nReturned\": 21.0,\n \"executionTimeMillis\": 0.0,\n \"totalKeysExamined\": 26.0,\n \"totalDocsExamined\": 21.0,\n \"executionStages\": {\n \"stage\": \"FETCH\",\n \"nReturned\": 21.0,\n \"executionTimeMillisEstimate\": 0.0,\n \"works\": 26.0,\n \"advanced\": 21.0,\n \"needTime\": 4.0,\n \"needYield\": 0.0,\n \"saveState\": 0.0,\n \"restoreState\": 0.0,\n \"isEOF\": 1.0,\n \"docsExamined\": 21.0,\n \"alreadyHasObj\": 0.0,\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 21.0,\n \"executionTimeMillisEstimate\": 0.0,\n \"works\": 26.0,\n \"advanced\": 21.0,\n \"needTime\": 4.0,\n \"needYield\": 0.0,\n \"saveState\": 0.0,\n \"restoreState\": 0.0,\n \"isEOF\": 1.0,\n \"keyPattern\": {\n \"type\": 1.0,\n \"details.category\": 1.0,\n \"details.description\": 1.0,\n \"details,overview\": 1.0,\n \"details.reference\": 1.0\n },\n \"indexName\": \"type_1_details.category_1_details.description_1_details,overview_1_details.reference_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"type\": [],\n \"details.category\": [],\n \"details.description\": [],\n \"details,overview\": [],\n \"details.reference\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2.0,\n \"direction\": \"forward\",\n \"indexBounds\": 
{\n \"type\": [\n \"[\\\"core.module.safetyplan\\\", \\\"core.module.safetyplan\\\"]\"\n ],\n \"details.category\": [\n \"[MinKey, MaxKey]\"\n ],\n \"details.description\": [\n \"[\\\"This is long description.\\\", \\\"This is long description.\\\"]\"\n ],\n \"details,overview\": [\n \"[MinKey, MaxKey]\"\n ],\n \"details.reference\": [\n \"[MinKey, MaxKey]\"\n ]\n },\n \"keysExamined\": 26.0,\n \"seeks\": 5.0,\n \"dupsTested\": 0.0,\n \"dupsDropped\": 0.0\n }\n }\n}\n",
"text": "Hello Carl\nThanks for looking into thisSo here is my indexAnd here is my first queryHere is the executionStatsAnd when I add another field on my queryAnd here is the executionStats of my second queryAs you can see the keysExamined decreased",
"username": "Phillip_Causing"
},
{
"code": "",
"text": "As you can see the keysExamined decreasedWhy do you think it is not normal?When you make a more restrictive query it is kind of normal than less key or the same number of keys are examined.In the first case, you start with 1 key for type:, then you recursively scan the keys with the same type prefix. So you scan all details.category that exists for the given type, and then all the details.descriptions for each details.category. In the second case you only have 1 details.description per details.category.",
"username": "steevej"
},
{
"code": "keysExamined",
"text": "Hello Steve\nCan you elaborate more?Because the keysExamined should be the same on both query because the second query in terms of prefix is same with the first queryAs you can see with the result from @Carl_Champain that result was the same I was expecting",
"username": "Phillip_Causing"
},
{
"code": "",
"text": "Do some documenta not have b set?/Edit, never mind I tested a few scenarios and could not get it behave the way you are…do you have a sample from your data that can demo this?I was thinking that if some document didnt have a field set on the second index match then it would be able to short-circuit within the lookup.",
"username": "John_Sewell"
},
{
"code": "db.test.insertMany( [\n {a:\"test\", b:\"test\", c:\"test\"},\n {a:\"test\", b:\"test\", c:\"test\"},\n {a:\"test\", b:\"test\", c:\"test\"},\n {a:\"test\", b:\"test\", c:\"test\"},\n {a:\"test\", b:\"test\", c:\"test\"},\n {a:\"test\", b:\"test\", c:\"zoo\"}, /* in this document c: is zoo rather than test */\n {a:\"foo\", b:\"bar\", c:\"yo\"}\n] )\ndb.test.createIndex( { a: 1, b: 1, c: 1 } )\ndb.test.find({ a: 'test'})db.test.find({ a: 'test', c: 'zoo' })",
"text": "Carl_Champlain example was an edge case where all values were the same and both queries were returning the same number of documents. So the second query was not more selective as the first.If you start with a slightly modified collection:Then1 - db.test.find({ a: 'test'}) we get 6 keysExamined.\n2 - db.test.find({ a: 'test', c: 'zoo' }) we get 2 keysExamined",
"username": "steevej"
},
{
"code": "",
"text": "Because the behavior that Im encountering contradicts to the index prefix concept in the mongodb manual\nDo you know any articles/documents that fully explain this behavior?",
"username": "Phillip_Causing"
},
{
"code": "",
"text": "Could you please provide more details about your observations thatcontradicts to the index prefix conceptI do not see any.",
"username": "steevej"
},
{
"code": "itemstocklocationstocklocationitemexecutionStatsexplainkeysExaminedFETCHIXSCANdetails.description",
"text": "Is says there thisif a query omits a particular index prefix, it is unable to make use of any index fields that follow that prefix.Since a query on item and stock omits the location index prefix, it cannot use the stock index field which follows location. Only the item field in the index can support this query. See Create Indexes to Support Your Queries for more information.What I was expecting on the executionStats on explain is same number of keysExamined and it should have FETCH stage after the IXSCAN where it will filter the the keys/documents using details.description",
"username": "Phillip_Causing"
},
{
"code": "db.test.find({ a: 'test', c: 'zoo' })",
"text": "On your example\nI still dont understand why only 2 keysExamined in this query\ndb.test.find({ a: 'test', c: 'zoo' })",
"username": "Phillip_Causing"
},
{
"code": "itemstockitemitemstockdb.test.find({ a: 'test', c: 'zoo' })",
"text": "I did not see any contradiction in the link you shared because I read:MongoDB can also use the index to support a query on the item and stock fields, since the item field corresponds to a prefix.I could also read:However, in this case the index would not be as efficient in supporting the query as it would be if the index were on only item and stock.I still dont understand why only 2 keysExamined in this query\ndb.test.find({ a: 'test', c: 'zoo' })Simply because it examines only 1 instance of the key a:test/b:test/c:test while with c:test it has to examined once for each document with that keys.Indexes are trees, when a node does not match branches do not need to be followed.",
"username": "steevej"
},
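To make the keysExamined difference above concrete, here is a small mongosh sketch (assuming the test collection and the { a: 1, b: 1, c: 1 } index created in the earlier post) that prints the executionStats side by side:

```javascript
// Sketch, assuming the `test` collection and { a: 1, b: 1, c: 1 } index from the example above.
// Prints nReturned and totalKeysExamined for both queries so they can be compared directly.
[{ a: "test" }, { a: "test", c: "zoo" }].forEach(filter => {
  const stats = db.test.find(filter).explain("executionStats").executionStats;
  printjson({ filter: filter, nReturned: stats.nReturned, keysExamined: stats.totalKeysExamined });
});
```

Both plans use the same index prefix, but the bounded c value lets the scan seek past whole branches instead of walking every key under a:'test', which is why the second keysExamined count is lower.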
{
"code": "",
"text": "Maybe I’ll go deeper on how index works to be able to fully understandThanks",
"username": "Phillip_Causing"
}
] | Getting confuse on the Compound Index prefixes | 2023-07-18T14:13:57.728Z | Getting confuse on the Compound Index prefixes | 831 |
null | [
"replication"
] | [
{
"code": "db.TestTTL.getIndexes()\t{\n\t\t\"v\" : 2,\n\t\t\"key\" : {\n\t\t\t\"date\" : 1\n\t\t},\n\t\t\"name\" : \"date_1\",\n\t\t\"expireAfterSeconds\" : 60\n\t}\n{\n\t\"_id\" : ObjectId(\"645b571f15f29c26c0eb0367\"),\n\t\"created\" : NumberLong(\"1683707679869\"),\n\t\"byUid\" : \"1513736125051203587\",\n\t\"toUid\" : \"f75f87d8-f59b-49fb-a2f0-7212434cf7a6\",\n\t\"date\" : ISODate(\"2023-06-18T00:00:00Z\")\n}\ndb.adminCommand({getParameter:1, ttlMonitorSleepSecs: 1});{\n\t\"ttlMonitorSleepSecs\" : 60,\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1690613147, 259),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1690613147, 259)\n}\ndb.adminCommand({getParameter:1, ttlMonitorEnabled:1});{\n\t\"ttlMonitorEnabled\" : true,\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1690613223, 316),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1690613223, 316)\n}\ndb.serverStatus().metrics.ttl{ \"deletedDocuments\" : NumberLong(75598788), \"passes\" : NumberLong(10) }\ndb.setLogLevel(1, \"index\");grep -i ttl /var/log/mongod/mongodb.log",
"text": "We are using MongoDB 4.4.3 as a replica set with PSSSA setup. Earlier without replica set, the TTL was working fine but now with the replica set it is not.Details:\nI created an index on the date, here’s how db.TestTTL.getIndexes() shows it:Sample document:Output of db.adminCommand({getParameter:1, ttlMonitorSleepSecs: 1});Output of db.adminCommand({getParameter:1, ttlMonitorEnabled:1});Output of db.serverStatus().metrics.ttl has remained same for the last 7 days:We increased the log verbosity for “index” but were unable to see any TTL logs. Using\ndb.setLogLevel(1, \"index\"); and grep -i ttl /var/log/mongod/mongodb.log giving no results.",
"username": "Mayank_Chaudhary"
},
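As a quick sanity check for this kind of issue, the following mongosh sketch (run against the current primary; it reuses the TestTTL collection and date field from the post, so adjust the names for your deployment) verifies the TTL monitor state and whether expired documents are piling up:

```javascript
// TTL deletions run only on the primary and then replicate, so check there first.
db.isMaster().ismaster;                                      // should be true on the primary
db.adminCommand({ getParameter: 1, ttlMonitorEnabled: 1 });  // should report true
db.serverStatus().metrics.ttl;                               // `passes` should keep increasing

// Documents already past their 60-second expiry but still present.
// TTL only deletes documents whose indexed field holds a real BSON Date (or array of Dates).
db.TestTTL.countDocuments({ date: { $lt: new Date(Date.now() - 60 * 1000) } });
```

If that count keeps growing while `passes` still increments, a common cause is the indexed field not being stored as a BSON Date on the affected documents.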
{
"code": "",
"text": "Is your cluster currently healthy and has a Primary?",
"username": "chris"
},
{
"code": "rs.status(){\n\t\"set\" : \"rs\",\n\t\"date\" : ISODate(\"2023-07-31T05:00:39.684Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(7),\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 3,\n\t\"writeMajorityCount\" : 3,\n\t\"votingMembersCount\" : 5,\n\t\"writableVotingMembersCount\" : 4,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1690779639, 423),\n\t\t\t\"t\" : NumberLong(7)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"2023-07-31T05:00:39.671Z\"),\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1690779639, 423),\n\t\t\t\"t\" : NumberLong(7)\n\t\t},\n\t\t\"readConcernMajorityWallTime\" : ISODate(\"2023-07-31T05:00:39.671Z\"),\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1690779639, 425),\n\t\t\t\"t\" : NumberLong(7)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1690779639, 410),\n\t\t\t\"t\" : NumberLong(7)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2023-07-31T05:00:39.674Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2023-07-31T05:00:39.608Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1690779603, 73),\n\t\"electionCandidateMetrics\" : {\n\t\t\"lastElectionReason\" : \"priorityTakeover\",\n\t\t\"lastElectionDate\" : ISODate(\"2023-07-22T05:14:35.184Z\"),\n\t\t\"electionTerm\" : NumberLong(7),\n\t\t\"lastCommittedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1690002875, 38),\n\t\t\t\"t\" : NumberLong(6)\n\t\t},\n\t\t\"lastSeenOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1690002875, 38),\n\t\t\t\"t\" : NumberLong(6)\n\t\t},\n\t\t\"numVotesNeeded\" : 3,\n\t\t\"priorityAtElection\" : 3,\n\t\t\"electionTimeoutMillis\" : NumberLong(10000),\n\t\t\"priorPrimaryMemberId\" : 0,\n\t\t\"targetCatchupOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1690002875, 522),\n\t\t\t\"t\" : NumberLong(6)\n\t\t},\n\t\t\"numCatchUpOps\" : NumberLong(135),\n\t\t\"newTermStartDate\" : ISODate(\"2023-07-22T05:14:35.753Z\"),\n\t\t\"wMajorityWriteAvailabilityDate\" : ISODate(\"2023-07-22T05:14:37.069Z\")\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"name\" : \"ip-172-31-12-111:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 172629,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1690779638, 432),\n\t\t\t\t\"t\" : NumberLong(7)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1690779638, 425),\n\t\t\t\t\"t\" : NumberLong(7)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-07-31T05:00:38Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-07-31T05:00:38Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-07-31T05:00:38.637Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-07-31T05:00:39.344Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"172.31.12.118:27017\",\n\t\t\t\"syncSourceId\" : 4,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 20,\n\t\t\t\"configTerm\" : 7\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"172.31.12.57:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 172159,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1690779638, 158),\n\t\t\t\t\"t\" : NumberLong(7)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1690779638, 151),\n\t\t\t\t\"t\" : NumberLong(7)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-07-31T05:00:38Z\"),\n\t\t\t\"optimeDurableDate\" : 
ISODate(\"2023-07-31T05:00:38Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-07-31T05:00:38.203Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-07-31T05:00:39.317Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"172.31.12.118:27017\",\n\t\t\t\"syncSourceId\" : 4,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 20,\n\t\t\t\"configTerm\" : 7\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 3,\n\t\t\t\"name\" : \"172.31.12.181:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 172263,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1690779638, 105),\n\t\t\t\t\"t\" : NumberLong(7)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1690779638, 105),\n\t\t\t\t\"t\" : NumberLong(7)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-07-31T05:00:38Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-07-31T05:00:38Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-07-31T05:00:38.181Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-07-31T05:00:38.149Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"172.31.12.118:27017\",\n\t\t\t\"syncSourceId\" : 4,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 20,\n\t\t\t\"configTerm\" : 7\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 4,\n\t\t\t\"name\" : \"172.31.12.118:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 777328,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1690779639, 425),\n\t\t\t\t\"t\" : NumberLong(7)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-07-31T05:00:39Z\"),\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1690002875, 518),\n\t\t\t\"electionDate\" : ISODate(\"2023-07-22T05:14:35Z\"),\n\t\t\t\"configVersion\" : 20,\n\t\t\t\"configTerm\" : 7,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 5,\n\t\t\t\"name\" : \"172.31.10.121:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 7,\n\t\t\t\"stateStr\" : \"ARBITER\",\n\t\t\t\"uptime\" : 171250,\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-07-31T05:00:39.522Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-07-31T05:00:39.318Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 20,\n\t\t\t\"configTerm\" : 7\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1690779639, 425),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1690779639, 425)\n}\n",
"text": "Yes. This is the output of rs.status()",
"username": "Mayank_Chaudhary"
},
{
"code": "db.serverStatus().metrics.ttl{ \"deletedDocuments\" : NumberLong(118720859), \"passes\" : NumberLong(11) }grep -i ttl /var/log/syslog",
"text": "I am also noticing the output of db.serverStatus().metrics.ttl changing now:\n{ \"deletedDocuments\" : NumberLong(118720859), \"passes\" : NumberLong(11) }But I don’t see any output in logs for: grep -i ttl /var/log/syslog",
"username": "Mayank_Chaudhary"
},
{
"code": "",
"text": "Hi @Mayank_Chaudhary,\nWhy are you searching this informations in syslog and not in mongod.log?Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "/etc/mongod.conf# mongod.conf\n \n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /data/mongodb\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: syslog\n logAppend: true\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n\n#operationProfiling:\n\nreplication:\n oplogSizeMB: 2048\n replSetName: \"rs\"\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n",
"text": "My bad, I should have mentioned it earlier. I am using syslog as destination for system logs because of easy log rotation. This is my /etc/mongod.conf",
"username": "Mayank_Chaudhary"
},
{
"code": "",
"text": "I recommend upgrading to the latest 4.4 before looking into this much further.4.4.3 is not recommended for production use due to WT-7995I don’t see anything specific in the release notes for TTL but all jiras closed for release would have to be reviewed.",
"username": "chris"
}
] | TTL Deletion not working | 2023-07-29T06:52:57.256Z | TTL Deletion not working | 506 |
null | [
"compass",
"mongodb-shell"
] | [
{
"code": "",
"text": "Hi all, unable to find mongosh in latest update, unable to open settings too, can anyone help me with it.",
"username": "Sarang_Patil"
},
{
"code": "",
"text": "Please update with:",
"username": "chris"
},
{
"code": "",
"text": "I am using Compass Version 1.39.0 (1.39.0) on Mac (M1 chip) Operating system - macos Monterey. I have been using compass since the last 1 year. previously the Mongosh terminal used to be at the bottom most place on the window but its blank in the latest update (attached screenshot)\n\nimage1920×220 19.7 KB\nI am not able to open the settings too, I tried with following way to open the compass settingsThanks for the quick reply, let me know if you need any other info to look into this issue.",
"username": "Sarang_Patil"
},
{
"code": "",
"text": "On linux here, so I cannot compare. But mine is certainly showing mongosh at the bottom.Done the usual ‘turn it off and on again’ trouble shooting?",
"username": "chris"
},
{
"code": "",
"text": "I also tried on Ubuntu, facing the same problem there, will download the 1.38 version and see if its working.",
"username": "Sarang_Patil"
},
{
"code": "",
"text": "Downloaded the 1.38 version on Ubuntu and now I can see the Mongosh terminal. i guess the problem is with the 1.39 version only\n\nimage1280×298 5.99 KB\n",
"username": "Sarang_Patil"
},
{
"code": "",
"text": "While it might get attention here on the forums I suggest pop over to jira.mongodb.com and log an issue.I suspect it could be specific to your installation as there is heavy use of Mac at MongoDB.",
"username": "chris"
}
] | Unable to find mongosh in latest update | 2023-07-30T09:55:52.421Z | Unable to find mongosh in latest update | 456 |
null | [
"queries",
"crud"
] | [
{
"code": "try {\n var List = db.getCollection('XXX').distinct('xxx');\n for (var i = 0; i < List.length; i++) {\n var isb = List[i];\n var idNumber = 10001;\n try {\n\n db.getCollection('xxx').find({xxx: isb}).sort({xxx:1}).forEach(function(doc){\n db.getCollection('xxx').updateOne({\n _id: doc._id\n }, {\n $set: {\n xxx: NumberInt(idNumber)\n }\n });\n idNumber++;\n printjson(`${xxx} : Success`)\n })\n \n } catch (e) {\n printjson(`${xxx} : Error Found`)\n printjson(\"ERR:: \" + e);\n }\n }\n\n} catch (e) {\n printjson(\"ERR:: \" + e);\n}\n",
"text": "Below query is taking lots of time to execute. Can someone help in optimising it.",
"username": "A_W"
},
{
"code": "",
"text": "Bulk update and batch up.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thanks!. Is there any example if you can share.(considering above requirement)",
"username": "A_W"
},
{
"code": "var batchSize = 1000;\nvar progressCounter = 0;\nvar bulkObject = db.getCollection('XXX').initializeUnorderedBulkOp();\nvar sourceData = db.getCollection('SourceColl').distinct('theField');\nvar loopCounter = 10001;\n\nsourceData.forEach(theItem =>{\n\tprint(`Processing: ${theItem}`)\n\tdb.getCollection.find({'theField':theItem}, {_id:1}).sort({sortField:1}).forEach(theDoc =>{\n\t\tprogressCounter++;\n\t\tbulk.find( { _id: theDoc._id } ).update( { $set: { counter: loopCounter++ } } );\n\t\tif(progressCounter % batchSize == 0){\n\t\t\tvar results = bulk.execute();\n\t\t\tbulkObject = db.getCollection('XXX').initializeUnorderedBulkOp();\t\t\n\t\t}\t\n\t})\n})\n\nif(progressCounter % batchSize != 0){\n\tvar results = bulk.execute();\n\tbulkObject = db.getCollection('XXX').initializeUnorderedBulkOp();\t\t\n}\n{_id:0, brand:'ford'},\n{_id:1, brand:'ford'},\n{_id:2, brand:'ford'},\n{_id:3, brand:'VW'},\n{_id:4, brand:'VW'},\n{_id:5, brand:'VW'},\n{_id:0, brand:'ford', newField:10001},\n{_id:1, brand:'ford', newField:10002},\n{_id:2, brand:'ford', newField:10003},\n{_id:3, brand:'VW', newField:10001},\n{_id:4, brand:'VW', newField:10002},\n{_id:5, brand:'VW', newField:10003},\n",
"text": "Create a bulk object and add updates to it, when it gets to a certain size call execute and then reset the bulk object and repeat.Rough pseudocode:From the looks of things you want to add an incrementing fields for each group, so if you had a collection of cars, with each car document having a brand, for each brand you want to add a field to each one that increments, i.e.Would get updated to:Is this right?To start with you could wrap the inner loop in the bulk operation block so do one (or many depending on the batch size) server call per group.I was actually trying to add an incrementing field to a query a while back and it did not seem trivial, but you may be able to use the windowFields operator to do the grouping and adding of the new field.How big is the collection and what’s the grouping look like in terms of number of groups and documents per group?",
"username": "John_Sewell"
},
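For reference, the same batching idea can also be expressed with db.collection.bulkWrite(), which current shells and drivers expose. A minimal sketch, keeping the placeholder names (XXX / xxx) from the original question:

```javascript
// Sketch of the same per-group, batched renumbering using bulkWrite (placeholder names from the thread).
db.getCollection('XXX').distinct('xxx').forEach(groupValue => {
  let ops = [];
  let seq = 10001;                                   // the sequence restarts for each group
  db.getCollection('XXX').find({ xxx: groupValue }, { _id: 1 }).sort({ xxx: 1 }).forEach(doc => {
    ops.push({ updateOne: { filter: { _id: doc._id }, update: { $set: { xxx: NumberInt(seq++) } } } });
    if (ops.length === 1000) {                       // flush in batches to limit round trips
      db.getCollection('XXX').bulkWrite(ops, { ordered: false });
      ops = [];
    }
  });
  if (ops.length) db.getCollection('XXX').bulkWrite(ops, { ordered: false });
});
```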
{
"code": "{_id:0, brand:'ford', seq:10001},\n{_id:1, brand:'ford',seq:10002},\n{_id:2, brand:'ford',seq:10002},\n{_id:3, brand:'VW',seq:10001},\n{_id:4, brand:'VW',seq:10001},\n{_id:5, brand:'VW',seq:10003}\n{_id:0, brand:'ford',seq:10001},\n{_id:1, brand:'ford',seq:10002},\n{_id:2, brand:'ford',seq:10003},\n{_id:3, brand:'VW',seq:10001},\n{_id:4, brand:'VW',seq:10002},\n{_id:5, brand:'VW',seq:10003}\n",
"text": "Thank you john!\nvar brand = [‘ford’,‘VW’];Here we want to update seq data for the list of brands where the incorrect sequece is stored. For this we want to sort the collection brandwise by seq first and then update the seq by increment number starting with 10001.Current DataExpected DataThe total collection document size for the list of records where update is required is not more than 25000 records.",
"username": "A_W"
},
{
"code": "",
"text": "Excellent, so something like I put above should work, note that I have no error handling in the above script, unlike your original so you will want to probably have something in for that and we capture the return value of the update query, you can collate these as you go to have a sanity check of matched and updated records.If you do a simple query on a test collection and then printjson the results you can see what the properties are to pull out and store.",
"username": "John_Sewell"
},
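Following up on the windowFields idea mentioned a couple of posts above: on MongoDB 5.0+ the whole renumbering can be done in a single aggregation instead of a client-side loop. A sketch, assuming the brand/seq field names from the example (try it on a copy of the data first):

```javascript
// Requires MongoDB 5.0+. Renumbers seq per brand, ordered by the existing seq, and writes it back.
db.getCollection('XXX').aggregate([
  { $setWindowFields: {
      partitionBy: "$brand",                         // one numbering sequence per brand
      sortBy: { seq: 1 },
      output: { newSeq: { $documentNumber: {} } }
  } },
  { $set: { seq: { $add: [10000, "$newSeq"] } } },   // produces 10001, 10002, ... per brand
  { $unset: "newSeq" },
  { $merge: { into: "XXX", on: "_id", whenMatched: "merge", whenNotMatched: "discard" } }
]);
```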
{
"code": "",
"text": "Thanks john! Appreciate your help!",
"username": "A_W"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How can i optimize update query | 2023-07-28T19:22:23.142Z | How can i optimize update query | 506 |
null | [
"java"
] | [
{
"code": "(com.mongodb.kafka.connect.source.MongoSourceTask:458)\n[2023-07-27 18:24:25,165] ERROR [mongodb-source-connector|task-0] WorkerSourceTask{id=mongodb-source-connector-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:212)\norg.apache.kafka.connect.errors.ConnectException: Unexpected error: Cannot invoke \"com.mongodb.client.MongoChangeStreamCursor.tryNext()\" because \"this.cursor\" is null\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.getNextBatch(StartedMongoSourceTask.java:597)\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.pollInternal(StartedMongoSourceTask.java:211)\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.poll(StartedMongoSourceTask.java:188)\n\tat com.mongodb.kafka.connect.source.MongoSourceTask.poll(MongoSourceTask.java:173)\n\tat org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.poll(AbstractWorkerSourceTask.java:462)\n\tat org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.execute(AbstractWorkerSourceTask.java:351)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)\n\tat org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:75)\n\tat org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:857)\nCaused by: java.lang.NullPointerException: Cannot invoke \"com.mongodb.client.MongoChangeStreamCursor.tryNext()\" because \"this.cursor\" is null\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.getNextBatch(StartedMongoSourceTask.java:579)\n\t... 14 more\nname=mongodb-source-connector\nconnector.class=com.mongodb.kafka.connect.MongoSourceConnector\n\ntasks.max=1\nconnection.uri=mongodb://localhost:27017\ndatabase=productDb\ncollection=products\n\nkey.converter=org.apache.kafka.connect.storage.StringConverter\nkey.field=_id\n\nvalue.converter=org.apache.kafka.connect.storage.StringConverter\nvalue.converter.schemas.enable=false\ntopic=output-topic\n\npoll.max.batch.size=1000\npoll.await.time.ms=500\n\ninitial.sync.source=true\n\ndb version v6.0.6\nBuild Info: {\n \"version\": \"6.0.6\",\n \"gitVersion\": \"26b4851a412cc8b9b4a18cdb6cd0f9f642e06aa7\",\n \"modules\": [],\n \"allocator\": \"system\",\n \"environment\": {\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n",
"text": "Hi,I am getting following error while running mongodb-kafka source connector:Here is my Source connector configuration:MongoDb Version:mongodb-kafka connector Jar: mongo-kafka-connect-1.10.1-confluent.jarkafka connector executing as: connect-standalone.properties",
"username": "link2anjan_N_A"
},
{
"code": "offset.partition.nameerrors.tolerance: all",
"text": "We have been hitting the same error with a DocumentDB connector.We tried changing the offset name by setting the offset.partition.name property, but this did not help. It would run if we set errors.tolerance: all; however, we did not want to leave this property enabled long-term, and the connector broke with the same error as soon as we removed it. Both of these suggestions were sourced from here: https://www.mongodb.com/docs/kafka-connector/current/troubleshooting/recover-from-invalid-resume-token/#invalid-resume-tokenWe’re hoping for a more elegant solution, but we did find that just redeploying the connector with an entirely new name worked (or at least has so far).",
"username": "Mike_Ray"
},
{
"code": "mongo.errors.tolerance",
"text": "Hi @link2anjan_N_A,Are there any log messages regarding the cursor / mongodb that occur before that error? I want to understand if the connector was mid shutdown or if it was during the general running.@Mike_Ray - there is the mongo.errors.tolerance setting for just the connector.I’ve added KAFKA-383 to track.Ross",
"username": "Ross_Lawley"
},
{
"code": "[2023-07-27 18:44:38,943] INFO [mongo-source|task-0] These configurations '[metrics.context.connect.kafka.cluster.id]' were supplied but are not used yet. (org.apache.kafka.clients.producer.ProducerConfig:378)\n[2023-07-27 18:44:38,944] INFO [mongo-source|task-0] Kafka version: 3.5.0 (org.apache.kafka.common.utils.AppInfoParser:119)\n[2023-07-27 18:44:38,944] INFO [mongo-source|task-0] Kafka commitId: unknown (org.apache.kafka.common.utils.AppInfoParser:120)\n[2023-07-27 18:44:38,945] INFO [mongo-source|task-0] Kafka startTimeMs: 1690463678944 (org.apache.kafka.common.utils.AppInfoParser:121)\n[2023-07-27 18:44:38,960] INFO [mongo-source|task-0] [Producer clientId=connector-producer-mongo-source-0] Cluster ID: tzhR2bbzT76vdhT3DONr9A (org.apache.kafka.clients.Metadata:287)\n[2023-07-27 18:44:38,961] INFO [mongo-source|task-0] Starting MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask:109)\n[2023-07-27 18:44:38,962] INFO Created connector mongo-source (org.apache.kafka.connect.cli.ConnectStandalone:76)\n[2023-07-27 18:44:38,998] INFO [mongo-source|task-0] MongoClient with metadata {\"driver\": {\"name\": \"mongo-java-driver|sync|mongo-kafka|source\", \"version\": \"4.7.2|1.10.1\"}, \"os\": {\"type\": \"Darwin\", \"name\": \"Mac OS X\", \"architecture\": \"x86_64\", \"version\": \"12.6.7\"}, \"platform\": \"Java/IBM Corporation/17.0.8+5\"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, streamFactoryFactory=null, commandListeners=[com.mongodb.kafka.connect.source.MongoSourceTask$1@54efa557], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@d4e9fa70]}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='30000 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, contextProvider=null} (org.mongodb.driver.client:71)\n[2023-07-27 18:44:39,004] INFO [mongo-source|task-0] Opened connection [connectionId{localValue:3, serverValue:184}] to localhost:27017 (org.mongodb.driver.connection:71)\n[2023-07-27 18:44:39,005] INFO [mongo-source|task-0] Monitor thread successfully connected to server with description 
ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=7190556} (org.mongodb.driver.cluster:71)\n[2023-07-27 18:44:39,006] INFO [mongo-source|task-0] Watching for collection changes on 'productDb.products' (com.mongodb.kafka.connect.source.MongoSourceTask:637)\n[2023-07-27 18:44:39,005] INFO [mongo-source|task-0] Opened connection [connectionId{localValue:4, serverValue:185}] to localhost:27017 (org.mongodb.driver.connection:71)\n[2023-07-27 18:44:39,061] INFO [mongo-source|task-0] New change stream cursor created without offset. (com.mongodb.kafka.connect.source.MongoSourceTask:417)\n[2023-07-27 18:44:39,093] INFO [mongo-source|task-0] Opened connection [connectionId{localValue:5, serverValue:186}] to localhost:27017 (org.mongodb.driver.connection:71)\n[2023-07-27 18:44:39,114] WARN [mongo-source|task-0] Failed to resume change stream: The $changeStream stage is only supported on replica sets 40573\n\n=====================================================================================\nIf the resume token is no longer available then there is the potential for data loss.\nSaved resume tokens are managed by Kafka and stored with the offset data.\n\nTo restart the change stream with no resume token either: \n * Create a new partition name using the `offset.partition.name` configuration.\n * Set `errors.tolerance=all` and ignore the erroring resume token. \n * Manually remove the old offset from its configured storage.\n\nResetting the offset will allow for the connector to be resume from the latest resume\ntoken. Using `startup.mode = copy_existing` ensures that all data will be outputted by the\nconnector but it will duplicate existing data.\n=====================================================================================\n (com.mongodb.kafka.connect.source.MongoSourceTask:458)\n[2023-07-27 18:44:39,116] INFO [mongo-source|task-0] Started MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask:163)\n[2023-07-27 18:44:39,117] INFO [mongo-source|task-0] WorkerSourceTask{id=mongo-source-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask:275)\n[2023-07-27 18:44:39,122] INFO [mongo-source|task-0] Watching for collection changes on 'productDb.products' (com.mongodb.kafka.connect.source.MongoSourceTask:637)\n[2023-07-27 18:44:39,127] INFO [mongo-source|task-0] New change stream cursor created without offset. (com.mongodb.kafka.connect.source.MongoSourceTask:417)\n[2023-07-27 18:44:39,132] WARN [mongo-source|task-0] Failed to resume change stream: The $changeStream stage is only supported on replica sets 40573\n\n=====================================================================================\nIf the resume token is no longer available then there is the potential for data loss.\nSaved resume tokens are managed by Kafka and stored with the offset data.\n\nTo restart the change stream with no resume token either: \n * Create a new partition name using the `offset.partition.name` configuration.\n * Set `errors.tolerance=all` and ignore the erroring resume token. \n * Manually remove the old offset from its configured storage.\n\nResetting the offset will allow for the connector to be resume from the latest resume\ntoken. 
Using `startup.mode = copy_existing` ensures that all data will be outputted by the\nconnector but it will duplicate existing data.\n=====================================================================================\n (com.mongodb.kafka.connect.source.MongoSourceTask:458)\n[2023-07-27 18:44:39,135] ERROR [mongo-source|task-0] WorkerSourceTask{id=mongo-source-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:212)\norg.apache.kafka.connect.errors.ConnectException: Unexpected error: Cannot invoke \"com.mongodb.client.MongoChangeStreamCursor.tryNext()\" because \"this.cursor\" is null\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.getNextBatch(StartedMongoSourceTask.java:597)\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.pollInternal(StartedMongoSourceTask.java:211)\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.poll(StartedMongoSourceTask.java:188)\n\tat com.mongodb.kafka.connect.source.MongoSourceTask.poll(MongoSourceTask.java:173)\n\tat org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.poll(AbstractWorkerSourceTask.java:462)\n\tat org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.execute(AbstractWorkerSourceTask.java:351)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)\n\tat org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:75)\n\tat org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:857)\nCaused by: java.lang.NullPointerException: Cannot invoke \"com.mongodb.client.MongoChangeStreamCursor.tryNext()\" because \"this.cursor\" is null\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.getNextBatch(StartedMongoSourceTask.java:579)\n\t... 14 more\n[2023-07-27 18:44:39,139] INFO [mongo-source|task-0] Stopping MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask:178)\n[2023-07-27 18:44:39,139] INFO [mongo-source|task-0] Stopping MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask:337)\n[2023-07-27 18:44:39,151] INFO [mongo-source|task-0] [Producer clientId=connector-producer-mongo-source-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1310)\n[2023-07-27 18:44:39,159] INFO [mongo-source|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:693)\n\n",
"text": "errors.tolerance: allPlease find details:",
"username": "link2anjan_N_A"
},
{
"code": "[2023-07-28 15:36:08,635] INFO [mongodb-source-connector|task-0] These configurations '[metrics.context.connect.kafka.cluster.id]' were supplied but are not used yet. (org.apache.kafka.clients.producer.ProducerConfig:378)\n[2023-07-28 15:36:08,636] INFO [mongodb-source-connector|task-0] Kafka version: 3.5.0 (org.apache.kafka.common.utils.AppInfoParser:119)\n[2023-07-28 15:36:08,636] INFO [mongodb-source-connector|task-0] Kafka commitId: unknown (org.apache.kafka.common.utils.AppInfoParser:120)\n[2023-07-28 15:36:08,636] INFO [mongodb-source-connector|task-0] Kafka startTimeMs: 1690538768635 (org.apache.kafka.common.utils.AppInfoParser:121)\n[2023-07-28 15:36:08,656] INFO [mongodb-source-connector|task-0] [Producer clientId=connector-producer-mongodb-source-connector-0] Cluster ID: tzhR2bbzT76vdhT3DONr9A (org.apache.kafka.clients.Metadata:287)\n[2023-07-28 15:36:08,657] INFO [mongodb-source-connector|task-0] Starting MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask:109)\n[2023-07-28 15:36:08,664] INFO Created connector mongodb-source-connector (org.apache.kafka.connect.cli.ConnectStandalone:76)\n[2023-07-28 15:36:08,702] INFO [mongodb-source-connector|task-0] MongoClient with metadata {\"driver\": {\"name\": \"mongo-java-driver|sync|mongo-kafka|source\", \"version\": \"4.7.2|1.10.1\"}, \"os\": {\"type\": \"Darwin\", \"name\": \"Mac OS X\", \"architecture\": \"x86_64\", \"version\": \"12.6.7\"}, \"platform\": \"Java/IBM Corporation/17.0.8+5\"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, streamFactoryFactory=null, commandListeners=[com.mongodb.kafka.connect.source.MongoSourceTask$1@8d88f514], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@e1134d6c]}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='30000 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=null, autoEncryptionSettings=null, contextProvider=null} (org.mongodb.driver.client:71)\n[2023-07-28 15:36:08,707] INFO [mongodb-source-connector|task-0] Opened connection [connectionId{localValue:4, serverValue:193}] to localhost:27017 
(org.mongodb.driver.connection:71)\n[2023-07-28 15:36:08,707] INFO [mongodb-source-connector|task-0] Opened connection [connectionId{localValue:3, serverValue:194}] to localhost:27017 (org.mongodb.driver.connection:71)\n[2023-07-28 15:36:08,708] INFO [mongodb-source-connector|task-0] Watching for collection changes on 'productDb.products' (com.mongodb.kafka.connect.source.MongoSourceTask:637)\n[2023-07-28 15:36:08,708] INFO [mongodb-source-connector|task-0] Monitor thread successfully connected to server with description ServerDescription{address=localhost:27017, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=5418304} (org.mongodb.driver.cluster:71)\n[2023-07-28 15:36:08,762] INFO [mongodb-source-connector|task-0] New change stream cursor created without offset. (com.mongodb.kafka.connect.source.MongoSourceTask:417)\n[2023-07-28 15:36:08,793] INFO [mongodb-source-connector|task-0] Opened connection [connectionId{localValue:5, serverValue:195}] to localhost:27017 (org.mongodb.driver.connection:71)\n[2023-07-28 15:36:08,828] WARN [mongodb-source-connector|task-0] Failed to resume change stream: The $changeStream stage is only supported on replica sets 40573\n\n=====================================================================================\nIf the resume token is no longer available then there is the potential for data loss.\nSaved resume tokens are managed by Kafka and stored with the offset data.\n\nTo restart the change stream with no resume token either: \n * Create a new partition name using the `offset.partition.name` configuration.\n * Set `errors.tolerance=all` and ignore the erroring resume token. \n * Manually remove the old offset from its configured storage.\n\nResetting the offset will allow for the connector to be resume from the latest resume\ntoken. Using `startup.mode = copy_existing` ensures that all data will be outputted by the\nconnector but it will duplicate existing data.\n=====================================================================================\n (com.mongodb.kafka.connect.source.MongoSourceTask:458)\n[2023-07-28 15:36:08,830] INFO [mongodb-source-connector|task-0] Started MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask:163)\n[2023-07-28 15:36:08,830] INFO [mongodb-source-connector|task-0] WorkerSourceTask{id=mongodb-source-connector-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.AbstractWorkerSourceTask:275)\n[2023-07-28 15:36:08,834] INFO [mongodb-source-connector|task-0] Watching for collection changes on 'productDb.products' (com.mongodb.kafka.connect.source.MongoSourceTask:637)\n[2023-07-28 15:36:08,838] INFO [mongodb-source-connector|task-0] New change stream cursor created without offset. (com.mongodb.kafka.connect.source.MongoSourceTask:417)\n[2023-07-28 15:36:08,842] WARN [mongodb-source-connector|task-0] Failed to resume change stream: The $changeStream stage is only supported on replica sets 40573\n\n=====================================================================================\nIf the resume token is no longer available then there is the potential for data loss.\nSaved resume tokens are managed by Kafka and stored with the offset data.\n\nTo restart the change stream with no resume token either: \n * Create a new partition name using the `offset.partition.name` configuration.\n * Set `errors.tolerance=all` and ignore the erroring resume token. 
\n * Manually remove the old offset from its configured storage.\n\nResetting the offset will allow for the connector to be resume from the latest resume\ntoken. Using `startup.mode = copy_existing` ensures that all data will be outputted by the\nconnector but it will duplicate existing data.\n=====================================================================================\n (com.mongodb.kafka.connect.source.MongoSourceTask:458)\n[2023-07-28 15:36:08,846] ERROR [mongodb-source-connector|task-0] WorkerSourceTask{id=mongodb-source-connector-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:212)\norg.apache.kafka.connect.errors.ConnectException: Unexpected error: Cannot invoke \"com.mongodb.client.MongoChangeStreamCursor.tryNext()\" because \"this.cursor\" is null\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.getNextBatch(StartedMongoSourceTask.java:597)\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.pollInternal(StartedMongoSourceTask.java:211)\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.poll(StartedMongoSourceTask.java:188)\n\tat com.mongodb.kafka.connect.source.MongoSourceTask.poll(MongoSourceTask.java:173)\n\tat org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.poll(AbstractWorkerSourceTask.java:462)\n\tat org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.execute(AbstractWorkerSourceTask.java:351)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)\n\tat org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:75)\n\tat org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:857)\nCaused by: java.lang.NullPointerException: Cannot invoke \"com.mongodb.client.MongoChangeStreamCursor.tryNext()\" because \"this.cursor\" is null\n\tat com.mongodb.kafka.connect.source.StartedMongoSourceTask.getNextBatch(StartedMongoSourceTask.java:579)\n\t... 14 more\n[2023-07-28 15:36:08,849] INFO [mongodb-source-connector|task-0] Stopping MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask:178)\n[2023-07-28 15:36:08,849] INFO [mongodb-source-connector|task-0] Stopping MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask:337)\n[2023-07-28 15:36:08,860] INFO [mongodb-source-connector|task-0] [Producer clientId=connector-producer-mongodb-source-connector-0] Closing the Kafka producer with timeoutMillis = 30000 ms. (org.apache.kafka.clients.producer.KafkaProducer:1310)\n[2023-07-28 15:36:08,864] INFO [mongodb-source-connector|task-0] Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics:693)\n[2023-07-28 15:36:08,865] INFO [mongodb-source-conn\n",
"text": "please find log details",
"username": "link2anjan_N_A"
},
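One detail worth highlighting in the logs above is the warning "Failed to resume change stream: The $changeStream stage is only supported on replica sets": the connection string points at a standalone mongod, and the source connector relies on change streams, which need a replica set or sharded cluster. A quick check from mongosh (assuming the productDb.products namespace from the config):

```javascript
// On a standalone mongod this is undefined; on a replica set it returns the set name.
db.runCommand({ hello: 1 }).setName;

// Opening a change stream directly reproduces the connector's underlying problem:
// on a standalone it errors with "The $changeStream stage is only supported on replica sets".
const cs = db.getSiblingDB("productDb").products.watch();
cs.tryNext();
cs.close();
```

For local development, a single-node replica set (start mongod with --replSet and run rs.initiate() once) is usually enough to make change streams available.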
{
"code": "mongo.errors.tolerancemongo.errors.tolerance",
"text": "there is the mongo.errors.tolerance setting for just the connector.This section (https://www.mongodb.com/docs/kafka-connector/current/sink-connector/fundamentals/error-handling-strategies/#handle-errors-at-the-connector-level) makes it seem like it would only apply to MongoDB errors; however, the related information page (https://www.mongodb.com/docs/kafka-connector/current/sink-connector/configuration-properties/error-handling/#std-label-sink-configuration-error-handling) makes it sound like it’s just an override property, and would apply to all errors.We did not want all errors to be silently ignored, which is why we did not leave that setting on. Can you confirm that mongo.errors.tolerance only applies to MongoDB-related errors?If not, we will wait on the results of that ticket you opened – thanks for doing so.",
"username": "Mike_Ray"
},
{
"code": "mongo.errors.tolerance",
"text": "Hi,Thanks for the logs - looks like the cursor couldnt be restarted as the resume token is no longer there.It shouldn’t NPE though and that will have to be fixed.mongo.errors.tolerance relates to errors coming from the MongoDB connector and does not impact Kafka connect error tolerance handling.Ross",
"username": "Ross_Lawley"
}
] | MongoDd Kafka connect error: org.apache.kafka.connect.errors.ConnectException: Unexpected error: Cannot invoke "com.mongodb.client.MongoChangeStreamCursor.tryNext()" because "this.cursor" is null | 2023-07-27T13:19:15.990Z | MongoDd Kafka connect error: org.apache.kafka.connect.errors.ConnectException: Unexpected error: Cannot invoke “com.mongodb.client.MongoChangeStreamCursor.tryNext()” because “this.cursor” is null | 1,026 |
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "mongodump 100.6.0 has CVE-2022-32149.\nin which version it was fixed ?",
"username": "Sri_Sai_Ram_Akam"
},
{
"code": "",
"text": "also, the mongorestore has the same vulnerability. can you pleas share the version in which the issue was not present",
"username": "Sri_Sai_Ram_Akam"
},
{
"code": "",
"text": "Hello @Sri_Sai_Ram_Akam ,Welcome back to The MongoDB Community Forums! Please upgrade your database tools to latest version 100.7.3 via Download Database Tools.The new version includes bug fixes as well as improvements, let me know if you face any issues.\nI will be happy to help you.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi @Tarun_Gaur, Thank you for your reply. But I want to know in which database tools version the go version was updated? and the vulnerability is resolved?\nThat would be more clear and appropriate to which version we can update.",
"username": "Sri_Sai_Ram_Akam"
},
{
"code": "",
"text": "Hello @Sri_Sai_Ram_Akam ,As per my understanding, CVE-2022-32149 does not affect the Database tools.In case you face any issues or have any queries, kindly feel free to post a new thread, will be happy to help you! Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi @Tarun_Gaur\nMy concern is the Database tools 100.6.0 is built on GO version 1.17.10\nwhich has the vulnerability CVE-2022-32149.\nI think that should also affect the mongodump and mongorestore which we are using .\nKindly help with this.mongodump --version\nmongodump version: 100.6.0\ngit version: 1d46e6e7021f2f5668763dba624e34bb39208cb0\nGo version: go1.17.10\nos: windows\narch: amd64\ncompiler: gc",
"username": "Sri_Sai_Ram_Akam"
},
{
"code": "",
"text": "Hello @Sri_Sai_Ram_Akam ,I got a confirmation from the team that CVE-2022-32149 does not affect the database tools.Also, you can download our latest release from our Download Center that is Database Tools version 100.7.4 which is built upon Go version 1.19. Please refer Database Tools Changelog for more information.Tarun",
"username": "Tarun_Gaur"
}
] | Mongodump 100.6.0 has CVE-2022-32149 | 2023-07-13T10:25:54.909Z | Mongodump 100.6.0 has CVE-2022-32149 | 824 |
null | [
"queries"
] | [
{
"code": "",
"text": "Having this error for a several hours. My app stopped working due to inabilty to connect to db. Live support will be able to help only tomorrow, but I need to recover my app now. How to solve it? Thanks in advance.",
"username": "Alex_N_A9"
},
{
"code": "",
"text": "Hi @Alex_N_A9,You should be able to contact the Atlas in-app chat support now. From the same linked page, there are instructions to activate / subscribe to a support plan. Currently as of the time of this message, the cloud services support policy states the following hours for the Atlas Developer support plan (and higher):Hours: 24 x 7 for Severity Levels 1 and 2Regards,\nJason",
"username": "Jason_Tran"
}
] | PROBLEM - An error occurred while querying your MongoDB deployment. Please try again in a few minutes | 2023-07-29T16:30:08.406Z | PROBLEM - An error occurred while querying your MongoDB deployment. Please try again in a few minutes | 426 |
null | [
"compass",
"atlas-cluster"
] | [
{
"code": "",
"text": "My IntelliJ fails to access http://dev-as1-pl-1.uinbl.mongodb.net/, but using MongoDB Compass, I can access it. Any thoughts to help me out. I",
"username": "Michael_von_Ruecker"
},
{
"code": "",
"text": "Hi @Michael_von_Ruecker,My IntelliJ fails to access http://dev-as1-pl-1.uinbl.mongodb.net/,I’m not too familiar with IntelliJ but are you following any particular documentation on how to connect? For example : MongoDB | IntelliJ IDEA Documentationhttp://dev-as1-pl-1.uinbl.mongodb.net/,It also appears the format appears the format you’ve specified doesn’t match the Connect String formats:You can specify the MongoDB connection string using either:Additionally, just to confirm, are you attempting to an Atlas cluster?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I’m unsure about the Atlas cluster:atlas-b8iu73-shard-0I can access it using MDB Compas but not S3T. I tried to add my IntelliJ log file, but I was blocked in doing so.I am using the DNS Seed List connection Format: ST3 fails,",
"username": "Michael_von_Ruecker"
},
{
"code": "",
"text": "I cannot send the S3T Details",
"username": "Michael_von_Ruecker"
},
{
"code": "",
"text": "\nimage1466×902 230 KB\n",
"username": "Michael_von_Ruecker"
},
{
"code": "1.11 or earliernslookup -type=txt dev-as1-pl-1.uinbl.mongodb.net\nServer:\t\t127.0.0.1\nAddress:\t127.0.0.1#53\n\nNon-authoritative answer:\ndev-as1-pl-1.uinbl.mongodb.net\ttext = \"authSource=admin&replicaSet=atlas-b8iu73-shard-0\"\n",
"text": "The error itself and the message from Studio 3T indicates a DNS resolution failure - In particular it looks like a TXT lookup is failing. I’m not too familiar with Studio 3T itself but is it possible you can use the standard connection string format to see if you can connect? If it connects, then its most likely a DNS issue (although it’s strange since you’re able to connect using Compass from the same machine I assume) although again, I am not too familiar with Studio 3T and it’s inner workings.I believe you can get the standard connection string format from the Atlas cluster connect modal. The only thing you’ll need to select is Compass and then version 1.11 or earlier when obtaining the connection string.From my own environment it seems to resolve as per normal:Regards,\nJason",
"username": "Jason_Tran"
}
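Since the symptoms point at the SRV/TXT lookups behind the mongodb+srv format, a small Node.js sketch (using the hostname from this thread) can show whether the local resolver returns those records at all:

```javascript
// Minimal DNS check for the records a mongodb+srv connection string depends on
// (hostname taken from the post above).
const dns = require("dns").promises;
const host = "dev-as1-pl-1.uinbl.mongodb.net";

dns.resolveTxt(host)
  .then(txt => console.log("TXT:", txt))
  .catch(err => console.error("TXT lookup failed:", err.code));

dns.resolveSrv(`_mongodb._tcp.${host}`)
  .then(srv => console.log("SRV:", srv))
  .catch(err => console.error("SRV lookup failed:", err.code));
```

If these lookups fail while Compass still connects, the difference most likely comes down to which DNS resolver each application ends up using, which fits the suggestion above to try the standard (non-SRV) connection string.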
] | Network Access of Mongo via IntelliJ | 2023-07-19T15:49:57.513Z | Network Access of Mongo via IntelliJ | 609 |
null | [
"queries",
"crud"
] | [
{
"code": "const query = {\n userId: context.user.id\n }\n\n const userProfile = await usersCollection.findOne(query, {\n payment: 1\n })\n const payment = userProfile.payment || {}\n const activePaymentGateways = payment.activePaymentGateways\n\n // get active gateways\n const activeGateways = payment.gateways.filter(gateway => activePaymentGateways.includes(gateway.name))\n\n // get gateway user wants to delete card from\n const pg = activeGateways.find(gateway => gateway.name === gatewayName)\n const defaultCardId = pg.defaultCardId\n\n let update = {}\n if (paymentCardId === defaultCardId) {\n // delete default card\n update = {\n $pull: {\n \"payment.gateways.$[gateway].cards\": {\n id: paymentCardId\n }\n },\n $set: {\n \"payment.gateways.$[gateway].defaultCardId\": \"\"\n }\n }\n } else {\n // delete non-default card\n update = {\n $pull: {\n \"payment.gateways.$[gateway].cards\": {\n id: paymentCardId\n }\n }\n }\n }\n\n const options = {\n returnDocument: \"after\",\n returnNewDocument: true,\n projection: {\n \"payment.gateways\": 1\n },\n arrayFilters: [{ \"gateway.name\": gatewayName }]\n }\n const result = await usersCollection.findOneAndUpdate(\n query,\n update,\n options\n )\n let response\n ...\n",
"text": "I am using the following code snippet below to delete an array item that is within an object which is itself an array item in a document.When run in atlas, I get “FunctionError: No array filter found for identifier ‘gateway’ in path ‘payment.gateways.$[gateway].cards’” but can’t replicate this error in my local setup where I use jest-mongodb (which uses mongodb memory server) for testing.\nDo you have any idea on what I’m doing wrong?\nThanks",
"username": "Adedayo_Ayeni"
},
{
"code": "",
"text": "It turns out that findOne works with the array filter, but not findOneAndUpdate. The documentation shows that both methods support the array filter syntax. Any idea why this is so?",
"username": "Adedayo_Ayeni"
}
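For anyone hitting the same error, one possible workaround sketch (not verified against App Services, and reusing the query, update and gatewayName variables from the snippet above) is to split the call into an updateOne with the arrayFilters followed by a separate read:

```javascript
// Workaround sketch, assuming the same `query`, `update` and `gatewayName` values as above.
const updateResult = await usersCollection.updateOne(query, update, {
  arrayFilters: [{ "gateway.name": gatewayName }]
});

// Read the document back separately instead of relying on findOneAndUpdate's return value.
const updatedDoc = await usersCollection.findOne(query, { "payment.gateways": 1 });
```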
] | Differences between mongodb atlas and mongodb community | 2023-07-30T11:47:43.402Z | Differences between mongodb atlas and mongodb community | 397 |
null | [
"node-js",
"mongoose-odm",
"compass",
"atlas-cluster"
] | [
{
"code": "error - MongooseServerSelectionError: connect ETIMEDOUT 54.227.xxx.xxx:27017\n at NativeConnection.Connection.openUri (/Users/adamjackson/Documents/birdinghotspots/node_modules/mongoose/lib/connection.js:825:32)\n at /Users/adamjackson/Documents/birdinghotspots/node_modules/mongoose/lib/index.js:414:10\n at /Users/adamjackson/Documents/birdinghotspots/node_modules/mongoose/lib/helpers/promiseOrCallback.js:41:5\n at new Promise (<anonymous>)\n at promiseOrCallback (/Users/adamjackson/Documents/birdinghotspots/node_modules/mongoose/lib/helpers/promiseOrCallback.js:40:10)\n at Mongoose._promiseOrCallback (/Users/adamjackson/Documents/birdinghotspots/node_modules/mongoose/lib/index.js:1288:10)\n at Mongoose.connect (/Users/adamjackson/Documents/birdinghotspots/node_modules/mongoose/lib/index.js:413:20)\n at connect (webpack-internal:///./lib/mongo.ts:87:78)\n at getStaticProps (webpack-internal:///./pages/index.tsx:498:65)\n at Object.renderToHTML (/Users/adamjackson/Documents/birdinghotspots/node_modules/next/dist/server/render.js:386:26)\n at async doRender (/Users/adamjackson/Documents/birdinghotspots/node_modules/next/dist/server/base-server.js:809:34)\n at async cacheEntry1.responseCache.get.incrementalCache.incrementalCache (/Users/adamjackson/Documents/birdinghotspots/node_modules/next/dist/server/base-server.js:926:28)\n at async /Users/adamjackson/Documents/birdinghotspots/node_modules/next/dist/server/response-cache/index.js:83:36 {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'ac-s1ovtzi-shard-00-01.m629ebf.mongodb.net:27017' => [ServerDescription],\n 'ac-s1ovtzi-shard-00-02.m629ebf.mongodb.net:27017' => [ServerDescription],\n 'ac-s1ovtzi-shard-00-00.m629ebf.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-bexyua-shard-0',\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: 30\n },\n code: undefined,\n page: '/'\n}\nconnect ETIMEDOUT 34.201.190.214:27017",
"text": "Is Atlas having known issues today? All afternoon my site has been going down for 5-10 minutes at a time to a point where things are significantly disrupted. When it’s down, I also cannot connect from my dev environment on localhost or from the Compass app. The errors I’m getting are:And on Compass it says: connect ETIMEDOUT 34.201.190.214:27017I’m on AWS with us-east-1. Strangely, when checking some the projects from my work that also use free atlas clusters, they’re not having issues, so I’m a bit confused why mine is having issues. Any suggestions on how to break out of those horrible cycle of going down every would be great. It’s down about half the time this afternoon.",
"username": "Adam_Jackson"
},
{
"code": "",
"text": "Hi @Adam_Jackson,Is Atlas having known issues today?I’m on AWS with us-east-1. Strangely, when checking some the projects from my work that also use free atlas clusters, they’re not having issues, so I’m a bit confused why mine is having issues.I cannot see anything in the cloud status page that indicates an Atlas issue at this stage. However, it may be more cluster specific in which case I would recommend you contact the Atlas in-app chat support and provide them with the link to the cluster that is linked to your application that is generating the error logs provided.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks Jason,I tried the in app chat support but never got a response, probably because I’m on the M0 free cluster (I take that back, I did now get a response suggesting it was related to elections, but I doubt elections last for 5-10 minutes?). I’ve tried switching to a new cluster in us-west-2 and so far so good. Hopefully that solves the issues. Would be nice to know what was causing the problems though.",
"username": "Adam_Jackson"
},
{
"code": "us-west-2",
"text": "I tried the in app chat support but never got a response, probably because I’m on the M0 free cluster (I take that back, I did now get a response suggesting it was related to elections, but I doubt elections last for 5-10 minutes?). I’ve tried switching to a new cluster in us-west-2 and so far so good. Hopefully that solves the issues. Would be nice to know what was causing the problems though.Glad to hear you got a response. I’d recommend continuing to work with the Atlas in-app chat support team on the errors. If you haven’t already done so, please advise the details regarding switching to the new cluster in us-west-2 to the support team.Best Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Is Atlas having issues today? | 2023-07-31T00:18:55.297Z | Is Atlas having issues today? | 496 |
null | [] | [
{
"code": "realm-core/src/realm/object-store/sync/app.cpp:607(lldb) po response.body\n\"<!DOCTYPE html>\\n<html>\\n <head>\\n <title>App Services</title>\\n....\nresponse.body",
"text": "I have an app that uses App Services Authentication service, and Sign In with Apple.When testing on my device, it logs in as expected. When I try to login with the Simulator, login fails with a JSON parsing error. Stepping trough the code, I found that the response body in realm-core/src/realm/object-store/sync/app.cpp:607 prints as a HTML page when logging in trough the Simulator.On my device, response.body is a JSON object as expected.What’s going on with this? It reproduces every time after re-install, product clean, depenency updates, and using emailPassword auth.",
"username": "Oscar_Apeland"
},
{
"code": "",
"text": "and Sign In with Apple.using emailPassword auth.Are you stating the issue occurs with both?Did you try deleting the CoreSimulator directory so a new one is generated?Also, can you share your authentication code, and what version of Realm are you using (if you use a podfile, that may be helpful as well)",
"username": "Jay"
},
{
"code": "",
"text": "The exact error is:\nError Domain=io.realm.app Code=-1 “[json.exception.parse_error.101] parse error at line 1, column 1: syntax error while parsing value - invalid literal; last read: ‘<’” UserInfo={Error Code=-1, Server Log URL=, Error Name=MalformedJson, NSLocalizedDescription=[json.exception.parse_error.101] parse error at line 1, column 1: syntax error while parsing value - invalid literal; last read: ‘<’}",
"username": "Oscar_Apeland"
},
{
"code": "(std::string) body = \"<!DOCTYPE html>\\n<html>\\n <head>\\n <title>App Services</title>\\n <link rel=\\\"shortcut icon\\\" href=\\\"/static/favicon.ico\\\" type=\\\"image/x-icon\\\" />\\n <style>\\n #app { display: none; }\\n </style>\\n <script>\\n var settings = {\\\"accountUIBaseUrl\\\":\\\"https://account.mongodb.com\\\",\\\"adminUrl\\\":\\\"https://realm.mongodb.com\\\",\\\"apiUrl\\\":\\\"https://realm.mongodb.com\\\",\\\"bugsnagToken\\\":\\\"d93dd442ef3db183db76cc9ded3bc109\\\",\\\"chartsUIBaseUrl\\\":\\\"https://charts.mongodb.com\\\",\\\"cloudUIBaseUrl\\\":\\\"https://cloud.mongodb.com\\\",\\\"endpointAPIUrl\\\":\\\"https://data.mongodb-api.com\\\",\\\"endpointAPIUrlsByProviderRegion\\\":{\\\"aws-ap-south-1\\\":\\\"https://ap-south-1.aws.data.mongodb-api.com\\\",\\\"aws-ap-southeast-1\\\":\\\"https://ap-southeast-1.aws.data.mongodb-api.com\\\",\\\"aws-ap-southeast-2\\\":\\\"https://ap-southeast-2.aws.data.mongodb-api.com\\\",\\\"aws-eu-central-1\\\":\\\"https://eu-central-1.aws.data.mongodb-api.com\\\",\\\"aws-eu-west-1\\\":\\\"https://eu-west-1.aws.data.mongodb-api.com\\\",\\\"aws-eu-west-2\\\":\\\"https://eu-west-2.aws.data.mongodb-api.com\\\",\\\"aws-sa-east-1\\\":\\\"https://sa-east\"...\n func authorizationController(controller: ASAuthorizationController, didCompleteWithAuthorization authorization: ASAuthorization) {\n guard\n let credential = authorization.credential as? ASAuthorizationAppleIDCredential,\n let token = credential.identityToken,\n let tokenString = String(data: token, encoding: .utf8)\n else {\n let alert = UIAlertController(title: \"Login Error\", message: \"No Credential\", preferredStyle: .alert)\n alert.addAction(UIAlertAction(title: \"Dismiss\", style: .cancel))\n present(alert, animated: true)\n return\n }\n\n Task { @MainActor in\n do {\n let user = try await realmApp.login(credentials: .apple(idToken: tokenString))\n delegate?.loginViewControllerDidLogin(with: user, name: credential.fullName)\n } catch {\n print(String(describing: error))\n presentError(title: \"Login Error\", error: error)\n }\n }\n }\n\"eyJraWQiOiJXNldjT0tCIiwiYWxnIjoiUlMyNTYifQ.eyJpc3MiOiJodHRwczovL2FwcGxlaWQuYXBwbGUuY29tIiwiYXVkIjoiY29tLmdldHdlYXJpdC53ZWFyaXQiLCJleHAiOjE2OTA1NzM4NDEsImlhdCI6MTY5MDQ4NzQ0MSwic3ViIjoiMDAwMTQ2Ljg4NTEzNGFlYzIyZjQyOTRiMWMzYTIzNTU1YzU5NTE5LjAzMzMiLCJjX2hhc2giOiIwY0QxZElMX1RnLUN0cHpzMnZiSmx3IiwiZW1haWwiOiJvc2Nhci5hcGVsYW5kQGljbG91ZC5jb20iLCJlbWFpbF92ZXJpZmllZCI6InRydWUiLCJhdXRoX3RpbWUiOjE2OTA0ODc0NDEsIm5vbmNlX3N1cHBvcnRlZCI6dHJ1ZX0.hMjobLqtaBz8bqxVqKisW4UH56mcRI6z7FhxCifwGweN23nNMIxCSyKfUBrA_m2YMZ06JWJIYK5ZTDvziD6dIHQ-l-WLnhcPef715RUGzad-sSGhIvE4-_9V-rDoWYYnmZq2UfC9VCfBDIaeJCP_rFw4oNHu2E37sm5I7VS1u1zp110RkAKYp8ezL2UktB0NkBYAzjRw3czqvZSi9HJLhrNXHSNd2OQSMUfHdELhq9touZEvvs0Km6wL9zF3P4rjTknVP0tsp-tkS890NDeAYeV6nCJADCA3U2sfJrp8qMseAaqJwKtFAG-daLoFX2QdBZbb14gQEcGXsFBF54yMNw\"\n",
"text": "Hi Jay! Thanks for responding, and apologies for leaving you hanging.I worked around this by just testing on Mac (Designed for iPad) and physical devices instead. But today, this happened on a physical device as well.It crashed iPhone 11 Pro if that’s helpful. It DOES work on an iPhone 14 Pro Max, and a iPhone 13.So let’s resolve this The issue happens with both SIwA and email/password auth, yes. It seems to stem from an invalid response from Atlas Auth where a HTML document is returned in place of the intended auth response.\n\nScreenshot 2023-07-27 at 21.52.582124×794 166 KB\nPrinting the response at that function yields this, rather than JSON.This is in response to this login code:where i’ve also verified that the token looks real:I’m on the latest version of Realm.",
"username": "Oscar_Apeland"
},
{
"code": "",
"text": "I have now also verified by reproducing in a fresh and extremely minimal iOS project. Same error. Error has to be server-side (or something really obvious I’m missing in the pasted code).",
"username": "Oscar_Apeland"
},
{
"code": "",
"text": "This certainly could be a server issue; however there’s a lot of unknowns in your code that could be either the cause or a contributing factor. Eliminating as many variables as possible may lead to more info.One thing to try is to see if commenting out your authentication code and using the exact code supplied in the Apple User Auth Guide makes a difference. That will eliminate that section of code.From there, working backwards to see if code called prior to that may be delivering wrong data. Reviewing Sign In With Apple is a good step.One other question is that does the crash occur with both a fresh user, who has not signed in before as well as one that has signed in on the device previously? In the delegate, are you creating an account in your system using the data contained in the user identifier (per the Apple guide)?",
"username": "Jay"
},
{
"code": "",
"text": "Also, cross post to this StackOverflow post in case someone produces an answer there.",
"username": "Jay"
},
{
"code": "baseURLRLMApplet realmApp = App(\n id: \"$id\",\n configuration: AppConfiguration(\n // baseURL: \"https://realm.mongodb.com/groups/#/apps/#\",\n localAppName: Bundle.main.infoDictionary?[kCFBundleNameKey as String] as? String,\n localAppVersion: Bundle.main.infoDictionary?[kCFBundleVersionKey as String] as? String\n )\n)\nbaseURL",
"text": "I messed about with the process, and found the culprit.I had specified a baseURL in my RLMApp init…Simply commenting baseURL away fixed the auth problem, without appearing to cause other issues.I had added that because I setup Realm with some doc suggesting to use a configuration plist which contained a baseURL (from this doc: https://www.mongodb.com/docs/atlas/app-services/tutorial/swiftui/), and then I refactored that to simply inline the value. It seems to work fine without that parameter tho.Jez. Hope this is useful for someone else.",
"username": "Oscar_Apeland"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Login Fails on Simulator | 2023-03-08T01:04:43.013Z | Login Fails on Simulator | 1,128 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "\nrequire('dotenv').config();\nconst mongoose = require(\"mongoose\");\n\nasync function dbConnect() {\n // use mongoose to connect this app to our database on mongoDB using the DB_URL (connection string)\n mongoose\n .connect(\n process.env.DB_URL,\n {\n // these are options to ensure that the connection is done properly\n useNewUrlParser: true,\n useUnifiedTopology: true\n })\n .then(() => {\n console.log(\"Successfully connected to MongoDB Atlas!\");\n })\n .catch((error) => {\n console.log(\"Unable to connect to MongoDB Atlas!\");\n console.error(error);\n });\n}\n\nmodule.exports = dbConnect;\n\n",
"text": "Hi All,I am new to working on the back end of a website and have written the below code. It runs locally but the connection isn’t made when I host the back end code on Heroku. It seems that a connection isn’t even being made to MongoDB. Perhaps someone has some insight because I am very lost.",
"username": "Ben_Cruise"
},
{
"code": "",
"text": "Ive not used it myself…but i found this which indicated a number of steps needed…This guide walks you through the steps required to deploy your application to Heroku with a MongoDB database using MongoDB Atlas.",
"username": "John_Sewell"
}
] | Node.js app is working locally but doesn't seem to be connecting to the MongoDB database via Heroku | 2023-07-30T19:15:56.588Z | Node.js app is working locally but doesn’t seem to be connecting to the MongoDB database via Heroku | 400 |
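A minimal sketch of the two things that most often break this setup on Heroku: the connection string is not available as a config var, or the Atlas Network Access list does not allow Heroku's dynamic IPs. The variable name DB_URL comes from the snippet above; the connection string shown in the comment is only a placeholder.

require("dotenv").config();
const mongoose = require("mongoose");

async function dbConnect() {
  // On Heroku there is no .env file; set the value as a config var, e.g.
  //   heroku config:set DB_URL="mongodb+srv://user:pass@cluster0.example.mongodb.net/mydb"
  if (!process.env.DB_URL) {
    throw new Error("DB_URL is not set - configure it as a Heroku config var");
  }
  // Fail fast instead of hanging if Atlas rejects the connection (check the
  // Atlas Network Access list, e.g. allow 0.0.0.0/0 while testing).
  await mongoose.connect(process.env.DB_URL, { serverSelectionTimeoutMS: 10000 });
  console.log("Successfully connected to MongoDB Atlas!");
}

module.exports = dbConnect;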
null | [
"mongodb-shell"
] | [
{
"code": "mongosh --authenticationDatabase admin --username admin --password password123\nmongosh --host [server ip] --port 27017 --authenticationDatabase admin --username admin --password password123\nMongoServerError: Authentication failed",
"text": "Whenever I run the following command inside the server this works:But on my PC, I try to connect to the same server with the followingand this doesn’t work MongoServerError: Authentication failed, I’ve already checked my firewall and everything seems okay, port 27017 is open both udp and tcp, this has been bugging me out since yesterday, what could be causing this?",
"username": "Saylent_N_A"
},
{
"code": "\"c\":\"ACCESS\"db.getUser('username', {showAuthenticationRestrictions:true}).authenticationRestrictions// example\ndb.getUser('foo',{showAuthenticationRestrictions:1}).authenticationRestrictions\n[ { clientSource: [ '127.0.0.1/8', '192.168.1.1/24' ] } ]\n",
"text": "Hi @Saylent_N_A, welcome to the forums.Server logging should also show any failure reasons, search/filter on \"c\":\"ACCESS\".A user can be created to include extra authentication restrictions on the client ip address(es) and server ip address(es).Use the db.getUser('username', {showAuthenticationRestrictions:true}).authenticationRestrictions command to list any.Another possibility us that the port is being forwarded to another mongodb instance?",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to authenticate to MongoDb if attempting remotely | 2023-07-28T04:11:07.113Z | Unable to authenticate to MongoDb if attempting remotely | 509 |
null | [
"aggregation",
"crud",
"views"
] | [
{
"code": "findOneAndUpdate(\n {_id: id, movies: {$elemMatch: {_id: movieID}}}, // there is an index **_id_movies_id**\n { $set: {\n 'movies.$.status': 'Available'\n }\n }\n)\nfindOneAndUpdate(\n { _id: id, movies: {$elemMatch: {_id: movieID}}},\n [\n { \n $set: {\n 'movies.$.status': 'Available' // << this fails\n }\n },\n // other operations where the value depends on the field in the collection\n {\n $set: {\n isDone: { $eq: [ '$total', '$available' ]}\n }\n }\n ]\n)\n",
"text": "Hi,I have a findOneAndUpdate() query composed by:This query works perfectly, but now I’m trying to rewrite this query using updates aggregation.It fails because I can’t reference “movies.$.status”.Thanks.\nd.",
"username": "dar84"
},
{
"code": "findOneAndUpdate",
"text": "Hi @dar84 and welcome to MongoDB community forums!!In order to understand the requirement in more details, it would be helpful if you could share a few details of the deployment.However, please note that, findOneAndUpdate updates the first matching value for the given condition and will not perform any update if there are no matching documents.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi,\nthanks for your reply.The query fails and I’m looking for a way to fix it.Regarding the dataset you asked, I thought it was already clear:\n{_id: ObjectID, movies: [{status: String}], isDone: Number}What part of my question is not clear?",
"username": "dar84"
},
{
"code": "",
"text": "Hi @dar84I am looking for a sample document which would help me reproduce query in my local system and help you with the correct or updated query for the result you are looking for.As MongoDB is a document-oriented flexible schema database, the document’s structure and datatypes cannot be determined beforehand. This is why we need some example document to be able to reproduce what you’re seeing.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Have you looked at arrayFilter?/Edit\nChecking docs and SO seems this may not work with comparing two fields together, in those cases other options are used.Be interesting if anyone can come up with an example of using an arrayFilter to compare two elements within an array object to each other if it’s possible…",
"username": "John_Sewell"
},
{
"code": "",
"text": "AboutA sample data on which the the above queries are being operated.andRegarding the dataset you asked, I thought it was already clear:\n{_id: ObjectID, movies: [{status: String}], isDone: Number}It is customary to ask for sample documents. While a schema is clear, it forces us to create our own documents to help you solve your problem. This is extra work that we won’t need to do when you provide sample documents.I can’t rely on the index anymore in that case?The query parts is the same in both case so the index will be used to find the document. No matter how you do the update the resource intensive part is to find the document and then write back the document after the update. But if you find $map to intensive, you can avoid it by using a combination of $concatArrays and $slice using $indexOfArray to find where to $slice.",
"username": "steevej"
},
{
"code": "",
"text": "Hi John,\nArrayFilters doesn’t work because it still uses $position as my problem statement suggest, thanks anyway.",
"username": "dar84"
},
{
"code": "",
"text": "Thanks Steeve, I’m fine with a workaround and I ended up with a workaround at the moment.My question is just and only focused to check if it expected for MongoDB to not work in that scenario.\nIn that case I will open a ticket on MongoDB JIRA.",
"username": "dar84"
},
{
"code": "",
"text": "No matter how you do the update the resource intensive part is to find the document and then write back the document after the updateI think this might be incorrect.My questions is:\nWhy does MongoDB in-place update stop working with MongoDB update aggregate?\nTo me that seems a bug.",
"username": "dar84"
},
{
"code": "",
"text": "Reason: MongoDB uses in-place updates for “movies.$.status”, so it will NOT write the whole array back on the disk but just the element that got affected by the update.I do not think that this is correct. What I understand is that when any part of a document is modified, the whole document is written back to disk.",
"username": "steevej"
},
{
"code": "",
"text": "With an index on the “movies._id”, the search will be faster.The index is used to locate the document. The same index can be used what ever you do to update the document.",
"username": "steevej"
},
{
"code": "",
"text": "This is not how WiredTiger is supposed to work, according to the documentation this is called in-place updates. It might be a good question for a MongoDB engineer, but I’m pretty sure it only write the element got changed in the array. NOT the whole array.",
"username": "dar84"
},
{
"code": "",
"text": "if my memory serves me well is that in place updates was something MMAP could do.The little I know about wiredTiger is that data is written compressed on disk. I do not know a compression algorithm that can compress effectively a simple value and replace it in a compressed block. So I think the whole document, non just the array, is written back. something like copy-on-write is implemented.@MaBeuLux88, @Jason_Tran, @chris, @wan, can someone point us to some documentation to settle this out.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @steevejWiredTiger will perform a copy on write.My notes say it is covered in MongoDB Training OC700. I don’t have any other reference.MVCC controls",
"username": "chris"
}
] | $elemMatch with update in aggregation | 2023-07-22T19:24:00.241Z | $elemMatch with update in aggregation | 992 |
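To make the workaround mentioned above concrete: since the positional $ operator is not available inside an update that uses an aggregation pipeline, the matching array element can be rewritten with $map instead. This is only a sketch based on the field names in the thread (movies, status, total, available, isDone); the collection name users and the inputs id/movieID are assumptions.

db.users.findOneAndUpdate(
  { _id: id, "movies._id": movieID },
  [
    {
      $set: {
        movies: {
          $map: {
            input: "$movies",
            as: "m",
            in: {
              $cond: [
                { $eq: ["$$m._id", movieID] },
                { $mergeObjects: ["$$m", { status: "Available" }] }, // update the matching element
                "$$m", // leave the others untouched
              ],
            },
          },
        },
      },
    },
    // a later stage can now reference other fields of the same document
    { $set: { isDone: { $eq: ["$total", "$available"] } } },
  ]
);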
null | [
"performance"
] | [
{
"code": "",
"text": "Hi!Recently I did a lot of benchmarks by applying Profile-Guided Optimization (PGO) to different software (including a lot of databases) - the results are available in https://github.com/zamazan4ik/awesome-pgo/.Also, I did MongoDB benchmarks with PGO - https://github.com/zamazan4ik/awesome-pgo/blob/main/mongodb.mdI think these results will be interesting for the MongoDB community. I hope the MongoDB developers will consider providing a note about PGO and MongoDB somewhere in the documentation, adding a build option to MongoDB for easier PGO builds, preparing PGO-optimized binaries, or even will use PGO internally for cloud-based MongoDB installation.",
"username": "zamazan4ik_N_A"
},
{
"code": "",
"text": "I also reported the results here:",
"username": "zamazan4ik_N_A"
}
] | Profile-Guided Optimization (PGO) and MongoDB | 2023-07-30T15:30:22.379Z | Profile-Guided Optimization (PGO) and MongoDB | 509 |
null | [] | [
{
"code": "",
"text": "We have a MongoDB 3.6.14 standalone community edition running on Linux Ubuntu 16.6. we have a requirement to upgrade it to Enterprise standalone in Linux only. I checked the document and got to know that I may have to upgrade the linux server for Mongo 6.0 as per the recommended platforms consideration. Would anyone of you guide me how to approach, plan for this migration and any other aspects we may would like to check.",
"username": "Manish_Kumar9"
},
{
"code": "",
"text": "Hi @Manish_Kumar9If you are moving to Enterprise Advanced then your first step should be opening a support ticket with MongoDB SupportUsing Cloud Manager or Ops Manager can make these MongoDB version upgrades easy as well as adding performance and monitoring metrics and alerting.Adding additional nodes and converting to a replica set will minimise/eliminate downtime for the upgrades and could provide a cleaner path for OS upgrades.Keep application drivers in mind and that their version is compatible with MongDB versions along the way.Upgrade as far as you can on current OS version\nUbuntu 16.04 supports MongoDB version up to version 4.4\nUpgrade MongoDB 3.6.14 → 3.6.23 → 4.0.28 → 4.2.24 → 4.4.x1Upgrade OS to Ubuntu 20.04\nIf using a replica set the option of a new install is available, As the new install can perform an initial sync off the other replica set membersUbuntu 20.04 will support 4.4 through to the upcoming 7.0\nUpgrade 4.4.x1 → 5.0.x1 → 6.0.x1Upgrade OS to Ubuntu 22.04\nThis will support MongoDB versions 6.0 and the upcoming 7.0\nReally this is optional but will bring you up to date.1 Use latest patch version available",
"username": "chris"
},
{
"code": "",
"text": "Many thanks for providing such detailed insights. I’m curious, if we were to set up a new Linux 20.4 server and install MongoDB 6 Enterprise Advanced from scratch, would it be possible to restore database backups taken using mongodump on version 3.6.14 Community Edition on 6.0 enterprise advanced ? The database size is 60 GB.",
"username": "Manish_Kumar9"
},
{
"code": "mongodumpmongorestore",
"text": "mongodump/mongorestore are intended to be run against the same major version as well as restoring using the same version that the dump was taken with.Its possible you could have success but this is not a supported or recommended method of upgrading.I should have also mentioned that you should read each versions release-notes for upgrade procedures. The upgrade to 4.0, for example, removes MONGODB-CR authentication. 4.2 removes MMAPv1 storage engine.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Migrating mongo community standalone to enterprise | 2023-07-28T08:37:49.832Z | Migrating mongo community standalone to enterprise | 487 |
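A small sketch of the step that accompanies each binary upgrade in the chain described above: confirm and then raise the feature compatibility version before moving to the next release. Run in the shell against the admin database; the values shown are examples for each hop.

// check the current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });

// after upgrading the binaries to 4.0, 4.2, 4.4, 5.0, 6.0 respectively:
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" });
// ...repeat with "4.2", "4.4", "5.0", "6.0" at each step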
null | [] | [
{
"code": "",
"text": "I have recently received the error:“non-recoverable error processing event: dropping namespace ns=‘.’ being synchronized with Device Sync is not supported. Sync cannot be resumed from this state and must be terminated and re-enabled to continue functioning.”As per the below thread, this has resulted in devices being unable to communicate with MongoDB. We have a client using the app right now so this is a major issue.The thing is I had no intention of dropping the collection in question. I don’t really know what this even means. In fact I can pretty much pinpoint the exact time the issue occurred because that is when the last data was uploaded and it was while I was on holiday, so I don’t think it’s anything I could have done.So my questions are:Thanks very much for your help.",
"username": "Laurence_Collingwood"
},
{
"code": "",
"text": "Hi @Laurence_Collingwood ,We heard from you after a long time, how are you? Thank you for raising your concerns.This error happens when a synced collection is dropped from the database. This brings sync to a halt. The team does not recommend dropping synced collections. The only recovery from here is to terminate and re-enable sync.If your application is running with dev mode on, then anyone with access to your app can delete the collection from the database. Could you please verify if the dev mode is off?Could you please confirm if your application has client-reset implemented? The client-reset implementation can assist in recovering unsynced changes from client devices. Please follow the Client-Reset section for more details.I hope the provided information is helpful.Cheers, \nhenna",
"username": "henna.s"
},
{
"code": "",
"text": "Thanks for your message Henna.Can you please help me understand how I could have dropped the synced collection from the database? I have no idea how this could have happened but it was certainly not my intention so it would be great to understand how to stop it from happening again as it’s caused a fairly major issue!Does dropping the database meaning deleting it? Because I very much don’t want this to happen it has lots of key data!Dev mode is off.Client recovery is on.Best wishes,\nLaurence",
"username": "Laurence_Collingwood"
},
{
"code": "",
"text": "Hi @Laurence_Collingwood ,If your application is running with dev mode on, then anyone with access to your app can delete the collection from the database. Could you please verify if the dev mode is off?A correction to this section. With developer mode ON, it allows anyone to change the schema from the client side. If other users have permission to access the database they can delete a collection. You can look at the project feed for recent actions performed.This is accessible from the Data Services tab, top right corner.Depending on your SDK version, if you have implemented client-reset logic in your application, then unsynced changes can be recovered. But to enable sync again between Atlas and the device, you will have to terminate and re-enable.Thanks,\nhenna",
"username": "henna.s"
},
{
"code": "",
"text": "Thanks @henna.s.The below image shows the activity feed from when stuff seems to have gone wrong. Can you help me understand what’s going on by any chance?No action was taken at the time in question by anyone within our organisation that could have caused it as far as I’m aware.Dev mode is OFF so it sounds like from what you’ve said that nothing could have happened from the client side to cause it.Could it have been caused by the cluster update? (see the image)\nScreenshot 2023-06-29 at 13.00.381887×1001 111 KB\n",
"username": "Laurence_Collingwood"
},
{
"code": "",
"text": "Hi, quick clarification here, no device sync user (regardless of development mode being on) is capable of dropping a collection. This is an action that an Admin takes either through the Data Explorer / Compass or using the Shell / Drivers. A cluster update does not result in dropping collections.It is possible for us to track down how this was dropped depending on when it happened, but I am quite certain that this must have been a manual action. To that end, if you would like us to keep looking at this could you possibly:In terms of places to go from here, terminating and restarting sync is the only option at the moment. In special cases we can allow you to pass over this, but it would result in an unhealthy state in which Realm and Atlas are no longer synchronized (and we do not recommend that). If client reset logic is enabled, then you should not experience data loss when terminating and re-enabling sync.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Thanks very much for your help with this @Tyler_Kaye.Just so I know (so that I can confirm with my team that this wasn’t done accidentally) if I wanted to drop a collection in Compass how would I do this? Would it be by hitting the delete button next to the collection (see image below).The collection that it says has been dropped btw is still there in the Compass (it’s the User collection). If it had been dropped wouldn’t it be expected that it would not be there anymore?There’s only one other member of the team who is an admin. If it turns out that this wasn’t accidentally done by him, does that mean then that we’re likely to have been hacked?For the group_id, please find atlas console URL below. If this is the wrong URL let me know.https://cloud.mongodb.com/v2/605354f17a7359418e9c72f6#/clustersBest wishes,\nLaurence",
"username": "Laurence_Collingwood"
},
{
"code": "",
"text": "Hi, dropping a collection can be done in Compass / Data Explorer with the trash button or the shell/drivers with the dropCollection() method.Thank you for passing along the link to your cluster. I am forwarding it along to someone on the Atlas team to verify that nothing occurred during the cluster upgrade that might have caused this.As to why the collection exists still, a number of things could have re-created it. Do you still see all of the users in there that you would expect to see? If so, there is a chance we can allow you to keep syncing, though it would be very odd and surprising to me if so.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi Tyler,Ok thanks very much, let me know what the Atlas team say. The app does seem to have stopped syncing at pretty much the exact time that the cluster upgraded and I would be very surprised if either one of us accidentally deleted the collection by hitting the trash button for our user collection (presumably an “are you sure?” popup would have appeared before the collection was dropped?), especially given that neither of us can remember being online at that time. So if it’s not the upgrade I can only think we were hacked, although I can’t really imagine who would want to do that either.I can still see all of the users that I would expect to see. The only issue is their data has stopped updating due to the synchronisation having stopped, so the device data will be out of sync with the atlas data.You mention that you may be able to allow us to keep syncing. Do you mean without terminating and restarting the sync? Would the fact that device and atlas data are currently out of date with each other be an issue with this?Thanks again very much for your help with this.Best wishes,\nLaurence",
"username": "Laurence_Collingwood"
},
{
"code": "",
"text": "Hi @Laurence_Collingwood ,I have filed a ticket with the Atlas team to identify if there was an issue that might have occurred and if not, produce some sort of evidence that this was indeed user driven.In the meantime, I still would suggest terminating and re-enabling sync (it will be the same regardless of the outcome of the investigation). Additionally, I would take this time to mention that we really recommend you not use shared tier clusters for any sort of production environment. They have rate limitting built-in, limited visibility and metrics, and generally can be more susceptible to network outages.Ideally, you can terminate sync. scale up your cluster to an M10, and then enable sync. This should be a safe operation and will just cause your clients to reset and re-upload any unsynced changes.See here for more: https://www.mongodb.com/docs/atlas/app-services/sync/go-to-production/production-checklist/I will keep you informed of the investigation. I too find the timing suspect, though I also would be very surprised if routine maintenance led to this.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi @Tyler_Kaye,Thanks very much for this.I will take your advice on scaling up our cluster (cost depending) as would like to avoid issues like this happening again!I just want to clarify before terminating and re-enabling sync, would you expect that our users’ data that they’ve been saving over the past week and a half will not be lost? We are doing a 5 week trial with this client so if it is likely that user data will be lost it may be best to wait till the trial is over and then terminate and re-enable sync.Best wishes,\nLaurence",
"username": "Laurence_Collingwood"
},
{
"code": "",
"text": "Hi, quick question first. Does your workload involve making changes to Atlas and ensuring that they make their way to Realm? Looking at the metrics from before it does seem like you used to have some writes making it from Atlas to the Device.Unfortunately, as designed, there is no option but to terminate and re-enable sync. See here for the details: https://www.mongodb.com/docs/atlas/app-services/sync/error-handling/client-resets/#client-reset-recovery-rulesIf you are in a trial and you do not need writes to be going to/from MongoDB at the moment, the best bet for you and your business is likely to wait out of safety, but terminating and re-enabling will eventually be necessary and should be a safe operation. (It is needed for migrating shared-tier clusters, but once you are dedicated you can scale up and down freely).We are looking into a few re-architectures of this replication component to be able to more safely resume without terminating sync in these cases, but unfortunately, they are in the early stages still.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi @Tyler_Kaye,Yes we do make changes in the Atlas that are needed to make their way to realm as part of our normal workload. Our app lets people track the carbon footprint of their food purchases so we have a database of food items and their associated footprints, so occasionally we update this database in the atlas. We also occasionally update user data.Ok I will make a decision on terminating and re-syncing. The downside of not doing this is that right now new users cannot create an account, because creating a new user requires communication between their device and the Atlas, so the app is currently crashing every time a new user tries to create an account.Let me know when the Atlas team get back to you with the results of their investigation. If this is something we’ve accidentally done on our end it would be good to know so that we don’t do it again.Best wishes,\nLaurence",
"username": "Laurence_Collingwood"
},
{
"code": "",
"text": "Hi @Tyler_Kaye,I was just wondering whether the Atlas team have got back to yet as to whether this issue was caused by an action that was taken manually or whether it happened during the cluster update?Best wishes,\nLaurence",
"username": "Laurence_Collingwood"
},
{
"code": "",
"text": "Hi @Laurence_Collingwood, I checked in with the team and they couldn’t find any evidence that this was related to ongoing maintenance work and I cross-checked that with occurrences of this error across our systems and found no spike in this error type (which I would have expected to find if this were an issue across all free-tier clusters). I looked over what that maintenance was doing and it seems unlikely to have caused this issue.Unfortunately, because it is a free-tier cluster, we have very limited metrics and logging for the clusters so it is difficult to investigate this any further (https://www.mongodb.com/docs/atlas/mongodb-logs/).I would be happy to continue chatting about the best path forward, but unfortunately, I am at a bit of a dead end in terms of investigating what exactly happened here.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi Tyler,Ok thanks for letting me know. Very strange! We will update our passwords and just have to keep our fingers crossed it doesn’t happen again then…Best wishes,\nLaurence",
"username": "Laurence_Collingwood"
},
{
"code": "",
"text": "Hi @Tyler_Kaye,I just terminated and restarted the sync and it looks as though all the data that has been stored on devices since the issue occurred (one month ago) has been lost.Just wanted to make you aware of this because the advice above from your team was that this was unlikely to happen due to the fact that we have client recovery on.I also just wanted to double check that there’s no way of recovering this data? And also whether there’s anything I should have done differently which would have meant that client data would not have been lost? All I did was terminate and re-enable sync as requested.Best wishes,\nLaurence",
"username": "Laurence_Collingwood"
},
{
"code": "",
"text": "Hi, that is unexpected. What should happen is that when the device re-connects after the client resetting it should re-upload all of the lost changes. Have those devices reconnected yet?We are working on ways to make this not rely on the device reconnecting. Unfortunately, as of now, until the device reconnects the history has been wiped by terminating sync.I am attaching this to an Epic in the hopes we can prioritize changing this interaction.As a check-in, is the system healthy now and you can see changes flowing to/from atlas?Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi @Tyler_Kaye,I can only speak for my own device, but after terminating and re-enabling sync when trying to log in the app crashes with the following error:“The server has forgotten about this client-side file (Bad client file identifier (IDENT)). Please wipe the file on the client to resume synchronization”The only fix I’ve been able to find to this in the past is deleting and reinstalling the app. So I did that and this, presumably, wiped the locally saved data and therefore it is gone forever. If there is a way of resolving the bad client error without deleting the app and therefore the locally saved data I’d be grateful if you’d let me know.One thing to note though, which is curious, is that when I was testing the app on an Xcode simulator (essentially another device) over the past month since the issue occurred, I’m pretty sure I could see data that I saved on my actual device - suggesting that the data was not only saved to the device? In any case, when I log in on any device now, all the data I saved over the previous month has gone.The system does seem to be healthy now, although I’ve not done a ton of testing. Immediately after syncing there were some strange events (values not being updated that I would expect to be) but this seems to have smoothed out for now.By the way, I am currently looking at upgrading from the shared cluster, per your recommendation. Would the “serverless” option do the job? We have very low usage at the moment while we’re in the trial phase, so I think it would save us a lot of money compared to the “dedicated” option.Best wishes,\nLaurence",
"username": "Laurence_Collingwood"
},
{
"code": "“The server has forgotten about this client-side file (Bad client file identifier (IDENT)). Please wipe the file on the client to resume synchronization”One thing to note though, which is curious, is that when I was testing the app on an Xcode simulator (essentially another device) over the past month since the issue occurred, I’m pretty sure I could see data that I saved on my actual device - suggesting that the data was not only saved to the device? In any case, when I log in on any device now, all the data I saved over the previous month has gone.\n",
"text": "Hi, apologies for the delay here.The error “The server has forgotten about this client-side file (Bad client file identifier (IDENT)). Please wipe the file on the client to resume synchronization” is just as you would expect, it is what happens when you terminate and re-enable sync and we force each client to reset. You should see those clients connect and re-upload their unsynced changes as long as they were built with an SDK released within the last year or so: https://www.mongodb.com/docs/atlas/app-services/sync/error-handling/client-resets/#client-reset-recovery-rulesAs for this:I think I am a little unclear on what you mean. Do you mind elaborating?As for Serverless clusters, Device Sync does not yet support using them for the same reason we do not support migrations from Shared to Dedicated clusters. Can I use Realm Sync with Serverless AtlasThis is an unfortunate limitation but we are working with the Serverless team to remove the limitations that prevent us from being able to reliably sync data to/from serverless clusters.Best,\nTyler",
"username": "Tyler_Kaye"
}
] | Collection dropped unintentionally causing synchronisation between atlas and device sync to be stopped | 2023-06-29T08:06:00.269Z | Collection dropped unintentionally causing synchronisation between atlas and device sync to be stopped | 1,161 |
null | [
"queries",
"php"
] | [
{
"code": "",
"text": "Hi community!\nI am trying to understand the challenge proposed by this link\n“Write a MongoDB query to find the restaurant Id, name, borough and cuisine for those restaurants which achieved a score which is not more than 10.”The solution says it is:\nRestaurants.find(‘grades.score’:{ $not: {$gt : 10}}}) → 340 results\nwhich is ok, I get it.\nthis was my solution, which does not shield the same result count.\n{‘grades.score’:{ $lt : 10}} → more than 3529 resultsI am not understanding why. I have read how to query on arrays, and I thought that whenever I wrote a query on an array, it would return me any document which satified at least one element in the said array, but the proposed solution is returning all documents satifying all elements in the document’s array.extra:\nIf you have any link, blog, or example explaining this, it would be really appreciated",
"username": "Maxi_dziewa"
},
{
"code": "",
"text": "Looking at that dataset it’s easiest to work with a single simple example:One document has the following scores:\n[14,2]So we have two possible filters:\nDocuments with a score that is NOT greater than 10\nDocuments with a score of less than 10In our sample we can see that it satisfies the second filter but not the first, while it does have values that are less than 10, it ALSO has values that are greater and so does not match the first.This had me scratching my head a bit I’m afraid to say until I narrowed it down to a sample document to work through…",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thank you for taking the time to analyze the situation John.\nWhat I don’t understand is why the 1° filter (Not greater than 10) takes into consideration every element in the array, while the 2° filter just takes into consideration one element fulfilling the condition to render it as true.It is like if the 1° filter applies an “and” on every element in the array, while the second filter applies an ‘or’.Then I read the problem: Documents with a score that is NOT greater than 10\nand it makes it worse.“documents with A score…” So I understand Documents with at least a single score for which the filter is true.For it to be valid in my mind, it should read, Documents with all the scores not greater than 10. But then I would think that filtering by “Not greater than 10” tries to match every element in the array, but the 2° filter just tries to match a single element in the array, and I don’t understand why.There is something I am missing here ",
"username": "Maxi_dziewa"
},
{
"code": "{\n MyField :[1,2,3,4]\n}\n\nMyField : {$lt:3}\n",
"text": "When you wrap the criteria in a not you are basically turning it into an AND from an OR, basically this:In propositional logic and Boolean algebra, De Morgan's laws, also known as De Morgan's theorem, are a pair of transformation rules that are both valid rules of inference. They are named after Augustus De Morgan, a 19th-century British mathematician. The rules allow the expression of conjunctions and disjunctions purely in terms of each other via negation.\n The rules can be expressed in English as:\n or\n or\nSo when you do something like this:We’re saying find me any document where one of the values is less than 3, i.e.\n(1 < 3) OR (2 < 3) OR (3 < 3) OR (4 < 3)\nWhich comes out to:\n=> TRUE OR TRUE OR FALSE OR FALSE\n=> TRUESo with the normal condition we’re saying that it’s an OR over all elements to work out if any of them match.Now if we wrap that in a not:\n$not:{MyField : {$lt:3}}We’re saying that we want to find a document where NONE of the items are less than 3, in effect the opposite of the above query is none of the elements can be set to less than 3. So we’re swapping from finding a single matching element to demanding that none of them match which is more restrictive.NOT (A OR B) = (NOT A) AND (NOT B)The negation of an AND is:\nNOT (A AND B) = (NOT A) OR (NOT B)So in our above case:\n(1 < 3) OR (2 < 3) OR (3 < 3) OR (4 < 3)\nbecomes\nNOT( (1 < 3) OR (2 < 3) OR (3 < 3) OR (4 < 3))Which can be expanded to:\nNOT(1 < 3) AND NOT(2 < 3) AND NOT(3 < 3) AND NOT(4 < 3)\nEvaluates to:\n=> NOT(TRUE) AND NOT(TRUE) AND NOT(FALSE) AND NOT(FALSE)\n=> FALSE AND FALSE AND TRUE AND TRUE\n=> FALSEHopefully this helps explain it a bit more…it can be a bit of a mind bend to swap a condition and then NOT it when working over a range of elements in an array due to this.",
"username": "John_Sewell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help needed understanding two similar queries | 2023-07-25T23:46:57.411Z | Help needed understanding two similar queries | 506 |
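The single-document example from the explanation above can be reproduced directly in mongosh (the collection name restaurants is assumed):

db.restaurants.insertOne({ name: "Sample", grades: [{ score: 14 }, { score: 2 }] });

// Matches: at least one element (2) is less than 10.
db.restaurants.find({ "grades.score": { $lt: 10 } });

// Does not match: 14 is greater than 10, and $not requires that NO element matches $gt: 10.
db.restaurants.find({ "grades.score": { $not: { $gt: 10 } } });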
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "Hi,I did a lot of research on backup/restore methods where I need to take backups on multi terabytes MongoDB. So, I need to take full backup and then incremental backups. I found mongodump/mongorestore doesn’t suit for this type of size.\nSo, I have planned to go with copy of DB files directly for full backup and mongodump for incremental. Would like to know anyone’s opinion whether this would be a feasible solution? If not, then, which solution would work.\nAlso, lvm snapshots takes incremental backups also?",
"username": "muser_33"
},
{
"code": "",
"text": "Disk system provided snapshot solutions is always better in case of big data size (e.g. aws ebs volume.). So, it depends on your architecture solution.Using manual backup methods can be tedious work and may require you to deal with op logs.",
"username": "Kobe_W"
}
] | Best solution for taking backup and restore | 2023-07-29T14:54:49.906Z | Best solution for taking backup and restore | 481 |
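One detail worth adding to the answer above: an LVM (or EBS) snapshot is a point-in-time copy of the volume, not an incremental backup by itself; incrementals are usually built from the oplog. A minimal sketch, assuming a replica set and a sufficient oplog window, is to record the newest oplog timestamp at snapshot time so the next incremental only has to cover entries after it:

// run in mongosh right after the filesystem/LVM snapshot completes
const last = db.getSiblingDB("local")
  .getCollection("oplog.rs")
  .find({}, { ts: 1, _id: 0 })
  .sort({ $natural: -1 })
  .limit(1)
  .next();

// store this value with the snapshot; the incremental dump/apply starts from here
print(`full backup covers the oplog up to ts ${last.ts}`);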
null | [
"indexes"
] | [
{
"code": "{\n last_logged_in: -1, // milliseconds\n age: -1,\n height: -1,\n gps: -1,\n education: -1,\n // etc., 20 more fields\n}\nlast_logged_in",
"text": "Is this compound index feasible:The challenge is that this compound index contains more than 20 fields and the lead field, last_logged_in, will be highly volatile. It is constantly updated.So the question is, will write queries be too slow?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Whenever a user logs in, last log in time will change and that means an index update (and with B+ tree, the index may also have to be moved around).So it may not be a good idea if you have a lot of “active log in” users every day.",
"username": "Kobe_W"
}
] | Feasibility of using a highly volatile field as the lead of a compound index | 2023-07-29T08:07:04.363Z | Feasibility of using a highly volatile field as the lead of a compound index | 452 |
null | [
"replication"
] | [
{
"code": "",
"text": "I had problems setting up a replica set and constantly got “MongoServerError: This node was not started with replication enabled.”. Then I switched from ports 27017 to 27019 and instead ran the prime node on 27027 and it worked (as long as none of the others were run on 27017). I tried other ports (and of course checked that there was another prg listening to it) but only 27027 seems to work. Any ideas on why?",
"username": "Fredrik_Schulte1"
},
{
"code": "",
"text": "Make sure all your mongodb processes are set to run with replication set name. Refer to the tutorial.",
"username": "Kobe_W"
},
{
"code": "",
"text": "I followed the tutorial and the only thing I change is the portnumbers. Still only works on 27027 as prime.",
"username": "Fredrik_Schulte1"
},
{
"code": "",
"text": "Hi @Fredrik_Schulte1,\nWhen you have initiate the replica set, you’ve set the pair hostname:Port for alle member of cluster, so i think is normal you’ve issue with the other Port.From the documentation:Regards",
"username": "Fabio_Ramohitaj"
}
] | Can only run replica set on 27027 as prime | 2023-07-28T14:34:41.962Z | Can only run replica set on 27027 as prime | 556 |
null | [
"aggregation",
"queries",
"indexes"
] | [
{
"code": "{\n last_logged_in: -1, // in milliseconds\n age: 1,\n city: 1,\n}\nuser.aggregate([\n {\n $match: {\n age: { $gt: 18 },\n city: { $in: [\"chicago\", \"paris\"] }\n }\n },\n {\n $sort: {\n last_logged_in: -1\n }\n },\n {\n $limit: 10000\n }\n])\nlast_logged_in$matchlast_logged_in",
"text": "index:query:The last_logged_in field is in milliseconds. So let’s further suppose that all documents hold unique values of this field.Is this index useful at all in the $match stage to find the documents?EDITED:The better way to phrase this question is, would this index be more effective if the last_logged_in field is more granular or less granular? For example, milliseconds vs seconds vs hours vs days?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "last_logged_in$match$sort",
"text": "Another thing to note is that the last_logged_in field is never used in the $match stage, but is always part of the $sort.",
"username": "Big_Cat_Public_Safety_Act"
}
] | More granular vs less granular lead field in compound index | 2023-07-29T06:05:22.015Z | More granular vs less granular lead field in compound index | 410 |
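A hedged sketch for experimenting with the question above: the field's granularity does not change which documents match, but index order does change how the plan behaves. The index in the question (sort field first) lets the planner walk last_logged_in in order and avoid an in-memory sort, at the cost of examining keys for non-matching cities/ages; an alternative that follows the usual equality-sort-range ordering puts city first. The collection name users is an assumption.

// alternative index: equality (city), then sort (last_logged_in), then range (age)
db.users.createIndex({ city: 1, last_logged_in: -1, age: 1 });

// compare both plans
db.users
  .find({ age: { $gt: 18 }, city: { $in: ["chicago", "paris"] } })
  .sort({ last_logged_in: -1 })
  .limit(10000)
  .explain("executionStats");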
null | [] | [
{
"code": "Both the private server and AWS server are running the same version of MongoDB.\nI have followed the necessary steps to set up MongoDB on the AWS server, but the problem persists.\nThe application itself appears to be working fine on the AWS server, but the data just doesn't seem to be stable.\n",
"text": "I have an application that is hosted on a private server with MongoDB, and the data is available and working fine. However, when I try to host the same application on an AWS server, I encounter data loss issues. The data seems to be disappearing or not persisting correctly.Here are some additional details:What could be the potential reasons for this data loss issue? Are there any specific configurations or settings that need to be adjusted when hosting MongoDB on AWS? Any insights or troubleshooting steps would be greatly appreciated. Thank you!",
"username": "Yogesh_Ravichandran"
},
{
"code": "problem persiststhe data just doesn't seem to be stable",
"text": "problem persistswhat problem, the detail?the data just doesn't seem to be stablewhat you mean by not stable?please explain what problem you are seeing.",
"username": "Kobe_W"
},
{
"code": "",
"text": "I have my react application with data been stores in mongodb… But the data getting lost without we do any work on db… Its automatically getting deleted after 12 hours… This issue occurs only in aws ec2… I have the same application running in our private qa server… The same installation method has been followed in both but data is not erased in our private server",
"username": "Yogesh_Ravichandran"
},
{
"code": "",
"text": "Did you find anything related in mongodb log file? if not, you can contact aws support for help. They know more on ec2 related activities.If should be easy for them to monitor if this issue is stably reproducible.",
"username": "Kobe_W"
}
] | Data loss issue when hosting MongoDB on AWS server | 2023-07-28T13:11:08.467Z | Data loss issue when hosting MongoDB on AWS server | 219 |
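One common cause worth ruling out for data that disappears on a schedule from an internet-facing EC2 instance is a mongod that is reachable without authentication (automated scripts routinely wipe such deployments). A quick, assumption-light check from mongosh:

const opts = db.adminCommand({ getCmdLineOpts: 1 });
printjson(opts.parsed.net || {});      // bindIp: is the instance listening on a public interface?
printjson(opts.parsed.security || {}); // authorization should be "enabled" for anything internet-facing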
null | [
"atlas-search"
] | [
{
"code": "[\n { \n \"hotelName\": \"Hotel ABC\",\n \"facilities\": [\n { \"id\": \"free-wifi\", \"title\": \"Free Wi-Fi\" },\n { \"id\": \"by-the-beach\", \"title\": \"By the beach\" },\n { \"id\": \"facilities-for-disabled\", \"title\": \"Facilities for disabled\" }\n ]\n },\n { \n \"hotelName\": \"Hotel DEF\",\n \"facilities\": [\n { \"id\": \"for-elders\", \"title\": \"For elders\" },\n { \"id\": \"by-the-beach\", \"title\": \"By the beach\" },\n { \"id\": \"facilities-for-disabled\", \"title\": \"Facilities for disabled\" }\n ]\n },\n { \n \"hotelName\": \"Hotel GHI\",\n \"facilities\": [\n { \"id\": \"free-wifi\", \"title\": \"Free Wi-Fi\" },\n { \"id\": \"spa-zone\", \"title\": \"SPA Zone\" },\n { \"id\": \"facilities-for-disabled\", \"title\": \"Facilities for disabled\" }\n ]\n }\n]\n[\n {\n $search: {\n index: \"hotels_content\",\n embeddedDocument: {\n path: \"facilities\",\n operator: {\n compound: {\n must: [\n {\n text: {\n path: \"facilities.id\",\n query: [\n \"free-wifi\",\n \"by-the-beach\",\n ],\n },\n },\n ],\n },\n },\n },\n },\n },\n ]\ntexttext",
"text": "Hello!I have the following documents in my MongoDB collection:Now I want to search hotels with both “Free Wi-Fi” and “By the beach” facilities included, so I’m doing the following query:But this query is not working as I expected, looks like it’s using OR expression in text operator, so all 3 documents are returned. How can I change this query to use AND expression in the text filter so only the first document will be returned?",
"username": "Adam_Krawiec"
},
{
"code": "[\n\t{\n\t\t$search: {\n\t\t\tindex: \"hotels_content\",\n\t\t\tcompound: {\n\t\t\t\tmust: [\n\t\t\t\t\t{\n\t\t\t\t\t\tembeddedDocument: {\n\t\t\t\t\t\t\tpath: \"facilities\",\n\t\t\t\t\t\t\toperator: {\n\t\t\t\t\t\t\t\ttext: {\n\t\t\t\t\t\t\t\t\tpath: \"facilities.id\",\n\t\t\t\t\t\t\t\t\tquery: \"free-wifi\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\tembeddedDocument: {\n\t\t\t\t\t\t\tpath: \"facilities\",\n\t\t\t\t\t\t\toperator: {\n\t\t\t\t\t\t\t\ttext: {\n\t\t\t\t\t\t\t\t\tpath: \"facilities.id\",\n\t\t\t\t\t\t\t\t\tquery: \"by-the-beach\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n]\n",
"text": "Since a query array performs an OR search you need to add multiple embeddedDocument operators to the compound must array.I’m looking for the same solution and this was the best I could come up with. If anyone has a better idea please comment!",
"username": "George_Hess"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas Search - search in array of objects with text operator as AND condition | 2023-07-20T09:40:38.988Z | Atlas Search - search in array of objects with text operator as AND condition | 555 |
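Building on the accepted answer above, the required facilities can also be turned into the compound.must array programmatically, which keeps the AND semantics while letting the list vary. The collection name hotels is assumed; the index name and field paths come from the thread.

const required = ["free-wifi", "by-the-beach"];

db.hotels.aggregate([
  {
    $search: {
      index: "hotels_content",
      compound: {
        // one embeddedDocument clause per required facility => all must match
        must: required.map((id) => ({
          embeddedDocument: {
            path: "facilities",
            operator: { text: { path: "facilities.id", query: id } },
          },
        })),
      },
    },
  },
]);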
null | [
"replication",
"performance"
] | [
{
"code": "",
"text": "Hi,We run a fairly large MongoDB installation and we’ve recently come across something i’ve not seen happen before - the write performance of a cluster tanking completely upon adding a new node to a replica set.\nWe use backup disks to initialise the data directory on new nodes ie. it only replicates the last few days worth of data.Usually, we add the node as a hidden member with priority 0 until it catches up and then we reconfigure it to a normal secondary. However, for whatever reason when we attempt this procedure in one of our clusters all writes basically freeze up and looking at the resource utilisation the primary is maxed out from COLLSCAN-queries towards oplog.rs.\nRemoving the new node from the set immediately resolves the issue.What we have tried without success so far:The cluster is currently running 5 nodes and there are no issues with any of the other nodes and/or performance in general - the cluster is deliberately quite oversized and so steady state CPU load is around 10%.Does anyone have any ideas as to what could be happening?",
"username": "Jesper_Carlsson"
},
{
"code": "",
"text": "Significantly reducing the size of the oplog in the clusteri don’t think this is a good idea. (of course, depend on how big it is now).If you have a lot of data in the past few days, it will definitely require a big scan over ops log entries and thus can cause high disk IO.So what you can try is .Generally primary node and secondary node has the same write traffic, but given you use write concern: 1 , only primary needs to ack the write. So using a secondary node will only slow down the replication for the sec node, but not impact write performance on primary.",
"username": "Kobe_W"
}
] | Extreme degradation in write performance during newly added node replication | 2023-07-28T10:39:07.944Z | Extreme degradation in write performance during newly added node replication | 473 |
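A small sketch related to the discussion above of catching up from a secondary rather than the primary: the sync source of the newly added member can be steered temporarily with rs.syncFrom (the hostname is a placeholder), and the current source can be checked in rs.status().

// run on the NEW member; the override is temporary and reverts on restart or fallback
rs.syncFrom("existing-secondary.example.net:27017");

// verify which member it is currently syncing from
rs.status().members.find((m) => m.self).syncSourceHost;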
[
"dot-net",
"android",
"kotlin"
] | [
{
"code": "subscribe()",
"text": "You can now migrate your backend app from partition-based sync to flexible sync in a backward-compatible way.The migration is automatic and aims to minimize disruption to end-users aside from a period of downtime. Learn more about the requirements and what to expect in the Migrate Device Sync Modes section of the Atlas App Services docs.The Realm engineering team is pleased to introduce full-text search (FTS) support for Realm - a feature requested by many developers. The feature allows quick and efficient search within large datasets. A detailed explanation with examples on FTS with .NET SDK is provided by our engineer Ferdinando Papale on the MongoDB Developer Blog.Available now on Maven Central and Gradle Plugin Portal. This release improves testing Android apps by adding support for JVM tests and Robolectric, introduces support for data ingest or “write-only” sync, and makes it easier to work with Flexible Sync subscriptions with a new subscribe() API. For more details, check out the Realm Team Blog .Kudos to our community member @Peter_Rauscher for helping fellow developers with a solution to add custom data when a new user signs-up. Steps to take:Thanks to our community members @Jason_Tulloch1, and @Adam_Holt for bringing up the discussion on 3rd party services and providing an alternative to handle AWS S3 requests.The MongoDB team is extending the timeline for deprecation of 3rd party services from Aug 1, 2023 to Nov 1, 2024 and banners on documentation pages will be updated to reflect the new date. Additional communications on recommended actions will be shared in the coming months.Please use our MongoDB Feedback Engine to provide your feedback on our offerings and upvote the features you hope to see.Henna Singh\nCommunity Manager\nMongoDB Community TeamRealm Community ForumKnow someone who might be interested in this newsletter? Share it with them. Subscribe here to receive a copy in your inbox.",
"username": "henna.s"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | Realm Kotlin, Full - Text Search, and Sync Modes | Community Update July 2023 | 2023-07-28T15:54:15.059Z | Realm Kotlin, Full - Text Search, and Sync Modes | Community Update July 2023 | 551 |
|
null | [] | [
{
"code": "import {BSON} from \"realm\";\n\nexport const Developer = {\n name: 'Developer',\n properties: {\n _id: {type: 'objectId', default: new BSON.ObjectId()},\n createdAt: {type: 'date', default: new Date()},\n founded: {type: 'date', default: new Date()},\n name: 'string?',\n website: 'string?',\n status: 'string?',\n image: 'string?',\n deleted: { type: 'bool', default: false},\n userId: 'string?',\n },\n primaryKey: '_id',\n};\nimport {BSON} from \"realm\";\n\nexport const System = {\n name: 'System',\n properties: {\n _id: {type: 'objectId', default: new BSON.ObjectId()},\n createdAt: {type: 'date', default: new Date()},\n releaseDate: {type: 'date', default: new Date()},\n name: 'string?',\n shortName: 'string?',\n generation: 'string?',\n website: 'string?',\n developer: 'Developer?',\n image: 'string?',\n userId: 'string?',\n },\n primaryKey: '_id',\n};\nrealm.write(() => {\n return realm.create('System', {\n name,\n releaseDate,\n shortName,\n generation,\n website,\n developer,\n image,\n userId: userId ?? 'SYNC_DISABLED',\n });\n });\n",
"text": "Hello! I am refering another Document in one of my collections and for some reason when i try to add 2 in a row it says \" Attempting to create an object of type ‘Developer’ with an existing primary key value ‘64c26d0bedcdf93ffacbcc1f’\"Shouldn’t this be automatically generated and unique? This is my Model.I thought this generates a new _id every single time. Also the weird thing is in my System Model, I am referencing a Developer and sometimes when making a new System it says I tried to create a new Developer with an existing primary key? here is my System Model.what am I doing wrong? thank you. This is my realm.create code. I shouldn’t need to pass in _id as it should be created and unique each time?",
"username": "Mike_Powell"
},
{
"code": "",
"text": "Hi @Mike_Powell,That snippet is generating an ObjectId at process start and using that value as a static default value. What you’re looking for is a dynamic default value, which is available starting with v11.1.0: https://www.mongodb.com/docs/realm/sdk/react-native/model-data/define-a-realm-object-model/#set-a-default-property-value",
"username": "Kiro_Morkos"
},
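The distinction Kiro describes, in code: pass a function rather than a value, so a fresh ObjectId is generated for every object. A sketch based on the Developer schema from the question:

```javascript
import { BSON } from "realm";

export const Developer = {
  name: "Developer",
  properties: {
    // Evaluated once at import time: every Developer gets the SAME id (the bug)
    // _id: { type: "objectId", default: new BSON.ObjectId() },

    // Evaluated per object: every Developer gets a fresh id
    _id: { type: "objectId", default: () => new BSON.ObjectId() },
    name: "string?",
  },
  primaryKey: "_id",
};
```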
{
"code": "import Realm, {BSON} from 'realm';\n\n// To use a class as a Realm object type in Typescript with the `@realm/babel-plugin` plugin,\n// simply define the properties on the class with the correct type and the plugin will convert\n// it to a Realm schema automatically.\nexport class Task extends Realm.Object {\n _id: BSON.ObjectId = new BSON.ObjectId();\n description!: string;\n note!: string;\n isComplete: boolean = false;\n createdAt: Date = new Date();\n userId!: string;\n\n static primaryKey = '_id';\n\n",
"text": "Thank you for the reply, I am using 11.10.1.This is the example code provided in the realmtest template. I had to change it when I made some destructive changes and for some reason it did not match up correctly. So did i break it? Will the below code generate a new ID each time? otherwise the code provided is wrong.",
"username": "Mike_Powell"
},
{
"code": "return realm.create('System', { \n developer: developer._id,\n });\n\nreturn realm.create('System', { \n developer: developer\n });\n\nreturn realm.create('System', { \n developer: {_id:developer._id}\n });\n",
"text": "SO it seems my real issue is that when I am creating a System, instead of using the developer I am passing in, it is attempting to create another developer with the same _id. Why? what is the correct way to link an existing developer?The dogs only show how to set up the schema and now how to link the actual code?Do I set the developer field to a string with the _id? do i just pass in the developer object?I have tried numerous things.none of these work correctly and all try to make a new developer.",
"username": "Mike_Powell"
},
{
"code": "developer: Developer & Realm.Object,\ndeveloper: Developer,\nexport const Developer = {\n name: 'Developer',\n properties: {\n _id: {type: 'objectId', default: () => new BSON.ObjectId()},\n createdAt: {type: 'date', default: new Date()}, \n },\n primaryKey: '_id',\n};\n",
"text": "ok I have solved both my issues, one of which seems to be a typescript thing.Seems I was passing in the type to the add function but I guess I need to tell it that it was also a Realm.Object. Not really but once I added that behaviour started to work as expected.The docs really need to be updated to show examples of setting the reference. Because passing in an object to create the reference is not really clear why the create function would try to create a new one instead of using the current one.Would love to see a non typescript version.instead ofas for the unique _id. once I added an arrow function it fixed the issue i think.\nThe example project needs to be checked to ensure it is correct, as I copied that.\nJust happy to have solved it, thank you.",
"username": "Mike_Powell"
},
{
"code": "createdAt",
"text": "Glad to hear you figured it out! Keep in mind you’ll want to use a factory function for your createdAt default value in that last snippet you shared as well.",
"username": "Kiro_Morkos"
},
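Applied to the Task example shared earlier in the thread, that reminder would look roughly like the following (a sketch in plain object-schema form rather than the class form used above):

```javascript
import { BSON } from "realm";

export const Task = {
  name: "Task",
  properties: {
    _id: { type: "objectId", default: () => new BSON.ObjectId() },
    // A factory here as well, so the timestamp reflects when each object is created,
    // not when the module was first loaded
    createdAt: { type: "date", default: () => new Date() },
    description: "string",
    isComplete: { type: "bool", default: false },
  },
  primaryKey: "_id",
};
```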
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to create a unique Primary Key | 2023-07-27T13:26:53.763Z | How to create a unique Primary Key | 817 |
null | [
"queries",
"data-modeling",
"java",
"indexes"
] | [
{
"code": " List<Document> offers = new ArrayList<>();\n offersCollection\n .find(filters)\n .projection(SEARCH_PROJECTION)\n .sort(sort)\n .iterator()\n .forEachRemaining(offers::add);\n return offers;\nfilters (ranges): And Filter{filters=[Filter{fieldName='gearbox', value=AT}, Filter{fieldName='registered', value=true}, Filter{fieldName='engineCapacityRange', value=1500_2000}, Operator Filter{fieldName='price', operator='$gte', value=40000}, Operator Filter{fieldName='price', operator='$lte', value=60000}, Operator Filter{fieldName='year', operator='$gte', value=2015}, Operator Filter{fieldName='make', operator='$in', value=[Volkswagen, Toyota]}, Operator Filter{fieldName='bodyType', operator='$in', value=[SUV, SEDAN, COMBI]}, Operator Filter{fieldName='fuelType', operator='$in', value=[PETROL, HYBRID]}]}\nfilters (one element ranges transformed into equality filters): And Filter{filters=[Filter{fieldName='make', value=Toyota}, Filter{fieldName='bodyType', value=SUV}, Filter{fieldName='fuelType', value=HYBRID}, Filter{fieldName='gearbox', value=AT}, Filter{fieldName='registered', value=true}, Filter{fieldName='engineCapacityRange', value=1500_2000}, Operator Filter{fieldName='price', operator='$gte', value=40000}, Operator Filter{fieldName='price', operator='$lte', value=100000}, Operator Filter{fieldName='year', operator='$gte', value=2015}]}\ndb.offers.createIndex({ make: 1, model: 1, generation: 1, bodyType: 1, fuelType: 1, price: -1, publishedDate: 1 })\ndb.offers.createIndex({ make: 1, model: 1, generation: 1, bodyType: 1, fuelType: 1, gearboxType: 1, registered: 1, damaged: 1, mileageRange: 1, engineCapacityRange: 1, price: -1, publishedDate: 1, year: 1 })\n",
"text": "Dear MongoDB Developer Community,I’d like to ask you for an advice on how to get the most (in terms of performance) out of the java sync driver for MongoDB. I’m not talking about having proper indexes for the DB itself - it’s already handled. My use-case is fairly simple: I’ve got a lot of Documents (c.a. 200k), I filter them and return as-is - no object mapping included, only projected JSON. My current approach is:Is iterator + ‘forEachRemaining’ the best approach here? I imagine something like “bulk read”, but couldn’t find a method for that.The 2nd question (it’s on the verge of data modeling and java driver) that I’d like to ask you is how to arrange my filters to get the most performance out of the DB itself? I know the ESR rule and I’ve created a model for SearchRequest that allows me to optimize the filters (i.e. if there is a Request with 1-element array, it’s modeled as EqualityFilter, InRangeFilter otherwise), which leads to filters like these (this is the BSON that’s passed to ‘find’ method above as ‘filters’):As you can see, ‘in range’ filters are moved to the last positions of ‘filters’ so the DB can use the indexes optimally. I’ve got c.a. 12 indexes, starting from:and ending with:and variations between them.\nI’m not sure if the way I arrange filters order in case of ‘in range filters’ is fine (e.g. make is the 1st field in index and works superb for ‘eq’ but is it fine that I move it to the end of the filters when it’s ‘in’? I think so, then it should be fine with ESR rule, but maybe I’m wrong and you’ve got some other ideas, or you see some improvements here?Currently, I’m getting end-to-end (req to resp in Postman) times under 1s for remote DB (M0 Atlas cloud) filled with c.a. 140k documents for probable query scenarios. What made me wonder though is the small difference between limiting to and returning 50 docs and getting all of the docs that fit the filters (c.a. 750 docs) → it’s c.a. 500 ms vs c.a. 800 ms.If you can see any improvement points in terms of performance here besides the questions that I stated, please share your thoughts with me!Best regards,\nPrzemek, Poland",
"username": "przemek"
},
{
"code": "iteratorMongoCursorCloseableList<Document> offers =\n offersCollection\n .find(filters)\n .projection(SEARCH_PROJECTION)\n .sort(sort)\n .into(new ArrayList<>());\nIterator#forEachRemaining",
"text": "Is iterator + ‘forEachRemaining’ the best approach here? I imagine something like “bulk read”, but couldn’t find a method for that.Performance-wise it’s fine, but note that the iterator method returns a MongoCursor, which implements Closeable, and your code doesn’t ensure that the cursor is closed in the face of an exception during iteration (unlikely to happen, but it’s good practice. So you could use try-with-resources to ensure that the cursor is closed. Alternatively, this is a simpler way of doing the same thing:Updated: Added https://jira.mongodb.org/browse/JAVA-5085 to address the cursor closing issue with Iterator#forEachRemaining.",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Thanks @Jeffrey_Yemin ! I’ve applied ‘into’ to my code.\nIs it possible to use JSON directly as an argument to some method to perform MongoDB query? I mean something similar to mongosh → db.coll.find(JSON) → but how this doc should look like to involve sorting and skip/limit?Are you able to share your knowledge in regards to my 2nd question? To sum up:Thanks!",
"username": "przemek"
},
{
"code": "org.bson.json.JsonObjectorg.bson.Document",
"text": "There’s a class called org.bson.json.JsonObject that is a bridge between the driver API and a JSON string. You can use it any place you would use an org.bson.Document.I don’t have quick answers for your other questions (It’s generally better to not ask a bunch of unrelated questions in a single forum question. Makes it easier for different people to help you and to make the answers easier to find for future readers).Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Sure @Jeffrey_Yemin , I’ll create a new thread for the 2nd question.\nIf I’m only interested in returning the found Docs, shall I return the collection of JsonObject instead of Document? Will it help with performance?\nOn the query side, I’ll probably stick to what I have (req → model → filters from driver), as these are only one object per req and gives me more flexibility.",
"username": "przemek"
},
{
"code": "",
"text": "All monodb member connect with me in unique app that is facebook with monodb account. Than send me message of any question. In separate message box of my Facebook ID. Thank you.",
"username": "Manoj_Singh_rawat"
},
{
"code": "",
"text": "If I’m only interested in returning the found Docs, shall I return the collection of JsonObject instead of Document? Will it help with performance?It really depends on your use case, in particular what you need to do with the returned documents.Have a look at https://www.mongodb.com/docs/drivers/java/sync/current/fundamentals/data-formats/documents/#documents for an overview of the options, and make the best decision for your application.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Great, thanks for the link. The docs are really comprahensive. From what I can see, as my use case is simply query for data and return it via controller without any modifications, JsonObject will help performance because conversion to Map won’t be performed.",
"username": "przemek"
}
] | Best performance for java sync for getting pure Documents | 2023-07-26T13:50:48.187Z | Best performance for java sync for getting pure Documents | 670 |
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "mongodump fail to dump correctly two collections that differ only by casing of their names.\nI use mongodump on Windows.\nIn my data base there are two collections: ‘ai_patients’ and ‘ai_Patients’.\nThey differ by the ‘p’ and ‘P’ in their names.\nvlanalyticsApp> db.ai_Patients.countDocuments()\n4481\nvlanalyticsApp> db.ai_patients.countDocuments()\n0After dumping the database only two files are created: ‘ai_Patients.bson’ and ‘ai_patients.metadata.json’\nAfter restoring the database only the ‘ai_Patients’ collection is created with no documents at all and the ‘ai_Patients’ is not created.It seems like a bug.Thanks,\nItzhak",
"username": "Itzhak_Kagan"
},
{
"code": "",
"text": " When dumping to a case-insensitive file system such as Windows or macOS, collections with names that differ only by capitalization will be overwritten. For case-insensitive file systems, always use the –archive option.",
"username": "chris"
},
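For illustration, that advice translates into something like the commands below; the URI is a placeholder, while the database name follows the prompt shown in the question.

```sh
# Dump to a single archive file instead of per-collection .bson/.metadata.json files
mongodump --uri="mongodb://localhost:27017" --db=vlanalyticsApp --archive=vlanalyticsApp.archive

# Restore from the same archive; case-only name differences are preserved
mongorestore --uri="mongodb://localhost:27017" --archive=vlanalyticsApp.archive
```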
{
"code": "",
"text": "Thanks Chris,\nIt helped a lot.",
"username": "Itzhak_Kagan"
}
] | Mongodump bad dumping | 2023-07-27T20:55:55.512Z | Mongodump bad dumping | 523 |
null | [
"node-js",
"transactions"
] | [
{
"code": "class BookService {\n\tprivate repo: BookRepository;\n\n\tconstructor() {\n\t\tthis.repo = new BookRepository();\n\t}\n\n\tcreateBook = async (param) => {\n\t\treturn await this.repo.createBook(param)\n }\n}\n\nclass BookRepository {\n\tcollection: Collection;\n\n\tconstructor() {\n\t\tthis.collection = db.collection(\"books\");\n\t}\n\n\tcreateBook = async (param) => {\n\t\tconst session = client.startSession();\n\t\tsession.startTransaction();\n\n\t\ttry {\n\t\t\tconst newDocument = (\n\t\t\t\tawait this.collection.insertOne(param, { session })\n\t\t\t).ops[0];\n\n\t\t\tawait session.commitTransaction();\n\t\t\treturn newDocument;\n\t\t} catch (e) {\n\t\t\tif (session.inTransaction()) await session.abortTransaction();\n\t\t\tthrow e;\n\t\t} finally {\n\t\t\tawait session.endSession();\n\t\t}\n\t};\n}\nCase 1:\nbookService().createBook()\n\nCase 2:\nPromise.all([\n bookService().createBook(),\n authorService().createAuthor()\t\n])\nclass BookRepository {\n\tcollection: Collection;\n\n\tconstructor() {\n\t\tthis.collection = db.collection(\"books\");\n\t}\n\n\tcreateBook = async (param, {session}) => {\n\t\tconst newDocument = (await this.collection.insertOne(param, { session })).ops[0];\n\t};\n}\n\n\nclass BookService {\n\tprivate repo: BookRepository;\n\n\tconstructor() {\n\t\tthis.repo = new BookRepository();\n\t}\n\n\tcreateBook = async (param, {session}) => {\n\t\treturn await this.repo.createBook(param, {session})\n }\n}\n\nconst session = client.startSession();\nsession.startTransaction();\nPromise.all([\n bookService().createBook(param, {session}),\n authorService().createAuthor(param, {session})\t\n])\n",
"text": "Hi, I know this is not really Mongo related problem, but I’ve implemented repository pattern in my NodeJS so far. It works like this:There is an “Author” service that’s also implemented the same.\nMy use cases are like thisWith case 1, the method benefits from mongodb session and transaction. But with case 2, I cant seem to find a clean way to pass session object down to both book and author collection. The obvious way is to create the object outside the two services, and pass it down the services and the repositories. Something like this:I think it’s not really scalable. Anyone has the same experience or a solution for this problem? Thank you for reading.",
"username": "Nam_Le"
},
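One possible way to reduce the boilerplate of the "create the session outside and pass it down" approach is the driver's session.withTransaction helper. This is only a sketch based on the service and repository names used in the question; it assumes `client` is a connected MongoClient and that the services accept a `{ session }` option:

```javascript
// Hypothetical orchestration layer sitting above the two services
async function createBookWithAuthor(bookParam, authorParam) {
  const session = client.startSession();
  try {
    // withTransaction starts, commits/aborts and retries the transaction for us,
    // so the repositories only need to forward the session to the driver calls
    return await session.withTransaction(async () => {
      const book = await bookService.createBook(bookParam, { session });
      const author = await authorService.createAuthor(authorParam, { session });
      return { book, author };
    });
  } finally {
    await session.endSession();
  }
}
```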
{
"code": "",
"text": "im facing the same scenario… did you manage to solve it ?",
"username": "Santiago_Villagomez"
},
{
"code": "",
"text": "Did anyone manage to solve this?",
"username": "LuciferX_N_A"
}
] | Way to pass mongo session in repository pattern | 2022-08-05T02:27:22.062Z | Way to pass mongo session in repository pattern | 3,296 |
null | [
"aggregation",
"queries",
"swift"
] | [
{
"code": "",
"text": "We have the brand and model information of the vehicle in the MongoDB collection.We have written the query for the facet. We can able to get the unique brands with the count. But We need to get the unique models with a count of each brand in each brand object.For ex:We can able to get the brand count in facet. But we can’t able to get the model count in each brand. Any solutions for this?$data =[ ‘brand’ => [ { ‘brand_1’ => [ brand_name => ‘Maruti’, brand_id =>1, count => 5 Models => [ { ‘model_name’ => ‘Swift’, ‘model_id’ => ‘1’, ‘count’ => 4, } ] }, { ‘brand_1’ => [ brand_name => ‘Maruti’, brand_id =>1, count => 5 Models => [ { ‘model_name’ => ‘Swift’, ‘model_id’ => ‘1’, ‘count’ => 4, } ] }] ] ]]",
"username": "subramanian_k1"
},
{
"code": "",
"text": "May be best to upload sample data and query to mongoplayground…Do you want the count of distinct models per brand or the count of documents with that model and brand?",
"username": "John_Sewell"
}
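Not from the thread, but one common pattern for this shape of result is a double $group: first count each brand/model pair, then regroup by brand while pushing the per-model counts into an array. The collection and field names below are assumptions based on the description.

```javascript
db.vehicles.aggregate([
  // Count documents per (brand, model) pair
  { $group: { _id: { brand: "$brand", model: "$model" }, count: { $sum: 1 } } },
  // Regroup by brand: total count plus an array of per-model counts
  {
    $group: {
      _id: "$_id.brand",
      count: { $sum: "$count" },
      Models: { $push: { model_name: "$_id.model", count: "$count" } }
    }
  }
]);
```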
] | MongoDB: Challenges in $facet query in MongoDb | 2023-07-28T12:28:45.875Z | MongoDB: Challenges in $facet query in MongoDb | 442 |
null | [
"queries",
"node-js",
"crud"
] | [
{
"code": "_id: ''\nappointments: [\n{\n _id: '',\n location: '',\n dateTimeStamp: [ 1689952800000, 1689954600000],\n},\n]\n const appointmentsExist = await Appointments.findOneAndUpdate({_id:user._id}, \n [\n {\n $set: {\n appointments:{\n $cond: {\n if: {\n $or:[\n { $lt: [\n { $ifNull: [ \"$appointments.dateTimeStamp.1\", 0 ] }, args.input.appointments[0].dateTimeStamp[0]\n ]\n }, \n { $gt: [\n { $ifNull: [ \"$appointments.dateTimeStamp.0\", 0 ] }, args.input.appointments[0].dateTimeStamp[1]\n ]\n },\n ],\n },\n then: {$concatArrays:[{$ifNull: [\"$appointments\", []]}, args.input.appointments]},\n else: {}\n }\n }\n }\n }\n ],{upsert: true})\nargs.input.appointments[{\n location: '',\n dateTimeStamp: [ 1689952900000, 1689954900000],\n}]\n\nargs.input.appointments$appointments.dateTimeStamp.1args.input.appointments[0].dateTimeStamp[0]$appointments.dateTimeStamp.1args.input.appointments[0].dateTimeStamp[0]$appointments.dateTimeStamp.0\"args.input.appointments[0].dateTimeStamp[1]$appointments.dateTimeStamp.0\"args.input.appointments[0].dateTimeStamp[1]dateTimeStamp",
"text": "Hi,\nI’m working on a project where I need to be able to conditionally add data if the start date is greater than the end dates saved in the database or if the end date is less than the start dates saved in the database.example of data:I’ve tried achieving this by:where args.input.appointments (example) is:The problem that I’m having is that the array of args.input.appointments is being pushed to the database whether $appointments.dateTimeStamp.1< args.input.appointments[0].dateTimeStamp[0] or $appointments.dateTimeStamp.1> args.input.appointments[0].dateTimeStamp[0], and the same is true for $appointments.dateTimeStamp.0\" > args.input.appointments[0].dateTimeStamp[1] or $appointments.dateTimeStamp.0\" < args.input.appointments[0].dateTimeStamp[1].Does anyone know why this is happening or what I’m doing wrong? I’ve even tried testing a simpler version in mongoplayground where dateTimeStamp is only a number value instead of an array, and I still am getting similar issues. I would really appreciate any help. Thank you!",
"username": "p_p1"
},
{
"code": "let newAppt = [\n {\n _id:'Zoo',\n dateTimeStamp:[1,5]\n }\n]\ndb.getCollection(\"Appointments\").update(\n{\n $or:[\n {'appointments.dateTimeStamp.0':{$gt:newAppt[0].dateTimeStamp[1]}},\n {'appointments.dateTimeStamp.1':{$lt:newAppt[0].dateTimeStamp[0]}},\n {'appointments.0':{$exists:false}},\n ]\n},\n[\n {\n $set:{\n appointments:{\n $concatArrays:[\n {\n $ifNull:[\n '$appointments',\n []\n ]\n },\n newAppt\n ]\n }\n }\n }\n]\n)\nlet newStart = 11\nlet newEnd = 16\n\ndb.getCollection(\"Appointments\").aggregate([\n{\n $unwind:\n {\n path: '$appointments',\n preserveNullAndEmptyArrays: true\n }\n},\n{\n $addFields:{\n itemStart:{$arrayElemAt:['$appointments.dateTimeStamp', 0]},\n itemEnd:{$arrayElemAt:['$appointments.dateTimeStamp', 1]},\n }\n},\n{\n $addFields:{\n comboField:{\n $cond:{\n if:{\n $or:[\n {$eq:['$itemStart', null]},\n {$gt:[newStart, {$ifNull:['$itemEnd', 9999999]}]},\n {$lt:[newEnd, {$ifNull:['$itemStart', -9999999]}]},\n ]\n },\n then:true,\n else:false\n } \n }\n }\n},\n{\n $group:{\n _id:'$_id',\n checks:{$push:'$comboField'},\n appointments:{$push:'$appointments'}\n }\n},\n{\n $match:{\n 'checks':{$ne:false}\n }\n},\n{\n $project:{\n 'checks':0\n }\n},\n{\n $project:{\n 'appointments':{\n $concatArrays:[\n '$appointments',\n [\n {\n location:'London',\n dateTimeStamp: [ 11, 16] \n } \n ]\n ]\n }\n }\n},\n{\n $merge:{\n into:'Appointments',\n on:'_id',\n whenMatched:'merge'\n }\n}\n])\n",
"text": "I had a play, getting the matching to work BUT with the below it’ll add it twice (if you run it twice), as the condition finds a document that has an array element that is not overlapping with the new entry. After we add it, we still get a match as there exists a non-overlapping appt (in addition to the overlapping).You need to check that NONE of the events in the array overlap the new item as far I can tell from your post.You could use an aggregate merge back, where you unwind the appointments, check each one, re-group and then check for any grouped items that have no matches (or have an empty appointment schedule).Perhaps someone can respond with a more elegant solution?This was my second try to work out which items could be updated:I added in the $ifNulls to cope with the condition where the values were not set, I also pulled the array elements into new fields to make my life easier when debugging…So the stages are:Unwind\nCheck each current appointment to see if it’s not overlapping the new item to insert\nGroup the results back up to the ID\nFilter out anything that has an overlap and so cannot insert the new item\nRemove calculated fields\nCreate a new appointments array from the old one concat with the new data\nMerge back into collection, specifying a merge of the objectsI did this with a collection of multiple documents but if you know the single document you could just filter that one.I imagine you could also do this with a $reduce but this was the rabbit hole that I went down…I’ll bet someone comes along with a one-line update now…Mongo playground: a simple sandbox to test and share MongoDB queries online",
"username": "John_Sewell"
}
] | Can't get $lt and $gt to work properly within $set | 2023-07-28T00:44:13.533Z | Can’t get $lt and $gt to work properly within $set | 421 |
[
"queries",
"node-js"
] | [
{
"code": "const getDbProjectManagerUsers = async () => {\n try {\n const { MONGODB_PROJECT_PUBLIC_KEY, MONGODB_PROJECT_PRIVATE_KEY, MONGODB_PROJECT_ID } = process.env;\n const url = `https://cloud.mongodb.com/api/atlas/v2/groups/${MONGODB_PROJECT_ID}/databaseUsers`;\n const url2 = `https://cloud.mongodb.com/api/atlas/v2/groups/{groupId}/databaseUsers`\n const auth = {\n username: MONGODB_PROJECT_PUBLIC_KEY,\n password: MONGODB_PROJECT_PRIVATE_KEY,\n };\n console.log(\"auth: \", auth)\n console.log('url: ', url)\n const response = await axios.get(url, {\n auth,\n headers: {\n Accept: 'application/vnd.atlas.2023-02-01+json'\n }\n })\n\n if (response.status !== 200) {\n throw new Error('Error getting project manager users from the db')\n }\n\n console.log('response.data: ', response.data)\n } catch (error) {\n console.error('An error has occurred getting the project managers users from the db: ', error);\n }\n}\n",
"text": "I’m trying to get all the Project Access Manager Users (the users in the screen-shot) of the project I’m working on for a client. I’m doing it programmatically by making a get request with the generated public key and private key as the username and password for the auth header of the request.Here’s my code:The response that I’m getting is a 401 with the following error message: ‘You are not authorized for this resource.’I’ve checked the logs; I’m not getting undefined for any of my environment variables.This doesn’t make sense to me because the public-private key pair that was generated has a ‘Project Read Only’ role, which is necessary to send the request to the API, as mentioned in the docs linked here: MongoDB Atlas Administration API.What am I missing? Thanks for any responses.\nmongodb-project1891×792 49.3 KB\n",
"username": "Gabriel_Torion"
},
{
"code": "",
"text": "Hi @Gabriel_TorionAll api requests require digest authentication.The endpoint that matches your screenshot is https://cloud.mongodb.com/api/atlas/v2/groups/{groupId}/usersHope this helps!",
"username": "chris"
},
{
"code": "const getDbProjectManagerUsers = async () => {\n try {\n const { GABES_DB_PROJECT_ID, GABES_DB_PASSWORD, GABES_DB_USERNAME } = process.env;\n const url = `https://cloud.mongodb.com/api/atlas/v2/groups/${GABES_DB_PROJECT_ID}/users`\n const options = {\n method: \"GET\",\n rejectUnauthorized: false,\n digestAuth: `${GABES_DB_USERNAME}:${GABES_DB_PASSWORD}`,\n };\n const response = await urllib.request(url, options)\n\n console.log('response ', response)\n console.log('response statusText: ', response.status)\n } catch (error) {\n console.error('An error has occurred getting the project managers users from the db: ', error);\n }\n}\n",
"text": "Hi Chris.Thanks for the reply.Still getting an error. I’m getting a 406: “Not acceptable.”Here’s my refactored code:",
"username": "Gabriel_Torion"
},
{
"code": " Accept: 'application/vnd.atlas.2023-02-01+json'\n",
"text": "Looks like you threw away your Accept header:You’ll want to keep that!",
"username": "chris"
},
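Putting the two corrections from this thread together (digest authentication plus the versioned Accept header), roughly, with the urllib package used in the refactored snippet; projectId, publicKey and privateKey are placeholders:

```javascript
const urllib = require("urllib");

const url = `https://cloud.mongodb.com/api/atlas/v2/groups/${projectId}/users`;
const response = await urllib.request(url, {
  method: "GET",
  // Atlas Administration API keys are used with HTTP digest auth
  digestAuth: `${publicKey}:${privateKey}`,
  // Versioned media type; omitting it can produce a 406 Not Acceptable
  headers: { Accept: "application/vnd.atlas.2023-02-01+json" },
});
```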
{
"code": "",
"text": "Yea, sorry, I forgot about that. I added that in, and now I’m getting a 403 “Forbidden response” error.To provide some context, I am the project owner for the example, and I’ve also added my IP address to the “Edit Access List” for the API key.",
"username": "Gabriel_Torion"
},
{
"code": "",
"text": "Check the response body for the full reason for the 403.",
"username": "chris"
}
] | Trying To Get Project Access Manager Users Programmatically via Atlas Administration API, Getting 401 Response | 2023-07-27T16:56:12.288Z | Trying To Get Project Access Manager Users Programmatically via Atlas Administration API, Getting 401 Response | 506 |
|
null | [
"atlas-functions",
"graphql"
] | [
{
"code": "",
"text": "Hello… newbie here and super excited about MongoDB, a whole world of opportunities opened up… I hope you guys can help me out as I am struggling to do the following.With GraphQL API and the Function (Fucntion Editor) did I understand how to call data from 3rd party API to add to my endpoint. The endpoint already has data from my MongoDB collection. What I aim to achieve is to have the data from the 3rd party API to be added to my collection so I can define “relationships” and do other things with the 3rd party API data in my MongoDB. How would I go about doing that?Thanks a lot for your help.",
"username": "borabora"
},
{
"code": "",
"text": "To add data from a third-party API to your MongoDB collection using a GraphQL API and Function Editor, you can follow these general steps:Set up your GraphQL API: Ensure that you have a GraphQL API in place, which serves as the endpoint for your application.Create a Function in the Function Editor: In your GraphQL API provider’s Function Editor or serverless function environment, create a new function. This function will be responsible for fetching data from the third-party API and adding it to your MongoDB collection.Integrate the third-party API: In your function, use appropriate HTTP libraries or tools to make a request to the third-party API and retrieve the desired data. You may need to provide any required authentication tokens or API keys to access the data.Parse and transform the data: Once you receive the data from the third-party API, parse and transform it as needed to match the structure and format of your MongoDB collection. This step ensures that the data can be properly stored in your existing collection.Connect to your MongoDB: Establish a connection to your MongoDB database from within your function. You’ll need to provide the necessary credentials and connection details to access your collection.Add the data to your collection: Using the MongoDB connection, insert or update the data retrieved from the third-party API into your MongoDB collection. You can use appropriate MongoDB driver libraries or tools to perform the database operations.Handle relationships and other actions: Once the data is added to your MongoDB collection, you can define relationships between this data and your existing data. Depending on your schema design and requirements, you might update existing documents, create new documents, or link related data through references or embedding.Test and deploy: Test your function locally to ensure it functions correctly. Once you are satisfied, deploy your function to your serverless environment so that it can be invoked by your GraphQL API whenever required.It’s worth noting that the specific implementation details may vary depending on your chosen GraphQL API provider, programming language, and libraries/tools you are using. Refer to the documentation of your GraphQL API provider and the tools you’re working with for more specific instructions on setting up functions and integrating with MongoDB.",
"username": "Ronald_Higgins"
}
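A rough Atlas Function sketch of steps 3 to 6 above. The URL, database and collection names are placeholders, and "mongodb-atlas" is the default name of the linked cluster in App Services (adjust it if yours differs):

```javascript
exports = async function () {
  // 3. Call the third-party API (placeholder URL)
  const response = await context.http.get({ url: "https://api.example.com/items" });

  // 4. Parse/transform the payload as needed for your collection
  const items = JSON.parse(response.body.text());

  // 5. Get a handle on the linked cluster and target collection
  const collection = context.services
    .get("mongodb-atlas")
    .db("myDatabase")
    .collection("items");

  // 6. Insert the documents so they can be related to your existing data
  return collection.insertMany(items);
};
```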
] | GraphQL API > Function (add data from 3rd party API to collection) | 2023-01-23T10:13:32.262Z | GraphQL API > Function (add data from 3rd party API to collection) | 1,433 |
null | [
"java",
"kotlin"
] | [
{
"code": "var lcalDateTime: LocalDateTime = LocalDateTime.now()\n @PersistedName(\"dayScheduled\")\n private var _dayScheduled: RealmInstant = RealmInstant.now()\n var dayScheduled: LocalDate\n get() = LocalDate.from(_dayScheduled.toInstant().atZone(ZoneOffset.UTC))\n set(value) {\n _dayScheduled = value.atStartOfDay().toRealmInstant()\n }\n",
"text": "Is there a way to define for example:\nvar lcalDateTime: LocalDateTime = LocalDateTime.now()\nin RealmObject in Kotlin SDK and add support for serialization of this class into Realm?\nI see that new Kotlin SDK versions support KSerializer but thats probably not meant for this right?At this moment, I am doing it like so:This solution works but I am seraching for something with less boilerplate.",
"username": "David_Bubenik"
},
{
"code": "",
"text": "The Kotlin SDK is multiplatform and due to the lack of common date/time implementations we don’t have support for standard Java time entities.Your approach is more or less following our advised workaround which can be found in Realm does not support properties of this type (Date) · Issue #1378 · realm/realm-kotlin · GitHub. You should however remember to @Ignore the transient attribute.The above example further shows how to wrap these kind of custom wrappers into a delegate that you could reuse for multiple properties.",
"username": "Claus_Rorbech"
},
{
"code": "class LocalDateAdapter(private val property: KMutableProperty0<RealmInstant?>) {\n operator fun getValue(thisRef: Any?, property: KProperty<*>) =\n this.property.get()?.let { LocalDate.from(it.toInstant().atZone(ZoneOffset.UTC)) }\n\n operator fun setValue(thisRef: Any?, property: KProperty<*>, value: LocalDate?) =\n this.property.set(value?.atStartOfDay()?.toRealmInstant())\n}\n\n@PersistedName(\"dayScheduled\")\n private var _dayScheduled: RealmInstant = RealmInstant.now()\n @Ignore\n var dayScheduled: LocalDate by LocalDateAdapter(this::_dayScheduled) <-- Does not work\n",
"text": "Thanks for info. I am littlebit struggling to make the delegate work for both nullable and non-nullable fields:Any idea how to fix it?",
"username": "David_Bubenik"
}
] | Is possible to save custom datatypes like Java's LocalDateTime? | 2023-07-28T09:32:54.145Z | Is possible to save custom datatypes like Java’s LocalDateTime? | 585 |
null | [
"queries",
"node-js",
"mongoose-odm",
"serverless"
] | [
{
"code": "{\n brand: 1,\n facture: 1,\n results.content.algo1.version: 1,\n results.content.algo1.bestOf.segments.cluster: 1\n _id: 1,\n}\nfind({\n brand: \"MY_BRAND\",\n facture: \"MY_FACTURE\",\n 'results.content.algo1.version': MY_VERSION,\n 'results.content.algo1.bestOf.segments.cluster': MY_CLUSTER\n})\nprojection({\n 'platforms.platform1.devices': 1,\n 'platforms.platform1.devices': 1,\n})\n.sort({_id:1})\n.limit(1000)\n",
"text": "Hi,\nHere is the summary of our case :note that: we add _id to index just for trying _id pagination to get data.So here is the problem:We may missing something about dealing with huge data on MongoDB.\nSo, do you think the gap between query and response time (300ms → 3seconds) is normal because of the size of returning data?\nAnd do you have any other suggestion for handling this situation ?\nThank you.",
"username": "hazal"
},
{
"code": "db.collection.find(...).explain()$match$project$limitfind()",
"text": "Hey @hazal,Welcome to the MongoDB Community!When we try to querying data from GCF, it takes 3-4 seconds to response for a single query(limit 1000)You mentioned the query takes 3-4 seconds even when returning a smaller dataset. To clarify - if you reduce the limit to say 100 records, do you still see the query taking 3-4 seconds consistently? Or does it scale down with the smaller result set size?To eliminate network latency as a possible factor, could you try running the query from your local machine using mongosh connected directly to Atlas? How does the performance compare when running from your laptop?Could you share the full output of db.collection.find(...).explain()? That may provide some insights into what operations are taking time during the query execution.As one of the performance tests - could you try structuring the same query as an aggregation pipeline with $match, $project, and $limit stages? Does running it as an aggregation show any difference in timing compared to the find() query?Also, could you share a sample document from your collection on which you are working?These additional troubleshooting steps and comparisons will provide us with further clues into what may be causing the slower performance. The objective is to narrow down where the slowness is occurring - in the network, query execution, or other factors.Look forward to hearing from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'myDb.combinedUsers',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n {\n 'results.content.algo1.version': {\n '$eq': '2023-06-01-0-12'\n }\n },\n {\n 'results.content.algo1.bestOf.segments.cluster': {\n '$eq': 1\n }\n },\n {\n brand: {\n '$eq': 'mybrand'\n }\n },\n {\n facture: {\n '$eq': 'myfacture'\n }\n }\n ]\n },\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'LIMIT',\n limitAmount: 1000,\n inputStage: {\n stage: 'PROJECTION_DEFAULT',\n transformBy: {\n 'platforms.platform1.devices': 1,\n 'platforms.platform2.devices': 1\n },\n inputStage: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: {\n brand: 1,\n facture: 1,\n 'results.content.algo1.version': 1,\n 'results.content.algo1.bestOf.segments.cluster': 1,\n _id: 1\n },\n indexName: 'brand_1_facture_1_results.content.algo1.version_1_results.content.algo1.bestOf.segments.cluster__id_1',\n isMultiKey: true,\n multiKeyPaths: {\n brand: [],\n facture: [],\n 'results.content.algo1.version': [],\n 'results.content.algo1.bestOf.segments.cluster': [\n 'results.content.algo1.bestOf.segments'\n ],\n _id: []\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n brand: [\n '[\"mybrand\", \"mybrand\"]'\n ],\n facture: [\n '[\"myfacture\", \"myfacture\"]'\n ],\n 'results.content.algo1.version': [\n '[\"2023-06-01-0-12\", \"2023-06-01-0-12\"]'\n ],\n 'results.content.algo1.bestOf.segments.cluster': [\n '[1, 1]'\n ],\n _id: [\n '[MinKey, MaxKey]'\n ]\n }\n }\n }\n }\n },\n rejectedPlans: [\n ...\n ]\n },\n command: {\n find: 'combinedUsers',\n filter: {\n brand: 'mybrand',\n facture: 'myfacture',\n 'results.content.algo1.version': '2023-06-01-0-12',\n 'results.content.algo1.bestOf.segments.cluster': 1\n },\n sort: {\n _id: 1\n },\n projection: {\n 'platforms.platform1.devices': 1,\n 'platforms.platform2.devices': 1\n },\n limit: 1000,\n '$db': 'myDb'\n },\n serverInfo: {\n ...\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1689950028, i: 139 }),\n signature: {\n ...\n }\n },\n operationTime: Timestamp({ t: 1689950028, i: 139 })\n}\n",
"text": "KushagraHi Kushagra,\nthanks for your reply!yes we tried agregation pipeline too. There was no difference between find.And additionally this is our monitoring of service network while we tried to get this data (total 120K matched records)\n** first big pick is while we tried get data with limit 1000,\n** the 2nd and 3rd ones belongs to tries with limit 100\nimage719×358 70.6 KB\nSo,\ni think the problem is not performance of MongoDB, it is about network.\nMaybe we need edit something about network preferences, idk. Is there any setting for max network out bytes or something else on Mongodb Atlas?Best regards",
"username": "hazal"
},
{
"code": "",
"text": "Hey,\nWe found the solution for our case. Maybe it will help someone else.It looks like, it is about Google Cloud Function’s sources in our case. Altough metric graphics were good, when we increase the memory of GCF to 4GB we got so much better results. (3s ->. 400-500 ms). More memory is good but it directly effects the CPU source too in Google Cloud Functions. (Limiti di memoria | Documentazione di Cloud Functions | Google Cloud)We were trying so much complex things, we suprised about this basic solution ! But keep in mind:\nThis is not a solution by itself.you can also try:Best Regards,\nHazal",
"username": "hazal"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Slow queries between mongoose and mongoDB Atlas on Google Cloud Function | 2023-07-20T13:22:14.315Z | Slow queries between mongoose and mongoDB Atlas on Google Cloud Function | 995 |
null | [
"aggregation",
"indexes",
"performance"
] | [
{
"code": "[\n {\n $graphLookup:\n {\n from: \"FlatSite\",\n startWith: \"$ParentRowId\",\n connectFromField: \"ParentRowId\",\n connectToField: \"RowId\",\n maxDepth: 10,\n as: \"Parents\",\n depthField: \"level\",\n },\n },\n {\n $match:\n {\n LevelId: 2,\n },\n },\n {\n $project:\n {\n _id: 1,\n Columns: 1,\n Value: 1,\n RowId: 1,\n ParentColumns: {\n $reduce: {\n input: \"$Parents.Columns\",\n initialValue: [],\n in: {\n $concatArrays: [\n \"$$value\",\n \"$$this\",\n ],\n },\n },\n },\n },\n },\n {\n $project:\n {\n _id: 1,\n Value: 1,\n RowId: 1,\n Columns: {\n $concatArrays: [\n \"$Columns\",\n \"$ParentColumns\",\n ],\n },\n },\n },\n {\n $project:\n {\n _id: 1,\n Value: 1,\n RowId: 1,\n Columns: {\n $filter: {\n input: \"$Columns\",\n as: \"column\",\n cond: {\n $in: [\n \"$$column.ColumnId\",\n [\n ObjectId(\n \"60707b306d3a5d6157bfe469\"\n ),\n ObjectId(\n \"60707b336d3a5d6157bfe47c\"\n ),\n ObjectId(\n \"64ad9acfff44fd298888d34e\"\n ),\n ],\n ],\n },\n },\n },\n },\n },\n {\n $match:\n {\n Columns: {\n $all: [\n {\n $elemMatch: {\n ColumnId: ObjectId(\n \"60707b306d3a5d6157bfe469\"\n ),\n $and: [\n {\n Value: /^E-211$/i,\n },\n ],\n },\n },\n\n ],\n },\n },\n },\n {\n $project:\n {\n _id: 1,\n Columns: 1,\n Value: 1,\n RowId: 1,\n SortColumn1: {\n $arrayElemAt: [\n {\n $filter: {\n input: \"$Columns\",\n as: \"c\",\n cond: {\n $eq: [\n \"$$c.ColumnId\",\n ObjectId(\n \"60707b306d3a5d6157bfe469\"\n ),\n ],\n },\n },\n },\n 0,\n ],\n },\n SortColumn2: {\n $arrayElemAt: [\n {\n $filter: {\n input: \"$Columns\",\n as: \"c\",\n cond: {\n $eq: [\n \"$$c.ColumnId\",\n ObjectId(\n \"64ad9acfff44fd298888d34e\"\n ),\n ],\n },\n },\n },\n 0,\n ],\n },\n },\n },\n {\n $setWindowFields:\n {\n output: {\n TotalRecords: {\n $count: {},\n },\n },\n },\n },\n \n {\n $sort:\n /**\n * Provide any number of field/order pairs.\n */\n {\n \"SortColumn1.Value\": -1,\n \"SortColumn2.Value\": -1,\n },\n },\n {\n $skip:\n /**\n * Provide the number of documents to skip.\n */\n 0,\n },\n {\n $limit:\n /**\n * Provide the number of documents to limit.\n */\n 100,\n },\n {\n $project:\n /**\n * specifications: The fields to\n * include or exclude.\n */\n {\n _id: 0,\n SortColumn1: 0,\n SortColumn2: 0,\n },\n },\n]\n",
"text": "We have a fairly complicated aggregation query used for gluing some data together and providing it in a server-side paging application. What is the best way to determine what the indexes should be for this kind of complex query? I thought that Mongodb would “recommend indexes” like our SQL Server does. I have seen that in the past, but we have not received any guidance for this collection. It’s fairly new - is there some way to accelerate the recommendations?Query below as an example, but I don’t expect you to be able to understand it without some explanation of our crazy data ",
"username": "Deanna_Delapasse1"
},
{
"code": "$match$skip$limit$project$projectexplain('executionStats')explain('executionStats')explain()",
"text": "Hey @Deanna_Delapasse1,What is the best way to determine what the indexes should be for this kind of complex query?The “Best” is a relative term here since it relies on various factors but I have some thoughts on general approaches that might help you in optimizing your pipeline:Consider indexing fields in the early pipeline stages, as it can really boost performance by filtering/sorting data before further processing. For your case, you can optimize index usage by using $match early on and avoiding $skip and $limit. However, these are just general approaches, and may not necessarily apply to your specific use case. For more information on this topic and how the server optimizes certain situations, refer to Aggregation Pipeline Optimization.One thing - I noticed that you are using $project multiple times. Try to refactor your pipeline and see if you can make it more efficient by reducing the number of $project stages.Use the explain('executionStats') method: You can run your aggregation with the explain('executionStats') method to identify which stages are taking the most time.If you prefer to use a GUI, you can also use MongoDB Compass query plan view to see the explain() output.You can also consider using compound indexes for multiple fields if they cover multiple filtered/sorted fields from the pipeline stages. Combining fields in one index can further optimize performance, especially when querying on multiple criteria.Please note that generally, MongoDB only uses one index to fulfill most queries. To read more, please refer to the Indexes Strategies - documentation.Query below as an example, but I don’t expect you to be able to understand it without some explanation of our crazy data May I ask what goals you are trying to accomplish with this aggregation pipeline? Is this something you do regularly, or if you’re open to schema changes that could simplify queries?In case you need further help, please feel free to share the sample documents, and sample output you are expecting. This will help us assist you better.Let us know if any of this helps or if you need further assistance!Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
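Two small starting points for the suggestions above, assuming the posted pipeline runs against the FlatSite collection it $graphLookups into; `pipeline` is a placeholder for the aggregation array, and the index shown is only an illustration, not a recommendation for this schema:

```javascript
// See which stages and index (if any) the winning plan uses, and where time goes
db.FlatSite.explain("executionStats").aggregate(pipeline);

// $graphLookup matches on its connectToField ("RowId" in the posted pipeline),
// so an index on that field is usually the first one worth trying
db.FlatSite.createIndex({ RowId: 1 });
```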
] | Best way to determine which indexes are needed for aggregation? | 2023-07-18T17:42:18.187Z | Best way to determine which indexes are needed for aggregation? | 622 |
null | [
"aggregation",
"crud"
] | [
{
"code": "{ foo: [\n { _id: 0, meta: { embed: { a: 1, b: 2 } } }\n]}\ndb.foo.updateMany({_id: 0}, {$set: { _id: 0, 'meta.embed': { b: 3 } } })\n{ _id: 0, meta: { embed: { b: 3 } } }\ndb.foo.updateMany({_id: 0}, [{$set: { _id: 0, 'meta.embed': { b: 3 } } }])\n{ _id: 0, meta: { embed: { a: 1, b: 3 } } }\n",
"text": "let’s say my database is likewhen i dothe result isHowever, if i use aggregation pipelinethe result isSo what’s the difference here?\nI have to use aggregation pipeline because the real case contains other operators, so how can i use aggregation pipeline to achive the same result?Environment: Windows 10, MongoDB 6.0.6",
"username": "Hieuzest"
},
{
"code": "db.Test.updateMany(\n {_id: 1}, \n [\n {\n $unset:'meta.embed'\n },\n {\n $set: {'meta.embed': { b: 3 } } \n }\n ]\n)\ndb.Test.updateMany(\n {_id: 1}, \n [\n {\n $unset:'meta.embed'\n },\n {\n $set: {'meta.embed.b':3} \n }\n ]\n)\n",
"text": "I had written a post about working with arrays but that’s not the case here, but looking (again) through the documentation it seems that within an aggregation pipeline $set is an alias for $addFields, in this case I guess it makes sense that it does not replace the existing field but adds a new element to the location specified.\nIn non-aggregation mode, it’s running so that it’ll replace what’s there.If you wanted to do the same in the aggregation you need to add a new stage to the pipeline as far as I can tell to do this:You could also simply (subjectively) it to:",
"username": "John_Sewell"
},
{
"code": " collection.updateMany(query, [\n { $set: preset }, // Copy all involved fields of inner operators to a temporary field by walking through $set \n { $unset }, // Remove all objectlike fields of $set\n { $set }, // Actual operation\n { $unset: [tempKey] }, // Remove the temporary field in first stage\n ])\n",
"text": "I managed to work around this by writing a 4 stage pipeline to fully emulate the behavior. The pseudo is likeThis was painful but works. Though i hope if we could do it better.",
"username": "Hieuzest"
}
] | Why $set in aggregation pipeline does not overwrite embedded object | 2023-07-27T08:28:12.468Z | Why $set in aggregation pipeline does not overwrite embedded object | 467 |
null | [
"swift"
] | [
{
"code": "",
"text": "We are sharing the realm db between different app groups and extensions. What could be better practices to follow during migration which involves app groups + extensions.",
"username": "Shreesha_Kedlaya"
},
{
"code": "",
"text": "@Shreesha_Kedlaya Welcome to the forums!It’s important that questions are clear and contain enough information so we understand what’s being asked.When you say ‘sharing a realm’, in what context are you sharing? Is this a sync’d realm with multiple users or some other implementation? Sync’d realms don’t have a migration so that’s a bit unclear.Can you tell us what ‘app groups’ and ‘extensions’ mean in this use case?And when you ask about ‘better practices’, better than what? Are you encountering an issue? If so, what is it?Provide more info and we’ll take a look.",
"username": "Jay"
},
{
"code": "",
"text": "@Jay There is an issue we are encountering during migration in iOS. The realm is not able to migrate after schema update. We could reproduce if we share realm db with app groups + extensions (widget extensions). We wanted to know if we are following correct practices while migrating realm in this scenario. We have already posted an issue in here . You can go through this for more info.",
"username": "Shreesha_Kedlaya"
},
{
"code": "",
"text": "just use filemanager to move the realm files. think of it like sqlite.",
"username": "Alex_Ehlke"
}
] | What is the best practice to migrate realm shared with app groups + extensions? | 2023-01-20T13:29:20.958Z | What is the best practice to migrate realm shared with app groups + extensions? | 1,294 |
null | [] | [
{
"code": "",
"text": "The below attached is my output for the sh.status():shards\n[\n{\n_id: ‘replicaset1’,\nhost: ‘replicaset1/:27018,:27018’,\nstate: 1,\ntopologyTime: Timestamp({ t: 1687678667, i: 1 })\n},\n{\n_id: ‘replicaset2’,\nhost: ‘replicaset2/:27018,:27018’,\nstate: 1,\ntopologyTime: Timestamp({ t: 1687678683, i: 1 }),\ndraining: true\n}\n]",
"username": "Aryan_Semwal"
},
{
"code": "",
"text": "Did you run removeshard command?\nWas it successful?\nWhat does the status show?",
"username": "Ramachandra_Tummala"
},
{
"code": "draining: true",
"text": "yes I did run the removeShard command and for the shard I did that was in draining: true state. It stayed there for a few days even though the data inside it was very small.And the cluster was not working fine the very whole time, it showed an error that one of my shard replicaset does not have a preferedPrimary as it read preference when I tried running simple commands such as show dbs and few other operations.",
"username": "Aryan_Semwal"
},
{
"code": "",
"text": "For small data idraining should not take that long\nDid you check if any db exists on the shard you are dropping\nYou have to move it to primary shard and issue removeShard\nWhat readpreference you used?\nIs it set from connect string?",
"username": "Ramachandra_Tummala"
}
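A mongosh sketch of the check-and-move step described above, run against mongos. The shard names follow the sh.status() output earlier in the thread; "someDb" is a placeholder for whatever database turns up in the first query:

```javascript
// Find databases whose primary shard is the one being drained
db.getSiblingDB("config").databases.find({ primary: "replicaset2" });

// Move each such database's primary to a remaining shard...
db.adminCommand({ movePrimary: "someDb", to: "replicaset1" });

// ...then re-issue removeShard and watch its remaining counts until it completes
db.adminCommand({ removeShard: "replicaset2" });
```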
] | Data not getting distributed across other shards of my mongodb cluster | 2023-07-26T11:30:46.215Z | Data not getting distributed across other shards of my mongodb cluster | 484 |
null | [
"node-js",
"mongoose-odm",
"connecting",
"containers"
] | [
{
"code": "version: \"3.4\"\n\nx-common-variables: &common-variables\n MONGO_URI: mongodb://root:${MONGO_PASSWORD}@mongodb:27017/\n\n HCAPTCHA_SITEKEY: ${HCAPTCHA_SITEKEY}\n HCAPTCHA_SECRET: ${HCAPTCHA_SECRET}\n ALLOWED_ORIGIN: https://skizzium.com\n TOKEN_KEY: ${TOKEN_KEY}\nMongooseServerSelectionError: getaddrinfo EAI_AGAIN fgfe6k2y\n at Connection.openUri (/app/node_modules/mongoose/lib/connection.js:825:32)\n at Mongoose.createConnection (/app/node_modules/mongoose/lib/index.js:356:10)\n at Object.<anonymous> (/app/node_modules/@skizzium-api/common/config/database.js:6:28)\n at Module._compile (node:internal/modules/cjs/loader:1256:14)\n at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)\n at Module.load (node:internal/modules/cjs/loader:1119:32)\n at Module._load (node:internal/modules/cjs/loader:960:12)\n at Module.require (node:internal/modules/cjs/loader:1143:19)\n at require (node:internal/modules/cjs/helpers:110:18)",
"text": "Hi! So, I’ve built a microservice-based application in express.js using mongoose. I want to deploy that to my VPS using docker compose, however I’m having connection issues. I’ve linked a GitHub gist with the error and my docker-compose.yml.\nI can provide more context if needed.\n",
"username": "Radostin_Stoyanov"
},
{
"code": "",
"text": "Check this link.It may help",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Post says the container is likely not running, however that’s not my case. For me, I see it’s running and if I check the logs there are no traces of attempted connections from the microservices. I can also connect to it using Compass (probably should’ve mentioned that).",
"username": "Radostin_Stoyanov"
},
{
"code": "",
"text": "Alright so the issue was that my auto-generated password contained @ and wasn’t escaped, which made mongoose think it’s the DB host. I solved it by changing my password to one that doesn’t contain special characters.",
"username": "Radostin_Stoyanov"
},
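An alternative to changing the password is to percent-encode it when building the URI, so characters like @ can stay. A sketch based on the compose variables shown above:

```javascript
const mongoose = require("mongoose");

const password = process.env.MONGO_PASSWORD;
// Reserved characters (@, :, /, %, ...) in credentials must be percent-encoded
const uri = `mongodb://root:${encodeURIComponent(password)}@mongodb:27017/`;

await mongoose.connect(uri);
```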
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Connection issues with MongoDB in Docker (with compose) | 2023-07-26T16:36:09.590Z | Connection issues with MongoDB in Docker (with compose) | 712 |
[
"atlas-data-lake",
"atlas-online-archive"
] | [
{
"code": "",
"text": "Newbie here to the online archive with Atlas… hoping I can get a few simple answers here to unblock me.\nI have a collection (TAP.raw) with an online archive setup. See screenshot below.\n\nimage1061×307 12.5 KB\nI know that archive is working (archiving data older than 15 days) because I can browse it using a data-federation connection string using Studio3T.But how do I find the archived documents using the web interface for MongoDB Atlas? It would seem that the default for browsing collections would be the unified view of Atlas + Online Archive, right?Also, on the archive page, the “total data archived” field on my archive summary page simply says ‘N/A’. When I hover over the “N/A” it shows “Metrics are available only for new Online Archives on Atlas Data Lake”. Is this an error? Did I set something up wrong?",
"username": "Nicholas_Vandehey"
},
{
"code": "Connect to Cluster and Online Archive",
"text": "Hi @Nicholas_Vandehey,Also, on the archive page, the “total data archived” field on my archive summary page simply says ‘N/A’. When I hover over the “N/A” it shows “Metrics are available only for new Online Archives on Atlas Data Lake”. Is this an error? Did I set something up wrong?Thanks for bringing this one up. I’m just checking with the team some possible causes of this but will update here when I have any information. In the meantime, could you just verify when this online archive was created? You can find this out in your Project Activity Feed - You could filter for “Online Archive” and should find an option for the online archive being created.But how do I find the archived documents using the web interface for MongoDB Atlas? It would seem that the default for browsing collections would be the unified view of Atlas + Online Archive, right?It is not currently possible to query the documents storage in Online Archive via Atlas UI. You could use MongoDB Compass and go through the connection modal for the online archive and select the Connect to Cluster and Online Archive option. I have not tested this using Studio3T.In terms of the Online Archive + Cluster unified view via Atlas Data Explorer / Atlas Web UI, I would recommend creating a post for this in our feedback engine in which others can vote for.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "N/ATotal Data ArchivedTotal Data ArchivedN/AcollStatsfind().explain()",
"text": "Hi @Nicholas_Vandehey,Just a quick update regarding the N/A you’re seeing under Total Data Archived - Depending on the date of creation, the online archive would have this message under Total Data Archived:Metrics are available only for new Online Archives on Atlas Data LakeMore recently created online archives would not show this message.There are a few options here for the N/A value you’re seeing:Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to access data archive from web interface? | 2023-07-25T18:54:59.413Z | How to access data archive from web interface? | 531 |
|
null | [] | [
{
"code": "Director is listening on port 3001\nSyntaxError: Unexpected token 'I', \"Internal S\"... is not valid JSON\n at JSON.parse (<anonymous>)\n at parseJSONFromBytes (node:internal/deps/undici/undici:6662:19)\n at successSteps (node:internal/deps/undici/undici:6636:27)\n at node:internal/deps/undici/undici:1236:60\n at node:internal/process/task_queues:140:7\n at AsyncResource.runInAsyncScope (node:async_hooks:206:9)\n at AsyncResource.runMicrotask (node:internal/process/task_queues:137:8)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n",
"text": "",
"username": "Chingu_Sky"
},
{
"code": "",
"text": "Thats pretty light on anything that people can look at to help diagnose he issue.Code/server config etc? Documents, anything?",
"username": "John_Sewell"
},
{
"code": "",
"text": "code this is the code",
"username": "Chingu_Sky"
},
{
"code": "",
"text": "I don’t have time to set all that up in a VM for testing as I’m not running that on my machine but I can’t see from the call stack where the error is taking place, is that the whole stack trace?When exactly does it fall over, what API are you calling in that project?From the looks of it, the code is trying to parse a return value from undici which looks like it’s an http client, so it’s downloading from somewhere, could the value it’s getting be an error code and not valid JSON?I think you’re going to at least have to debug this a bit more before getting more help as I don’t think it’s a mongo issue.",
"username": "John_Sewell"
},
{
"code": "",
"text": "I have no idea what Api its calling in but I have the mongo connecting from the ai_director that holds chatgpt key, uberduck keys and a discord bot. I have also noticed when i type the command !topic it goes through the suggested topics and stops there and doesnt generate in generated_topics so maybe something is going on with that… I am not that well versed in this as its my first project using mongoDB but any help I appreciate it.",
"username": "Chingu_Sky"
}
] | My directory code was running fine until it decided to throw another error | 2023-07-24T20:41:37.522Z | My directory code was running fine until it decided to throw another error | 707 |
[
"aggregation",
"atlas-cluster"
] | [
{
"code": "MongoDBAtlasODBC.Query(\"mongodb://federateddatabaseinstance0-z7f7m.a.query.mongodb.net/data_base?ssl=true&authSource=admin\",\n\"Flexiweb_Production\",\n\"Select *\nfrom Flexiweb_Production.Batch\nWHERE CAST(EXTRACT(YEAR FROM date )as integer)>=2023\")\ndb.Batch.aggregate([\n {\n $match: {\n CreationDate: {\n $gte: ISODate(\"2023-01-01T00:00:00.000Z\"),\n $lt: ISODate(\"2024-01-01T00:00:00.000Z\")\n }\n }\n }\n])\n",
"text": "Hello,indeed I was able to connect using the sitaxismy question now is, in the configuration of my federated database there is an option to add a view\nimage1273×754 43.5 KB\nIn this view, I can add a pipeline that is something likeThe idea is to create an already filtered view so that power bi already connects to that collection. Is that possible?",
"username": "Adolfo_Adrian"
},
{
"code": "AtlasDataFederation testdb> show collections\ncollection\nAtlasDataFederation testdb> db.collection.find()\n[\n { _id: ObjectId(\"64accacd1c9d59c7264bbfdf\"), a: 3 },\n { _id: ObjectId(\"64accacd1c9d59c7264bbfdd\"), a: 1 },\n { _id: ObjectId(\"64accacd1c9d59c7264bbfde\"), a: 2 },\n { _id: ObjectId(\"64accacd1c9d59c7264bbfe0\"), a: 4 },\n { _id: ObjectId(\"64accacd1c9d59c7264bbfe1\"), a: 5 }\n]\n{'a': 2}AtlasDataFederation testdb> db.createCollection( \"testViewColl\", { viewOn: \"collection\", pipeline: [{ $match: { 'a':2 } }] } )\n{ ok: 1 }\nAtlasDataFederation testdb> db.runCommand({\"listCollections\":1})\n{\n ok: 1,\n cursor: {\n firstBatch: [\n {\n name: 'collection',\n type: 'collection',\n options: {},\n info: { readOnly: true }\n },\n {\n name: 'testViewColl',\n type: 'view',\n options: {\n viewOn: 'collection',\n pipeline: [ { '$match': { a: { '$eq': 2 } } } ]\n },\n info: { readOnly: true }\n }\n ],\n id: Long(\"0\"),\n ns: 'testdb.$cmd.listCollections'\n }\n}\nAtlasDataFederation testdb> db.testViewColl.find()\n[ { _id: ObjectId(\"64accacd1c9d59c7264bbfde\"), a: 2 } ]\n",
"text": "Hi @Adolfo_Adrian,I’ve not yet tested this on Power BI but I believe it seems to work on DBeaver assuming you just want to see the output from the view. For example from my test environment:Initial data:View being created to match documents with {'a': 2}:Verifying the view:DBeaver table for the initial data:\n\nimage1954×484 136 KB\nDBeaver showing data for the view:\n\nimage1960×484 103 KB\nI’ll try test this out with Power BI connector when possible and update here with any information. But if you’re running into any issues so far let me know what the scenario / error messages are.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_Tran thanks for your response.I try this>>>>>I create the view follow your example, tanking random _id form the colection\nimage1166×52 3.38 KB\nBut, I go to Dbeaver and Power Bi and… I dont see the view\nimage1126×445 18.1 KB\nConnecting directly from power BI, I don’t see it either.\nimage984×536 16.7 KB\nYou don’t know if it is possible to create the view from here in the federated database configuration\nimage2370×712 59.5 KB\nI don’t know if what I want in that way is possible, I’m not an expert in mongo, but what you do is exactly what I want, to have a view to be able to consult it and be able to take filtered data.Thanks, ",
"username": "Adolfo_Adrian"
},
{
"code": "mongoshuse <dbname><dbname>show collectionsLotedb.runCommand({ \"create\" : \"<view-name>\", \"viewOn\" :\" <collection-name>\", \"pipeline\" : [\"<stage1>\",\"<stage2>\",...] })db.runCommand({\"listCollections\":1})databaseDriver Propertiesmongosh",
"text": "but what you do is exactly what I want,Thank you for confirming Adolfo!I create the view follow your example, tanking random _id form the colectionWhat database did you create this on? Is it the same one where the collection belongs which the View is being created?To make sure, can you perform the following in mongosh and provide the output when possible:Additionally, from the DBeaver instance, can you advise database value you used when connecting? It should be in the Driver Properties setting. Additionally, please provide the JDBC URL format you’ve used (Redact all credentials and sensitive information before posting here).You don’t know if it is possible to create the view from here in the federated database configurationI’ll need to test this later today as I’ve not yet done so but the View that I created from mongosh does appear in the Atlas UI at the same screen where you display “Create View”. I’ll let you know once I finish attempting to create this via the UI. In the meantime, were you having any particular issues creating the view from there?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "db.Batch.aggregate([\n {\n $match: {\n CreationDate: {\n $gte: ISODate(\"2023-01-01T00:00:00.000Z\"),\n $lt: ISODate(\"2024-01-01T00:00:00.000Z\")\n }\n }\n }\n])\n$match[\n {\n \"$match\": {\n \"CreationDate\": {\n \"$gte\": { \"$date\": \"2023-01-01T00:00:00.000Z\" },\n \"$lt\": { \"$date\": \"2024-01-01T00:00:00.000Z\" }\n }\n }\n }\n]\nSaveEdit ViewSave",
"text": "In this view, I can add a pipeline that is something likeYou don’t know if it is possible to create the view from here in the federated database configuration@Adolfo_Adrian - I created a view via the UI based off the $match example you made. Check out the below example. I don’t have any data which matches this view so I recommend testing it on your own environment to verify that it works:\nimage1762×396 24.4 KB\n\nimage812×1142 46.9 KB\nFor your reference / to make it easier to copy and paste, here is the text snippet of the above pipeline:Hit Save for the Edit View screen.Hit Save for the Data Federation instance configuration screen (highlighted in the red box):\nimage1808×724 52.5 KB\nRegards,\nJason",
"username": "Jason_Tran"
},
{
"code": "Option::unwrap()None",
"text": "Helllo my friend …I followed your steps as is, I was able to create the view successfully and connect via Power Bi. But unfortunately I have this error in the load. Do you know what it refers to?ERRORFailed to save modifications to the server. Error returned: 'OLE DB or ODBC error: [DataSource.Error] ERROR [HY000] [MongoDB][API] Caught panic: called Option::unwrap() on a None value Ok(“in file ‘C:\\Users\\mci-exec\\.cargo\\registry\\src\\index.crates.io-6f17d22bba15001f\\mongodb-2.5.0\\src\\cursor\\mod.rs’ at line 229”). '.",
"username": "Adolfo_Adrian"
},
{
"code": "",
"text": "Hi @Adolfo_Adrian,I’ve not encountered the error before but did you follow the Connect from Power BI documentation for setting up the connection?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hello again.,Yes, of course, I have followed all the steps but it throws me that error, I have not been able to find the documentation of said error either.",
"username": "Adolfo_Adrian"
},
{
"code": "sqlGenerateSchemaAtlasDataFederation admin> db.runCommand({ sqlGenerateSchema: 1, sampleNamespaces: ['VirtualDatabase.newView'], sampleSize: 100, setSchemas: true })\n{\n ok: 1,\n schemas: [\n {\n databaseName: 'VirtualDatabase',\n namespaces: [\n {\n name: 'newView',\n schema: {\n version: Long(\"1\"),\n jsonSchema: {\n bsonType: [ 'object' ],\n properties: {\n _id: {\n bsonType: [ 'objectId' ],\n additionalProperties: false\n },\n a: { bsonType: [ 'int' ], additionalProperties: false }\n },\n additionalProperties: false,\n required: [ '_id', 'a' ]\n }\n }\n }\n ]\n }\n ]\n}\n\n'VirtualDatabase.newView'",
"text": "Hi Adolfo,What’s the role associated with the database user trying to connect?I was able to load up the following view in Power BI for Desktop:\nimage2006×776 105 KB\nYou’ll also need to sqlGenerateSchema first before connecting. For example:Note: 'VirtualDatabase.newView' is the namespace that includes reference to the view.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "atlasAdmin",
"text": "For what it’s worth, please see my test environment details used for Power BI and the data federation view:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "indeed I can load the views, where segment tables, the error seems to be between the connectivity with PB",
"username": "Adolfo_Adrian"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Views on data federation | 2023-07-06T16:34:14.802Z | Views on data federation | 831 |
|
null | [
"queries"
] | [
{
"code": "",
"text": "Hello everyone,\nI’m trying to do a huge query that retrieves every documents in multiple collections in a database.\nThe problem is that I want to make sure that the documents are coherent since some of them are linked together and i want to make sure no documents are modified during my find.While reading the mongo documentation, I came across the read concern Snapshot but I dont really understand if it will solve my problem and since it’s hard to actually try it out, I wanted to ask you guys for some help\nThanks for your time !\nHave a nice day",
"username": "Ben_Varb"
},
{
"code": "",
"text": "As i understand, by default, mongodb/wiredtiger uses snapshot isolation, similar to repeatable read in mysql.You can get the aspects here. Isolation (database systems) - Wikipedia. Repeatable read/snapshot read can avoid many problems, but not every thing. So check it out.since it’s hard to actually try it outno it’s not that hard. You can try tuning the batch size and then test.",
"username": "Kobe_W"
}
] | Will read concern "snapshot" solve my problem? | 2023-07-27T08:26:09.699Z | Will read concern “snapshot” solve my problem? | 310 |
null | [
"python"
] | [
{
"code": "id_idallow_population_by_field_name = Trueclass StudentModel(BaseModel):\n id: PyObjectId = Field(default_factory=PyObjectId, alias=\"_id\")\n name: str = Field(...)\n email: EmailStr = Field(...)\n course: str = Field(...)\n gpa: float = Field(..., le=4.0)\n\n class Config:\n allow_population_by_field_name = True\n arbitrary_types_allowed = True\n json_encoders = {ObjectId: str}\n schema_extra = {\n \"example\": {\n \"name\": \"Jane Doe\",\n \"email\": \"[email protected]\",\n \"course\": \"Experiments, Science, and Fashion in Nanophotonics\",\n \"gpa\": \"3.0\",\n }\n }\nid_idWe set this id value automatically to an ObjectId string, so you do not need to supply it when creating a new student._id",
"text": "From Getting Started with MongoDB and FastAPI | MongoDB, this pydantic model has id field with alias _id.Can someone more clearly explain the datatypes at each stage of the request/response data transformation pipeline? (Pydantic(fastapi) ↔ pymongo driver ↔ mongodb )\nI’m not sure which stage expects/produces id and which stage expects/produces _id, and why the api should accept both id and _id.\nI’m also confused about 3 cases when is id designed to bewhich the article wasn’t clear at explaining during the alias section.That article says We set this id value automatically to an ObjectId string, so you do not need to supply it when creating a new student.\nDoes this mean the alias of _id only comes into play when someone calls fastapi endpoint and provided an _id in body of post request, so it doesn’t have to be automatically created by pydantic?Is case 1 and 2 above even relevant when working with fastapi (when users interact only with the api)?\nThe article says “when creating a new student”. What happens if I had already inserted the data directly through mongo shell or driver (meaning not going through pydantic), and I only use fastapi to read data, does this mean this whole id aliasing thing can be ignored?\nFurthermore, does it also mean I don’t even have to make any pydantic models if i only want to write a read-only fastapi?",
"username": "Han_N_A"
},
{
"code": "_id_idStudentModel_ididid_idallow_population_by_field_name = TrueididFalseallow_population_by_field_nameTrueidFalseidid_id_idid_idid_idallow_population_by_field_nameFalse",
"text": "Hi @Han_N_A and welcome to the forums!What is the significance of this alias?In MongoDB, a document must have a field name _id. This is reserved for use as a primary key; its value must be unique in the collection, and is immutable. See also MongoDB Field Names. While in Pydantic, the underscore prefix of a field name would be treated as a private attribute. The alias is defined so that the _id field can be referenced.The StudentModel utilises _id field as the model id called id. This means, whenever you are dealing with the student model id, in the database this will be stored as _id field name instead.Must allow_population_by_field_name = True be added for this alias to be used to intended effect or having default False works too?Depends on the application use case. If the application needs to be able to accept id input in the creation, then yes. Otherwise, if the application will just create id value automatically then you can set this to False.The reason why allow_population_by_field_name is True, is so that the application can accept a student model creation with id field. If you were to set this option to False, a field value of id input will be ignored in the creation of student model. i.e. in def create_student().I’m not sure which stage expects/produces id and which stage expects/produces _id , and why the api should accept both id and _id.Although the API accepts _id because of the alias, you should just use id throughout the application. The clients that consumes the API would likely need to handle _id in the JSON response.and I only use fastapi to read data, does this mean this whole id aliasing thing can be ignored?You would need the alias to translate id to _id as a typed return, but you won’t need the allow_population_by_field_name (default False)Furthermore, does it also mean I don’t even have to make any pydantic models if i only want to write a read-only fastapi?You can use FastAPI without Pydantic models and just utilise Body. See FastAPI without PydanticRegards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Due to recent updates in MongoDB, is this still a valid method for Pydantic to validate ObjectId in FastAPI:Getting started with MongoDB and FastAPI",
"username": "Dixon_Dick"
}
] | Why do we need alias=_id in pydantic model of fastapi? | 2022-06-21T02:57:01.988Z | Why do we need alias=_id in pydantic model of fastapi? | 18,101 |
null | [
"node-js",
"mongoose-odm",
"atlas-cluster"
] | [
{
"code": "{\n \"message\": \"Error creating user\",\n \"error\": {\n \"ok\": 0,\n \"code\": 8000,\n \"codeName\": \"AtlasError\",\n \"name\": \"MongoError\"\n }\n}\nDB_URL=mongodb+srv://emc_admin-prime:[PASSWORD]@cluster1.9ifxogd.mongodb.net/?retryWrites=true&w=majorityconst mongoose = require(\"mongoose\");\nrequire('dotenv').config()\n\nasync function dbConnect() {\n mongoose\n .connect(\n process.env.DB_URL,\n {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n }\n )\n .then(() => {\n console.log(\"Successfully connected to MongoDB Atlas!\");\n })\n .catch((error) => {\n console.log(\"Unable to connect to MongoDB Atlas!\");\n console.error(error);\n });\n}\n\nmodule.exports = dbConnect;\nconst mongoose = require(\"mongoose\");\n\nconst UserSchema = new mongoose.Schema({\n email: {\n type: String,\n required: [true, \"Please provide an Email!\"],\n unique: [true, \"Email Exist\"],\n },\n\n password: {\n type: String,\n required: [true, \"Please provide a password!\"],\n unique: false,\n },\n});\n\nmodule.exports = mongoose.model.Users || mongoose.model(\"Users\", UserSchema);\nconst express = require(\"express\");\nconst app = express();\nconst bcrypt = require('bcrypt');\nconst bodyParser = require('body-parser');\nconst dbConnect = require('./db/dbConnect');\nconst User = require('./db/userModel');\n\ndbConnect();\n\napp.use(bodyParser.json());\napp.use(bodyParser.urlencoded({ extended: true }));\n\napp.get('/', (request, response, next) => {\n response.json({ message: 'Hey! This is your server response!' });\n next();\n});\n\napp.post('/register', (request, response) => {\n bcrypt.hash(request.body.password, 10)\n .then((hashedPassword) => {\n const user = new User({\n email: request.body.email,\n password: hashedPassword,\n });\n\n user\n .save()\n .then((result) => {\n response.status(201).send({\n message: 'User created successfully',\n result,\n });\n })\n .catch((error) => {\n response.status(500).send({\n message: 'Error creating user',\n error,\n });\n });\n })\n .catch((error) => {\n response.status(500).send({\n message: 'Password was not hashed successfully',\n error,\n });\n });\n});\n\nmodule.exports = app;\n",
"text": "I am extremely new to MongoDB and am just now pushing more and more into fullstack development (mostly front end historically). I’m trying to build a simple authentication end point, and everything seems perfectly fine and error free (connection to DB is successful), until I try to create a new user in postman and it seems like no matter what I do, I get the following error:The issue I’m having is this error doesn’t seem to be telling me much and my attempts at searching on it (google, mongo docs) have not given me much to go on. I’m not necessarily looking to be handed the answer, but it would be nice if I could be pointed in the right direction because at this point I’m at a complete loss and have no idea where to even start looking. I’m fairly certain the issue isn’t in code but in my cluster setup, mostly because that’s the area of this that is by far most foreign to me, but I’m not really sure where to even start looking on that end of things. It seems to be hitting the catch in the user.save() block of the register endpoint in app.js, but I’m not sure if that implies an issue with the code, or an issue with the setup in mongo. Below are some potentially relevant code snippets:.env file (password intentionally obscured):\nDB_URL=mongodb+srv://emc_admin-prime:[PASSWORD]@cluster1.9ifxogd.mongodb.net/?retryWrites=true&w=majoritydbConnect file:userModel file:app.js file:",
"username": "Tyler_Anyan"
},
{
"code": "",
"text": "I was able to get this resolved, so posting my answer here.This was in fact related to the setup within mongodb atlas, and, predictably, was a seemingly minor setting that I was just unaware of (and unaware of its importance). My primary database user did not have a built-in role selected, and it seems that user access is the conduit by which my application can access the cluster. Once I updated it to “Atlas admin”, I was able to successfully add a new user to the DB.EDIT: Some terminology challenges here, as referenced up top in the original post. The distinction here is that I needed to add a new entry to the user collection, I was being blocked from accessing it though because my database user (emc_admin-prime) did not have the proper role selected. So, one is the DB user within mongodb atlas (emc_admin-prime), the other is a new entry to a collection called user.",
"username": "Tyler_Anyan"
},
{
"code": "",
"text": "From your original code fragment you’re just adding into a collection called “user” as opposed to actually adding a database user?\nIn which case you don’t want to grant an application user that much power, you should be able to grant it a lower role which has read and write access.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Yes, the goal was to simply add to a collection called user. This plays into the terminology challenges I reference up top; the intention here ultimately is to build an application that will allow new users to sign up and create an account, and I think the “user” in that context is getting conflated with the “database user” from the POV of the cluster itself. My answer should in fact be the correct one from how I understand things, and it definitely resolved the issue. The database user “emc_admin-prime”, as seen in the db_url, should be able to have full access to anything within that cluster, I was just unaware I needed to assign a role to specify that and it was the thing preventing me from adding a new entry to the user collection.Thanks for the feedback! I’m definitely open to being schooled on if there’s a more nuanced way I should be setting up that DB user, keeping in mind that that is intended to be the primary admin and so should be able to access most if not all. The overall gist of the issue though was that I needed to add an entry to the user collection, and it seems the lack of a specified role on the emc_admin-prime DB user was the reason I was getting the 500 error.",
"username": "Tyler_Anyan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error code 8000 when creating new user | 2023-07-26T17:19:56.588Z | Error code 8000 when creating new user | 742 |
null | [
"dot-net",
"replication"
] | [
{
"code": "",
"text": "HiWe have C# application which run as service and every 5 minutes pushes data to MongoDB.Our connection string for MongoDB is (It uses Kerberose)mongodb://svc_APP-UAT%40MYCOMPANY.COM@server-n1:27017,server-n2:27017,server-n3:27017/?ssl=true&replicaSet=RS_PROD&readPreference=primary&serverSelectionTimeoutMS=5000&connectionTimeoutMS=10000&authSource=$external&authMechanism=GSSAPI&applicationName=MY TEST APPLICATIONApplication able to save data to MongoDB every 5 minutes , however intermittently we get following error and data doesn’t get save to MongoDBMongoDB.Driver.MongoAuthenticationException: Unable to authenticate using sasl protocol mechanism GSSAPI. —> MongoDB.Driver.MongoCommandException: Command saslContinue failed: Failed to acquire LDAP group membership.Any thoughts above error?Thanks\nDhru",
"username": "Dhruvesh_Patel"
},
{
"code": "",
"text": "As you’re on Enterprise Advanced my first advise is open a support ticket with MongoDB Support.I have seen this with slow or unresponsive LDAP/AD servers. You need to discuss this performance with the team responsible.Also check if you have ldapUserCacheInvalidationInterval set on the servers this can reduce the frequency of which the group membership has to be queried from ldap in general but may not address the root cause.",
"username": "chris"
},
{
"code": "",
"text": "Thank you Chris. We will look into to open ticket with MongoDB support\nCurrently we are using kerberose (just user name used in connection string) for MongoDB connectivity , will make a difference if we use LDAP (user name and password both in connection string) instead of Kerberose while initiating the connection from client app?Thanks\nDhru",
"username": "Dhruvesh_Patel"
},
{
"code": "",
"text": "Kerberos is providing authentication. (Are you who you say you are)Authorization(What you can access) is via LDAP.The LDAP portion is what is causing you issues right now. The server will query ldap as to what group memberships the authenticaed user has access to.",
"username": "chris"
},
{
"code": "",
"text": "I see. Thanks for clarificationDhru",
"username": "Dhruvesh_Patel"
}
] | Failed to acquier LDAP group membershop | 2023-07-27T13:43:52.165Z | Failed to acquier LDAP group membershop | 657 |
null | [
"android",
"flutter"
] | [
{
"code": "",
"text": "Hey,Can you please guide me on how can I fetch the existing data of my native(android and iOS) application to the newly built Flutter application?Note: native app database is encryptedThank you in advance ",
"username": "Prachi_Palkhiwala"
},
{
"code": "@RealmModel()\nclass _Car {\n @PrimaryKey()\n late String make;\n}\ndart run realm generateConfiguration.localfinal encryptedConfig = Configuration.local([Car.schema], encryptionKey: key, path: \"absolute path to the realm file\");\nfinal realm = Realm(encryptedConfig);\n",
"text": "Hi @Prachi_Palkhiwala!\nIf your app use Atlas cloud and realm with flexible sync it would be easier and you will be able to convert the model classes to dart code and to download the data to a new file dedicated for the Flutter app.\nBut I suppose you are asking for a local realm file.\nIt is possible to open an encrypted realm file in flutter if you have the encryptionKey.\nFirst you need to create dart classes that to define the same model as the model in the existing app, for example:then run:\ndart run realm generate\nto generate the RealmObjects classes based on the defined model.After that you can set the RealmObjects.schema into the list parameter of Configuration.local, set the encryptedKey and the path to the existing file, as follow:Let me know if you are using flexible sync with the cloud, because you can use the server schema there to convert the model classes to Flutter/Dart and you won’t write them manually.",
"username": "Desislava_St_Stefanova"
}
] | I want to migrate my native application to flutter with existing data store on realm datatbase | 2023-07-27T12:55:13.026Z | I want to migrate my native application to flutter with existing data store on realm datatbase | 545 |
null | [
"aggregation",
"queries"
] | [
{
"code": "\"att\": {\"$ne\": []}$searchMetamy_collection.aggregate(\n [\n {\n \"$searchMeta\": {\n \"index\": \"MsgAtlasIndex\",\n \"compound\": {\n \"must\": [\n {\"equals\": {\"path\": \"evnt.st\", \"value\": 1}},\n {\"range\": {\"path\": \"ts\", \"gte\": new Date(Date.now() - 24*60*60 * 1000)}},\n {\"text\": {\"query\": \"violent\", \"path\": \"evnt.tag\"}},\n {\"equals\": {\"path\": \"evnt.cls\", \"value\": True}},\n ]\n },\n },\n },\n ]\n)\n",
"text": "I’m counting documents based on specific filters.What is the proper way to add this line \"att\": {\"$ne\": []} to this aggregation pipeline?Note that I’m using $searchMeta way for better speed!",
"username": "ahmad_al_sharbaji"
},
{
"code": "exists",
"text": "Hi @ahmad_al_sharbaji ,It is recommended to remove the field when the value is an empty array. Then you can implement this filter logic with the exists operator.",
"username": "amyjian"
}
] | How to apply $ne using $searchMeta? | 2023-07-27T12:54:00.858Z | How to apply $ne using $searchMeta? | 412 |
null | [
"replication",
"sharding"
] | [
{
"code": "",
"text": "Hello,\nWe have a two-sharded cluster in our setup, and our shard key is {“a”: 1, “b”: 1}. We scheduled chunk balancing for one hour daily during less busy traffic hours. We want to measure the write throughput of the cluster.\nTo achieve this, we are inserting documents with GUIDS for fields “a” and “b.” However, we’ve encountered an issue where MongoDB is creating new chunks in only one shard. I can understand that until the first rebalance, one shard might hold all the ranges because of that, all writes are going to a single shard. However, even after a substantial amount of data and enough chunks, and despite the rebalances, MongoDB continues to create new chunks on only one shard.\nDuring the rebalancing process, MongoDB ensures that the two shards have an equal number of chunks, which seems to be working correctly. However, the problem persists as new chunks are still directed to a single shard, causing uneven data distribution.\nWe are puzzled by this behaviour, as it goes against our expectation of achieving a balanced distribution of chunks across both shards. We would appreciate any insights or suggestions to resolve this issue and achieve the desired even data distribution in our sharded cluster.\nWhen does MongoDB create a new chunk? I know it splits a chunk when it exceeds a certain size, but I’ve noticed an increased chunk count while inserting data. What criteria trigger the creation of a new chunk, and how does MongoDB decide on which shard to create the chunk?\nIn my scenario, all the new chunks are being created in replica set-2, creating a performance bottleneck on that specific node.\nI’d appreciate any insights or explanations to understand the chunk creation process better and address the performance issue. Thank you.",
"username": "Kiran_Sunkari"
},
{
"code": "sh.getBalancerState()\n",
"text": "Outside of the maintanance can you run the following command. This will check if the balancer is enabled or not. If it isn’t enabled then it will not balance the chunks.:I’m assuming you will find that replica set-2 is the primary db of the shard. if you run sh.status() in the “databases” section you will find the primary shard for the database. I believe when the balancer is disabled the data writes to the primary shard but I can’t find documentation to support this.We scheduled chunk balancing for one hour daily during less busy traffic hours.During the rebalancing process, MongoDB ensures that the two shards have an equal number of chunksI believe this is the correct outcome, because until the balancer is running the chunks won’t be even.",
"username": "tapiocaPENGUIN"
}
] | Mongo new chunks are creating in only one shard | 2023-07-27T11:55:32.189Z | Mongo new chunks are creating in only one shard | 539 |
null | [
"atlas-search"
] | [
{
"code": "$search{{fieldName}} is not indexed as sortable.",
"text": "I’ve been trying to use the new sort option for $search stage with no luck.I get the following error:\n{{fieldName}} is not indexed as sortable.What should I do in order to index the corresponding field as sortable? I have already rebuilt the index as mentioned in the docs. I’m using MongoDB 6 on an M0 cluster.",
"username": "German_Medaglia"
},
{
"code": "",
"text": "Did you… ‘For string fields, you must manually index the field as token type. To learn more, see How to Index String Fields for Sorting.’?",
"username": "Elle_Shwer"
},
{
"code": "tokenstringtokenIndex for Sort and Querytokenstring{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"<field-name>\": [{\n \"type\": \"string\"\n },\n {\n \"type\": \"token\"\n }]\n }\n }\n}\n",
"text": "Hello @German_Medaglia ,Welcome back to The MongoDB Community Forums! @Elle_Shwer is right, a little addition to this from my side.\nI got the same error when I missed the step to index the field as token and string type.As per the Atlas Search SortFor string fields, you must manually index the field as token type. To learn more, see How to Index String Fields for Sorting.The example provided in the How to Index String Fields for Sorting for Index for Sort and Query shows how you can manually index the field as type token and string for Querying and Sorting.Note: I used JSON Editor in the Atlas UI to configure the index.Let us know if you face any more issues, would be happy to help you! Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "@Elle_Shwer and @Tarun_Gaur thanks for your replies! Yes, I saw that, but I’m trying it on a date field.",
"username": "German_Medaglia"
},
{
"code": "{\n \"analyzer\": \"lucene.standard\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"created_at\": {\n \"type\": \"date\"\n },\n \"description\": {\n \"multi\": {\n \"keywordAnalyzer\": {\n \"analyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n }\n },\n \"type\": \"string\"\n },\n \"properties\": {\n \"dynamic\": true,\n \"type\": \"document\"\n }\n }\n },\n \"storedSource\": {\n \"include\": [\n \"description\",\n \"properties\",\n \"created_at\",\n \"ip\"\n ]\n }\n}\n",
"text": "Just in case, this is my index definition:",
"username": "German_Medaglia"
},
{
"code": "",
"text": "Please share a few additional details such as:",
"username": "Tarun_Gaur"
},
{
"code": "$search{\n index: \"default\",\n compound: {\n filter: [\n {\n range: {\n path: \"created_at\",\n lte: ISODate(\n \"2023-12-01T00:00:00.000Z\"\n ),\n gte: ISODate(\n \"2022-04-01T00:00:00.000Z\"\n ),\n },\n },\n ],\n should: [\n {\n text: {\n query: \"{{query}}\",\n fuzzy: {\n maxEdits: 1,\n prefixLength: 1,\n },\n path: {\n wildcard: \"properties.*\",\n },\n },\n },\n {\n text: {\n query: \"{{query}}\",\n fuzzy: {\n maxEdits: 1,\n prefixLength: 1,\n },\n path: \"description\",\n },\n },\n ], \n },\n count: {\n type: \"lowerBound\",\n threshold: 1000,\n },\n sort: {\n created_at: 1\n },\n returnStoredSource: true,\n highlight: {\n path: [\"description\"],\n maxCharsToExamine: 500,\n maxNumPassages: 1,\n },\n}\n{\n \"_id\": {\n \"$oid\": \"63adcea85f7c5357a41c68d3\"\n },\n \"description\": \"User created\",\n \"ip\": \"127.0.0.1\",\n \"user_agent\": \"stensul/3.92.0.1 (core)\",\n \"controller\": \"\",\n \"action\": \"\",\n \"properties\": {\n \"user_id\": \"system\",\n \"changed_user_id\": {\n \"$oid\": \"63adcea75f7c5357a41c68d2\"\n },\n \"changed_user_name\": \"Germán\",\n \"changed_user_last_name\": \"Medaglia\",\n \"changed_user_email\": \"[email protected]\"\n },\n \"updated_at\": {\n \"$date\": \"2022-12-29T17:30:16.428Z\"\n },\n \"created_at\": {\n \"$date\": \"2022-12-29T17:30:16.428Z\"\n }\n}\n{\n \"_id\": {\n \"$oid\": \"63adcedc81cdce61285a2334\"\n },\n \"description\": \"User logged in\",\n \"ip\": \"172.18.0.1\",\n \"user_agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36\",\n \"controller\": \"BaseLoginController\",\n \"action\": \"postLogin\",\n \"properties\": {\n \"user_id\": {\n \"$oid\": \"63adcea75f7c5357a41c68d2\"\n },\n \"email\": \"[email protected]\",\n \"method\": \"default\"\n },\n \"updated_at\": {\n \"$date\": \"2022-12-29T17:31:08.572Z\"\n },\n \"created_at\": {\n \"$date\": \"2022-12-29T17:31:08.572Z\"\n }\n}\n",
"text": "Here’s a simple $search stage sample.And here are some sample documents:",
"username": "German_Medaglia"
},
{
"code": "",
"text": "@Tarun_Gaur any idea? Maybe the option is not available on M0 clusters?",
"username": "German_Medaglia"
},
{
"code": "",
"text": "Hello @German_Medaglia ,The documentation on the Atlas Search Sort was recently updated as followsLimitations\nYou must have an M10 or higher cluster to sort the results using the sort option. The sort option is not available on free and shared tier clusters.You can’t sort on fields of embeddedDocuments type.You can’t use the sort option with the knnBeta operator.So you won’t be able to currently use this on an M0 cluster. I will update this post if there are any changes to this.Feel free to open new thread for any more queries or issues, will be happy to help you! Thank you,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Thanks @Tarun_Gaur ! Yes, tried it on an M30 and it worked!",
"username": "German_Medaglia"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Getting error when using new sort option in $search stage | 2023-07-21T02:49:26.858Z | Getting error when using new sort option in $search stage | 810 |
null | [
"react-native"
] | [
{
"code": "LOG error= [Error: Exception in HostFunction: Unknown argument 'recoverUnsyncedChanges' for clientReset.mode. Expected 'manual' or 'discardLocal'.]",
"text": "We upgraded ‘realm’ npm package from 10.23.0 to 11.0.0-rc.1 due to internal NPM package dependencies and version upgrades at react-native level.But after upgrade the earlier values for ‘clientReset’ mode are not valid for new version. We get the below error:LOG error= [Error: Exception in HostFunction: Unknown argument 'recoverUnsyncedChanges' for clientReset.mode. Expected 'manual' or 'discardLocal'.]Both the alternatives do not seem to help with the recover of unsynced changes. Is the expectation henceforth to handle callback and manage locally?. What is the alternative for ‘recoverUnsyncedChanges’ in the new version?.Any helpful pointers are really appreciated! Thanks! ",
"username": "Gulvel"
},
{
"code": "\"react\": \"18.1.0\",\n\"react-native\": \"0.70.0\",\n\n\"realm\": \"11.1.0\",\n\n\"react-native-fs\": \"^2.20.0\",\n\"react-native-reanimated\": \"^2.10.0\",\n\"react-native-vision-camera\": \"^2.15.4\",\n\"vision-camera-face-detector\": \"^0.1.8\"\n",
"text": "Shifted to 11.1.0, were these have been added.Enhancements\n\nAdd support for using functions as default property values, in order to allow dynamic defaults #5001, #2393\nAll fields of a Realm.Object treated as optional by TypeScript when constru...Here are the react combinations that work fine:Hope that helps someone. Thanks!",
"username": "Gulvel"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unknown argument 'recoverUnsyncedChanges' for clientReset.mode. Expected 'manual' or 'discardLocal' | 2023-07-27T06:04:24.231Z | Unknown argument ‘recoverUnsyncedChanges’ for clientReset.mode. Expected ‘manual’ or ‘discardLocal’ | 504 |
null | [
"queries"
] | [
{
"code": " db.area.aggregate([\n {\n $lookup: \n {\n from: 'restaurant',\n let: { distance: \"$distance\" },\n pipeline: [\n { \n \"$geoNear\": {\n \"spherical\": true,\n \"maxDistance\": \"$$distance\",\n \"near\": {\n \"type\": \"Point\",\n \"coordinates\": [\n 77.61546419999999,\n 12.9131813\n ]\n },\n \"distanceField\": \"data.distance\",\n \"distanceMultiplier\": 0.001,\n \"query\":{}\n }\n },\n },\n ],\n as: 'records'\n }\n }\n ]) \n",
"text": "I am trying to write an aggregation query using $lookup. This is the whole pipeline.I am getting an error maxDistance should be a Number. Any leads on how to resolve this?",
"username": "Charchit_Kapoor"
},
{
"code": "",
"text": "Hi @Charchit_Kapoor, I have the same problem, have you solved this already? thanks.",
"username": "Nicole_Alday"
}
] | Error: maxDistance should be a Number | 2021-05-12T12:46:21.072Z | Error: maxDistance should be a Number | 1,772 |
null | [
"flutter"
] | [
{
"code": "",
"text": "How to merging between 2 realm files in Flutter",
"username": "zmonx_gg"
},
{
"code": " final configRead = Configuration.local([Product.schema], isReadOnly: true, path: r\"absolute path to realm file to read\");\n final realmRead = Realm(configRead);\n\n final configWrite = Configuration.local([Product.schema], path: r\"absolute path to realm file to write\");\n final realmWrite = Realm(configWrite); \n\n realmWrite.write(() {\n realmWrite.addAll<Product>(realmRead.all<Product>().map((p) => Product(ObjectId(), p.name)));\n });\n realmWrite.close();\n realmRead.close();\n",
"text": "Hi @zmonx_gg!\nThere is no automated way for merging for now. But you can open both realm files and copy the data between them. Be sure to re-create the primary keys in case there are duplicated. It will be good to make a copy of the realm files before this operation.\nHere is an example:",
"username": "Desislava_St_Stefanova"
}
] | Realm database. Can merging between 2 realm files ? in Flutter | 2023-07-26T06:15:20.835Z | Realm database. Can merging between 2 realm files ? in Flutter | 546 |