image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
[] | [
{
"code": "",
"text": "Hi there,I’m going through the mongo db tutorials and currently going with “Lesson 3: Managing Databases, Collections, and Documents in Atlas Data Explorer”.The 1st practice lab is not working and keeps on loading in second step.\nI’ve followed the list of instructions and have made sure that the proper project, user, db exist.Please check the screenshot below., I’ve tried different browsers and connections.\nScreenshot 2023-04-24 at 12.38.05 PM1959×1080 106 KB\n",
"username": "Karthik_RP"
},
{
"code": "cache and cookies",
"text": "Hey @Karthik_RP,Welcome to the MongoDB Community Forums! The behavior you’re facing is not expected. I checked and tried things from my personal learn account as well and things were working as expected. I would recommend you make sure that you have read all the instructions and worked according to those instructions only. Also, do try clearing the cache and cookies and then try loading the lab again. If the issue still persists, I would request you mail the issue to [email protected] so the university team can investigate this further.Hope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Lab 3, Practice 1 not loading | 2023-04-24T07:08:53.577Z | Lab 3, Practice 1 not loading | 947 |
|
[
"node-js",
"dot-net"
] | [
{
"code": "",
"text": "Is this meant to have unit 10 as unit 3 etc?Are these unit numbers arbitrary, or is this in the wrong order?I’m doing the Node.JS and then will do the C# Dev Paths in the MongoDB U for a project I’m working on, making sure I’m not missing something with a project I\"m working on.But would like to make sure I’m doing this all in the right manner for the cert exam.\n\nScreenshot 2023-04-18 at 7.28.29 PM2812×1372 233 KB\n",
"username": "Brock"
},
{
"code": "",
"text": "Hey @Brock,The lessons should be taken in the order they are presented in the path, so don’t worry, you’re not missing out on anything. Please reach out to [email protected] in case you have any more queries.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unit levels in MongoDB Developer Cert Paths | 2023-04-19T02:31:46.251Z | Unit levels in MongoDB Developer Cert Paths | 882 |
|
null | [
"queries"
] | [
{
"code": "",
"text": "How do I query by _id, BUT then I want to get 50 above and 50 below the document, sorted by a number age?and bonus if I can get the index the elements based on the sort",
"username": "Darren_Zou"
},
{
"code": "",
"text": "Hi @Darren_Zou and welcome to MongoDB community forums!!It’s great to see that a similar question has been asked on StackOverflow and that solutions have been provided to resolve the issue.If those solutions don’t work for you, would you mind considering posting a comment on the thread requesting further clarification or updating on your progress?Regards\nAasawari",
"username": "Aasawari"
}
] | How to get middle of document? | 2023-04-19T21:38:15.505Z | How to get middle of document? | 413 |
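Below is a minimal PyMongo sketch of one way to approach the question in the thread above: fetch the anchor document by _id, then take the 50 nearest documents on either side of its age value and stitch them into one sorted window. The connection string, database/collection names (`mydb`, `people`), and the `age` field are placeholder assumptions, not details from the thread.

```python
from pymongo import MongoClient
from bson import ObjectId

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
coll = client["mydb"]["people"]                      # placeholder database/collection

anchor_id = ObjectId("64426016e16bb77f0fd1ec49")     # placeholder _id to centre the window on
anchor = coll.find_one({"_id": anchor_id})

# 50 documents "below" the anchor: age <= anchor's age, closest first, anchor excluded.
below = list(
    coll.find({"age": {"$lte": anchor["age"]}, "_id": {"$ne": anchor_id}})
        .sort("age", -1)
        .limit(50)
)

# 50 documents "above" the anchor: age >= anchor's age, closest first, anchor excluded.
above = list(
    coll.find({"age": {"$gte": anchor["age"]}, "_id": {"$ne": anchor_id}})
        .sort("age", 1)
        .limit(50)
)

# Reassemble in ascending age order with the anchor in the middle,
# and enumerate to get each element's index within the sorted window.
window = below[::-1] + [anchor] + above
for idx, doc in enumerate(window):
    print(idx, doc["_id"], doc.get("age"))
```

An index on age keeps both range scans cheap; if ages can tie, add _id (or another tiebreaker) to the sort so the two halves do not overlap.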
null | [
"queries",
"node-js",
"mongoose-odm",
"containers"
] | [
{
"code": "console.time('FullQuery')\nawait MyMongooseModel.find(findCondition)\nconsole.timeEnd('FullQuery')\n\nconsole.time('OnlyCount')\nawait MyMongooseModel.countDocuments(findCondition)\nconsole.timeEnd('OnlyCount')\nFullQuery: 821.835ms\nOnlyCount: 33.472ms\nFullQuery: 3.595s\nOnlyCount: 42.642ms\n",
"text": "I’m fairly new to MongoDB query optimizing. I have a query that runs quite slow when there are about 7000 not-too-complex documents from a single collection to be retrieved (about 5 to 6 MB of data). I’m using Node.js and Mongoose.\nFor analysis, I added some time measurements and also an additional “countDocuments” for comparison. However, the actual query needs to return all documents at once:Result I get on my local Node.js server connecting to a local Mongo installation running in a Docker container:Connecting my local Node.js server to my remote Atlas instance:My interpretation of that data:I now have the questions below:",
"username": "Christian_Schwaderer"
},
{
"code": "",
"text": "What is your query and document like? without this info, it’s difficult for us to give any more insights.Anyway, i personally think 800+ms for 5MB data (running locally) is still slow. i suppose your query is not using an index.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thanks for your answer. I cannot show any more details because of organisational secrets. Hower, the find condition is pretty simple, nothing special.If a missing index is the problem I’m wondering why the count query is so fast and the other so slow. In my understanding, an index helps in the engine to filter out the documents you want to do something with. If the filtering part were to be the problem, the count query would be equally slow and there would be almost no difference between them. But that’s not the case. So, what am I overlooking?",
"username": "Christian_Schwaderer"
},
{
"code": "",
"text": "Is the “6M” data size accurate? After you switch from local mongo server to mongo atlast (which is remote) the same code runs 2.5s longer. All those time are supposed to be spent on network transmission only. So either data size is big, or network is too slow?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Yes, six megabyte is correct. And my own network and internet connection is quite fast. But the connection between the Atlas server and their network access point is of course not under my control.But I take that as confirmation that something weird is going on here. I’ll have to dig deeper.",
"username": "Christian_Schwaderer"
}
] | Performance: Long networking time and huge difference between count and actual returning | 2023-04-18T04:51:40.809Z | Performance: Long networking time and huge difference between count and actual returning | 992 |
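A hedged PyMongo sketch of how to dig further into the difference discussed in the thread above: run the same filter through explain with executionStats to see whether an index is used and how many documents are fetched, and use a projection to shrink the payload that actually crosses the network. The URI, namespace, filter, and field names are placeholders.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")    # placeholder URI
db = client["mydb"]                                   # placeholder database
find_condition = {"status": "active"}                 # placeholder filter

# executionStats reports totalKeysExamined / totalDocsExamined / nReturned, which shows
# whether the filtering step (the part countDocuments also performs) is the bottleneck,
# or whether the cost lies in fetching and shipping the full documents.
stats = db.command({
    "explain": {"find": "mycollection", "filter": find_condition},
    "verbosity": "executionStats",
})
es = stats["executionStats"]
print(es["totalKeysExamined"], es["totalDocsExamined"], es["nReturned"], es["executionTimeMillis"])

# If only some fields are needed, a projection reduces the bytes sent over the wire,
# which is often the dominant cost for multi-megabyte result sets against a remote Atlas cluster.
docs = list(db.mycollection.find(find_condition, {"name": 1, "createdAt": 1, "_id": 0}))
```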
null | [
"queries",
"node-js",
"data-modeling",
"performance"
] | [
{
"code": "// Dog entity with owners\n{\n _id: ObjectId(\"abc\"),\n breed: \"Pitbull\",\n owners: [\n { ownerId: ObjectId(\"123\"), walkCount: 5 },\n { ownerId: ObjectId(\"456\"), walkCount: 2 },\n ]\n}\n\ndb.dogs.index({ _id: 1, \"owners.ownerId\": 1 }, { unique: true })\n\n// Sample query\ndb.dogs.find({ _id: ObjectId(\"abc\"), \"owners.ownerId\": ObjectId(\"123\") })\n// Dog entity (tenant-specific)\n{\n dogId: \"abc\",\n breed: \"Pitbull\",\n ownerId: ObjectId(\"123\")\n walkCount: 5,\n},\n{\n dogId: \"abc\",\n breed: \"Pitbull\",\n ownerId: ObjectId(\"456\")\n walkCount: 2,\n}\n\ndb.dogs.index({ dogId: 1, ownerId: 1 }, { unique: true })\n\n// Sample query\ndb.dogs.find({ _id: ObjectId(\"abc\"), ownerId: ObjectId(\"123\") })\n",
"text": "Schema for multitenant entities with shared/public vs. tenant-specific/private propertiesIf the most common queries are\na) get all dogs by ownerId\nb) update dog by dogId+ownerIdIs a nested schema for dog with dog+owner relationship viable performance-wise or is it advisable to separate the collections (e.g. data duplication / joins)?Public attributes (e.g. “breed”) need to be synced between documents.",
"username": "Kim_Kern"
},
{
"code": "executionStatsdb.collection1.find({ _id: ObjectId(\"643ce9212c3e1d50dd8c493c\"), \"owners.ownerId\": 123 }).explain(\"executionStats\")--> Document\n{\n \"_id\": {\n \"$oid\": \"643ce9212c3e1d50dd8c493c\"\n },\n \"breed\": \"Pitbull\",\n \"owners\": [\n {\n \"ownerId\": 123,\n \"walkCount\": 5\n },\n {\n \"ownerId\": 456,\n \"walkCount\": 2\n }\n ]\n}\n\n-->Index\n\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { _id: 1, 'owners.ownerId': 1 },\n name: '_id_1_owners.ownerId_1',\n unique: true\n }\n]\n\n--> Output\n\n executionSuccess: true,\n nReturned: 1,\n executionTimeMillis: 0,\n totalKeysExamined: 1,\n totalDocsExamined: 1,\ndb.collection2.find({ \"dogId\": (\"abc\"), \"ownerId\": (123) }).explain(\"executionStats\")--> Documents\n{\n \"_id\": {\n \"$oid\": \"643cebfd2c3e1d50dd8c493f\"\n },\n \"dogId\": \"abc\",\n \"breed\": \"Pitbull\",\n \"ownerId\": 123,\n \"walkCount\": 5\n},\n{\n \"_id\": {\n \"$oid\": \"643cec642c3e1d50dd8c4940\"\n },\n \"dogId\": \"abc\",\n \"breed\": \"Pitbull\",\n \"ownerId\": 456,\n \"walkCount\": 2\n}\n \n-->Index\n\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { ownerId: 1, dogId: 1 },\n name: 'ownerId_1_dogId_1',\n unique: true\n }\n]\n\n--> Output\n\n executionSuccess: true,\n nReturned: 1,\n executionTimeMillis: 0,\n totalKeysExamined: 1,\n totalDocsExamined: 1,\n",
"text": "Hello @Kim_Kern ,Welcome to The MongoDB Community Forums! Is a nested schema for dog with dog+owner relationship viable performance-wise or is it advisable to separate the collections (e.g. data duplication / joins)?The design of your schema depends on your requirements and how comfortable are you to handle a complex system. As you have discussed some scenarios, I would recommend you to check the performance of your specific use-cases as per the requirements. You are right on the separate collection for both as well, I have seen many use-cases where people try to avoid duplication in their data. These are mostly for applications where you have a document that will never change, for example if we have an address collection and another collection of people, multiple people can stay at the same address so instead of writing the address every time some new documents of people are populated, they generally try to refer to the id of the address from another collection. Though it might increase the overhead as you need to use $lookup in your query but if you think that this will help you in some way in your use-case, you can design your application schema like this. Overall it totally depends on your requirements and your understanding of your application.Below are some links that I think will be helpful for you with regards to your schema design. Please take a look at these.Have you ever wondered, \"How do I model a MongoDB database schema for my application?\" This post answers all your questions!One way to check the performance of your system is to check the output of your query with explain in executionStats mode (e.g. `db.collection.explain(‘executionStats’).aggregate(…)). Let’s check this with an exampleThis is the document and index I added as first example and the execution stats of query db.collection1.find({ _id: ObjectId(\"643ce9212c3e1d50dd8c493c\"), \"owners.ownerId\": 123 }).explain(\"executionStats\")Below are the documents and index I added as second example and the execution stats of query db.collection2.find({ \"dogId\": (\"abc\"), \"ownerId\": (123) }).explain(\"executionStats\")As you can see in both the cases the execution time is pretty quick as our query was using the respective index and hence it was able to provide results without any overhead. We can also deduce that on small databases, we can have any type of schema and it will hardly matter, it will matter when your database will grow in size, whether your application will be write intensive/read intensive, resources available(RAM, Memory etc…), Indexes used (Efficient indexes improves the performance significantly) and others.Note: This is just an example and users should test and decide their application schemas and performance based on their own requirements/testing. For professional help, you can Get in touch with our team and they could help you with any related questions.Additionally, in case you put ObjectId as ObjectId(“abc”) or ObjectId(“123”), you will receive below errorBSONError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer.\nFor more information regarding this, please refer ObjectId documentationTo learn more about performance, please check below resourceRegards,\nTarun",
"username": "Tarun_Gaur"
}
] | Is there a performance penalty for querying in an array of subdocuments vs normalized schemas? | 2023-04-05T15:02:08.533Z | Is there a performance penalty for querying in an array of subdocuments vs normalized schemas? | 1,204 |
null | [] | [
{
"code": "",
"text": "I plan to translate Introduction to MongoDB — MongoDB Manual to chinese using hugo and github pages. I want to use the examples and logos in the documents, but I do’t know if I can use them. I can’t guarantee that I’ll be able to finish this job, however I hope to receive an official response. Could anyone answer me or help me consult the official team?",
"username": "AXHu"
},
{
"code": "",
"text": "Hi @AXHu welcome to the community!I have provided you with the details that may be helpful in a DM Best regards\nKevin",
"username": "kevinadi"
}
] | Could I use the examples and logos in chinese translation? | 2023-04-23T14:30:42.713Z | Could I use the examples and logos in chinese translation? | 811 |
null | [] | [
{
"code": "",
"text": "Has anyone experienced issues with Atlas App Service triggers not working in a Global Deployment setup?\nI have an app in one project that is a Single Deployment and the triggers work fine/execute as expected.When I push the exact same app code to a project with a Global Deployment, the same trigger code NEVER fires.I updated the Global Deployment to be Single Deployment in the second project and the triggers work fine with no code changes.WHY???",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "Hello,Please clarify this point.When I push the exact same app code to a project with a Global Deployment, the same trigger code NEVER fires.Are there any error logs at all in the logs? If so, please share.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "#2 - There’s no sign the trigger is ever fired.\nNo errors, nothing.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "Do you have exactly the same configuration for the trigger on both apps? (Particularly the match expression)Please share the json config files for the trigger on both apps.Regards",
"username": "Mansoor_Omar"
},
{
"code": "databaseSuffix",
"text": "The trigger config is the same and is attached.\nThe config references the databaseSuffix environment variable, which is different between the two projects.Once the app is deployed, the full database name ends up being:DEV: users-FAV-24-Add-modified-\nTEST: users-2023-04-10_1681166526TEST is the environment where the trigger did not fire with Global Deployment. After I changed to Single Deployment with no other changes, the trigger began firing.onUserProfileSavedTrigger.json (1.2 KB)",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "exports = function(changeEvent){\n\nconsole.log(JSON.stringify(changeEvent.updateDescription));\n\n};\n{\"updateDescription.updatedFields._modifiedTS\":{\"$exists\":false}}",
"text": "Thanks for that, I’ve found the app in question based on the trigger name.As the trigger has a match condition, please try the troubleshooting step of creating a duplicate copy of this trigger but remove the match condition and link it to a function that only has the following:Perform an update operation and check the logs to see if what is printed in updateDescription does allow your match expression {\"updateDescription.updatedFields._modifiedTS\":{\"$exists\":false}} to pass.If the trigger is still not firing with no match expression then this is most likely due to the cluster size being M0 which is not uncommon to experience issues with changestreams (that triggers use) since the resources are shared with other clusters. In fact I do see error logs on my side for this trigger pertaining to changestream limitations. Please try upgrading to a larger tier size (preferably a dedicated tier such as M10).Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "Logs:\n\n[ \"{\\\"updatedFields\\\":{\\\"email\\\":\\\"[email protected]\\\"},\\\"removedFields\\\":[],\\\"truncatedArrays\\\":[]}\" ]",
"text": "So the trigger does fire with the setup you mentioned. I can see entries in the logs and when I expand one of the entries, I see the below info.\nThis leads me to believe your last statement must be true – the trigger not firing is related to the cluster size.My next concern is, if I upgrade to a larger tier size, and then hit a certain limit, how will I know the trigger is no longer working, similar to the current situation?There’s no alert that I can see to inform me of this.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "And also, FYI, I just upgraded to a dedicated M10 cluster and triggers still DID NOT work with the app deployed globally. However, I changed the app deployment to local (same region as the cluster) and the triggers finally did work.So, it seems triggers are not supported with global deployment or something…I would like to know about alerting though. If triggers simply are not firing, then there needs to be an alert of some kind.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "Another interesting twist…it seems just redeploying the app sometimes fixes the trigger issue (not necessary switching the deployment location). I made a simple variable update (not related to triggers) and redeployed using the UI and then re-ran my tests. The trigger did fire after redeployment.Any ideas here??",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "const { MongoClient } = require('mongodb');\nconst accountSid = 'your_account_sid';\nconst authToken = 'your_auth_token';\nconst client = require('twilio')(accountSid, authToken);\n\n// Replace with your MongoDB Atlas connection string\nconst uri = 'mongodb+srv://<username>:<password>@<clustername>.mongodb.net/test?retryWrites=true&w=majority';\n\nconst dbName = 'test';\nconst collectionName = 'myCollection';\n\nasync function checkTrigger() {\n const client = await MongoClient.connect(uri);\n const collection = client.db(dbName).collection(collectionName);\n\n // Query the collection to check if data is being inserted\n const result = await collection.find({}).toArray();\n\n // If the result is empty, the trigger is not working\n if (result.length === 0) {\n // Send an SMS alert using Twilio API\n client.messages\n .create({\n body: 'Alert: Trigger is not working',\n from: '+1your_twilio_number',\n to: '+1your_phone_number'\n })\n .then(message => console.log(`Alert sent: ${message.sid}`));\n }\n\n await client.close();\n}\n\n// Call the function to check the trigger every 5 minutes\nsetInterval(checkTrigger, 5 * 60 * 1000);\n",
"text": "You have the option to sign up for a free trial subscription for development support and request a comprehensive evaluation of the backend associated with your triggers and functions. As M10 is designed for low-traffic production environments, it might be too small for your requirements. Therefore, it would be advisable to conduct a more in-depth analysis.Regarding identifying trigger failure, there are a few indicators:A way I get alerts is with my text alert script, all of my Atlas accounts (All DevOps honestly…) I have text alerts sent to me if something goes wrong.",
"username": "Brock"
},
{
"code": "",
"text": "As M10 is designed for low-traffic production environments, it might be too small for your requirements.So the answer is to continue paying more and more money and eventually triggers will work?\nWe’re talking one simple trigger here with the same code that is used in a separate project and works.Every time this trigger is deployed to the second project, it never works until the app is manually re-deployed.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "{\"updateDescription.updatedFields._modifiedTS\":{\"$exists\":false}}\n",
"text": "Hi, I am pretty sure that there should be no difference in the trigger for local vs global deployment and your trigger should definitely work regardless of your cluster tier. We do some rate limiting and limit the number of triggers you can have, but you have not hit those.I think that Manny is correct and this issue is more about your match expression. If I understand correctly, removing the match expression led to your trigger firing correctly, so the conclusion of that experiment is not that “the trigger not firing is related to the cluster size”, but rather that the match expression is likely misconfigured.Match expressions can admittedly be tricky. One thing that stands out is that your trigger is configured for Insert, Replace, and Update events, but your match expression is:This match expression will only pass for an update event where one of the modified fields is “_modifiedTS”. If something is inserted or replaced (and many tools like Compass are replacing objects, not updating them) then the match expression here will skip over the event (as designed). I see that you have specific logic in your function for handling insert events so I suspect this is the issue. This also explains why removing the match expression led to the execution of your trigger.I think it is worth pointing out that the Match Expression is more of a power-user feature in order to prevent the trigger from firing too much under a lot of load; however, it can be tricky to configure given it is a filter on the Change Event’s which are not something people are used to interacting with much. Therefore, I often advise people to use no Match Expression and instead write the filtering logic directly into the function where you have more control and understanding of the input.If you do want to continue with the Match Expression, can you clarify a few things:Thanks,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "@Tyler_KayeThere might actually be a bug in that case, as this isn’t the first time hearing something working in regional vs global.It’s also been observed seeing peoples apps regionally connecting with as much as 64.4% (per one user who launched their companies dispatch app) more easily with nearest regional cluster vs global deployments as well.This has been observed since global was released, and is viewable in previous ticket histories too for the TS department.Potentially it could be the match expression, but ultimately having something work like this in regional vs global is a fairly consistent phenomenon that’s been seen on more than one occasion.",
"username": "Brock"
},
{
"code": "",
"text": "I think that Manny is correct and this issue is more about your match expression.I think this is an incorrect assessment. I’ve done more testing and this issue does not seem to be related to global vs. local deployment like I first thought.To summarize my setup:\nI have a Github repo/Github Actions that automatically deploy a Realm app based on push/pull request.\nOn push, the Realm app is automatically pushed to a “DEV” environment (which is a separate Atlas project), and integration tests are run immediately after deployment, which include testing trigger functionality. So far, the triggers have been working as expected in DEV.On pull request, the exact SAME codebase is deployed to a “TEST” environment (which is another Atlas project) and the SAME integration tests are run, which currently fail consistently every time on the trigger tests.What I’ve noticed is the trigger doesn’t seem to “initialize” for at least 10-15 minutes after deployment in the TEST environment, so when the integration tests run immediately after deployment, the trigger is not yet running (I can confirm in the UI (“Last Cluster Time Processed” shows blank). If I wait (don’t have to re-deploy either like I initially thought), the trigger does eventually start.But now I’m wondering why there is a delay with the trigger in TEST project vs. DEV.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "What cloud services are each environment, and what regions?\nIs one on global and the other on regional?\nAre they both the same region?\n@Try_Catch_Do_NothingThe behavior you’re describing has been observed with globals (Time delays).Also @Try_Catch_Do_Nothing and @Tyler_KayeGitlab and Github deployments to Atlas can vary based on which service provider the cluster is on.This is largely speculated to be due to the hardware differences in say an Azure cluster vs AWS cluster specs etc.",
"username": "Brock"
},
{
"code": "",
"text": "Hi Brock, I appreciate you chiming in but I am happy to take this over.@Try_Catch_Do_Nothing do you mind answering my questions above? While it might not answer the entire question, I think there is definitely something to be said that you have function code asserting on the operation type being “insert” but have a match expression that will only let in Update events. However, I realize there might be two issues going on here.Can you link to your Test app? I see all of your organizations. Prod / Dev is a single region in Oregon and Test has no application linked to it from what I can see. I can tell you that there should not be any difference between any of these environments. The environment badges you place on your application do not affect anything in our deployment or the service we offer you.The conversation here seems to have gotten a little confusing to follow, so mind if I summarize:Let me know if this sounds correct.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hi Tyler, I’m deploying the latest build to test right now and it will automatically kick off the integration tests.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "App ID: 643ad6b60ee565a48b911db7\nTrigger ID: 643ad6c9d74168373224fcc5\n",
"text": "Hi, I need to step out for a few hours, but I just took a quick look and I can see trigger with:Starting up properly at: 4/15/23 4:57:44.315 PM (this is 3 minutes ago from the time of this post)Can you let me know how the test works? Also, can you let me know what update you are making that you are expecting to see the trigger fire on?Lastly, I am a touch confused still because it sounds like when you removed the match expression from the trigger, it worked. Is that correct?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "The same tests just ran against both environments with the same result. The trigger in TEST was not initialized in time when the tests ran, so they failed. DEV passed as usual.TEST:\n\nImage 4-15-23 at 10.04 AM1267×217 47.7 KB\nDEV:\n\nImage 4-15-23 at 10.05 AM1060×176 41.3 KB\n",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "Sat, 15 Apr 2023 16:55:15 GMT\n",
"text": "Starting up properly at: 4/15/23 4:57:44.315 PM (this is 3 minutes ago from the time of this post)This is the problem!\nThe tests began executing immediately after deployment which was:Is it normal to take several minutes after deployment for triggers to initialize??",
"username": "Try_Catch_Do_Nothing"
}
] | Triggers not working with Global Deployment | 2023-04-11T01:16:52.481Z | Triggers not working with Global Deployment | 1,654 |
null | [
"node-js",
"mongoose-odm",
"next-js"
] | [
{
"code": "import {\n models,\n model,\n Schema,\n} from 'mongoose';\nimport bcrypt from 'bcrypt';\n\nconst UserSchema: Schema = new Schema({\n email: {\n type: String,\n required: true,\n unique: true,\n },\n password: {\n type: String,\n required: true,\n },\n displayName: {\n type: String,\n required: true,\n },\n role: {\n type: String,\n },\n});\n\nUserSchema.pre('save', function (next) {\n console.log('Pre-Save Hash has fired.');\n let user = this;\n bcrypt.genSalt(10, (err, salt) => {\n if (err) console.error(err);\n bcrypt.hash(user.password, salt, (err, hash) => {\n user.password = hash;\n next();\n });\n });\n});\n\nconst UserModel = models.Users || model('Users', UserSchema, 'users');\n\nexport default UserModel;\nimport dbConnect from '@/utils/mongodb';\nimport UserModel from '@/models/user.model';\nimport { NextApiRequest, NextApiResponse } from 'next';\nimport { MongoError } from 'mongodb';\nexport default async function handler(\n req: NextApiRequest,\n res: NextApiResponse\n) {\n // const { email, password } = req.query;\n try {\n dbConnect();\n const query = req.body;\n const newUser = new UserModel(query);\n const addedUser= await newUser.save(function (err: MongoError) {\n if (err) {\n throw err;\n }\n });\n res.status(200).json(addedUser);\n } catch (error) {\n console.error(error);\n res.status(500).json({ message: 'Internal server error' });\n }\n}\n",
"text": "0I am trying to pre save and hash password with bcrypt in mongoose in my next.js project, but password still unhashed. i tryed every link in stackoverflow and didnt solve it, the password still saved unHashed. mongoose version: 6.9.1this is my users.model file:this is my adding function file:i cant see the ‘Pre-Save Hash has fired.’ in my console also…any help please?",
"username": "mikha_matta"
},
{
"code": "",
"text": "Same Happen with me Plz anyone have solution",
"username": "Hamza_Tahir_4301"
}
] | Mongoose middleware not triggered at all | 2023-02-16T22:51:46.486Z | Mongoose middleware not triggered at all | 1,673 |
null | [
"aggregation",
"dot-net"
] | [
{
"code": "$geoNear{\n near: {\n type: \"Point\",\n coordinates: [-110.29665, 31.535699],\n },\n distanceField: \"distance\",\n maxDistance: 100,\n query: {\n $and: [\n {\n IsResidential: true,\n },\n {\n DaysSinceLastSale: {\n $gt: 10,\n },\n },\n ],\n },\n spherical: true,\n},\n\n$lookup:\n{\n from: \"tax_assessor\",\n let: {\n propertyCity: \"Chicago\",\n propertyState: \"IL\"\n ownerName: \"$OwnerName\",\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n {$eq: [\"$PropertyCity\", \"$$propertyCity\"]},\n {$eq: [\"$OwnerName\", \"$$ownerName\"]},\n {$eq: [\"$PropertyState\", \"$$propertyState\"]},\n ],\n },\n },\n },\n ],\n as: \"NumberOfProperties\",\n},\n\n$project:\n{\n _id: 0,\n FullAddress: 1,\n OwnerName: 1,\n distance: 1,\n YearBuilt: 1,\n NumberOfProperties: {\n $size: \"$NumberOfProperties\"\n }\n}\nselect res1.owner_name, res1.full_address, res1.distance, res1.year_built, count(res2.owner_name) as property_count from \n(select * from properties where geolocation is <within a given range> and <some filters>) res1\nleft join \n(select * from properties where city=<given city> and state=<given state>) res2\non res1.owner_name = res2.owner_name\ngroup by res1.owner_name\norder by res1.distance\n\"stages\" : [\n {\n \"$geoNearCursor\" : {\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"tax_assessor\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {..},\n \"queryHash\" : \"4B38534E\",\n \"planCacheKey\" : \"2328FDE9\",\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {..},\n \"inputStage\" : {\n \"stage\" : \"GEO_NEAR_2DSPHERE\",\n \"keyPattern\" : {\n \"PropertyGeoPoint\" : \"2dsphere\",\n \"DaysSinceLastSale\" : 1,\n \"IsResidential\" : 1\n },\n \"indexName\" : \"sta_geo_idx\",\n \"indexVersion\" : 2,\n \"inputStages\" : [..]\n }\n },\n \"rejectedPlans\" : [ ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 2,\n \"executionTimeMillis\" : 1911,\n \"totalKeysExamined\" : 450,\n \"totalDocsExamined\" : 552,\n \"executionStages\" : {..},\n }\n },\n \"nReturned\" : NumberLong(2),\n \"executionTimeMillisEstimate\" : NumberLong(0)\n },\n {\n \"$lookup\" : {\n \"from\" : \"tax_assessor\",\n \"as\" : \"NumberOfProperties\",\n \"let\" : {\n \"propertyCity\" : {\n \"$const\" : \"COLUMBUS\"\n },\n \"propertyState\" : {\n \"$const\" : \"OH\"\n },\n \"ownerName\" : \"$OwnerName\"\n },\n \"pipeline\" : [\n {\n \"$match\" : {\n \"$expr\" : {\n \"$and\" : [\n {\n \"$eq\" : [\n \"$PropertyCity\",\n \"$$propertyCity\"\n ]\n },\n {\n \"$eq\" : [\n \"$PropertyState\",\n \"$$propertyState\"\n ]\n },\n {\n \"$eq\" : [\n \"$OwnerName\",\n \"$$ownerName\"\n ]\n }\n ]\n }\n }\n }\n ]\n },\n \"nReturned\" : NumberLong(2),\n \"executionTimeMillisEstimate\" : NumberLong(1910)\n },\n {\n \"$project\" : {\n \"OwnerName\" : true,\n \"FullAddress\" : true,\n \"distance\" : true,\n \"YearBuilt\" : true,\n \"NumberOfProperties\" : {\n \"$size\" : [\n \"$NumberOfProperties\"\n ]\n },\n \"_id\" : false\n },\n \"nReturned\" : NumberLong(2),\n \"executionTimeMillisEstimate\" : NumberLong(1910)\n },\n {\n \"$sort\" : {\n \"sortKey\" : {\n \"Distance\" : 1\n }\n },\n \"nReturned\" : NumberLong(2),\n \"executionTimeMillisEstimate\": NumberLong(1910)\n }\n ]\n",
"text": "I have a MongoDB aggregation pipeline, which I have written in C#.Here what I need is something similar to this SQL:I could get the correct result but this aggregation is very slow.When checking the execution plan, I saw the first stage - GeoNear has used an index. But in the second stage - lookup, it has not used any of the indexes.Based on the above stats, $lookup is why it doesn’t use any indexes. Does it give me an optimized result by rearranging the stages or applying a proper index? Or is there a better way to get the NumberOfProperties without using $lookup.",
"username": "Shehan_Vanderputt"
},
{
"code": "",
"text": "Hi @Shehan_Vanderputt and welcome to the MongoDB Community forum!!Does it give me an optimized result by rearranging the stages or applying a proper index? Or is there a better way to get the NumberOfProperties without using $lookup.In order to understand the requirement better, could you help me with some information to replicate in my local environment.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi, @Aasawari I have created sample data set with the necessary fields. You can find this using the below link.temp_data",
"username": "Shehan_Vanderputt"
},
{
"code": "test> db.sample.findOne()\n{\n _id: ObjectId(\"642e45dda86d7c5ba907fedc\"),\n DaysSinceLastSale: 59,\n IsResidential: true,\n Owner1NameFull: 'Elijah Fisher',\n AddressCity: 'New Bedford',\n AddressState: 'Arkansas',\n AddressZIP: 132895,\n position: [ -72.29083, 39.13419 ],\n year: '2055'\n}\ntest> db.sample.getIndexes()\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n {\n v: 2,\n key: { position: '2dsphere' },\n name: 'position_2dsphere',\n '2dsphereIndexVersion': 3\n },\n { v: 2, key: { AddressCity: 1 }, name: 'AddressCity_1' }\n]\ndb.sample.aggregate([{\n\t$geoNear: {\n\t\tnear: {\n\t\t\ttype: \"Point\",\n\t\t\tcoordinates: [-72.29083, 39.13419]\n\t\t},\n\t\tdistanceField: \"distance\",\n\t\t\"maxDistance\": 200000\n\t}\n}, {\n\t$lookup: {\n\t\tfrom: \"location\",\n\t\tlocalField: \"AddressCity\",\n\t\tforeignField: \"AddressCity\",\n\t\tlet: {\n\t\t\taddressCity: \"Portsmouth\",\n\t\t\taddressState: \"Idaho\",\n\t\t\townerName: \"Owner1NameFull\"\n\t\t},\n\t\tpipeline: [{\n\t\t\t$match: {\n\t\t\t\tAddressCity: '$$addressCity',\n\t\t\t\tAddressState: \"$$addressState\",\n\t\t\t\tOwner1NameFull: '$$ownerName'\n\t\t\t}\n\t\t}],\n\t\tas: \"newFields\"\n\t}\n}, {\n\t$project: {\n\t\t_id: 0,\n\t\t\"AddressCity\": 1,\n\t\t\"AddressState\": 1,\n\t\t\"Owner1NameFull\": 1,\n\t\t\"newFields\": {\n\t\t\t$size: \"$newFields\"\n\t\t}\n\t}\n}])\n$geoNearinputStage: {\n stage: 'IXSCAN',\n nReturned: 8,\n executionTimeMillisEstimate: 0,\n works: 27,\n advanced: 8,\n needTime: 18,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n keyPattern: { position: '2dsphere' },\n indexName: 'position_2dsphere',\n$matchtotalDocsExamined: Long(\"0\"),\n totalKeysExamined: Long(\"0\"),\n collectionScans: Long(\"0\"),\n indexesUsed: [ 'AddressCity_1' ],\n nReturned: Long(\"28\"),\n executionTimeMillisEstimate: Long(\"5\")\n",
"text": "Hi @Shehan_Vanderputt and thank you for sharing the sample data here.Taking reference from the sample data shared, I tried to create sample document inside the collection as:and the indexes defined on the collection are:As mentioned, I tried to use the query similar to the one mentioned in the first post:and it makes use of the Index for the geoNear and the match stage of the pipeline:For the $geoNearand for the $match stage:Please note that, for index to be used after the lookup stage, the joined collection need to have the index created on the fields.Please visit the documentation on Query Optimisation for further understanding.Let us know if you have any further questions.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "@Aasawari Thank you very much. That worked!!!",
"username": "Shehan_Vanderputt"
}
] | How to improve the performance of Aggregation Pipeline in MongoDb C# | 2023-03-21T12:03:24.865Z | How to improve the performance of Aggregation Pipeline in MongoDb C# | 1,156 |
null | [] | [
{
"code": "",
"text": "We are trying to move our data in real time from mongodb to redshift using oplog - change data capture method, is there is any idea to make it?",
"username": "SAKTHI_ESWARAN"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
] | How to migrate the data to live streaming from Mongodb to redshift via Kafka and kinesis | 2022-11-14T08:37:03.190Z | How to migrate the data to live streaming from Mongodb to redshift via Kafka and kinesis | 1,264 |
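The thread above was closed without an answer; for reference, here is a minimal PyMongo change-stream sketch of the capture side of such a pipeline (change streams are the supported API over the oplog). The URI, namespace, and the publish() hook are placeholders; forwarding events to Kafka/Kinesis and loading them into Redshift is out of scope for this sketch.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")     # placeholder URI (change streams need a replica set)
coll = client["mydb"]["orders"]                        # placeholder namespace


def publish(event):
    # Placeholder: hand the change event to a Kafka/Kinesis producer here.
    print(event["operationType"], event.get("documentKey"))


# full_document="updateLookup" attaches the current document to update events.
with coll.watch(full_document="updateLookup") as stream:
    for change in stream:
        publish(change)
        resume_token = stream.resume_token             # persist this so the pipeline can resume after restarts
```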
null | [
"backup"
] | [
{
"code": "",
"text": "We want to automate backup for Mongodb using powershell mongodb dump command but getting error while connecting to mongo db. Please help.",
"username": "Neeraj_Tripathi1"
},
{
"code": "",
"text": "What error are you getting?\nShow us a scrernshot",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Getting this error2023-04-17T13:23:41.764+0530 Failed: can’t create session: could not connect to server: connection() error occurred during connection handshake: auth error: unable to authenticate using mechanism “SCRAM-SHA-256”: (AuthenticationFailed) Authentication failed.using command as below$date = Get-Date -UFormat %Y-%m-%d;\n$backupFolder = $date;\n$basePath = “G:\\Backup”;\n$destinationPath = Join-Path $basePath $backupFolder;if(!(Test-Path -Path $destinationPath)) {\nNew-Item -ItemType directory -Path $destinationPath;\n(C:\"Program Files\"\\mongodb-database-tools\\bin\\mongodump.exe --uri --out $destinationPath);\n}skipped URI to hide environment details",
"username": "Neeraj_Tripathi1"
},
{
"code": "",
"text": "Can you connect to your mongodb from shell?\nAre you using correct user/PWD in your uri?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "yes username and pwd are correct. Even I can take backup using same URI and it is working fine.",
"username": "Neeraj_Tripathi1"
},
{
"code": "",
"text": "Are you using authentication DB in your uri?\nDoes you password has any special characters?\nSo mongodump works from command line but not from your automated script?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Nope I want to backup all DB but do you have a way then please share.\nyes\nIt works from commmand line but not from powershell. I get below error from powershell\nFailed: can’t create session: could not connect to server: connection() error occurred during connection handshake: auth error: sasl conversation error: unable to authenticate using mechanism “SCRAM-SHA-1”: (AuthenticationFailed) Authentication failed.Please help if you have any way to backup using powershell",
"username": "Neeraj_Tripathi1"
},
{
"code": "",
"text": "How do you connect to mongodb using power shell command/variables?\nMay be your user id not setup properly or you may not be authenticating against the correct db\nCheck this link.May helpIn this MongoDB tutorial, we'll show you how to use PowerShell to set up a MongoDB service on Windows, for an easier, all-encompassing installation process.",
"username": "Ramachandra_Tummala"
}
] | Powershell command to backup Mongodb dump | 2023-04-15T17:50:03.050Z | Powershell command to backup Mongodb dump | 1,406 |
null | [
"python",
"spark-connector"
] | [
{
"code": "spark.mongodb.change.stream.publish.full.document.onlyTrue(_id, version)-------------------------------------------\nBatch: 10\n-------------------------------------------\n+------------------------+-------+\n|_id |version|\n+------------------------+-------+\n|63eaa487426f9a7396ba6199|3 |\n|63eaa487426f9a7396ba6199|3 |\n+------------------------+-------+\n-------------------------------------------\nBatch: 20\n-------------------------------------------\n+------------------------+-------+\n|_id |version|\n+------------------------+-------+\n|63eaa487426f9a7396ba6199|4 |\n+------------------------+-------+\n\n-------------------------------------------\nBatch: 22\n-------------------------------------------\n+------------------------+-------+\n|_id |version|\n+------------------------+-------+\n|63eaa487426f9a7396ba6199|5 |\n+------------------------+-------+\n",
"text": "While experimenting with Spark connector I noticed that when changes to a single document in MongoDB are made very fast, only the latest version of a document will be sent to Spark connector. In such cases, very often number of events sent to Spark is correct, but all events contain the latest version of a given document. It that a normal behavior? How can I avoid it, and have the whole change history sent to Spark?Technical detailsI’m using pyspark 3.3.1, connector mongo-spark-connector_2.12:10.1.1, Mongodb 4.4.13\nI have spark.mongodb.change.stream.publish.full.document.only property set to True.ExampleSample code can be found here.For instance, say I’m modifying a collection in Mongo that consists only of docs like (_id, version).\nRunning 2 separate update operations in a row (updating version from 1 to 2, and afterwards from 2 to 3), without any delay, Spark in a single batch receives:While if i make a small delay in between, Spark will be sent separate events instead:",
"username": "Artur_Poplawski"
},
{
"code": "full_documentfull_document\"updateLookup\"full_document",
"text": "Mongodb spark connector is based on mongodb changestreams. However, when multiple changes to the same document occur in rapid succession, it is possible that the change stream will only capture the latest version of the document.This behavior is influenced by the full_document option in the change stream. When full_document is set to \"updateLookup\", the change stream returns the most current version of the document in the full_document field. This means that if multiple updates occur in rapid succession, the change stream may return the latest version of the document for each update event.It isn’t possible to configure this within Mongodb spark connector. To obtain the desired behavior you can try this:\nUse a separate collection to store the history of changes for each document. Instead of updating the document directly, you can insert a new document into the history collection with the updated version and a timestamp.",
"username": "Prakul_Agarwal"
}
] | Spark connector skipping update events made on the same document (while reading stream from MongoDB) | 2023-02-14T08:35:15.742Z | Spark connector skipping update events made on the same document (while reading stream from MongoDB) | 1,215 |
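A small PyMongo sketch of the workaround suggested above: record every change as a new document in an append-only history collection instead of (or in addition to) updating in place, so each version reaches the Spark stream as its own insert event even when updates happen in rapid succession. Collection and field names (docs, docs_history, version) are assumptions for illustration.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
db = client["mydb"]                                  # placeholder database
docs = db["docs"]                                    # current state, updated in place as before
history = db["docs_history"]                         # append-only history the Spark stream reads from


def update_with_history(doc_id, new_version):
    # Keep the "latest" view of the document...
    docs.update_one({"_id": doc_id}, {"$set": {"version": new_version}})
    # ...and also append an immutable record of the change; inserts are never
    # collapsed the way successive updates to the same document can be.
    history.insert_one({
        "doc_id": doc_id,
        "version": new_version,
        "changed_at": datetime.now(timezone.utc),
    })
```

The structured stream would then be pointed at docs_history rather than docs.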
null | [
"python",
"spark-connector"
] | [
{
"code": "",
"text": "For example, mongodb collection have 2 fields already.pyspark dataframe is contains 3 fields with primary key .\nso, if I m updating same document on key I want to keep old fields too while writing dataframe . I don’t want to lose old data and update fields which has been in dataframeIs it possible ? Please suggest if pyspark writing configuration available that would be helpful.Example as below:Data present in collection:New Dataframe:I want result in mongodb collection as below:",
"username": "Aishwarya_N_A"
},
{
"code": "operationTypeinsertreplaceidFieldListupsertDocumentupdateidFieldListupsertDocument",
"text": "Hi Aishwarya,\nThe write configurations are defined here: https://www.mongodb.com/docs/spark-connector/current/configuration/write/The relevant config would be the following:\n||operationType|Specifies the type of write operation to perform. You can set this to one of the following values:Let us know if this answered your question",
"username": "Prakul_Agarwal"
}
] | Writing configuration for upsert with pyspark in mongodb | 2022-12-13T12:32:51.007Z | Writing configuration for upsert with pyspark in mongodb | 2,288 |
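A hedged PySpark sketch of the write configuration discussed above, assuming connector 10.x: operationType "update" combined with idFieldList on the key is intended to update only the fields present in the DataFrame and leave the document's other fields in place, while upsertDocument inserts when no match exists. The URI, namespace, and key column are placeholders, and the exact option-key prefixes should be checked against the write-configuration page linked in the reply.

```python
# df is assumed to be the DataFrame holding the key plus the changed/new fields (Field2, Field3).
(
    df.write.format("mongodb")
      .option("spark.mongodb.write.connection.uri", "mongodb+srv://user:password@cluster.example.net/")  # placeholder
      .option("spark.mongodb.write.database", "mydb")             # placeholder
      .option("spark.mongodb.write.collection", "mycollection")   # placeholder
      .option("spark.mongodb.write.operationType", "update")      # update matched documents field by field
      .option("spark.mongodb.write.idFieldList", "key")           # match on this field instead of _id
      .option("spark.mongodb.write.upsertDocument", "true")       # insert when no matching document exists
      .mode("append")
      .save()
)
```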
null | [
"spark-connector"
] | [
{
"code": "",
"text": "It is documented here https://www.mongodb.com/docs/spark-connector/current/write-to-mongodb/ that null-valued column will be written into MongoDB. Is there anyway we could bypass this",
"username": "Vincent_Chee"
},
{
"code": "",
"text": "Before v10 Spark connector, the Spark connector didn’t write null value column into MongoDB. This is changed after v10. There is a tracker ticket to make this configurable: https://jira.mongodb.org/projects/SPARK/issues/SPARK-394",
"username": "Prakul_Agarwal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to not write null column with Mongo Spark? | 2023-02-17T06:50:36.197Z | How to not write null column with Mongo Spark? | 1,411 |
null | [
"change-streams",
"spark-connector"
] | [
{
"code": "",
"text": "Dear Community,We are evaluating the spark-connector in version 10.1.1 to stream the data into Spark but could not find an option on below yet and appreciate your suggestions. We are using payspark and with Databricks to structure stream the data.How to stream data from multiple collections of a database\n.option(“spark.mongodb.read.collection”, collection1, collection2,…collectionN)How to stream data from multiple databases\n.option(“spark.mongodb.read.database”, DB1, DB2,…DBn)How read the existing data of collection first and then start the streaming\nExample: “copy.existing” which will copy the existing data first then start the stream of data.Thanks in anticipation!",
"username": "Ravi_Kottu"
},
{
"code": "",
"text": "1 & 2.\nThe MongoDB Spark Connector facilitates the transfer of data between MongoDB and a Spark DataFrame. Despite its capabilities, the connector is limited to interacting with only one MongoDB collection during each read or write operation. As a result, it does not natively support reading or writing from multiple database/collections/ schemas, simultaneously in a single operation.With that said, here is the approach you can use with MongoDb Spark connector: You can create a loop that iterates over the list of collections you want to read from, and for each collection, use the MongoDB Spark Connector to read the data into Spark. You can then perform any necessary transformations and write the data to the target Delta table.This approach involves setting up one pipeline for each collection, but it can be automated using a loop. This also applied to working with multiple MongoDB instances, you will need to create separate Spark Configuration for each instance, as the connector’s Configuration is specific to a single MongoDB instance. If you want to be reading from only a subset of collections in a MongoDB instance you can create a config file that can be used as the initial list to iterate over and create connections, or you can query the database for a list of collections.The Spark connector doesnt have the native ability to read the existing data of collection first and then start the streaming. We have a Jira to track the ability to copy existing data. https://jira.mongodb.org/projects/SPARK/issues/SPARK-303 Here are some ways described to copy the data over.How to Seamlessly Use MongoDB Atlas and Databricks Lakehouse Together",
"username": "Prakul_Agarwal"
}
] | Streaming From Multiple Specific Collections Using MongoDB Spark Connector 10.x | 2023-03-15T05:25:18.760Z | Streaming From Multiple Specific Collections Using MongoDB Spark Connector 10.x | 1,381 |
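A hedged PySpark sketch of the loop-per-collection approach described in the reply above, reusing the option names from the question. The collection list, URI, schema, and checkpoint/output paths are placeholders; `spark` is assumed to be an existing SparkSession (as on Databricks), and option keys should be checked against the connector's read-configuration docs.

```python
from pyspark.sql.types import StructType, StructField, StringType

collections = ["collection1", "collection2", "collection3"]        # placeholder list, or query it from the DB
uri = "mongodb+srv://user:password@cluster.example.net/"            # placeholder URI
schema = StructType([StructField("_id", StringType(), True)])       # placeholder schema per collection

queries = []
for name in collections:
    stream_df = (
        spark.readStream.format("mongodb")
             .option("spark.mongodb.read.connection.uri", uri)
             .option("spark.mongodb.read.database", "mydb")                          # placeholder
             .option("spark.mongodb.read.collection", name)
             .option("spark.mongodb.change.stream.publish.full.document.only", "true")
             .schema(schema)
             .load()
    )
    # One sink and one checkpoint per collection, e.g. a Delta table named after it.
    query = (
        stream_df.writeStream.format("delta")
                 .option("checkpointLocation", f"/tmp/checkpoints/{name}")           # placeholder path
                 .outputMode("append")
                 .start(f"/tmp/delta/{name}")                                         # placeholder path
    )
    queries.append(query)
```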
null | [
"aggregation"
] | [
{
"code": "I have to inner join below collection with multiple conditions \ncards: id, name, status, \ncard_emotions: id, card_id, user_id,text \ncommon field: id and card_id ( foreign field of cards [ id ]\n\n select * from cards c on inner join card_emotions cm on c.id = cm.card_id \nwhere c.status = ' ACTIVE' AND cm.user_id = 13\n",
"text": "",
"username": "Sarang_Patel"
},
{
"code": "",
"text": "Please provide sample documents we can play with in our environment.Also provide sample resulting documents.Share what you have tried so far and indicate where it fails so that we do not orient ourselves in the same direction.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Sarang_Patel\nDid you figure out the inner join query in MongoDB? Am looking for the same.",
"username": "Vidya_Swar"
}
] | How to write equivalent mongodb pipeline, same as below sql | 2022-02-02T08:56:02.872Z | How to write equivalent mongodb pipeline, same as below sql | 1,711 |
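Since the thread above did not end with a worked answer, here is a hedged PyMongo sketch of an aggregation that mirrors the SQL in the question: filter active cards, $lookup the matching card_emotions, and use $unwind plus a match on user_id so that only cards with at least one matching emotion survive (inner-join semantics). Field names follow the question; the URI and database name are placeholders.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")    # placeholder URI
db = client["mydb"]                                   # placeholder database

pipeline = [
    {"$match": {"status": "ACTIVE"}},                 # WHERE c.status = 'ACTIVE'
    {"$lookup": {                                     # JOIN card_emotions cm ON c.id = cm.card_id
        "from": "card_emotions",
        "localField": "id",
        "foreignField": "card_id",
        "as": "emotions",
    }},
    {"$unwind": "$emotions"},                         # drops cards with no match, giving inner-join behaviour
    {"$match": {"emotions.user_id": 13}},             # AND cm.user_id = 13
]

for doc in db.cards.aggregate(pipeline):
    print(doc)
```

Moving the user_id condition inside the $lookup (using its let/pipeline form, as in the first thread on this page) filters the joined collection earlier, which is usually cheaper on large collections.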
null | [] | [
{
"code": "",
"text": "Hi Team,Starting from mongo 4.4, mongo supports mirrored read for good reasons. I understand that. But the document lacking few important information,Thanks,\nVenkataraman",
"username": "venkataraman_r"
},
{
"code": "",
"text": "Mirrored reads do not affect the primary’s response to the client. The reads that the primary mirrors to secondaries are “fire-and-forget” operations. The primary doesn’t await responses.from https://www.mongodb.com/docs/manual/replication/#std-label-mirrored-reads",
"username": "Kobe_W"
},
{
"code": "",
"text": "@Kobe_W Thanks for your reply. Can you please answer the other questions.",
"username": "venkataraman_r"
},
{
"code": "",
"text": "Hi Team,Can you please answer the questions. Due to this mirroredRead we see few extra connections created between primary to secondary which is causing resource usage. We would like to control the number of connections opened between pri and sec. Please reply.",
"username": "venkataraman_r"
}
] | Clarification required on Mirrored read maxTimeMS setting | 2023-03-29T05:18:16.722Z | Clarification required on Mirrored read maxTimeMS setting | 1,018 |
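The questions in the thread above went unanswered here; for reference, the documented knob for mirrored reads is the mirrorReads server parameter, whose samplingRate (default 0.01) can be lowered, or set to 0.0 to disable mirroring. A hedged PyMongo sketch follows; note that this controls how many reads are mirrored rather than a per-connection limit, and managed environments such as Atlas may not allow setting server parameters.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")    # placeholder URI; run this against the primary

# Inspect the current mirrored-read settings.
current = client.admin.command({"getParameter": 1, "mirrorReads": 1})
print(current["mirrorReads"])

# Lower the sampling rate; 0.0 effectively turns mirrored reads off.
client.admin.command({"setParameter": 1, "mirrorReads": {"samplingRate": 0.0}})
```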
[] | [
{
"code": "",
"text": "Hi\nAfter 2 years of using Realm db, My database storage exceeds 250GB . I deeply investigate and see that there is another database name “__realm_sync” that takes a lot of storage.\nScreen Shot 2023-04-20 at 15.28.191010×401 35.9 KB\nSo i want to delete this db, or the collections in this db like “history” for reducing storage.\nBut i don’t know the correct way to delete it.\nCould u help me to give step by step for this?Also what happens to the clients after i delete this db? For example in the mobile (client) user turn off wifi (offline mode) and update something to realm db. At that time, i terminate the sync of realm to delete the __realm_sync db, and re-enable it again. What happens when the user turn the wifi one (online mode) ?",
"username": "Phuong_Dao_Quang"
},
{
"code": "",
"text": "Hi,Please do not delete that database. Sync is “history based”, and the history collection is how we send deltas, resolve conflicts, catch up clients after some time. Dropping that collection would result in sync just not working any more (you would need to terminate and re-enable)For your app right now, we do have some processes to compact away history that is no longer necessary. This is always running asynchronously but I can prioritize it on your application if you can send me a link to your application in realm.mongodb.com I can prioritize your application and share the details of how much history we are compacting away. See here for our docs on this process: https://www.mongodb.com/docs/atlas/app-services/reference/partition-based-sync/#std-label-backend-compactionIn terms of actions you can take, you have 2 options:Please let me know which route you would like to go and I would be happy to continue discussing this with you.Thanks,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "@Phuong_Dao_Quang I would listen to @Tyler_Kaye he would most definitely know the result of deleting history…I’ve also seen what happens when someone deletes history lol, it’s not pretty.But my recommendation would be Option 1, moving to Flexible Sync, it’s so much nicer and is honestly the main version of Sync I recommend to anyone that looks at Device Sync or asks me about it. Especially the performance improvement and it’s inability to over or under fetch because of underlying tech.",
"username": "Brock"
},
{
"code": "",
"text": "Thank you, @Tyler_Kaye\nYes please prioritize my app App Services.\nAnother question.\nIf i move to Flexible Sync, so we have to config it on backend and on the app, also release a new version of app. What happens to old app? old app is still using config of Partition Based sync. Should i force user to update to the new one ?",
"username": "Phuong_Dao_Quang"
},
{
"code": "",
"text": "Hi,It seems like you terminated and re-enabled sync, so I will hold off on prioritizing any compaction processes. This should limit the amount of history for now which is great.I did take a peek at your compactions and we were performing a lot of compactions on your application (about 30,000 partitions compacted in the last 3 days). Most of the compactions were successful at reducing the size by about 80%. This is with the exception of a single partition you have that is 62 GB and was only being partially compacted due to its size. We do have the ability to set more aggressive limits for this on-demand, so I can check back in in a few days to see if you still have a large partition.As for migrating to Flexible Sync, we are planning on releasing a solution very soon that will allow you to upgrade the version of your SDK, release it to your applications, and then migrate the backend to use Flexible Sync instead of Partition Sync.Will try to poke in to see if the initial sync completes soon,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Thank @Tyler_Kaye\nYeah i terminated and re-enabled sync.\nI’m intending to inform u haha.“As for migrating to Flexible Sync, we are planning on releasing a solution very soon that will allow you to upgrade the version of your SDK, release it to your applications, and then migrate the backend to use Flexible Sync instead of Partition Sync.”Good to hear that.Thank you!",
"username": "Phuong_Dao_Quang"
},
{
"code": "Worker completed initial sync for namespace \"DetailActivityData\" (214804945 docs scanned) in 12m52.625842952s\t\t\t\nWorker completed initial sync for namespace \"StaticHR\" (321890108 docs scanned) in 1h21m50.33524404s\t\t\t\nWorker completed initial sync for namespace \"TotalActivityData\" (9426129 docs scanned) in 27m50.944153949s\t\t\t\nWorker completed initial sync for namespace \"HRVData2\" (1741170 docs scanned) in 10m8.47805795s\t\t\t\nWorker completed initial sync for namespace \"Spo2Data\" (1251942 docs scanned) in 10m7.047377981s\t\t\t\nWorker completed initial sync for namespace \"ActivityModeData\" (837124 docs scanned) in 10m4.935638082s\t\t\t\nWorker completed initial sync for namespace \"TemperatureData\" (954323 docs scanned) in 10m2.879689105s\t\t\t\t\nWorker completed initial sync for namespace \"ScaleData\" (62854 docs scanned) in 1m46.092303562s\n",
"text": "Hi, just took a look at your initial sync and looks like it is all done. IF you are curious here are some stats on how long things took (all collections are synced in parallel):Anonymizing the collection names above a bit.As a warning, writes made by devices might be a little delayed in going to MongoDB for an hour or 2 as it churns through the backlog of writes. This is another thing that was very much improved and avoided in Flexible Sync. Either way, your app should return to normal within an hour or 2.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Thank you for your information @Tyler_Kaye",
"username": "Phuong_Dao_Quang"
},
{
"code": "",
"text": "Just want to add, Flexible sync is so nice! It really is an upgrade from the older Partition based designs, I do highly encourage moving to it asap and just see how much nicer it is both on your data storage, and on the speed of the transactions!All clients etc. I do consults for, I honestly push flexible sync when they want Realm as an option. Flexible sync is definitely a great long-term choice.@Phuong_Dao_Quang You would definitely not regret going to Flexible sync. Even if you manually made the changes yourself, or waited for this new migration tool. It’s well worth it.",
"username": "Brock"
},
{
"code": "",
"text": "After hear that i want to move to Flexible sync asap haha.\nCould you give me step by step to move to do that ? @Brock\nIf it’s too complex so i will wait for the migration tool from @Tyler_Kaye",
"username": "Phuong_Dao_Quang"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm, The __realm_sync database take a lot of storage. What happens if i delete the __realm_sync database? | 2023-04-20T08:35:40.622Z | Realm, The __realm_sync database take a lot of storage. What happens if i delete the __realm_sync database? | 893 |
|
null | [
"data-modeling"
] | [
{
"code": "",
"text": "is there an equivalent to drawsql for mongoDB or nosql in general? Like some kind of tool to visually plot out how the data may or may not be stored?",
"username": "James_Hagood"
},
{
"code": "",
"text": "I do not see a reason why you cannot use drawSQL to model your collections",
"username": "Gio_N_A"
}
] | Drawsql equivalent | 2023-01-02T02:24:58.001Z | Drawsql equivalent | 1,430 |
null | [
"queries",
"replication",
"python",
"atlas-cluster"
] | [
{
"code": "from pymongo import MongoClient\nmyclient = MongoClient(\"mongodb+srv://LALALA:[email protected]/?retryWrites=true&w=majority\")\nmydb = myclient[\"somedb\"]\ncoll = mydb['mycoll']\n\nprint(coll.find())\n",
"text": "Dear all,this has been happening for a long time. I remember looking into different forums and lots of people saying they were having the same problem.I have a simple script that connects to Mongo, but it is working intermittently. Most of the time doesn’t work and rarely works.Here is the code:And here is the error message:pymongo.errors.ServerSelectionTimeoutError: No replica set members match selector “Primary()”, Timeout: 30s, Topology Description: <TopologyDescription id: 64426016e16bb77f0fd1ec49, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription (‘makesmiledb-shard-00-00.trs4s.mongodb.net’, 27017) server_type: RSSecondary, rtt: 0.02426325880332851>, <ServerDescription (‘makesmiledb-shard-00-01.trs4s.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=NetworkTimeout(‘makesmiledb-shard-00-01.trs4s.mongodb.net:27017: timed out’)>, <ServerDescription (‘makesmiledb-shard-00-02.trs4s.mongodb.net’, 27017) server_type: RSSecondary, rtt: 0.02303117101703737>]>Don’t know what else todo. As I said, sometimes it retrieve the data, and sometimes I get timeout.Thanks.",
"username": "Caio_Marcio"
},
{
"code": "pip list>>> from pymongo import MongoClient\n>>> myclient = MongoClient(\"mongodb+srv://makesmiledb.trs4s.mongodb.net/?retryWrites=true&w=majority\")\n>>> myclient.somedb.command('ping')\n{'ok': 1}\n",
"text": "Hi @Caio_Marcio, two questions,I was able to connect to locally your cluster with PyMongo 4.3.3:",
"username": "Steve_Silvester"
},
{
"code": "Python 3.11.0 (main, Oct 24 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-2)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from pymongo import MongoClient\n>>> myclient = MongoClient(\"mongodb+srv://makesmiledb.trs4s.mongodb.net/?retryWrites=true&w=majority\")\n>>> myclient.somedb.command('ping')\nTraceback (most recent call last):\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/pool.py\", line 1358, in connect\n sock = _configured_socket(self.address, self.opts)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/pool.py\", line 1052, in _configured_socket\n sock = _create_connection(address, options)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/pool.py\", line 1036, in _create_connection\n raise err\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/pool.py\", line 1029, in _create_connection\n sock.connect(sa)\nTimeoutError: timed out\nThe above exception was the direct cause of the following exception:\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/_csot.py\", line 105, in csot_wrapper\n return func(self, *args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/database.py\", line 805, in command\n with self.__client._socket_for_reads(read_preference, session) as (\n File \"/usr/lib64/python3.11/contextlib.py\", line 137, in __enter__\n return next(self.gen)\n ^^^^^^^^^^^^^^\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/mongo_client.py\", line 1282, in _socket_from_server\n with self._get_socket(server, session) as sock_info:\n File \"/usr/lib64/python3.11/contextlib.py\", line 137, in __enter__\n return next(self.gen)\n ^^^^^^^^^^^^^^\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/mongo_client.py\", line 1217, in _get_socket\n with server.get_socket(handler=err_handler) as sock_info:\n File \"/usr/lib64/python3.11/contextlib.py\", line 137, in __enter__\n return next(self.gen)\n ^^^^^^^^^^^^^^\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/pool.py\", line 1407, in get_socket\n sock_info = self._get_socket(handler=handler)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/pool.py\", line 1520, in _get_socket\n sock_info = self.connect(handler=handler)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/pool.py\", line 1366, in connect\n _raise_connection_failure(self.address, error)\n File \"/usr/local/lib64/python3.11/site-packages/pymongo/pool.py\", line 254, in _raise_connection_failure\n raise NetworkTimeout(msg) from error\npymongo.errors.NetworkTimeout: makesmiledb-shard-00-01.trs4s.mongodb.net:27017: timed out\n[root@LAB ~]# mongosh 'mongodb+srv://makesmiledb.trs4s.mongodb.net/?retryWrites=true&w=majority'\nCurrent Mongosh Log ID: 644302559e6d3fd5579fe4ec\nConnecting to: mongodb+srv://makesmiledb.trs4s.mongodb.net/?retryWrites=true&w=majority&appName=mongosh+1.8.0\nMongoServerSelectionError: Server selection timed out after 30000 ms\n[root@LAB ~]# mongosh 'mongodb+srv://makesmiledb.trs4s.mongodb.net/?retryWrites=true&w=majority'\nCurrent Mongosh Log ID: 64430277f3f213c02e7fe426\nConnecting to: mongodb+srv://makesmiledb.trs4s.mongodb.net/?retryWrites=true&w=majority&appName=mongosh+1.8.0\nUsing MongoDB: 6.0.5\nUsing Mongosh: 1.8.0\n\nFor 
mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\nAtlas atlas-srdjma-shard-0 [primary] test> exit\n[root@LAB ~]# mongosh 'mongodb+srv://makesmiledb.trs4s.mongodb.net/?retryWrites=true&w=majority'\nCurrent Mongosh Log ID: 64430284fd8412c7e567b211\nConnecting to: mongodb+srv://makesmiledb.trs4s.mongodb.net/?retryWrites=true&w=majority&appName=mongosh+1.8.0\nUsing MongoDB: 6.0.5\nUsing Mongosh: 1.8.0\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\nAtlas atlas-srdjma-shard-0 [primary] test> exit\n[root@LAB ~]# mongosh 'mongodb+srv://makesmiledb.trs4s.mongodb.net/?retryWrites=true&w=majority'\nCurrent Mongosh Log ID: 64430288d6f8de4610cb6a4c\nConnecting to: mongodb+srv://makesmiledb.trs4s.mongodb.net/?retryWrites=true&w=majority&appName=mongosh+1.8.0\nMongoServerSelectionError: Server selection timed out after 30000 ms\n",
"text": "Hi @Steve_Silvester , thanks for your answer.There you go.PINGPIP[root@LAB ~]# pip3.11 list\nPackage Version\nargcomplete 2.0.0\ncffi 1.15.1\ncharset-normalizer 2.1.0\nclick 8.1.3\ndasbus 1.6\ndbus-python 1.3.2\ndistro 1.7.0\ndnspython 2.3.0\nfile-magic 0.4.0\nFlask 2.2.3\nFlask-Cors 3.0.10\ngpg 1.17.0\nidna 3.3\nitsdangerous 2.1.2\nJinja2 3.1.2\nlibcomps 0.1.18\nMarkupSafe 2.1.2\nnftables 0.1\npexpect 4.8.0\npip 22.2.2\nply 3.11\nptyprocess 0.6.0\npycparser 2.20\nPyGObject 3.42.2\npymongo 4.3.3\nPySocks 1.7.1\npython-augeas 1.1.0\npython-dateutil 2.8.2\nrequests 2.28.1\nrpm 4.18.0\nselinux 3.4\nsepolicy 3.4\nsetools 4.4.0\nsetroubleshoot 3.3.31\nsetuptools 62.6.0\nsix 1.16.0\nsos 4.4\nsystemd-python 235\nurllib3 1.26.12\nWerkzeug 2.2.3MONGOSHTried a few times… some worked and some not.Thanks.",
"username": "Caio_Marcio"
},
{
"code": "",
"text": "Typically this error occurs when your client cannot connect to your cluster (ex: the application’s IP address is not in Atlas’ IP Access List ), or if there is a network issue between the client and the server.",
"username": "Steve_Silvester"
}
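Editor's note: when chasing an intermittent ServerSelectionTimeoutError like the one above, it can help to fail fast and print the full topology error instead of waiting out the default 30-second selection timeout. A small PyMongo sketch; the connection string is a placeholder:

```python
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Placeholder URI; reuse the SRV string from the original script.
client = MongoClient(
    "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/?retryWrites=true&w=majority",
    serverSelectionTimeoutMS=5000,  # fail after 5s instead of 30s while debugging
)

try:
    client.admin.command("ping")
    print("ping ok")
except ServerSelectionTimeoutError as exc:
    # The message lists every replica set member and why it was unreachable,
    # which usually points at a network or IP-access-list problem for a specific host.
    print("server selection failed:", exc)
```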
] | Pymongo getting timeout from atlas | 2023-04-21T11:20:17.632Z | Pymongo getting timeout from atlas | 835 |
null | [] | [
{
"code": "",
"text": "Hi,1.Do we have rest endpoint for also listing db names and listing all collection name2.SCRAM,X509 Authentication, LDAP, Kerberos Authentication mechanism – this Authentication mechanism is for Database User login, right? We are not using it externally when we connect to db with Rest Endpoints.3.For rest endpoints Authentication mechanism (Authentication Provider) is :API key, email-password, JWT token – From https://www.mongodb.com/docs/atlas/app-services/users/sessions/ I am understanding we can get bearer token and refresh token.I tried from postman ,getting below response. “Authentication via local-userpass is unsupported”, same observed with apikey and annon-user.Thanks,\nShubhangi",
"username": "Shubhangi_Pawar"
},
{
"code": "",
"text": "Can you share more context on what you’re looking to do/build?",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "we are looking for REST endpoint only to list the all collections. we are planning to implement connector for Mongo DB, currently we don’t have JDBC driver support so planning to use Rest endpoints.",
"username": "Shubhangi_Pawar"
},
{
"code": "",
"text": "I’ll reach out via email to ensure you’re getting the help you need",
"username": "Andrew_Davidson"
}
] | Do we have Mongo DB Atlas Endpoints to list collections /Databases | 2023-04-19T05:43:51.341Z | Do we have Mongo DB Atlas Endpoints to list collections /Databases | 414 |
null | [] | [
{
"code": "",
"text": "Is there a way to specify a deployment region (or global deployment) while deploying from Realm CLI?I have an app that is deployed to a DEV project, and the deployment region is selected as Single Deployment/region automatically.\nWhen I deploy the same app to a different TEST project, the Global Deployment option is selected.Hoping to control this option during deployment and not have to manually change later.",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "--deployment-targetrealm-cli deploy --project-id=my-project --deployment-model=GLOBAL --deployment-target=us-west-2\n--deployment-model--deployment-targetrealm-cli deployment list",
"text": "Hi @Try_Catch_Do_NothingYes, you can specify the deployment region while deploying from Realm CLI using the --deployment-target flag.Here’s an example command to deploy to the “us-west-2” region:In this example, the --deployment-model flag is set to “GLOBAL” to indicate a global deployment, and the --deployment-target flag is set to “us-west-2” to specify the deployment region.You can replace “my-project” with your actual project ID.Note: The available deployment regions may depend on the selected deployment model and the location of your Realm app. You can use the realm-cli deployment list command to see the available deployment regions for your app.",
"username": "Brock"
},
{
"code": "unknown command \"deployment\" for \"realm-cli\"realm-cli -v \nrealm-cli version 2.6.2\n",
"text": "realm-cli deployment listWith realm-cli 2.6.2, I get the below error when I run the command you mentioned:\nunknown command \"deployment\" for \"realm-cli\"",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "deploy",
"text": "Also, deploy is not a valid command for realm-cli that I have installed.\nAre you referencing an older version or am I missing something?",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "One sec, let me look back at the docs.",
"username": "Brock"
},
{
"code": "--deployment-targetrealm-cli deploy --project-id=my-project --deployment-model=GLOBAL --deployment-target=us-west-2\nmy-projectus-west-2realm-cli app describe--project-idrealm-cli app describe --project-id=my-project\nmy-projectrealm-cli deploynpm install -g mongodb-realm-clirealm-cli deploy --project-id=my-project\nmy-projectdeploymentrealm-cli atlas env listrealm-cli atlas env list --project-id=my-project\nmy-project",
"text": "@Try_Catch_Do_Nothing\nHoly moly, sorting through the CLI docs is like sorting spaghetti…After searching through the MongoDB Atlas documentation and related links, I found the following information related to the discussion:Replace my-project with your actual project ID, and us-west-2 with the desired region.Replace my-project with your actual project ID. This command will output information about your app, including the available deployment targets.Replace my-project with your actual project ID.Replace my-project with your actual project ID.I hope this information helps to resolve the issue.",
"username": "Brock"
},
{
"code": "realm-cli -hAvailable Commands:\n whoami Display information about the current user\n login Log the CLI into Realm using a MongoDB Cloud API Key\n logout Log the CLI out of Realm\n push Imports and deploys changes from your local directory to your Realm app (alias: import)\n pull Exports the latest version of your Realm app into your local directory (alias: export)\n apps Manage the Realm apps associated with the current user (alias: app)\n users Manage the Users of your Realm app (alias: user)\n secrets Manage the Secrets of your Realm app (alias: secret)\n logs Interact with the Logs of your Realm app (alias: log)\n function Interact with the Functions of your Realm app (alias: functions)\n schema Manage the Schemas of your Realm app (alias: schemas)\n accessList Manage the allowed IP addresses and CIDR blocks of your Realm app (aliases: accesslist, access-list)\n help Help about any command\n",
"text": "None of the commands/arguments you listed seem to work with the latest realm-cli version.\nWhen I run realm-cli -h, the following commands are listed:",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "I’m coming to realize the docs are all over the place for the Realm CLI…",
"username": "Brock"
},
{
"code": "",
"text": "It appears the functionality you mentioned is available in the Admin API, but I haven’t seen anything for the Realm CLI so far.https://www.mongodb.com/docs/atlas/app-services/admin/api/v3/#tag/deploy/operation/adminListAppRegions",
"username": "Try_Catch_Do_Nothing"
},
{
"code": "",
"text": "Yeah, when you try to use the documents search, the snips it pulls are coming from both admin API and Realm CLII didn’t realize the text paragraphs it pulls are coming from both of them.",
"username": "Brock"
},
{
"code": "realm-cli pullrealm_config.jsonrealm-cli push",
"text": "Hey folks! When you use realm-cli pull to get a local copy of your app, you should see a file called realm_config.json in your root directory. Here’s the documentation for that file: https://www.mongodb.com/docs/atlas/app-services/reference/config/app/You’ll notice that the file includes a deployment_model field - this should do what you’re looking for.Then, you can use realm-cli push (docs here: https://www.mongodb.com/docs/atlas/app-services/cli/realm-cli-push/#std-label-realm-cli-push) to deploy the app to whichever project you desire.Let me know if this works for you!",
"username": "Sudarshan_Muralidhar"
},
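Editor's note: for reference, the deployment settings live at the top level of the exported app's realm_config.json. A sketch of the relevant fields; the values shown (app name, region code, config version) are illustrative only and should be taken from your own export:

```json
{
  "config_version": 20210101,
  "name": "my-app",
  "deployment_model": "LOCAL",
  "location": "US-VA"
}
```

After editing, realm-cli push deploys the app to the chosen project with that deployment model and region.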
{
"code": "",
"text": "@Sudarshan_Muralidhar is right - I had a similar case where I wanted to use a GCP region that was not part of the old cli commands (they seem to be hardcoded).So yes - pull-edit-push is the way to go",
"username": "Sven_Hergenhahn"
},
{
"code": "",
"text": "I’ve found this out as well @Sven_Hergenhahn I kind of wish the docs would update the cli commands for more regions etc, as you’re also right, they do seem to be hard coded.",
"username": "Brock"
},
{
"code": "",
"text": "I had an answer back then from the developers of the cli that a new version is due and will contain more regions - so it seems the are still hardcoded, but more will be added…",
"username": "Sven_Hergenhahn"
},
{
"code": "",
"text": "oof, that’s a hard way of doing that.I’m actually kind of surprised they don’t just make it relative and run a route service to direct to the region specified, than it would be agnostic and just send and error back if it doesn’t exist or the wrong place address/name used. (This is how a lot of services that are cloud based do this to route between regions etc)",
"username": "Brock"
},
{
"code": "",
"text": "",
"username": "henna.s"
},
{
"code": "config.jsonconfig.json",
"text": "G’day Folks ,Thank you for raising your concerns and discussing this. Please note, the deployment region and other app settings can be changed by editing the config.json file as suggested above by @Sudarshan_Muralidhar.When a new app is deployed via Cli, it defaults to global deployment while it picks the existing region, if changes are made via cli to an existing app. This is termed as ‘default’ behavior instead of hard-coded as it does allow changing regions via the config.json file.Please note, there is an ongoing project on cli that will allow you to specify region and deployment model instead of default values.Cheers,\nHenna",
"username": "henna.s"
}
] | Specifying deployment region while pushing from Realm CLI? | 2023-04-12T00:26:04.637Z | Specifying deployment region while pushing from Realm CLI? | 1,023 |
null | [] | [
{
"code": "# SELinux policy for MongoDB\n\nThis is the official SELinux policy for the MongoDB server.\n\nSecurity-Enhanced Linux (SELinux) is an implementation of mandatory access controls (MAC)\nin the Linux kernel, checking for allowed operations after standard discretionary access\ncontrols (DAC) are checked.\n\n## Scope\n\n* policies apply to computers running RHEL7 and RHEL8 only.\n* covers standard mongodb-server systemd based installations only.\n* both community and enterprise versions are supported.\n\nSupplied policies do not cover any daemons or tools other than mongod, such as: mongos,\nmongocryptd, or mongo shell\n\n## Installation\n\nYou will need to install following packages in order to apply this policy:\n[root@]#unzip mongodb-selinux-master.zip\n\n[root@]#cd mongodb-selinux-master/\n\n[root@]#make\n(cd selinux; make -f /usr/share/selinux/devel/Makefile)\nmake[1]: Entering directory '/home/xx/Mongo/mongodb-selinux-master/selinux'\nCompiling targeted mongodb module\nCreating targeted mongodb.pp policy package\nrm tmp/mongodb.mod.fc tmp/mongodb.mod\nmake[1]: Leaving directory '/home/xx/Mongo/mongodb-selinux-master/selinux'\nmkdir -p build/targeted\nmv selinux/mongodb.pp build/targeted/\n\n[root@]# make install\ncp build/targeted/mongodb.pp /usr/share/selinux/targeted/mongodb.pp\n/usr/sbin/semodule --priority 200 --store targeted --install /usr/share/selinux/targeted/mongodb.pp\nlibsemanage.semanage_direct_install_info: Overriding mongodb module at lower priority 100 with module at priority 200.\n/sbin/fixfiles -R mongodb-enterprise-server restore || true\nmongodb-enterprise-server not found\n\n/sbin/fixfiles -R mongodb-org-server restore || true\nmongodb-org-server not found\n\n/sbin/restorecon -R /var/lib/mongo || true\n/sbin/restorecon -R /run/mongodb || true\n/sbin/restorecon: lstat(/run/mongodb) failed: No such file or directory\n\n[root@]# systemctl status mongod\nFailed to get properties: Access denied\n",
"text": "V6.0.5 / RHEL 9.1Are there steps to configure selinux on RHEL9? The instructions in Configure SELinux point to a git repo to run some make installs, but the release notes in there say RHEL 7 & 8 only. Are there steps for RHEL 9?I tried it to see what would happen and I’m getting some errors. I’m not sure if it’s related to the release or the way I brought the repo onto the server (download onto a pc and moved up the server). It’s failing in the make looking for mongodb-enterprise-server or mongodb-org-server with “not found” which makes me think i missed something in the repo.Thanks for any guidance",
"username": "Eric_Barberan"
},
{
"code": "Starting in MongoDB 5.0, a new SELinux policy is available for MongoDB installations that:\n\n* Use an `.rpm` installer.\n* Use default configuration settings.\n* Run on RHEL7 or RHEL8.\n\nIf your installation does not meet these requirements, refer to the [SELinux Instructions](https://www.mongodb.com/docs/manual/tutorial/install-mongodb-enterprise-on-red-hat-tarball/#std-label-install-enterprise-tarball-rhel-configure-selinux) for `.tgz` packages.\n",
"text": "Sorry I just saw this under the configure selinuxLet me give these a shot",
"username": "Eric_Barberan"
}
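Editor's note: the .tgz instructions referenced in the quote boil down to compiling and loading small custom policy modules with the standard SELinux tooling and then relabeling the data and log paths. A generic sketch of that flow; the policy file name is a placeholder, its contents come from the MongoDB documentation rather than this thread, and type names can differ between policy versions:

```sh
# Compile and install a custom policy module (mongodb_custom.te is a placeholder name).
checkmodule -M -m -o mongodb_custom.mod mongodb_custom.te
semodule_package -o mongodb_custom.pp -m mongodb_custom.mod
sudo semodule -i mongodb_custom.pp

# Label non-default data directories, then restore the contexts.
sudo semanage fcontext -a -t mongod_var_lib_t '/var/lib/mongo(/.*)?'
sudo restorecon -R -v /var/lib/mongo
```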
] | SELinux config for v6 on RHEL 9 | 2023-04-21T14:27:08.225Z | SELinux config for v6 on RHEL 9 | 659 |
null | [
"crud"
] | [
{
"code": " {\n \"top_level\": [\n {\n \"name\": \"1\",\n \"second_level\": [\n {\n \"surname\": \"2\"\n }\n ]\n },\n {\n \"name\": \"3\",\n \"second_level\": [\n {\n \"surname\": \"4\"\n },\n {\n \"surname\": \"5\"\n }\n ]\n }\n ]\n}\ndb.collection.updateMany(\n {},\n {'$set': {'top_level.1.second_level.1.surname': 'Hello World'}}\n)\n{\n \"top_level\": [\n {\n \"name\": \"1\",\n \"second_level\": [\n {\n \"surname\": \"2\"\n }\n ]\n },\n {\n \"name\": \"3\",\n \"second_level\": [\n {\n \"surname\": \"4\"\n },\n {\n \"surname\": \"Hello World\"\n }\n ]\n }\n ]\n}\n",
"text": "Dear Community,let’s assume, we have a document something like this in our collection:If we want to update only the surname property of only one entry, the usual update ends up to look like this:Thus leaving us with:Now the issue is I am failing to produce the same behaviour using pipeline style update statements. Can someone advice on how to achieve this behaviour with a pipeline style update. Many thanks in Advance.",
"username": "Kevin_Luckas"
},
{
"code": "",
"text": "I forgot to add:I would like to use pipelines, because there is no way of removing an item from an array in an atomic way based on its index without using pipelines (at least upto my [limited] knowledge).Me and my colleagues try to implement patching (RFC 6902). This allows removing and altering at the same time (given it works on different arrays), so we’d like to get it to work.",
"username": "Kevin_Luckas"
},
{
"code": "top_level_index = 1\ntop_level_element = { \"$arrayElemAt\" : [ \"$top_level\" , top_level_index ] }\nsecond_level = { \"$getField\" : { \"field\" , \"second_level\" , top_level_element } }\nsecond_level_index = 1\nsecond_level_element = { \"$arrayElemtAt\" : [ second_level , second_level_index ] }\nnew_value = \"Hello World\" \n$concatArrays : [\n { \"$slice\" : [ \"$top_level\" , 0 , top_level_index ] } ,\n [ { \"$mergeObjects\" : [\n top_level_element ,\n { \"second_level\" : { \"$concatArrays\" : [\n { \"$slice\" : [ second_level , 0 , second_level_index ] } ,\n [ { \"$mergeObjects\" : [\n second_level_element ,\n { \"surname\" : new_value }\n ] } ] ,\n { \"$slice\" : [ second_level , 0 , second_level_index + 1 ] } ,\n ] } }\n ] } ] ,\n { \"$slice\" : [ \"$top_level\" , top_level_index + 1 ] }\n]\n",
"text": "What determines that you want element 1 of top_level and element 1 of second_level?If the index is determined by other field of the array, then you are better off using arrayFilters rather than indexes.For example if you want to update top_level.1.second_level.1 because top_level.1.name is 3 and the corresponding second_level.1.surname is 5 then specifying arrayFilters should work.With indexes rather than arrayFilters, something like the following might work:Ugly and rather complex but not that complicated.",
"username": "steevej"
},
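Editor's note: to complete the picture, the expressions from the post above would be plugged into an update that uses an aggregation pipeline (supported since MongoDB 4.2). A mongosh sketch, sticking to the example indexes and field names from this thread:

```js
const topIdx = 1, secIdx = 1, newValue = "Hello World";
const topElem = { $arrayElemAt: ["$top_level", topIdx] };
const secArr = { $getField: { field: "second_level", input: topElem } };
const secElem = { $arrayElemAt: [secArr, secIdx] };

// Rebuild the inner array with the one element rewritten.
const newSecondLevel = { $concatArrays: [
  { $slice: [secArr, 0, secIdx] },                           // inner elements before the target
  [ { $mergeObjects: [secElem, { surname: newValue }] } ],   // the rewritten element
  { $slice: [secArr, secIdx + 1, { $size: secArr }] }        // inner elements after the target
] };

// Rebuild the outer array, swapping in the modified inner element.
db.collection.updateMany({}, [
  { $set: { top_level: { $concatArrays: [
      { $slice: ["$top_level", 0, topIdx] },
      [ { $mergeObjects: [topElem, { second_level: newSecondLevel }] } ],
      { $slice: ["$top_level", topIdx + 1, { $size: "$top_level" }] }
  ] } } }
]);
```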
{
"code": "",
"text": "Thanks for your response.Basically, we are using a monogDB behind a run-of-the-mill CRUD API. So which index to manipulate is given by the outside patch request. There are workaround solutions, yes. All these break the atomicity bar yours potentially. I had hoped that there was an easy implementation in mongo query language, as the standard JSON patching is so similar to the general document structure in a MongoDB.Thank you for your answer. I will try it out. Anyhow, it becomes rather involved the more nested lists you amass. Will have to think about this.",
"username": "Kevin_Luckas"
},
{
"code": "",
"text": "I have just reread the whole thread and I am not sure about what you mean withpipeline style update",
"username": "steevej"
},
{
"code": "",
"text": "",
"username": "Kevin_Luckas"
},
{
"code": "",
"text": "What I thought.May be you could take a look at Compass source code. They surely implement a similar use-case for when you update an array.",
"username": "steevej"
}
] | Updating specific array elements in a pipeline style update | 2023-04-18T10:17:30.200Z | Updating specific array elements in a pipeline style update | 776 |
[
"compass"
] | [
{
"code": "npm run bootstrapnpm run bootstrapnpm i",
"text": "Hello,I’ve been struggling for many hours (even a day) trying to build MongoDB Compass locally on my PC whose OS is Windows 10. I’ve been following carefully the readme file related to contribution (CONTRIBUTING.md).I am using Node v16 with NPM v8 as requested. When entering npm run bootstrap, the download of packages begins and then suddenly an error occurs:\nimage1442×323 134 KB\nEach time I repeat the installation (whether through npm run bootstrap or npm i), I keep having this same error. I searched about it everywhere (on Stack Overflow, Github, ChatGPT, even MongoDB forum), but the answers are not applicable to my case. Note also that I deleted node_modules many times and repeated the same process without any success.I even did the same process with Node v18 and NPM v8 but the same error is appearing… However, the installation continues till the end. Then when lerna is executed, I can see like the repos are compiling (I’m not sure of the terms if they are correct) but then lerna fails (which is expected since the installation was not 100% successful).Would be glad if you can assist me. I already contributed to Compass in the past but didn’t need at that time to set the whole environment.Waiting for an answer asap",
"username": "Yves_Daaboul"
},
{
"code": "npm run bootstrap",
"text": "IMPORTANT UPDATES:After having installed everything with Node v18 (as I talked about it in the post), I switched back to v16, then I typed npm run bootstrap. I had the same classic error related to buildcheck, but then, when lerna began its execution, everything succeeded and the electron app opened although there is an error with the installation. Really strange…We, developers, are used to solve software problems without sometimes understanding what happened exactly In all cases thank you. But please do not delete this question. I may refer back to it if the build doesn’t work anymore.Regards",
"username": "Yves_Daaboul"
}
] | [Urgent] Cannot build Compass from source (cloned from GitHub) on Windows 10 | 2023-04-21T13:19:44.748Z | [Urgent] Cannot build Compass from source (cloned from GitHub) on Windows 10 | 829 |
|
null | [] | [
{
"code": "",
"text": "Do we have REST Endpoint to get list of DB Collections in Mongo DB Atlas.",
"username": "Shubhangi_Pawar"
},
{
"code": "",
"text": "Hi Shubhangi, Out of curiosity, what’s your use case for this? You could craft an endpoint for this purpose using https://www.mongodb.com/docs/atlas/app-services/data-api/custom-endpoints/ but it’s worth validating you don’t have an application client that can do this directly in your app code via a mongodb driver",
"username": "Andrew_Davidson"
}
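Editor's note: if a driver-based application client is an option, as suggested above, the listing itself is straightforward. A sketch with the Node.js driver; the URI is a placeholder:

```js
const { MongoClient } = require("mongodb");

async function listNamespaces(uri) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    // Databases visible to this user.
    const { databases } = await client.db().admin().listDatabases();
    for (const { name } of databases) {
      // Collection names within each database.
      const collections = await client.db(name).listCollections().toArray();
      console.log(name, collections.map((c) => c.name));
    }
  } finally {
    await client.close();
  }
}

listNamespaces("mongodb+srv://<user>:<password>@<cluster>.mongodb.net/");
```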
] | Do we have REST Endpoint to get list of DB Collections | 2023-04-20T05:34:39.802Z | Do we have REST Endpoint to get list of DB Collections | 454 |
[
"atlas-device-sync",
"schema-validation",
"flexible-sync",
"app-services-data-access"
] | [
{
"code": "{\n \"type\": \"flexible\",\n \"state\": \"enabled\",\n \"development_mode_enabled\": false,\n \"service_name\": \"mongodb-atlas\",\n \"database_name\": \"MyDatabase\",\n \"is_recovery_mode_disabled\": false,\n \"queryable_fields_names\": [\"userId\", \"taskId\"]\n}\n{\n \"collection\": \"MyCollection\",\n \"database\": \"MyDatabase\",\n \"roles\": [\n {\n \"name\": \"my_role_name\",\n \"apply_when\": {},\n \"document_filters\": {\n \"write\": false,\n \"read\": {\n \"$or\": [\n { \"taskId\": { $in: \"%%user.custom_data.ownedTasks\" }},\n { \"taskId\": { $in: \"%%user.custom_data.tempTasks\" }},\n ]\n }\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n }\n ]\n}\nError occurred while executing findOne: cannot use 'taskId' in expression; only %% like expansions and top-level operators may be used as top-level fields\n\"read\": { \"taskId\": \"someTask\" } // works\n\"read\": { \"taskId\": { \"$in\": \"%%user.custom_data.ownedTasks\"} } // works\n",
"text": "It was working before MongoDb Atlas automatically migrated the Sync rules out for Sync configuration into the common Rules configuration.But after that, I started getting an error if I used “$or”The sync:The rules.json:The app id deployed properly, but I get error while running a query:Other options I have tried:",
"username": "Georges_Jamous"
},
{
"code": "https://realm.mongodb.com/groups/<group-id>/apps/<app-id>",
"text": "Hi,I’m taking a look into the issue right now, but in the meantime do you mind linking you app URL (looks like https://realm.mongodb.com/groups/<group-id>/apps/<app-id>)?Jonathan",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "Hi @Jonathan_Lee,sure thing, this is the app url:\nhttps://realm.mongodb.com/groups/6192acea6e11d14b9a1454eb/apps/6192ad66a4c07eea3a528d8dthanks",
"username": "Georges_Jamous"
},
{
"code": "%%root{\n \"collection\": \"MyCollection\",\n \"database\": \"MyDatabase\",\n \"roles\": [\n {\n \"name\": \"my_role_name\",\n \"apply_when\": {},\n \"document_filters\": {\n \"write\": false,\n \"read\": {\n \"$or\": [\n { \"%%root.taskId\": { $in: \"%%user.custom_data.ownedTasks\" }},\n { \"%%root.taskId\": { $in: \"%%user.custom_data.tempTasks\" }},\n ]\n }\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n }\n ]\n}\n%%root{\n \"collection\": \"MyCollection\",\n \"database\": \"MyDatabase\",\n \"roles\": [\n {\n \"name\": \"nonSyncRole\",\n \"apply_when\": { \"_id\": { $exists: true } },\n \"document_filters\": {\n \"write\": false,\n \"read\": {\n \"$or\": [\n { \"%%root.taskId\": { $in: \"%%user.custom_data.ownedTasks\" }},\n { \"%%root.taskId\": { $in: \"%%user.custom_data.tempTasks\" }},\n ]\n }\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n },\n {\n \"name\": \"my_role_name\",\n \"apply_when\": {},\n \"document_filters\": {\n \"write\": false,\n \"read\": {\n \"$or\": [\n { \"taskId\": { $in: \"%%user.custom_data.ownedTasks\" }},\n { \"taskId\": { $in: \"%%user.custom_data.tempTasks\" }},\n ]\n }\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n }\n ]\n}\napply_when%%root$orapply_when{\n \"collection\": \"MyCollection\",\n \"database\": \"MyDatabase\",\n \"roles\": [\n {\n \"name\": \"nonSyncRole\",\n \"apply_when\": { \"$or\": [\n { \"%%root.taskId\": { $in: \"%%user.custom_data.ownedTasks\" }},\n { \"%%root.taskId\": { $in: \"%%user.custom_data.tempTasks\" }},\n ] },\n \"document_filters\": {\n \"write\": false,\n \"read\": true\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n },\n {\n \"name\": \"catchAllForNonSyncRequestsRole\",\n \"apply_when\": { \"_id\": { $exists: true } },\n \"document_filters\": {\n \"write\": false,\n \"read\": false\n },\n \"read\": false,\n \"write\": false,\n \"insert\": false,\n \"delete\": false,\n \"search\": false\n },\n {\n \"name\": \"my_role_name\",\n \"apply_when\": {},\n \"document_filters\": {\n \"write\": false,\n \"read\": {\n \"$or\": [\n { \"taskId\": { $in: \"%%user.custom_data.ownedTasks\" }},\n { \"taskId\": { $in: \"%%user.custom_data.tempTasks\" }},\n ]\n }\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n }\n ]\n}\n",
"text": "Hey @Georges_Jamous,This is actually a known issue that existed prior to the Rules migration, when permissions in Flexible Sync were used in a non-sync context, so the migration itself shouldn’t have introduced a regression here. We have a ticket to fix this issue.In the meantime, I would suggest doing one of the following:The flexible sync app would use the permissions as you have defined above, and the non-sync app would use a nearly identical rule but using the %%root operator like this:Note that the reason we can’t use this rule in a flexible sync app is because the %%root expansion is unsupported.This takes advantage of the idea that apply_when expressions referencing document fields in Flexible Sync doesn’t work, but does work for non-sync requests; this is because roles are applied per-document in non-sync requests, whereas they are applied per-sync-session on flexible sync requests (at session start, no documents have been queried for yet). Hence, the first role above will always apply on non-sync requests, and the latter will always apply on Flexible Sync requests.Note that the first role is sync incompatible because it references the %%root expansion, and will be reported as such in the UI with warning banners / modals (trying to set a sync incompatible role would actually result in an error in the CLI, but is tolerated in the UI – so this change would have to be done through the UI). This should be fine because the first role will never actually get used in flexible sync requests.If you don’t want to deal with sync incompatible roles, then another option would be to capture the $or in the apply_when expression of a role, and then have a “catch-all” role for non-sync requests like:Under this configuration, the third role would apply to sync requests, whereas the first two will apply to non-sync requests.Let me know if any of those options work for you,\nJonathan",
"username": "Jonathan_Lee"
},
{
"code": "\"apply_when\": { \"_id\": { $exists: true } },",
"text": "@Jonathan_Lee, first of all thank you for this detailed answer!I will be trying it out and let you know what we went with.Some notes of now:A. so, a cloud function that is being invoked from a Flexible Sync Enabled App Client (iOS) and that is executing a query to the DB, is considered a non-sync context?non-sync contextThis is actually a known issue that existed prior to the Rules migration, when permissions in Flexible Sync were used in a non-sync context, so the migration itself shouldn’t have introduced a regression here. We have a ticket to fix this issue.“name”: “catchAllForNonSyncRequestsRole”,\n\"apply_when\": { \"_id\": { $exists: true } },Thanks",
"username": "Georges_Jamous"
},
{
"code": "$orsync/config.jsonapply_whenfalse_idapply_whentrue",
"text": "A. so, a cloud function that is being invoked from a Flexible Sync Enabled App Client (iOS) and that is executing a query to the DB, is considered a non-sync context?Yep. A “non-sync” context also includes things like: querying MongoDB from a Realm SDK, DataAPI, GraphQL, etc.B. I am not sure about this, we have tests that call cloud functions and we never saw such error, the tests have not changed.Hmm, I can poke around in our logs to see if I can dig anything up there. Do you happen to recall roughly when you started seeing these errors? Also, do you remember if the above usage of the $or operator existed prior to the migration in the old flexible sync permissions? (if using the UI, the “old permissions” was the permissions JSON blob in the sync config page; if using the CLI, this was the object under the “permissions” key in sync/config.json)C. What is the purpose of this apply_when here ?When referencing document fields, the apply_when expression will evaluate to false when used in flexible sync, because it is evaluated on a per-session basis (at session start, no documents have been queried for yet). Since role evaluation occurs at a per-document basis for non-sync requests, and the _id field is required for MongoDB documents, then this “trivial” apply_when will always evaluate to true for non-sync requests. In turn, a non-sync request will never have the last role be applied due to role order evaluation.",
"username": "Jonathan_Lee"
},
{
"code": "{\n \"name\": \"nonSyncRoleWrite\",\n \"apply_when\": { \"$or\": [ ... ] },\n \"document_filters\": { \"write\": true, \"read\": false },\n ...\n},\n{\n \"name\": \"nonSyncRoleRead\",\n \"apply_when\": { .... },\n \"document_filters\": { \"write\": false, \"read\": true },\n ...\n},\n{\n \"name\": \"catchAllForNonSyncRequestsRole\",\n \"apply_when\": { \"_id\": { \"$exists\": true } },\n \"document_filters\": { \"write\": false, \"read\": false },\n ...\n},\n{\n \"name\": \"syncRole\",\n \"apply_when\": {},\n \"document_filters\": {\n \"write\": { \"$or\": [ .... ] },\n \"read\": { \"$or\": [ .. ] }\n },\n$orsync/config.json",
"text": "Thanks for your help, we finally went with the last option, and it seems to work fine for our use case.Hmm, I can poke around in our logs to see if I can dig anything up there. Do you happen to recall roughly when you started seeing these errors? Also, do you remember if the above usage of the $or operator existed prior to the migration in the old flexible sync permissions? (if using the UI, the “old permissions” was the permissions JSON blob in the sync config page; if using the CLI, this was the object under the “permissions” key in sync/config.json )What I recall is that $or always existed, and yes in the old permission prior to the migration as well",
"username": "Georges_Jamous"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Sync document_filters stopped working for $or | 2023-04-04T08:47:11.940Z | Sync document_filters stopped working for $or | 1,232 |
|
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "{\n \"_id\": {\n \"$oid\": \"6436a6e365bb0bf723a17a21\"\n },\n \"name\": \"Indian Institute of Nursing \",\n \"city\": \"\",\n \"memberCount\": 0,\n \"state\": \"Karnataka\"\n}\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": [\n {\n \"foldDiacritics\": true,\n \"maxGrams\": 30,\n \"minGrams\": 1,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n },\n {\n \"type\": \"string\"\n }\n ]\n }\n }\n}\n{\n index: \"institutesEdge\",\n returnStoredSource: true,\n compound: {\n must: [\n {\n autocomplete: {\n path: \"name\",\n query: \"indian institute of\",\n tokenOrder: \"sequential\",\n },\n },\n ],\n },\n}\nindian institute ofindian institute of nurindian institute of ",
"text": "Here is a sample doc I am trying to search:My current index mapping:My aggregation for search:The above runs correctly and show result, but if I change the text from indian institute of to indian institute of nur or indian institute of (space after of), it says no result.\nThis happens for much smaller search terms as well, it doesnt show any result after third space is entered but works perfectly before that.I have tried using ngram as well, same issue with that too.Please suggest if I am doing something wrong?",
"username": "Arbaz_Siddiqui"
},
{
"code": "{\n index: \"institutesEdge\",\n returnStoredSource: true,\n compound: {\n should: [\n {\n autocomplete: {\n path: \"name\",\n query: \"indian institute of\",\n tokenOrder: \"sequential\",\n },\n },\n {\n text: {\n path: \"name\",\n query: \"indian institute of\",\n }\n }\n ],\n },\n}\n",
"text": "Hi @Arbaz_Siddiqui, and welcome to the forum!As it looks like your query has multiple word tokens then using a standard ‘text’ search operator could help with relevancy.Try updating your query like:This way your query can capture use cases where the user has incomplete words by leveraging autocomplete’s partial matching. But in addition the text operator will provide excellent relevancy matching words that have already been completely typed.For further tuning you could add ‘fuzzy’ options to the ‘text’ operator to capture mispellings.",
"username": "Junderwood"
},
{
"code": "indian institute of nurindian institute of b",
"text": "Thanks for the reply @Junderwood.Text operator introduces undesirable results, for ex in the above query indian institute of nur gives the correct result but the same results are also shown when i search for term indian institute of b which should not be the case as it does not exist.I need to search exact edgeGram on strings with phrases i.e. strings with few spaces.",
"username": "Arbaz_Siddiqui"
},
{
"code": "mappings: {\n dynamic: false,\n fields: {\n name: [\n {\n \"foldDiacritics\": true,\n \"maxGrams\": 30,\n \"minGrams\": 2,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n },\n {\n analyzer: \"keywordlowercase\",\n \"type\": \"string\"\n }\n ]\n }\n },\n analyzers:[\n {\n \"charFilters\": [],\n \"name\": \"keywordlowercase\",\n \"tokenFilters\": [\n {\n \"type\": \"lowercase\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"keyword\"\n }\n }\n ]\n{\n $search: {\n index: 'default',\n compound: {\n should: [\n {\n autocomplete: {\n path: \"name\",\n query: \"indian institute of t\",\n tokenOrder: \"sequential\",\n },\n },\n //optional clause that is only added if number of spaces > 2\n {\n wildcard: {\n path: \"name\",\n query: \"indian institute of t*\",\n allowAnalyzedField: true,\n }\n }\n ],\n },\n }\n }\n",
"text": "Hi @Arbaz_SiddiquiThere does seem to be some limitation with how many spaces autocomplete can handle. You can solve this by using a wildcard query. But since these can be slow you may want to add some client logic to only use wildcard if the query string contains more than 2 spaces - this should be a pretty fast check.There are likely a few different ways to approach this though!I ran some tests with data similar to yours and it seemed to work pretty well.Documentation to reference:Then you can add this to your index definition:And your query would look like (note that you append “*” to the end of the input query string):",
"username": "Junderwood"
}
] | Autocomplete using search doesnt work after 3 spaces | 2023-04-17T13:48:29.383Z | Autocomplete using search doesnt work after 3 spaces | 750 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "{\n _id: \"something\",\n array: [\n {\n \"name\": \"name 1\",\n \"other field\": 1\n },\n {\n \"name\": \"name 2\",\n \"other field\": 1\n },\n {\n \"name\": \"name 3\",\n \"other field\": 1\n }\n ]\n}\ndb.collection.findOne({ _id: _id}, { _id: 0, \"array.0.field\": 1})",
"text": "Good night guys, help me out there…I have a collection that has an array…Example:I’m using NodeJS and I’m trying to do a FindOne through the _id… but I want to return only a specific array element and one or more specific fields…Example…db.collection.findOne({ _id: _id}, { _id: 0, \"array.0.field\": 1})I’ve turned the internet and the docs upside down and haven’t found a way to do this…The funny thing is that with the update methods… if I do… “array.0.campo”… it works… but with findOne it doesn’tI even used $elemMatch… but it doesn’t allow returning specific fields…",
"username": "Lucas_Almeida"
},
{
"code": "Atlas atlas-b8d6l3-shard-0 [primary] test> db.filter.find()\n[\n {\n _id: ObjectId(\"6440e61fd80fa2295ca82d76\"),\n array: [\n { userId: ObjectId(\"6440e6a3d80fa2295ca82d77\"), name: 'test1' },\n { userId: ObjectId(\"6440e6a3d80fa2295ca82d78\"), name: 'test2' },\n { userId: ObjectId(\"6440e6a3d80fa2295ca82d79\"), name: 'test3' }\n ]\n }\n]\nconst filter = {\n '_id': new ObjectId('6440e61fd80fa2295ca82d76')\n };\n const projection = {\n 'array': {\n '$elemMatch': {\n 'name': 'test1'\n }\n }\n };\nAtlas atlas-b8d6l3-shard-0 [primary] test> db.filter.find( filter, projection)\n[\n {\n _id: ObjectId(\"6440e61fd80fa2295ca82d76\"),\n array: [ { userId: ObjectId(\"6440e6a3d80fa2295ca82d77\"), name: 'test1' } ]\n }\n]\n",
"text": "Hi @Lucas_Almeida and welcome to MongoDB community forums!!Based on the above sample document, I tried to insert the document as:and used the following query using $elemMatchAdditionally, you can also use aggregation for similar desired output.L:et us know if you have further queries.Regards\nAasawari",
"username": "Aasawari"
}
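Editor's note: since the reply above mentions that an aggregation can produce a similar result, here is one possible shape of it. $filter keeps only the matching element and $map trims it down to the requested fields; the names follow the sample documents in this thread:

```js
db.filter.aggregate([
  { $match: { _id: ObjectId("6440e61fd80fa2295ca82d76") } },
  { $project: {
      _id: 0,
      array: {
        $map: {
          input: { $filter: { input: "$array", as: "el", cond: { $eq: ["$$el.name", "test1"] } } },
          as: "el",
          in: { name: "$$el.name" }   // list only the fields you want returned
        }
      }
  } }
]);
```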
] | Array Filter Embedded | 2023-04-20T02:30:14.953Z | Array Filter Embedded | 433 |
[] | [
{
"code": "[mongodb-org-6.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/9/mongodb-org/6.0/aarch64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-6.0.asc\nsudo yum install -y mongo-org",
"text": "Hi,\nI have a VPS AArch64 server using Rocky Linux 9.\nI’m following the tutorial here Install MongoDB Community Edition on Red Hat or CentOS — MongoDB Manual to install mongodb.\nI have create the repo fileThe aarch64 repo is there. MongoDB Repositories\nBut, when I run sudo yum install -y mongo-org I get this message\n\nimage927×312 12.5 KB\nI tried doing the same on another AArch64 server using Rocky Linux 8 and it worked fine.Am I doing something wrong that it isn’t working on Rocky Linux 9?\nI hope someone could help me with this.\nCheers!",
"username": "Eustachio_N_A"
},
{
"code": "",
"text": "Hello @Eustachio_N_A ,Welcome to The MongoDB Community Forums! As per the documentation mentioned belowAs of now, we are supporting Rocky 8 on arm64 hence you were able to install and work on that successfully.\nRocky 9 is supported on x86_64 and not on arm64 architecture yet, therefore the error.Please keep an eye out on the production notes for any additions to this list of supported platforms.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Installing Mongodb on Rocky Linux 9 - AArch64 | 2023-04-15T18:12:29.266Z | Installing Mongodb on Rocky Linux 9 - AArch64 | 1,906 |
|
null | [
"queries"
] | [
{
"code": "",
"text": "Hi All, I am using mongoDB( version - 4.4.6) in RHEL 8, could you please share the steps for upgrade DB.I could see DB version 4.4.20 is available.MongoDB shell version v4.4.19\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { “id” : UUID(“8c3d5642-4e61-43be-bd3c-264510d8cc30”) }\nMongoDB server version: 4.4.6\nWelcome to the MongoDB shell.\nFor interactive help, type “help”.\nFor more comprehensive documentation, see\nhttps://docs.mongodb.com/\nQuestions? Try the MongoDB Developer Community ForumsMongo DB shell is updated and DB server version is still showing the OLD.How can we update DB version too?, Please correct me, if any step wrong.Regards\nKrishna",
"username": "krishnamoorthy_sonai"
},
{
"code": "",
"text": "Hello @krishnamoorthy_sonai ,Welcome to The MongoDB Community Forums! Could you please check a few things to make sure you are doing things as expected?Kindly go through below tutorial for updating your MongoDB server.Please feel free to reach out if the issue persists.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi Tarun,Thanks for the update.Below is the points for your queries.",
"username": "krishnamoorthy_sonai"
},
{
"code": "",
"text": "extracted files to the appropriate location?,Hi TarunThanks for your great help.I have finished the Single instance Db upgrade now.Could you please also help me on mongo-HA patching steps?Regards\nKrishna",
"username": "krishnamoorthy_sonai"
},
{
"code": "mongo-HA patching",
"text": "Happy to hear that you were able to resolve the upgrade issue.Could you please also help me on mongo-HA patching steps?Could you please confirm what do you mean by mongo-HA patching? Are you referring to upgrade your MongoDB replica sets? If yes then please share the current and desired versions.Also, MongoDB 4.4 series is due to be out of support by February 2024. To make sure that the version/desired version you are using will be supported in the near future, please seeMongoDB Software Lifecycle Schedules",
"username": "Tarun_Gaur"
}
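Editor's note: assuming "mongo-HA" refers to a replica set, a patch upgrade within the same release series is normally performed as a rolling upgrade, one member at a time, so the set stays available. A rough sketch (package manager and paths depend on the install method):

```sh
# On each SECONDARY, one at a time:
sudo systemctl stop mongod
sudo yum update -y mongodb-org          # or replace the tarball binaries
sudo systemctl start mongod             # wait until the member reports SECONDARY again

# On the PRIMARY, after all secondaries are upgraded:
mongo --eval 'rs.stepDown()'            # hand over the primary, then upgrade this member the same way
```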
] | Mongo DB - community Edition - 4.4.6 | 2023-04-13T13:12:06.380Z | Mongo DB - community Edition - 4.4.6 | 826 |
null | [
"queries",
"node-js",
"atlas-search",
"text-search"
] | [
{
"code": "product_nameProduct.find({ $text: { $search: , $caseSensitive: false } });",
"text": "So i’ve created index for product_name of products collection and i am using this query on nodejs to search for products:\nProduct.find({ $text: { $search: \"${req.body.value}\", $caseSensitive: false } });\nnow the problem here is that the search returns product only when a word is complete e.g.:",
"username": "Avelon_N_A"
},
{
"code": "$textblueberryblueblueberryblueberries",
"text": "Hello @Avelon_N_A ,Welcome to The MongoDB Community Forums! The behaviour you have described appears to match what is mentioned in the below blob from the $text documentation with regards to stemmed words:For case insensitive and diacritic insensitive text searches, the $text operator matches on the complete stemmed word. So if a document field contains the word blueberry, a search on the term blue will not match. However, blueberry or blueberries will match.If this is an Atlas deployment, you can take a look at Atlas search as it provides the autocomplete feature which performs a search for a word or phrase that contains a sequence of characters from an incomplete input string. You can also take a look at below blog which could help you in setting up partial search with Atlas searchIn case you are interested in Atlas search and want to migrate your data from on-prem deployment to Atlas search you can followAdditional resource to learn more about $regex, $text and Atlas search, please referCode, content, tutorials, programs and community to enable developers of all skill levels on the MongoDB Data Platform. Join or follow us here to learn more!Regards,\nTarun",
"username": "Tarun_Gaur"
},
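Editor's note: outside of Atlas Search, a common (if slower) workaround for partial, case-insensitive matching is $regex. A sketch in the same Mongoose style as the original query; the escaping line is an assumption added so user input cannot inject regex metacharacters:

```js
// Escape regex metacharacters in the user-supplied value (hypothetical helper logic).
const escaped = req.body.value.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

// Matches product names that merely contain the typed text, case-insensitively.
// Note: an unanchored, case-insensitive regex cannot use an index efficiently.
const products = await Product.find({
  product_name: { $regex: escaped, $options: "i" },
});
```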
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Case insensitive Search not working | 2023-04-14T16:40:25.420Z | Case insensitive Search not working | 1,001 |
null | [
"aggregation",
"java",
"crud"
] | [
{
"code": "{\n \"_id\": {\n \"$oid\": \"6437db5e791549e390d8a0f0\"\n },\n \"idUser\": \"testUser1\",\n \"numYear\": 2023,\n \"activities\": {\n \"63ff26237d68b622e28852b3\": {\n \"_id\": { \"$oid\": \"63ff26237d68b622e28852b3\" },\n \"tstActivity\": { \"$date\": \"2023-03-01T11:17:06.000Z\" },\n \"txtNameActivity\": \"testActivity1\"\n },\n \"63ff2fff7d68b622e2886288\": {\n \"_id\": { \"$oid\": \"63ff2fff7d68b622e2886288\" },\n \"tstActivity\": { \"$date\": \"2023-03-01T11:59:11.000Z\" },\n \"txtNameActivity\": \"testActivity2\"\n }\n }\n}\nWriteError{code=17419, message='Resulting document after update is larger than 16777216', details={}}MongoWriteException.getError().getCode() == 17419Bson match = Aggregates.match(Updates.combine(\n Filters.eq(\"numYear\", 2023),\n Filters.eq(\"idUser\", testUser1)));\n\nBson addFields = Aggregates.addFields(new Field(\"activityArray\",\n new Document(\"$objectToArray\", \"$activities\")));\n\nBson project = Aggregates.project(Projections.fields(\n Projections.excludeId(),\n Projections.computed(\"minActivityKey\",\n new Document(\"$min\", \"$activityArray.k\"))));\n\t\nMongoCollection<Activities> mongoCollection = mongoDatabase\n .getCollection(\"activities\", Activities.class);\n\nAggregateIterable<Activities> aggregate = mongoCollection.aggregate(\n Arrays.asList(match, addFields, project));\n\nBson unset = Updates.unset(\"activities.\" + \n aggregate.first().getMinActivityKey());\n\nmongoCollection.updateOne(match, unset);\n",
"text": "Hi mongodb community,I would be very grateful for help with the following issue.\nIt is about a document containing yearly activities for a user:We have some edge-cases, where such a document grows too big, causing:WriteError{code=17419, message='Resulting document after update is larger than 16777216', details={}}All mongodb access happens through java services (using only the native mongodb java driver).\nSo the basic idea to cope with the issue is:And the question would be about how point (2) can be solved in an efficient way.\nThis is the approach I have so far:The problem about this is, that the “Activities” class represents the document schema and consequently does not contain the aggregated field “minActivityKey”.So my question would be if there is a way to access the result of aggregation pipelines without mapping it to the class the MongoCollection is associated with?In case anyone know of better options or best practices to achieve this possibly not so uncommon issue, I would also be very glad for input.Thank you in advance and br,\nJan",
"username": "Jan_de_Wilde"
},
{
"code": " \"activities\": {\n \"63ff26237d68b622e28852b3\": {\n \"_id\": { \"$oid\": \"63ff26237d68b622e28852b3\" },\n \"tstActivity\": { \"$date\": \"2023-03-01T11:17:06.000Z\" },\n \"txtNameActivity\": \"testActivity1\"\n },\n \"63ff2fff7d68b622e2886288\": {\n \"_id\": { \"$oid\": \"63ff2fff7d68b622e2886288\" },\n \"tstActivity\": { \"$date\": \"2023-03-01T11:59:11.000Z\" },\n \"txtNameActivity\": \"testActivity2\"\n }\n }\n \"activities\": [\n {\n \"_id\": { \"$oid\": \"63ff26237d68b622e28852b3\" },\n \"tstActivity\": { \"$date\": \"2023-03-01T11:17:06.000Z\" },\n \"txtNameActivity\": \"testActivity1\"\n },\n {\n \"_id\": { \"$oid\": \"63ff2fff7d68b622e2886288\" },\n \"tstActivity\": { \"$date\": \"2023-03-01T11:59:11.000Z\" },\n \"txtNameActivity\": \"testActivity2\"\n }\n ]\n",
"text": "May be, just may be may you could avoid hitting the limit by having a model with less redundancy.String representation of _id takes more space and it is slower to compare than an $oid. And in your case you also store the _id as an $oid. It is very wasteful.You should not have dynamic key names.Your object activities should be an array that uses the attribute pattern. This way you could index, sort, slice, filter or map it. The fact that your implementation needs $objectToArray is an indication that you should have an array. What objectToArray does is to take an object and convert it to the attribute pattern.So I would convert the schema fromtoThis will give you more compact data and more efficient code. More compact data will give you better performance because you will hit your special case less often.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks a lot for your input!I’m a bit puzzled because a MongoDB consultant has told us to rather avoid arrays, because they are MongoDB internally stored as object structures anyways (with key 0, 1, 2 …) and it would often be a good idea to rather set a unique identifier as key of object structures instead, especially to make update operations simpler.Also we are relying heavily on using the $set and $unset operations in upserts utilizing these manually set keys to identify entries to insert/update/remove efficiently and avoid data duplication - I am clueless yet about how that would work with arrays.Obviously the data efficiency of storing the id twice is not optimal, but in reality these objects are quite a bit larger, so the ids do not really make that much of a difference. And also with the most optimal data efficiency, we could not avoid to reach the document size cap in some edge cases (some test system produce unrealistically large amounts of activities for always the same test user, this is not a productive problem).However even if we should optimize the data model, I would still need a solution to work with the current data model and am hoping for helpful input about that.",
"username": "Jan_de_Wilde"
},
{
"code": "db.as_array.stats().size\n315\ndb.as_object.stats().size\n361\nkey = \"63ff26237d68b622e28852b3\"\n{ \"$unset\" : [ \"activities.\" + key ] }\nkey = \"63ff26237d68b622e28852b3\"\n{ \"$pull\" : { \"activities\" : { \"_id\" : new ObjectId( key ) } } }\nactivity = {\n \"_id\": { \"$oid\": \"63ff26237d68b622e28852b3\" },\n \"tstActivity\": { \"$date\": \"2023-03-01T11:17:06.000Z\" },\n \"txtNameActivity\": \"testActivity1\"\n}\n... { \"$addToSet\" : { \"activities\" : activity } }\nMongoCollection<Document>MongoCollection<Activities>",
"text": "MongoDB consultant has toldA consultant that knows MongoDB or a MongoDB employee that you hired as a consultant. As a independent consultant myself my recommendation is to not blindly listen to consultants. If the recommendation of not using array is from a MongoDB employee I would like MongoDB staff in this forum to comment on the avoid array recommendation.I just made 2 simple collections, one using the map way and the other using the array way with the 2 documents from my previous post and I got:Yes 46 bytes for 2 ‘activity’ is not that much. But to reach the 16M limit you must have a lot of ‘activity’. End of my argument about size.make update operations simplerThe update source of your post is not that simple. With array you could easily implement the bucket pattern using $size in the query part to limit the array size completely avoiding the need to handle a 16M error. With the bucket pattern you canavoid to reach the document size capAre you sure aboutto insert/update/remove efficientlyAre those dynamic keys sorted/indexed? I do not think they are. With an array you could easily index activities._id and make query an order of magnitude faster.I am curious about how you do queries like Which user did txtNameActivity:testActivity2 or What activity was done on a give tstActivity date?I am clueless yet about how that would work with arrays.The consultant should have shown you.For examplewould beTo avoid duplicates you wouldMy point is that we can do with dynamic keys can be done with arrays.still need a solution to work with the current data model and am hoping for helpful input about that.With dynamic keys I have to idea how this could be done without $objectToArray like you do.So my question would be if there is a way to access the result of aggregation pipelines without mapping it to the class the MongoCollection is associated with?You could use MongoCollection<Document> rather than MongoCollection<Activities> to aggregate in order to get the $min.",
"username": "steevej"
},
{
"code": "activities._id$objectToArray_id$addToSet$setChangeStreamDocumentupdatedFieldsactivitiesactivities.0activities.63ff26237d68b622e28852b3",
"text": "Wow, thanks so much for your comprehensive advice!It was a MongoDB employee who explicitly stated:So that’s what we did, but now I’m starting to consider redesigning the whole solution with arrays instead of object structures.I am curious about how you do queries like Which user did txtNameActivity:testActivity2 or What activity was done on a give tstActivity date ?The efficiency of queries has not been the main criteria so far because of the bigger picture of the solution. There are in fact multiple collections with similar structure and the main idea was to upsert new activities in the first collection, then read the upserted activity out of a change stream document, and perform follow-up upserts in the next downstream collection and so on.So the queries you suggested would then only be performed on the very last of these collections. But also there we currently have these objects structures instead of arrays, which I suppose will be hindering efficient querying to answer that kind of questions (that most probably will be asked at some point).With array you could easily implement the bucket pattern using $size in the query part to limit the array size completely avoiding the need to handle a 16M error.Using the bucket pattern sounds like an approach that possibly should have been followed too. It would just add quite a bit of complexity especially to the already complex recalculation logic, to rebuild or update all those collections for changed logic and/or migrated/deleted data.Also I guess you could call the current implementation already sort of static (yearly) bucket pattern, and I already had the “fun” manually implementing a dynamic bucket pattern in cassandra - size dependent with dynamic start/end timestamps of the point of time when the cap has been reached - and I was praying that I don’t have to do that again. I guess this issue goes too far, but good to know that arrays could help with that too.Are those dynamic keys sorted/indexed? I do not think they are. With an array you could easily index activities._id and make query an order of magnitude faster.Especially the arguments about the better performance with an index on activities._id (the dynamic keys are not indexed) and not needing $objectToArray anymore to find the minimum _id also sound convincing to me.The first questions that come to my mind about the migration from object structure to arrays would be:I guess this exceeds the scope of the initial question (and probably of appropiate questions in this forum in general) and with some time, research and perhaps more consultance I’ll find the missing pieces to reevaluate the schema design.",
"username": "Jan_de_Wilde"
},
{
"code": "",
"text": "One more issue, that would applies to both map and array implementation.Is your activities withing a single document often updated? By this I mean do you update by adding one activity at a time or you add most of the activities at the same time.The reason I asked is because when an document is updated the whole document is written back to storage. So if a document is update once per activity, then a document with N activities will be written N times. In that case it would probably be more efficient (update wise) to have the activities in a separate collection. This way only the small activity documents are written rather than a big document over and over again.",
"username": "steevej"
},
{
"code": "activitiesactivityidUseractivitiesactivity",
"text": "Is your activities withing a single document often updated? By this I mean do you update by adding one activity at a time or you add most of the activities at the same time.Yes, it is part of a real-time aggregation stream and activities are always upserted one at a time each time a relevant event arrives.This makes me think of my previous idea of having a flat data model with one document per activity. I was then confronted with the argument that such an approach would be a result of still thinking in the relational world and with mongodb there should rather be one document for what belongs together. But it would solve any potential problems to document size or embedded documents.I am currently thinking in the direction, that we should perhaps go for such documents per idUser and indexed arrays(?) of activities in a last collection, that will contain a subset of relevant data and allow flexible querying. But in the upstream collections that are very frequently updated for one activity at a time indeed it seems more reasonable to me to have one activity per document atm.Thanks for pointing that out!",
"username": "Jan_de_Wilde"
},
{
"code": "activity",
"text": "This makes me think of my previous idea of having a flat data model with one document per activity. I was then confronted with the argument that such an approach would be a result of still thinking in the relational world and with mongodb there should rather be one document for what belongs together. But it would solve any potential problems to document size or embedded documents.Look at GridFS-Can we use aggregation query from GridFS specification - #4 by steevej.I wroteIn some cases, plain old normalization, is still a valid solution.And we could add: in others it is the best solution.",
"username": "steevej"
},
{
"code": "",
"text": "Hi Jan and Steeve,In addition to what Steeve has already mentioned I went through this topic and in summary:This makes me think of my previous idea of having a flat data model with one document per activityMy conclusion is, the current schema design works fine for most of your use cases, but there are edge cases that requires you to work around the design’s limitations. Changing this structure into an array would not help you to avoid this situation, but might make it easier to handle. As per Steeve’s suggestion, although denormalization is usually how people design their schema in MongoDB, perhaps for this use case, you can consider normalization, as long as it doesn’t make your other existing workflow harder to do.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Removing map entry with minimum key | 2023-04-19T12:30:56.858Z | Removing map entry with minimum key | 993 |
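For readers landing on this thread, here is a minimal mongosh sketch of the array-plus-bucket approach discussed above. The collection name `users`, the field names `idUser`/`activities`, and the bucket size of 200 are assumptions for illustration, not anything confirmed in the thread:

```javascript
// Append one activity, but only into a bucket that still has room.
// "activities.199": { $exists: false } only matches documents whose array has
// fewer than 200 elements, so a full bucket is never grown past the cap;
// with upsert: true a new bucket document is created instead.
const activity = {
  _id: ObjectId("63ff26237d68b622e28852b3"),
  tstActivity: new Date("2023-03-01T11:17:06.000Z"),
  txtNameActivity: "testActivity1"
};

db.users.updateOne(
  { idUser: 42, "activities.199": { $exists: false } },
  { $addToSet: { activities: activity } },   // $addToSet also avoids exact duplicates, as noted above
  { upsert: true }
);

// Remove a single activity by its _id, the array equivalent of $unset on a dynamic key.
db.users.updateOne(
  { idUser: 42 },
  { $pull: { activities: { _id: ObjectId("63ff26237d68b622e28852b3") } } }
);

// An index on the embedded _id keeps lookups and $pull filters fast.
db.users.createIndex({ "activities._id": 1 });
```

Note that with multiple buckets per user, reads need to target all of that user's bucket documents (for example by querying on idUser), which is the usual trade-off of the bucket pattern.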
null | [
"node-js"
] | [
{
"code": "mongodbmongodb",
"text": "The MongoDB Node.js team is pleased to announce version 4.16.0 of the mongodb package!We invite you to try the mongodb library immediately, and report any issues to the NODE project.",
"username": "Warren_James"
},
{
"code": "",
"text": "Tried it out, working for me.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB NodeJS Driver 4.16.0 Released | 2023-04-18T18:24:57.637Z | MongoDB NodeJS Driver 4.16.0 Released | 1,257 |
null | [
"compass"
] | [
{
"code": "",
"text": "Hi,I have 3 node setup (2 data nodes + one arbiter). That works fine, secondary is promoted when primary fails. However, when one data node is down, there is sudden drop in performance of write operations. from around 15.000 updates/sec to exactly 100. I’m using write concern of 1. The only thing different I see in MongoDb compass is that after one data node is down QWRITES start to appear (in performance tab). Any help is welcomed.\nIn 3 data node setup performance stays the same. Am I missing anything obvious (as google search yields no result about this)?All nodes v6.0.5 running on linux.",
"username": "Goran_Sliskovic"
},
{
"code": "",
"text": "Apparantly the issue was caused by flow control. Since there are no data nodes that can keep up with replication, primary (only data node now) will throttle request so replicas are not too far out of sync. Thus suspiciously “round” number of requests per second (100).changing flow control helped:\ndb.adminCommand( { setParameter: 1, enableFlowControl: false } )",
"username": "Goran_Sliskovic"
},
{
"code": "",
"text": "Yeah, by default flow control is enabled, however i’m not very sure how it works with an arbiter only (when your secondary node is down).Generally using an arbiter is not recommended unless really needed.",
"username": "Kobe_W"
}
] | Performance drop when node goes down in repl. set | 2023-04-20T09:09:28.030Z | Performance drop when node goes down in repl. set | 453 |
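As a follow-up for anyone hitting the same symptom, a small mongosh sketch (run against the primary) for confirming that flow control is what is throttling writes before turning it off; the exact fields inside the flowControl section can vary by server version:

```javascript
// serverStatus exposes a flowControl section that reports whether flow control
// is enabled and whether the majority-commit point is currently considered lagged.
printjson(db.serverStatus().flowControl);

// Disable it at runtime, as in the post above...
db.adminCommand({ setParameter: 1, enableFlowControl: false });

// ...and verify the parameter actually changed.
db.adminCommand({ getParameter: 1, enableFlowControl: 1 });
```

To make the change survive restarts, the same parameter can also be set under setParameter in the mongod configuration file. Keep in mind that disabling flow control trades write throttling for potentially unbounded replication lag while a secondary is down.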
[] | [
{
"code": "",
"text": "I’m desperate all my services are down and I have no way to get it back up, my cluster is downand there is no justified reason for being down, the ram was fine, the hard drive barely used 50% and it barely used 100 connections out of 3000\nimage1924×1362 317 KB\nAt the same time I am creating a cluster 1 with the backup of cluster0 to try to be online as soon as possible for my customersplease help",
"username": "Hugo_Jerez"
},
{
"code": "",
"text": "Hi HugoPlease contact the Atlas in-app chat support regarding this.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | My cluster is unavailable | 2023-04-21T03:09:32.410Z | My cluster is unavailable | 562 |
|
null | [] | [
{
"code": "",
"text": "Is there a way to setup MongoDB Atlas to Auto Scale to a specific point, and then automatically generate and build a new cluster entirely and keep it under a limit, transfer over configs and have it take over the load from the other clusters overload?Basically make another cluster set that’s mirrored/cloned to a point, I know you can build functions to make the data query from one cluster to the other. But is there a more streamlined way of doing this, so you can have multiple clusters of say M10 or M20 for general performance vs one giant M30 or M40. But do this automatically, and as compaction or destruction of older unneeded data occurs, decommission the “new” cluster as demand declines. |What’ I’m looking for is a way to just through automation build and destroy clusters of a set tier instead of upgrading a tier because of needs for better CPU and RAM not being necessary vs say something like connections and so on.Plus for performance, this route in grand scheme talking with Technical Services I was advised more smaller clusters vs one big cluster is more ideal. But they weren’t quite sure how to implement this in an automated way. Such as on AWS I would just typically run several other services that would do it automatically, but sticking with Atlas specifically for data storage uses, what are my options?And yes, I do understand there is the server less, but the costs vs doing this other way are very different. As it also gives the scaling and ability to go by leaps instead of the smaller sprints server less uses.And opens the ability to use Realm and other things that server less doesn’t. I can build all of this in AWS, but again, looking at keeping it all in Atlas to simplify things as there’s other services besides AWS I’m using.And a contract bid I got in the pipeline will need the ability to cross between Azure and AWS and cost wise, it would save almost 23% of costs to just have Atlas as the centralized DB location and just pipe it over to Azure and AWS than to have it on AWS and pipe over to Azure, and Azure to AWS. Which would make this a lot more beneficial, at least for my use case.",
"username": "Brock"
},
{
"code": "",
"text": "Probably the largest and most dramatic feature needed: Automation, and ability to automate. As friendly as possible, like even ways to implement Atlas into Terraform or a means to remotely automate these above features, specifying tier, and the configs and populating the indexes wanted, with the pipelines and deploying the desired Atlas Functions and Triggers with the appropriate parameters and then building the endpoints for various application outputs to funnel into Atlas.Daisy chaining Atlas clusters and querying data from multiple clusters is a trivial thing for in project and multiple outside projects via Atlas Functions using HTTPS etc., but it’s just the automation of building and destroying on a whim/automatically is what I’m having problems orchestrating. Otherwise I’d just use Kubernetes and Docker all over.",
"username": "Brock"
}
] | Atlas Auto Scaling for Small Businesses | 2023-04-21T02:16:27.258Z | Atlas Auto Scaling for Small Businesses | 398 |
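A hedged sketch of the build-and-destroy automation discussed above, using the Atlas Administration API (v1.0) rather than the Atlas UI. The project ID, API keys, cluster name and region below are placeholders, and the payload fields should be checked against the current API reference before relying on them:

```sh
# Create a fixed-tier (M10) replica set in an existing project, authenticating
# with a programmatic API key over HTTP digest auth.
curl --user "${ATLAS_PUBLIC_KEY}:${ATLAS_PRIVATE_KEY}" --digest \
  --header "Content-Type: application/json" \
  --request POST "https://cloud.mongodb.com/api/atlas/v1.0/groups/${PROJECT_ID}/clusters" \
  --data '{
    "name": "overflow-cluster-01",
    "providerSettings": {
      "providerName": "AWS",
      "regionName": "US_EAST_1",
      "instanceSizeName": "M10"
    }
  }'

# Tear the same cluster down again when demand drops.
curl --user "${ATLAS_PUBLIC_KEY}:${ATLAS_PRIVATE_KEY}" --digest \
  --request DELETE "https://cloud.mongodb.com/api/atlas/v1.0/groups/${PROJECT_ID}/clusters/overflow-cluster-01"
```

The Terraform MongoDB Atlas provider and the Atlas CLI drive the same administration API, so either of those can wrap this create/destroy workflow if raw HTTP calls are not desirable.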
null | [
"atlas-cluster"
] | [
{
"code": "",
"text": "I have many clusters in MongoDB Cloud Atlas and many applications conected in this clusters.When a need to reorganize ou change a application between the clusters, i need to change the URI in the applications. This is a lot of work, enter in git, change URI in configmaps and deploy de application to see the new database.Example: mongodb+srv://user:[email protected]. Needs change to mongodb+srv://user:[email protected]. Here I used an example name that is provided when creating a cluster.Have a way to use my own DNS in this URI to access the cluster?Example: mongodb+srv://user:[email protected]. In this way when i need to change my application database, i only need to change DNS and not the complete URI in applications.I already see this documentation connection-string-uri-format, but i could’nt make it work",
"username": "Luiz_Fernando_Marques_de_Oliveira"
},
{
"code": "_mongodb._tcp.cluster01.mydomain.com. 60 IN SRV 0 0 27017 cluster0-shard-00-00-jxeqq.mongodb.net.\n_mongodb._tcp.cluster01.mydomain.com. 60 IN SRV 0 0 27017 cluster0-shard-00-01-jxeqq.mongodb.net.\n_mongodb._tcp.cluster01.mydomain.com. 60 IN SRV 0 0 27017 cluster0-shard-00-02-jxeqq.mongodb.net.\n",
"text": "I think that what you need is something like zookeeper.But nothing stops you from having one Atlas instance that contains the configuration of your applications.For the above 2 you will need an application launcher that get the current URI for your application and then starts the application with the given configuration.It surely could be done with DNS but as the solution above you would also need an application launcher. In this case, it would read TXT or CNAME records to get the URI.But once you understand SRV records, may be, just may be, you could maintain your own SRV. After all SRV records simply supply a list of A records which are nodes of the cluster. For example, using your cluster01.mydomain.com, you could create an SRV record such aswhich correspond the real cluster cluster0-jxeqq.mongodb.net used in the courses. After all an SRV simply supplies a list of hosts for a given service.Before SRV records a PTR record could be used but I never experience PTR with SRV.",
"username": "steevej"
}
] | How do i change name of default cluster connection, to use my DNS? | 2023-04-20T18:08:24.122Z | How do i change name of default cluster connection, to use my DNS? | 664 |
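One more detail that may help here: the mongodb+srv scheme also looks up a TXT record at the same hostname for default connection options (typically authSource and replicaSet), so a self-maintained zone would usually carry both record types. A hedged sketch, reusing the hypothetical cluster01.mydomain.com name from the thread; the replica set name shown is a placeholder that must match the real cluster:

```
_mongodb._tcp.cluster01.mydomain.com. 60 IN SRV 0 0 27017 cluster0-shard-00-00-jxeqq.mongodb.net.
_mongodb._tcp.cluster01.mydomain.com. 60 IN SRV 0 0 27017 cluster0-shard-00-01-jxeqq.mongodb.net.
_mongodb._tcp.cluster01.mydomain.com. 60 IN SRV 0 0 27017 cluster0-shard-00-02-jxeqq.mongodb.net.
cluster01.mydomain.com.               60 IN TXT "authSource=admin&replicaSet=Cluster0-shard-0"
```

Because the SRV targets still resolve to the real *.mongodb.net hosts, TLS certificate validation against the Atlas nodes is unaffected; only the discovery name changes.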
null | [
"atlas-cluster"
] | [
{
"code": "",
"text": "When trying to connect I receive this error Error querySrv ENOTFOUND _mongodb._tcp.cluster0.rx6v7.mongodb.net I’m not sure of the issue I’ve searched and searched. The frustrating this is my app worked; however I’m unsure of when it stopped working. If anyone could be of assistance that would be great!!",
"username": "Tamara_Wilburn"
},
{
"code": "",
"text": "Hi @Tamara_Wilburn - Welcome to the community I would read over a similar post : Can't connect to MongoDB Atlas - querySrv ENOTFOUNDThe frustrating this is my app worked; however I’m unsure of when it stopped working.You can try with a different DNS (like 8.8.8.8) for troubleshooting purposes but the error you’ve provided indicates a DNS resolution failure for the SRV record associated with the connection string used. Do you know if any network settings would have changed recently since you have advised it worked prior?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I know according to others, that Heroku did some sort of update; I’m assuming that’s when the issues occurred, I literally finished the projects, deployed them and never touched them again out of fear I would break them lol.",
"username": "Tamara_Wilburn"
},
{
"code": "** server can't find _mongodb._tcp.cluster0.rx6v7.mongodb.net: NXDOMAIN\nnslookupnslookup -type=srv _mongodb._tcp.cluster0.<REDACTED>.mongodb.net\nServer:\t\t127.0.0.1\nAddress:\t127.0.0.1#53\n\nNon-authoritative answer:\n_mongodb._tcp.cluster0.qemgxcq.mongodb.net\tservice = 0 0 27017 ac-iceu1mh-shard-00-00.<REDACTED>.mongodb.net.\n_mongodb._tcp.cluster0.qemgxcq.mongodb.net\tservice = 0 0 27017 ac-iceu1mh-shard-00-01.<REDACTED>.mongodb.net.\n_mongodb._tcp.cluster0.qemgxcq.mongodb.net\tservice = 0 0 27017 ac-iceu1mh-shard-00-02.<REDACTED>.mongodb.net.\n",
"text": "Interesting - Might be worth contacting them also regarding the sudden DNS error.In terms of the cluster itself, can you check the connection string matches or if the cluster is active? I was unable to resolve the one you provided although I can understand if this is because you changed it for security concerns:For reference just to compare, I performed a nslookup on my own test cluster where 3 hostnames were able to be resolved:",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I’m not sure how to do that\n\nScreen Shot 2023-04-20 at 8.50.39 PM2500×1382 300 KB\n",
"username": "Tamara_Wilburn"
},
{
"code": "nslookup -type=srv _mongodb._tcp.cluster0.ztqwf.mongodb.net\nServer:\t\t127.0.0.1\nAddress:\t127.0.0.1#53\n\nNon-authoritative answer:\n_mongodb._tcp.cluster0.ztqwf.mongodb.net\tservice = 0 0 27017 cluster0-shard-00-00.ztqwf.mongodb.net.\n_mongodb._tcp.cluster0.ztqwf.mongodb.net\tservice = 0 0 27017 cluster0-shard-00-01.ztqwf.mongodb.net.\n_mongodb._tcp.cluster0.ztqwf.mongodb.net\tservice = 0 0 27017 cluster0-shard-00-02.ztqwf.mongodb.net.\nconnect",
"text": "_mongodb._tcp.cluster0.rx6v7.mongodb.netThis is the error message you provided at the start The screenshot you provided has a different unique identifer as far as I can see. rx6v7 in the original error vs ztqwf in the screenshot.It does look like the screenshot’s SRV record resolves to the hostnames from my system (where as the original error srv record didn’t):Can you try check your connection string in Heroku and make sure it matches this cluster (that you’ve attached in the screenshot)? You can get the connection string in Atlas from the connect button of the cluster.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I fixed it. for some strange reason the backend of the project wasn’t showing up in MongoDB Atlas, all of a sudden it loaded, and the cluster0 was paused…, and that was the fix. OMG LOL. thx for your help!!",
"username": "Tamara_Wilburn"
},
{
"code": "M0M2M5",
"text": "Thanks for confirming FWIW from the Pause, Resume or Terminate a Cluster documentation:Atlas automatically pauses all inactive M0 , M2 , and M5 clusters after 60 days.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error querySrv ENOTFOUND _mongodb._tcp.cluster0.rx6v7.mongodb.net | 2023-04-20T21:11:58.500Z | Error querySrv ENOTFOUND _mongodb._tcp.cluster0.rx6v7.mongodb.net | 1,795 |
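For future readers whose environment genuinely cannot resolve SRV records (some corporate resolvers and container base images), a hedged workaround is the long-form seed-list connection string, which skips the SRV lookup entirely. The hostnames follow the pattern shown earlier in the thread, and the replicaSet value below is a placeholder that can be read from the cluster's TXT record or the Atlas connect dialog:

```
# nslookup -type=txt cluster0.ztqwf.mongodb.net   <- shows the authSource/replicaSet options
mongodb://user:<password>@cluster0-shard-00-00.ztqwf.mongodb.net:27017,cluster0-shard-00-01.ztqwf.mongodb.net:27017,cluster0-shard-00-02.ztqwf.mongodb.net:27017/myDatabase?ssl=true&replicaSet=<replica-set-name>&authSource=admin&retryWrites=true&w=majority
```

In this particular case the root cause turned out to be a paused cluster, so resuming the cluster is the actual fix; the seed-list form only helps when SRV/DNS resolution itself is the problem.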
null | [
"atlas-search"
] | [
{
"code": "mustshould{\n “_id”: 1,\n \"baseScore\": 0.5,\n \"textField\": \"Apple\"\n},\n{\n “_id”: 2,\n \"baseScore\": 0.2,\n \"textField\": \"Banana\"\n}\nshould",
"text": "Hi team,May I ask if it is possible to assign a base score when using MongoDB Atlas Search, so that the must or should query could boost the score based on the base score?Example documents could be:So when I use a should query to search, it will also consider my baseScore, and boost based on that",
"username": "williamwjs"
},
{
"code": "baseScorebaseScore",
"text": "Hi William,I’m a bit confused regarding the below details:Considering the above, is the baseScore field that exists in the example documents what you want to achieve or is it a field you’re wanting to use in the actual search score calculation?Just to clarify, would you be able to give an example search query and your expected output?Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": " $search: {\n \"compound\": {\n \"should\": [{\n \"autocomplete\": {\n \"path\": \"textField\",\n \"query\": \"b\",\n \"score\": { \"boost\": { \"value\": 2}}\n }\n }]\n }\n }\n",
"text": "@Jason_Tran Thank you for the quick response!!!So the question is how to make Atlas Search use my baseScore in the actual search score calculation.Example query could be:So for the above examples, although id2 will be boosted, it will be 0.2*2, which is still less than 0.5, thus the returning order is still id1 and then id2Let me know if the above makes sense to you",
"username": "williamwjs"
},
{
"code": "\"ba\"\"b\"query_id: 2bscore> db.collection.find()\n[\n { _id: 1, baseScore: 0.5, textField: 'Apple' },\n { _id: 2, baseScore: 0.2, textField: 'Banana' }\n]\n$search_id: 2score> db.collection.aggregate(\n[\n {\n '$search': {\n index: 'textindex',\n compound: {\n should: [\n {\n autocomplete: {\n path: 'textField',\n query: 'ba',\n score: { boost: { value: 2 } }\n }\n }\n ]\n }\n }\n },\n {\n '$project': {\n _id: 1,\n baseScore: 1,\n textField: 1,\n searchScore: { '$meta': 'searchScore' }\n }\n }\n])\n[\n {\n _id: 2,\n baseScore: 0.2,\n textField: 'Banana',\n searchScore: 0.9241962432861328\n }\n]\nscore> \n",
"text": "So the question is how to make Atlas Search use my baseScore in the actual search score calculation.Curious what’s the use case here for combining Atlas Search’s scoring with your own custom scoring? Every document returned by an Atlas Search query is assigned a score based on relevance, and the documents included in a result set are returned in order from highest score to lowest.So for the above examples, although id2 will be boosted, it will be 0.2*2, which is still less than 0.5, thus the returning order is still id1 and then id2From your example I believe only 1 document would be returned (I used \"ba\" as opposed to \"b\" for the query value). So return order would not match what you would have described as there would only be a single document returned.For instance, your example query would presumably return only document with _id: 2 (basing this off the b query value you provided):$search pipeline only returning document with _id: 2:",
"username": "Jason_Tran"
},
{
"code": "constant",
"text": "Do you think use of the constant scoring option would help your use case?:The constant option replaces the base score with a specified number.",
"username": "Jason_Tran"
},
{
"code": "should",
"text": "@Jason_Tran Thank you for your detailed reply!!!Interesting that it would only return 1 result as I thought should would only boost instead of filter, but that is a different topic that I could look into later.The potential use case for a base score is like:\nConsider a use case like LinkedIn, where there’re many profiles that may or may not be completed, e.g., missing profile photo, content looks suspicious, etc. So I would like to assign a base score so that my search sorting will be a combination of text search native ranking and how I want to boost certain profiles.And constant scoring option would not help because my base score is irrelevant to the searching text.",
"username": "williamwjs"
},
{
"code": "",
"text": "The potential use case for a base score is like:\nConsider a use case like LinkedIn, where there’re many profiles that may or may not be completed, e.g., missing profile photo, content looks suspicious, etc. So I would like to assign a base score so that my search sorting will be a combination of text search native ranking and how I want to boost certain profiles.Interesting - thanks for providing the use case details William Just another question: What would the base score represent here or how is it determined? I.e. What makes one base score higher than another? I assume all of this base scoring would occur before any searching as per your example documents.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "yes, the score would be computed before search. I am using an offline processing flow to pre-compute a rule-based score from certain non-text factors.",
"username": "williamwjs"
},
{
"code": "baseScorebaseScore$searchnearoriginbaseScorescore> db.collection.aggregate(\n[\n {\n '$search': {\n index: 'textindex',\n compound: {\n should: [\n {\n autocomplete: {\n path: 'textField',\n query: 'ba',\n score: { boost: { value: 2 } }\n }\n },\n { near: { path: 'baseScore', origin: 1, pivot: 0.1 } }\n ]\n }\n }\n },\n {\n '$project': {\n _id: 1,\n baseScore: 1,\n textField: 1,\n searchScore: { '$meta': 'searchScore' }\n }\n }\n])\n_id: 1[\n {\n _id: 8,\n textField: 'Banana 7',\n baseScore: 0.7,\n searchScore: 0.5457944273948669\n },\n {\n _id: 7,\n textField: 'Banana 6',\n baseScore: 0.6,\n searchScore: 0.495794415473938\n },\n {\n _id: 6,\n textField: 'Banana 5',\n baseScore: 0.5,\n searchScore: 0.46246111392974854\n },\n {\n _id: 5,\n textField: 'Banana 4',\n baseScore: 0.4,\n searchScore: 0.43865156173706055\n },\n {\n _id: 4,\n textField: 'Banana 3',\n baseScore: 0.3,\n searchScore: 0.42079442739486694\n },\n {\n _id: 3,\n textField: 'Banana 2',\n baseScore: 0.2,\n searchScore: 0.40690553188323975\n },\n {\n _id: 2,\n baseScore: 0.2,\n textField: 'Banana',\n searchScore: 0.3748180866241455\n },\n {\n _id: 1,\n baseScore: 0.5,\n textField: 'Apple',\n searchScore: 0.1666666716337204\n }\n]\n",
"text": "Based off your description and correct me if I am wrong, the baseScore is an independent score calculated outside of Atlas search. Because of this, I don’t believe it’s possible to combine the Atlas search scoring with the baseScore field you’ve mentioned to create the custom sorting you are after without use of additional aggregation stages. I assume you wanted all of this done alone in the $search stage for performance reasons maybe.However, does the following example perhaps look a bit closer to what you’re after? I have a possible workaround, although i’m not entirely sure if it works for your use case, which uses the near operator with an origin of 1 (Let’s say this is the baseScore we want to be closest to for sorting):Output (which includes the document with _id: 1):I added a few extra documents as I was trying to understand how to achieve that ordering you were after but perhaps this approach won’t work for your data set / use case.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I think this would work for me well.\n@Jason_Tran Thank you a lot for your suggestion!!!",
"username": "williamwjs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Assign base score for MongoDB Atlas Search | 2023-04-20T01:23:19.021Z | Assign base score for MongoDB Atlas Search | 714 |
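Another option that may fit this use case, not covered in the thread above, is Atlas Search's function score, which can fold a stored numeric field directly into the relevance score instead of approximating it with near. The sketch below reuses the baseScore field and textindex index names from the thread; baseScore would need to be indexed as a number in the search index, and the exact expression syntax should be verified against the Atlas Search scoring documentation for your cluster version:

```javascript
db.collection.aggregate([
  {
    $search: {
      index: "textindex",
      compound: {
        should: [
          {
            autocomplete: {
              path: "textField",
              query: "ba",
              // Multiply the text relevance score by the precomputed baseScore,
              // defaulting to 1 for documents that have no baseScore value.
              score: {
                function: {
                  multiply: [
                    { score: "relevance" },
                    { path: { value: "baseScore", undefined: 1 } }
                  ]
                }
              }
            }
          }
        ]
      }
    }
  },
  {
    $project: {
      textField: 1,
      baseScore: 1,
      searchScore: { $meta: "searchScore" }
    }
  }
]);
```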
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.4.20 is out and is ready for production deployment. This release contains only fixes since 4.4.19, and is a recommended upgrade for all 4.4 users.Fixed in this release:4.4 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.4.20 is released | 2023-04-12T19:27:18.370Z | MongoDB 4.4.20 is released | 1,188 |
null | [] | [
{
"code": "",
"text": "Realizei o upgrade do mongodb de um ambiente de replica da versão 4.2 para versão 4.4 porem toda vez que reinicio o servidor a pasta mongodb é excluida…Saberia informar se é um bug da versao 4.4 ou se tem alguma solução",
"username": "Tatiana_Jandira"
},
{
"code": "I upgraded mongodb from a replica environment from \nversion 4.2 to version 4.4 but every time I restart \nthe server the mongodb folder is deleted.\n\nI would know if it is a bug in \nversion 4.4 or if there is a solution\n",
"text": "Hello @Tatiana_Jandira,Welcome to the MongoDB Community forums I believe this is the English translation of your question above.Could you kindly share the operating system you are currently using?Additionally, can you provide me with details regarding the steps you are taking to upgrade MongoDB? Also, I would appreciate it if you could specify the exact version of MongoDB you are currently using, which is 4.2.x, and the specific sub-version to which you are upgrading, which is 4.4.x.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Could you kindly share the operating system you are currently using?Additionally, can you provide me with details regarding the steps you are taking to upgrade MongoDB? Also, I would appreciate it if you could specify the exact version of MongoDB you are currently using, which is 4.2.x, and the specific sub-version to which you are upgrading, which is 4.4.xThanks for the return. I managed to get around the problem.\nTo solve it I had to create the mongodb/mongod.pid directory in another location and change the path in /etc/mongod.conf.\nAfter restarting the MongoDB service it worked.But the installation procedure was as follows:Current MongoDB Version: 4.2.18 running on Amazon Linux. Aim to upgrade to 6.0I updated using the mongoDB documentation: https://www.mongodb.com/docs/manual/release-notes/4.4-upgrade-replica-set/After I upgraded to 4.4, whenever I restarted the server the mongodb/mongod.pid directory was deleted.To solve it I had to create the mongodb/mongod.pid directory in another location and change the path in /etc/mongod.conf.\nAfter restarting the MongoDB service it worked.",
"username": "Tatiana_Jandira"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Erro upgrade to mongo4.4 | 2023-04-18T14:36:40.082Z | Erro upgrade to mongo4.4 | 554 |
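For reference, the workaround described in this thread corresponds to the processManagement block of /etc/mongod.conf. A minimal sketch, assuming the PID file is moved to a hypothetical directory that is owned by the mongod user:

```yaml
# /etc/mongod.conf (excerpt)
processManagement:
  fork: true
  # Point this at a directory that survives reboots and is owned by the mongod user;
  # /data/mongodb is a hypothetical example, not necessarily the location the poster used.
  pidFilePath: /data/mongodb/mongod.pid
  timeZoneInfo: /usr/share/zoneinfo
```

A likely explanation for the "disappearing folder" (an assumption, not confirmed in the thread) is that the default /var/run location is a tmpfs recreated on every boot on most modern Linux distributions, so a PID directory created there by hand vanishes after a restart unless something like a systemd-tmpfiles rule or the package's service unit recreates it.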
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 5.0.17-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.16. The next stable release 5.0.17 will be a recommended upgrade for all 5.0 users.Fixed in this release:5.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 5.0.17-rc0 is released | 2023-04-20T17:55:42.346Z | MongoDB 5.0.17-rc0 is released | 908 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.4.21-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.20. The next stable release 4.4.21 will be a recommended upgrade for all 4.4 users.Fixed in this release:4.4 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.4.21-rc0 is released | 2023-04-20T17:53:25.071Z | MongoDB 4.4.21-rc0 is released | 798 |
null | [
"atlas-device-sync",
"app-services-user-auth",
"react-native",
"flutter",
"react-js"
] | [
{
"code": "import Realm from 'realm';\nimport { AppRegistry } from 'react-native';\n\nconst appConfig = {\n id: '<your-realm-app-id>',\n timeout: 10000,\n};\n\nconst app = new Realm.App(appConfig);\n\nAppRegistry.registerComponent('MyApp', () => App);\nasync function loginWithJWT(token) {\n try {\n const credentials = Realm.Credentials.jwt(token);\n const user = await app.logIn(credentials);\n return user;\n } catch (err) {\n console.error('Failed to log in', err);\n }\n}\nfunction getUserRole(token) {\n try {\n const decodedToken = jwt.decode(token);\n return decodedToken.role;\n } catch (err) {\n console.error('Failed to decode token', err);\n }\n}\nfunction hasRole(userRole, requiredRole) {\n const roles = {\n admin: 3,\n moderator: 2,\n user: 1,\n };\n\n return roles[userRole] >= roles[requiredRole];\n}\nasync function viewUserRecords(token) {\n const userRole = getUserRole(token);\n\n if (!hasRole(userRole, 'admin')) {\n throw new Error('Unauthorized');\n }\n\n const users = await realm.objects('User');\n return users;\n}\nclass App extends React.Component {\n constructor(props) {\n super(props);\n\n this.state = {\n userToken: null,\n user: null,\n };\n\n this.handleLogin = this.handleLogin.bind(this);\n }\n\n async handleLogin() {\n const token = await fetchTokenFromServer();\n const user = await loginWithJWT(token);\n\n this.setState({\n userToken: token,\n user,\n });\n }\n\n render() {\n return (\n <View>\n <Button title=\"Log in\" onPress={this.handleLogin} />\n </View>\n );\n }\n}\n",
"text": "As this became a majorly hot topic that sent temperatures to levels like that of the sun, I thought I’d make this and make this information more publicly viewable.This is how you can implement role-based privileges with JWT user authentication that expands to deeper levels of login access than what the Device Sync Docs cover. If if you wish to republic just give credit, all I ask.Here is an example of implementing JWT user login with role-based privileges in MongoDB Realm React Native SDK:First, set up a MongoDB Realm application and configure authentication providers.Create a new Realm app client in your React Native app, and initialize the Realm app:This function checks if the user has an “admin” role and returns all user records if so. If the user does not have an “admin” role, an “Unauthorized” error is thrown.This example stores the user’s JWT in the app’s state after logging in.With these functions and example code, you can implement JWT user login with role-based access control in your MongoDB Realm React Native app.",
"username": "Brock"
},
{
"code": "{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"ec2:DescribeInstances\",\n \"ec2:DescribeTags\"\n ],\n \"Resource\": \"*\"\n },\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"ec2:CreateNetworkInterface\",\n \"ec2:DescribeNetworkInterfaces\",\n \"ec2:DeleteNetworkInterface\"\n ],\n \"Resource\": \"*\"\n },\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"ec2:AuthorizeSecurityGroupIngress\",\n \"ec2:AuthorizeSecurityGroupEgress\",\n \"ec2:RevokeSecurityGroupIngress\",\n \"ec2:RevokeSecurityGroupEgress\",\n \"ec2:CreateSecurityGroup\",\n \"ec2:DescribeSecurityGroups\",\n \"ec2:DeleteSecurityGroup\"\n ],\n \"Resource\": \"*\"\n },\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"ec2:AttachNetworkInterface\",\n \"ec2:DetachNetworkInterface\"\n ],\n \"Resource\": \"*\"\n },\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"dynamodb:BatchGetItem\",\n \"dynamodb:GetItem\",\n \"dynamodb:Query\",\n \"dynamodb:Scan\",\n \"dynamodb:BatchWriteItem\",\n \"dynamodb:PutItem\",\n \"dynamodb:UpdateItem\",\n \"dynamodb:DeleteItem\"\n ],\n \"Resource\": \"*\"\n }\n ]\n}\nconst myCustomRole = {\n name: \"customRole\",\n apply: {\n read: true,\n write: true,\n execute: true,\n schema: true\n },\n // Define permission sets for this role\n // ...\n}\n\nRealm.App.Sync.refreshCustomData({ customUserData: { roles: [myCustomRole] } });\nconst AWS = require('aws-sdk');\nAWS.config.update({accessKeyId: '<access_key>', secretAccessKey: '<secret_key>'});\n\nconst iam = new AWS.IAM({apiVersion: '2010-05-08'});\n\nconst params = {\n AssumeRolePolicyDocument: JSON.stringify({\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Principal\": {\"Service\": \"lambda.amazonaws.com\"},\n \"Action\": \"sts:AssumeRole\"\n }\n ]\n }),\n RoleName: \"myRole\"\n};\n\niam.createRole(params, function(err, data) {\n if (err) {\n console.log(err, err.stack);\n } else {\n console.log(data);\n }\n});\n\nconst myCustomRole = {\n name: \"customRole\",\n apply: {\n read: true,\n write: true,\n execute: true,\n schema: true\n },\n // Define permission sets for this role\n permissions: [\n {\n resource: { type: 'Function', name: 'myFunction' },\n actions: ['execute']\n },\n {\n resource: { type: 'Collection', name: 'myCollection' },\n actions: ['read', 'write']\n }\n ]\n}\n\nRealm.App.Sync.refreshCustomData({ customUserData: { roles: [myCustomRole] } });\n\nexports.handler = function(event, context, callback) {\n const AWS = require('aws-sdk');\n const iam = new AWS.IAM();\n\n const username = event.identity.username;\n\n // Map Realm user roles to AWS IAM roles\n let roleName = '';\n switch (event.user.custom_data.role) {\n case 'admin':\n roleName = 'myAdminRole';\n break;\n case 'editor':\n roleName = 'myEditorRole';\n break;\n case 'viewer':\n roleName = 'myViewerRole';\n break;\n default:\n roleName = 'myDefaultRole';\n }\n\n const params = {\n RoleName: roleName,\n PrincipalArn: `arn:aws:iam::${process.env.AWS_ACCOUNT_ID}:user/${username}`,\n StatementId: `${roleName}-${username}`\n };\n\n iam.assumeRole(params, function(err, data) {\n if (err) {\n console.log(err, err.stack);\n callback(err);\n } else {\n console.log(data);\n const response = {\n statusCode: 200,\n body: JSON.stringify({ success: true })\n };\n callback(null, response);\n }\n });\n};\n\n// Define roles and permissions\nconst roles = {\n admin: { read: true, write: true, execute: true },\n editor: { read: true, write: true, execute: false }\n viewer: { read: true, write: false, 
execute: false }\n};\n\n// Assign roles to users\nconst users = [\n { username: 'user1', roles: ['viewer'] },\n { username: 'user2', roles: ['viewer', 'editor'] },\n { username: 'user3', roles: ['viewer', 'editor','admin'] }\n];\n\n// Define sync rules\nconst syncConfig = {\n user: currentUser,\n partitionValue: 'myPartitionKey',\n filter: (document) => {\n const userRoles = users.find(u => u.username === currentUser)?.roles || [];\n const documentRoles = document.roles || [];\n return documentRoles.some(r => userRoles.includes(r));\n }\n};\n\n// Open a Realm synced realm\nconst realm = await Realm.open({\n schema: [MySchema],\n sync: syncConfig\n});\n\n// Query synced data\nconst results = realm.objects('MyObject');\n",
"text": "Going to take this and go crazy, going to get into as much detail of how to do this as I can for you guys and gals. As there seems to be major confusion for role-based access using Realm, a lot of this has to be custom logic as none of it is organic to Realm itself.Implementing user role-based authentication with MongoDB Realm, AWS IAM, and MongoDB Atlas involves a few steps:All steps to implement user role-based authentication with MongoDB Realm, AWS IAM, and MongoDB Atlas:Create a MongoDB Atlas account and set up a cluster:Configure AWS IAM roles and permissions:Set up MongoDB Realm and link it to the Atlas cluster:Define user roles in Realm and map them to AWS IAM roles:Expanding on this:\nTo configure AWS IAM roles and permissions for MongoDB Atlas and Realm, follow these steps:4a. Log in to the AWS Management Console and navigate to the IAM dashboard.4b. Create an IAM role for your application users. This role will be used to grant access to your MongoDB Atlas resources. When creating the role, select the “AWS service” as the trusted entity and then choose “EC2” as the use case.4c. Attach the appropriate policies to your IAM role to grant permissions for the actions your users need to perform in MongoDB Atlas. For example, you may want to attach the AmazonEC2ReadOnlyAccess policy to allow read-only access to MongoDB Atlas resources.4d. Create a policy that grants access to your MongoDB Atlas cluster. The policy should include the following permissions:4e Attach the policy to the IAM role you created in step 2.4f. Configure your MongoDB Atlas cluster to use IAM authentication. This requires you to create an IAM database user in MongoDB Atlas and configure the cluster to use IAM authentication.4g. Configure your Realm application to use IAM authentication by creating a new IAM provider and linking it to your AWS account. You will need to provide the AWS access key ID and secret access key to complete this step.4gi. To configure your Realm application to use IAM authentication with AWS, follow these steps:4gii. Log in to the MongoDB Realm console and select your project.\n4giii. In the left-hand navigation menu, select “Authentication Providers.”\n4giv. Click the “Add a Provider” button and select “AWS IAM” from the dropdown menu.\n4gv. Enter a name for the provider and click “Create.”\n4gvi. Enter your AWS access key ID and secret access key in the fields provided.\n4gvii. Select the AWS region where your IAM users are located.\n4gviii. Click “Save” to create the provider.4gu. Once you have created the IAM provider, you can link it to your AWS account by following these steps:4gua. In the AWS console, navigate to the IAM service and select “Users” from the left-hand menu.\n4gub. Select the user that you want to link to your Realm application.\n4guc. Click the “Add Permissions” button and select “Attach Existing Policies Directly.”\n4gud. Search for and select the policy that you created for your Realm application.\n4gue. Click “Review” and then “Add Permissions” to attach the policy to the user.You can now assign IAM roles to your application users and configure permissions based on their roles using the policy that you created. When an application user logs in to your Realm application, they will be authenticated using their IAM credentials and their access to MongoDB Atlas resources will be determined by their assigned IAM role.4h. Configure your Realm application to use role-based access control by creating roles and assigning them to users. 
You can create custom roles and assign permissions based on the needs of your application.4i. Test your application to ensure that users are able to authenticate and access the appropriate resources based on their assigned roles.To define user roles in Realm and map them to AWS IAM roles, you can follow these steps:b1. Define custom user roles in Realm:\nDefine user roles in Realm that align with your application’s business logic. For example, you may define roles such as “admin”, “manager”, and “user”. These roles can be created in the Realm UI or through the Realm SDK.b2. Define permission sets for each role:\nDefine permission sets for each role that include read, write, and execute permissions on collections and functions. You can set permissions for specific fields, query filters, and other criteria.b3. Configure AWS IAM roles and policies:\nCreate IAM roles and policies that grant access to your MongoDB Atlas resources. Assign permissions to these roles based on the user roles defined in Realm. For example, you may create an IAM role called “admin” and assign it permissions to perform all actions on the Atlas cluster. Similarly, you may create an IAM role called “user” and assign it permissions to read data from specific collections.b4. Map Realm user roles to AWS IAM roles using Lambda functions:\nUse AWS Lambda functions to map Realm user roles to AWS IAM roles. When a user logs in to your application, Realm can invoke a Lambda function that retrieves the user’s role from Realm and maps it to an IAM role. The IAM role is then used to grant the user access to the necessary resources.Overall, the key is to align your user roles and permissions with your application’s business logic and map them to IAM roles that provide the necessary level of access to your MongoDB Atlas resources.b5. Secure your application with role-based access control (RBAC):Role-based access control (RBAC) is a method of controlling access to resources in a system based on the roles of individual users within the organization. In Realm Sync, you can implement RBAC using query-based sync rules, which are rules that are evaluated server-side to determine which documents should be synchronized to a client.To implement RBAC using query-based sync rules in Realm Sync, you would typically follow these steps:c1. Define roles and permissions: Define the roles that are available in your application, along with the permissions that each role should have. For example, you might have an “admin” role with read/write access to all documents, and a “user” role with read-only access to certain documents.c2. Assign roles to users: Assign each user in your application to one or more roles. This can be done programmatically or through an administrative interface.c3. Define sync rules: Define sync rules that allow users to only sync the documents they are authorized to access. For example, you might have a sync rule that only allows users with the “admin” role to sync all documents, while users with the “user” role can only sync documents that have a specific tag or attribute.Here’s an example of how you might implement RBAC using query-based sync rules in Realm Sync:In this example, we define three roles to fit the rest of the code (“viewer”, “editor”, and “admin”) and assign them to three. We then define a sync rule that filters documents based on their “roles” attribute and the roles assigned to the current user. 
Finally, we open a Realm synced realm and query data based on the sync rule.NOTE: this is just a simple example, and RBAC can be implemented in a more complex application using more sophisticated sync rules and query filters. Additionally, RBAC should be combined with other security measures such as authentication and encryption to ensure that your application is as secure as possible.Overall, this approach provides a powerful and flexible way to implement user role-based authentication with MongoDB Realm, AWS IAM, and MongoDB Atlas. By leveraging these tools and technologies, you can create a secure and scalable backend for your React Native application, with granular access controls that ensure that users can only access the data they need.",
"username": "Brock"
},
{
"code": "",
"text": "Admittedly I got lazy near the end of the tutorial, which was the wrong things to do, so I’ve made the appropriate corrections to make it all read more cohesive.",
"username": "Brock"
},
{
"code": "",
"text": "@magentodevelopment Let me know how the tutorial works for you and if you run into any problems/where I can improve it.Great seeing ya, and I hope your new job is going awesome! Shiv is pretty chill. They did a great job for Nyucrush, lot of great solutions they implemented for the cloud infra they built on it.",
"username": "Brock"
}
] | Build React.Native App With React SDK and Role Based User Privileges - Tutorial | 2023-04-13T00:09:43.902Z | Build React.Native App With React SDK and Role Based User Privileges - Tutorial | 1,603 |
null | [
"data-api"
] | [
{
"code": "",
"text": "Hi,I have a question that can MongoDB Atlas Data API endpoints can be configured to limit to certain IP addresses (i.e. IP Access List) in Preview release? If not, is it something you guys plan to implement before prod release?Thanks,\nShoaib",
"username": "Shoaib_Akhtar1"
},
{
"code": "",
"text": "Hey Shoaib - thanks for calling this out. This is something we intend to provide for GA, but isn’t publicly available for preview release. If you have a support subscription with MongoDB, we can look into how we can configure this internally for you.Thanks,\nSumedha",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Thanks Sumedha for clarifying.Any timelines for the GA release?",
"username": "Shoaib_Akhtar1"
},
{
"code": "",
"text": "We’re targetting sometime between Q2/Q3 (mid-year) of 2022",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "hi @Sumedha_Mehta1 , would like to know if this feature made it to GA? I couldn’t find anything in official docs on ip restriction. Thanks in advance.",
"username": "Ivan_Chong1"
},
{
"code": "",
"text": "@Ivan_Chong1It’s in App Services tab, click on your Data API then go to app settings then you’ll see IP Access List.\n\nScreenshot 2023-04-13 at 10.20.19 PM2856×1598 309 KB\n",
"username": "Brock"
},
{
"code": "",
"text": "Found it. Thanks for the help!",
"username": "Ivan_Chong1"
},
{
"code": "",
"text": "@Ivan_Chong1 anytime my friend.",
"username": "Brock"
},
{
"code": "",
"text": "",
"username": "henna.s"
}
] | IP Access List for Data API End Points | 2022-02-07T16:02:41.439Z | IP Access List for Data API End Points | 3,747 |
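For context on what the IP Access List in App Settings actually gates, here is a hedged sketch of a Data API request; the app ID, API key, cluster name and namespace are placeholders, and the base URL can differ for regionally deployed apps. Once an IP Access List is configured, requests coming from addresses outside the list are rejected before reaching the cluster:

```sh
curl --request POST \
  "https://data.mongodb-api.com/app/${APP_ID}/endpoint/data/v1/action/findOne" \
  --header "Content-Type: application/json" \
  --header "api-key: ${DATA_API_KEY}" \
  --data '{
    "dataSource": "Cluster0",
    "database": "sample_mflix",
    "collection": "movies",
    "filter": { "title": "The Matrix" }
  }'
```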
[] | [
{
"code": "",
"text": "I think something is broken with Free Tier for the DBA labs.I’m not sure why I keep getting this pop up?\nScreenshot 2023-04-19 at 4.56.01 PM2474×1202 263 KB\n",
"username": "Brock"
},
{
"code": "",
"text": "Hi @Brock ,I believe this is due to the sample data set being imported being around 350MB. Based off your screenshot the current Data Size for that cluster is already at 336.6MB / 512MB (i.e., Only about 176MB free).Since theres only ~176MB free, when you try import the sample data set of 350MB, this error will be shown.For example, I have a test cluster thats already ~420MB:\n\nimage2474×974 221 KB\nWhen I try import the sample data set:\n\nimage2432×882 214 KB\nHope this clears things up.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "mongodump",
"text": "Some options I can currently think of is:Jason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I get that, but I never imported any data, so I’m not sure what the other 300+ MB is for/from? I just created this as a part of the lab, the labs been over for a hot minute now, just manufactured a second lab from the current lesson, generated different users automatically, I’m kind of lost on the logic of this line of classes so far and the consistent creation/load of M0’s and sample data.Some lessons are auto loading the sample data while saying you need to upload it, others are giving you statements saying you need to install 9 of the sample data sets (there are 8 for options) and others saying the sample data is preloaded didn’t have sample data loaded.It’s not that big of a deal per se, just confusing when you’re trying to go by the instructions step by step what it’s asking on the fly to get through the easy parts fast.But also, even when not trying to load in sample data, that pop up comes up everywhere, even with 0MB per the chart.The data will still upload, but the banner never goes away.Also the course will say it’s 1.25 long, 2.00 long etc. I’m not sure what this number is supposed to mean, as some labs are only 10 minutes long if that including videos. In several labs you don’t go over anything but copying and pasting a single script to say create a DB in the first lesson, the quiz says Cluster0 etc. is the cluster name, which it’s not. And user generated wasn’t it was Cluster46957 just for correction for the exam segment, the username and cluster names it says it’s supposed to generate weren’t accurate.It took until I think the 3rd lesson lab where you can finally generate:\nProject name: “MDB_EDU”\nCluster name: “myAtlasClusterEDU”\nDatabase user: “myAtlasDBUser”\nPassword: Left blank\nPermissions: “readWriteAnyDatabase,dbAdminAnyDatabase”And this was created automatically by pressing “next” you don’t actually type in any CLI commands to build this.@Jason_TranHi Brock - Can you elaborate on what you are trying to do in terms of the lab and which lab(s) you are referring to?Some lessons are auto loading the sample data while saying you need to upload it, others are giving you statements saying you need to install 9 of the sample data sets (there are 8 for options) and others saying the sample data is preloaded didn’t have sample data loaded.If there a specific page or step that you believe “auto loading the sample data” then please advise of these as well.Regards,\nJasonI’m referring to the labs in the image.\n\nScreenshot 2023-04-19 at 7.05.08 PM2794×1438 311 KB\nGet started was a copy and paste, as typing in the username didn’t work and then it auto generated its own username and populated date on its own. After saying you were supposed to do it.The Next one it populated the labs and everything by just clicking next, and made a project with the M0, stated sample data was already going to be loaded but also said there were 9 collections that would be present, there was no sample data loaded and only 8 sample datas available with the 9th you had to make yourself.The 1 hour unit also only took 15 minutes.",
"username": "Brock"
},
{
"code": "",
"text": "Hi Brock - Can you elaborate on what you are trying to do in terms of the lab and which lab(s) you are referring to?Some lessons are auto loading the sample data while saying you need to upload it, others are giving you statements saying you need to install 9 of the sample data sets (there are 8 for options) and others saying the sample data is preloaded didn’t have sample data loaded.If there a specific page or step that you believe “auto loading the sample data” then please advise of these as well.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @BrockIt looks like there are multiple different issues at play here, in order to help you with the lab-related issues can you email the subject and the link to this post to [email protected] that will allow us, specifically the labs team, to help more precisely focus on your problems and on the suggestions you have made.Apologies for the run around on this, the labs team do not monitor or support in this forum in an active fashion so to get the right help around the lab questions/suggestions, please email [email protected] you and appreciate your patience in taking this to another ticketing system,\nEoin",
"username": "Eoin_Brazil1"
},
{
"code": "",
"text": "Hi Eoin,I’ll send a comprehensive list after I finish all the courses in the path, I’ll take notes as I go along.",
"username": "Brock"
}
] | Free Tier is broken.... 326MB too much for 512MB deployment | 2023-04-19T23:57:19.068Z | Free Tier is broken…. 326MB too much for 512MB deployment | 492 |
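A hedged sketch of the "dump only the required namespaces" option mentioned above, copying a single sample collection from a cluster that already has the sample data into the space-constrained M0. The URIs, credentials and the chosen namespace are placeholders:

```sh
# Dump one collection from a cluster where the sample data set is already loaded.
mongodump --uri="mongodb+srv://user:PASSWORD@source-cluster.xxxxx.mongodb.net" \
  --db=sample_mflix --collection=movies --out=./dump

# Restore only that namespace into the M0 cluster, staying well under the 512MB cap.
mongorestore --uri="mongodb+srv://user:PASSWORD@target-cluster.xxxxx.mongodb.net" \
  --nsInclude="sample_mflix.movies" ./dump
```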
|
null | [] | [
{
"code": "",
"text": "Hello,I did a migration of Mongodb 4.4 version from on premise linux to AWS server ( mngodb version 6).\nI notice the size of index is 3 times on the AWS environment. Is this expected or there are some configuration required on AWS ?Thanks.",
"username": "Shilpi_Pandey"
},
{
"code": "",
"text": "@Shilpi_Pandey It is possible that the size of the index increased during the migration from on-premise Linux to AWS, but it’s hard to say without more information. Here are some possible reasons why the index size increased:To investigate further, you could try comparing the configurations and hardware settings between the two environments and see if there are any significant differences. You could also try running the same indexing process on both environments and compare the index sizes to see if there is a noticeable difference.\nOverall, it’s not necessarily a problem if the index size increases, as long as it doesn’t cause any performance issues or storage capacity concerns.",
"username": "Deepak_Kumar16"
},
{
"code": "",
"text": "How can a slower CPU or less available memory may produce a larger index size?The data in the AWS environment may be different from the on-premise Linux environment, which could affect the index size. For example, if there are more unique values in the index keys or if the data is more sparse, it could result in a larger index size.It is a data migration, the data should be the same and the indexes should be the same. I am sure that if they were not, the original author would not have complained and ask why the size is different. It looks again like a detail missed by ChatGPT.ou could also try running the same indexing process on both environments and compare the index sizes to see if there is a noticeable difference.This is what he did and he did notice a difference, that is why he posted.",
"username": "steevej"
},
{
"code": "",
"text": "Yeah, Mongodb documentation does not mention that the size of the index will be larger on the higher Mongodb version or on AWS.",
"username": "Shilpi_Pandey"
}
] | Size of index large on AWS | 2023-03-09T20:31:54.285Z | Size of index large on AWS | 916 |
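When comparing deployments like this, it helps to measure index sizes per collection on both sides before drawing conclusions. A small mongosh sketch; the collection name is a placeholder:

```javascript
// Per-index sizes in bytes, plus the total, for one collection.
const stats = db.mycollection.stats();
printjson(stats.indexSizes);
print("totalIndexSize:", stats.totalIndexSize);

// Or loop over every collection in the current database.
db.getCollectionNames().forEach(function (name) {
  const s = db.getCollection(name).stats();
  print(name, "totalIndexSize:", s.totalIndexSize);
});
```

If the totals differ substantially for identical data, comparing db.collection.getIndexes() output on both deployments will show whether extra or differently-defined indexes were created during the migration.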
null | [
"app-services-data-access"
] | [
{
"code": "",
"text": "Hey I have problem with realm rules when using realm flex sync.My document access are based on property « OfficeId » in each document.I have Office collection who store all office.I have user collection with array of office object wich contains “role” into the office and “officeId”When user join one office, by function we had office into array of object of user which include role for this office and officeId.We would like based our realm rules on this array of objectexemple ( not working ^^)officeId: $in: user.custom_data.offices.officeIdBut after 2 day of test I don’t find anything to do this and I don’t want change my database model for tech limitation …Thx",
"username": "Maxime_Boceno"
},
{
"code": "{\n \"offices\": [\n {\n \"officeId\": 1\n },\n {\n \"officeId\": 2\n }\n ]\n}\n$in{\n \"offices\": [1, 2]\n}\n",
"text": "Hi @Maxime_Boceno,I just want to confirm that I’m understanding your data model correctly, it sounds like your custom user data documents look something like this?If so, then you’re correct that your current permissioning scheme will not work since the $in operator expects an array, so you’ll need to flatten the field to be an array of IDs instead, like so:",
"username": "Kiro_Morkos"
},
{
"code": "{\n \"offices\": [1, 2]\n}\nimport Realm from 'realm';\n\n// Define the User schema\nconst UserSchema = {\n name: 'User',\n properties: {\n _id: 'objectId',\n offices: 'int[]',\n },\n};\n\n// Open a Realm with the User schema\nconst realm = await Realm.open({\n schema: [UserSchema],\n sync: {\n user: user,\n partitionValue: 'somePartitionValue',\n existingRealmFileBehavior: 'openLocal',\n },\n});\n\n// Update a user's offices array\nrealm.write(() => {\n const user = realm.objectForPrimaryKey('User', 'someUserId');\n user.offices = [1, 2];\n});\n\n// Query for all documents where OfficeId is in the user's offices array\nconst results = realm.objects('SomeCollection').filtered('OfficeId IN $0', [1, 2]);\n// Define your Office class to match the structure of your Office documents\npublic class Office : RealmObject\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; }\n\n [MapTo(\"officeId\")]\n public int OfficeId { get; set; }\n\n [MapTo(\"name\")]\n public string Name { get; set; }\n}\n\n// Create a MongoDB Realm client\nvar realmApp = new App(new AppConfiguration\n{\n AppId = \"my-app-id\",\n JwtAuthProvider = new JwtAuthProviderClient(JwtAuthProviderClient.DefaultScheme, () => \"my-jwt-token\")\n});\n\n// Log in to the app using username/password\nvar user = await realmApp.LogInAsync(new Credentials.UsernamePassword(\"[email protected]\", \"password\"));\n\n// Get the user's custom data\nvar customUserData = user.GetCustomData();\n\n// Get the array of office IDs from the user's custom data\nvar officeIds = customUserData[\"offices\"].AsBsonArray.Select(office => office[\"officeId\"].ToInt32());\n\n// Configure flexible sync\nvar syncConfig = new SyncConfiguration(\"my-partition-value\")\n{\n BsonSchema = RealmSchemas.Register<Office>()\n};\nvar syncClient = realmApp.GetSyncClient(user);\nvar syncSession = await syncClient.StartSessionAsync();\nvar syncRealm = await syncClient.OpenRealmAsync(syncConfig);\nvar officeCollection = syncRealm.GetCollection<Office>();\n\n// Sync the Office collection\nvar query = officeCollection.Where(office => officeIds.Contains(office.OfficeId));\nvar subscription = query.Subscribe();\nvar result = await subscription.WaitForCompletionAsync();\n\n// Handle errors\nif (result.ErrorCode == ErrorCode.SessionClosed) \n{\n // Handle session closed\n}\nelse if (result.ErrorCode == ErrorCode.ClientReset) \n{\n var clientReset = result.ClientReset;\n // Handle the client reset\n if (clientReset.InitiatedBy == ClientReset.InitiatedByServer)\n {\n // Delete any unsynced changes\n var unsyncedChanges = clientReset.GetRecoveredChanges<Office>();\n foreach (var change in unsyncedChanges)\n {\n officeCollection.Write(() =>\n {\n if (change.OperationType == ChangeOperationType.Delete)\n {\n var obj = officeCollection.Find(change.DocumentKey);\n obj?.Delete();\n }\n else\n {\n var bsonDoc = change.FullDocument;\n officeCollection.Add(bsonDoc.ToObject<Office>());\n }\n });\n }\n // Restart the sync\n await syncSession.EndSessionAsync();\n syncRealm.Dispose();\n syncRealm = await syncClient.OpenRealmAsync(syncConfig);\n officeCollection = syncRealm.GetCollection<Office>();\n query = officeCollection.Where(office => officeIds.Contains(office.OfficeId));\n subscription = query.Subscribe();\n result = await subscription.WaitForCompletionAsync();\n }\n}\n\n// Access the synced Office documents\nvar offices = officeCollection.ToList();\n\n// End the sync session\nawait 
syncSession.EndSessionAsync();\nSessionClosedClientResetClientResettry...catchofficeCollection",
"text": "@Maxime_BocenoI’m rewriting this in C# right now, but here’s the solution to this in React Native:Your custom data documents contain an array of office objects, each with an “officeId” property, and to my understanding the goal is to use this array to determine document access in Realm Flex Sync. The current attempt to use “$in: user.custom_data.offices.officeId” as a Realm rule is not working because the “$in” operator expects an array of IDs.The suggested solution is to flatten the “offices” field to be an array of IDs instead of an array of objects. This can be done by changing the user data model to look like this:To implement this solution in React Native for iOS, you can use the Realm SDK for React Native to read and update user data. Here is some example code:This code opens a Realm with a User schema that includes an “offices” field of type “int” (an array of integers). It then updates a user’s “offices” array and queries for all documents in “SomeCollection” where the “OfficeId” field is in the user’s “offices” array.I’ll update in about a half hour or so with another explanation of how to implement this in C# to give you a more clear idea of how this works from different angles.Here’s the update with csharp, I kept screwing up the error handling as it wouldn’t compile right because my IDE kept trying to pull from another project I have running.NOTE: The updated code checks for the SessionClosed error in addition to the ClientReset error. In the case of a ClientReset, the code now deletes any unsynced changes and restarts the sync. The updated code also uses a try...catch block around the code that accesses the synced officeCollection to handle any exceptions that might occur.",
"username": "Brock"
},
{
"code": "",
"text": "I discovered an issue while building a C# app with MAUI and other services that use ML.Net. The ML Pipelines in ML.Net seem to call something in the Realm SDK, which causes compilation and launch errors. While the code looks good, there appears to be a conflict between Realm’s CRL and error handlers and ML.Net. I am not sure if this is a problem with ML Net or the Realm SDK, but it’s an interesting finding to keep in mind.ML dot Net even tries to run the self healing, but Realm SDK or it just fight each other. It was really odd, but anyways the above code should work.",
"username": "Brock"
},
{
"code": "",
"text": "Hello @Maxime_Boceno,Were you able to find a solution to your issue? Once you confirm, I will mark the topic as resolved.Cheers,\nHenna",
"username": "henna.s"
}
] | Realm Flex sync Rules Problem | 2023-04-04T07:36:03.098Z | Realm Flex sync Rules Problem | 1,155 |
null | [
"compass",
"database-tools"
] | [
{
"code": "",
"text": "I want to import a csv file to the mongodb but it has many fields that I dont need (70). So how can I import fields that i need via mongoimport. Compass has this functionality but I need to do it through mongoimport.",
"username": "Murod_Tursunaliev"
},
{
"code": "",
"text": "Look at jq it might help you.An alternative is to mongoimport the whole file into a temporary collection, then write an aggregation to $project the fields you want, then $out the result into the final collection.",
"username": "steevej"
}
] | Import specified fields to db collection via mongoimport | 2023-04-20T07:52:00.188Z | Import specified fields to db collection via mongoimport | 683 |
null | [] | [
{
"code": "",
"text": "Hi everyone,My name is Corrin Morgan and I am a graduate student at the University of California, Berkeley. I am currently working on a research project about virtual learning platforms like MongoDB University. The goal is to understand what users like and dislike about these offerings. Your feedback, as MongoDB University users, is valuable to me. If you have time, please complete this five-minute survey: https://forms.gle/6rTnHrgsGjigmrX66There is an opportunity to get paid! Consenting survey participants who are selected for additional interviewing will be compensated. For questions, please feel free to respond to this post.Thank you for your time,\n-Corrin",
"username": "Corrin_Morgan"
},
{
"code": "",
"text": "Hi Corren, I just have done the reserach! ",
"username": "Denis_Kolling"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Research Survey: MongoDB University Feedback w/ Paid Opportunity | 2023-04-06T19:36:34.908Z | Research Survey: MongoDB University Feedback w/ Paid Opportunity | 1,056 |
null | [
"containers",
"devops"
] | [
{
"code": "",
"text": "Hello,I have a DevOps infrastructure for a few of my projects at work, which work as a automated version release/deployment.During the process of the pipeline (building and deploying the project), the project is built into a docker image. Based on MongoDB:4.2 docker image (Link: MongoDB:4.2 docker imageand during the set-up of the docker image (AKA the dockerfile commands) there is a failure during the apt-get update process. I receive the following error:#5 23.68 W: GPG error: MongoDB Repositories bionic/mongodb-org/4.2 Release: The following signatures were invalid: EXPKEYSIG 4B7C549A058F8B6B MongoDB 4.2 Release Signing Key [email protected] tried allowing unauthorized connections with --allow-unauthenticated in the apt-get command, but still the same error.I read somewhere this might be on MongoDB’s side and they let the signature expire, and they need to fix this, but I’m unsure whether or not this is correct. (I’m starting to believe it because I’ve tried so many different potential solutions, but nothing works)Best regards,\nMat",
"username": "Matan_Cohen"
},
{
"code": "RUN mv /etc/apt/sources.list.d/mongodb-org.list /tmp/mongodb-org.list && \\\n apt-get update && \\\n apt-get install -y curl && \\\n curl -o /etc/apt/keyrings/mongodb.gpg https://pgp.mongodb.com/server-4.2.pub && \\\n mv /tmp/mongodb-org.list /etc/apt/sources.list.d/mongodb-org.list;\nRUN apt-get update \n",
"text": "I ran into this today. It does look like the GPG key expired on the 17th. The devs seem to have updated the key on their pgp site but not in the Docker image. I was able to get around the issue by adding the following lines to a Dockerfile based on the 4.2 image:",
"username": "Logan_Lembke"
},
{
"code": "apt-update sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 4B7C549A058F8B6B\n",
"text": "You can use the following command to fix the issue for the Invalid signature. Place it before apt-update command:It solved the problem for me. Had the same issue.",
"username": "Yash_Thakur"
}
] | MongoDB:4.2 docker image failing to apt-get update due to expired signature | 2023-04-19T05:19:33.381Z | MongoDB:4.2 docker image failing to apt-get update due to expired signature | 4,977 |
null | [
"mongodb-shell"
] | [
{
"code": "mongoshclone(t={}){const r=t.loc||{};return e({loc:new Position(\"line\"in r?r.line:this.loc.line,\"column\"in r?r.column:...<omitted>...)} could not be cloned.\ndb.createUser(user: \"api_user\", pwd: \"replacement password\",roles: \"dbAdmin\")\n",
"text": "When I try to create a user in mongosh I get the following error:with the following command:What am I doing wrong?",
"username": "Cezar"
},
{
"code": "db.createUser(user: “api_user”, pwd: “replacement password”,roles: “dbAdmin”)\n{}[]rolesdb.createUser({user: \"api_user\", pwd: \"replacement password\",roles: [\"dbAdmin\"]})\n",
"text": "Hello @Cezar,Welcome to the MongoDB Community forums What am I doing wrong?You are missing the curly braces {} and square brackets [] in the roles section of your command.Please refer to the db.createUser() documentation to learn more.Let us know if you have any other queries.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Cannot create a database user in mongosh | 2023-04-20T10:12:56.441Z | Cannot create a database user in mongosh | 896 |
null | [
"queries",
"java"
] | [
{
"code": "",
"text": "every time retrieve from database.String \"Hello: \" is stored & retrieved from DB it will changed to “Hello:”\nSpace character changing to  this circumflex accent.whenever retrieving the data from DB it will appear like 2X times “Hello:”please suggest some solutions",
"username": "Mr_J"
},
{
"code": "",
"text": "Looks like a bug in your code.Please share your code that store and retrieve.",
"username": "steevej"
}
] | Â characters appearing in database values & increase 2x time everytime retrieve from DB | 2023-04-20T10:25:18.302Z | Â characters appearing in database values & increase 2x time everytime retrieve from DB | 399 |
null | [
"python",
"connecting"
] | [
{
"code": "pymongo.errors.OperationFailure where the user account is not found.\npymongo.errors.ServerSelectionTimeoutError:\npymongo.errors.NetworkTimeout: SSL handshake failed:\npymongo.errors.NetworkTimeout: SSL handshake failed:\npymongo.errors.NetworkTimeout: SSL handshake failed:\npymongo.errors.ServerSelectionTimeoutError: SSL handshake failed:\npymongo.MongoClient(<connection_string>,\n authMechanism='SCRAM-SHA-1',\n connectTimeoutMS = 4000,\n serverSelectionTimeoutMS = 4000,\n socketTimeoutMS = 4000,\n appname = '<app_name>',\n retryWrites = True,\n retryReads = True,\n readPreference = 'secondaryPreferred',\n #tlsDisableOCSPEndpointCheck = True, ##Try deploying with this also.\n)\n",
"text": "Hello,We have a flask based server which interacts with the Atlas cluster. we have noticed that the server keeps going down again and again at random intervals.After investigation, we noticed that we are getting multiple ‘AtlasError’ in our logs. We have various types of pymongo.errors.And various similar issues.Please note that once we rebuild the environment and reset everything the connection is established and then again after sometime these errors start occurring and the system goes down.Driver details :\nHave tested this with pymongo 3.7.x, 3.11.x, 3.12.xUsing this with client configurations as follows:In this, we have tried different timeout values ranging from 5-30 secs. And we have recently put out a version where we have succumbed to disabling OCSP Endpoint checks (which we wouldn’t like to continue with since its a huge security risk - we have done this for the time being to see if it keeps the server up for longer).Regards,\nShubham",
"username": "Shubham_Sharma6"
},
{
"code": "pip list",
"text": "Hi thanks for opening this issue. Nothing stands out yet as an obvious culprit to me. Please provide the full traceback and full error message for the various exceptions you are experiencing. You can redact and sensitive info like host names or user names.Please also provide the output of pip list so we can determine if you are using PyOpenSSL or not.Can you also describe what server product (atlas dedicated, atlas free tier) and version (MongoDB 4.0, 5.0, etc) you are using?",
"username": "Shane"
},
{
"code": "",
"text": "Hey Shane,Thanks for the response.Specifics logs related to issues are:[Mon Sep 20 15:00:13.672604 2021] [:error] [pid 6534] [remote <remote_ip>] [2021-09-20 15:00:13,656] ERROR in app: Exception on /wi/api/v1/gettacurrent/ [GET]\n[Mon Sep 20 15:00:13.672623 2021] [:error] [pid 6534] [remote <remote_ip>] Traceback (most recent call last):\n[Mon Sep 20 15:00:13.672627 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/current/app/app/init.py”, line 193, in before_request\n[Mon Sep 20 15:00:13.672630 2021] [:error] [pid 6534] [remote <remote_ip>] sec_check = client_sec.admin.command(‘ping’)\n[Mon Sep 20 15:00:13.672633 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/database.py”, line 752, in command\n[Mon Sep 20 15:00:13.672636 2021] [:error] [pid 6534] [remote <remote_ip>] read_preference, session) as (sock_info, secondary_ok):\n[Mon Sep 20 15:00:13.672639 2021] [:error] [pid 6534] [remote <remote_ip>] File “/usr/lib64/python3.6/contextlib.py”, line 81, in enter\n[Mon Sep 20 15:00:13.672642 2021] [:error] [pid 6534] [remote <remote_ip>] return next(self.gen)\n[Mon Sep 20 15:00:13.672645 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/mongo_client.py”, line 1387, in _socket_for_reads\n[Mon Sep 20 15:00:13.672648 2021] [:error] [pid 6534] [remote <remote_ip>] server = self._select_server(read_preference, session)\n[Mon Sep 20 15:00:13.672651 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/mongo_client.py”, line 1346, in _select_server\n[Mon Sep 20 15:00:13.672654 2021] [:error] [pid 6534] [remote <remote_ip>] server = topology.select_server(server_selector)\n[Mon Sep 20 15:00:13.672657 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/topology.py”, line 246, in select_server\n[Mon Sep 20 15:00:13.672660 2021] [:error] [pid 6534] [remote <remote_ip>] address))\n[Mon Sep 20 15:00:13.672663 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/topology.py”, line 203, in select_servers\n[Mon Sep 20 15:00:13.672665 2021] [:error] [pid 6534] [remote <remote_ip>] selector, server_timeout, address)\n[Mon Sep 20 15:00:13.672668 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/topology.py”, line 220, in _select_servers_loop\n[Mon Sep 20 15:00:13.672671 2021] [:error] [pid 6534] [remote <remote_ip>] (self._error_message(selector), timeout, self.description))\n[Mon Sep 20 15:00:13.672675 2021] [:error] [pid 6534] [remote <remote_ip>] pymongo.errors.ServerSelectionTimeoutError: \n[Mon Sep 20 15:08:11.630506 2021] [:error] [pid 6534] close and retry db connections.\n[Mon Sep 20 15:08:11.715236 2021] [:error] [pid 6534] [remote <remote_ip>] [2021-09-20 15:08:03,121] ERROR in app: Request finalizing failed with an error while handling an error\n[Mon Sep 20 15:08:11.715260 2021] [:error] [pid 6534] [remote <remote_ip>] Traceback (most recent call last):\n[Mon Sep 20 15:08:11.715263 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib/python3.6/site-packages/flask/app.py”, line 2292, in wsgi_app\n[Mon Sep 20 15:08:11.715267 2021] [:error] [pid 6534] [remote <remote_ip>] response = self.full_dispatch_request()\n[Mon Sep 20 15:08:11.715270 2021] [:error] 
[pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib/python3.6/site-packages/flask/app.py”, line 1815, in full_dispatch_request\n[Mon Sep 20 15:08:11.715273 2021] [:error] [pid 6534] [remote <remote_ip>] rv = self.handle_user_exception(e)\n[Mon Sep 20 15:08:11.715276 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib/python3.6/site-packages/flask/app.py”, line 1718, in handle_user_exception\n[Mon Sep 20 15:08:11.715279 2021] [:error] [pid 6534] [remote <remote_ip>] reraise(exc_type, exc_value, tb)\n[Mon Sep 20 15:08:11.715282 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib/python3.6/site-packages/flask/_compat.py”, line 35, in reraise\n[Mon Sep 20 15:08:11.715285 2021] [:error] [pid 6534] [remote <remote_ip>] raise value\n[Mon Sep 20 15:08:11.715288 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib/python3.6/site-packages/flask/app.py”, line 1813, in full_dispatch_request\n[Mon Sep 20 15:08:11.715291 2021] [:error] [pid 6534] [remote <remote_ip>] rv = self.dispatch_request()\n[Mon Sep 20 15:08:11.715294 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib/python3.6/site-packages/flask/app.py”, line 1799, in dispatch_request\n[Mon Sep 20 15:08:11.715297 2021] [:error] [pid 6534] [remote <remote_ip>] return self.view_functionsrule.endpoint\n[Mon Sep 20 15:08:11.715299 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib/python3.6/site-packages/flask_httpauth.py”, line 99, in decorated\n[Mon Sep 20 15:08:11.715302 2021] [:error] [pid 6534] [remote <remote_ip>] if not self.authenticate(auth, password):\n[Mon Sep 20 15:08:11.715305 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib/python3.6/site-packages/flask_httpauth.py”, line 136, in authenticate\n[Mon Sep 20 15:08:11.715308 2021] [:error] [pid 6534] [remote <remote_ip>] return self.verify_password_callback(username, client_password)\n[Mon Sep 20 15:08:11.715311 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/current/app/app/auth.py”, line 281, in verify_pwd_2\n[Mon Sep 20 15:08:11.715313 2021] [:error] [pid 6534] [remote <remote_ip>] doc = db_sec.our_user.find_one({‘mn’:mn},{‘pwd_hash’:1, ‘email’:1})\n[Mon Sep 20 15:08:11.715316 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/collection.py”, line 1328, in find_one\n[Mon Sep 20 15:08:11.715319 2021] [:error] [pid 6534] [remote <remote_ip>] for result in cursor.limit(-1):\n[Mon Sep 20 15:08:11.715322 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/cursor.py”, line 1238, in next\n[Mon Sep 20 15:08:11.715324 2021] [:error] [pid 6534] [remote <remote_ip>] if len(self.__data) or self._refresh():\n[Mon Sep 20 15:08:11.715327 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/cursor.py”, line 1155, in _refresh\n[Mon Sep 20 15:08:11.715341 2021] [:error] [pid 6534] [remote <remote_ip>] self.__send_message(q)\n[Mon Sep 20 15:08:11.715344 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/cursor.py”, line 1045, in __send_message\n[Mon Sep 20 15:08:11.715347 2021] [:error] [pid 6534] [remote <remote_ip>] operation, self._unpack_response, address=self.__address)\n[Mon Sep 20 15:08:11.715350 2021] [:error] 
[pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/mongo_client.py”, line 1426, in _run_operation\n[Mon Sep 20 15:08:11.715353 2021] [:error] [pid 6534] [remote <remote_ip>] address=address, retryable=isinstance(operation, message._Query))\n[Mon Sep 20 15:08:11.715355 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/mongo_client.py”, line 1515, in _retryable_read\n[Mon Sep 20 15:08:11.715358 2021] [:error] [pid 6534] [remote <remote_ip>] read_pref, session, address=address)\n[Mon Sep 20 15:08:11.715361 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/mongo_client.py”, line 1346, in _select_server\n[Mon Sep 20 15:08:11.715364 2021] [:error] [pid 6534] [remote <remote_ip>] server = topology.select_server(server_selector)\n[Mon Sep 20 15:08:11.715367 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/topology.py”, line 246, in select_server\n[Mon Sep 20 15:08:11.715369 2021] [:error] [pid 6534] [remote <remote_ip>] address))\n[Mon Sep 20 15:08:11.715372 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/topology.py”, line 203, in select_servers\n[Mon Sep 20 15:08:11.715375 2021] [:error] [pid 6534] [remote <remote_ip>] selector, server_timeout, address)\n[Mon Sep 20 15:08:11.715378 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/topology.py”, line 220, in _select_servers_loop\n[Mon Sep 20 15:08:11.715381 2021] [:error] [pid 6534] [remote <remote_ip>] (self._error_message(selector), timeout, self.description))\n[Mon Sep 20 15:08:11.715385 2021] [:error] [pid 6534] [remote <remote_ip>] pymongo.errors.ServerSelectionTimeoutError: SSL handshake failed:\n[Mon Sep 20 15:08:11.715391 2021] [:error] [pid 6534] [remote <remote_ip>]\n[Mon Sep 20 15:08:11.715393 2021] [:error] [pid 6534] [remote <remote_ip>] During handling of the above exception, another exception occurred:\n[Mon Sep 20 15:08:11.715396 2021] [:error] [pid 6534] [remote <remote_ip>]\n[Mon Sep 20 15:08:11.715399 2021] [:error] [pid 6534] [remote <remote_ip>] Traceback (most recent call last):\n[Mon Sep 20 15:08:11.715404 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/pool.py”, line 1394, in _get_socket\n[Mon Sep 20 15:08:11.715407 2021] [:error] [pid 6534] [remote <remote_ip>] sock_info = self.sockets.popleft()\n[Mon Sep 20 15:08:11.715410 2021] [:error] [pid 6534] [remote <remote_ip>] IndexError: pop from an empty deque\n[Mon Sep 20 15:08:11.715413 2021] [:error] [pid 6534] [remote <remote_ip>]\n[Mon Sep 20 15:08:11.715415 2021] [:error] [pid 6534] [remote <remote_ip>] During handling of the above exception, another exception occurred:\n[Mon Sep 20 15:08:11.715418 2021] [:error] [pid 6534] [remote <remote_ip>]\n[Mon Sep 20 15:08:11.715420 2021] [:error] [pid 6534] [remote <remote_ip>] Traceback (most recent call last):\n[Mon Sep 20 15:08:11.715423 2021] [:error] [pid 6534] [remote <remote_ip>] File “/opt/python/run/venv/local/lib64/python3.6/site-packages/pymongo/pool.py”, line 1040, in _configured_socket\n[Mon Sep 20 15:08:11.715426 2021] [:error] [pid 6534] [remote <remote_ip>] sock = ssl_context.wrap_socket(sock, server_hostname=host)\n[Mon Sep 20 15:08:11.715429 2021] [:error] 
[pid 6534] [remote <remote_ip>] File “/usr/lib64/python3.6/ssl.py”, line 407, in wrap_socket\n[Mon Sep 20 15:08:11.715432 2021] [:error] [pid 6534] [remote <remote_ip>] _context=self, _session=session)\n[Mon Sep 20 15:08:11.715434 2021] [:error] [pid 6534] [remote <remote_ip>] File “/usr/lib64/python3.6/ssl.py”, line 817, in init\n[Mon Sep 20 15:08:11.715437 2021] [:error] [pid 6534] [remote <remote_ip>] self.do_handshake()\n[Mon Sep 20 15:08:11.715440 2021] [:error] [pid 6534] [remote <remote_ip>] File “/usr/lib64/python3.6/ssl.py”, line 1077, in do_handshake\n[Mon Sep 20 15:08:11.715442 2021] [:error] [pid 6534] [remote <remote_ip>] self._sslobj.do_handshake()\n[Mon Sep 20 15:08:11.715445 2021] [:error] [pid 6534] [remote <remote_ip>] File “/usr/lib64/python3.6/ssl.py”, line 689, in do_handshake\n[Mon Sep 20 15:08:11.715448 2021] [:error] [pid 6534] [remote <remote_ip>] self._sslobj.do_handshake()\n[Mon Sep 20 15:08:11.715451 2021] [:error] [pid 6534] [remote <remote_ip>] socket.timeout: _ssl.c:835: The handshake operation timed out\n[Mon Sep 20 15:08:11.715453 2021] [:error] [pid 6534] [remote <remote_ip>]Also, we have noticed that corresponding to these errors occurring on application end - the mongodb logs show -{“t”:{\"$date\":“2021-09-20T15:19:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}{“t”:{\"$date\":“2021-09-20T15:12:00.827+00:00”},“s”:“I”, “c”:“NETWORK”, “id”:22989, “ctx”:“conn10652”,“msg”:“Error sending response to client. Ending connection from remote”,“attr”:{“error”:{“code”:6,“codeName”:“HostUnreachable”,“errmsg”:“Connection reset by peer”},“remote”:\":\",“connectionId”:10652}}{“t”:{\"$date\":“2021-09-20T15:11:54.820+00:00”},“s”:“I”, “c”:“CONNPOOL”, “id”:22572, “ctx”:“MirrorMaestro”,“msg”:“Dropping all pooled connections”,“attr”:{“hostAndPort”:“host:port”,“error”:“ShutdownInProgress: Pool for host:port has expired.”}Also, noticed repeated OCSP stapling errors in the mongod logs of primary of the cluster in question:Line 5673: {“t”:{\"$date\":“2021-09-20T14:44:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 5673: {“t”:{\"$date\":“2021-09-20T14:44:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 5799: {“t”:{\"$date\":“2021-09-20T14:49:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. 
error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 5799: {“t”:{\"$date\":“2021-09-20T14:49:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 5799: {“t”:{\"$date\":“2021-09-20T14:49:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 5967: {“t”:{\"$date\":“2021-09-20T14:54:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 5967: {“t”:{\"$date\":“2021-09-20T14:54:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 5967: {“t”:{\"$date\":“2021-09-20T14:54:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 6157: {“t”:{\"$date\":“2021-09-20T14:59:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 6157: {“t”:{\"$date\":“2021-09-20T14:59:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 6157: {“t”:{\"$date\":“2021-09-20T14:59:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. 
error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 6497: {“t”:{\"$date\":“2021-09-20T15:04:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 6497: {“t”:{\"$date\":“2021-09-20T15:04:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 6497: {“t”:{\"$date\":“2021-09-20T15:04:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 6868: {“t”:{\"$date\":“2021-09-20T15:09:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 6868: {“t”:{\"$date\":“2021-09-20T15:09:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 6868: {“t”:{\"$date\":“2021-09-20T15:09:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7211: {“t”:{\"$date\":“2021-09-20T15:14:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7211: {“t”:{\"$date\":“2021-09-20T15:14:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. 
error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7211: {“t”:{\"$date\":“2021-09-20T15:14:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7356: {“t”:{\"$date\":“2021-09-20T15:19:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7356: {“t”:{\"$date\":“2021-09-20T15:19:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7356: {“t”:{\"$date\":“2021-09-20T15:19:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7509: {“t”:{\"$date\":“2021-09-20T15:24:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7509: {“t”:{\"$date\":“2021-09-20T15:24:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7509: {“t”:{\"$date\":“2021-09-20T15:24:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7673: {“t”:{\"$date\":“2021-09-20T15:29:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. 
error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7673: {“t”:{\"$date\":“2021-09-20T15:29:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}\nLine 7673: {“t”:{\"$date\":“2021-09-20T15:29:41.053+00:00”},“s”:“W”, “c”:“NETWORK”, “id”:5512201, “ctx”:“OCSP Fetch and Staple”,“msg”:“Server was unable to staple OCSP Response”,“attr”:{“reason”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL peer certificate revocation status checking failed: Could not verify X509 certificate store for OCSP Stapling. error: 00000000:lib(0):func(0):reason(0)”}}}Output from pip list:Package Versionboto3 1.18.44\nbotocore 1.21.44\ncertifi 2021.5.30\nchardet 3.0.4\nclick 6.7\ndocutils 0.17.1\nFlask 1.0.2\nFlask-HTTPAuth 3.2.4\nFlask-Limiter 1.2.1\nidna 2.8\nitsdangerous 0.24\nJinja2 2.10\njmespath 0.10.0\nlimits 1.5.1\nMarkupSafe 1.1.1\npasslib 1.7.1\npip 20.0.2\npkg-resources 0.0.0\npolyline 1.4.0\npycryptodome 3.9.8\npymongo 3.12.0\npython-dateutil 2.8.2\nredis 3.2.1\nrequests 2.22.0\ns3transfer 0.5.0\nsetuptools 44.0.0\nsix 1.16.0\nurllib3 1.25.11\nWerkzeug 0.14.1\nwheel 0.37.0I don’t see the PyOpenSSL package in this.We are using MongoDB Atlas M10 cluster. Initially this was an M0 where we noticed this issue. We thought this could be a network issue due to shared hardware so bumped it up to M10. The server version currently is 4.4.8Please let me know if you need any more details.",
"username": "Shubham_Sharma6"
}
] | SSL Hanshake Failure, User authentication Failure and a series of other failure | 2021-09-20T16:14:37.144Z | SSL Hanshake Failure, User authentication Failure and a series of other failure | 4,831 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "Hi all.If I had the following sample data:[\n{\nname: “John”,\ncity: “London”\n},\n{\nname: “Jason”,\ncity: “London”\n}\n]And ran the following query:db.collection.find({ $or: [ { name: “John” }, { city: “London” } ] })Would the $or operator be able to scan both of the following indexes or would it just pick one, best-matching index?{ name: 1 }\n{ city: 1 }Would I be better off using{ city: 1, name: 1 }For maximum efficiency here?Thanks in advance.",
"username": "Lewis_Dale"
},
{
"code": "$or",
"text": "Hey @Lewis_Dale,Does the index usage information listed on the $or operator documentation help answer your questions?Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Yes it does, I must have missed that. Thanks!",
"username": "Lewis_Dale"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can a query/aggregation with an $or operator scan multiple indexes? | 2023-04-20T07:20:24.915Z | Can a query/aggregation with an $or operator scan multiple indexes? | 373 |
null | [
"node-js",
"typescript"
] | [
{
"code": "",
"text": "Hi everyone, I’m Ezekiel from Warri, Nigeria. A full-stack developer specialized in using React and Node.js in providing solutions to businesses, both locally and internationally. I started using MongoDB about a year ago, and it’s been an amazing ride.\nI’m glad to be part of the community and look forward to learning from everyone.",
"username": "Ezekiel_Okpukoro1"
},
{
"code": "",
"text": "Welcome to the community Ezekiel Glad to hear learning MongoDB been going well for you. Feel free to ask questions, join discussions, and connect with other members here.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "@Jason_Tran Thanks for the warm welcome ",
"username": "Ezekiel_Okpukoro1"
},
{
"code": "",
"text": "Hello Ezekiel welcome aboard!",
"username": "Nestor_Daza"
}
] | Hey Everyone, Ezekiel from Warri, Nigeria | 2023-04-18T18:34:02.060Z | Hey Everyone, Ezekiel from Warri, Nigeria | 1,006 |
null | [
"aggregation",
"dot-net",
"replication"
] | [
{
"code": "var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>()\n .Match(change => change.DatabaseNamespace.DatabaseName.StartsWith(\"MyDb\"));\n",
"text": "I want to use Change Stream on the whole replica set, but filtered on database name.\nTried with a pipeline like (e.g.):But exception is thrown when calling MongoClient.Watch(pipeline):\nSystem.NotSupportedException: ‘Serializer for MongoDB.Driver.DatabaseNamespace must implement IBsonDocumentSerializer to be used with LINQ.’What can I do?",
"username": "Magne_Edvardsdal_Ryholt"
},
{
"code": "\"MyDb\"",
"text": "Hi @Magne_Edvardsdal_Ryholt and welcome to MongoDB community forums!!I want to use Change Stream on the whole replica set, but filtered on database name.Could you help me understand in more details on what you are trying to achieve from the above statement.\nFor example, are you trying to open a change stream on the \"MyDb\" database?You can refer to the documentation on working with Change streams in C# for more details.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thanks for your effort, I am going to have a session with MongoDB’s support service tomorrow and hopefully this issue will be cleared.I can add the result here in the forum if I think it is of interest to other users as well.",
"username": "Magne_Edvardsdal_Ryholt"
}
] | Exception when using Database name in pipeline match | 2023-04-17T14:28:22.529Z | Exception when using Database name in pipeline match | 777 |
[] | [
{
"code": "",
"text": "We’re using Azure Cosmos DB for MongoDB. When running a query we’re receiving an incorrect _id in the response.\n\nMongoId_Issue711×594 146 KB\n",
"username": "Brian_Murphy"
},
{
"code": "",
"text": "hi @Brian_Murphy ,Can you share query and few of your BsonDocuemnt from you database to understand the query result.",
"username": "Abhishek_bahadur_shrivastav"
},
{
"code": "var tempFromDate = BsonDateTime.Create(fromDate);\nvar tempfilter = Builders<BsonDocument>.Filter.Gte(\"CreatedDateTime\", tempFromDate);\nvar tempprojection = Builders<BsonDocument>.Projection.Include(\"CreatedDateTime\").Include(\"DecisionToken\");\nvar tempDocList = collection.Find<BsonDocument>(tempfilter).Project(tempprojection).ToList();\n",
"text": "Here is the code snippet.",
"username": "Brian_Murphy"
},
{
"code": "",
"text": "I can’t actually share the BsonDocuments since it’s an underwriting decision journal. Overall we’re running a query to return a list of some basic data including the _id. The end user can then select a specific journal and we will then use the _id to fetch the record from the database. Unfortunately the _id isn’t the same and therefore we can’t return the record they are looking for.",
"username": "Brian_Murphy"
},
{
"code": "",
"text": "The values of _id do not magically change even if you are using CosmosDB rather than the real MongoDB.You are doing find() so your criteria might match and return more than 1 document. I think you are simply looking at the wrong documents.And since you do not sort the orders in one application/view vs the other might differ.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Brian_MurphyIn addition to what @steevej mentioned, CosmosDB is a Microsoft product and is semi-compatible with a genuine MongoDB server. Hence, I cannot comment on how it works, or even know why it’s not behaving like a genuine MongoDB server.At the moment of this writing, CosmosDB currently passes only 33.51% of MongoDB server tests, so I would encourage you to engage CosmosDB support regarding this issue.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "I appreciate all of the feedback! We’ll open a ticket with Microsoft.",
"username": "Brian_Murphy"
}
] | Getting Incorrect _id returned in a query | 2023-04-19T18:03:33.645Z | Getting Incorrect _id returned in a query | 424 |
|
null | [
"schema-validation"
] | [
{
"code": "{\n \"title\": \"user\",\n \"type\": \"object\",\n \"required\": [\n \"firstName\",\n \"lastName\",\n \"email\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"firstName\": {\n \"bsonType\": \"string\",\n \"description\": \"First Name of the user\"\n },\n \"lastName\": {\n \"bsonType\": \"string\",\n \"description\": \"Last Name of the user\"\n },\n \"email\": {\n \"bsonType\": \"string\",\n \"description\": \"Email address of the user\"\n }\n },\n \"errorMessage\": {\n \"required\": {\n \"firstName\": \"Please enter your first name\",\n \"lastName\": \"Please enter your last name\",\n \"email\": \"Please enter your email address\"\n }\n }\n}\n",
"text": "I want to validate my schema using Atlas app services serverless function as I did with Mongoose and MongoDB.My Schema",
"username": "Shahzad_Safdar"
},
{
"code": "",
"text": "Hello @Shahzad_Safdar,Welcome to the MongoDB Community forums I want to validate my schema using Atlas app servicesI presume you want to validate that existing documents in the collection conform to the schema. In addition to this, you can also validate changes to documents by defining a validation expression in the schema.Please refer to the six steps procedure to Enforce a Schema in the Atlas App Services.I hope it helps. Let us know if you have any other queries.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hey @Kushagra_Kesav. Thank you. Yes I love the way that Atlas app services schema validation feature validate all my existing documents in the collection but I really want to validate my schema before inserting single document or multiples documents in my collection. Can I validate my atlas app services schema before insertion? If yes than how can I validate? Are App services have built in schema validation feature like mongoose?",
"username": "Shahzad_Safdar"
},
{
"code": "",
"text": "Hey, @Shahzad_Safdar,The Atlas App Services documentation link shared above states:Schemas let you require specific fields, control the type of a field’s value, and validate changes before committing write operations.In fact, all App Services data access (Data API, Graph QL, Functions, Sync) enforce that documents are valid according to schema before inserting or updating any data.Please refer to How App Services Enforces Schemas documentation to learn more.Let us know if you have any other queries.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "{\n \"errorMessage\": {\n \"required\": {\n \"email\": \"Please enter your email address\",\n \"firstName\": \"Please enter your first name\",\n \"lastName\": \"Please enter your last name\"\n }\n },\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"email\": {\n \"bsonType\": \"string\",\n \"description\": \"Email address of the user\"\n },\n \"firstName\": {\n \"bsonType\": \"string\",\n \"description\": \"First Name of the user\"\n },\n \"lastName\": {\n \"bsonType\": \"string\",\n \"description\": \"Last Name of the user\"\n }\n },\n \"required\": [\n \"firstName\",\n \"lastName\",\n \"email\"\n ],\n \"title\": \"user\",\n \"type\": \"object\"\n}\nexports = async function(payload, response){\nconst doc = {\nfirstName: 'Shahzad',\nlastName: 'Safdar'\n}\nconst myCollection = comtext.services.get('mongodb-atlas').db('db_name).collection('col_name');\nconst result = await myCollection.insertOne(doc);\n}\n",
"text": "Hey, @Kushagra_Kesav I have this schema in Atlas App Services users collection schemaNow I want to validate my document before insertion. This is my function codeNow In my schema I have set email field required but in my doc I am not passing email field but this method is insert doc into my database instead of giving error that email field is required so this is not validating my doc before insertion.\nAtlas app services schema only validate existing documents in my users collection. Please review this reply and I am waiting for your reply.",
"username": "Shahzad_Safdar"
},
{
"code": "",
"text": "Hi, I would be happy to help look into this. Do you mind sharing a link to the function in question? There are various settings at play that might be contributing to this and it might reduce the number of back-and-forths for me to just view them.",
"username": "Tyler_Kaye"
},
{
"code": "xports = async function(payload, response){\nconst doc = {\nfirstName: 'Shahzad',\nlastName: 'Safdar'\n}\nconst myCollection = comtext.services.get('mongodb-atlas').db('db_name).collection('col_name');\nconst result = await myCollection.insertOne(doc);\n}\n",
"text": "Am I need to copy the whole function code or just function link? This function I have written on atlas cloud so this will not allow any other user to view function code on my account before logging in. so I am pasting my function code here",
"username": "Shahzad_Safdar"
},
{
"code": "",
"text": "Hi, I was asking for the link / url in the UI when you are editing your function so that I can tie it to our internal logging system and see what might be going on. If you have either your URL, Project Id, or Application Id I can use any of those. Only MongoDB employees are able to see this URL.",
"username": "Tyler_Kaye"
}
] | Schema validation before insertion | 2023-04-13T09:32:45.156Z | Schema validation before insertion | 1,063 |
null | [
"queries"
] | [
{
"code": "",
"text": "I’m doing online archival from Atlas mongoDB using custom criteria. As per my requirement I’ve to change the query and conditions as my collection hold different schema of documents. I’d like to understand how the archived data will be deleted after certain days (for example 365 days) as I might have update the different custom criteria query.\nCould you please explain more on this topic?",
"username": "Veera_Kumar1"
},
{
"code": "",
"text": "Hey @Veera_Kumar1,Welcome to the MongoDB Community Forums! While setting up the online archive, you can specify how many days you want to store data in the online archive and a time window when you want Atlas to run the archiving job. Kindly refer to the documentation to read more about it: Managing an Online Archive\nDo note, that as of the time of this message, Atlas Online Archive data expiration is available as a Preview feature. The feature and the corresponding documentation may change at any time during the Preview stage.Additionally, not related to online archives and if it suits your use case, another option is using TTL Indexes which are special single-field indexes that MongoDB can use to automatically remove documents from a collection after a certain amount of time or at a specific clock time.Hope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Just to add onto this here @Veera_Kumar1 , I think the key is that the “Data Expiration” feature that deletes data from the archive does not depend on a field inside your documents, it depends on when the data was archived. So the clock starts the second that the data was archived, and it will stay in the archive for the number of days you indicate in the expiration rule based on metadata that we store about when data was archived.",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "Thanks Benjamin. Yes, this would be helpful.",
"username": "Veera_Kumar1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Online archival to delete documents | 2023-04-16T00:12:10.211Z | Online archival to delete documents | 708 |
null | [
"aggregation",
"node-js",
"performance"
] | [
{
"code": "",
"text": "Hi - I’ve recently completed M220 & M121. Excellent courses. I recommend both.I have several aggregation pipelines I’ve developed. In general, they are slow and one exceeds 100MB available storage when I configure the settings a certain way.I really like how you can run explain() on find queries and get the execution time in milliseconds. I’d like to do the same for aggregation, but I can’t make that work. Any suggestions?Thanks - Peter.",
"username": "Peter_Winer"
},
{
"code": "// Find\ndb.collection.find({ ... }).explain(<verbosity>);\n\n// Aggregate\ndb.collection.explain(<verbosity>).aggregate([ ... ]);\nfinddb.collection.explain(<verbosity>).find({ ... });\n<verbosity>queryPlannerexecutionStatsallPlansExecution",
"text": "Hi @Peter_Winer,You can run generate Explain Results for an aggregate operation using the mongo or mongosh shell as follows:Note that you can explain a find operation in the same fashion if you choose:The <verbosity> above indicates one of the three available verbosity modes: queryPlanner, executionStats and allPlansExecution.",
"username": "alexbevi"
},
{
"code": "executionStatsdb.coll.explain(\"executionStats\").aggregate( [ <your-pipeline-here> ])\n",
"text": "I have several aggregation pipelines I’ve developed. In general, they are slow and one exceeds 100MB available storage when I configure the settings a certain way.I’m not sure what you mean by that - do you mean 100MBs RAM usage? I explain how that works here.Unless your pipelines spill to disk or are writing an output file they shouldn’t be that slow. Can you give some examples of pipelines that are slow? If you are using MongoDB version 4.4 then explain with executionStats will show how long each stage took as well as how many documents entered each stage.Example:I walk through an example of this at the end of the talk I did for last year’s MDB.live event but unfortunately the resolution of the code didn’t turn out that good…Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Asya - Thank you for responding. My database is hosted on MongoDB Atlas with version 4.2.13 Enterprise. Clearly I should upgrade to 4.4. The stats you referenced will be very helpful.The most important collection is a set of roughly 1.4MM timed events, growing about 25K events per day. I’m implementing a dashboard that shows various views of the events, including time-based distributions.Users can configure the duration of the time series and how many divisions per day. The most fine grained option is 40 days of data, divided into 1-hour divisions, or 960 ‘buckets’ total.The $group stage is problematic. I use $addFields to give each event a ‘bucket number’, then group by bucket number. So, I am grouping the events into 960 separate buckets. I’m very interested to know the time spent on this stage, and looking forward to trying 4.4 to discover this.I wonder if using $bucket would be more efficient? I could calculate the bucket boundaries in JavaScript and compose the $bucket stage. Then I would use $bucket to assign each event to the proper bucket, based on the time stamp, which is indexed. Does that make sense?I would be happy to share one or more of my pipelines, if you would review them. LMK and I’ll send a couple of examples over. Best - Peter.",
"username": "Peter_Winer"
},
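A minimal mongosh sketch of the $bucket approach described in the post above, assuming an indexed timestamp field on a hypothetical events collection; the boundaries here are illustrative and would normally be generated in JavaScript from the configured window (e.g. 960 one-hour edges over 40 days):

// Hypothetical collection/field names; boundaries computed client-side
const boundaries = [
  ISODate("2021-03-01T00:00:00Z"),
  ISODate("2021-03-01T01:00:00Z"),
  ISODate("2021-03-01T02:00:00Z")
  // ...one entry per division in the real case
];
db.events.aggregate([
  { $match: { timestamp: { $gte: boundaries[0], $lt: boundaries[boundaries.length - 1] } } },
  { $bucket: {
      groupBy: "$timestamp",          // bucket each event by its indexed timestamp
      boundaries: boundaries,         // precomputed bucket edges (sorted, same type)
      default: "outOfRange",          // catch-all for events outside the edges
      output: { count: { $sum: 1 } }  // per-bucket event count
  } }
]);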
{
"code": "",
"text": "Alex - Thank you for replying.I tried explain() as you recommended. It seems like the problem is with my MongoDB version, as pointed out by Asya in the next message of this thread. I am using 4.2.13 Enterprise hosted on MongoDB Atlas. I will upgrade to 4.4 in the next available time-window. This should help my effort significantly.Best - Peter.",
"username": "Peter_Winer"
},
{
"code": "",
"text": "If you post the full pipeline that’s slow, we can take a look and make suggestions about where it can be optimized. Of course having 4.4 explain output would be more enlightening but even 4.2 explain output will show some useful details about how the pipeline is executing and how it’s using indexes (if any).Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Asya - I updated to 4.4.4, also updated my local mongo shell and node.js driver. I now have execution stats available and i’m finding areas to improve. In a few days after I’ve fixed the ‘low hanging fruit’, I’d like to take you up on your offer and send a couple of complete pipelines. Thank you! Peter.",
"username": "Peter_Winer"
},
{
"code": "",
"text": "",
"username": "Sonali_Mamgain"
},
{
"code": "",
"text": ".explain does not working on vscode Is .explain works only on Terminal",
"username": "Abhishek_Rai1"
},
{
"code": "",
"text": ".explain does not working on vscode Is .explain works only on Terminal\nhow to make it work on vs code",
"username": "Abhishek_Rai1"
}
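A hedged note on the VS Code question above: the MongoDB for VS Code extension runs shell-style playground scripts, so an explain call can usually be issued from a playground rather than a terminal. The database, collection and filter below are placeholders:

// MongoDB Playground (VS Code) - adjust names to your data
use("test");
db.getCollection("movies").find({ year: 2015 }).explain("executionStats");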
] | Aggregation: Measuring performance | 2021-03-30T17:10:41.113Z | Aggregation: Measuring performance | 12,845 |
null | [
"backup"
] | [
{
"code": "",
"text": "We need ransomeware protection. Immutable backup is one option. Does anyone know if it is supported today? I could not find it on document",
"username": "Yu_Zhang1"
},
{
"code": "",
"text": "Hi @Yu_Zhang1 - Welcome to the community Perhaps going over the Configure a Backup Compliance Policy documentation may help you here.Regards,\nJason",
"username": "Jason_Tran"
}
] | Does Atlas support immutable backup today? | 2023-04-18T15:32:16.651Z | Does Atlas support immutable backup today? | 743 |
null | [] | [
{
"code": "",
"text": "I’m building a webapp with realm-web, and I have a Schema with some properties as Double. Everything is alright if I send the number with decimal places, but If i send a flat number like 10, I get a SchemaValidationFailedWrite Error. I don’t know if this is a realm-web issue or just the nature of javascript dealing with numbers… I need some help here",
"username": "Rui_Cruz"
},
{
"code": "",
"text": "i was using the default types suggested by the “Generate Schema”, and I didn’t know there was a type “number”. It works fine now.",
"username": "Rui_Cruz"
},
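To illustrate the fix described above, here is the same double-vs-number distinction expressed as a server-side $jsonSchema validator sketch (collection and field names are hypothetical): the stricter double type rejects a plain integer such as 10, while number accepts any numeric BSON type.

db.createCollection("widgets", {
  validator: { $jsonSchema: {
    properties: {
      strictDouble: { bsonType: "double" },  // fails validation for an int like 10
      anyNumeric:   { bsonType: "number" }   // accepts int, long, double, decimal
    }
  } }
});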
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Int32 / Double validation on realm-web | 2023-04-19T22:54:13.157Z | Int32 / Double validation on realm-web | 305 |
null | [
"queries"
] | [
{
"code": " db.inventory.find( { \"size.h\": { $lt: 15 }, \"size.uom\": \"in\", status: \"D\" } )db.inventory.find( { \n \"size\": {\n \"h\": { $lt: 15 },\n \"uom\": \"in\"\n },\n \"status\": \"D\" \n})\n",
"text": "Dear colleagues,\nCan anyone help me with confuse\nWhy this code works (and return 1 result):\n db.inventory.find( { \"size.h\": { $lt: 15 }, \"size.uom\": \"in\", status: \"D\" } )And this one didn’t (return 0 result):",
"username": "Vadym_Onyshchenko"
},
{
"code": "",
"text": "Field based query vs object based query.See Embedded document query not working! - #2 by steevej",
"username": "steevej"
}
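A brief illustration of the difference, using the collection from the question: dot notation evaluates each field independently, while an embedded-document query requires an exact match of the whole subdocument, so the operator expression { $lt: 15 } is treated as a literal value and field order and extra fields matter.

// Matches: each dotted path is an independent condition
db.inventory.find({ "size.h": { $lt: 15 }, "size.uom": "in", status: "D" });

// Returns nothing as written: the stored "size" subdocument would have to
// equal { h: { $lt: 15 }, uom: "in" } exactly; no operator is applied
db.inventory.find({ size: { h: { $lt: 15 }, uom: "in" }, status: "D" });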
] | Nested document different syntaxes | 2023-04-19T21:08:55.826Z | Nested document different syntaxes | 462 |
null | [] | [
{
"code": "",
"text": "How we can protect data corruption in MongoDB? What are the different ways to protect our data from corruption? If RepicaSet/Shared configured, will the corrupted data replicated to secondary node?",
"username": "Ramya_Navaneeth"
},
{
"code": "",
"text": "What do you mean bydata corruptionAll updates are replicated. If you make an update that you did not really wanted, it will still be replicated.But if you mean corruption as when a disk crash, then the server will probably crash or terminate when trying to do I/O. The rest of the cluster will do the same as if the secondary was terminated cleanly.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks Steeve.\nData Corruption- any user error , errors at the storage layer due to hardware failure.To mitigate hardware failure, replication will help if the storage is not shared. What about any user error? what is the mitigation other than backup?\nIn Oracle, if the data is corrupted, it wont replicate to other node and RMAN backup also fail. In Mongo, do we have any feature like this?",
"username": "Ramya_Navaneeth"
},
{
"code": "",
"text": "Are you telling me that Oracle is smart enough to detect that a user has made a mistake and that the update/delete was a mistake and it will not replicate this error.WOW I am impressed.Sorry we are not that lucky. If a user had the credential to delete something, we do not know if it is a mistake or not so it will be replicated. You can have delayed hidden nodes. But an operation, intended or mistake, is eventually replicated.Hardware failure will crash the server. An election will occur with the remaining nodes. What will happen next is rather complex and rather well explained in the documentation.If you want a fool’s proof system, don’t let the fools use it.",
"username": "steevej"
},
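A hedged mongosh sketch of the delayed hidden node mentioned above, assuming MongoDB 5.0+ (which uses secondaryDelaySecs) and that members[2] is the member you want to delay; adjust the index and the delay for your topology:

cfg = rs.conf();
cfg.members[2].priority = 0;               // can never become primary
cfg.members[2].hidden = true;              // invisible to clients and drivers
cfg.members[2].secondaryDelaySecs = 3600;  // stays one hour behind the primary
rs.reconfig(cfg);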
{
"code": "",
"text": "To mitigate hardware failure, replication will help if the storage is not shared.Storage / resources shouldn’t be shared, IE a 3 node replica set should have different hardware (vms, servers, etc). Otherwise the whole high availability of MongoDB isn’t useful.",
"username": "tapiocaPENGUIN"
}
] | How we can protect data corruption in MongoDB? What are the different ways to protect our data from corruption? If RepicaSet configured, will the corrupted data replicated to secondary node? | 2023-04-11T08:54:46.633Z | How we can protect data corruption in MongoDB? What are the different ways to protect our data from corruption? If RepicaSet configured, will the corrupted data replicated to secondary node? | 564 |
null | [] | [
{
"code": "",
"text": "Hello beautiful people! I have some issues after a update on our sharded cluster and all of these new connection timeouts. I’m not sure why we need ports 995 and67381? We are needing to know what services we are needing these for because we just upgraded to 6.0.5 yesterday and we just installed the new patch.",
"username": "UrDataGirl"
},
{
"code": "",
"text": "You need to revert the changes NOW you just downloaded a tool called B!TXS, it uses 995 and 67381 to egress data from your cluster.DO NOT open those ports for it!That’s a hackers tool, it disguises itself as a prompt as a stability or security update. Revert all changes you made, sanitize and reload from your last snapshot/backup if you need to.Look for anything related to com.mongodb1, com.mongod1, or mongodb1.com or mongod1.com as a source, also look for any packages containing “mongo1” or “mongodbsecurity” in the names or “criticalmongod.” That would be components of B1TXs.It’s a data extraction tool made from Julius Kivimäki’s extraction tool for 2.2*.",
"username": "Brock"
},
{
"code": "",
"text": "@UrDataGirl When your team downloads support tools for MongoDB, even plugins for VS Code, or any other third party tool like 3T etc, I recommend installing to a network separated system for testing and evaluations for what it’s doing, what it does etc, and look at the package contents of what you downloaded.Also verify with HASH to make sure it is in fact the exact thing you think you’re installing. B1TXs tool is used as a payload in a lot of different “tools” to look like they help you with MongoDB administration, and tries to look official by throwing buzzwords in the package names, or code contents.",
"username": "Brock"
},
{
"code": "",
"text": "Who is Julius? And ok thank you I told my supervisor and we are reverting everything now thank you! @Brock",
"username": "UrDataGirl"
},
{
"code": "",
"text": "Julius Kivimäki is a dude who spent years of his life making tools exclusively to breach Redis and MongoDB databases. (Not just these, but anything NoSQL like one of his partners made tools similar but for SQL based DBs.)He was arrested in 2014 after lizard squad breached some high profile targets, a lot of his tools before his arrest got released (dude knew his time was up.) so a bunch of groups have been keeping the tools up to date since.Most commonly what people do to get people to get this tool installed, is they’ll take and disguise payloads in third party tools that interact with the database(s), so then it can generate prompts and pop up notifications that look like they are official from MongoDB itself, or Redis itself, etc. So the admin unwittingly installs and pushes the update that’s really B1TXs in disguise, then it changes config files etc and generates indexes to then funnel the data out.I know about this tool because of work I did in Incident Response and forensics for several breaches the tools were used, the predecessor of B1TXs is what Julius used to breach a health clinics medical records, same tool that was later used on a power plant in Ukraine etc. It’s a very serious tool, very heavy in capability.Be sure to take whatever you had downloaded previously the location and so on, and make a report to ic3.gov about it, with the information for how they can download whatever you had downloaded and installed. You’ll never be able to permanently remove the tool from the internet, but they can at least hunt down whoever got a hold of it this time around.",
"username": "Brock"
},
{
"code": "",
"text": "Thank you again! Our security team is filling out all of the reports now thanks again!",
"username": "UrDataGirl"
}
] | On site shared cluster new ports | 2023-04-19T17:07:36.545Z | On site shared cluster new ports | 454 |
null | [
"queries"
] | [
{
"code": "",
"text": "I have tried collection.bulk_write(bulkset) for insert, update as well as delete operations. But, matchcount and modifiedcount / insertcount / deletecount do not match.May I know possible reasons for this mismatch",
"username": "Monika_Shah"
},
{
"code": "{ _id : 0 , foo : 1 , bar : 1 }\n{ _id : 1 , foo : 1 , bar : 2 }\n\nc.updateMany( { foo : 1 } , { \"$set\" : { bar : 2 } } )\n",
"text": "A document might match the query but does not need to be updated.The above will have a match count of 2 because both documents have foo:1 but update count will be 1 because document _id:1 already has bar:2 so only document _id:0 will be updated.",
"username": "steevej"
}
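The same point shown through a bulkWrite acknowledgement, as a small mongosh sketch (the collection name is hypothetical): with the two documents above already present, the result reports matchedCount 2 but modifiedCount 1.

const res = db.c.bulkWrite([
  { updateMany: { filter: { foo: 1 }, update: { $set: { bar: 2 } } } }
]);
// res.matchedCount  -> 2 (both documents have foo:1)
// res.modifiedCount -> 1 (_id:1 already had bar:2, so only _id:0 changed)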
] | Why Matchedcount is different than Modified Count in acknowledgement of Bulkwrite | 2023-04-19T15:24:15.271Z | Why Matchedcount is different than Modified Count in acknowledgement of Bulkwrite | 872 |
null | [
"dot-net"
] | [
{
"code": "Builders<BsonDocument>.Filter.In(\"_id\", id);\nFound exception One or more errors occurred. (A write operation resulted in an error. WriteError: { Category: \"ExecutionTimeout\", Code: 50, Message: \"Request timed out.\" }\nprotected FilterDefinition<BsonDocument> ItemWithListOfId(List<ObjectId> id){\n return Builders<BsonDocument>.Filter.In(\"_id\", id)}\nParallel.ForEach(_batch, list =>\n{\n try\n {\n DeleteResult result = collection.DeleteManyAsync(ItemWithListOfId(ids));\n if (result.IsAcknowledged)\n {\n totalDeleteRecords = totalDeleteRecords + (Int32)result.DeletedCount;\n \n }\n }\n catch (Exception ex)\n {\n throw ex;\n }});\n",
"text": "Hi,I am using MongoDB.Driver (.net nuget package) and having around 1 lakh documents in DB.I am deleting multiple BsonDocuement in the batches of 1500 ObjectIds using this filterI am getting this below errorAt first, the time when I run code, it deletes 1500 records successfully for a few batches around 20-25 but for later batches, it gives a timeout. we have 1600 RUs in the database.Can anyone help me to fix this issue, I can further share more info if needed.",
"username": "Abhishek_bahadur_shrivastav"
},
{
"code": "",
"text": "Did you notice anything on your dashboard when you get that error?\nWhat write concern are you using? in case of majority, did the replication succeed?",
"username": "Kobe_W"
},
{
"code": "",
"text": "hi @Kobe_WNothing suspicious at dashboard, and i am not using any write concern.",
"username": "Abhishek_bahadur_shrivastav"
},
{
"code": "Builders<BsonDocument>.Filter.In(\"_id\", id);\ndb.test.deleteMany({_id:{$in:[1,2,3...]}})\ndeleteMany()WriteError: { Category: \"ExecutionTimeout\", Code: 50, Message: \"Request timed out.\" }OperationTime",
"text": "Hello @Abhishek_bahadur_shrivastav,Welcome to the MongoDB Community forums I am deleting multiple BsonDocuement in the batches of 1500 ObjectIds using this filterWhat I understand is that this query will be translated intowhich means that there will be one deleteMany() command, but the $in array will have 1500 entries. Let me know if my understanding is correct here.we have 1600 RUs in the database.Could you please clarify the meaning of RUs in this context, and also let me know where you have deployed the MongoDB server?WriteError: { Category: \"ExecutionTimeout\", Code: 50, Message: \"Request timed out.\" }Here, the Execution timeouts mean the entire task ran for longer than the configured amount of time.Could you kindly share the total number of documents in your collection and the percentage that you intend to delete?If the percentage of documents to be deleted is high, you can use the aggregation pipeline with $match and $out to write the documents you want to keep to a different collection (assuming this is not a sharded cluster). Afterward, drop the original collection and rename the new collection to retain the desired documents.However, there could be a couple of workarounds that can be considered:One approach is to raise the batch size from 1500 to 10000, which may enable the operation to complete within the default time limit.Alternatively, you may want to adjust the settings for MongoServerSettings.OperationTimeout to fine-tune the default OperationTime.I hope it helps!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "db.test.deleteMany({_id:{$in:[1,2,3...]}})\nOperationTime",
"text": "hi @Kushagra_Kesav ,Thanks for reply.Your understanding is correct. I am sending 2 parallel request (each request has 1500 ObjectId’s) to delete BsonDocument asynchronously.I have Azure CosmosDB db via mongo API and having more than 10 lakh records in each collections and RUs means [$Request unit] (Request Units as a throughput and performance currency - Azure Cosmos DB | Microsoft Learn).Could you please clarify the meaning of RUs in this context, and also let me know where you have deployed the MongoDB server?How can i verify that percentage of deletion is high than expected in my database. I am not much familiar about shared cluster. our use case is to clean up our database if few Bsondocument are not in use. we can’t drop existing collection and create new one.If the percentage of documents to be deleted is high, you can use the aggregation pipeline with $match and $out to write the documents you want to keep to a different collection (assuming this is not a sharded cluster ). Afterward, drop the original collection and rename the new collection to retain the desired documents.I tried to keep then 500 Object-Id’s in each request but still its giving execution timeout.MongoServerSettings.OperationTimeohow can we use this option through MongoDB.Driver nuget package, since we are setting up connection via MongoClientSettings from our code.It would be more fruitful for me if we meet on call, if possible. Please schedule a call at my booking page\nBook with Abhishek. i can explain my problem better and find the best solution together.thanks @Kushagra_Kesav",
"username": "Abhishek_bahadur_shrivastav"
}
] | Getting timeout error when deleting data from colletion | 2023-04-17T15:02:50.720Z | Getting timeout error when deleting data from colletion | 1,300 |
null | [
"dot-net"
] | [
{
"code": " Func<Type, bool> serializer = delegate (Type type)\n {\n return true;\n };\n\n var objectSerializer = new ObjectSerializer(serializer);\n BsonSerializer.RegisterSerializer(objectSerializer);\nWidgetData.ToBsonDocument()",
"text": "After updating to 2.19.1 I’m getting the “is not configured as a type that is allowed to be serialized for this instance of ObjectSerializer” error on a bunch of my classes.I registered this serializer to ok any and all types. It cleared up a bunch of errors.But I’m still getting errors when using this line of code. When WidgetData is an “object” type. I tried feeding the objectSerializer into ToBsonDocument(). But that didn’t help.\nWidgetData.ToBsonDocument()",
"username": "Donny_V"
},
{
"code": "ObjectSerializerWidgetDataObjectSerializerObjectSerializerWidgetDataObjectSerializerObjectSerializerObjectSerializerusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization;\nusing MongoDB.Bson.Serialization.Serializers;\n\nvar objectSerializer = new ObjectSerializer(x => true);\nBsonSerializer.RegisterSerializer(objectSerializer);\n\nobject obj = new WidgetData();\nvar bson = obj.ToBsonDocument();\nConsole.WriteLine(bson);\n\nrecord WidgetData;\nToBsonDocument(Type type)BsonClassMapSerializer<T>ObjectSerializerusing System;\nusing MongoDB.Bson;\n\nobject obj = new WidgetData();\nvar bson = obj.ToBsonDocument(obj.GetType());\nConsole.WriteLine(bson);\n\nrecord WidgetData;\n",
"text": "Hi, @Donny_V,Welcome to the MongoDB Community Forums. I understand that you’re running into a situation where the ObjectSerializer is failing to serialize WidgetData. This can happen if your custom ObjectSerializer registration is not early enough in your bootstrap process and the default ObjectSerializer is registered first.The following is a simple console program that serializes an empty WidgetData successfully after first registering a custom ObjectSerializer. This succeeds. If you comment out the custom ObjectSerializer registration, it will fail with the error that you noted. This suggests that the root cause of the problem is not registering your custom ObjectSerializer early enough.Another way to avoid this problem is to use the ToBsonDocument(Type type) overload as that uses the BsonClassMapSerializer<T> rather than the ObjectSerializer to serialize the POCO to BSON.Hopefully this assists you in resolving the issue.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | The 2.19.1 driver update is causing havoc on my project | 2023-04-19T15:39:16.612Z | The 2.19.1 driver update is causing havoc on my project | 2,006 |
[
"aggregation",
"london-mug"
] | [
{
"code": "Principal SRE Database Engineer, BeamerySenior SRE Engineer, BeameryDistinguished Engineer, MongoDB",
"text": "MongoDB London meetup is happening on the 20th of April at Beamery London in Bunhill Row. \n_London MUG(April20)1920×1080 247 KB\nEvent Type: In Person\nLocation: Beamery, HYLO, 105 Bunhill Row, London EC1Y 8LZ \nVideo Conferencing URLIn this talk, we will explore the benefits and capabilities of using MongoDB within the ecosystem of Beamery,\na leading talent management software company. We will cover the features of MongoDB that make it a powerful and flexible database solution, such as its ability to handle unstructured data and its horizontal scaling capabilities. We will discuss how Beamery leverages MongoDB to provide robust and scalable data storage for its platform. We will also highlight the importance of implementing the best MongoDB monitoring practices. Finally, we will showcase examples of real-world use cases where MongoDB has enabled Beamery to deliver powerful and innovative features to its customers. Attendees will come away from this talk with a deeper understanding of how MongoDB can be used effectively within a modern software ecosystem to deliver scalable, flexible, and powerful solutionsArek Borucki\nPrincipal SRE Database Engineer, Beamery\nAaron(ldn)512×512 59.9 KB\nAaron Walker\nSenior SRE Engineer, BeameryPeople often ask - what’s new in MongoDB but the time something was added isn’t the\nreal factor the question is really “What’s been added to MongoDB that I should know about\nbut I don’t - because it’s beyond basic tutorials and I’ve not studied the last 10 years\nof features”. This talk tells you about the important things in the core database\nyou probably don’t know but should.\njohn866×890 140 KB\nJohn Page\nDistinguished Engineer, MongoDBTo RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Who are Beamery?We are a Talent Lifecycle Management Platform, leveraging an industry-first AI-powered\nTalent Graph, with an incredible team of creators, problem solvers, and engineers.\nBeamery’s Platform helps organizations unleash human potential within their business.They can identify and prioritize candidates that are likely to thrive at their organization, build a\na more inclusive and diverse workforce, unlock career ambition opportunities for existing\nemployees, and understand the skills and capabilities they need for the future.Here from the team at Beamery in this video about what life is like in Engineering,\nProduct & Design Team!Workplace GuidelinesWe look forward to having you all at the meetup.Cheers, \nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "How do we confirm or not attendance ?",
"username": "NeilM"
},
{
"code": "",
"text": "Hello @NeilM,Thank you for raising the question. You can RSVP at the top of the event page. Let us know if you are unable to do it.Cheers,\nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "Thanks, got it now I was looking for it.",
"username": "NeilM"
},
{
"code": "",
"text": "My apologises, I have had to cancel. I have updated the RSVP.",
"username": "NeilM"
}
] | MongoDB London MUG is happening on April 20th in collaboration with Beamery | 2023-03-20T17:13:20.037Z | MongoDB London MUG is happening on April 20th in collaboration with Beamery | 2,413 |
|
null | [
"security"
] | [
{
"code": "",
"text": "How can I enable access to database through my private VPN?\nI don’t want to disable VPN every time I want yo use your services, which is a lot of downtime for my VPN secured connection.\nI rly cannot youse your services for development and I am considering alternatives…",
"username": "Michal_Mach"
},
{
"code": "",
"text": "Hi Michal,Generally speaking if you wish you connect via a VPN you will want to either transitively connect via an AWS DirectConnect or Azure ExpressRoute through an Atlas Private Endpoint or if your VPN is more of a stand-alone set-up, you would want to add the public IP address of whatever your VPN reaches out to the public internet with to your Atlas IP Access ListLet us know if that helps\n-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hi Andrew,Thank you for response. I’m using proton VPN, I’ve allowed 0.0.0.0/0 IP address, this is not a problem. The problem is that I receive timeouts though VPN and can not connect at all.\nFor now I’m using docker mongo cluster locally, and I’m seriously considering using my own setup for dev and prod environments instead of atlas because of this.Regards,\nMike",
"username": "Michal_Mach"
},
{
"code": "",
"text": "Please cut-n-paste the exact error message that you get.",
"username": "steevej"
},
{
"code": "mongosh \"mongodb+srv://cluster0.ibXXXX.mongodb.net/test\" --apiVersion 1 --username MY_USERNAME\nEnter password: ****************\nCurrent Mongosh Log ID:\t640XXXXXXXXXXXXX\nConnecting to:\t\tmongodb+srv://<credentials>@cluster0.ibXXXX.mongodb.net/test?appName=mongosh+1.7.1\nMongoServerSelectionError: Server selection timed out after 30000 ms\n",
"text": "Just a timeout.\nThe same command works without VPN in multiple locations (IPs)",
"username": "Michal_Mach"
},
{
"code": "",
"text": "I would contact the VPN provider.But before try to enforce the VPN to use Google’s DNS 8.8.8.8 and/or 8.8.4.4.",
"username": "steevej"
},
{
"code": "2701710.0.0.102",
"text": "Hi I was having this issue too (I am actually also using ProtonVPN), and I got the answer.When connecting to mongodb Atlas, we can see that mongodb Atlas opens dynamic ports in our local machine in order to communicate.\nimage900×470 62.2 KB\nIn the screenshot, you can see that the servers in mongodb Atlas that opened the 27017 ports, are opening dynamic ports to our local machine (i.e. 10.0.0.102) that are probably necessary for communication.In our case, it is known that ProtonVPN is blocking all ports through their firewall by default, thus they are not allowing mongodb Atlas to open the dynamic ports it wants for establishing the communication. And this results with a connection timeout, and a failure.So unfortunately, because there is no option for ProtonVPN to open dynamic ports, there is no option to connect to mongodb Atlas through ProtonVPN.",
"username": "Tal_Jacob_Sir_Jacques"
},
{
"code": "",
"text": "I’m having the same issue, also connecting with ProtonVPN. Is there anything that can be done to workaround this @Andrew_Davidson?",
"username": "atineoSE"
},
{
"code": "",
"text": "open dynamic portsAll the above text about have to open dynamic ports to established an outgoing connection to port 27017 shows a lack of networking knowledge. The port that needs to be open is 27017. Do the same netstat/findstr with https. You will see a bunch of 10.* dynamic port ESTABLISHED to what ever IP:https web site you have. ProtonVPN does not open each and everyone of what you call dynamic port, it opens https.SoI would contact the VPN provider.The issue is the VPN provider. It is not a mongodb issue.",
"username": "steevej"
},
{
"code": "",
"text": "I got the following from ProtonVPN support on the issue:Please be informed that outgoing connections to some database-related ports are currently being blocked on most of our servers for anti-abuse reasons, so this could be the reason you are experiencing such an issue. Normally, any user connected to the same ProtonVPN server would have the same authorization to access the database you are willing to connect to unless there are additional security measures in place, so this is not recommended and is insecure. Even if you whitelist some ProtonVPN IP addresses with your firewall, that is still not enough because any user would still be able to reach your database through the very same ProtonVPN IP address.",
"username": "atineoSE"
},
{
"code": "",
"text": "So they basically tell you to use another VPN provider or not to use their VPN to access your database.While it is absolutely true that anyone using the same VPN exit point that is white listed will be able to access your database, they will only be able to do it if they have the appropriate credentials. That isunless there are additional security measures in place, so this is not recommended and is insecureBut since they won’t accommodate you despite the fact that MongoDB has additional security measures.ConclusionThe issue is the VPN provider. It is not a MongoDB issue.",
"username": "steevej"
},
{
"code": "",
"text": "This really looks like a dead-end, as it looks like it’s not possible to change the default port of an Atlas cluster It’s either change VPN provider, or DB provider, or manage deployment in a self-managed instance where one could change the DB ports, any of which is painful enough.",
"username": "atineoSE"
},
{
"code": "",
"text": "change …, or DB providerWhy do you think that it would work? What you share with us is clear:outgoing connections to some database-related ports are currently being blocked on most of our servers for anti-abuse reasonsWhy would you think they block MongoDB port and not other DB provider?Byblocked on most of our serversI understand that some of their servers are not blocked and will let you connect. May be you can try that before changing VPN or DB.Byblocked … for anti-abuse reasonsI understand that rather than trying to take care of the abusers, they prefer to block all legitimate users. The sad part is that they do not seem to be inclined to help you.But networking wise, can’t you simply create a different IP route that bypass your VPN for what ever IP network your cluster is using?",
"username": "steevej"
}
] | Connection timeout over VPN, unable to access the service | 2023-02-27T07:29:36.111Z | Connection timeout over VPN, unable to access the service | 2,353 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hello everyone😀,The Health Data Hub is organizing “Data Challenges for Health”, open for free to all data (and health) enthusiasts!Each challenge promises a prize of up to 30,000 euros, which will be shared between the best performing teams.Three Challenges are currently open :We hope that these challenges will arouse your interest and gather as many participants as possible!",
"username": "Pauline_Cornille"
},
{
"code": "",
"text": "Hi there,\nI can offer to support or join a team.\nRegards,\nMichael",
"username": "michael_hoeller"
}
] | Take part in a Health Data Challenge! | 2023-04-19T14:49:00.266Z | Take part in a Health Data Challenge! | 844 |
[
"compass"
] | [
{
"code": "",
"text": "Mongodb (Compass) does not detect the data type when a text has an accent and the CSV import stops.\nIf the accent is removed from “Vacío” in row 2, the import proceeds without any problem.If the accented text is created on the database, there is no problem.Is there any way to import a CSV file that even has an accent or special character in text or column names?",
"username": "Richard_Navarro"
},
{
"code": "",
"text": "@Richard_Navarro could you please share the file you are trying to import?",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "@Massimiliano_Marcon :PruebaMongo2.csv - Google DriveThank you",
"username": "Richard_Navarro"
},
{
"code": "",
"text": "Thank you RichardThat file is not valid utf8. The error is supposed to show up in the UI and it doesn’t. I’ll file a ticket for that.But import only supports files encoded as utf8 which would be why it doesn’t work.",
"username": "Le_Roux_Bodenstein"
},
{
"code": "",
"text": "Thank you, i will check how to change this in the excel.It worked!",
"username": "Richard_Navarro"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb does not accept text with accents when importing from CSV | 2023-04-18T04:07:38.596Z | Mongodb does not accept text with accents when importing from CSV | 1,205 |
|
null | [] | [
{
"code": "",
"text": "Hi, I registered here to ask about MongoDB, but exploring the forums for a minute I decided to start with another topic.These curvy shapes in the background make the forum super slow, at least on my mac (Safari). It’s a 4098x10726px image. I experimented and removing the background makes the website snappy again. Why would you guys use such a huge raster image instead of putting an SVG there? Like come on…",
"username": "Bear_Town"
},
{
"code": "",
"text": "dark-bg.png (4098×10726) (mdb-community.s3.amazonaws.com)To pinpoint a culprit, at least this one for the dark theme. I added it to my blocked url list and cleared the forum page cache. now the forum sped up.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "It’s because it’s a 8k image, I personally think going to a 1080 image would be much smoother imho.",
"username": "Brock"
},
{
"code": "",
"text": "But why? This image should be a SVG file, it’s a simple shape.",
"username": "Bear_Town"
},
{
"code": "",
"text": "I think it can even be achieved by pure CSS and would be much faster than loading an image file.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Something that may be beneficial for responsive design may be to implement 1080, 2K, and 4K versions of the background to be loaded based upon device specs/ natively set resolutions.I’ve verified on a test Android A21 that the forums sometimes don’t even load up on it.Likewise using a Dell 740 USFF desktop the forums often times do need to be refreshed a few times to load to view, this may be due to the caching/loading of the larger image.This can be adjusted by using properties to check the screen resolution set on the client, and loading the website as appropriate to it using the assets specified for that resolution.",
"username": "Brock"
},
{
"code": "",
"text": "Because some may not have full knowledge of responsive design principles. Everyone has something to learn and grow, the background can easily be done with just CSS or tailwind css, you are correct.But if they wish to use images, a series of much lower res images would be more ideal. Or just using CSS3 and flex box.",
"username": "Brock"
},
{
"code": "",
"text": "By the way, I tested on Chrome and Firefox too and it’s much better there. Looks like Safari handles backgrounds differently. I also tested with Safari on MacBooks M1/M2 and it’s not perfect, but acceptable. That being said, using this big image as the background is a bad idea.",
"username": "Bear_Town"
},
{
"code": "",
"text": "Hi All,Thanks for raising these details. I’ll check this one out with the responsible team and update here accordingly.Just to confirm @Bear_Town, what particular load times are you experiencing on Safari (and if possible, Chrome / Firefox)? I’ll do some testing on my side too but have only used Chrome to this point.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "@Jason_Tran load times for safari vary, as does support. From iPhone on Safari when you go to make a post, or reply, there’s no text features available. You can’t select formatting or anything of the kind, and the text boxes of what’s written can be limited/not in full.Load times on mobile can be upwards of 12 seconds, for Safari on a Mac, it varies as well. With an Intel i5 processor on an older Mac mini it can take upwards of 15 to 20 seconds (2013) and on M1 it can take up to 7 seconds sometimes.",
"username": "Brock"
},
{
"code": "",
"text": "I’m not talking about load times here. I mean the performance of the client-side of the website, particularly scrolling. It feels like I was using some ancient device, framerate drops to about 5 or so.",
"username": "Bear_Town"
},
{
"code": "",
"text": "Thanks for clarifying @Bear_Town , i’ll relay this information across as well as the information @Brock has advised.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_Tran ,\nIn case you missed in the crowd, I included the one causing the issue in the dark theme:In the performance tab of developer tools, with this image, in 10 seconds test, almost half the time is spent on the Painting stage with almost no idle time.I added it to the blocked URL list. Now, painting takes only about 1 sec and idle time is about 5 sec, half the time browsing. (the file should be cleared from the cache)",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Sorry for the delay guys - I’ve been advised some changes were made regarding the background. Please update here if you’re still having issues.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "It’s definitely better, but it’s still far from perfect. When I remove the background, it’s much smoother. It looks like it was not only the image, but also the script repositioning it to blame.",
"username": "Bear_Town"
}
] | The background makes this forum extremely slow! | 2023-03-31T16:18:35.594Z | The background makes this forum extremely slow! | 1,562 |
null | [
"atlas-cluster",
"php"
] | [
{
"code": "try {\n $mdbserver = 'foo.bar.mongodb.net';\n $user = 'username';\n $pw = 'password';\n\n $client = new MongoDB\\Client('mongodb+srv://'.$user.':'.$pw.'@'.$mdbserver.'/?retryWrites=true&w=majority');\n echo(MSG_CLIENT_SUCCESS);\n}\ncatch (Throwable $e) {\n // catch throwables when the connection is not a success\n echo \"Captured Throwable for connection : \" . $e->getMessage() . PHP_EOL;\n}\nFailed to parse MongoDB URI: 'mongodb+srv://username:[email protected]/?retryWrites=true&w=majority'. Invalid URI Schema, expecting 'mongodb://'.\n",
"text": "phpinfo:PHP Version 7.2.24-0ubuntu0.18.04.17|MongoDB extension version|1.3.4|\n|MongoDB extension stability|stable|\n|libbson bundled version|1.8.2|\n|libmongoc bundled version|1.8.2|\n|libmongoc SSL|enabled|\n|libmongoc SSL library|OpenSSL|\n|libmongoc crypto|enabled|\n|libmongoc crypto library|libcrypto|\n|libmongoc crypto system profile|disabled|\n|libmongoc SASL|enabled|I try to connect with this code:but I get this error:",
"username": "Levente_Kosa"
},
{
"code": "mongodb+srvext-mongodb",
"text": "Support for the mongodb+srv URI scheme was added in libmongoc 1.9, which was first included in ext-mongodb 1.4.0. You’ll have to upgrade the extension in order to connect to MongoDB Atlas. On the bright side, the most recent minor version (1.15) still supports PHP 7.2, so you don’t have to run a 5 year old driver version.",
"username": "Andreas_Braun"
},
{
"code": "",
"text": "Thank you so much for the help. It did work!!",
"username": "Levente_Kosa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't connect to Atlas with PHP | 2023-04-19T11:27:01.936Z | Can’t connect to Atlas with PHP | 912 |
null | [
"aggregation",
"queries"
] | [
{
"code": "$sortArray$filter $project: {\n products: {\n $filter: {\n input: \"$products\",\n cond: {\n $eq: [\n \"$$product.requiresSignature\",\n true\n ]\n },\n as: \"product\"\n },\n $sortArray: {\n input: \"$products\",\n sortBy: {\n recipient: 1.0\n }\n }\n }\n }\n",
"text": "Hello,What is the right way to do $sortArray after $filter?I have tried doing the following:However, this will give the following error:‘Invalid $project :: caused by :: FieldPath field names may not start with ‘$’. Consider using $getField or $setField.’Could you kindly let me know how to correct the issue? Thank you",
"username": "Vladimir"
},
{
"code": "{ $project: {\n products: { $filter: {\n input: \"$products\",\n cond: { $eq: [ \"$product.requiresSignature\", true ] } },\n as: \"product\"\n } },\n} } ,\n{ $project: {\n $sortArray: {\n input: \"$products\",\n sortBy: { recipient: 1.0 }\n }\n} }\n{ $project: {\n $sortArray: {\n input: { $filter: {\n input: \"$products\",\n cond: { $eq: [ \"$product.requiresSignature\", true ] } },\n as: \"product\"\n } },\n sortBy: { recipient: 1.0 }\n }\n} }\n",
"text": "There are at least 2 ways.My prefered way is to $sortArray in a separate $project such as:Or by replacing $products input on one of the expression by the other expression, such as:",
"username": "steevej"
},
{
"code": "cond: { $eq: [ \"---->$<----product.requiresSignature\", true ] }",
"text": "Thank you! One quick question - I have noticed you have changed the $$product line, by removing 1x ‘$’ symbol. Why would that be?cond: { $eq: [ \"---->$<----product.requiresSignature\", true ] }",
"username": "Vladimir"
},
{
"code": "",
"text": "Why would that be?Because I goofed.You definitively need 2 dollar signs because product is a variable in this context.Saru mo ki kara ochiru",
"username": "steevej"
},
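Pulling the thread together, a sketch of the combined stage with the corrections discussed above (a named field under $project and the double $$ on the $filter variable); MongoDB 5.2+ is assumed for $sortArray and the collection name is a placeholder:

db.orders.aggregate([
  { $project: {
      products: {
        $sortArray: {
          input: {
            $filter: {
              input: "$products",
              as: "product",
              cond: { $eq: [ "$$product.requiresSignature", true ] }  // $$ for the variable
            }
          },
          sortBy: { recipient: 1 }
        }
      }
  } }
]);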
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $sortArray after $filter | 2023-04-18T09:41:32.108Z | $sortArray after $filter | 501 |
null | [
"queries",
"node-js"
] | [
{
"code": "let {MongoClient} = require('mongodb') ;\n\nlet client = await new MongoClient(urlToAtlasCluster)\n\n if(! (await client.db(dbName).listCollections.toArray().length)) {\n",
"text": "I check to see if the db name is in atlas I update it but if the db is not I create the db . My code is something like :`async function isDbExist(dbName) {// db is not in atlas … createDb()\n}\n}`\nI know that I could have used the cursor returned by ‘listCollections’ to enhance performance .\nBut is there any way I can listdatabases in node application without using this lame workaround !!",
"username": "Ali_M_Almansouri"
},
{
"code": "listDatabaseslet {MongoClient} = require('mongodb');\nconst urlToAtlasCluster = 'mongodb+srv://....';\nconst client = new MongoClient(urlToAtlasCluster);\n\nfunction containsEntry(array, value) {\n return array.filter(e => e.name == value).length > 0;\n}\n\nasync function run() {\n try {\n await client.connect();\n const admin = client.db(\"admin\");\n \n const result = await admin.command({ listDatabases: 1, nameOnly: true });\n console.log(result.databases);\n console.log(`Contains \"myTestDatabase\": ${containsEntry(result.databases, 'myTestDatabase')}`);\n console.log(`Contains \"local\": ${containsEntry(result.databases, 'local')}`);\n } finally {\n await client.close();\n }\n}\nrun().catch(console.dir);\nlistDatabasesfilter",
"text": "Hey @Ali_M_Almansouri,You can use the listDatabases command for just this purpose, then call the command from your Node.js code.For example:The listDatabases command optionally has a filter command field you could use to narrow down the results further if you so choose.",
"username": "alexbevi"
},
{
"code": "",
"text": "alexbevi , thank you for your reply . This answers my question, and the quality of the code you provided is mind blowing . I think I need to learn more about these commands that are callable on any db like :\nclient.(‘dbName’).command({dbStats:1}) ;\nand others .\nThank you again ",
"username": "Ali_M_Almansouri"
},
{
"code": "Adminimport { MongoClient } from 'mongodb';\n\nconst client = new MongoClient('mongodb://localhost:27017');\nconst admin = client.db().admin();\nconst dbInfo = await admin.listDatabases();\nfor (const db of dbInfo.databases) {\n console.log(db.name);\n}\n",
"text": "Hey @Ali_M_Almansouri, I’m glad this was helpful.Just as an added note the driver has an Admin class that provides convenience APIs to some of these methods.For example:",
"username": "alexbevi"
},
{
"code": "const admin = client.db(\"admin\")",
"text": "Thank you for taking the time to add to your reply , I think I will be fine with this way :const admin = client.db(\"admin\")Because it is consistent with how you would work with other databases that has your actual data .\nI’m , however, thankful for you and ahighly appreciate your help . This community might be the best community I have ever seen ! quick relpy , percise info and even high quality code !\nIf you can link to a documentation where I can find all the classes of nodejs mongodb driver and the related methods I will appreciate it . What I am looking for is a documentation that takes the style of nodejs documentation style , like :",
"username": "Ali_M_Almansouri"
},
{
"code": "",
"text": "What I am looking for is a documentation that takes the style of nodejs documentation styleThe MongoDB Node.js Driver API documentation is likely what you’re looking for ",
"username": "alexbevi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to list all databases in atlas using nodejs driver? | 2023-04-18T07:49:43.192Z | How to list all databases in atlas using nodejs driver? | 1,626 |
null | [
"aggregation"
] | [
{
"code": "user:{\n _id\n userType: enum(BUYER, MERCHANT)\n}\n\nbuyer:{\n_id,\nuserId\n}\n\nmerchant:{\n_id,\nuserId\n}\nuser.aggregate([\n\t{\n\t\t$lookup: {\n\t\t\tfrom: 'buyer',\n\t\t\tas: 'buyer',\n\t\t\tlocalField: '_id',\n\t\t\tforeignField: 'userId',\n\t\t},\n\t},\n\t{ $unwind: { path: '$buyer', preserveNullAndEmptyArrays: true } },\n\t{\n\t\t$lookup: {\n\t\t\tfrom: 'merchant',\n\t\t\tas: 'merchant',\n\t\t\tlocalField: '_id',\n\t\t\tforeignField: 'userId',\n\t\t},\n\t},\n\t{ $unwind: { path: '$merchant', preserveNullAndEmptyArrays: true } },\n])\n\n",
"text": "I have three collection: user, buyer, merchant, example below:I want get user and detail of it then I using aggregate with lookup to two collection ( buyer and merchant). Now I want lookup only one collection with correct userType. Have any solution. My query below:Thanks",
"username": "V_Nguy_n_Van"
},
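One hedged way to get the behaviour asked for above: keep both lookups but expose only the one matching userType via $cond (field and collection names as in the question); running two separate type-specific queries would be an alternative.

db.user.aggregate([
  { $lookup: { from: "buyer",    localField: "_id", foreignField: "userId", as: "buyer" } },
  { $lookup: { from: "merchant", localField: "_id", foreignField: "userId", as: "merchant" } },
  { $addFields: {
      detail: { $cond: [ { $eq: [ "$userType", "BUYER" ] }, "$buyer", "$merchant" ] }
  } },
  { $project: { buyer: 0, merchant: 0 } }  // drop the now-redundant arrays
]);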
{
"code": "",
"text": "Hi @V_Nguy_n_Van and welcome to MongoDB community forums!!I want get user and detail of it then I using aggregate with lookup to two collection ( buyer and merchant). Now I want lookup only one collection with correct userType. Have any solution. My query below:Could you please elaborate more on the above statement with some supporting information like:Generally, in MongoDB, data formation into the collection plays an important role while writing efficient and simpler queries.\nI would highly recommend you to go through Data Modelling documentation for more clear understanding.I am suggesting some resources that you can go through to expand your knowledge of MongoDB:\nIntroduction to MongoDB\nData Modelling Course\nData Model DesignRegards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Lookup multiple collection with conditions | 2023-04-19T02:51:15.465Z | Lookup multiple collection with conditions | 699 |
null | [
"queries",
"crud",
"time-series"
] | [
{
"code": "db.createCollection(\n \"weather\",\n {\n timeseries: {\n timeField: \"timestamp\",\n metaField: \"metadata\",\n granularity: \"hours\"\n }\n }\n)\ndb.weather.insertMany( [\n {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-18T00:00:00.000Z\"),\n \"temp\": 12\n },\n {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-18T04:00:00.000Z\"),\n \"temp\": 11\n },\n {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-18T08:00:00.000Z\"),\n \"temp\": 11\n },\n {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-18T12:00:00.000Z\"),\n \"temp\": 12\n },\n {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-18T16:00:00.000Z\"),\n \"temp\": 16\n },\n {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-18T20:00:00.000Z\"),\n \"temp\": 15\n }, {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-19T00:00:00.000Z\"),\n \"temp\": 13\n },\n {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-19T04:00:00.000Z\"),\n \"temp\": 12\n },\n {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-19T08:00:00.000Z\"),\n \"temp\": 11\n },\n {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-19T12:00:00.000Z\"),\n \"temp\": 12\n },\n {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-19T16:00:00.000Z\"),\n \"temp\": 17\n },\n {\n \"metadata\": { \"sensorId\": 5578, \"type\": \"temperature\" },\n \"timestamp\": ISODate(\"2021-05-19T20:00:00.000Z\"),\n \"temp\": 12\n }\n] )\n> db.weather.countDocuments({})\n12\n",
"text": "Hi all,\nSuppose we insert 10 objects to mongodb time series collection. some of those objects contains common metaField of time series collection. I had expectations that number of documents created will be less than the number of objects inserted. But this is not the case.\nExample:Number of documents seen in the collection are 12 (which is same as number of objects inserted). Why it did not merge the objects together in single document.",
"username": "Yogesh_Sonawane1"
},
{
"code": "weatherdb.system.buckets.weatherweatherdb.system.buckets.weathertest > db.weather.countDocuments()\n12\ntest > db.system.buckets.weather.countDocuments()\n1\ndb.system.buckets.weather.find()\n{\n _id: ObjectId(\"6405b9a0b11b6de5c9ada789\"),\n control: {\n version: 1,\n min: { \t\t\t\t\t\t// holds the min value of each field as determined by BSON comparison \n timestamp: ISODate(\"2023-03-06T10:00:00.000Z\"),\n measure: -20,\n _id: ObjectId(\"642d1bb1ab4b695fbb4415ce\")\n },\n max: { \t\t\t\t\t\t// holds the max value of each field as determined by BSON comparison \n timestamp: ISODate(\"2023-03-06T10:59:47.478Z\"),\n measure: 78,\n _id: ObjectId(\"642d1bb1ab4b695fbb4415f9\")\n }\n },\n meta: { unit: 'Celsius' },\n data: {\n timestamp: {\n '0': ISODate(\"2023-03-06T10:59:47.478Z\"),\n ..\n '11': ISODate(\"2023-03-06T10:16:47.478Z\")\n },\n measure: {\n '0': 49,\n ..\n '11': 56\n },\n _id: {\n '0': ObjectId(\"642d1bb1ab4b695fbb4415ce\"),\n ..\n '11': ObjectId(\"642d1bb1ab4b695fbb4415f9\")\n }\n }\n}\n",
"text": "Hey @Yogesh_Sonawane1,Welcome to the MongoDB Community forums Thanks for asking the question.In MongoDB, the time-series collection follows a bucketing pattern to store the data in an optimized format. So, when you create a time-series collection, it creates 3 collections within the same database out of which 2 are internal collections:For example, in your case, you have created a time-series collection for weather data. The weather collection acts as a view that allows you to interact with all the documents and perform operations.However, the actual data is getting stored in a bucketing pattern (as you mentioned, merging the objects together into a single document also known as a bucket) in the db.system.buckets.weather collection.In a TimeSeries Collection, buckets are created based on metadata and timeStamp data, which automatically organizes the time series data into an optimized format.In order to understand let’s look at the output of the countDocuments() method for both the weather collection and the db.system.buckets.weather collection:which means that the 12 documents are stored in a single bucket within an internal collection.You can also view the collection and its document by executing the following command in the mongo shell:We strongly advise against making any alterations/modifications to the internal collection data, as this could result in data loss.Sharing sample bucket document for your reference:I hope it clarifies your doubt.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thank you so much for detailed answer. This clarified my doubt. Thanks again.",
"username": "Yogesh_Sonawane1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Time Series Data in Mongodb | 2023-04-18T11:12:41.926Z | Time Series Data in Mongodb | 1,395 |
null | [
"data-modeling"
] | [
{
"code": "domain.com/france/categoryA\ndomain.com/france/categoryB\ndomain.com/germany/categoryA\ndomain.com/germany/categoryB\n",
"text": "Hello i am new in mongo and BE in general. I would like to create a big list of items that will be grouped by country(whatever) and then also by more categories. Most of the times I will need api that will call something like:I suppose is better to create collections per each country + category. Then when I will have to find get search something will be faster and more efficient. There will not be cases for needing the categoryA of many countries. How are you gonna structure and do this with mongo and what prons and cons you see.\nThank you very much",
"username": "jelu_k"
},
{
"code": "things that are queried together should stay togetherDatabase 1 (France)\n Collection 1 (category A)\n Collection 2 (category B)\n ...\nDatabase 2 (Germany)\n Collection 1 (category A)\n Collection 2 (category B)\n ...\n...\n",
"text": "Hey @jelu_k,Welcome to the MongoDB Community Forums! A general rule of thumb to follow while designing your schema in MongoDB is that things that are queried together should stay together. I’m solely basing my answer on based on what you described since we don’t know what fields you have in mind, but how about keeping your countries as different databases and then keep the different categories in these countries as the collections in those databases, something like this:This way, your data will stay more organized, and queries by countries + category will be much easier.I would like to point out that, there are multiple ways to model data in MongoDB, and there isn’t necessarily a “one-size-fits-all” approach to schema design. It may be beneficial to work from the required queries first and let the schema design follow the query pattern. You can also use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.Regarding the search to be faster and more efficient, you can use appropriate Indexes to make your queries faster.Additionally, since you mentioned you are new to MongoDB, I am suggesting some resources that you can go through to expand your knowledge of MongoDB:\nIntroduction to MongoDB\nData Modelling Course\nData Model DesignPlease let us know if you have any additional questions. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Data structure for list of items grouped country and category | 2023-04-06T09:16:45.722Z | Data structure for list of items grouped country and category | 998 |
null | [
"queries",
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "I have built a service which will fetch data from Crossref and stored in hosted database on server. I’m currently really struggling with updating documents at scale and would appreciate some advice.I’m using upsert: true option and mongoose BulkWrite operation while updating document. So, if new documents arrives and it does not exist in collection it will create document for the same.Average size of documents is 8kb and field that I’m using for filtering documents is set unique: true and have created index on the same field.Here, at a moment I’m fetching 500 documents from crossref and trying to upserting it vut it takes more than 3 minutes to upsert those documents which is too longer for my usecase.What should be reason of these much delay and what is optimal number of operation that I should perform in single bulkWrite call ?Thanks in advance!",
"username": "Manish_Rathod"
},
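A hedged sketch of the upsert batch described above, assuming a Mongoose model named Work and a unique, indexed doi field (both hypothetical); ordered: false lets the server keep processing the batch instead of stopping at the first error:

// 500 fetched records -> one unordered bulkWrite of upserts
const ops = records.map((r) => ({
  updateOne: {
    filter: { doi: r.doi },   // the unique, indexed filter field
    update: { $set: r },
    upsert: true,
  },
}));
const result = await Work.bulkWrite(ops, { ordered: false });
console.log(result.upsertedCount, result.modifiedCount);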
{
"code": "",
"text": "Hi @Manish_Rathod and welcome to MongoDB community forums!!As mentioned in the MongoDB documentation for bulkWrite,A maximum of 10000 writes are allowed in a single batch operation.In saying so, for your use case, where you are trying to bulk insert 500 documents, ideally should not cause issues.\nHowever, to help me understand further, could you share a few details for the deployment:In addition, since we do not have the expertise in crossref, I would suggest to reach out the CrossRef community page for additional information if the issue persists at their end.Regards\nAasawari",
"username": "Aasawari"
}
] | Efficient Mongodb BulkWire operation | 2023-04-18T05:16:13.534Z | Efficient Mongodb BulkWire operation | 423 |
null | [
"schema-validation"
] | [
{
"code": "",
"text": "As part of a research exercise, I am looking into if there is anyway we can enforce a foreign key validation in MongoDB. More specifically, when we insert a document in collection A that has references for documents in collection B, it checks to see if they are valid?So far I do not see any way to do this inside MongoDB. Though I do see we can probably handle that through the application code. Just want to make sure this understanding is correct.",
"username": "Arsalan_Siddiqui"
},
{
"code": "",
"text": "I actually just went through this very thing for a proof of concept project I’m working on, the short answer is no, you cannot enforce a foreign key validation without creating application logic for that specific use case/restriction.You can use populate() to do this in mongoose, but I’m still trying to figure out a Driver way for this kind of logic, but overall MongoDB isn’t designed to do this organically. So do expect some performance impacts (minor) at the least from implementing this.https://mongoosejs.com/docs/populate.htmlAlso, this is more in line with schema-validation rather than key validation, so you can somewhat create references between documents that will only populate what has those references. But again, this isn’t exactly like foreign key validation.",
"username": "Brock"
},
{
"code": "",
"text": "as i recall, mongo manual doesn’t mention things like that. this is one difference from sql dbs.",
"username": "Kobe_W"
}
] | Is there anyway to enforce foreign key validation in mongodb? | 2023-04-17T19:35:04.299Z | Is there anyway to enforce foreign key validation in mongodb? | 1,107 |
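As the replies above note, any referential check has to live in application code. A minimal sketch of such a check with the Node.js driver is shown below; the collection and field names (products, orders, productIds) are hypothetical, and without a transaction this is advisory rather than a true constraint, since a referenced document could still be deleted concurrently.

```js
// Application-level "foreign key" check before insert (illustrative names).
async function insertOrderChecked(db, order) {
  const ids = order.productIds;
  const found = await db.collection("products")
    .countDocuments({ _id: { $in: ids } });
  if (found !== ids.length) {
    throw new Error("Order references products that do not exist");
  }
  return db.collection("orders").insertOne(order);
}
```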
null | [
"storage"
] | [
{
"code": "MongoDB version: 4.4.13\nStorage engine: WiredTiger\nOperating system: Linux/k8s deployment\nDeployment method: Bitnami Helm Chart\nMemory Used by Master: 17 GB Memory\nMemory Used by Replica: 2 GB Memory\nChecked the server logs: I didn't find any obvious errors or warning messages in the server logs that could explain the issues.\nChecked the WiredTiger cache size: I set the --wiredTigerCacheSizeGB option to 1 GB, which I thought should be sufficient for my workload. However.\nChecked the Query Cache size: I tried to check the current size of the Query Cache using the db.getProfilingStatus() command, but it looks like profiling is not activated in my deployment.\nChecked the Index Cache size: I tried to check the current size of the Index Cache using the db.serverStatus().wiredTiger.cache command, but I'm not sure if I'm interpreting the results correctly.\nHow can I properly set the WiredTiger cache size and ensure that it is being used effectively?\nHow can I check the current size of the Query Cache and ensure that it is being used effectively?\nHow can I check the current size of the Index Cache and ensure that it is being used effectively?\n",
"text": "Hello everyone,I’m writing this post to seek help with troubleshooting some performance issues I’ve been experiencing with my MongoDB deployment. Specifically, I’ve been noticing high memory consumption.Here are some details about my deployment:I’ve already tried a few things to diagnose the issues, but I’m still struggling to find the root cause. Here are some of the things I’ve done so far:I’m hoping that someone in the community can provide me with some guidance on how to proceed with troubleshooting these issues. Specifically, I’m looking for answers to the following questions:Thank you in advance for any help or suggestions you can provide. If you need any additional details about my deployment, please let me know and I’ll be happy to provide them.Best regards",
"username": "Haroon_B"
},
{
"code": "",
"text": "Are you saying with wiredTigerCacheSizeGB as 1GB, mongodb is still using 17GB memo which is too high?",
"username": "Kobe_W"
}
] | Troubleshooting Memory Usage with MongoDB | 2023-04-17T14:01:14.159Z | Troubleshooting Memory Usage with MongoDB | 936 |
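To read the WiredTiger cache numbers mentioned in the thread from mongosh, something like the snippet below can be used; the statistic names are the ones serverStatus reports. Keep in mind that total mongod memory is expected to exceed the configured WiredTiger cache, since connections, sort/aggregation buffers, and the storage engine's other allocations add overhead on top of it.

```js
// Run in mongosh: summarize WiredTiger cache usage from serverStatus.
const cache = db.serverStatus().wiredTiger.cache;
const used = cache["bytes currently in the cache"];
const max = cache["maximum bytes configured"];
const dirty = cache["tracked dirty bytes in the cache"];
print(`WT cache: ${(used / max * 100).toFixed(1)}% full, ` +
      `${(dirty / 1024 / 1024).toFixed(1)} MB dirty, ` +
      `max ${(max / 1024 / 1024 / 1024).toFixed(1)} GB`);
```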
null | [] | [
{
"code": "",
"text": "Hi,Recently, the data on the secondary was all gone due to some server problem, and I started replication data from Primary.the server I’m using will be closed at 10pm, In this situation, when the server restarts every morning, data replication seems to start all over again.is there any way If I restart the Mongo database, is there a way to restart it from where I replicated the day before?Thank you.",
"username": "Namaksin_N_A"
},
{
"code": "",
"text": "data replication seems to start all over againhow do you know this?is there any way If I restart the Mongo database, is there a way to restart it from where I replicated the day before?If the oplog size is not big enough to hold the “missed writes” during that shutdown time, then an initial sync will have to kick off.Try this if your mongo version is high enough. As long as the oplog data is still in primary, the replication should continue from where it is left before shutdown",
"username": "Kobe_W"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | If unexpected shutdown while replication in startup status | 2023-04-18T08:48:44.635Z | If unexpected shutdown while replication in startup status | 580 |
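In line with the reply above, the first thing to check is whether the primary's oplog window is longer than the nightly downtime, and to grow the oplog if it is not (MongoDB 3.6+). The commands below are standard; the 16000 MB size is an arbitrary example value.

```js
// Run in mongosh on the primary.
rs.printReplicationInfo();   // shows oplog size and the time range it currently covers

// Resize the oplog if the window is shorter than the nightly shutdown
// (repeat on each data-bearing member; size is in megabytes).
db.adminCommand({ replSetResizeOplog: 1, size: 16000 });
```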
null | [
"atlas"
] | [
{
"code": "",
"text": "Hello,I am writing to inquire about how Atlas works when reducing storage size. We are currently using MongoDB Atlas with AWS provider and have 184GB of free storage size out of 216GB storage size.In this situation, do we need to execute “compact” on each node before changing the storage in the cluster configuration, or can Atlas handle the free storage size without needing to perform compaction?Thanks for reading this and I’ll wait for your response. ",
"username": "Hyun_Yun"
},
{
"code": "compact()",
"text": "Hi @Hyun_Yun - Welcome to the community In this situation, do we need to execute “compact” on each node before changing the storage in the cluster configuration, or can Atlas handle the free storage size without needing to perform compaction?All Atlas deployments are running on the WiredTiger storage engine. WiredTiger is a no-overwrite data engine, and only releases disk space when blocks available for reuse are checked and used for the writing of new blocks before the file is extended. There is no need to forcibly compact your data, as WiredTiger manages this for you.So in terms of whether you “need” to execute compact() or not depends on what you’re after. E.g. Are you planning to reduce the storage configuration for the cluster once you have reclaimed the disk space?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hello @Jason_TranThank you so much for kind reply.\nWe are planning reclaim disk space.Should we run compact first?",
"username": "Hyun_Yun"
},
{
"code": "",
"text": "Hi Hyun,We are planning reclaim disk space.Just to clarify, after reclaiming disk space, what is your goal? Are you attempting to scale the storage down for the cluster?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hello @Jason_TranThe goal of the question is that we have a disk size that is too large for the actual data size.We would like to reduce the configured disk size in the cluster.Just out of curiosity, it might be helpful if we run compact before downgrading.Thanks you",
"username": "Hyun_Yun"
},
{
"code": "",
"text": "Thanks for confirming Hyun,There are some other posts where users have compacted the secondary(s) first (one by one) before compacting the primary:If you run into any particular issues with your cluster you can try contacting the Atlas in-app chat support team.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thank you so much\n@Jason_TranYour comments are very helpful!\nIt is really appreciated.Thank you.\nHyun",
"username": "Hyun_Yun"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Reduce atlas storage which has big freeStorageSize | 2023-04-10T11:09:50.483Z | Reduce atlas storage which has big freeStorageSize | 1,446 |
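For reference, the compact step discussed above is run per collection against each member (secondaries first, and the primary only after a stepDown), for example from mongosh as below. The database and collection names are placeholders, and on Atlas the database user needs the relevant privileges.

```js
// Placeholder names; run while connected directly to the member being compacted.
const mydb = db.getSiblingDB("mydb");
mydb.runCommand({ compact: "mycollection" });

// How much reclaimable space a collection is holding can be checked first:
mydb.mycollection.stats().wiredTiger["block-manager"]["file bytes available for reuse"];
```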
null | [] | [
{
"code": "",
"text": "I only have 2 servers.Is it possible to setup a Replicate with only 1 Primary and 1 Secondary ?ThanksNico",
"username": "Nicolas_Thuillier"
},
{
"code": "",
"text": "In that case, I usually add an arbiter to the secondary server",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hi Nicolas - Welcome to the community Just curious for the use case here for a replica set with 2 members? As per the Operations Checklist documentation regarding replication:I would also highly recommend going over the following post into a similar scenario (with the exception of an arbiter(s) and why you should not use them as per stennie’s comment)Regards,\nJason",
"username": "Jason_Tran"
}
] | Can we have a replica set with only 1 primary and 1 secondary? | 2023-04-18T22:58:01.058Z | Can we have a replica set with only 1 primary and 1 secondary? | 583
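For completeness, the two-data-node-plus-arbiter (PSA) layout mentioned above would be initiated roughly as below on a self-managed deployment; the hostnames are placeholders. As the linked discussion cautions, a third data-bearing member (PSS) is generally the better choice when a third host is available.

```js
// Run once in mongosh against one of the members (placeholder hostnames).
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "server1.example.net:27017" },
    { _id: 1, host: "server2.example.net:27017" },
    // Arbiter co-located on the second server; votes but holds no data.
    { _id: 2, host: "server2.example.net:27018", arbiterOnly: true },
  ],
});
```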
null | [
"aggregation",
"queries"
] | [
{
"code": "[\n {\n \"date\": \"2023-04-01\",\n \"shift\": {\n \"_id\": \"6422d103994726677e9105c3\",\n \"date\": \"2023-04-01\",\n \"shiftType\": {\n \"_id\": \"6299fd7504978a2ba513e0a2\",\n \"name\": \"Next Day End\",\n \"isNight\": true\n }\n },\n \"attendances\": [\n {\n \"_id\": \"6422d1b9994726677e9105c6\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-01T05:30:00.000Z\"\n }\n ]\n },\n {\n \"date\": \"2023-04-02\",\n \"shift\": null, //Shift not yet set, treated as general shift.\n \"attendances\": [\n {\n \"_id\": \"6422d1b9994726677e9105c7\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-02T00:30:00.000Z\"\n },\n {\n \"_id\": \"6423286e2746f65c2480a13e\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-02T04:30:00.000Z\"\n },\n {\n \"_id\": \"6423286e2746f65c2480a13f\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-02T12:40:00.000Z\"\n }\n ]\n },\n {\n \"date\": \"2023-04-03\",\n \"shiftType\": {\n \"_id\": \"6299fd7504978a2ba513e0a2\",\n \"name\": \"General\",\n \"isNight\": false\n },\n \"attendances\": [\n {\n \"_id\": \"642328a42746f65c2480a140\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-03T04:05:00.000Z\"\n },\n {\n \"_id\": \"642328a42746f65c2480a141\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-03T13:45:00.000Z\"\n }\n ]\n },\n {\n \"date\": \"2023-04-04\",\n \"shift\": {\n \"_id\": \"6422d10a994726677e9105c5\",\n \"date\": \"2023-04-04\",\n \"shiftType\": {\n \"_id\": \"6299fd7504978a2ba513e0a2\",\n \"name\": \"Next Day End\",\n \"isNight\": true\n }\n },\n \"attendances\": [\n {\n \"_id\": \"6423292f2746f65c2480a142\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-04T12:30:00.000Z\"\n }\n ]\n },\n {\n \"date\": \"2023-04-05\",\n \"shift\": null,\n \"attendances\": [\n {\n \"_id\": \"6423292f2746f65c2480a143\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-04T21:55:00.000Z\"\n }\n ]\n }\n]\n[\n {\n \"date\": \"2023-04-01\",\n \"shift\": {\n \"_id\": \"6422d103994726677e9105c3\",\n \"date\": \"2023-04-01\",\n \"shiftType\": {\n \"_id\": \"6299fd7504978a2ba513e0a2\",\n \"name\": \"Next Day End\",\n \"isNight\": true\n }\n },\n \"attendances\": [\n {\n \"_id\": \"6422d1b9994726677e9105c6\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-01T05:30:00.000Z\"\n },\n {\n \"_id\": \"6422d1b9994726677e9105c7\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-02T00:30:00.000Z\"\n }\n ]\n },\n {\n \"date\": \"2023-04-02\",\n \"shift\": null,\n \"attendances\": [\n {\n \"_id\": \"6423286e2746f65c2480a13e\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-02T04:30:00.000Z\"\n },\n {\n \"_id\": \"6423286e2746f65c2480a13f\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-02T12:40:00.000Z\"\n }\n ]\n },\n {\n \"date\": \"2023-04-03\",\n \"shiftType\": {\n \"_id\": \"6299fd7504978a2ba513e0a2\",\n \"name\": \"General\",\n \"isNight\": false\n },\n \"attendances\": [\n {\n \"_id\": \"642328a42746f65c2480a140\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-03T04:05:00.000Z\"\n },\n {\n \"_id\": \"642328a42746f65c2480a141\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-03T13:45:00.000Z\"\n }\n ]\n },\n {\n \"date\": \"2023-04-04\",\n \"shift\": {\n \"_id\": \"6422d10a994726677e9105c5\",\n \"date\": \"2023-04-04\",\n \"shiftType\": {\n 
\"_id\": \"6299fd7504978a2ba513e0a2\",\n \"name\": \"Next Day End\",\n \"isNight\": true\n }\n },\n \"attendances\": [\n {\n \"_id\": \"6423292f2746f65c2480a142\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-04T12:30:00.000Z\"\n },\n {\n \"_id\": \"6423292f2746f65c2480a143\",\n \"employee\": \"622061b73b2eaac4b15d42e4\",\n \"dateTime\": \"2023-04-04T21:55:00.000Z\"\n }\n ]\n }\n]\n\n",
"text": "I was trying to make an attendance application. I am using Mongodb and koajs for development. There is night shift option and I am unable to handle when the attendance date is changing. I need to move next dated first record to current dated record when the current day’s shift is night shift. My current data set look like bellow.I have shift allocation table where I am recording the night shift as “isNight”: true”. I need the output looks like bellow.Please help me if possible.",
"username": "Pallab_Kole"
},
{
"code": "",
"text": "Hi @Pallab_Kole and welcome to MongoDB community forums!!From the sample documents shared, it looks like you have combination of clean (where shift != null) and dirty data (where shift = null).\nIf you wish to move documents based on the condition, one idea here could be to create two different collections, one for each of the conditions, and operate between them to achieve the desired results.Let us know if you have further questions.Regards\nAasawari",
"username": "Aasawari"
}
] | MongoDB move object conditionally | 2023-04-15T02:43:23.290Z | MongoDB move object conditionally | 418
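One way to try the "two collections" suggestion from the reply above is an aggregation with $merge (MongoDB 4.2+), splitting night-shift days from the rest so each set can be processed separately. The collection names below are assumptions, and this only performs the split; moving an attendance record from the next morning back onto the night-shift day still needs its own update logic.

```js
// Copy night-shift days into their own collection (illustrative names).
db.days.aggregate([
  { $match: { "shift.shiftType.isNight": true } },
  { $merge: { into: "days_night", on: "_id", whenMatched: "replace", whenNotMatched: "insert" } },
]);

// Everything else (including days with no shift set) goes to a second collection.
db.days.aggregate([
  { $match: { $or: [{ shift: null }, { "shift.shiftType.isNight": { $ne: true } }] } },
  { $merge: { into: "days_regular", on: "_id", whenMatched: "replace", whenNotMatched: "insert" } },
]);
```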
null | [] | [
{
"code": "",
"text": "Hi,\nWe are using a VPN to access our company cloud.\nThis VPN works on split mode. So only traffic to our subnets is handled by the VPN.I want to add to the VPN a new route to our M60 replica set cluster on Atlas.\nThis way, all the VPN-connected users can access Atlas using the same IP.\n(when a user tries to access the DB on her machine, the DNS will resolve the IP, and the VPN will route it via our NAT server)What is the IP range I need to route to the VPN?\nI can route the IPs of the current cluster instances. But not sure if they will change over time.\nOur cluster is on AWS (one region only)\nI don’t mind routing all of my VPN user’s traffic for any of Atla’s IPs in that region. This way, even if the IP changes, we will still route the new IP (as long as the cluster stays in the same AWS Region)Where can I find the list of Atlas external IPs?\nDo I have any other solution?",
"username": "Izack_Varsanno"
},
{
"code": "",
"text": "Hi @Izack_Varsanno - Welcome to the community I can route the IPs of the current cluster instances. But not sure if they will change over time.Perhaps the details on the FAQ: Networking documentation, specifically the “Do Atlas clusters’s public IPs ever change?” section, will help you here. From a network perspective, could you use the hostname as opposed to the IP address?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_Tran ,\nThanks, I reviewed the FAQ before; I’m sorry for not mentioning it.\nMy cluster is an NVMe-backed cluster. And I have the feeling that the FAQ might ignore a few scenarios.\nA server failing in AWS will force it to build on a different host.\nThey are not talking about this scenario at all.\nAnd since the IP might change even on scaling the cluster (NVME), I think forwarding all the Atlas subnets for that AWS region via my VPN would be better. None of my team members should access other Atlas DBs while connected to our company VPN.",
"username": "Izack_Varsanno"
},
{
"code": "",
"text": "Thanks for the confirmation Izack - As mentioned before, does using the hostname as opposed to IP work for your scenario? In a scenario where the IP changes, I would think that the hostname would then just resolve to the new IP.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_Tran,\nI need to set the routing table for the open VPN.\nIt should get an IP-Range. No reverse proxy is available as part of the config.With only the current IPs data, I will need to update the VPN configuration each time new IP is assigned and then ask the user to reconnect to the VPN.\nThis is not a big deal, but since such a change will rarely happen, I’m afraid that no one will remember the needed steps ",
"username": "Izack_Varsanno"
},
{
"code": "",
"text": "I need to set the routing table for the open VPN.\nIt should get an IP-Range. No reverse proxy is available as part of the config.Ah gotcha. Hmm, guess it’ll be a bit tough in this case since there is still a chance IP changes could occur.Perhaps other members will have some network suggestions that may work for this use case.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi,\nI decided to choose a different path.\nI added a VPC Peering between our Atlas cluster and our VPN-VPC at AWS.\nNow all the VPN users are accessing MongoDB using private IPs only, and the routing is simple.",
"username": "Izack_Varsanno"
},
{
"code": "",
"text": "Perfect - thanks for providing the update here Izack. Sounds like that route works for your use case / requirements ",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Set access to mongoDB on my VPN | 2023-04-13T05:14:24.291Z | Set access to mongoDB on my VPN | 1,141 |
null | [
"dot-net",
"atlas-device-sync",
"app-services-user-auth"
] | [
{
"code": "contextexports.getRole = function() {\n const users = context.services.mongodb.db(\"my-db\").collection(\"users\");\n const user = users.findOne({ email: context.user.data.email });\n return user.role;\n};\n{\n \"%%read\": \"role == 'admin'\",\n \"%%write\": \"role == 'admin'\",\n \"%%clean\": \"role == 'admin'\",\n \"field\": {\n \"%%read\": \"role == 'viewer'\"\n }\n}\nMongoDB.Driver.Realmusing MongoDB.Driver.Realm;\n\nvar app = App.Create(\"<your-app-id>\");\nvar emailPasswordCredentials = Credentials.EmailPassword(\"<email>\", \"<password>\");\nvar user = await app.LogInAsync(emailPasswordCredentials);\n\nvar role = await user.Functions.CallAsync<string>(\"getRole\");\n\nif (role == \"admin\") {\n // display admin menu\n} else {\n // display user menu\n}\n",
"text": "To allow different levels of user privileges after signing in with C# and MongoDB Realm, you can use MongoDB Realm’s data access controls to define access rules based on user roles.Here are the general steps to implement this functionality:Define the different roles that users can have in your application, such as “admin”, “editor”, and “viewer”. You can do this by creating a new collection in your MongoDB database and adding documents that define each role.Create a function in your MongoDB Realm app that returns the user’s role based on their credentials. You can use the context object to access the user’s credentials and query your role collection to determine their role.By following these steps, you can allow different levels of user privileges after signing in with C# and MongoDB Realm.",
"username": "Brock"
},
{
"code": "exports.getRole = function() {\n const users = context.services.mongodb.db(\"my-db\").collection(\"users\");\n const user = users.findOne({ email: context.user.data.email });\n return user.role;\n};\n\n{\n \"%%read\": \"role == 'admin'\",\n \"%%write\": \"role == 'admin'\",\n \"%%clean\": \"role == 'admin'\",\n \"field\": {\n \"%%read\": \"role == 'viewer'\"\n }\n}\n\nusing realms;\n\nvar app = App.Create(\"<your-app-id>\");\nvar emailPasswordCredentials = Credentials.EmailPassword(\"<email>\", \"<password>\");\nvar user = await app.LogInAsync(emailPasswordCredentials);\n\nvar role = await user.Functions.CallAsync<string>(\"getRole\");\n\nif (role == \"admin\") {\n // display admin menu\n} else {\n // display user menu\n}\n\n",
"text": "Modifications and improvements to the above guide, for some reason Visual Studio Code will change “using realms;” to “using MongoDB.Driver.Realm;” and I’m not sure why it defaults an autocorrect to this. If you are using any .net auto completes to help speed up your line writes, make sure you’re looking for this autocorrect. especially if you use anything to “pretty” your code.But here you go with the fixes, I’ll provide screenshots in the next edit.To provide different levels of user privileges in your C# and MongoDB Realm application, MongoDB Realm’s data access controls can be used to define access rules based on user roles. Here’s how you can implement this functionality:Example:JavaScriptExample:This is the JSONRepaste this with any roles you want to add, and make the changes appropriately.Example:\nCSharpBy following these steps, you can easily allow different levels of user privileges in your C# and MongoDB Realm application.",
"username": "Brock"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | User Privilege Access with C# SDK and Realm - Please put in Realm Docs | 2023-04-11T20:31:34.078Z | User Privilege Access with C# SDK and Realm - Please put in Realm Docs | 1,089 |
null | [
"connecting"
] | [
{
"code": "",
"text": "Hello,We are having some technical difficulties regarding implementing a Network Access, namely connecting an M10 instance(standard RS - single region cluster - eastus) with a Private Endpoint:We created yesterday a private endpoint on our part using the official Mongodb Atlas instructions, but the connection is not established, namely it is not approved on your part via the private link services that is created on your end(in Azure).\nWe have already talked with Microsoft Support regarding this issue, and they said that the end that created the PLS(Private link service) must approve the respective connection(and thus the connection is made to our private endpoint in azure).From what we understood, the flow is:Create the private endpoint on our azure subscription(using the instructions from the cloud.mongodb.com portal)On the mongodb atlas end, a private link service is createdThe PLS(Private link service) must be approved and thus creating the peering connection. → this is where we encounter the issuesThe error on the mongodb atlas portal isPrivate Endpoint with id /subscriptions/000/resourcegroups/lorem-ipsum/providers/microsoft.network/privateendpoints/my-awesome-endpoint was not found. Click ‘Edit’ to fix the problem.I can confirm that the private endpoint is still present in our azure subscription.Also, we tried contacting support via the Live chat but unfortunately, they are responding extremely slow Are we missing something?Thank you ",
"username": "Alex_Stockel"
},
{
"code": "",
"text": "I figured it out.Even tough the azure web portal presents the resourceID with small letters, its actually case sensitive, and has some capital letters:/subscriptions/000/resourceGroups/Lorem-Ipsum/providers/Microsoft.Network/privateEndpoints/my-awesome-EndpointI obtained the correctly formatted endpoint via Resources - Get By Id - REST API (Azure Resource Management) | Microsoft LearnThis was extremely frustrating to figure it out, thus please either consult with Microsoft to update the resource ID they are showing in the azure web portal, or point it out in the mongodb atlas portal at the walkthrough steps.Thank you",
"username": "Alex_Stockel"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
},
{
"code": "",
"text": "thus please either consult with Microsoft to update the resource ID they are showing in the azure web portal, or point it out in the mongodb atlas portal at the walkthrough steps.Thanks for providing the solution for this topic Alex and also the feedback regarding the case sensitivity.",
"username": "Jason_Tran"
}
] | Azure Private Endpoint not working with Mongodb Atlas | 2023-02-24T11:09:52.678Z | Azure Private Endpoint not working with Mongodb Atlas | 1,308 |
null | [
"queries",
"python"
] | [
{
"code": " for s in collection.find():\n print(s)\nfrom pymongo import MongoClient\n\nurl = 'mongodb://localhost:27017'\nclient = MongoClient(url) # create MongoClient\n\ndb = client['test'] # client.dbname\ncollection = db['students'] # db.collname\n\n# QUERY\ncursor = collection.find()\nfor s in cursor:\n print(s)\n\ncursor .close() # Is this required?\nclient.close() # Is this required?\n",
"text": "I am using PyMongo, and I have some questions about connection and cursor.",
"username": "Marv"
},
{
"code": "",
"text": "https://pymongo.readthedocs.io/en/stable/api/pymongo/mongo_client.htmlthe document doesn’t say if that is required or not. But generally if a “close” is provided, it’s better to call it when the object is no longer needed. (to save compute resources).For “client” i suppose it can be kept without closing there until the application terminates (process exits), but for a “cursor” i would suggest to close it when the its use is done.",
"username": "Kobe_W"
},
{
"code": "with MongoClient(url) as client:\n # use client here\nwith collection.find() as cursor:\n for s in cursor:\n print(s)\n",
"text": "Thank you for your question.@Kobe_W is correct, best practice is to call close() when the client is no longer needed or use the client in a with-statement:It’s also a good idea to call close() on the cursor too, the best way would be to use a with-statement like this:When PyMongo’s MongoClient and cursor objects are garbage collected without being closed pymongo will clean up application-side resources automatically (connections, threads, etc…) however some server-side resources could be left open (cursors, sessions). Calling close() explicitly (or using a with-statement) will clean up these server-side resources that would otherwise be left around until they timeout.I just opened this ticket to improve our documentation around these ideas: https://jira.mongodb.org/browse/PYTHON-3606",
"username": "Shane"
},
{
"code": "explain",
"text": "So I understand correctly that exhausting the cursor does not close it? I guess failing to close the cursors could cause problems with “too many open files” in the server? It could explain some issues I’m having.(guess that makes sense since it’s probably possible to call explain etc. after exhaustion)",
"username": "Ole_Jorgen_Bronner"
},
{
"code": "with collection.find() as cursor:\n for s in cursor:\n print(s)\n raise ValueError('oops')\n",
"text": "So I understand correctly that exhausting the cursor does not close it?Exhausting a cursor (ie fully iterating all the results) does close the cursor automatically. Using the cursor in a with-statement is helpful when the cursor is intentionally or accidentally not fully iterated. For example if an exception is raised in the middle of processing the results the cursor with-statement will cleanup the cursor in a more timely manner:",
"username": "Shane"
}
] | I am using PyMongo. Do I have to close a MongoClient after use? | 2023-02-16T02:37:52.349Z | I am using PyMongo. Do I have to close a MongoClient after use? | 2,486 |
null | [
"node-js"
] | [
{
"code": "mongodbupsertedIdUpdateResultmongodb",
"text": "The MongoDB Node.js team is pleased to announce version 5.3.0 of the mongodb package!We invite you to try the mongodb library immediately, and report any issues to the NODE project.",
"username": "Warren_James"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB NodeJS Driver 5.3.0 Released | 2023-04-18T18:23:04.537Z | MongoDB NodeJS Driver 5.3.0 Released | 1,040 |
null | [
"queries",
"node-js",
"mongoose-odm"
] | [
{
"code": "return new Promise((resolve, reject) => {\n\ttry {\n\t const conn = mongoose.createConnection(`${url}/MyProjectDB`, {\n\t\tserverSelectionTimeoutMS: 2000\n\t });\n\n\t // Listen for the error event on the named connection\n\t conn.on('error', () => {\n\t\tconsole.log('Mongoose default connection error');\n\t\treject(`Unable to connect ${url}.`);\n\t\t// conn.close();\n\t\t//throw new Error('Something went wrong');\n\t });\n\n\t conn.on('connected', () => {\n\t\tconst myCollectionDoc = conn.model('myCollection', myCollectionSchema);\n\t\tmyCollectionDoc.find({})\n\t\t .then((documents: Array<documentType>) => {\n\t\t\tresolve({ ConnectionString: url, Data: documents });\n\t\t })\n\t\t .catch(() => {\n\t\t\treject('Unable to fetch documents.');\n\t\t })\n\t\t .finally(() => conn.close());\n\t });\n\t} catch (err) {\n\t reject(`Unable to connect ${url}.`);\n\t}\n})\n",
"text": "I am trying to create a new instance of the Mongoose on demand. In my application, one Mongoose instance is already active and i wanted to create another Mongoose instance that would be limited to a particular scope. I am accessing another collection through this instance. So i want a way to create an instance and close that instance once CRUD operations are done. But Keep open global instance. I am not able to achieve this condition. Can someone please suggest to me the way out?I tried the below code, but even with a try-catch block, my server is crashing when invalid connection string is provided.Error -const timeoutError = new MongoServerSelectionError(MongoServerSelectionError: Server selection timed out after 2000 ms",
"username": "Prasanna_Sasne"
},
{
"code": "const mongoose = require('Mongoose');\nmongoose.connect(\"MongoDB://localhost:<PortNumberHereDoubleCheckPort>/<DatabaseName>\", {useNewUrlParser: true});\nconst <nameOfDbschemahere> = new mongoose.schema({\n name: String,\n rating: String,\n quantity: Number,\n someothervalue: String,\n somevalue2: String,\n});\n\nconst Fruit<Assuming as you call it FruitsDB> = mongoose.model(\"nameOfCollection\" , <nameOfSchemeHere>);\n\nconst fruit = new Fruit<Because FruitsDB calling documents Fruit for this>({\n name: \"Watermelon\",\n rating: 10,\n quantity: 50,\n someothervalue: \"Pirates love them\",\n somevalue2: \"They are big\",\n});\nfruit.save();\nconst mongoose = require('mongoose');\n\n// Connect to database\nconst connection = mongoose.createConnection(\"mongodb://localhost:<PortNumberHereDoubleCheckPort>/<DatabaseName>\", {useNewUrlParser: true});\n\n// Connections\nconnection.on('connecting', function() {\n console.log('Connecting to database...');\n});\n\nconnection.on('connected', function() {\n console.log('Connected to database');\n});\n\nconnection.on('error', function(err) {\n console.error('Error in database connection: ' + err);\n});\n\nconnection.on('disconnected', function() {\n console.log('Disconnected from database');\n});\n\n// Schema generation\nconst FruitSchema = new mongoose.Schema({\n name: String,\n rating: Number,\n quantity: Number,\n someothervalue: String,\n somevalue2: String,\n});\n\nconst Fruit = connection.model(\"nameOfCollection\" , FruitSchema);\n\n// Document to save.\nconst fruit = new Fruit({\n name: \"Watermelon\",\n rating: 10,\n quantity: 50,\n someothervalue: \"Pirates love them\",\n somevalue2: \"They are big\",\n});\n\nfruit.save(function(err, doc) {\n if (err) {\n console.error('Error saving fruit: ' + err);\n } else {\n console.log('Fruit saved successfully: ' + doc);\n }\n});\n\n#!/usr/bin/env node\nimport { MongoClient } from 'mongodb';\nimport { spawn } from 'child_process';\nimport fs from 'fs';\n\nconst DB_URI = 'mongodb://0.0.0.0:27017';\nconst DB_NAME = 'DB name goes here';\nconst OUTPUT_DIR = 'directory output goes here';\nconst client = new MongoClient(DB_URI);\n\nasync function run() {\n try {\n await client.connect();\n const db = client.db(DB_NAME);\n const collections = await db.collections();\n\n if (!fs.existsSync(OUTPUT_DIR)) {\n fs.mkdirSync(OUTPUT_DIR);\n }\n\n collections.forEach(async (c) => {\n const name = c.collectionName;\n await spawn('mongoexport', [\n '--db',\n DB_NAME,\n '--collection',\n name,\n '--jsonArray',\n '--pretty',\n `--out=./${OUTPUT_DIR}/${name}.json`,\n ]);\n });\n } finally {\n await client.close();\n console.log(`DB Data for ${DB_NAME} has been written to ./${OUTPUT_DIR}/`);\n }\n}\nrun().catch(console.dir);\n",
"text": "What tool did you use to help validate that? That should not have validated at all…I want you to look at this script, and change yours around accordingly. Whatever plugin you used to help generate that, I highly encourage you stop using it.@Prasanna_SasneIf you need the catch block, etc.This validates, and you’re welcome to use it however you want and modify.This is another script example I posted a while back.",
"username": "Brock"
},
{
"code": "// Connections\nconnection.on('connecting', function() {\n console.log('Connecting to database...');\n});\n\nconnection.on('connected', function() {\n console.log('Connected to database');\n});\n\nconnection.on('error', function(err) {\n console.error('Error in database connection: ' + err);\n});\n\nconnection.on('disconnected', function() {\n console.log('Disconnected from database');\n});\n",
"text": "Thank you for your reply,\nbut I am not using any tool.\nInside connection.on(‘error’) I am rejecting the promise. Ignore the Try/catch block from my code. but still, a server is crashing. I am getting exact same error for an invalid connection string. Using an IP address is not the solution. I want to handle errors properly.",
"username": "Prasanna_Sasne"
}
] | Create names connection in mongoose | 2023-04-18T00:12:54.699Z | Create names connection in mongoose | 1,616 |
null | [
"capacity-planning"
] | [
{
"code": "",
"text": "I am currently on MongoDB M2 plan and after analyzing my usage I came to know that I will reach the limit of M2 plan soon, so that i want to explore in advance that I can sift or change my plan from M2 to M5 and please share some supportive resources.",
"username": "Abidullah_N_A"
},
{
"code": "Modify a Cluster<you-cluster-name>upgradeConfigure Auto-Scaling",
"text": "Hello @Abidullah_N_A ,Welcome to The MongoDB Community Forums! Yes, you can modify your cluster after initial configuration. Please go through below documentation on Modify a ClusterQuick steps to update your cluster from Atlas GUIIf you eventually decide to upgrade to a dedicated tier (M10+) cluster, you can also take a look at Configure Auto-Scaling which will help you automatically scale your cluster tier, storage capacity, or both in response to cluster usage.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "You cannot upgrade pass an M5, you’ll have to make a whole new cluster for an M10 and above, just FYI.",
"username": "Brock"
},
{
"code": "M0M2/M5M10M0M2/M5M2M5",
"text": "Hi @Brock You cannot upgrade pass an M5, you’ll have to make a whole new cluster for an M10 and above, just FYI.It is possible to upgrade from M5 to M10 but it is a scenario where downtime is required as noted in the Modify a Cluster documentation (first dot point below):Changing the cluster tier requires downtime in the following scenarios:Note: There should generally be no downtime associated with modifying the cluster tier when doing so for M10+ dedicated tier clustersFor example, please see the below screenshot regarding an M5 to M10 upgrade from the UI from my test Atlas project:\nimage2214×2456 365 KB\nRegards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "That’s actually new, and awesome.We used to have to help the customer migrate off of the shared tier cluster into their independent M10s and above. That’s really awesome they’ve made it possible to go up to M10+ from shared tier without having to build a whole new cluster for it.",
"username": "Brock"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can we change the mongodb atlas M2 plan any time when it is needed | 2023-04-15T07:45:52.478Z | Can we change the mongodb atlas M2 plan any time when it is needed | 871 |
null | [
"flutter"
] | [
{
"code": "final regexText = \"/\\b($searchText)\\b/i\";\n final items = realm.query<Post>(\n r\"post CONTAINS[c] $0 SORT(createdAt DESC)\", [\"regexText\"]);\n",
"text": "Can you use regex inside .query string matches, trying to do word match search, for example:… but doesn’t seem to work.",
"username": "Sashi_Bommakanty"
},
{
"code": "LIKE*?",
"text": "No, regex is not supported. There’s a LIKE operator that supports * and ? matches, but no true regex capabilities. You can read more about the query language in the docs.",
"username": "nirinchev"
},
{
"code": "",
"text": "Thanks for getting back. Any plans to support this soon? @nirinchev",
"username": "Sashi_Bommakanty"
},
{
"code": "",
"text": "I’m not aware of any short-term plans to support it, unfortunately ",
"username": "nirinchev"
},
{
"code": "final re = RegExp(...);\nIterable<Post> matches = realm.all<Post>().where((p) => re.hasMatch(p.post));\n",
"text": "A work-around could be to do the matching in Dart. As realm is an embedded database the cost is often acceptable.Note that you no longer benefit from any indexes. As the number of posts grow the above will eventually become expensive - it is after all a linear scan - but 10k posts is unlikely to be a problem.",
"username": "Kasper_Nielsen1"
},
{
"code": "",
"text": "Yeah, that’s a good suggestion as well @Kasper_Nielsen1 . Or, get the query results from Realm, and then parse those results in Dart for word match. THanks!",
"username": "Sashi_Bommakanty"
}
] | Realm Dart RegEx Support with .query? | 2023-03-24T20:39:53.836Z | Realm Dart RegEx Support with .query? | 1,225 |
null | [
"aggregation",
"java"
] | [
{
"code": "new Document(\"$eq\", List.of(\"$field\", \"$$var\"))",
"text": "Hi, just curious why there is no helpers in the driver to avoid having to write something as verbose as new Document(\"$eq\", List.of(\"$field\", \"$$var\")) in aggregation pipelines?",
"username": "Jean-Francois_Lebeau"
},
{
"code": "",
"text": "Hi! Would you mind having a look at https://jira.mongodb.org/browse/JAVA-3879 and letting me know if this meets your needs? It was included in the most recent release of the Java Driver, 4.9, released February 10, 2023.",
"username": "Ashni_Mehta"
},
{
"code": "",
"text": "Not sure, I couldn’t find documentation/exemples to properly understand the changes.",
"username": "Jean-Francois_Lebeau"
},
{
"code": "",
"text": "Appreciate the feedback. Docs are forthcoming, I’ll update this thread once they are live. Thanks!",
"username": "Ashni_Mehta"
},
{
"code": "pipeline.add(set(new Field<>(\"myfield\", new Document(\"$first\", \"$myarray\"))));",
"text": "A silimar case is when using Aggregates.set, AFAIK there no helper for $first so you end up with something like:\npipeline.add(set(new Field<>(\"myfield\", new Document(\"$first\", \"$myarray\"))));",
"username": "Jean-Francois_Lebeau"
},
{
"code": "",
"text": "@ Ashni_Mehta still no doc?",
"username": "Jean-Francois_Lebeau"
},
{
"code": "",
"text": "I believe docs are now available for the feature I linked above, Builders for Aggregation Expressions.Would you mind taking a look at https://www.mongodb.com/docs/drivers/java/sync/v4.9/fundamentals/builders/aggregates/?",
"username": "Ashni_Mehta"
},
{
"code": "Added support for the $documents aggregation pipeline stage \nto the Aggregates helper class.\n",
"text": "The only thing new in 4.9 related to aggregation helpers seems to be this one:",
"username": "Jean-Francois_Lebeau"
},
{
"code": "",
"text": "Apologies, docs are still being updated to reflect individual aggregation expression operators that are available now in 4.9. For information on comparison operators specifically, this PR is a good entry point. I’ll update this thread once the docs are complete for all the aggregation expression operators that were added in 4.9.",
"username": "Ashni_Mehta"
},
{
"code": "",
"text": "Hi, I have an update for you. Please have a look at the documentation here and let me know if this is what you are looking for. Thanks!",
"username": "Ashni_Mehta"
}
] | Why no helper for eq/lt/gt in aggregation in java driver? | 2023-02-13T20:32:56.295Z | Why no helper for eq/lt/gt in aggregation in java driver? | 1,026 |
null | [
"atlas-cluster"
] | [
{
"code": "",
"text": "I get this error:\n/rbd/pnpm-volume/a2c3c6a9-9676-4e7a-8ecb-647c7e1a95f1/node_modules/mongodb/lib/operations/add_user.js:16\nthis.options = options ?? {};This is the code that I am using:\nconst { MongoClient, ServerApiVersion } = require(‘mongodb’);\nconst uri = “mongodb+srv://QuackDev:” + process.env.rocketerDatabase + “@rocketer.iolndfg.mongodb.net/?retryWrites=true&w=majority”;\n// Create a MongoClient with a MongoClientOptions object to set the Stable API version\nconst client = new MongoClient(uri, {\nserverApi: {\nversion: ServerApiVersion.v1,\nstrict: true,\ndeprecationErrors: true,\n}\n});\nasync function run() {\ntry {\n// Connect the client to the server\t(optional starting in v4.7)\nawait client.connect();\n// Send a ping to confirm a successful connection\nawait client.db(“admin”).command({ ping: 1 });\nconsole.log(“Pinged your deployment. You successfully connected to MongoDB!”);\n} finally {\n// Ensures that the client will close when you finish/error\nawait client.close();\n}\n}\nrun().catch(console.dir);",
"username": "Jayden_Yeo"
},
{
"code": "",
"text": "Ah i read through similar posts on this problem. I upgraded my nodejs version to version 14 and this issue was resolved.",
"username": "Jayden_Yeo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Getting error when trying to connect to mongodb atlas | 2023-04-17T10:23:09.135Z | Getting error when trying to connect to mongodb atlas | 598 |
null | [
"replication"
] | [
{
"code": "Failed: could not get collections names from the source: EOF \"user\" : \"admin\",\n \"db\" : \"admin\",\n \"roles\" : [\n {\n \"role\" : \"clusterAdmin\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"readWriteAnyDatabase\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"readAnyDatabase\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"backup\",\n \"db\" : \"admin\"\n }\n ]\n",
"text": "Sorry for the bad formatting!We are trying to migrate off MongoDB 2.6 to MongoDB 4.2 (eventually to 5.0, but not yet). I am running the latest version of mongomirror to copy the data to the new instance. However, I keep running into the same error:\nFailed: could not get collections names from the source: EOFI can’t find any documentation on this error. I’ve followed the instructions listed on this page www.mongodb.com/docs/atlas/import/mongomirror/Source version of MongoDB= 2.6.4\nReplicaSet= true\nOperating System= Red Hat Enterprise Linux Server 7.9\nNode Count= 1\nData Size= 400gbAny insight is greatly appreciated!I would also like to add that I have successfully used the tool to migrate data from a brand new 2.6.4 db to a 4.2 db. I was testing the tool and validating that the config would work as expected. so the versions of the source DB shouldn’t be a problem, right?",
"username": "Dillon_S"
},
{
"code": "\n/path/to/mongomirror-linux-x86_64-rhel70-0.12.8/bin/mongomirror \\\n --host=<replicaSetName/host:port> \\\n --username=admin \\\n --password=<password> \\\n --authenticationDatabase=admin \\\n --authenticationMechanism=MONGODB-CR \\\n --destination=<replicaSetName/host:port> \\\n --destinationUsername=admin \\\n --destinationPassword=<password>\\\n --includeNamespace=<DB>.<very small Collection> \\\n --noTLS \\\n --tlsInsecure \\\n > /path/to/mongomirror.log\n",
"text": "I’m sure it would also be helpful to see my command",
"username": "Dillon_S"
},
{
"code": "2023-04-17T14:55:39.419-0400 Attempting initial sync from: host:27017\n2023-04-17T14:55:39.428-0400 Collection `REDACTED` was not found on the source. Skipping initial sync.\n",
"text": "Updating this as I was able to solve my problem. There was a typo in one of my IP adresses…Anyway their is a new problem:And this happens for numerous collections",
"username": "Dillon_S"
},
{
"code": "",
"text": "It sounds like this is on-prem to on-prem restoring of data, mongomirror is used for transferring a MongoDB replica set to an Atlas Replica set.\n\nimage794×136 4.27 KB\nIf you are looking to backup / restore your data on prem you should use mongoimport and mongodump.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "You’re correct about on-prem to on-prem. But What’s really the difference between Atlas and an on-prem db? They are running the same underlying code.Even if I was migrating to Atlas, this problem has more to do with the source data than with the destination.Mongomirror was recommended to us by multiple mongoDB consultants. I think the documentation is lacking on capabilities.",
"username": "Dillon_S"
},
{
"code": "",
"text": "I was just informed the problem is with mongomirror.Turns out there is an reported issue with MongoDB Server version 2.6 with Go Driver which is used in MongoMirror. This issue specifically arises when there are more than 100 collections. I’m not able to verify this but this seems to be likely explanation.",
"username": "Dillon_S"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongomirror unable to see collections on source DB | 2023-04-13T18:35:52.425Z | Mongomirror unable to see collections on source DB | 691 |
null | [
"transactions",
"kotlin"
] | [
{
"code": "suspend fun addOrder(products: List<Product>) {\n realm.write {\n val order = Order().apply {\n products.forEach {\n this.products.add(it)\n }\n }\n copyToRealm(order)\n }\n}\n",
"text": "I have this function to add an Order inside my Realm Repository:I know that a payment is approved when a new object is inserted with the message “Approved” inside isPaid collection: {“message”:“Approved”}What is the best way to block the data before the payment starts and unblock it after the payment is done? If the payment fails, the process should be reverted. Its like Transaction in normal DB I guess.Note: the AddOrder(list) function is called from UI (Kotlin button).",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hello @Ciprian_Gabor ,Thanks for raising your question. This seems a more logical implementation question than blocking data in the database.\nCould you share some more details on your workflow and code snippets? Did you try putting a boolean flag to the payment collection and allowing data to show when the value of the flag changes? You may need a separate collection for your payment and the data you would want to show.I hope provided information helps in working out your flow.Cheers, \nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "Hello @henna.s . Thanks for you time \nI want to be able to make a payment with card on my App.\nHow I can block some fields until my payment is completed so other users dont complete the payment and there are no products to sell.For example: I have 1 croissant to sell. Two users are looking at the same time. They both are paying with the card. It takes about 10 seconds to verify the payment. The fastest verified user will take the croissant and the other user will pay but will not have the croissant because there are 0 croissants left.Do I have to block the croissant when one user is trying to pay?I know that a payment is approved when a new object is inserted with the message “Approved” inside isPaid collection: {“message”:“Approved”}What is the best flow?Have a nice day!",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hello @Ciprian_Gabor,Thank you for sharing more details. This seems like conflict resolution, those who ordered first will get the item first.\nThis is inbuilt logic in all Realm SDKs and you will not need to implement anything on top of this.You may need to be online for this to function correctly. In addition, this requires thread management using the tech stack you are following that unless your write transaction is completed (read Async), it cannot in-take any other transaction. Hence, the person who made the order first will be processed first before another transaction can take place for the second person.I hope the provided information is helpful. I would suggest creating a test application before you push the code to production.Cheers, \nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "Hello @henna.s, thanks for your time.I found a better solution since our project is different.\nWe want to block the data before the payment is done because the payment is done on the native side since the user has multiple payment methods.\nStep 1: remove the specific data from the DB when the user wants to order (before payment)\nStep 2:\ncase1: if the payment is successfully done, the new order will be added\ncase2: if the payment is declined or cancelled, the removed data will be readded to the database.Hope that makes sense\nHave a nice day!",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hello @Ciprian_Gabor,Glad to know you were able to find a solution. I have been on a break last week and catching up now.As you are doing verification client-side, I believe your way of implementation is correct. Otherwise, you could do verification using MongoDB Atlas Functions on the server side.Would love to see your working app when it is ready Cheers, \nHenna",
"username": "henna.s"
}
] | How to block data in MongoDB Realm until payment is done | 2023-03-29T21:17:50.729Z | How to block data in MongoDB Realm until payment is done | 1,112 |