Dataset columns: image_url (string), tags (sequence), discussion (list), title (string), created_at (string), fancy_title (string), views (int64)
null
[ "aggregation", "crud" ]
[ { "code": "{\n \"_id\" : ObjectId(\"637b99xxx977\"),\n \"name\" : \"product1\",\n \"reports\" : [\n {\n \"_id\" : ObjectId(\"63f60fe2029fcd31b93870d3\"),\n \"name\" : \"product1\",\n \"title\" : \"hi\",\n \"dateTime\" : ISODate(\"2023-02-22T12:51:31.993+0000\"),\n \"createdAt\" : ISODate(\"2023-02-22T12:51:46.351+0000\")\n },\n ],\n ....\n}\n...\n const reports = [{name: \"product2\", dateTime: ...}, {},...];\n for (let key in reports) {\n await Station.findOneAndUpdate({name: key}, {\n $push: {\n reports: {\n $cond:{\n if: {$inc: {\"reports.$[elem].dateTime\" - reports[key].dateTime == 0}}, // --> doesnt work\n if: {$in: reports.map(function(x) { return x.dateTime ===reports[key].dateTime } )}, // __> doesnt work\n then: \"$reports\",\n else: {\n '$each': [...reports[key]],\n '$sort': {\n createdAt: -1\n }\n }\n },\n }\n }, $set: {someotherProp: true}\n }, {new: true}\n );\n }\n...\n", "text": "A property in document is an array of object. If object already exists, I dont want to push same object. I want to check it based on the date and time. I have no idea how to do it and not sure how to apply what I have seen so far.", "username": "Anna_N_A" }, { "code": "", "text": "It looks like $addToSet might be what you are looking for. It is like $push but does nothing if the element already exists in the array.", "username": "steevej" } ]
How to not push objects into array if already exist with findOneAndUpdate?
2023-02-23T10:52:00.182Z
How to not push objects into array if already exist with findOneAndUpdate?
1,973
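For the thread above, a minimal sketch of the two usual fixes: steevej's $addToSet, and a query guard so the push only happens when no report with that dateTime exists yet. It reuses the Station model and field names from the question; the report value itself is an invented example.

```javascript
const report = {
  name: "product2",
  dateTime: new Date("2023-02-22T12:51:31.993Z"),
  createdAt: new Date()
};

// Option 1: $addToSet only skips elements that are identical in every field.
await Station.findOneAndUpdate(
  { name: report.name },
  { $addToSet: { reports: report }, $set: { someotherProp: true } },
  { new: true }
);

// Option 2: closer to "check it based on the date and time":
// push only when no array element with that dateTime is already present.
await Station.updateOne(
  { name: report.name, "reports.dateTime": { $ne: report.dateTime } },
  { $push: { reports: report } }
);
```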
https://www.mongodb.com/…68946372d83b.png
[ "python", "field-encryption" ]
[ { "code": "d45..794..d45794..d45...d45d45..794{\n \"_id\": ...\n \"keyAltNames\": ...\n \"keyMaterial\": {...}\n },\n \"creationDate\": {\n \"$date\": {\n \"$numberLong\": \"1676613292399\"\n }\n },\n \"updateDate\": {\n \"$date\": {\n \"$numberLong\": \"1676982904374\"\n }\n },\n \"status\": 0,\n \"masterKey\": {\n \"provider\": \"azure\",\n \"keyVaultEndpoint\": \"<redacted>\",\n \"keyName\": \"<redacted>\",\n \"keyVersion\": \"d4556112323948f2921498bdce51ebc2\"\n }\n}\n", "text": "Hello.I’m trying to work with key rotation with Atlas and Azure KMS. I’m able to successfully create a data key, encrypt and decrypt data but can’t establish a workable process for key rotation (Azure)I have a DEK that contains a CMK as follows that references keyversion d45...In Azure KMS, I have a newly rotated key with version 794..Does anyone know how I can ‘rotate’ and so be able to decrypt data in mongo that has been encrypted with key version d45 by using the newly rotated key version of 794.. and so disabling d45... key version in Azure.\nkeys883×554 42.5 KB\nThings I’ve triedI had understood rewrap_many_data_key() would update the master key document to reference the new version, but this is not the case. The key version remains at d45Would anyone be able to advise on how I am able to continue to have data decryptable, that has been encrypted with the master key with version d45.. yet encrypt future fields with version 794Surely this isn’t a case of manually reencrypting in an offline process?", "username": "jon_whittlestone" }, { "code": "rewrap_many_data_keykey_vault.rewrap_many_data_key({}, {\n provider: 'azure',\n master_key: {\n # put the rest of your master_key options here\n \"key\": \"<your new key>\"\n }\n})\n", "text": "Hi! Thanks for raising this issue. First, could you please provide all of the information specified in our README, and furthermore, it would be super helpful if you could provide a code snippet that shows what you are attempting to do right now. From my understanding of what rewrap_many_data_key does, all you need to do is update the data key, not re-encrypt the old data with that new data key. So, all you have to do to rotate keys is:The first argument is the filter–so if you only want to update some of the data keys, use that. More info can be found here: https://www.mongodb.com/docs/rapid/reference/method/KeyVault.rewrapManyDataKey/", "username": "Julius_Park" }, { "code": "rewrap_many_data_key// new key version\ndata_key['masterKey']['keyVersion'] = '794d3e3b66d64e59834d16dde86c72a2'\n \nclient_encryption.rewrap_many_data_key(\n filter={\"keyAltNames\": self.key_alt_name},\n provider=self.kms_provider_name, // azure\n master_key=data_key['masterKey']\n )\n", "text": "Thanks - I figured it out.\nThe key i was passing, was the existing key.\nIn my call to rewrap_many_data_key I needed to:", "username": "jon_whittlestone" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
CSFLE key rotation: how to update the CMK in the DEK that references an updated version
2023-02-21T13:08:19.875Z
CSFLE key rotation: how to update the CMK in the DEK that references an updated version
1,117
null
[ "aggregation", "node-js", "mongoose-odm" ]
[ { "code": "const fetchData = await WorkflowModel.aggregate([\n {\n $match: {\n _id: mongoose.Types.ObjectId(req.body.workflow_id),\n },\n }, \n {\n $unwind: \"$_id\",\n },\n {\n $lookup: {\n from: \"tasks\",\n localField: \"_id\",\n foreignField:\"workflow_id\",\n as: \"task\"\n }\n },\n ]);\n", "text": "how to use when receiving data in body in localfield and foreignfield.\nhow to use _id after $match condition in localfield", "username": "Avish_Pratap_Singh" }, { "code": "", "text": "you do not need to unwind", "username": "steevej" }, { "code": "$unwind_id$unwindlocalField$lookup", "text": "Hi @Avish_Pratap_Singh,What error message or outcomes are you seeing? Can you share some sample documents to test with?As @steevej mentioned, you do not need to $unwind the _id field as it cannot be an array. I’m not sure about Cosmos’ implementation, but in genuine MongoDB you also don’t need to $unwind even if the localField is an array (see Use $lookup with an Array).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "What is the equivalent of let keyword which we use with lookup in mongoDB.", "username": "The_IT_Guy" } ]
$lookup with local Field and foreign Field in not working in azure cosmos dB
2022-06-13T06:17:03.763Z
$lookup with local Field and foreign Field in not working in azure cosmos dB
2,671
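On the last question in the thread above: let is just the block of variables handed to a pipeline-style $lookup, referenced inside the pipeline with the $$ prefix. A hedged rewrite of the original query in that form (standard MongoDB syntax; whether Cosmos DB's emulation accepts it is a separate matter):

```javascript
const fetchData = await WorkflowModel.aggregate([
  { $match: { _id: mongoose.Types.ObjectId(req.body.workflow_id) } },
  {
    $lookup: {
      from: "tasks",
      let: { workflowId: "$_id" }, // bind fields of the outer document here
      pipeline: [
        // ...and reference them as $$variables inside $expr
        { $match: { $expr: { $eq: ["$workflow_id", "$$workflowId"] } } }
      ],
      as: "task"
    }
  }
]);
```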
null
[ "change-streams", "realm-web" ]
[ { "code": "", "text": "I’m using the watch() method on the web SDK to keep real time updates shown on my application screen. Sometime, for whatever reason, the watch() stops working. It can be after many hours on Chrome or almost immediately on mobile Safari.Is there a way to detect a watch() that is no longer connected, so that I can have it reconnect upon this dropping?", "username": "Alexander_Davie" }, { "code": "", "text": "have you found a solution or know why that happens?", "username": "Royal_Advice" }, { "code": "", "text": "Nope. Abandoned the SDK for a different solution.", "username": "Alexander_Davie" }, { "code": "", "text": "the watch() Drops happen mostly on Firefox, any solution to this issue?", "username": "Badrdine_Sabhi" }, { "code": "Uncaught (in promise) WatchError: rule with id=\"62ec12a7614......\" no longer exists on mongodb service, closing watch stream\n at WatchStream.feedSse (bundle.dom.es.js:1812:1)\n at WatchStream.feedLine (bundle.dom.es.js:1737:1)\n at WatchStream.advanceBufferState (bundle.dom.es.js:1872:1)\n at WatchStream.feedBuffer (bundle.dom.es.js:1716:1)\n at MongoDBCollection.watchImpl (bundle.dom.es.js:2065:1)\n at async watchAction (useWatch.js:45:1)\n", "text": "i recently create a react app by using realm web template app, the web-js one, and have this error show up every few second when fetching data. i think it is related with your watch() drops issue.## Goals\n\nI would like to open a change stream to get notified of changes to a… collection.\n\n## Expected Results\n\nThe client receives changes until the change stream is closed.\n\n## Actual Results\n\nThe change stream times out after a few minutes (according to the Realm logs, the stream stays open for about 90 to 240 seconds) and receives no more change events. The following error message is logged in the browser console.\n\n```\nUncaught (in promise) WatchError: execution time limit exceeded\n WatchError https://unpkg.com/[email protected]/dist/bundle.iife.js:9540\n feedSse https://unpkg.com/[email protected]/dist/bundle.iife.js:9686\n feedLine https://unpkg.com/[email protected]/dist/bundle.iife.js:9611\n advanceBufferState https://unpkg.com/[email protected]/dist/bundle.iife.js:9746\n feedBuffer https://unpkg.com/[email protected]/dist/bundle.iife.js:9590\n watch https://unpkg.com/[email protected]/dist/bundle.iife.js:9916\n```\n\n## Steps to Reproduce\n\n1. Open a change stream for a collection.\n2. Wait for two to four minutes.\n3. Insert or update a document in the collection to trigger a change event.\n4. The client doesn't receive the event and the error message appears in the console.\n\n## Code Sample\n\n```js\nasync function reproduction() {\n const app = new Realm.App(\"<app id>\");\n await app.logIn(Realm.Credentials.anonymous());\n\n const collection = app.currentUser\n .mongoClient(\"mongodb-atlas\")\n .db(\"<database>\")\n .collection(\"<collection>\")\n\n for await (const change of collection.watch()) {\n console.log(change);\n }\n}\n```\n\n## Version of Realm and Tooling\n\n- Realm JS SDK Version: 1.1.0\n- Node or React Native: Browser\n- Client OS & Version: Linux, Firefox 85is there a better place to rise the question to find out answers", "username": "Li_Li3" }, { "code": "", "text": "I am having the same issue. 
Are there any solutions for detecting dropped watch()'s?", "username": "Austin_Imperial" }, { "code": "", "text": "they created this sandboxrealm-watch-test by kraenhansen using react, react-dom, react-scripts, realm-web", "username": "Li_Li3" }, { "code": "", "text": "i use chrome, it is the same", "username": "Li_Li3" }, { "code": "", "text": "Li_Li3, thank you, this is heroic.", "username": "Austin_Imperial" }, { "code": "", "text": "I fixed my issue by moving the rules from the default to individual collection, i am not sure why, but the watch error disappeared.", "username": "Li_Li3" }, { "code": "", "text": "Made a visual guide to help people with a similar issue, Li_Li3 is right- although I wish I saw their comment before doing my own deductions.\nimage1768×1592 243 KB\n", "username": "Paul_Vu" }, { "code": "", "text": "Then set granular permissions.", "username": "Paul_Vu" }, { "code": "", "text": "I’m having the same issue using the web sdk.The issue won’t occur, if I use the example app provided here:\nhttps://codesandbox.io/s/9c8bwHowever, if I use that very same example app without any changes but using my own atlas app id, the errors show up again.\nSo it’s definitely connected to some backend configuration. I just don’t know which one. Will create a new test app and try to apply my settings one by one…Does anyone have an idea what the problem might be?\nIt’s not the rules issue as described above for me as I’ve already deleted all default roles for testing.", "username": "Daniel_Weiss" }, { "code": "export const watchCollection = async () => {\n const processing = app.currentUser\n .mongoClient('mongodb-atlas')\n .db('yourdb')\n .collection('yourcol');\n if (!processing) {\n return;\n }\n for await (const change of changeStream) {\n switch (change.operationType) {\n case 'insert': {\n }\n case 'update': {\n const {fullDocument} = change;\n//....do awesome things\nbreak;\n }\n case 'replace': {\n const { documentKey, fullDocument } = change;\n break;\n }\n case 'delete': {\n const { documentKey } = change;\n break;\n }\n }\n }\n return changeStream;\n};\n\n/*Then in my react app*/\n\n useEffect(() => {\n watchCollection();\n }, []);\n", "text": "@Daniel_Weiss difficult to tell, without more information, I too used the code sandbox example and it worked with a test app when I was first problem solving this issue.I’ll provide some more information on my set-up , my code snippet from my app, and what rules looks like on my end. This was frustrating for me, so I hope this will help you solve it faster.", "username": "Paul_Vu" }, { "code": "", "text": "Thank you very much for your help, @Paul_Vu.I finally figured it out. Having DeviceSync enabled, I assumed that those roles would take precedence and while I haven’t tested for that - I still think they do. However, the error occurs when there are no rules (not DeviceSync permissions but “normal rules”) setup. Once I did that and - importantly - no default rules as mentioned above, it did work.So, to recap, my setup is as follows:I connect to the backend via both the Flutter SDK and the Web SDK. It both works now.\nMaybe this helps someone else struggling with the same problem.", "username": "Daniel_Weiss" }, { "code": "", "text": "Glad you got it resolved, good to know about the no rules part!", "username": "Paul_Vu" } ]
Detect Realm Web SDK Watch() Drops
2021-07-03T14:18:26.794Z
Detect Realm Web SDK Watch() Drops
7,713
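None of the replies shows the reconnect part of the original question, so here is a rough sketch of one way to reopen a dropped stream: a retry loop around collection.watch() from the Web SDK. The 5-second back-off and the logging are assumptions, not an official pattern.

```javascript
async function watchWithRetry(collection) {
  // Keep reopening the change stream whenever it ends or throws.
  for (;;) {
    try {
      for await (const change of collection.watch()) {
        console.log(change); // handle the change event here
      }
      // watch() returned normally: the stream closed, so loop and reopen it.
    } catch (err) {
      console.error("watch stream dropped, reopening", err);
    }
    await new Promise((resolve) => setTimeout(resolve, 5000)); // simple back-off
  }
}
```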
https://www.mongodb.com/…_2_1024x675.jpeg
[]
[ { "code": "", "text": "Hi (-:So I know about the concept of the “working set” in MongoDB. From what I’ve read, the working set is a pool of some “hot” documents that sit in RAM for quick access. MongoDB has an “internal statistics special sauce” to determine which docs will be there.So now, if the user runs a query, and a document is in the working set, it can be fetched quickly from the RAM, and skip hitting the DISK.So I’ve run this query and look at totalDocsExamined:\nCleanShot 2023-02-23 at 10.53.14@2x1864×1230 224 KB\nFrom what I’ve learned totalDocsExamined of 1, means that 1 doc was fetched from DISK.So my question is, does totalDocsExamined of 1 mean that this document was actually fetched from DISK and not from the working set?So if I continue with the query above, and try different userIds, would I be able to sometime get lucky and “catch” a document which sits in the working set, and will return totalDocsExamined of 0?I am asking this, because no matter how much I’ve tried, even with very “hot” documents that are being accessed by all users all the time, I always see totalDocsExamined of 1.", "username": "yaron_levi" }, { "code": "", "text": "According to documentation totalDocsExamined has nothing to do about reading the document from disk or not and had nothing to do about being in the working set or not.More or less a document is examined if the server cannot determined, using the index only, if the said document should be returned or not.For example, assuming you have the index {foo:1,bar:1}.If your query is something like {foo:123,bar:456}, the document does not have to be examined since the index has both fields. However if your query is {foo:123,bar:456,isActive:false}, all documents with foo:123 and bar:456 will be examined to see if isActive is true or false.I am pretty sure that you will get a totalDocsExamined:0 and nReturned not 0 only for covered queries. That is when all queried fields and projected fields are part of the index.Reading and writing from and to disk is the storage engine job. I really do not know what kind of statistics you can get from wiredtiger.", "username": "steevej" }, { "code": "", "text": "@ steevej should be correct.Think from design principal, the metric totalDocsExamined is an important number indicating how efficient your query is. If this number will be 0 when all docs examined are fetch from ram, it will be confusing developers making them believe (wrongly) that their query is very efficient. Also in that case, the number will highly depend on the available memory size, which is not expected by developers.Only covered queries will have 0 as totalDocsExamined.", "username": "Kobe_W" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to know if a document "lives" in the working set? (looking at totalDocsExamined)
2023-02-23T08:58:47.846Z
How to know if a document “lives” in the working set? (looking at totalDocsExamined)
359
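To make the covered-query point concrete, a small mongosh sketch using steevej's { foo: 1, bar: 1 } example (collection and field names are invented for illustration): the filter and the projection both live entirely in the index and _id is excluded, which is the one case where totalDocsExamined can report 0.

```javascript
db.coll.createIndex({ foo: 1, bar: 1 });

// Covered query: every queried and projected field is in the index,
// so explain() should show totalDocsExamined: 0.
db.coll.find(
  { foo: 123, bar: 456 },
  { _id: 0, foo: 1, bar: 1 }
).explain("executionStats");
```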
null
[ "server" ]
[ { "code": "", "text": "I have a MacBook Pro 2.9 GHz 6-Core Intel Core i9. I’ve installed MongoDB (it’s in the Application folder)\nWhen I try to change the directory to DataDB it says I have no permission, when I do it with “sudo” it says I have read-only permission. I’ve tried downloading mongoDB community 6.0 from this manual: https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-os-x/\nEverything correct until I run this line:\nbrew services start [email protected]\nwhere I receive this error:\nPermission denied @ rb_sysopen - /Users/albavaldivia/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\nFrom another question in the community blog, I saw that one solution was writing this:\nsudo chown $(whoami) ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\nWhich I received this error:\nNo such file or directory\nI also tried “brew doctor”. Didn’t work. What can I do?", "username": "Alba_Valdivia" }, { "code": "", "text": "Check your mongod.log.It may give more details\nYou should not run as sudo or root\nIt will change dir & lock file permissions\nOn Macos access to /data/db is removed\nIf you have used brew install it would have choosen required dbpath/logpath dirs\nCheck if permissions/ownership are ok or notYou have alternate way of starting mongod from command line also", "username": "Ramachandra_Tummala" } ]
Installed MongoDB. Having problems running the shell
2023-02-23T17:15:32.483Z
Installed MongoDB. Having problems running the shell
889
null
[ "node-js" ]
[ { "code": "", "text": "This question was asked one year ago but I’m asking again because it does not seem to be resolved.I’m working with DayJS which relies on Intl but does not work in atlas functions.Is there a workaround for this?", "username": "Alexandar_Dimcevski" }, { "code": "", "text": "Hi Alexandar,Your original post says you used the moment-timezone package, this should be supported in the function.\nHave you tried using this recently? If so, what is the code you used and error seen?Regards\nManny", "username": "Mansoor_Omar" } ]
Atlas Functions handle dates
2023-02-18T23:41:59.726Z
Atlas Functions handle dates
820
null
[ "queries" ]
[ { "code": "{\n \"FullName\": \"Sarajane Cazares\",\n \"FromTime\": \"2023-02-21T19:00:00Z\",\n \"ToTime\": \"2023-02-21T20:00:00Z\",\n ...\n}\n...\n{\n \"FullName\": \"Marissa\",\n \"FromTime\": \"2023-02-21T20:00:00Z\",\n \"ToTime\": \"2023-02-21T21:00:00Z\",\n ...\n}\n", "text": "I’m facing a challenge. Let’s see documents firstAs you can see, ToTime of doc1 and FromTime of doc2 are same. I need to find all documents where ToTime and FromTime values are same.Is it possible using MongoDB?Thanks", "username": "Tanmoy_Sadhukhan" }, { "code": "", "text": "When you want to find documents that are related to other documents you simply do a $lookup.Your use-case is one of the simplest $lookup to do using localField:ToTime and foreignField:FromTime.", "username": "steevej" }, { "code": "", "text": "Thanks. An aggregate method mixing up with $lookup and $match did my job.", "username": "Tanmoy_Sadhukhan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Document querying
2023-02-21T11:53:52.117Z
Document querying
477
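The $lookup plus $match combination mentioned as the solution above might look roughly like this, a self-join of the collection on ToTime/FromTime (the collection name bookings is an assumption):

```javascript
db.bookings.aggregate([
  {
    $lookup: {
      from: "bookings",        // self-join against the same collection
      localField: "ToTime",
      foreignField: "FromTime",
      as: "following"
    }
  },
  // Keep only documents whose ToTime equals another document's FromTime.
  { $match: { "following.0": { $exists: true } } }
]);
```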
null
[]
[ { "code": "", "text": "Can anyone explain, How to add an auto-increment id field like A001, U0002?", "username": "Nuwan_Tharuka_N_A" }, { "code": "Create Trigger", "text": "Hello @Nuwan_Tharuka_N_A ,There are many ways one can achieve this programatically, I am sharing a blog below which uses MongoDB Atlas Triggers to achieve the same in MongoDB AtlasLearn how to implement auto-incremented fields with MongoDB Atlas triggers following these simple steps.You can update the function mentioned in Create Trigger as per your requirements and can test it according to your collection documents.To learn more about Database Triggers, please go through belowLearn about database triggers, when to use them and how to create them in MongoDB Atlas.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Documente Data IDs
2023-02-23T10:43:14.072Z
MongoDB Documente Data IDs
425
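The trigger approach in the linked blog boils down to a counter collection plus a formatting step. A hedged sketch of such an Atlas database trigger function follows; the database, collection, prefix, and zero-padding are all assumptions to adapt.

```javascript
exports = async function (changeEvent) {
  const mongodb = context.services.get("mongodb-atlas");
  const counters = mongodb.db("mydb").collection("counters");
  const users = mongodb.db("mydb").collection("users");

  // Atomically bump a per-sequence counter document.
  const counter = await counters.findOneAndUpdate(
    { _id: "userId" },
    { $inc: { seq: 1 } },
    { upsert: true, returnNewDocument: true }
  );

  // Turn 2 into "U0002" and write it back onto the newly inserted document.
  const formatted = "U" + String(counter.seq).padStart(4, "0");
  await users.updateOne(
    { _id: changeEvent.fullDocument._id },
    { $set: { userId: formatted } }
  );
};
```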
null
[]
[ { "code": "", "text": "i want to convert string format to date format but not getting any solution .String : “15/06/2020 04:05:14 PM”kindly provide any solution . thanks", "username": "kishu_raj" }, { "code": "", "text": "Most likely $dateFromString should work for you.", "username": "steevej" } ]
String to Date Conversion
2023-02-23T05:19:34.198Z
String to Date Conversion
651
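A sketch of the $dateFromString suggestion. One caveat: its format specifiers cover day, month, year and 24-hour time, but to my knowledge there is no specifier for a trailing AM/PM marker, so a string like "15/06/2020 04:05:14 PM" needs that part stripped or converted (in the pipeline or in application code) before parsing.

```javascript
// Parses a 24-hour string such as "15/06/2020 16:05:14"; the field name is an assumption.
db.coll.aggregate([
  {
    $set: {
      parsedDate: {
        $dateFromString: {
          dateString: "$myField",
          format: "%d/%m/%Y %H:%M:%S"
        }
      }
    }
  }
]);
```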
null
[ "dot-net", "unity" ]
[ { "code": "", "text": "Hi everyone!\nWe are creating a cross-platform game with Unity. We are authenticating users using Unity Authentication services and then feeding a Unity token to the database to authenticate different users. However, we are not sure if Realm can actually be used in consoles or not, since the documentation only states iOS, Android, Linux, macOS, or Windows.\nIn more detail: The authentication of the user in different platforms is going to be handled by Unity, so that’s not an issue. We are going to use custom functions to use that unity user ID to authenticate into Realm and Atlas. However, that only ensures that we can authenticate users in any device, but that doesn’t mean that the Realm environment is going to work/run in PS5, XBOX Series, and Nintendo Switch. Does anyone know if the Realm environment will run/work on consoles?\nThank you all!Moonspire Games is an independent game studio, founded in 2022. Made up of an international collective of talented artists and creatives, we’re striving to build a unique, multi-platform RPG experience.", "username": "BorisSchwindt" }, { "code": "", "text": "Hi. We don’t support any console, so Realm won’t run on any of those.", "username": "Andrea_Catalini" }, { "code": "", "text": "To clarify a little: Realm does work on Xbox, but we haven’t enabled the setting in the Unity package to tell Unity to deploy the correct binaries.Re: PS5 and Switch - we don’t support them currently, but can work on adding support. To decide on the priorities, it’ll be helpful if you reached out to your AE (since you’re using Atlas) and convey to them your timelines and requirements.", "username": "nirinchev" }, { "code": "", "text": "@BorisSchwindt Product for Realm here - we do not have a developer license for PS5 or Switch but you can give it a try to compile yourself on these platforms since we are open source and see if it works. If there is problem you can file an issue, if its a quick fix perhaps we can get it sorted for you, if its a much larger project we can see where it falls in our quarterly planning prioritization process.", "username": "Ian_Ward" }, { "code": "", "text": "Hey @nirinchev what do you mean by AE? We are kinda new to Atlas and MongoDB in the project.", "username": "BorisSchwindt" }, { "code": "", "text": "That sounds like the best plan. We are still some long time from publishing and testing with developer kits, but I’ll try to get my hands on one to test the MongoDB Realm environment and check if it works. Thanks!", "username": "BorisSchwindt" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Realm in consoles (PS5, XBOX series, Nintendo Switch)
2023-02-22T23:37:55.974Z
MongoDB Realm in consoles (PS5, XBOX series, Nintendo Switch)
1,155
null
[ "aggregation", "queries" ]
[ { "code": "{\n _id: objectId\n \"myArray\": Array\n 0: Object\n name: \"A\"\n 1: Object\n name: \"B\"\n 2: Object\n name: \"A\"\n 3: Object\n name: \"B\"\n}\n{\n _id: objectId\n \"precedeCount\": Array\n 0: Object\n first: \"A\"\n second: \"B\"\n count: 2\n 1: Object\n first: \"B\"\n second: \"A\"\n count: 1\n}\n", "text": "I’ve been struggling on finding a clean way to map an array to a new array that illustrates the number of times any element within that array immediately precedes another.For instance, say this is the original array:The desired outcome would be something like:I’m wondering if this is possible with the MongoDB aggregation pipelines, or if I should just go back to MapReduce.Thanks in advance.", "username": "Andy_Zhang" }, { "code": "", "text": "Pretty sure I wrote up something like this years ago… Check out this article…", "username": "Asya_Kamsky" }, { "code": "", "text": "if I should just go back to MapReduceoh and the answer to this is always “never”! ", "username": "Asya_Kamsky" }, { "code": "{$set:{ pairs: {$map:{\n input:{$range:[0,{$subtract:[{$size:\"$myArray\"},1]}]}, \n in:{$slice:[\"$myArray.name\",\"$$this\",2]}\n}}}},\n{$unwind:\"$pairs\"},\n{$group:{_id:\"$pairs\", count:{$sum:1}}}\n { \"_id\" : [ \"A\", \"B\" ], \"count\" : 2 }\n { \"_id\" : [ \"B\", \"A\" ], \"count\" : 1 }\n", "text": "I went ahead and wrote a three stage agg solution based on that old blog post:With your input this gives:", "username": "Asya_Kamsky" }, { "code": "", "text": "Thanks so much Asya this worked like a charm!", "username": "Andy_Zhang" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Counting the number of instances where one array object proceeds another
2023-02-23T20:12:54.077Z
Counting the number of instances where one array object proceeds another
568
null
[]
[ { "code": "", "text": "Hi I signed in with github student developer pack for mongodb but i was unable to login normally\nso i used sign in with google with the same gmail account i use for github and now i have two accounts.\nI also typed mail id maually in option present below google and github and I reset my password. What exactly I need to do, I am totally confused.", "username": "Mohammad_Amir_Ansari" }, { "code": "", "text": "Hi Mohammad, welcome to the forums! I’m going to reach out to you via DM to get some more information.", "username": "Aiyana_McConnell" } ]
Personal College mail id and github student developer pack
2023-02-22T08:55:57.988Z
Personal College mail id and github student developer pack
1,353
null
[ "aggregation" ]
[ { "code": "name,\nemail,\nschools: [\n {\n roles: ['admin', 'user'],\n schoolId: ObjectId(...)\n }\n]\n_id,\nname,\naddress\netc etc\nname,\nemail,\nschools: [\n {\n roles: ['admin', 'user'],\n school: {\n _id: ObjectId,\n name,\n ...all other fields\n }\n }\n]\n{\n from: \"schools\",\n as: \"schools\",\n 'let': { 'schoolId': '$roles.schoolId' },\n 'pipeline': [\n { '$match': { '$expr': { '$in': ['$_id', '$$schoolId'] } } }\n ],\n 'as':'schools'\n}\n", "text": "Hi,I’m still getting to grips with the aggregation pipeline and need a bit of help understanding how I would piece together the following data:I have a collection of ‘users’ that has a nested document called ‘roles’ within it. A user can belong to one or many ‘schools’ and with each school have a different set of ‘roles’. Here’s how I’ve structured the data:User:A school looks like this:What I would like to be able to do is produce a result that looks like this whenever I lookup a user:Looking at the aggregation pipeline it seems like $lookup is my first port of call using something like this:Which, if I’m understanding correctly is great and gives me a ‘schools’ set of documents at the root level. I’m a bit stuck on how I get these documents to merge with the actual roles segment though. I’m also unclear about the need for a $match to check for existence.I’d really appreciate any corrections/examples and comments.J", "username": "James_N_A2" }, { "code": "rolesletaddFieldletlocalFieldforeignFieldletpipelinefrom", "text": "Thanks for linking. Here is a quick response:There may come many different solutions with different performance levels as you have seen in the other post you followed.since you are at the learning stage, I will leave the solving part to you for the moment. it is best for learning when you get hints yet keep trying yourself.", "username": "Yilmaz_Durmaz" }, { "code": "db.users.aggregate([\n {\n $unwind: \"$schools\"\n },\n {\n $lookup: {\n from: \"schools\",\n localField: \"schools.schoolId\",\n foreignField: \"_id\",\n as: \"schools.school\"\n }\n },\n {\n $unwind: \"$schools.school\"\n },\n {\n $group: {\n _id: \"$_id\",\n name: { $first: \"$name\" },\n email: { $first: \"$email\" },\n schools: {\n $push: {\n roles: \"$schools.roles\",\n school: \"$schools.school\"\n }\n }\n }\n }\n])\n$push", "text": "Hello @James_N_A2 ,I agree with @Yilmaz_Durmaz that learning by doing is the best in the long term. Just to give you an idea on how I would solve this, please see the below query.Below aggregation pipelines stages are used in the above query:$unwind - Deconstructs an array field from the input documents to output a document for each element$lookup - Performs a left outer join to a collection in the same database to filter in documents from the “joined” collection for processing.$group - It separates documents into groups according to a “group key”. The output is one document for each unique group key.$push - The $push operator appends a specified value to an array.Note: Please test the aggregation pipeline throughly as per your requirements.Also, note that this may not be the best solution, just something I quickly come up with. 
You’re welcome to post your own solution that may be useful for future reader of this topic.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "{\n $inspect: {\n from: \"schools\",\n foreignField: \"_id\",\n localField: \"schools.schoolId\",\n as: \"related_schools\"\n }\n}\n$addFields$mapuser.schools.map{\n $addFields: {\n \"schools\": {\n \"$map\": {\n \"input\": \"$schools\",\n \"as\": \"schools\",\n \"in\": {\n \"$mergeObjects\": [\n \"$schools\",\n { $arrayElemAt: [ \"$related_schools\", { \"$indexOfArray\": [\"$related_schools._id\", \"$schools.schoolId\"] } ] }\n ]\n }\n }\n }\n }\n}\nas$$this$reduce$map$lookup$projectschoolId{ \n $project: {\n \"schools.schoolId\": 0,\n \"related_schools\": 0\n }\n}\n$map$lookuproleslet", "text": "Okay, this has been fun so far and thanks for your hints. I’ve spent much time today looking through other examples and documentation. Most examples converge around the idea of breaking this up into 2-3 stages with some variations that I’ve learned.First step:Perform a $lookup that matches your ids to the related collection document ids:The next step is to use $addFields with $map to work through the set of schools. I’m referring to the same user.schools segment I will augment. This appears to work the same as .map in JavaScript albeit very declarative:I couldn’t get this working without the as parameter, but it does give me a useful $$ reference. Perhaps I could have used $$this? Please let me know if there are any ways around this or better approaches. I couldn’t reference anything other than examples that use $reduce instead of $map.So far, so good. I’ve got the structure I need and need to remove the temporary $related_schools array I created during the $lookup stage. I do this using $project and also take the opportunity to remove the now useless schoolId:All good, but I can’t help but wonder if I’m duplicating the workload by having to use $map. The ‘join’ has already been done by $lookup, which happens to be in another segment. I lose my roles segment if I use the same name. Perhaps there’s a way to store it in advance? I think that’s possibly what you alluded to with let.Anyway, this is a good start, and it’s given me a good, basic understanding of the aggregation stuff in Mongo.", "username": "James_N_A2" }, { "code": "", "text": "to test ideas fast on test data, you can use Mongo playground.\nthe “fast” is the emphasis here. I suggest this site whenever possible play with the examples for a while so to find out how to include your example data. then create some queries and get fast results.", "username": "Yilmaz_Durmaz" } ]
How to lookup and merge documents based on a nested array of ObjectIds
2023-02-22T14:44:40.932Z
How to lookup and merge documents based on a nested array of ObjectIds
1,560
null
[ "replication", "database-tools", "backup" ]
[ { "code": "Failed: error getting database names: not authorized on admin to execute command { listDatabases: 1, $readPreference: { mode: \"secondaryPreferred\" }, $db: \"admin\" }repls:SECONDARY> show dbs 2023-02-22T11:58:47.347+0000 E QUERY [thread1] Error: listDatabases failed:{ \"operationTime\" : Timestamp(1677067124, 1), \"ok\" : 0, \"errmsg\" : \"there are no users authenticated\", \"code\" : 13, \"codeName\" : \"Unauthorized\", \"$clusterTime\" : { \"clusterTime\" : Timestamp(1677067124, 1), \"signature\" : { \"hash\" : BinData(0,\"MluRZXuKCH3HUbfpxVyjr2itT8I=\"), \"keyId\" : NumberLong(\"7180455251781091329\") } } } : _getErrorWithCode@src/mongo/shell/utils.js:25:13 Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1 shellHelper.show@src/mongo/shell/utils.js:860:19 shellHelper@src/mongo/shell/utils.js:750:15 @(shellhelp2):1:1 repls:SECONDARY>", "text": "We have a 3 node replicaset, we would like to take backup using mongodump, however we are unable to take as we we are getting the below errorFailed: error getting database names: not authorized on admin to execute command { listDatabases: 1, $readPreference: { mode: \"secondaryPreferred\" }, $db: \"admin\" }even in the mongo shell, we are unable to execute any commande.grepls:SECONDARY> show dbs 2023-02-22T11:58:47.347+0000 E QUERY [thread1] Error: listDatabases failed:{ \"operationTime\" : Timestamp(1677067124, 1), \"ok\" : 0, \"errmsg\" : \"there are no users authenticated\", \"code\" : 13, \"codeName\" : \"Unauthorized\", \"$clusterTime\" : { \"clusterTime\" : Timestamp(1677067124, 1), \"signature\" : { \"hash\" : BinData(0,\"MluRZXuKCH3HUbfpxVyjr2itT8I=\"), \"keyId\" : NumberLong(\"7180455251781091329\") } } } : _getErrorWithCode@src/mongo/shell/utils.js:25:13 Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1 shellHelper.show@src/mongo/shell/utils.js:860:19 shellHelper@src/mongo/shell/utils.js:750:15 @(shellhelp2):1:1 repls:SECONDARY>Please help to resolve this", "username": "Sivaprakash_Gopal" }, { "code": "", "text": "As which user you are running mongodump?\nDoes that user have proper privileges?\nFor shell commands connect to your primary\nYou are connected to secondary", "username": "Ramachandra_Tummala" }, { "code": "", "text": "For shell commands connect to your primaryI am running from root user\nroot user should have all the privilege’s correct .?\nI have tried from primary as well , same output as mentioned before", "username": "Sivaprakash_Gopal" }, { "code": "", "text": "being named “admin” or “root” does not immediately mean you have access to everything. it depends on how that user was added, namely the “role” and “db” fields.also possible you are not giving the correct parameters can you give us the exact command you are using (without the password, of course)?", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Can you please show us the mongodump command you are running? Did you add the -u to the command to authenticate into the database?When you say you are “root” do you mean on the server/vm or did you authenticate into the mongodb shell? Because just because you are root on the server doesn’t mean you have authorization to run commands on MongoDB until you authenticate.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Thanks for the response, the replica set was configured by a developer who is not with us currently, we dont know how this was setup what user(s) were added ; we only have the root user account for the ubuntu server. 
however there are prod databases running ;\nSince we dont have any user account /password which were added\nis there a way that we can create new user account /password for the existing databases running on this replica set to manage and take the backup using mongodump …?", "username": "Sivaprakash_Gopal" }, { "code": "", "text": "Share your mongodump command and how you login/connect to your cluster.\nYou can hide sensitive details like password/cluster name,address etc", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks for the response, i am not giving any parameters ; i am logging in as root in one of the node and executing mongodumproot@:~# mongodump\n2023-02-23T08:55:00.684+0000 Failed: error getting database names: not authorized on admin to execute command { listDatabases: 1, $readPreference: { mode: “secondaryPreferred” }, $db: “admin” }\nroot@:~#", "username": "Sivaprakash_Gopal" }, { "code": "mongoshuse admin\ndb.getUsers()\nunauthorizedmongomongosh", "text": "log in to the host machine and, in the terminal, simply run mongosh without parameters, then the following in it.if you get an unauthorized error, that means you need to supply credentials. if you had no fight with that developer (and I believe not, as your database still works), just ask him/her about the administrator username/password.PS: try mongo if mongosh is not found", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thank you\nbelow is the output i get*****:PRIMARY> use admin\nswitched to db admin\napollo:PRIMARY> db.getUsers()\n2023-02-23T09:24:09.245+0000 E QUERY [thread1] Error: there are no users auth enticated :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.getUsers@src/mongo/shell/db.js:1686:1\n@(shell):1:1\n*****:PRIMARY>", "username": "Sivaprakash_Gopal" }, { "code": "mongodump -u someusermongodumpmongoexport", "text": "Error: there are no users auth enticatedthis one has the same cause: you need a user authentication to use with mongodump -u someuser where “someuser” has a suitable role to access all databases such as “root”.with that said, it is unfortunate that there is no easy way (if any exists) to remove this restriction for a replica set. You will agree that this security is not just for access from within the service, but also from the hackers’ eyes that might infiltrate your host machine.Unless we got more suggested ways from community members that has experienced a similar situation and found a solution, these two are the immediate ones for the moment:PS: mongodump dumps everything on the service/replicaset member, so needs an admin-level access. however, if you need only backup a single database, and have a compatible user/role for that database, you can use mongoexport instead.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thanks for the suggestions\nCalling the developer is not an option for us currently unfortunately.\nCan we take the backup ( copying the DB path manually or with a script) when the databases are online …?", "username": "Sivaprakash_Gopal" }, { "code": "sed20,22", "text": "Calling the developer is not an option for us currently unfortunately.is it “currently” or “never”?currently, you can stop the mongodb service in each host, only then backup that folder. but to be able to use it again, you need to be on the same host/folder, or also copy other things like config file, security key file, etc. and then construct a similar host system.the other possibility sounds creepy. it is not enough to simply backup folders. 
you need to work on all replica set members to remove authentication, then create new admin users etc. I have a link you may try to follow, but you can easily see it is not something simple. read and apply very carefully. you may want to get a backup of the data folder first. Linux, DevOps, Middleware and Cloud: How to reset mongodb rootadmin password ?? (linuxhelp4u.blogspot.com)EDIT: the security section of the config file in that link is n line 20-21-22, and the author uses them in sed command as 20,22. your configuration will be different, so carefull with such details.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thank you\nCalling the developer is never an option Also i should have mentioned this earlier, all three replica set members are on azure cloud and we have snapshot being taken , can that be used instead of backing up the DB folders manually …?", "username": "Sivaprakash_Gopal" }, { "code": "", "text": "I haven’t worked that much on azure, or other clouds, myself, so I can’t conclude how this snapshot thing works. but be careful about its type. there is a concept of “incremental snapshot” that just saves the difference between the last save and now, and it differs from a single-full backup. check what type is yours.also, the running instances and data volumes might reside in different places. so you also need to be careful with them too. the service config file might also be in another place. you need to check the cloud settings and make a mapping for these.but overall, you will need to back up the data and config first, then work on the config file to fix the auth user problem.", "username": "Yilmaz_Durmaz" } ]
Mongodump is not working , getting "not authorized"
2023-02-22T12:01:29.532Z
Mongodump is not working , getting “not authorized”
2,435
null
[ "node-js", "change-streams", "field-encryption" ]
[ { "code": "mongodbbigintbigintBSON.LongBSON.LongbigintuseBigInt64import { MongoClient } from 'mongodb';\n\n(async () => {\n const client = new MongoClient('<YOUR CONNECTION STRING>');\n const db = client.db('test');\n const coll = db.collection('bigints');\n\n await coll.insertOne({ a: 10n }); // The driver automatically serializes bigints to BSON.Long before being sent to the server\n\n const docBigInt = await coll.findOne({ a: 10n }, { useBigInt64: true }); // Must provide the useBigInt64 flag to specify that bigints get returned\n console.log(docBigInt);\n // { _id: ObjectId(...), a: 10n }\n const doc = await coll.findOne({ a: 10n }); // Must provide the useBigInt64 flag to specify that bigints get returned\n console.log(doc);\n // { _id: ObjectId(...), a: 10 }\n await client.close();\n})()\n", "text": "The MongoDB Node.js team is pleased to announce version 5.1.0 of the mongodb package!The driver now supports automatic serialization of JavaScript bigints to BSON.Longs. It also supports deserializing of BSON.Long values returned from the server to bigint values when the useBigInt64 flag is passed as true.We invite you to try the mongodb library immediately, and report any issues to the NODE project.", "username": "Warren_James" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB NodeJS Driver 5.1.0 Released
2023-02-23T19:24:10.506Z
MongoDB NodeJS Driver 5.1.0 Released
1,493
null
[ "node-js" ]
[ { "code": "", "text": "Hello everyone,we have a nodejs server which is connected to a database of mongoDB.\nThere is a collection A which contains ca. 400k documents. Each document has a data field x, which is a string about 7MB.\nFor some reasons, we’d like to move the big data field x from collection A to a new collection B.\nThe nodejs server should be adjusted so that the collection B can be used for read and write of the big data, not the collection A anymore(but the collection A is still be used for some other purposes).\nHow can we arrange this, so that there will be no down time of the nodejs server and will be no loss of data during the transfer of the big data from A to B?Thanks!", "username": "Mian" }, { "code": "", "text": "From one time point on, you double write the big string to both collections (old and new). So that the data set for migration is now fixed. (only existing ones need to be moved).Then you can run a background process to move all strings from old col to new col (provided they don’t exist in new col yet).Once this is done, you can switch reading from old to new.You will need a sorting (by create time etc), so that the migration process can eventually terminate.", "username": "Kobe_W" }, { "code": "", "text": "Hello @Miankind off second @Kobe_W. You want to do a migrate on read. You can use the document versioning pattern to distinguish if the data was already migrated (assuming that you application mainly acts on collection A).Another approach could be:\nRead collection A. In case of a successful read: write a copy to collection B and remove the big data field in collection A. In case of a “not found” proceed to collection B, the document is already migrated.\nThis variant is only meant for a short migration phase. After deploying the change you should support the migration via a scripted solution, which you can run in an off peak time. Please keep in mind that the “not found” scenario will increase with the progress of the migration, and with this no needed read operations. So it is recommended to update your application asap after the migration to only act on collection B for the migrated field.\nI also recommend to use transactions to make sure to have a proper rollback in case of an error.Regards,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Thanks @Kobe_W and @michael_hoeller for your time and reply!", "username": "Mian" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to move big data to a new collection without down time of servers?
2023-02-22T16:22:53.371Z
How to move big data to a new collection without down time of servers?
1,053
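The scripted part of the migration described above could look like this with the Node.js driver: an idempotent background pass that copies the big field x from collection A into B and only then unsets it in A. The names and the lack of batching are assumptions; as Kobe_W and Michael note, the application should already be double-writing to (or migrating-on-read from) B before this runs.

```javascript
const cursor = db.collection("A")
  .find({ x: { $exists: true } }, { projection: { x: 1 } })
  .sort({ _id: 1 }); // stable order so the pass can terminate

for await (const doc of cursor) {
  // Copy first; the upsert makes re-runs of the script safe.
  await db.collection("B").updateOne(
    { _id: doc._id },
    { $set: { x: doc.x } },
    { upsert: true }
  );
  // Remove the field from A only after the copy has succeeded.
  await db.collection("A").updateOne({ _id: doc._id }, { $unset: { x: "" } });
}
```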
null
[ "python" ]
[ { "code": "", "text": "while storing the usage data , I am getting the below issue\nError while storing data in MongoDB: documents must be a non-empty list", "username": "Gopi_bagadi" }, { "code": ">>> coll.insert_many([])\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/Users/shane/git/mongo-python-driver/pymongo/_csot.py\", line 105, in csot_wrapper\n return func(self, *args, **kwargs)\n File \"/Users/shane/git/mongo-python-driver/pymongo/collection.py\", line 697, in insert_many\n raise TypeError(\"documents must be a non-empty list\")\nTypeError: documents must be a non-empty list\n", "text": "This error means your application is mistakenly calling collection.insert_many() with an empty list:", "username": "Shane" } ]
Hi , I am trying to get the oci cloud usage data and storing mongodb time getting one issue
2023-02-23T06:09:58.299Z
Hi , I am trying to get the oci cloud usage data and storing mongodb time getting one issue
644
null
[ "aggregation", "queries" ]
[ { "code": "{\n id: 123\n contactsOfAppUsers:[\n {\n id:'abc',\n contactsArray: ['9999911111','9999922222']\n },\n {\n id:'efg',\n contactsArray: ['9999933333','9999944444']\n },\n {\n id:'hij',\n contactsArray: ['9999955555','9999933333']\n }\n ]\n}\n 'matchingObjects':[ \n {\n id:'efg',\n phoneNumbers: ['9999933333','9999944444']\n },\n {\n id:'hij',\n phoneNumbers: ['9999933333','9999955555']\n }\n ]\ndb.phNumbers.aggregate([\n {// Previous stage},\n {\n $addFields: {\n 'matchingObjects': {\n '$map': {\n 'input': '$contactsOfAppUsers',\n 'as': 'cc',\n 'in': {\n '$in': [\n '9999933333','$cc.contactsArray'\n ]\n }\n }\n }\n }\n },\n])\n", "text": "In my MongoDB aggregation pipeline, I want to retrieve the matching objects for a number from the data (see below)My data is is like the below:When I search for this “9999933333” in the above data, I would like the result like this:I tried this, which gives boolean values but I actually want the matching objects (as shown above)", "username": "Pravishanth_M" }, { "code": "$filter$map", "text": "Hello @Pravishanth_M, Welcome to the MongoDB community forum,You can use $filter operator to filter the array elements by specifying conditions instead of $map,", "username": "turivishal" } ]
Retrieve an object in an array which contains a matching value
2023-02-23T13:39:08.440Z
Retrieve an object in an array which contains a matching value
499
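Following the $filter pointer above, the $map can be swapped for a $filter whose cond is the same $in test; a rough sketch (renaming contactsArray to phoneNumbers in the output would take an extra $map over the filtered result):

```javascript
db.phNumbers.aggregate([
  // { previous stage },
  {
    $addFields: {
      matchingObjects: {
        $filter: {
          input: "$contactsOfAppUsers",
          as: "cc",
          cond: { $in: ["9999933333", "$$cc.contactsArray"] } // keep only matching elements
        }
      }
    }
  }
]);
```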
null
[ "aggregation" ]
[ { "code": "{\n _id: ObjectId(\"...\"),\n Name: '...',\n 'Claimed For': null,\n Game: { rowId: '62808688723f058679961078', display: 'DTHG' },\n Region: { rowId: '6264884a4d3c698967514d79', display: 'Global' },\n Platform: { rowId: '6264886e4d3c698967514d7c', display: 'Steam (PC)' },\n 'Claimed TimeStamp': null,\n Category: null,\n Claimed: false,\n 'Claimed By': null,\n Notes: null\n }\ndb.coll.aggregate([{ \"$group\": { \"_id\": \"$Game.rowId\", \"Game\": { \"$first\": \"$Game\" }, \"COUNT\": { \"$count\": {} }}}])\"$group\": { \"_id\": \"$Game.rowId\", \"Game\": { \"$first\": \"$Game\" }}", "text": "Hi, I have a collection of about 45k documents on which I want to perform an aggregation. Each document looks like this:I am running this aggregation pipeline on it:\ndb.coll.aggregate([{ \"$group\": { \"_id\": \"$Game.rowId\", \"Game\": { \"$first\": \"$Game\" }, \"COUNT\": { \"$count\": {} }}}])\nI have an index on “Game.rowId”. However, I cannot seem to get Mongo to use the index when running the aggregation, unless I only do \"$group\": { \"_id\": \"$Game.rowId\", \"Game\": { \"$first\": \"$Game\" }} without the count, which is not very useful.I am getting Profiler warning emails from MongoDB basically every time I run this query, but I don’t know how to make it use the index. Any help would be appreciated.Thanks!\nBruno.", "username": "Bruno_Denuit-Wojcik" }, { "code": "", "text": "Try to $sort on Game.rowId first.", "username": "steevej" }, { "code": "", "text": "It worked! Thanks for your help.", "username": "Bruno_Denuit-Wojcik" } ]
Getting MongoDB to use an index in aggregation pipeline
2023-02-14T17:10:45.842Z
Getting MongoDB to use an index in aggregation pipeline
539
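For future readers, the fix that worked here is simply a $sort on the indexed field ahead of the $group, for example:

```javascript
db.coll.aggregate([
  { $sort: { "Game.rowId": 1 } }, // lines up with the { "Game.rowId": 1 } index
  {
    $group: {
      _id: "$Game.rowId",
      Game: { $first: "$Game" },
      COUNT: { $count: {} }
    }
  }
]);
```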
https://www.mongodb.com/…3feae5c74939.png
[ "kafka-connector" ]
[ { "code": " [BsonElement(\"_id\")]\n [JsonProperty(\"_id\")]\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public ObjectId _id { get; set; }\n [BsonElement(\"_id\")]\n [JsonProperty(\"_id\")]\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public ObjectId _id { get; set; }\nname=mongo-sink\ntopics.regex=\\\\w+$\n\nconnector.class=com.mongodb.kafka.connect.MongoSinkConnector\ntasks.max=1\n\nkey.ignore=true\nconnection.uri=mongodb://localhost:27017\ndatabase=aplicationDB\n\nmax.num.retries=3\nretries.defer.timeout=5000\ntype.name=kafka-connect\n\nkey.converter=org.apache.kafka.connect.json.JsonConverter\nkey.converter.schemas.enable=false\nvalue.converter=org.apache.kafka.connect.json.JsonConverter\nvalue.converter.schemas.enable=false\n", "text": "0I am new to Kafka connector. I use Kafka connector to send “events” to Kafka topics with the same name as the entity. After producing that, the events send to mongo by MongoDB Kafka Sink Connector for storing the state of the entity objects in the collection as the same name as the topic.\nWhen I do insert a new object, it works fine but When I send an update event with (_id), this message inserts in the topic and create a new document on MongoDB instead of updating the document(_id) of this document store as an object instead of objectId .for id of domain and updatedEvent I use ObjectId :and My Connector config:0I am new to Kafka connector. I use Kafka connector to send “events” to Kafka topics with the same name as the entity. After producing that, the events send to mongo by MongoDB Kafka Sink Connector for storing the state of the entity objects in the collection as the same name as the topic.\nWhen I do insert a new object, it works fine but When I send an update event with (_id), this message inserts in the topic and create a new document on MongoDB instead of updating the document(_id) of this document store as an object (figure 1) instead of objectId (figure 2).\nfigure 1 :\nfor id of domain and updatedEvent I use ObjectId :and My Connector config:", "username": "sa_N_A" }, { "code": "", "text": "Hello sa_N_A,If I understand correctly, you want to replace the document if the _id already exists. You may not (can’t tell from the details) insert a new document if the _id does not exist.I would recommend you look into the write strategies that are possible with the connector located here: https://www.mongodb.com/docs/kafka-connector/current/sink-connector/configuration-properties/write-strategies/", "username": "Joe_Niemiec" } ]
MongoDB Kafka Sink Connector does not update documents
2023-02-22T11:05:00.905Z
MongoDB Kafka Sink Connector does not update documents
1,440
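Building on the write-strategies pointer above, replacing an existing document by its _id is usually a matter of choosing a write-model strategy and an id strategy in the sink config. Below is a hedged sketch in the same properties format as the thread's config; the property names come from the write-strategies documentation and should be verified against the connector version in use.

```properties
writemodel.strategy=com.mongodb.kafka.connect.sink.writemodel.strategy.ReplaceOneBusinessKeyStrategy
document.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy
document.id.strategy.partial.value.projection.list=_id
document.id.strategy.partial.value.projection.type=AllowList
```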
null
[ "node-js", "connecting" ]
[ { "code": "MongoError: no connection available to server myclustername--shard-00-02.lp8yd.mongodb.net:27017.\nat disconnectHandler (/var/task/node_modules/mongodb/lib/core/topologies/server.js:271:14)\n at Server.command (/var/task/node_modules/mongodb/lib/core/topologies/server.js:625:7)\n at CountDocumentsOperation.executeCommand (/var/task/node_modules/mongodb/lib/operations/command_v2.js:95:12)\n at CountDocumentsOperation.execute (/var/task/node_modules/mongodb/lib/operations/aggregate.js:93:11)\n at CountDocumentsOperation.execute (/var/task/node_modules/mongodb/lib/operations/count_documents.js:22:11)\n at /var/task/node_modules/mongodb/lib/operations/execute_operation.js:144:17\n at _selectServer (/var/task/node_modules/mongodb/lib/core/topologies/replset.js:1160:5)\n at ReplSet.selectServer (/var/task/node_modules/mongodb/lib/core/topologies/replset.js:1163:3)\n at ReplSet.selectServer (/var/task/node_modules/mongodb/lib/topologies/topology_base.js:342:32)\n at executeWithServerSelection (/var/task/node_modules/mongodb/lib/operations/execute_operation.js:131:12)\n", "text": "Hi,I found very strange issue while reading the document from a collection .\nFrom MongoDB Atlas dashboard I can see number of connections are available , When I tried db.collection(collectionname).countDocuments API then it responds errorThe detailed log is provided below:This is first time I observed this issue and after this it did not reproduce .Did anyone face this kind of issue that lot of connections are available and still atlas sends “no connection available error” .I am using MongoDB Nodejs Driver and detailed logs are here.", "username": "Ajay_P" }, { "code": "", "text": "Hi Ajay,This is very odd indeed: out of curiosity did you create a support case? You were connected to the full replica set rather than an individual node in the cluster right?\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Hi Andrew,\nI am using Atlas mongodb and I believe by default atlas creates replica set.\nFor connecting the DB from my application I am using connect API with these below options.\nIt should connect with full replica set, or is it connecting with any one cluster node ?\nMongoClient.connect(“mongodb+srv://UserName:@myclustername.lp8yd.mongodb.net/myFirstDatabase?retryWrites=true&w=majority”, {useNewUrlParser: true, useUnifiedTopology: true},of course I changed usename , pwd and clustername with the real values.\nRegards\nAjay", "username": "Ajay_P" }, { "code": "", "text": "Hi Ajay,By default that modern SRV connection string will get you access to the full replica set: SRV allows for each node’s public DNS hostname to be discovered inside the SRV record itself automatically (this is supported by newer generation MongoDB drivers). This makes for a more concise connection string.Cheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Hi Andrew,\nThanks for your reply. 
It is confirmed that I was connected with the full replica set.\nNow the issue is when there are plenty of connection counts are available then why mongodb atlas replied \"no connection available to server \" ?\nThis is very strange , Please help to know why it misbehaved and how this can be handled.Regards\nAjay", "username": "Ajay_P" }, { "code": "", "text": "Hi Ajay, Please open a support case or use the lower-right chat in the UI: that team will be better positioned to help you with troubleshooting.Cheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Hi Andrew,\nFirst I contacted the support person by using lower-right chat in the UI.\nThe support person first says that “It is due to election between nodes”, then I replied him that while going election it does not impact the read operations.After this he says, yes it should not impact the read operations and finally he said \" I am afraid, we are not able to provide a root cause for this matter from within this chat. You may wish to check out our community support resources, where our other MongoDB users like yourself are frequently answering questions \"After reading his answer I was disappointed and I created this post here and hoping that I will get the right solution for this.Regards\nAjay", "username": "Ajay_P" }, { "code": "", "text": "Is it possible you’re using an earlier version of the MongoDB driver? later versions use retryable reads by default https://docs.mongodb.com/manual/core/retryable-reads/", "username": "Andrew_Davidson" }, { "code": "", "text": "Hi Andrew,\nThanks for your reply.\nThere was only one single read request sent from my application , so I am not sure any use case of multiple reads here.\nMay be Mongo atlas have any mechanism for multiple reads for single read request from the replica set or any network issue at mongo end due to that it required retry able reads .To eliminate any possibilities I checked the driver version using with the application and it is Node.js 3.1.13 driver that is compatible with MongoDB 4.0 .\nShould is update my driver, if yes then 3.6.5 will be a good choice ?The API interfaces are common between 3.1.13 and 3.6.5 or is there any interface change those need to be updated at the application level ?Regards\nAjay", "username": "Ajay_P" }, { "code": "var run = async function() {\n var conn = await MongoClient.connect('mongodb+srv://user:[email protected]/test?retryWrites=true', { useNewUrlParser: true, useUnifiedTopology: true })\n console.log(await conn.db('test').collection('test').countDocuments({}));\n}().catch(function(err) {\n console.log(err)\n})\n", "text": "Hi @Ajay_P,Wondering whether you have had any success in connecting to your Atlas cluster ?Should is update my driver, if yes then 3.6.5 will be a good choice ?It would be ideal if you could update your MongoDB Node.JS driver to the current stable version (v3.6.6). 
Below is a simple example snippet:If you are still encountering the same issue, could you please provide a minimal reproducible code example.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hi Won,As I described earlier, this issue is not preproducing, but i have logs that proves that there is an issue and it may occur in future as well.\nTo eliminate all the possibilities I have updated to the the app to Node.js driver version v3.6.6.\nI am hoping that this issue will not occur with v3.6.6 driver.\nThe string i am using for this connection is\nMongoClient.connect(‘mongodb+srv://user:[email protected]/test?retryWrites=true&w=majority’, { useNewUrlParser: true, useUnifiedTopology: true })The only difference between your suggestion and my implantation is w=majority .Regards\nAjay", "username": "Ajay_P" }, { "code": "name: 'MongoNetworkError',\n0|myapp-p | errorLabels: [ 'TransientTransactionError' ],\n0|myapp-p | status: 400,\n0|myapp-p | [Symbol(mongoErrorContextSymbol)]: { isGetMore: true }\n0|myapp-p | }\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/app.js:76:17\n0|myapp-p | at Layer.handle_error (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/layer.js:71:5)\n0|myapp-p | at trim_prefix (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:315:13)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:284:7\n0|myapp-p | at Function.process_params (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:335:12)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:275:10)\n0|myapp-p | at Layer.handle_error (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/layer.js:67:12)\n0|myapp-p | at trim_prefix (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:315:13)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:284:7\n0|myapp-p | at Function.process_params (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:335:12)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:275:10)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:635:15\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:260:14)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/route.js:127:14)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/app/contollers/subOrder.controller.js:357:20\n0|myapp-p | at bound (domain.js:419:14)\n0|myapp-p | at runBound (domain.js:432:12)\n0|myapp-p | at tryCatcher (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/util.js:16:23)\n0|myapp-p | at Promise._settlePromiseFromHandler (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/promise.js:517:31)\n0|myapp-p | at Promise._settlePromise (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/promise.js:574:18)\n0|myapp-p | at Promise._settlePromise0 (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/promise.js:619:10)\n0|myapp-p | at Promise._settlePromises (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/promise.js:695:18)\n0|myapp-p | at _drainQueueStep (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/async.js:138:12)\n0|myapp-p | at _drainQueue (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/async.js:131:9)\n0|myapp-p | at Async._drainQueues 
(/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/async.js:147:5)\n0|myapp-p | at Immediate.Async.drainQueues [as _onImmediate] (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/async.js:17:14)\n0|myapp-p | at processImmediate (internal/timers.js:439:21)\n0|myapp-p | at process.topLevelDomainCallback (domain.js:130:23) \n0|myapp-p | info: GET /api/admin/sub-orders?status=OPEN,WAITING,CANCELED,ACCEPTED,COOKED,SERVED,COOKING 400 229900ms \n0|myapp-p | error: Trace: MongoNetworkError: connection 10 to cluster0-shard-00-01-anxgh.mongodb.net:27017 closed\n0|myapp-p | at TLSSocket.<anonymous> (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/connection/connection.js:352:9)\n0|myapp-p | at Object.onceWrapper (events.js:313:26)\n0|myapp-p | at TLSSocket.emit (events.js:223:5)\n0|myapp-p | at TLSSocket.EventEmitter.emit (domain.js:475:20)\n0|myapp-p | at net.js:664:12\n0|myapp-p | at TCP.done (_tls_wrap.js:481:7) {\n0|myapp-p | name: 'MongoNetworkError',\n0|myapp-p | errorLabels: [ 'TransientTransactionError' ],\n0|myapp-p | status: 400,\n0|myapp-p | [Symbol(mongoErrorContextSymbol)]: { isGetMore: true }\n0|myapp-p | }\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/app.js:76:17\n0|myapp-p | at Layer.handle_error (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/layer.js:71:5)\n0|myapp-p | at trim_prefix (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:315:13)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:284:7\n0|myapp-p | at Function.process_params (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:335:12)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:275:10)\n0|myapp-p | at Layer.handle_error (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/layer.js:67:12)\n0|myapp-p | at trim_prefix (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:315:13)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:284:7\n0|myapp-p | at Function.process_params (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:335:12)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:275:10)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:635:15\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:260:14)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/route.js:127:14)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/app/contollers/subOrder.controller.js:357:20\n0|myapp-p | at bound (domain.js:419:14)\n0|myapp-p | at runBound (domain.js:432:12)\n0|myapp-p | at tryCatcher (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/util.js:16:23)\n0|myapp-p | at Promise._settlePromiseFromHandler (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/promise.js:517:31)\n0|myapp-p | at Promise._settlePromise (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/promise.js:574:18)\n0|myapp-p | at Promise._settlePromise0 (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/promise.js:619:10)\n0|myapp-p | at Promise._settlePromises (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/promise.js:695:18)\n0|myapp-p | at _drainQueueStep 
(/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/async.js:138:12)\n0|myapp-p | at _drainQueue (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/async.js:131:9)\n0|myapp-p | at Async._drainQueues (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/async.js:147:5)\n0|myapp-p | at Immediate.Async.drainQueues [as _onImmediate] (/home/ubuntu/apps/myapp-servers/node_modules/bluebird/js/release/async.js:17:14)\n0|myapp-p | at processImmediate (internal/timers.js:439:21)\n0|myapp-p | at process.topLevelDomainCallback (domain.js:130:23) \n0|myapp-p | info: GET /api/admin/sub-orders?status=OPEN,WAITING,CANCELED,ACCEPTED,COOKED,SERVED,COOKING 400 221108ms \n0|myapp-p | error: Trace: MongoNetworkError: connection 10 to cluster0-shard-00-01-anxgh.mongodb.net:27017 closed\n0|myapp-p | at TLSSocket.<anonymous> (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/connection/connection.js:352:9)\n0|myapp-p | at Object.onceWrapper (events.js:313:26)\n0|myapp-p | at TLSSocket.emit (events.js:223:5)\n0|myapp-p | at TLSSocket.EventEmitter.emit (domain.js:475:20)\n0|myapp-p | at net.js:664:12\n0|myapp-p | at TCP.done (_tls_wrap.js:481:7) {\n0|myapp-p | name: 'MongoNetworkError',\n0|myapp-p | errorLabels: [ 'TransientTransactionError' ],\n0|myapp-p | status: 400,\n0|myapp-p | [Symbol(mongoErrorContextSymbol)]: { isGetMore: true }\n0|myapp-p | }\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/app.js:76:17\n0|myapp-p | at Layer.handle_error (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/layer.js:71:5)\n0|myapp-p | at trim_prefix (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:315:13)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:284:7\n0|myapp-p | at Function.process_params (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:335:12)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:275:10)\n0|myapp-p | at Layer.handle_error (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/layer.js:67:12)\n0|myapp-p | at trim_prefix (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:315:13)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:284:7\n0|myapp-p | at Function.process_params (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:335:12)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:275:10)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:635:15\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:260:14)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/route.js:127:14)\n0|myapp-p | at exports.createByAdmin (/home/ubuntu/apps/myapp-servers/app/contollers/subOrder.controller.js:121:9)\n0|myapp-p | at runMicrotasks (<anonymous>)\n0|myapp-p | at processTicksAndRejections (internal/process/task_queues.js:94:5) \n0|myapp-p | info: POST /api/admin/sub-order-admin 400 269099ms \n0|myapp-p | error: Trace: MongoError: no connection available to server cluster0-shard-00-01-anxgh.mongodb.net:27017\n0|myapp-p | at disconnectHandler (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/topologies/server.js:264:14)\n0|myapp-p | at Server.insert 
(/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/topologies/server.js:653:7)\n0|myapp-p | at executeWriteOperation (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/topologies/replset.js:1183:37)\n0|myapp-p | at Object.handler [as cb] (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/topologies/replset.js:1162:14)\n0|myapp-p | at connectionFailureHandler (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:240:33)\n0|myapp-p | at Connection.Pool._connectionCloseHandler (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:152:5)\n0|myapp-p | at Connection.emit (events.js:223:5)\n0|myapp-p | at Connection.EventEmitter.emit (domain.js:475:20)\n0|myapp-p | at TLSSocket.<anonymous> (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/connection/connection.js:350:12)\n0|myapp-p | at Object.onceWrapper (events.js:313:26)\n0|myapp-p | at TLSSocket.emit (events.js:223:5)\n0|myapp-p | at TLSSocket.EventEmitter.emit (domain.js:475:20)\n0|myapp-p | at net.js:664:12\n0|myapp-p | at TCP.done (_tls_wrap.js:481:7) {\n0|myapp-p | name: 'MongoError',\n0|myapp-p | status: 400,\n0|myapp-p | [Symbol(mongoErrorContextSymbol)]: {}\n0|myapp-p | }\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/app.js:76:17\n0|myapp-p | at Layer.handle_error (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/layer.js:71:5)\n0|myapp-p | at trim_prefix (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:315:13)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:284:7\n0|myapp-p | at Function.process_params (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:335:12)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:275:10)\n0|myapp-p | at Layer.handle_error (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/layer.js:67:12)\n0|myapp-p | at trim_prefix (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:315:13)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:284:7\n0|myapp-p | at Function.process_params (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:335:12)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:275:10)\n0|myapp-p | at /home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:635:15\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/index.js:260:14)\n0|myapp-p | at next (/home/ubuntu/apps/myapp-servers/node_modules/express/lib/router/route.js:127:14)\n0|myapp-p | at exports.customerMe (/home/ubuntu/apps/myapp-servers/app/contollers/user.controller.js:601:9)\n0|myapp-p | at runMicrotasks (<anonymous>)\n0|myapp-p | at processTicksAndRejections (internal/process/task_queues.js:94:5) \n0|myapp-p | info: PUT /api/customer/me 400 228086ms \n0|myapp-p | error: Trace: MongoError: no connection available to server cluster0-shard-00-01-anxgh.mongodb.net:27017\n0|myapp-p | at disconnectHandler (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/topologies/server.js:264:14)\n0|myapp-p | at Server.insert (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/topologies/server.js:653:7)\n0|myapp-p | at 
executeWriteOperation (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/topologies/replset.js:1183:37)\n0|myapp-p | at Object.handler [as cb] (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/topologies/replset.js:1162:14)\n0|myapp-p | at connectionFailureHandler (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:240:33)\n0|myapp-p | at Connection.Pool._connectionCloseHandler (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/connection/pool.js:152:5)\n0|myapp-p | at Connection.emit (events.js:223:5)\n0|myapp-p | at Connection.EventEmitter.emit (domain.js:475:20)\n0|myapp-p | at TLSSocket.<anonymous> (/home/ubuntu/apps/myapp-servers/node_modules/mongoose/node_modules/mongodb-core/lib/connection/connection.js:350:12)\n0|myapp-p | at Object.onceWrapper (events.js:313:26)\n0|myapp-p | at TLSSocket.emit (events.js:223:5)\n0|myapp-p | at TLSSocket.EventEmitter.emit (domain.js:475:20)\n0|myapp-p | at net.js:664:12\n0|myapp-p | at TCP.done (_tls_wrap.js:481:7) {\n0|myapp-p | name: 'MongoError',\n0|myapp-p | status: 400,\n0|myapp-p | [Symbol(mongoErrorContextSymbol)]: {}\n0|myapp-p | }\n", "text": "Hi @Ajay_P , @Andrew_Davidson . We have ran into the same issue. From past 3 days, our production servers are automatically loosing connection to the MongoDB server at a fixed time (around 8pm, IST). For 5 mins, there are only GET calls happening but no PUT or POST. Even the clusters show they are online (in Atlas).It’s becoming scary for us, as the clients are panicking. I\"m attaching my logs as well for some to look at.To give a context, the MongoDB instance is been running since 9+months now and no DB related configurations have been changed. This is happening out of now where.", "username": "Menula_App" } ]
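The upgrade-and-retry advice that closes out the thread above can be made concrete with a small sketch. This is an illustration only — the URI, database and collection names are placeholders, and the manual retry loop is an assumption about one way to ride out a brief election, not code from the thread.

```javascript
// Minimal sketch: current Node.js driver with retryable reads/writes enabled.
// URI, names and timeouts are placeholders.
const { MongoClient } = require('mongodb');

const client = new MongoClient(process.env.MONGODB_URI, {
  retryWrites: true,               // retry a write once after a transient error
  retryReads: true,                // retry a read once (default in modern drivers)
  serverSelectionTimeoutMS: 10000, // fail fast instead of hanging when no server is selectable
});

// Optional belt-and-braces wrapper for operations that must survive an election.
async function countWithRetry(retries = 3) {
  const coll = client.db('mydb').collection('mycoll');
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await coll.countDocuments({});
    } catch (err) {
      if (attempt === retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, 500 * attempt)); // simple backoff
    }
  }
}

async function main() {
  await client.connect();
  console.log('count:', await countWithRetry());
  await client.close();
}

main().catch(console.error);
```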
MongoError: no connection available to server
2021-04-21T11:02:51.674Z
MongoError: no connection available to server
7,505
https://www.mongodb.com/…5_2_1023x487.png
[ "python", "containers" ]
[ { "code": "", "text": "I have been trying this for the last 2 days and can’t seem to know what am I doing wrong. It would be great if anyone could help.\nHere’s the problem:\nimage1381×657 31.9 KB\nThis error is coming up when I’m trying to connect to MongoDB atlas from my docker container.\nBut when run it on my local machine it works perfectly and I am able to perform operations on the database…\nThank you…", "username": "Shikhar_Upadhyay" }, { "code": "", "text": "Bad authentication means wrong user id/password combination\nCheck the credentials again", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I did check that and created a new database user too but it still doesn’t seem to work… But I will check that again.", "username": "Shikhar_Upadhyay" } ]
Operation error when trying to connect with atlas cluster from docker container!
2023-02-22T08:35:51.343Z
Operation error when trying to connect with atlas cluster from docker container!
999
null
[ "queries" ]
[ { "code": "", "text": "I need to get the exact number of documents in a collection containing about15 million documents. But on using count( ) method, the output is either timing out or is buffering for a very long time.How can I get this count?", "username": "Sreekanth_R_Shekar" }, { "code": "db.find().projection( { _id : 1 } ).count()\n", "text": "What is the total size in Mb is your collection?What is your system configuration, RAM, disks, …?You may try:", "username": "steevej" }, { "code": "db.CollectionName.stats().count", "text": "An alternative way to find the number of documents, if you do not have any query is using:db.CollectionName.stats().count", "username": "steevej" }, { "code": "", "text": "HiI’m having trouble counting all 25 million documents.\nIt takes about 12.50 seconds.\nUse about 54% CPU.Can you make it better or use minimal CPU and Ram?", "username": "klingofmonsterdev" }, { "code": "db.collectionName.stats().count", "text": "Hello @klingofmonsterdev, and welcome to the MongoDB Community forums! Are you trying to get a count of all documents in your collection? If so, did you try Steeve’s suggestion of running db.collectionName.stats().count? This should use metadata information on the collection without running any query against it.", "username": "Doug_Duncan" }, { "code": "db.collectionName.stats().countcollection.stats()estimatedDocumentCount()", "text": "Welcome to the MongoDB community @klingofmonsterdev !If so, did you try Steeve’s suggestion of running db.collectionName.stats().count ? This should use metadata information on the collection without running any query against it.FYI, the collection.stats() metadata count is the same as the estimatedDocumentCount() in MongoDB driver and shell APIs. You can use this if speed is more important than accuracy for your use case.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for the good advice.\nNow I use db.collection.estimatedDocumentCount().\nBut if I want to find query What’s the best way?", "username": "klingofmonsterdev" }, { "code": "", "text": "hello plz i have the same problem but the solution of estimatedDocumentCount doesnt work for me cause i have a to count with query\nany solutions plz ?", "username": "test_te" }, { "code": "", "text": "I still find a better way. estimatedDocumentCount() not yet, but to learn, must $match and $project to keep the field to a minimum. before to do anything", "username": "klingofmonsterdev" } ]
Getting count of documents in large collection
2022-01-19T17:55:46.551Z
Getting count of documents in large collection
7,615
null
[ "next-js" ]
[ { "code": "import clientPromise from \"../../lib/mongo/mongo\";\n\nexport default async function handler(req, res) {\n const client = await clientPromise;\n const db = client.db();\n\n switch (req.method) {\n case \"GET\":\n const dbNamesArray = await db.listCollections().toArray();\n\n const dbNames = await dbNamesArray.map((el) => {\n return el.name;\n });\n\n const dbAdmin = await db.admin().listDatabases({ nameOnly: true });\n const databases = await dbAdmin.databases;\n\n\n res.json({\n names: [{ database: db.databaseName, collection: dbNames }],\n });\n break;\n }\n}\n", "text": "Hi everyone, I have nexjs project where I use mongoClient to connect MongoDB, I want to take databases names and their collections names together as an object, I mean something like this, for example:\n[\n{database: “test”, collections: [“users”, “posts”, “jobs”]}\n{database: “test1”, collections: [“users1”, “posts1”, “jobs1”]}\n]I get databases names and collections names but separately, and I can’t understand how can I push the database with the appropriate collections in an array.Thanks for your attention!this code is here, may you will understand better what I want to do", "username": "David_Takidze" }, { "code": "const { MongoClient } = require('mongodb');\n\nconst uri = 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/<database>?retryWrites=true&w=majority';\nconst client = new MongoClient(uri);\n\nasync function main() {\n try {\n await client.connect();\n\n const databases = await client.db().admin().listDatabases();\n const result = [];\n\n for (const database of databases.databases) {\n const collections = await client.db(database.name).listCollections().toArray();\n const collectionNames = collections.map(collection => collection.name);\n result.push({ database: database.name, collections: collectionNames });\n }\n\n console.log(result);\n\n } finally {\n await client.close();\n }\n}\n\nmain().catch(console.error);\n\n[\n {\n database: 'sample_analytics',\n collections: [ 'transactions', 'customers', 'accounts' ]\n },\n { database: 'sample_geospatial', collections: [ 'shipwrecks' ] },\n { database: 'sample_guides', collections: [ 'planets' ] },\n {\n database: 'sample_mflix',\n collections: [ 'comments', 'users', 'sessions', 'theaters', 'movies' ]\n }\n]\n", "text": "Hello @David_Takidze ,Welcome to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet, were you able to solution?\nIf not, below is an example code giving out database and collections names similar to your requirements using javascript, you can integrate it with your code and test.Sample code:Output:Note: Please thoroughly test and verify to make sure it suits your use-case/requirements.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hello, first of all, sorry for the late reply, I saw your answer yesterday but I could not verify and use the code, thank’s for your answer, this solution works for me ", "username": "David_Takidze" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I want to get database and collections names together as Object
2023-02-02T04:52:04.959Z
I want to get database and collections names together as Object
1,424
null
[ "aggregation" ]
[ { "code": "{\n $addFields:\n {\n dolNow: '$$NOW',\n isoNow: ISODate()\n }\n}\ndolNow : 2023-02-22T17:56:34.358+00:00\nisoNow : 2023-02-22T17:56:34.318+00:00\n{\n $match:\n {\n endDate: {$gt: '$$NOW'}\n }\n}\n{\n $match:\n {\n endDate: {$gt: ISODate()}\n }\n}\n", "text": "Doing aggregations on MongoDB Atlas 6.0.4Why doesn’t $$NOW work in greater than, less than etc comparion matches when ISODate() does?If I have a workflow stage like this:I get the expected values added to pipeline documents (note the actual milliseconds differ slightly):But if I use $$NOW and ISODate() in comparisons, only ISODate() results in a valid comparison occuring.vsAgainst exactly the same documents, only the $match using ISODate() returns any documents where the endDate is in the future. I thought that we could use $$NOW here as it also returns the current time as an ISODate?", "username": "Ben_Giddins" }, { "code": "$addFields{ $addFields: { <newField>: <expression>, ... } }\n$match{ $match: { <query> } }\n<field>:<value>{ <field1>: <value1>, ... }\nNOW$expr$match", "text": "Very good question.The devil is in the details and most of the times the details are documented.In $addFields we have$addFields has the following form:However, in $match we haveThe $match stage has the following prototype form:For query we haveTo specify equality conditions, use <field>:<value> expressions in the query filter document:As an expression $$NOW gives youA variable that returns the current datetime value. NOW returns the same value for all members of the deployment and remains the same throughout all stages of the aggregation pipeline.As a value $$NOW gives you the string “$$NOW”. Now WOWLuckily,Instead, use a $expr query expression to include aggregation expression in $match.And $now $why ISODate works? It does because ISODate is a function call on the client, so it is always sent as a value to the server so it works as well in isoNow:<value> and in isoNow:<expression> since a value is a valid expression.", "username": "steevej" }, { "code": "db.deals.aggregate(\n {\n $match:\n {\n dt: {$gt: ISODate()}\n }\n}\n ).explain(\"executionStats\")\n parsedQuery: {\n dt: {\n '$gt': 2023-02-23T03:39:50.017Z\n }\n }\ndb.deals.aggregate(\n {\n $match:\n {\n dt: {$gt: '$$NOW'}\n }\n}\n ).explain(\"executionStats\")\n parsedQuery: {\n dt: {\n '$gt': '$$NOW'\n }\n }\n", "text": "Hello @Ben_Giddins ,@steevej’s answer is very well explained and I just want to add a point to same.When I ran your first query with .explain(“executionStats”)The query got parsed like this in the explain and you can see it took ISODate() and converted it to current date valueand returned a result.On the other hand, when I ran the same with $$NOWyou can see the query parser did not convert the value of $$NOW,the reason was explained well in the earlier post by @steevejHence, no result is returned now.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$$NOW vs ISODate() in aggregations
2023-02-22T18:11:36.501Z
$$NOW vs ISODate() in aggregations
1,150
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "hello\nI need a query that finds from collection- that I dont have the collection’s name\nI know the collection’s suffix name\nthe collection name is something like 123456Aggregation\nbut the number is random\nso I need to read from the collection with name \" .*Aggregation\"\ni found that I could use something like this\nFROM sys.tables agg WHERE agg.name like ‘%Aggregation’\n.tables are all collections and agg.name is the collection’s name\nbut it doesn’t work\nthank you", "username": "kineret_ao" }, { "code": "db.getCollectionNames().filter(name => name.endsWith(\"Aggregation\"))mongoshcollNamedb[collName].find(...)db[collName].aggregate(...)mongosh", "text": "Hello @kineret_ao ,Welcome to The MongoDB Community Forums! Please correct me if my understanding of your use-case is not correct. You are trying to query a collection but the exact collection name is not known instead only the last part of collection name is known, is my understanding correct?If not, then please share more details regarding your use case such as:If my understand is correct then you can try some thing as follows:You can divide your question in two parts:How to get collection name with collection name’s ending/suffix?\ndb.getCollectionNames().filter(name => name.endsWith(\"Aggregation\"))\nAbove Javascript operation in the mongosh shell will give you all the collection names ending with “Aggregation”. Now you go through the results and can query the database accordingly.Using the output of the previous step, you can execute some operations on the database. For example if you put the name of the desired collection into a variable collName , you may be able to do something like db[collName].find(...) or db[collName].aggregate(...) in the mongosh shell. To learn more about how to connect and query from MongoDB Database, I would recommend you to go throughDiscover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Note: Please test the query as per your requirements and make changes accordingly.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Select from collection that I know only suffix collection's name
2023-02-22T09:22:46.358Z
Select from collection that I know only suffix collection's name
871
null
[ "aggregation", "node-js", "mongoose-odm", "atlas-search" ]
[ { "code": "linkstoAsset: {\n type: mongoose.Types.ObjectId,\n ref: 'Assets',\n index: 'links'\n}\nfromAsset: {\n type: mongoose.Types.ObjectId,\n ref: 'Assets',\n index: 'links'\n},\ncomment: {\n type: String,\n index: 'links'\n}\ntoAssetfromAssetassetstitle: {\n type: String,\n required: true,\n index: 'assets'\n},\nnote: {\n type: String,\n required: true,\n index: 'assets'\n}\nconst search = {\n ...(args.phraseToSearchLinks.length > 0 && {\n $search: {\n index: 'links',\n // compound: { \n // must: [{\n // phrase: {\n // query: args.phraseToSearchLinks,\n // path: 'comment',\n // slop: 2,\n // score: { boost: { value: 3 } }\n // }\n // }]\n // },\n embeddedDocument: {\n path: directionOfLink,\n operator: {\n compound: {\n should: [{\n phrase: {\n query: args.phraseToSearchLinks,\n path: `${directionOfLink}.title`,\n slop: 2,\n score: { boost: { value: 2 } }\n }\n },\n {\n phrase: {\n query: args.phraseToSearchLinks,\n path: `${directionOfLink}.note`,\n slop: 2\n }\n }]\n }\n }\n }\n }\n })\n}\ncommentembeddedDocumentlinkstitlenote{\n \"analyzer\": \"lucene.standard\",\n \"searchAnalyzer\": \"lucene.standard\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"comment\": {\n \"analyzer\": \"htmlStrippingAnalyzer\",\n \"type\": \"string\"\n },\n \"createdAt\": {\n \"type\": \"date\"\n },\n \"fromAsset\": {\n \"fields\": {\n \"note\": {\n \"analyzer\": \"htmlStrippingAnalyzer\",\n \"searchAnalyzer\": \"htmlStrippingAnalyzer\",\n \"type\": \"string\"\n },\n \"title\": {\n \"analyzer\": \"lucene.standard\",\n \"multi\": {\n \"keywordAnalyzer\": {\n \"analyzer\": \"ngramShingler\",\n \"searchAnalyzer\": \"ngramShingler\",\n \"type\": \"string\"\n }\n },\n \"searchAnalyzer\": \"lucene.standard\",\n \"type\": \"string\"\n }\n },\n \"type\": \"embeddedDocuments\"\n },\n \"isActive\": {\n \"type\": \"boolean\"\n },\n \"toAsset\": {\n \"fields\": {\n \"note\": {\n \"analyzer\": \"htmlStrippingAnalyzer\",\n \"searchAnalyzer\": \"htmlStrippingAnalyzer\",\n \"type\": \"string\"\n },\n \"title\": {\n \"analyzer\": \"lucene.standard\",\n \"multi\": {\n \"keywordAnalyzer\": {\n \"analyzer\": \"ngramShingler\",\n \"searchAnalyzer\": \"ngramShingler\",\n \"type\": \"string\"\n }\n },\n \"searchAnalyzer\": \"lucene.standard\",\n \"type\": \"string\"\n }\n },\n \"type\": \"embeddedDocuments\"\n },\n \"updatedAt\": {\n \"type\": \"date\"\n }\n }\n },\n \"analyzers\": [\n {\n \"name\": \"ngramShingler\",\n \"tokenFilters\": [\n {\n \"maxShingleSize\": 3,\n \"minShingleSize\": 2,\n \"type\": \"shingle\"\n }\n ],\n \"tokenizer\": {\n \"maxGram\": 5,\n \"minGram\": 2,\n \"type\": \"nGram\"\n }\n },\n {\n \"charFilters\": [\n {\n \"ignoredTags\": [\n \"a\",\n \"div\",\n \"p\",\n \"strong\",\n \"em\",\n \"img\",\n \"figure\",\n \"figcaption\",\n \"ol\",\n \"ul\",\n \"li\",\n \"span\"\n ],\n \"type\": \"htmlStrip\"\n }\n ],\n \"name\": \"htmlStrippingAnalyzer\",\n \"tokenFilters\": [],\n \"tokenizer\": {\n \"type\": \"standard\"\n }\n }\n ]\n}\n", "text": "Hi, I’m using 2 references in the links model:Here, toAsset and fromAsset both reference a single object each, and the intention is to query 2 fields in the assets model:I’m using the following in Node:Here, querying the comment field returns search results, but I don’t know how to combine it with the embeddedDocument, which isn’t returning anything at the moment.I’ve made changes to the index for links to include the referenced fields, but I’m not seeing results when I perform a search on anything in a title or note field:At this stage, I’m not 
certain if what I’m attempting is possible! Any advice would be excellent.", "username": "Wayne_Smallman" }, { "code": "$search$lookup", "text": "Looks like I’ll have to wait until some time in March for version 6, which is when I’ll have a chance to run $search inside $lookup.", "username": "Wayne_Smallman" } ]
Search fields in a collection via Ref
2023-02-22T13:14:42.985Z
Search fields in a collection via Ref
639
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 6.0.5-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 6.0.4. The next stable release 6.0.5 will be a recommended upgrade for all 6.0 users.\nFixed in this release:", "username": "James_Hippler" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 6.0.5-rc0 is released
2023-02-23T04:30:19.264Z
MongoDB 6.0.5-rc0 is released
1,053
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "Hi, :wave: The below query takes more then a minute to aggregate the data. I have around 6 million records in a collection.\nI tried multiple approaches to optimise it but nothing worked.\n\nI am using the mongoDB version 4.4.8.\n\nThrough below query I am trying to get the total count and average of all the records of a customer in given period. The period can be day, week, month and year).\n\ndb.getCollection(collectionName).aggregate([\n {\n $match: {\n \"customer.id\": { $in: [\"10001\",\"10002\"] }\n\t \"date.year\": { $in: [ 2020, 2021, 2022 ] }\n }\n },\n {\n $group:\n {\n _id: { \"customer_id\": \"$customer.id\" },\n \"total_records\": { $sum: 1 },\n \"total_rating\": { \"$avg\": \"$rating.my_rating.rating\" }\n }\n },\n {\n $project:\n {\n _id: 0,\n \"customer_id\": \"$_id.customer_id\",\n \"total_records\": \"$total_records\",\n \"total_rating\": \"$total_rating\"\n }\n }\n])\n\nThe document in collection is look like this :\n{\n \"_id\" : ObjectId(\"5r21grnf457sdfbhdghdsh876c17a1\"),\n \"id\" : \"12345\", \n \"date\" : {\n \"date_id\" : NumberInt(20180906),\n \"date_value\" : \"2018-09-06\",\n \"day\" : NumberInt(6),\n \"day_short_name\" : \"Thu\",\n \"day_long_name\" : \"Thursday\",\n \"week_id\" : NumberInt(201836),\n \"week\" : NumberInt(36),\n \"week_short_name\" : \"WK36\",\n \"week_long_name\" : \"WEEK36\",\n \"week_year\" : \"WK36'18\",\n \"month_id\" : NumberInt(201809),\n \"month\" : NumberInt(9),\n \"month_short_name\" : \"Sep\",\n \"month_long_name\" : \"September\",\n \"month_year\" : \"Sep'18\",\n \"qtr_id\" : NumberInt(201803),\n \"quarter\" : NumberInt(3),\n \"qtr_short_name\" : \"Q3\",\n \"qtr_long_name\" : \"QUARTER3\",\n \"qtr_year\" : \"Q3'18\",\n \"year\" : NumberInt(2018)\n },\n \"customer\" : {\n \"id\" : \"888\",\n \"name\" : \"ABCD\",\n \"status\" : \"\"\n },\n \"category\" : {\n \"id\" : \"1\",\n \"name\" : \"Consumers\"\n },\n \"review\" : {\n \"source_id\" : \"10\",\n \"source_name\" : \"Facebook\" \n },\n \"reviewer\" : {\n \"commentedby\" : \"XYZ\",\n \"userurl\" : \"\",\n \"address\" : \"\",\n \"city\" : \"\",\n \"state\" : \"\",\n \"country\" : \"\"\n }, \n \"rating\" : {\n \"my_rating\" : {\"rating\" : 5 }\n },\n \"processdata\" : {\n \"processstatus\" : \"xxxx\",\n \"processdatetime\" : ISODate(\"2017-11-11T00:43:58.000+0000\"),\n \"created_by\" : \"xxxx\",\n \"created_date\" : ISODate(\"2017-11-11T00:43:58.000+0000\"),\n \"updated_by\" : \"xxxx\",\n \"updated_date\" : ISODate(\"2018-11-12T19:08:04.000+0000\"),\n \"updated_count\" : NumberInt(0)\n },\n \"customer_id\" : \"999\"\n}\n\nBelow is the index which covers the filter query :\n{ \"customer.id\" : 1.0, \"date.year\" : 1.0 }\n\nBelow is the query explain(executionStats) :\n{\n \"stages\" : [\n {\n \"$cursor\" : {\n \"queryPlanner\" : {\n \"plannerVersion\" : 1.0,\n \"namespace\" : \"xxxxx\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"customer.id\" : {\n \"$in\" : [\n \"10001\",\n \"10002\"\n ]\n }\n },\n {\n \"date.year\" : {\n \"$in\" : [\n 2020.0,\n 2021.0,\n 2022.0\n ]\n }\n }\n ]\n },\n \"collation\" : {\n \"locale\" : \"en\",\n \"caseLevel\" : false,\n \"caseFirst\" : \"off\",\n \"strength\" : 2.0,\n \"numericOrdering\" : false,\n \"alternate\" : \"non-ignorable\",\n \"maxVariable\" : \"punct\",\n \"normalization\" : false,\n \"backwards\" : false,\n \"version\" : \"57.1\"\n },\n \"queryHash\" : \"xxxxxx\",\n \"planCacheKey\" : \"xxxxxx`Preformatted text`\",\n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"transformBy\" : {\n 
\"customer.id\" : 1.0,\n \"rating.my_rating.rating\" : 1.0,\n \"_id\" : 0.0\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"customer.id\" : 1.0,\n \"date.year\" : 1.0\n },\n \"indexName\" : \"idx_customer_year\",\n \"collation\" : {\n \"locale\" : \"en\",\n \"caseLevel\" : false,\n \"caseFirst\" : \"off\",\n \"strength\" : 2.0,\n \"numericOrdering\" : false,\n \"alternate\" : \"non-ignorable\",\n \"maxVariable\" : \"punct\",\n \"normalization\" : false,\n \"backwards\" : false,\n \"version\" : \"57.1\"\n },\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"customer.id\" : [\n\n ],\n \"date.year\" : [\n\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2.0,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"customer.id\" : [\n \"[\\\"\\\\x14\\\\x12\\\\x12\\\\x16\\\\x01\\b\\\", \\\"\\\\x14\\\\x12\\\\x12\\\\x16\\\\x01\\b\\\"]\",\n \"[\\\"\\\\x1A\\\\x14\\\\x12\\\\x12\\\\x01\\b\\\", \\\"\\\\x1A\\\\x14\\\\x12\\\\x12\\\\x01\\b\\\"]\"\n ],\n \"date.year\" : [\n \"[2020, 2020]\",\n \"[2021, 2021]\",\n \"[2022, 2022]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\" : [...]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 472770.0,\n \"executionTimeMillis\" : 108840.0,\n \"totalKeysExamined\" : 472773.0,\n \"totalDocsExamined\" : 472770.0,\n \"executionStages\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"nReturned\" : 472770.0,\n \"executionTimeMillisEstimate\" : 98016.0,\n \"works\" : 472773.0,\n \"advanced\" : 472770.0,\n \"needTime\" : 2.0,\n \"needYield\" : 0.0,\n \"saveState\" : 4602.0,\n \"restoreState\" : 4602.0,\n \"isEOF\" : 1.0,\n \"transformBy\" : {\n \"customer.id\" : 1.0,\n \"rating.my_rating.rating\" : 1.0,\n \"_id\" : 0.0\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"nReturned\" : 472770.0,\n \"executionTimeMillisEstimate\" : 97188.0,\n \"works\" : 472773.0,\n \"advanced\" : 472770.0,\n \"needTime\" : 2.0,\n \"needYield\" : 0.0,\n \"saveState\" : 4602.0,\n \"restoreState\" : 4602.0,\n \"isEOF\" : 1.0,\n \"docsExamined\" : 472770.0,\n \"alreadyHasObj\" : 0.0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 472770.0,\n \"executionTimeMillisEstimate\" : 1720.0,\n \"works\" : 472773.0,\n \"advanced\" : 472770.0,\n \"needTime\" : 2.0,\n \"needYield\" : 0.0,\n \"saveState\" : 4602.0,\n \"restoreState\" : 4602.0,\n \"isEOF\" : 1.0,\n \"keyPattern\" : {\n \"customer.id\" : 1.0,\n \"date.year\" : 1.0\n },\n \"indexName\" : \"idx_customer_year\",\n \"collation\" : {\n \"locale\" : \"en\",\n \"caseLevel\" : false,\n \"caseFirst\" : \"off\",\n \"strength\" : 2.0,\n \"numericOrdering\" : false,\n \"alternate\" : \"non-ignorable\",\n \"maxVariable\" : \"punct\",\n \"normalization\" : false,\n \"backwards\" : false,\n \"version\" : \"57.1\"\n },\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"customer.id\" : [\n\n ],\n \"date.year\" : [\n\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2.0,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"customer.id\" : [\n \"[\\\"\\\\x14\\\\x12\\\\x12\\\\x16\\\\x01\\b\\\", \\\"\\\\x14\\\\x12\\\\x12\\\\x16\\\\x01\\b\\\"]\",\n \"[\\\"\\\\x1A\\\\x14\\\\x12\\\\x12\\\\x01\\b\\\", \\\"\\\\x1A\\\\x14\\\\x12\\\\x12\\\\x01\\b\\\"]\"\n ],\n \"date.year\" : [\n \"[2020, 2020]\",\n \"[2021, 2021]\",\n \"[2022, 2022]\"\n ]\n },\n \"keysExamined\" : 472773.0,\n \"seeks\" : 3.0,\n \"dupsTested\" : 0.0,\n \"dupsDropped\" : 
0.0\n }\n }\n }\n }\n },\n \"nReturned\" : NumberLong(472770),\n \"executionTimeMillisEstimate\" : NumberLong(107046)\n },\n {\n \"$group\" : {\n \"_id\" : {\n \"customer_id\" : \"$customer.id\"\n },\n \"total_records\" : {\n \"$sum\" : {\n \"$const\" : 1.0\n }\n },\n \"total_rating\" : {\n \"$avg\" : \"$rating.my_rating.rating\"\n }\n },\n \"nReturned\" : NumberLong(2),\n \"executionTimeMillisEstimate\" : NumberLong(108040)\n },\n {\n \"$project\" : {\n \"customer_id\": \"$_id.customer_id\",\n \"total_records\": \"$total_records\",\n \"total_rating\": \"$total_rating\"\n \"_id\" : false\n },\n \"nReturned\" : NumberLong(2),\n \"executionTimeMillisEstimate\" : NumberLong(108040)\n }\n ],\n \"serverInfo\" : {\n \"host\" : \"xxxx\",\n \"port\" : xxxx,\n \"version\" : \"4.4.8\",\n \"gitVersion\" : \"xxxxxxxx\"\n },\n \"ok\" : 1.0\n}\n\nIt would be helfull if anybody can give some ideas to optimise this query.\n\nThanks in advance.\n", "text": "", "username": "Naveen_Kohinoor" }, { "code": "", "text": "Sometimes performance issues are not logic specific.Your query is doing IXSCAN. Also nReturned, totalKeysExamined and totalDocsExamined are almost all identical. For this specific query, adding rating.my_rating.rating to your index might help.Sometimes performance issues are related to your data size vs system configuration size. If your documents need to read from disk because the working set does not fit in RAM it will be slow. In your case you FETCH ~470K documents. And your documents appear to be bloated (I mean big). The date field is certainly over-engineered as it contains many values that can be easily computed from a simple Date field.", "username": "steevej" }, { "code": "", "text": "Hi Steevej,Thanks for chipping in. Could you kindly advise me on the appropriate RAM to use for the below-DB storage?", "username": "Naveen_Kohinoor" }, { "code": "", "text": "First, I mentionedSometimes performance issues are related to your data size vs system configuration size.So your case might be different.Nobody can really recommend an exact size. It depends on working set. Which might be 100% of your data and indexes size, or just 20%. I don’t know. Only you can determine that. If your disk is working 100% of the time you need more RAM, if your disk is working under 20% your problem lies elsewhere.", "username": "steevej" }, { "code": "", "text": "Thank you Steveej.I’ll verify how much of the disk is being used when the query is run and will decide the RAM based on that.", "username": "Naveen_Kohinoor" } ]
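The index suggestion from the first reply can be sketched in mongosh as below. The collection name is a placeholder; whether this actually avoids the expensive FETCH stage should be confirmed with explain(), and note that the existing index was built with a collation, so the new index and the query need to agree on that as well.

```javascript
// Sketch: extend the existing { "customer.id": 1, "date.year": 1 } index with the
// averaged field, so the $group input can come from index keys instead of documents.
db.reviews.createIndex(
  { "customer.id": 1, "date.year": 1, "rating.my_rating.rating": 1 },
  { name: "idx_customer_year_rating" }
);

// Re-check the plan afterwards: totalDocsExamined should fall sharply if the
// pipeline no longer needs to fetch full (large) documents.
db.reviews.explain("executionStats").aggregate([
  { $match: { "customer.id": { $in: ["10001", "10002"] }, "date.year": { $in: [2020, 2021, 2022] } } },
  { $group: { _id: "$customer.id", total_records: { $sum: 1 }, total_rating: { $avg: "$rating.my_rating.rating" } } }
]);
```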
Aggregation query is taking too long
2023-02-16T08:37:03.273Z
Aggregation query is taking too long
810
null
[]
[ { "code": "{\n \"fullDocument.myArray\": { \"$size\": 1 }\n}\n{\n \"fullDocument.myArray\": { \"$exists\": true }\n}\n", "text": "Hello.I am trying to set up the most basic trigger in Altas, but I only want it to run when the modified document has a particular array field at length 1. I have the full document option chosen.I have tried a couple match expressions:The trigger never firesand this, less-preciseThe trigger always fires, even when this array does not exist.Any thoughts on why neither seem to work correctly?", "username": "Heath_Volmer" }, { "code": "Updates[\n { a: 1, myArray: [ 1 ] },\n { a: 2, myArray: [ 1, 2 ] },\n { a: 3, myArray: [ 1, 2, 3 ] },\n { a: 4, myArray: [ 1, 2, 3, 4 ] },\n { a: 5, myArray: [ 1, 2, 3, 4, 5 ] },\n { a: 6, myArray: [ ] },\n { b: 1, myArray: [ 1 ] }\n]\n{\n \"fullDocument.myArray\": {\n \"$size\": {\n \"$numberInt\": \"1\"\n }\n }\n}\n{ a: 1, myArray: [ 1 ] }{ a: 1, myArray: [ 1 ], b: 1 }{ a: 2, myArray: [ 1, 2 ] }{ a: 2, myArray: [ 1 ] }{ a: 6, myArray: [ ] }{ a: 6, myArray: [ 'a' ] }{ b: 1, myArray: [ 1 ] }{ b: 1, myArray: [ 1, 2 ] }{ a: 3, myArray: [ 1, 2, 3 ] }{ a: 3, myArray: [ ] }", "text": "Hi @Heath_Volmer,but I only want it to run when the modified document has a particular array field at length 1I assume this trigger is only for operation type Updates then - Is that correct?I had the following test documents in my test environment:and the following match expression:Please see the following tests performed and results:Hope this helps. However, if not, could you please provide some sample documents and the operations you’re executing (that would fire the trigger) so that I could try reproduce the behaviour you’re experiencing?If you believe the above helps, please test thoroughly and verify it suits all your use case / requirements.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas trigger match expression fullDocument
2023-02-20T17:08:40.850Z
Atlas trigger match expression fullDocument
567
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 5.0.15-rc2 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.14. The next stable release 5.0.15 will be a recommended upgrade for all 5.0 users.\nFixed in this release:", "username": "James_Hippler" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 5.0.15-rc2 is released
2023-02-23T03:57:19.451Z
MongoDB 5.0.15-rc2 is released
1,152
https://www.mongodb.com/…689521a55b5.jpeg
[]
[ { "code": "Shard a at a/127.0.0.1:21000,127.0.0.1:21001,127.0.0.1:21002\n{\n data: '125.57MiB',\n docs: 227420,\n chunks: 1,\n 'estimated data per chunk': '125.57MiB',\n 'estimated docs per chunk': 227420\n}\nShard k at k/127.0.0.1:23070,127.0.0.1:23071,127.0.0.1:23072\n{\n data: '326.31MiB',\n docs: 576209,\n chunks: 1,\n 'estimated data per chunk': '326.31MiB',\n 'estimated docs per chunk': 576209\n}\n{\n \"_id\": {\n \"$oid\": \"63dd7324289226c918818c55\"\n },\n \"Title\": \"\",\n \"Product\": {\n \"web1\": {\n \"Harry Potter and the Chamber of Secrets: 2/7 (Harry Potter 2)\": {\n \"Price\": 15,\n \"Url\": \"https://www.amazon.com/Harry-Potter-Chamber-Secrets-Book/dp/B017V4IPPO/ref=sr_1_2?crid=GCT8C7Z3Q4SE&keywords=Harry+Potter+and+the+Chamber+of+Secrets&qid=1676836656&sprefix=harry+potter+and+the+chamber+of+secrets%2Caps%2C230&sr=8-2\",\n \"Time\": {\n \"$date\": {\n \"$numberLong\": \"1676669514749\"\n }\n }\n }\n }\n },\n \"Category\": [\n \"Book\",\n \"Fantasy\"\n ],\n \"Time\": {\n \"$date\": {\n \"$numberLong\": \"1676669514749\"\n }\n },\n \"shards\": \"h\"\n}\n", "text": "I have a database with 800k objects and I defined about 13 shard servers to access the data quickly. I assigned a letter to each object for use in the sharding process, for example, shards: ‘a’ for the first object, shards: ‘b’ for the second object, and so on. I created a shard key using the shards field within each object and wanted to distribute the objects as evenly as possible across the 13 shard servers. I used “hashed” as the shard key for the shards field. I evenly distributed the letters to all objects, for example, 50k objects had shards: ‘a’ and 50k objects had shards: ‘b’, and so on. I used \"sh.shardCollection(“test.testCollection”, { “shards”: “hashed” } ) to shard the collection, but the data only went to two of the 13 shard servers. The distribution was not even among the two servers, with a distribution of approximately 72% to one server and 28% to the other. I want the data to be evenly distributed among all 13 shard servers. Can you help me with this?\nshard2466×987 72.1 KB\nObject sample:edit: Host has Ryzen 9 5950x processor, 96GB RAM and 3x SN850 SSD.", "username": "Dogan_Can_GAZAZ" }, { "code": "", "text": "I was able to speed up the process by adding shard keys to aggregate queries with $text content. However, I still haven’t found why the shards are not evenly distributed. My shard keys seem to be working correctly. I need someone who knows why objects are not evenly distributed between shards.", "username": "Dogan_Can_GAZAZ" }, { "code": "sh.status()", "text": "Hi @Dogan_Can_GAZAZ and welcome to the MongoDB community forum!!The hashed shard key in MongoDB sharded cluster can help achieve an even distribution of data among the shards if the shard key is monotonically increasing which further means, the hashed shard key would evenly distribute data for fields whose values are changing at a constant rate.It would be helpful to understand the concern further if you could help me with some information regarding the deployment:Best Regards\nAasawari", "username": "Aasawari" } ]
Mongodb is not distributing data evenly among shards
2023-02-20T06:10:50.102Z
Mongodb is not distributing data evenly among shards
1,266
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.2.24-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.23. The next stable release 4.2.24 will be a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "James_Hippler" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.2.24-rc1 is released
2023-02-23T03:47:27.485Z
MongoDB 4.2.24-rc1 is released
1,054
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.4.19-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.18. The next stable release 4.4.19 will be a recommended upgrade for all 4.4 users.\nFixed in this release:", "username": "James_Hippler" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.4.19-rc1 is released
2023-02-23T03:52:01.596Z
MongoDB 4.4.19-rc1 is released
1,250
null
[ "aggregation", "java", "change-streams", "spring-data-odm" ]
[ { "code": "return ChangeStreamOptions.builder()\n .filter(Aggregation.newAggregation(Example.class, matchOperationType))\n .resumeAt(Instant.ofEpochSecond(1675303335)) // this is simply a unix timestamp\n .resumeToken(tokenDoc) // resume token saved from previous notification\n .returnFullDocumentOnUpdate().build();\n", "text": "Hi,What is the difference b/w the behavior of resumeAt that accepts a timestamp to resume the notifications from vs resumeToken that accepts the resume token?In case the applicatio crashes/restarted would be ideal/simple to simply pass in an unix timestap of a reasonable past time (ranging from few hours to few days) vs building application logic to save the token of every last successfully processed message?", "username": "Darshan_Bangre" }, { "code": "tokenInstantBsonTimestampReactiveChangeStreamOperation.TerminatingChangeStreamIllegalArgumentExceptionInstantBsonTimestampresume token", "text": "Hello @Darshan_Bangre ,What is the difference b/w the behavior of resumeAt that accepts a timestamp to resume the notifications from vs resumeToken that accepts the resume token?Method .resumeAt(Object token) is a Spring Framework method. It resumes the change stream at a given point. Below are some related details which I got from this documentationParameters:\ntoken - an Instant or BsonTimestamp\nReturns:\nnew instance of ReactiveChangeStreamOperation.TerminatingChangeStream.\nThrows:\nIllegalArgumentException - if the given beacon is neither Instant nor BsonTimestamp.As per my understanding, it resumes the stream from the nearest point in the past that is still available on the server.Whereas, MongoDB has resume token, which processes a change stream from a historical point in time in the oplog. It is a unique identifier that represents a specific point in the change stream. This token is returned by the server as part of each change event, and it can be used to resume the stream at the exact point where it left off. This provides control over the resume point and ensures that no change events are missed.In case the applicatio crashes/restarted would be ideal/simple to simply pass in an unix timestap of a reasonable past time (ranging from few hours to few days) vs building application logic to save the token of every last successfully processed message?Change streams in MongoDB are resumable by specifying a resume token to either resumeAfter or startAfter when opening the cursor.I would recommend you to analyse your requirements and decide on your development approach accordingly.Note: If you anticipate interrupting the stream processing, ensure the oplog is large enough to hold the unprocessed changes (writes) before you resume the stream.To learn more about this, please referMongoDB triggers, change streams, database triggers, real timeRegards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Difference between resumeAt and resumeToken in change streams
2023-02-09T19:14:37.835Z
Difference between resumeAt and resumeToken in change streams
1,604
https://www.mongodb.com/…25ed57f7f77.jpeg
[ "sharding" ]
[ { "code": "", "text": "I’ve watched this video, which said that hashed sharding is a simple solution to distribute data evenly, but it comes with a horrible cost from index memory point of view.I was considering a shard key that is monotonically increasing (ObjectId), so I was going to make it hashed to avoid a hot shard. This is the practice I’ve learned from the official MongoDB document..If total doc size is huge, Hashed sharding is now not recommended way when the shard key is monotonically increasing?", "username": "kuser" }, { "code": "", "text": "Hello @kuser ,Welcome to The MongoDB Community Forums! I was considering a shard key that is monotonically increasing (ObjectId)As per the documentations, Hashed Sharding is typically recommended for monotonically increasing shard keys, however as per the video you linked, it comes with some caveats and there is no substitute for a proper workload simulation to test the shard key. Please go through below links for referenceAdditionally, choosing a proper sharding strategy depends on many things in your use-case such as: most common queries that are required to perform regularly, data size and particular requirements - hardware, costing etc. You need to carefully evaluate your specific use case and performance requirements before deciding on a shard key. Make sure your queries are not scatter-gather queries. Queries that involve multiple shards for each request are less efficient and do not scale linearly when more shards are added to the cluster. Similarly, Shards with un-even distribution of data are also not considered efficient and Jumbo flags should be avoided. So, One needs to plan accordingly. You can read selecting shard key blog to choose the best shard key for your sharded cluster.Note: Starting in MongoDB 6.0.3, data in sharded clusters is distributed based on data size rather than number of chunks, so if balancing based on data size is your ultimate goal, I recommend you to check out MongoDB 6.0.3 or newer. For details, see Balancing Policy Changes.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is hashed sharding not recommended when the shard key is monotonically increasing?
2023-02-10T04:36:15.614Z
Is hashed sharding not recommended when the shard key is monotonically increasing?
1,201
null
[ "aggregation", "queries" ]
[ { "code": "[\n {\n ignition: 1,\n time: 112 \n },\n {\n ignition: 1,\n time: 193 \n }, \n {\n ignition: 0,\n time: 115 \n },\n {\n ignition: 1,\n time: 116 \n },\n {\n ignition: 1,\n time: 117 \n },\n {\n ignition: 1,\n time: 118 \n },\n {\n ignition: 0,\n time: 119 \n },\n {\n ignition: 1,\n time: 120 \n },\n {\n ignition: 1,\n time: 121 \n },\n {\n ignition: 1,\n time: 122 \n },\n {\n ignition: 0,\n time: 123 \n },\n]\n\n{\n time: [112,193],\n time: [116,117,118],\n time: [120,121,122]\n}\n", "text": "I want to group the data based on the values of the “ignition” field. If the “ignition” value is 1, all records with the value 1 should be grouped together until the next value of 0 is encountered, and so on.I have 86400 records in MongoDB, and I want to query the data to achieve the desired output.I want the output like this:", "username": "Muhammad_Nabeel" }, { "code": "{\n time: [112,193],\n time: [116,117,118],\n time: [120,121,122]\n}\nignitionconst data =[...];\nconst group = [];\nlet currentGroup = [];\n\ndata.forEach((document) => {\n if (document.ignition === 1) {\n currentGroup.push(document.time);\n } else if (currentGroup.length > 0) {\n group.push(currentGroup);\n currentGroup = [];\n }\n});\n\nif (currentGroup.length > 0) {\n group.push(currentGroup);\n}\n....\n", "text": "Hi @Muhammad_Nabeel,Welcome to the MongoDB Community forums Can you clarify that the input document is a single document, or is it an array inside a document (and thus the example is one document)?Also,Your example output is not a valid document - You can’t have multiple fields with the same name.However, considering the question, it’s complicated to group the data based on the values of the ignition field because the ordering of documents in MongoDB is not guaranteed unless you specify a sorting parameter explicitly. This can make it difficult to perform certain types of aggregations, such as grouping data based on the values of a particular field.I suggest you perform the group operation in your application code after retrieving the data from MongoDB. You could iterate over the documents, keeping track of the current group of ignition values, and creating separate arrays for each group as you encounter them. Here is the JS code for your reference:I hope it helps!Thanks,\nKushagra", "username": "Kushagra_Kesav" } ]
Grouping data by Ignition field values
2023-02-16T08:17:16.552Z
Grouping data by Ignition field values
570
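A small Node.js sketch of the client-side grouping suggested in the thread above, with an explicit sort so the iteration order is deterministic; the collection name (`readings`) and the choice of sort field are assumptions, not taken from the thread:

```javascript
const { MongoClient } = require("mongodb");

// Collect consecutive ignition === 1 readings into arrays of their "time" values.
async function groupIgnitionRuns(uri) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const docs = await client
      .db("test")
      .collection("readings") // assumed collection name
      .find({}, { projection: { _id: 0, ignition: 1, time: 1 } })
      .sort({ _id: 1 }) // make ordering explicit; use whichever field defines your sequence
      .toArray();

    const groups = [];
    let current = [];
    for (const doc of docs) {
      if (doc.ignition === 1) {
        current.push(doc.time);
      } else if (current.length > 0) {
        groups.push(current);
        current = [];
      }
    }
    if (current.length > 0) groups.push(current);
    return groups; // one array of times per uninterrupted ignition run
  } finally {
    await client.close();
  }
}
```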
null
[ "charts" ]
[ { "code": "", "text": "Hi, How can I view all the data in the dashboard filter, Currently only a few lines of data is shown. It would be have been to great to provide a scroll bar to see the remaining data.", "username": "Arun_Mathew1" }, { "code": "", "text": "Hi !!\nDo you know how to add scrollbars in embedded charts?", "username": "Orlando_Herrera" } ]
Filter in charts
2022-06-28T04:06:51.126Z
Filter in charts
2,308
null
[ "aggregation", "golang" ]
[ { "code": "2023-02-13T16:15:29.389-05:00\t(mongo.Pipeline) (len=10 cap=14) {\n2023-02-13T16:15:29.389-05:00\t(primitive.D) (len=1 cap=1) {\n2023-02-13T16:15:29.389-05:00\t(primitive.E) {\n2023-02-13T16:15:29.389-05:00\tKey: (string) (len=6) \"$match\",\n2023-02-13T16:15:29.389-05:00\tValue: (primitive.M) (len=1) {\n2023-02-13T16:15:29.389-05:00\t(string) (len=17) \"summary\": (primitive.M) (len=1) {\n2023-02-13T16:15:29.389-05:00\t(string) (len=7) \"$exists\": (bool) true\n2023-02-13T16:15:29.389-05:00\t}\n2023-02-13T16:15:29.389-05:00\t}\n2023-02-13T16:15:29.389-05:00\t}\n2023-02-13T16:15:29.389-05:00\t},\n", "text": "I might be overlooking something obvious, but is there a way to pretty print a pipeline built in Go using mongo.Pipeline{}, bson.M{} etc?Something a bit more readable in a log that this!", "username": "Ben_Giddins" }, { "code": "mongo.Pipeline{}, bson.M{}bson.Marshal()bson.Unmarshal()bson.Mpackage main\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"go.mongodb.org/mongo-driver/bson\"\n\t\"go.mongodb.org/mongo-driver/mongo\"\n)\n\nfunc main() {\n\tpipeline := mongo.Pipeline{\n\t\tbson.D{{\"$match\", bson.M{\"age\": bson.M{\"$gt\": 100}}}},\n\t\tbson.D{{\"$group\", bson.M{\"count\": bson.M{\"$sum\": 1}}}},\n\t\tbson.D{{\"$sort\", bson.M{\"count\": -1}}},\n\t\tbson.D{{\"$limit\", 10}},\n\t}\n\tvar prettyDocs []bson.M\n\tfor _, doc := range pipeline {\n\t\tbsonDoc, err := bson.Marshal(doc)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\tvar prettyDoc bson.M\n\t\terr = bson.Unmarshal(bsonDoc, &prettyDoc)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t\tprettyDocs = append(prettyDocs, prettyDoc)\n\t}\n\tprettyJSON, err := json.MarshalIndent(prettyDocs, \"\", \" \")\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tfmt.Println(string(prettyJSON))\n}\n\n[\n {\n \"$match\": {\n \"age\": {\n \"$gt\": 100\n }\n }\n },\n {\n \"$group\": {\n \"count\": {\n \"$sum\": 1\n }\n }\n },\n {\n \"$sort\": {\n \"count\": -1\n }\n },\n {\n \"$limit\": 10\n }\n]\n", "text": "Hi @Ben_Giddins,Welcome to the MongoDB Community forums but is there a way to pretty print a pipeline built in Go using mongo.Pipeline{}, bson.M{} etc?Here, you can use bson.Marshal() to convert each pipeline element to a BSON document. After that, you can use bson.Unmarshal() to convert the BSON document to a bson.M value for pretty printing.Finally, you can use json.MarshalIndent to format the output. Here, each JSON element in the output will begin on a new line beginning with a prefix followed by one or more copies of indent according to the indentation nesting you have provided.Here is the code snippet for your reference:It will return the output as follow:I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you! This is perfect.", "username": "Ben_Giddins" }, { "code": "\tresult := bytes.Replace(prettyJSON, []byte(\"\\n\"), []byte(\"\\r\"), -1)\n\tfmt.Println(string(result))\n", "text": "Minor tweak - in AWS Lambda, newline characters need to be replaced by a carriage return to get a single block of output in CloudWatch:", "username": "Ben_Giddins" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Pretty print Golang pipelines
2023-02-13T21:28:43.147Z
Pretty print Golang pipelines
1,694
https://www.mongodb.com/…3_2_1024x578.png
[ "replication" ]
[ { "code": "", "text": "Hello Team,\nCan we use NGINX as a proxy to access ReplicasSet cluster ?The goal is to implement the data residency is to use DB offloading approach.\nDB offloading means storing the data in the related region data center and maintaining the\ncompute instances (services) in the central serving regions.\nThat means the services will be hosted in one region with DR capabilities and database will be\nstored in multiple regions base on the client data residency requirement.\nTo enable this, we will separate our compute instances from our databases and add a reverse\nproxy layer in front of the databases so the proxy that will manage to forward the request to the\nrelated DB region based on the user’s associated region.\nOne area we have to ensure in this setup is to reduce the DB connection initiations and\nmaintain multiple queries under a single DB connection (Multiplexing), because the initialization\nof DB connections will impact and increase the latency for DB responses across regions .\n\nScreenshot 2023-02-22 at 10.46.12 AM1280×723 50.3 KB\n", "username": "muhamd_abdelhaliem" }, { "code": "", "text": "i don’t see any issues with this setup from a high level.A reverse proxy is no different from a “ordinary client” from mongdb’s view point.maintain multiple queries under a single DB connection (Multiplexing)Honestly i don’t think you need to do this from your app logic. Mongodb drivers should already take care of that. You can check driver settings for connection pool related.But one thing to note is mongodb is single primary replication, so all writes have to go the same instance, with probably cross-region calls.", "username": "Kobe_W" } ]
MongoDB Nginx reverse proxy and replica set
2023-02-22T08:47:58.918Z
MongoDB Nginx reverse proxy and replica set
1,521
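On the point in the thread above that the driver already multiplexes operations over pooled connections, here is an illustrative Node.js configuration; the proxy hostname is hypothetical and the pool values are placeholders to tune, not recommendations:

```javascript
const { MongoClient } = require("mongodb");

// Standard connection-pool options: the driver reuses pooled connections for many
// concurrent operations, so the application does not implement multiplexing itself.
const client = new MongoClient("mongodb://db-proxy.internal.example:27017", {
  maxPoolSize: 100,      // upper bound on pooled connections per host
  minPoolSize: 10,       // keep some connections warm to avoid handshake latency
  maxIdleTimeMS: 300000, // close pooled connections idle longer than this
});
```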
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Current Mongosh Log ID: 63f66743a8dd954755412d1d\nConnecting to: mongodb://localhost:27000/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.5.4\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27000", "username": "Sanjay_Tiwari1" }, { "code": "", "text": "It is most likely one of the many reasons discussed below:\nhttps://www.mongodb.com/community/forums/search?q=ECONNREFUSED", "username": "steevej" } ]
Connrefused 127.0.0.1:27000
2023-02-22T19:07:17.073Z
Connrefused 127.0.0.1:27000
861
null
[]
[ { "code": "", "text": "Windows Defender is trying to block the downloading of the validator needed for course M320 Data Modeling. I authorize its download, there is a flash on screen, and nothing more. I see there are specific instructions for problems with MAC. Is there something I am missing when it comes to Windows?", "username": "Email_Me" }, { "code": "validate_m320 example --file answer_schema.json\n", "text": "You’re right, they should include instructions for the other platformsIt took me a second but you have to put the exe file into a folder and then navigate to the folder in command prompt eg: CD C:\\m320\\ and then run the command in the lab which is\nimage1114×310 8.55 KB\n", "username": "Chase_Russell" } ]
Can't download and run validator_m320 in windows
2023-02-13T14:00:41.498Z
Can&rsquo;t download and run validator_m320 in windows
784
https://www.mongodb.com/…1_2_1024x419.png
[ "java", "swift", "atlas-device-sync", "android", "kotlin" ]
[ { "code": "2023-01-20 13:08:44.840 1717-1789 REALM_SYNC com.ianbm.sportscarnivalpro.android E Connection[2]: Session[2]: Bad sync progress received (3)\n2023-01-20 13:08:44.841 1717-1789 REALM_JAVA com.ianbm.sportscarnivalpro.android E Session Error[wss://realm.mongodb.com/]: CLIENT_BAD_PROGRESS(realm::sync::ClientError:107): Bad progress information (DOWNLOAD)\nconnecting to realm 61345920826942f6487d4d57\n2023-01-20 14:56:00.050541+1000 SportsCarnivalPro[332:15495] Sync: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false\n2023-01-20 14:56:01.304701+1000 SportsCarnivalPro[332:15495] Sync: Connected to endpoint '52.64.157.195:443' (from '192.168.1.110:49382')\nconnecting to realm 61345920826942f6487d4d57-614bdf5bb47f723c2dd6caa2-613b1002c8322e6d5a36a4ed\n2023-01-20 14:56:03.921394+1000 SportsCarnivalPro[332:15495] Sync: Connection[2]: Session[2]: client_reset_config = false, Realm exists = true, client reset = false\n2023-01-20 14:56:03.997594+1000 SportsCarnivalPro[332:15495] Sync: Connected to endpoint '52.64.157.195:443' (from '192.168.1.110:49383')\n2023-01-20 14:56:04.615477+1000 SportsCarnivalPro[332:15495] Sync: Connection[2]: Session[2]: Bad sync progress received (3)\n2023-01-20 14:56:04.615892+1000 SportsCarnivalPro[332:15495] Sync: Connection[2]: Connection closed due to error\n", "text": "I keep getting this error:From Java SDK:From Swift SDK:I started getting it whenever I restarted working for the day (i.e. first time each day) with the Java SDK. But now I’m seeing it repeatedly on Swift too, which didn’t used to happen.Nothing gets logged on the server side:\n\nimage1459×597 99.6 KB\n\nSportsCarnivalPro2_logs_20230120050749.json (77.9 KB)To clear the error, I can delete the local realm, of course. But I shouldn’t have to - I have not made any schema changes or anything!Please help!", "username": "polymath74" }, { "code": "", "text": "Hello! It’s hard to determine what’s going on here without some more logging. Could you set the log level for your app to ALL and send the logs when it fails with this error?Docs for how to do this for Swift & Java:", "username": "Ben_Redmond" }, { "code": "2023-02-17 09:32:16.535 2420-2420 REALM_JAVA com.ianbm.sportscarnivalpro.android D Creating session for: /data/user/0/com.ianbm.sportscarnivalpro.android/files/mongodb-realm/sportscarnivalpro2-qynvc/61345920826942f6487d4d57/s_61345920826942f6487d4d57.realm\n2023-02-17 09:32:16.537 2420-2420 REALM_JAVA com.ianbm.sportscarnivalpro.android D First session created. 
Adding network listener.\n2023-02-17 09:32:16.545 2420-2420 REALM_JAVA com.ianbm.sportscarnivalpro.android D [App(sportscarnivalpro2-qynvc)] NetworkListener: Connection available\n2023-02-17 09:32:16.571 2420-2474 HostConnection com.ianbm.sportscarnivalpro.android D createUnique: call\n2023-02-17 09:32:16.572 2420-2474 HostConnection com.ianbm.sportscarnivalpro.android D HostConnection::get() New Host Connection established 0xb40000796ca9ccd0, tid 2474\n2023-02-17 09:32:16.574 2420-2474 HostConnection com.ianbm.sportscarnivalpro.android D HostComposition ext ANDROID_EMU_CHECKSUM_HELPER_v1 ANDROID_EMU_native_sync_v2 ANDROID_EMU_native_sync_v3 ANDROID_EMU_native_sync_v4 ANDROID_EMU_dma_v1 ANDROID_EMU_direct_mem ANDROID_EMU_host_composition_v1 ANDROID_EMU_host_composition_v2 ANDROID_EMU_vulkan ANDROID_EMU_deferred_vulkan_commands ANDROID_EMU_vulkan_null_optional_strings ANDROID_EMU_vulkan_create_resources_with_requirements ANDROID_EMU_YUV_Cache ANDROID_EMU_vulkan_ignored_handles ANDROID_EMU_has_shared_slots_host_memory_allocator ANDROID_EMU_vulkan_free_memory_sync ANDROID_EMU_vulkan_shader_float16_int8 ANDROID_EMU_vulkan_async_queue_submit ANDROID_EMU_vulkan_queue_submit_with_commands ANDROID_EMU_sync_buffer_data ANDROID_EMU_read_color_buffer_dma ANDROID_EMU_hwc_multi_configs GL_OES_EGL_image_external_essl3 GL_OES_vertex_array_object GL_KHR_texture_compression_astc_ldr ANDROID_EMU_host_side_tracing ANDROID_EMU_gles_max_version_3_0 \n2023-02-17 09:32:16.575 2420-2474 OpenGLRenderer com.ianbm.sportscarnivalpro.android W Failed to choose config with EGL_SWAP_BEHAVIOR_PRESERVED, retrying without...\n2023-02-17 09:32:16.576 2420-2474 OpenGLRenderer com.ianbm.sportscarnivalpro.android W Failed to initialize 101010-2 format, error = EGL_SUCCESS\n2023-02-17 09:32:16.590 2420-2474 HostConnection com.ianbm.sportscarnivalpro.android D createUnique: call\n2023-02-17 09:32:16.591 2420-2474 HostConnection com.ianbm.sportscarnivalpro.android D HostConnection::get() New Host Connection established 0xb40000796ca9c9d0, tid 2474\n2023-02-17 09:32:16.596 2420-2474 HostConnection com.ianbm.sportscarnivalpro.android D HostComposition ext ANDROID_EMU_CHECKSUM_HELPER_v1 ANDROID_EMU_native_sync_v2 ANDROID_EMU_native_sync_v3 ANDROID_EMU_native_sync_v4 ANDROID_EMU_dma_v1 ANDROID_EMU_direct_mem ANDROID_EMU_host_composition_v1 ANDROID_EMU_host_composition_v2 ANDROID_EMU_vulkan ANDROID_EMU_deferred_vulkan_commands ANDROID_EMU_vulkan_null_optional_strings ANDROID_EMU_vulkan_create_resources_with_requirements ANDROID_EMU_YUV_Cache ANDROID_EMU_vulkan_ignored_handles ANDROID_EMU_has_shared_slots_host_memory_allocator ANDROID_EMU_vulkan_free_memory_sync ANDROID_EMU_vulkan_shader_float16_int8 ANDROID_EMU_vulkan_async_queue_submit ANDROID_EMU_vulkan_queue_submit_with_commands ANDROID_EMU_sync_buffer_data ANDROID_EMU_read_color_buffer_dma ANDROID_EMU_hwc_multi_configs GL_OES_EGL_image_external_essl3 GL_OES_vertex_array_object GL_KHR_texture_compression_astc_ldr ANDROID_EMU_host_side_tracing ANDROID_EMU_gles_max_version_3_0 \n2023-02-17 09:32:16.713 2420-2482 ivalpro.android com.ianbm.sportscarnivalpro.android W Verification of void io.realm.RealmObject.addChangeListener(io.realm.RealmModel, io.realm.RealmChangeListener) took 114.799ms (78.40 bytecodes/s) (1544B approximate peak alloc)\n2023-02-17 09:32:16.762 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Realm sync client ([realm-core-12.3.0])\n2023-02-17 09:32:16.762 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Supported protocol versions: 
2-6\n2023-02-17 09:32:16.762 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Platform: Android Linux 5.15.41-android13-8-00055-g4f5025129fe8-ab8949913 #1 SMP PREEMPT Mon Aug 15 18:33:14 UTC 2022 aarch64\n2023-02-17 09:32:16.762 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Build mode: Release\n2023-02-17 09:32:16.762 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: one_connection_per_session = true\n2023-02-17 09:32:16.763 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: connect_timeout = 120000 ms\n2023-02-17 09:32:16.763 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: connection_linger_time = 30000 ms\n2023-02-17 09:32:16.763 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: ping_keepalive_period = 60000 ms\n2023-02-17 09:32:16.763 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: pong_keepalive_timeout = 120000 ms\n2023-02-17 09:32:16.763 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: fast_reconnect_limit = 60000 ms\n2023-02-17 09:32:16.763 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: disable_upload_compaction = false\n2023-02-17 09:32:16.763 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: disable_sync_to_disk = false\n2023-02-17 09:32:16.763 2420-2482 REALM_SYNC com.ianbm.sportscarnivalpro.android D User agent string: 'RealmSync/12.3.0 (Android Linux 5.15.41-android13-8-00055-g4f5025129fe8-ab8949913 #1 SMP PREEMPT Mon Aug 15 18:33:14 UTC 2022 aarch64) RealmJava/10.11.1 (emu64a, sdk_gphone64_arm64, v33) Unknown'\n2023-02-17 09:32:16.778 2420-2489 REALM_JNI com.ianbm.sportscarnivalpro.android D SyncClient thread created\n2023-02-17 09:32:16.780 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Connection[1]: Session[1]: Binding '/data/user/0/com.ianbm.sportscarnivalpro.android/files/mongodb-realm/sportscarnivalpro2-qynvc/61345920826942f6487d4d57/s_61345920826942f6487d4d57.realm' to '\"61345920826942f6487d4d57\"'\n2023-02-17 09:32:16.781 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Connection[1]: Session[1]: Activating\n2023-02-17 09:32:16.790 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android I Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false\n2023-02-17 09:32:16.790 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Connection[1]: Session[1]: client_file_ident = 587, client_file_ident_salt = 5633985149313889800\n2023-02-17 09:32:16.790 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Session[1]: last_version_available = 10\n2023-02-17 09:32:16.790 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Session[1]: progress_server_version = 173\n2023-02-17 09:32:16.792 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Session[1]: progress_client_version = 2\n2023-02-17 09:32:16.793 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Connection[1]: Session[1]: Progress handler called, downloaded = 112166, downloadable(total) = 112166, uploaded = 1757, uploadable = 1757, reliable_download_progress = false, snapshot version = 10\n2023-02-17 09:32:16.794 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D WebSocket::Websocket()\n2023-02-17 09:32:16.794 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Resolving 'ws.ap-southeast-2.aws.realm.mongodb.com:443'\n2023-02-17 09:32:16.844 2420-2489 REALM_SYNC 
com.ianbm.sportscarnivalpro.android D Connecting to endpoint '13.54.209.90:443' (1/1)\n2023-02-17 09:32:16.904 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android I Connected to endpoint '13.54.209.90:443' (from '10.0.2.16:40664')\n2023-02-17 09:32:16.943 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Verifying server SSL certificate using root certificates, host name = ws.ap-southeast-2.aws.realm.mongodb.com, server port = 443, certificate =\n -----BEGIN CERTIFICATE-----\n MIIFYDCCBEigAwIBAgIQQAF3ITfU6UK47naqPGQKtzANBgkqhkiG9w0BAQsFADA/\n (etc.)\n -----END CERTIFICATE-----\n2023-02-17 09:32:16.944 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android I Verifying server SSL certificate using 155 root certificates\n2023-02-17 09:32:16.953 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Server SSL certificate verified using root certificate(37):\n -----BEGIN CERTIFICATE-----\n MIIDSjCCAjKgAwIBAgIQRK+wgNajJ7qJMDmGLvhAazANBgkqhkiG9w0BAQUFADA/MSQwIgYD\n (etc.)\n -----END CERTIFICATE-----\n2023-02-17 09:32:16.955 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D WebSocket::initiate_client_handshake()\n2023-02-17 09:32:16.955 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V HTTP request =\n GET /api/client/v2.0/app/sportscarnivalpro2-qynvc/realm-sync?baas_at=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJiYWFzX2RldmljZV9pZCI6IjYzZWViYjgyY2FmOTExYjA3MWZmYjdmNiIsImJhYXNfZG9tYWluX2lkIjoiNjExNmZkOGI5N2VhNjUxZDg5NDhmNTUxIiwiZXhwIjoxNjc2NTkxNzU0LCJpYXQiOjE2NzY1ODk5NTQsImlzcyI6IjYzZWViYjgyY2FmOTExYjA3MWZmYjdmNyIsInN0aXRjaF9kZXZJZCI6IjYzZWViYjgyY2FmOTExYjA3MWZmYjdmNiIsInN0aXRjaF9kb21haW5JZCI6IjYxMTZmZDhiOTdlYTY1MWQ4OTQ4ZjU1MSIsInN1YiI6IjYxMzQ1OTIwODI2OTQyZjY0ODdkNGQ1NyIsInR5cCI6ImFjY2VzcyJ9.-fhPr90Blrf1m-xaOEl2WW_2VOdJlNVZfwOQpFm4VD0 HTTP/1.1\n Host: ws.ap-southeast-2.aws.realm.mongodb.com\n Connection: Upgrade\n Sec-WebSocket-Key: UrM/bd1t8b2lkZYMhQ9/XQ==\n Sec-WebSocket-Protocol: com.mongodb.realm-sync/6, com.mongodb.realm-sync/5, com.mongodb.realm-sync/4, com.mongodb.realm-sync/3, com.mongodb.realm-sync/2\n Sec-WebSocket-Version: 13\n Upgrade: websocket\n User-Agent: RealmSync/12.3.0 (Android Linux 5.15.41-android13-8-00055-g4f5025129fe8-ab8949913 #1 SMP PREEMPT Mon Aug 15 18:33:14 UTC 2022 aarch64) RealmJava/10.11.1 (emu64a, sdk_gphone64_arm64, v33) Unknown\n2023-02-17 09:32:17.004 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D WebSocket::handle_http_response_received()\n2023-02-17 09:32:17.005 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V HTTP response = HTTP/1.1 101 Switching Protocols\n cache-control: no-cache, no-store, must-revalidate\n connection: Upgrade\n date: Thu, 16 Feb 2023 23:32:19 GMT\n sec-websocket-accept: fL93e/cIZvmk8GT+y97a7uAiaG0=\n sec-websocket-protocol: com.mongodb.realm-sync/6\n server: mdbws\n strict-transport-security: max-age=31536000; includeSubdomains;\n upgrade: websocket\n vary: Origin\n x-appservices-request-id: 63eebd041d6d6778c2685bb5\n x-frame-options: DENY\n2023-02-17 09:32:17.005 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Connection[1]: Negotiated protocol version: 6\n2023-02-17 09:32:17.005 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Connection[1]: Will emit a ping in 18709 milliseconds\n2023-02-17 09:32:17.005 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Connection[1]: Session[1]: Sending: IDENT(client_file_ident=587, client_file_ident_salt=5633985149313889800, scan_server_version=173, scan_client_version=2, latest_server_version=173, 
latest_server_version_salt=2402548895602076587)\n2023-02-17 09:32:17.005 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Connection[1]: Session[1]: Sending: MARK(request_ident=1)\n2023-02-17 09:32:17.152 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Download message compression: is_body_compressed = true, compressed_body_size=635, uncompressed_body_size=1560\n2023-02-17 09:32:17.153 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Received: DOWNLOAD CHANGESET(server_version=175, client_version=2, origin_timestamp=252731178797, origin_file_ident=601, original_changeset_size=70, changeset_size=70)\n2023-02-17 09:32:17.153 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Changeset: 3F 00 08 43 61 72 6E 69 76 61 6C 3F 01 0C 63 61 72 6E 69 76 61 6C 44 61 74 65 04 00 0A 61 4B DF 5B B4 7F 72 3C 2D D6 CA A2 01 00 03 18 32 30 32 33 2D 30 31 2D 32 37 54 31 34 3A 30 30 3A 30 30 2E 30 30 30 5A 00\n2023-02-17 09:32:17.153 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Received: DOWNLOAD CHANGESET(server_version=177, client_version=2, origin_timestamp=252731979925, origin_file_ident=601, original_changeset_size=342, changeset_size=342)\n2023-02-17 09:32:17.153 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Changeset: 3F 00 11 50 65 72 66 6F 72 6D 61 6E 63 65 52 65 63 6F 72 64 3F 01 0A 5F 70 61 72 74 69 74 69 6F 6E 3F 02 0D 63 61 72 6E 69 76 61 6C 45 76 65 6E 74 3F 03 0C 63 61 72 6E 69 76 61 6C 59 65 61 72 3F 04 0E 63 6F 6D 70 65 74 69 74 6F 72 4E 61 6D 65 3F 05 05 67 72 6F 75 70 3F 06 09 68 6F 75 73 65 4E 61 6D 65 3F 07 11 72 65 63 6F 72 64 50 65 72 66 6F 72 6D 61 6E 63 65 02 00 0A 63 B4 F0 4B 8D 2F 7E 67 D7 3B 4F 0D 04 00 0A 63 B4 F0 4B 8D 2F 7E 67 D7 3B 4F 0D 01 00 03 18 36 31 33 34 35 39 32 30 38 32 36 39 34 32 66 36 34 38 37 64 34 64 35 37 00 04 00 0A 63 B4 F0 4B 8D 2F 7E 67 D7 3B 4F 0D 02 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 E6 00 04 00 0A 63 B4 F0 4B 8D 2F 7E 67 D7 3B 4F 0D 03 00 01 E6 0F 00 04 00 0A 63 B4 F0 4B 8D 2F 7E 67 D7 3B 4F 0D 04 00 03 0B 41 6E 67 65 6C 61 20 42 75 62 62 00 04 00 0A 63 B4 F0 4B 8D 2F 7E 67 D7 3B 4F 0D 05 00 03 05 41 31 36 3A 46 00 04 00 0A 63 B4 F0 4B 8D 2F 7E 67 D7 3B 4F 0D 06 00 03 07 4C 69 64 64 65 6C 6C 00 04 00 0A 63 B4 F0 4B 8D 2F 7E 67 D7 3B 4F 0D 07 00 07 00 00 00 46 91 67 40 40 00\n2023-02-17 09:32:17.153 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Received: DOWNLOAD CHANGESET(server_version=178, client_version=2, origin_timestamp=256178898085, origin_file_ident=601, original_changeset_size=768, changeset_size=768)\n2023-02-17 09:32:17.154 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Changeset: 3F 00 08 43 61 72 6E 69 76 61 6C 3F 01 0A 5F 70 61 72 74 69 74 69 6F 6E 3F 02 0C 63 61 72 6E 69 76 61 6C 44 61 74 65 3F 03 04 6E 61 6D 65 3F 04 09 72 61 63 65 4C 61 6E 65 73 3F 05 15 73 63 6F 72 65 43 61 6D 70 75 73 53 65 70 61 72 61 74 65 6C 79 3F 06 22 73 63 6F 72 65 4E 6F 6E 43 61 6D 70 75 73 45 76 65 6E 74 73 57 69 74 68 45 61 63 68 43 61 6D 70 75 73 3F 07 0E 69 6E 63 6C 75 64 65 64 45 76 65 6E 74 73 02 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 04 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 01 00 03 18 36 31 33 34 35 39 32 30 38 32 36 39 34 32 66 36 34 38 37 64 34 64 35 37 00 04 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 02 00 03 18 32 30 32 32 2D 30 33 2D 33 31 54 31 34 3A 30 30 3A 30 30 2E 30 30 30 5A 00 04 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 03 00 03 1F 4D 53 53 
20 53 77 69 6D 6D 69 6E 67 20 43 61 72 6E 69 76 61 6C 20 32 30 32 32 20 63 6F 70 79 00 04 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 04 00 01 0A 00 04 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 05 00 00 00 04 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 06 00 00 00 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 DC 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 DD 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 E1 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 E2 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 E3 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 E4 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 E5 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 E6 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 E7 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 E8 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 E9 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 EA 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 61 3B 10 02 C8 32 2E 6D 5A 36 A4 EB 0C 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 07 00 0A 62 46 72 7C 53 4A F5 A4 2C 69 3C F6\n2023-02-17 09:32:17.154 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Received: DOWNLOAD CHANGESET(server_version=179, client_version=2, origin_timestamp=256178905183, origin_file_ident=601, original_changeset_size=64, changeset_size=64)\n2023-02-17 09:32:17.154 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Changeset: 3F 00 08 43 61 72 6E 69 76 61 6C 3F 01 04 6E 61 6D 65 04 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 01 00 03 1A 4D 53 53 20 53 77 69 6D 6D 69 6E 67 20 43 61 72 6E 69 76 61 6C 20 32 30 32 33 00\n2023-02-17 09:32:17.154 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Received: DOWNLOAD CHANGESET(server_version=180, client_version=2, origin_timestamp=256178976599, origin_file_ident=601, original_changeset_size=70, changeset_size=70)\n2023-02-17 09:32:17.154 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Changeset: 3F 00 08 43 61 72 6E 69 76 61 6C 3F 01 0C 63 61 72 6E 69 76 61 6C 44 61 74 65 04 00 0A 63 E9 88 D2 31 01 2A 19 B6 BB 3B 7E 01 00 03 18 32 30 32 33 2D 30 33 2D 30 37 54 31 34 3A 30 30 3A 30 30 2E 30 30 30 5A 00\n2023-02-17 09:32:17.154 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Received: DOWNLOAD CHANGESET(server_version=181, client_version=2, origin_timestamp=256519120955, origin_file_ident=1, original_changeset_size=70, changeset_size=70)\n2023-02-17 09:32:17.154 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android V Connection[1]: Changeset: 3F 00 08 43 61 72 6E 69 76 61 6C 3F 01 0C 63 61 72 6E 69 76 61 6C 44 61 74 65 04 00 0A 61 4B DF 5B B4 7F 72 3C 2D D6 CA A2 01 00 03 18 32 30 32 33 2D 30 32 2D 32 37 54 31 34 3A 30 30 3A 30 30 2E 30 30 30 5A 00\n2023-02-17 09:32:17.154 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Connection[1]: Session[1]: Received: DOWNLOAD(download_server_version=181, download_client_version=2, latest_server_version=181, latest_server_version_salt=232619651941121651, upload_client_version=11, 
upload_server_version=173, downloadable_bytes=0, last_in_batch=true, query_version=0, num_changesets=6, ...)\n2023-02-17 09:32:17.154 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android E Connection[1]: Session[1]: Bad sync progress received (3)\n2023-02-17 09:32:17.154 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android I Connection[1]: Connection closed due to error\n2023-02-17 09:32:17.155 2420-2489 REALM_JAVA com.ianbm.sportscarnivalpro.android E Session Error[wss://realm.mongodb.com/]: CLIENT_BAD_PROGRESS(realm::sync::ClientError:107): Bad progress information (DOWNLOAD)\n2023-02-17 09:32:17.156 2420-2489 REALM_SYNC com.ianbm.sportscarnivalpro.android D Connection[1]: Allowing reconnection in 2837687 milliseconds\n", "text": "Log from Android emulator: (I have omitted some rows and the full certificates because I can only post 32k here)", "username": "polymath74" }, { "code": "2023-02-17 11:02:21.855 1589-1589 REALM_JAVA com.ianbm.sportscarnivalpro.android D Creating session for: /data/user/0/com.ianbm.sportscarnivalpro.android/files/mongodb-realm/sportscarnivalpro2-qynvc/61345920826942f6487d4d57/s_61345920826942f6487d4d57.realm\n2023-02-17 11:02:21.858 1589-1589 REALM_JAVA com.ianbm.sportscarnivalpro.android D First session created. Adding network listener.\n2023-02-17 11:02:21.931 1589-1589 REALM_JAVA com.ianbm.sportscarnivalpro.android D [App(sportscarnivalpro2-qynvc)] NetworkListener: Connection available\n2023-02-17 11:02:22.012 1589-1701 HostConnection com.ianbm.sportscarnivalpro.android D createUnique: call\n2023-02-17 11:02:22.012 1589-1701 HostConnection com.ianbm.sportscarnivalpro.android D HostConnection::get() New Host Connection established 0xb40000796ca9cc10, tid 1701\n2023-02-17 11:02:22.027 1589-1701 HostConnection com.ianbm.sportscarnivalpro.android D HostComposition ext ANDROID_EMU_CHECKSUM_HELPER_v1 ANDROID_EMU_native_sync_v2 ANDROID_EMU_native_sync_v3 ANDROID_EMU_native_sync_v4 ANDROID_EMU_dma_v1 ANDROID_EMU_direct_mem ANDROID_EMU_host_composition_v1 ANDROID_EMU_host_composition_v2 ANDROID_EMU_vulkan ANDROID_EMU_deferred_vulkan_commands ANDROID_EMU_vulkan_null_optional_strings ANDROID_EMU_vulkan_create_resources_with_requirements ANDROID_EMU_YUV_Cache ANDROID_EMU_vulkan_ignored_handles ANDROID_EMU_has_shared_slots_host_memory_allocator ANDROID_EMU_vulkan_free_memory_sync ANDROID_EMU_vulkan_shader_float16_int8 ANDROID_EMU_vulkan_async_queue_submit ANDROID_EMU_vulkan_queue_submit_with_commands ANDROID_EMU_sync_buffer_data ANDROID_EMU_read_color_buffer_dma ANDROID_EMU_hwc_multi_configs GL_OES_EGL_image_external_essl3 GL_OES_vertex_array_object GL_KHR_texture_compression_astc_ldr ANDROID_EMU_host_side_tracing ANDROID_EMU_gles_max_version_3_0 \n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Realm sync client ([realm-core-12.3.0])\n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Supported protocol versions: 2-6\n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Platform: Android Linux 5.15.41-android13-8-00055-g4f5025129fe8-ab8949913 #1 SMP PREEMPT Mon Aug 15 18:33:14 UTC 2022 aarch64\n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Build mode: Release\n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: one_connection_per_session = true\n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: connect_timeout = 120000 ms\n2023-02-17 
11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: connection_linger_time = 30000 ms\n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: ping_keepalive_period = 60000 ms\n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: pong_keepalive_timeout = 120000 ms\n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: fast_reconnect_limit = 60000 ms\n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: disable_upload_compaction = false\n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D Config param: disable_sync_to_disk = false\n2023-02-17 11:02:22.055 1589-1756 REALM_SYNC com.ianbm.sportscarnivalpro.android D User agent string: 'RealmSync/12.3.0 (Android Linux 5.15.41-android13-8-00055-g4f5025129fe8-ab8949913 #1 SMP PREEMPT Mon Aug 15 18:33:14 UTC 2022 aarch64) RealmJava/10.11.1 (emu64a, sdk_gphone64_arm64, v33) Unknown'\n2023-02-17 11:02:22.080 1589-1759 REALM_JNI com.ianbm.sportscarnivalpro.android D SyncClient thread created\n2023-02-17 11:02:22.100 1589-1701 HostConnection com.ianbm.sportscarnivalpro.android D createUnique: call\n2023-02-17 11:02:22.100 1589-1701 HostConnection com.ianbm.sportscarnivalpro.android D HostConnection::get() New Host Connection established 0xb40000796ca9cb50, tid 1701\n2023-02-17 11:02:22.127 1589-1701 HostConnection com.ianbm.sportscarnivalpro.android D HostComposition ext ANDROID_EMU_CHECKSUM_HELPER_v1 ANDROID_EMU_native_sync_v2 ANDROID_EMU_native_sync_v3 ANDROID_EMU_native_sync_v4 ANDROID_EMU_dma_v1 ANDROID_EMU_direct_mem ANDROID_EMU_host_composition_v1 ANDROID_EMU_host_composition_v2 ANDROID_EMU_vulkan ANDROID_EMU_deferred_vulkan_commands ANDROID_EMU_vulkan_null_optional_strings ANDROID_EMU_vulkan_create_resources_with_requirements ANDROID_EMU_YUV_Cache ANDROID_EMU_vulkan_ignored_handles ANDROID_EMU_has_shared_slots_host_memory_allocator ANDROID_EMU_vulkan_free_memory_sync ANDROID_EMU_vulkan_shader_float16_int8 ANDROID_EMU_vulkan_async_queue_submit ANDROID_EMU_vulkan_queue_submit_with_commands ANDROID_EMU_sync_buffer_data ANDROID_EMU_read_color_buffer_dma ANDROID_EMU_hwc_multi_configs GL_OES_EGL_image_external_essl3 GL_OES_vertex_array_object GL_KHR_texture_compression_astc_ldr ANDROID_EMU_host_side_tracing ANDROID_EMU_gles_max_version_3_0 \n2023-02-17 11:02:24.158 1589-1760 REALM_JAVA com.ianbm.sportscarnivalpro.android D HTTP Request = \n POST https://ap-southeast-2.aws.realm.mongodb.com/api/client/v2.0/auth/session\n Accept: application/json\n Authorization: Bearer 
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJiYWFzX2RhdGEiOm51bGwsImJhYXNfZGV2aWNlX2lkIjoiNjNhYTk1YzcwZjkyZTEyOGJjNDMzM2I0IiwiYmFhc19kb21haW5faWQiOiI2MTE2ZmQ4Yjk3ZWE2NTFkODk0OGY1NTEiLCJiYWFzX2lkIjoiNjNhYTk1YzcwZjkyZTEyOGJjNDMzM2I1IiwiYmFhc19pZGVudGl0eSI6eyJpZCI6IjYxMzQ1OTIwODI2OTQyZjY0ODdkNGQ1OSIsInByb3ZpZGVyX3R5cGUiOiJhcGkta2V5IiwicHJvdmlkZXJfaWQiOiI2MTE2ZmQ4Yjk3ZWE2NTFkODk0OGY1NGYifSwiZXhwIjoxNjc3MzA3ODQ3LCJpYXQiOjE2NzIxMjM4NDcsInN0aXRjaF9kYXRhIjpudWxsLCJzdGl0Y2hfZGV2SWQiOiI2M2FhOTVjNzBmOTJlMTI4YmM0MzMzYjQiLCJzdGl0Y2hfZG9tYWluSWQiOiI2MTE2ZmQ4Yjk3ZWE2NTFkODk0OGY1NTEiLCJzdGl0Y2hfaWQiOiI2M2FhOTVjNzBmOTJlMTI4YmM0MzMzYjUiLCJzdGl0Y2hfaWRlbnQiOnsiaWQiOiI2MTM0NTkyMDgyNjk0MmY2NDg3ZDRkNTkiLCJwcm92aWRlcl90eXBlIjoiYXBpLWtleSIsInByb3ZpZGVyX2lkIjoiNjExNmZkOGI5N2VhNjUxZDg5NDhmNTRmIn0sInN1YiI6IjYxMzQ1OTIwODI2OTQyZjY0ODdkNGQ1NyIsInR5cCI6InJlZnJlc2gifQ.KiR1rAiwCVncFUYuOOnUxWsH-SiYXgNP3JgLVoJ4bO0\n Content-Type: application/json;charset=utf-8\n2023-02-17 11:02:24.434 1589-1760 TrafficStats com.ianbm.sportscarnivalpro.android D tagSocket(112) with statsTag=0xffffffff, statsUid=-1\n2023-02-17 11:02:24.477 1589-1760 ivalpro.android com.ianbm.sportscarnivalpro.android W Accessing hidden method Lcom/android/org/conscrypt/ConscryptEngineSocket;->setUseSessionTickets(Z)V (max-target-q,core-platform-api, reflection, denied)\n2023-02-17 11:02:24.478 1589-1760 ivalpro.android com.ianbm.sportscarnivalpro.android W Accessing hidden method Lcom/android/org/conscrypt/OpenSSLSocketImpl;->setUseSessionTickets(Z)V (max-target-q,core-platform-api, reflection, denied)\n2023-02-17 11:02:24.478 1589-1760 ivalpro.android com.ianbm.sportscarnivalpro.android W Accessing hidden method Lcom/android/org/conscrypt/AbstractConscryptSocket;->setUseSessionTickets(Z)V (max-target-q, reflection, denied)\n2023-02-17 11:02:24.478 1589-1760 ivalpro.android com.ianbm.sportscarnivalpro.android W Accessing hidden method Lcom/android/org/conscrypt/ConscryptEngineSocket;->setHostname(Ljava/lang/String;)V (max-target-q,core-platform-api, reflection, denied)\n2023-02-17 11:02:24.478 1589-1760 ivalpro.android com.ianbm.sportscarnivalpro.android W Accessing hidden method Lcom/android/org/conscrypt/OpenSSLSocketImpl;->setHostname(Ljava/lang/String;)V (max-target-q,core-platform-api, reflection, denied)\n2023-02-17 11:02:24.478 1589-1760 ivalpro.android com.ianbm.sportscarnivalpro.android W Accessing hidden method Lcom/android/org/conscrypt/AbstractConscryptSocket;->setHostname(Ljava/lang/String;)V (max-target-q, reflection, denied)\n2023-02-17 11:02:24.478 1589-1760 ivalpro.android com.ianbm.sportscarnivalpro.android W Accessing hidden method Lcom/android/org/conscrypt/OpenSSLSocketImpl;->setAlpnProtocols([B)V (max-target-q,core-platform-api, reflection, denied)\n2023-02-17 11:02:24.478 1589-1760 ivalpro.android com.ianbm.sportscarnivalpro.android W Accessing hidden method Lcom/android/org/conscrypt/AbstractConscryptSocket;->setAlpnProtocols([B)V (max-target-q, reflection, denied)\n2023-02-17 11:02:24.604 1589-1760 ivalpro.android com.ianbm.sportscarnivalpro.android W Accessing hidden method Lcom/android/org/conscrypt/OpenSSLSocketImpl;->getAlpnSelectedProtocol()[B (max-target-q,core-platform-api, reflection, denied)\n2023-02-17 11:02:24.604 1589-1760 ivalpro.android com.ianbm.sportscarnivalpro.android W Accessing hidden method Lcom/android/org/conscrypt/AbstractConscryptSocket;->getAlpnSelectedProtocol()[B (max-target-q, reflection, denied)\n2023-02-17 11:02:24.715 1589-1760 REALM_JAVA com.ianbm.sportscarnivalpro.android E Session 
Error[wss://realm.mongodb.com/]: BAD_AUTHENTICATION(realm::sync::ProtocolError:203): Unable to refresh the user access token.\n", "text": "And then, the next time I ran it, I got a different error - BAD_AUTHENTICATION:", "username": "polymath74" }, { "code": "", "text": "Hello, thanks for sending over the logs. From what I can tell, your local Realm may have become corrupted. Did you at any point copy the Realm file and try to re-use it?To fix this, I would recommend terminating and re-enabling Sync. If that is not an option, a pastebin (or similar) containing the full client logs would be useful.Hope this helps!", "username": "Ben_Redmond" } ]
Bad sync progress received
2023-01-20T05:06:37.342Z
Bad sync progress received
1,727
null
[ "react-native", "android" ]
[ { "code": "npm install realm\nnpm install @realm/react\nNODE_OPTIONS=--openssl-legacy-provider expo start --web\nWARNING: The legacy expo-cli does not support Node +17. Migrate to the versioned Expo CLI (npx expo).\n\nThis command is being executed with the global Expo CLI. Learn more.\nTo use the local CLI instead (recommended in SDK 46 and higher), run:\n› npx expo start\n\nStarting project at /home/cyril/Adoption/ReactRealmJSTemplateApp\nStarting Metro Bundler\nStarting Webpack on port 19006 in development mode.\n▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄\n█ ▄▄▄▄▄ █▀█ █▄█▀▀ ▄▀▄▀█ ▄▄▄▄▄ █\n█ █ █ █▀▀▀█ ▀▀ █▀▀█▀█ █ █ █\n█ █▄▄▄█ █▀ █▀▀▄█▀ █▀▄██ █▄▄▄█ █\n█▄▄▄▄▄▄▄█▄▀ ▀▄█ █ █ ▀▄█▄▄▄▄▄▄▄█\n█▄▄▄ █▄▄▄▄▀▄▀ ▄█▀▄▄ ▄▄▀▄▀▄█▄▀█\n█▄▄▀█ █▄█▀█▄█▀▄▄ ▀█ █▄▀▀▄ ██▀██\n█ ▀█▄█ ▄▀▀▄▄█▄▄▄█▀▄▄ ▄▄▀▄▀▄ █▀█\n█ ▄▀▄ ▄▀ ██ ▄▄ ▄ █ ▀▄ ▄▀██▄▀██\n█ █▀█ █▄▀▀▀ ▄▀▀███ ▀▄ ▀█▀▀ █▀█\n█ █ ▀ ▄▀█▄██▀ ▄▄ ▄██▄▀ █▀▄▄▀██\n█▄█▄▄▄█▄█▀█▀█▄█▄█▀▀█▀ ▄▄▄ ▀ █\n█ ▄▄▄▄▄ █▄ █ ▄█▀ █▄█▄ █▄█ ▄▄▀██\n█ █ █ █ ██▄▀▀▀▄ ▄▀█ ▄ ▄▄▄ █\n█ █▄▄▄█ █ ██▀ ▄ ███▄ ▀▄▀ ▄█ ██\n█▄▄▄▄▄▄▄█▄▄▄█▄█▄█▄▄███████▄▄███\n\n› Metro waiting on exp://192.168.1.139:19000\n› Scan the QR code above with Expo Go (Android) or the Camera app (iOS)\n\n› Webpack waiting on http://192.168.1.139:19006\n› Expo Webpack (web) is in beta, and subject to breaking changes!\n\n› Press a │ open Android\n› Press w │ open web\n\n› Press r │ reload app\n› Press m │ toggle menu\n\n› Press ? │ show all commands\n\nLogs for your project will appear below. Press Ctrl+C to exit.\n[object Object]\n[object Object]\n[object Object]\nWARNING in ./node_modules/@realm/react/dist/AppProvider.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/AppProvider.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/AppProvider.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/RealmProvider.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/RealmProvider.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/RealmProvider.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/UserProvider.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/UserProvider.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/UserProvider.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/cachedCollection.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/cachedCollection.ts' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/cachedCollection.ts'\n\nWARNING in ./node_modules/@realm/react/dist/cachedObject.js\nModule Warning (from 
../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/cachedObject.ts' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/cachedObject.ts'\n\nWARNING in ./node_modules/@realm/react/dist/index.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/index.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/index.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/useObject.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useObject.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useObject.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/useQuery.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useQuery.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useQuery.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/useRealm.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useRealm.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useRealm.tsx'\n\nERROR in ./node_modules/bindings/bindings.js:5\nModule not found: Can't resolve 'fs'\n 3 | */\n 4 | \n> 5 | var fs = require('fs'),\n 6 | path = require('path'),\n 7 | fileURLToPath = require('file-uri-to-path'),\n 8 | join = path.join,\n\nERROR in ./node_modules/bindings/bindings.js:6\nModule not found: Can't resolve 'path'\n 4 | \n 5 | var fs = require('fs'),\n> 6 | path = require('path'),\n 7 | fileURLToPath = require('file-uri-to-path'),\n 8 | join = path.join,\n 9 | dirname = path.dirname,\n\nERROR in ./node_modules/file-uri-to-path/index.js:6\nModule not found: Can't resolve 'path'\n 4 | */\n 5 | \n> 6 | var sep = require('path').sep || '/';\n 7 | \n 8 | /**\n 9 | * Module exports.\n\nweb compiled with 3 errors and 9 warnings\nStarted Metro Bundler\n", "text": "Hello,\nI am trying to use Realm in my react-native application\nI have installed the following package :Also followed the explanation from Using Expo and Realm React Native with expo-dev-client | MongoDBI have clone the repository GitHub - mongodb-developer/read-it-later-maybe: Offline-First React Native Mobile App with Expo and Realm and tried to launch the android app but it always crashed.When i execute the app in the web browser i get the following error :What can i do to fix this issue and make realm compatible with react-native and expo ?\nIs it a version issue?(the documentation 
asks for Node 12 or higher)\nIf the version is the issue, which version should i use?Additional information about my versions:\n  Node : v18.12.1\n  expo-cli : 6.3.0\n   “@realm/react”: “^0.4.3”,\n   “expo”: “~47.0.12”,\n   “expo-dev-client”: “~2.0.1”,\n   “metro-core”: “^0.75.0”,\n   “react”: “18.1.0”,\n   “react-native”: “0.70.5”,\n   “realm”: “^11.4.0”,", "username": "cyril_moreau" }, { "code": "set all JAVA_HOME and ANDROID_HOME\nvariables enviroments and install lasted JDK path too, then set\n%ANDROID_HOME%\\emulator\n%ANDROID_HOME%\\tools\n%ANDROID_HOME%\\tools\\bin\n%ANDROID_HOME%\\platform-tools\nI was solved my problem now I have Node -v = 19\nand JDK = OpenJDK 11.0.18+10 (build 11.0.18+10, mixed mode)```", "text": "Hi there, u need dowgrade your NODE Version to 16.x\nthis problem happened cause the module incompatible ssl version.\nalso u can try this, hard mode to solve\ninstall the Expo CLI global", "username": "Genilson_Mess" }, { "code": "[email protected]@[email protected]", "text": "Hello,thank you for your answer, I have also created a ticket in the github repository for realm\nand they gave me a solution :### How frequently does the bug occur?\n\nAlways\n\n### Description\n\nHello,\nI am tr…ying to use Realm in my react-native application using expo.\nI have installed the following package : \n```\nnpm install realm\nnpm install @realm/react\n```\n Also followed the explanation from https://www.mongodb.com/developer/products/realm/using-expo-realm-expo-dev-client/\n\nI have cloned the repository https://github.com/mongodb-developer/read-it-later-maybe and tried to launch the android app but it always crashed.\n\nWhat can i do to fix this issue and make realm compatible with react-native and expo ?\nIs it a version issue?(the documentation asks for Node 12 or higher)\nIf the version is the issue, which version should i use?\n\nAdditional information about my versions: \nNode : v18.12.1\nexpo-cli : 6.3.0\n\"@realm/react\": \"^0.4.3\",\n\"expo\": \"~47.0.12\",\n\"expo-dev-client\": \"~2.0.1\",\n\"metro-core\": \"^0.75.0\",\n\"react\": \"18.1.0\",\n\"react-native\": \"0.70.5\",\n\"realm\": \"^11.4.0\",\n\n### Stacktrace & log output\n\n```shell\nNODE_OPTIONS=--openssl-legacy-provider expo start --web\nWARNING: The legacy expo-cli does not support Node +17. Migrate to the versioned Expo CLI (npx expo).\n\nThis command is being executed with the global Expo CLI. Learn more.\nTo use the local CLI instead (recommended in SDK 46 and higher), run:\n› npx expo start\n\nStarting project at /home/cyril/Adoption/ReactRealmJSTemplateApp\nStarting Metro Bundler\nStarting Webpack on port 19006 in development mode.\n▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄\n█ ▄▄▄▄▄ █▀█ █▄█▀▀ ▄▀▄▀█ ▄▄▄▄▄ █\n█ █ █ █▀▀▀█ ▀▀ █▀▀█▀█ █ █ █\n█ █▄▄▄█ █▀ █▀▀▄█▀ █▀▄██ █▄▄▄█ █\n█▄▄▄▄▄▄▄█▄▀ ▀▄█ █ █ ▀▄█▄▄▄▄▄▄▄█\n█▄▄▄ █▄▄▄▄▀▄▀ ▄█▀▄▄ ▄▄▀▄▀▄█▄▀█\n█▄▄▀█ █▄█▀█▄█▀▄▄ ▀█ █▄▀▀▄ ██▀██\n█ ▀█▄█ ▄▀▀▄▄█▄▄▄█▀▄▄ ▄▄▀▄▀▄ █▀█\n█ ▄▀▄ ▄▀ ██ ▄▄ ▄ █ ▀▄ ▄▀██▄▀██\n█ █▀█ █▄▀▀▀ ▄▀▀███ ▀▄ ▀█▀▀ █▀█\n█ █ ▀ ▄▀█▄██▀ ▄▄ ▄██▄▀ █▀▄▄▀██\n█▄█▄▄▄█▄█▀█▀█▄█▄█▀▀█▀ ▄▄▄ ▀ █\n█ ▄▄▄▄▄ █▄ █ ▄█▀ █▄█▄ █▄█ ▄▄▀██\n█ █ █ █ ██▄▀▀▀▄ ▄▀█ ▄ ▄▄▄ █\n█ █▄▄▄█ █ ██▀ ▄ ███▄ ▀▄▀ ▄█ ██\n█▄▄▄▄▄▄▄█▄▄▄█▄█▄█▄▄███████▄▄███\n\n› Metro waiting on exp://192.168.1.139:19000\n› Scan the QR code above with Expo Go (Android) or the Camera app (iOS)\n\n› Webpack waiting on http://192.168.1.139:19006\n› Expo Webpack (web) is in beta, and subject to breaking changes!\n\n› Press a │ open Android\n› Press w │ open web\n\n› Press r │ reload app\n› Press m │ toggle menu\n\n› Press ? 
│ show all commands\n\nLogs for your project will appear below. Press Ctrl+C to exit.\n[object Object]\n[object Object]\n[object Object]\nWARNING in ./node_modules/@realm/react/dist/AppProvider.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/AppProvider.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/AppProvider.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/RealmProvider.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/RealmProvider.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/RealmProvider.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/UserProvider.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/UserProvider.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/UserProvider.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/cachedCollection.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/cachedCollection.ts' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/cachedCollection.ts'\n\nWARNING in ./node_modules/@realm/react/dist/cachedObject.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/cachedObject.ts' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/cachedObject.ts'\n\nWARNING in ./node_modules/@realm/react/dist/index.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/index.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/index.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/useObject.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useObject.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useObject.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/useQuery.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed 
to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useQuery.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useQuery.tsx'\n\nWARNING in ./node_modules/@realm/react/dist/useRealm.js\nModule Warning (from ../../.nvm/versions/node/v18.12.1/lib/node_modules/expo-cli/node_modules/source-map-loader/dist/cjs.js):\nFailed to parse source map from '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useRealm.tsx' file: Error: ENOENT: no such file or directory, open '/home/cyril/Adoption/ReactRealmJSTemplateApp/node_modules/@realm/react/src/useRealm.tsx'\n\nERROR in ./node_modules/bindings/bindings.js:5\nModule not found: Can't resolve 'fs'\n 3 | */\n 4 | \n> 5 | var fs = require('fs'),\n 6 | path = require('path'),\n 7 | fileURLToPath = require('file-uri-to-path'),\n 8 | join = path.join,\n\nERROR in ./node_modules/bindings/bindings.js:6\nModule not found: Can't resolve 'path'\n 4 | \n 5 | var fs = require('fs'),\n> 6 | path = require('path'),\n 7 | fileURLToPath = require('file-uri-to-path'),\n 8 | join = path.join,\n 9 | dirname = path.dirname,\n\nERROR in ./node_modules/file-uri-to-path/index.js:6\nModule not found: Can't resolve 'path'\n 4 | */\n 5 | \n> 6 | var sep = require('path').sep || '/';\n 7 | \n 8 | /**\n 9 | * Module exports.\n\nweb compiled with 3 errors and 9 warnings\nStarted Metro Bundler\n```\n```\n\n\n### Can you reproduce the bug?\n\nAlways\n\n### Reproduction Steps\n\n1) Install expo-cli and node\n2) Clone the repository https://github.com/mongodb-developer/read-it-later-maybe \n3) npx expo start\n4) app crashes\n\n### Version\n\n\"realm\": \"^11.4.0\", and \"@realm/react\": \"^0.4.3\",\n\n### What services are you using?\n\nAtlas App Services: Functions or GraphQL or DataAPI etc\n\n### Are you using encryption?\n\nNo\n\n### Platform OS and version(s)\n\nUbuntu 22.04.1 LTS\n\n### Build environment\n\n\"react-native\": \"0.70.5\",\n\n\n### Cocoapods version\n\n_No [email protected] is only compatible with expo@48 and [email protected] . You will have to either downgrade realm or upgrade expo . Here is our compatibility chart.Best regards", "username": "cyril_moreau" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm with react native not working (expo
2023-02-14T00:38:47.147Z
Realm with react native not working (expo
3,716
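Based only on the compatibility statement quoted in the accepted answer above ([email protected] works with expo@48 and [email protected]), a matching dependency set might look like the sketch below; the exact ranges are illustrative and should be confirmed against the SDK's published compatibility chart:

```json
{
  "dependencies": {
    "expo": "~48.0.0",
    "react": "18.2.0",
    "react-native": "0.71.0",
    "realm": "^11.4.0",
    "@realm/react": "^0.4.3"
  }
}
```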
null
[ "aggregation" ]
[ { "code": "{\n \"_id\": ObjectId(\"...\"),\n \"associatedTenants\": [{\n \"tenantId\": ObjectId(\"A\"),\n \"role\": \"foo\"\n }, {\n \"tenantId\": ObjectId(\"B\"),\n \"role\": \"bar\"\n }]\n}\n{\n \"_id\": ObjectId(\"A\"),\n \"name\": \"Lorem\"\n}\ntenanttenantId{\n \"_id\": ObjectId(\"...\"),\n \"associatedTenants\": [{\n \"tenantId\": ObjectId(\"A\"),\n \"role\": \"foo\",\n \"tenant\": {\n \"name\": \"Lorem\"\n }\n }, {\n \"tenantId\": ObjectId(\"B\"),\n \"role\": \"bar\",\n \"tenant\": {\n \"name\": \"Ipsum\"\n }\n }]\n}\n$lookup$project$map$mergeObject$arrayElemAt$lookup$lookup: {\n from: \"tenants\",\n localField: \"associatedTenants.tenantId\",\n foreignField: \"_id\",\n as: \"aggregatedAssociatedTenants\"\n}\naggregatedAssociatedTenantsassociatedTenants$lookup", "text": "I have 2 collections (users and tenants). Simplified, the user looks like this:Simplified tenant model looks like this:And I’d need to query a user with aggregated tenants, so ideally add a new field tenant next to the tenantId with the actual tenant document, like this:I know it can be done - it seems so simple, but I’ve been banging my head agains the wall because of it So far I’ve tried various combinations of $lookup, $project with $map, $mergeObject with $arrayElemAt, but nothing seems to cut it… The closest answer I cloud find was this: https://stackoverflow.com/questions/60342985, but I just don’t seem to be able to adapt it. Can anyone see a way, please? Btw. (a bit unrelated) - if I $lookup the tenants and save them into a new field:The order of aggregatedAssociatedTenants is different than the order of associatedTenants. Doesn’t $lookup preserve the order of the original array?", "username": "Patrik_Simunic" }, { "code": "{_id:123}{_id:\"abc\"}", "text": "have you check the documentation? About half the page starts the examples.\n$lookup (aggregation) — MongoDB ManualIt would be better if you give 2-3 sample documents of each collection and your expected output from them. Replace all ObjectId with a number or simple string: {_id:123} or {_id:\"abc\"} (flexible types). Also add 1-2 more fields if you are not doing full merges. 
This will enable us to work on real (sample) data and find a solution faster.", "username": "Yilmaz_Durmaz" }, { "code": "$lookup$lookup$replaceRoot$mergeObjects$arrayElemAtusertenant{\n \"_id\": ObjectId(\"63732cb4919eee473ec5cec7\"),\n \"email\": \"[email protected]\",\n \"associatedTenants\": [{\n \"tenantId\": ObjectId(\"636ad8dd185b079cf4cc3014\"),\n \"role\": \"foo\"\n }, {\n \"tenantId\": ObjectId(\"636a8ba3185b079cf4cc3012\"),\n \"role\": \"bar\"\n }]\n}\n[{\n \"_id\": ObjectId(\"636ad8dd185b079cf4cc3014\"),\n \"name\": \"Lorem\",\n \"domains\": [{ \"name\": \"lorem.example.com\" }]\n}, {\n \"_id\": ObjectId(\"636a8ba3185b079cf4cc3012\"),\n \"name\": \"Ipsum\",\n \"domains\": [{ \"name\": \"ipsum.example.com\" }]\n}]\n{\n \"_id\": ObjectId(\"63732cb4919eee473ec5cec7\"),\n \"email\": \"[email protected]\",\n \"associatedTenants\": [{\n \"tenantId\": ObjectId(\"636ad8dd185b079cf4cc3014\"),\n \"role\": \"foo\",\n \"tenant\": {\n \"_id\": ObjectId(\"636ad8dd185b079cf4cc3014\"),\n \"name\": \"Lorem\",\n \"domains\": [{ \"name\": \"lorem.example.com\" }]\n }\n }, {\n \"tenantId\": ObjectId(\"636a8ba3185b079cf4cc3012\"),\n \"role\": \"bar\",\n \"tenant\": {\n \"_id\": ObjectId(\"636a8ba3185b079cf4cc3012\"),\n \"name\": \"Ipsum\",\n \"domains\": [{ \"name\": \"ipsum.example.com\" }]\n }\n }]\n}\ntenanttenanttenantId", "text": "Yes, I have seen all the examples on the $lookup stage Manual page. That’s how got to try variations on $lookup + $replaceRoot + $mergeObjects + $arrayElemAt. Didn’t help… (I wouldn’t be asking if a haven’t spent a good amount of time reading the docs, looking for answer and trying it on my own first.)Alright, both the user and tenant schemas are exactly as I have them in db and the desired result is also exactly as it should be, I just removed some unrelated fields and ObjectIds for better readability. I can repost it with actual ObjectIds:\nUser document:Tenant documents:Desired output:I’ve added real ObjectIds. I’ve also added some totally unrelated fields as well (although I don’t see the point - as showcased in the original post, there’s no merge, only adding a new field tenant with the full tenant document next to the tenantId - but in each item of array, that’s the point I can’t seem to get around).Thank you ", "username": "Patrik_Simunic" }, { "code": "{\"$oid\":\"63732cb4919eee473ec5cec7\"}", "text": "We have users from all backgrounds and at all levels and we are not psychics, so please bear with us and do not expect anyone to know everything about you at first glance. especially, when you are new to a community, this is something unavoidable.and here is why I had those requests:The solution might come in a very simple form, but I can’t promise I will be the one to deliver it. 
Also, if you tackle it yourself before we do, please share it with the rest of the community.PS: also in JSON format, it should be {\"$oid\":\"63732cb4919eee473ec5cec7\"} to be easily inserted into Atlas to test, for example.", "username": "Yilmaz_Durmaz" }, { "code": "[\n {\n \"$unwind\": {\n \"path\": \"$associatedTenants\"\n }\n },\n {\n \"$lookup\": {\n \"from\": \"tenant\",\n \"foreignField\": \"_id\",\n \"localField\": \"associatedTenants.tenantId\",\n \"let\": {\n \"tId\": \"$associatedTenants.tenantId\",\n \"tRole\": \"$associatedTenants.role\"\n },\n \"pipeline\": [\n {\n \"$project\": {\n \"_id\": 0,\n \"tenantId\": \"$_id\",\n \"role\": \"$$tRole\",\n \"tenant\": \"$$ROOT\"\n }\n }\n ],\n \"as\": \"associatedTenants\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$associatedTenants\"\n }\n },\n {\n \"$group\": {\n \"_id\": \"$_id\",\n \"associatedTenants\": {\n \"$push\": \"$associatedTenants\"\n }\n }\n }\n]\n", "text": "Ok, Took me a bit but here it is. As I said, your choice of words, especially “tried everything” confuses us about your level of understanding. So let me explain it so to improve you on what you might have missed on $lookup.And the tricky part is this:PS: a solution to a similar problem seems to exists here: mongodb - $lookup on ObjectId's in an array - Stack Overflow", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Sorry for temporarily deleting my above post, if you were around at that time. It had a hiccup I failed to see.I corrected the problem and edited it. However, keep in mind the solution may not work if you change your field types too far from your example documents. if you will, then follow my explanation to adapt it to new situations or try searching the internet again with your now increased knowledge.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "You’re amazing. Thank you! Especially for the explanation, that’s more valuable as I prefer to understand it than just “blindly” solve it.And yeah, you’re right (object ids, additional fields, everything), I’m sorry if I sounded antagonistic or ungrateful… Thanks again for the help.", "username": "Patrik_Simunic" }, { "code": "", "text": "Thanks, as long as we keep understanding of each other, we are cool By the way, I went ahead and asked on another topic if we can do it without $unwind. Though I am a bit lost in the answer, there is another approach you can try (you will need to adapt field names):See you around ", "username": "Yilmaz_Durmaz" }, { "code": "$indexOfArray$arrayElemAt$filter$first$indexOfArraytenantId$arrayElemAtnamedb.users.aggregate([\n {\n \"$lookup\": {\n \"from\": \"tenants\",\n \"localField\": \"associatedTenants.tenantId\",\n \"foreignField\": \"_id\",\n \"as\": \"tenants\"\n }\n },\n {\n \"$addFields\": {\n \"associatedTenants\": {\n \"$map\": {\n \"input\": \"$associatedTenants\",\n \"in\": {\n \"$mergeObjects\": [\n \"$$this\",\n {\n \"tenant\": {\n \"name\": {\n \"$arrayElemAt\": [\n \"$tenants.name\",\n { \"$indexOfArray\": [\"$tenants._id\", \"$$this.tenantId\"] }\n ]\n }\n }\n }\n ]\n }\n }\n },\n \"tenants\": \"$$REMOVE\"\n }\n }\n])\n", "text": "By the way, I went ahead and asked on another topic if we can do it without $unwind. 
Though I am a bit lost in the answer, there is another approach you can try (you will need to adapt field names):Similar approach but as per the expected result, you can also use $indexOfArray and $arrayElemAt instead of $filter and $first,Playground", "username": "turivishal" }, { "code": "", "text": "Thanks guys!\nBoth solutions work quite nicely, but think I’ll go with turivishal’s approach as it’s surprisingly simple (and requires slightly less tuning - the need to explicitly select tenant’s fields is also nice as I was gonna $project it anyway).", "username": "Patrik_Simunic" }, { "code": "", "text": "Hey,Thanks for the example. I’m trying something similar, but I need the entire document, not the subset. So in this example, I would need the entire tenant document. I’m just getting to grips with the aggregation pipeline and could do with some help.Appreciated,James", "username": "James_N_A2" }, { "code": "\"$arrayElemAt\": [\n \"$tenants.name\",\n { \"$indexOfArray\": [\"$tenants._id\", \"$this.tenantId\"] }\n ]\n", "text": "I think this is pretty easy, actually. I just use arrayElemAt without the name prefix.", "username": "James_N_A2" }, { "code": "", "text": "H @James_N_A2Here, we use “$project” in the pipeline of the “$lookup” to select fields. If you want the whole document, you can use just a “$match” stage instead, and use the “tenantId” we define by “let” before entering the pipeline.It is also possible without “let” and “pipeline”. check the usual examples for this way.In case you fail, then please open a new topic, give a minimal dataset to try and the expected result, along with what you tried so far, and also put a link to this post so others can see where else you looked.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hey,Thanks for the reply! I’ve created a new post here:Hopefully, that shines some light on what I’m trying to do.J", "username": "James_N_A2" } ]
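A footnote on James's follow-up above: to embed the whole tenant document rather than a single field, the same $addFields/$map trick works if $arrayElemAt is pointed at the looked-up array itself instead of one of its projected fields — which is what "arrayElemAt without the name prefix" amounts to. A minimal sketch, reusing the field names from the examples above:

```js
// after the $lookup that produced the "tenants" array
{
  "$addFields": {
    "associatedTenants": {
      "$map": {
        "input": "$associatedTenants",
        "in": {
          "$mergeObjects": [
            "$$this",
            {
              "tenant": {
                "$arrayElemAt": [
                  "$tenants",   // whole documents, not "$tenants.name"
                  { "$indexOfArray": ["$tenants._id", "$$this.tenantId"] }
                ]
              }
            }
          ]
        }
      }
    }
  }
}
```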
Lookup & populate objects in array
2022-11-24T17:00:47.239Z
Lookup &amp; populate objects in array
10,028
null
[ "aggregation" ]
[ { "code": "db.SAMPLE_COLLECTION.insertMany([\n\t{ list: [ 1, 2, 3, 4, 5, 6, 7, 8 ] },\n\t{ list: [ 1, 2, 3, 4, 5 ] },\n\t{ list: [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ] }\n])\n{ _id: 1, list: [ 1, 2, 3, 4, 5, 6, 7, 8 ] },\n{ _id: 2, list: [ 1, 2, 3, 4, 5 ] },\n{ _id: 3, list: [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ] }\n{ _id: 1, list: [ 1, 4, 7 ] },\n{ _id: 2, list: [ 2, 3, 5] },\n{ _id: 3, list: [ 1, 6, 11 ] }\ndb.SAMPLE_COLLECTION.aggregate([\n\t{ $unwind: '$list' },\n\t{ $sample: { size: 3 } }\n])\n", "text": "Experiment SetupProduces a collection that looks somewhat like thisI want my output to look something like this (the exact numbers in each list don’t matter, so long as there are 3 randomly sampled numbers)I want to sample 3 items from each list. I’m trying to figure out how to do this with $unwind and $sample in an aggregation pipeline.If I tryI will only get 3 results back total, instead of 3 results per list. I can see how I might do this if I processed only one document at a time, but I’m not sure how to process only one document at a time in an aggregation pipeline.Any suggestions?", "username": "Matt_Young" }, { "code": "", "text": "You may use $rand to generate 3 random array indexes.You would then use $filter on list to get the values that correspond to the random indexes.", "username": "steevej" } ]
$sample from an internal document list
2023-02-21T21:56:28.967Z
$sample from an internal document list
417
null
[ "queries", "node-js", "data-modeling", "transactions" ]
[ { "code": "const statement = await user.find({$and:[{\"Date\":{$gte:1676590789236}},{\"Date\":{$lte:1676590908291}}]})", "text": "How can i get the details(e.g transaction history) of a particular user with time range inside mongodb database, I was able to get the time range transfer history for all my user like this :\nconst statement = await user.find({$and:[{\"Date\":{$gte:1676590789236}},{\"Date\":{$lte:1676590908291}}]})\nbut not for single user history, I will be more than grateful if anybody can help , Thanks", "username": "Emmanuel_Oluwatimilehin" }, { "code": "", "text": "With so little information about your schema, the only thing we can say is that simply add the clause to your $and that uniquely identify your user.", "username": "steevej" } ]
Account statement of a user
2023-02-20T14:02:31.859Z
Account statement of a user
814
null
[ "node-js", "mongoose-odm" ]
[ { "code": "MongoError: (Unauthorized) not authorized on admin to execute command { find: \"sections\", filter: { }, projection: { }, lsid: { id: {4 [23 39 50 0 26 28 73 245 162 202 224 56 75 120 116 112]} }, $clusterTime: { clusterTime: {1676817340 1020}, signature: { hash: {0 [27 218 228 177 190 112 145 244 152 40 238 254 19 76 223 89 148 194 219 33]}, keyId: 7155525947335114752.000000 } }, $db: \"admin\" }\n at Connection.<anonymous> (/Users/filipwieselgren/Desktop/FED/improveme-bot/node_modules/mongoose/node_modules/mongodb/lib/core/connection/pool.js:453:61)\n at Connection.emit (node:events:513:28)\n at processMessage (/Users/filipwieselgren/Desktop/FED/improveme-bot/node_modules/mongoose/node_modules/mongodb/lib/core/connection/connection.js:456:10)\n at TLSSocket.<anonymous> (/Users/filipwieselgren/Desktop/FED/improveme-bot/node_modules/mongoose/node_modules/mongodb/lib/core/connection/connection.js:625:15)\n at TLSSocket.emit (node:events:513:28)\n at addChunk (node:internal/streams/readable:324:12)\n at readableAddChunk (node:internal/streams/readable:297:9)\n at Readable.push (node:internal/streams/readable:234:10)\n at TLSWrap.onStreamRead (node:internal/stream_base_commons:190:23) {\n ok: 0,\n code: 8000,\n codeName: 'AtlasError'\n}\n", "text": "Hi,I’m new when it comes to use MongoDB Atlas and have run into this error:I have found similar topics here but not something that helped. Anyone that might be able to help?Kind regards", "username": "Filip_Wieselgren" }, { "code": "MongoError: (Unauthorized) not authorized on admin to execute command", "text": "Hi @Filip_Wieselgren and welcome to the MongoDB community forum!!MongoError: (Unauthorized) not authorized on admin to execute commandThere could be multiple reasons if the above error message have been observed. To help you with the correct solution, could you provide the following information regarding the deployment?Please note that, if you are using the above query on the admin database, this activity is restricted from the MongoDB server, as the Admin, local, config are used for the internals of MongoDB.Also, for shared tiers(M0/M2/M5), there might be some operation limitations which you can refer for further understanding.Let us know if you have any further questions.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "MongoError: (Unauthorized) not authorized on admin to execute command { find: \"sections\", filter: { }, ..., $db: \"admin\" }mongodb+srv://<user>:<pass>@xxx.mongodb.net/MongoClientadmin$db: \"admin\"mongodb+srv://<user>:<pass>@xxx.mongodb.net/myDatabasemyDatabase.sectionsadmin.sectionsadmincreateUser", "text": "@Filip_Wieselgren,MongoError: (Unauthorized) not authorized on admin to execute command { find: \"sections\", filter: { }, ..., $db: \"admin\" }The issue appears to be that you’re trying to query a namespace you don’t have access to. 
If you connected to your cluster using a connection string such as mongodb+srv://<user>:<pass>@xxx.mongodb.net/ the database the MongoClient will try to use will be the admin database (by default when authentication is used).You can see this in the error you provided ($db: \"admin\"), so the easiest way to address this is to provide a database in the connection string (mongodb+srv://<user>:<pass>@xxx.mongodb.net/myDatabase).By adding a database name to the connection string the query you were attempting will target myDatabase.sections, which your user should have full access to as opposed to admin.sections, which would be restricted (as it is in the admin database).The Mongoose documentation alludes to this as well.As you’re new to MongoDB and MongoDB Atlas it’s worth noting there are Unsupported Commands in Atlas (such as createUser) that may be available in standalone deployments.", "username": "alexbevi" } ]
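In code, the fix described above is only a matter of naming the database in the URI; the host and credentials below are placeholders:

```js
const mongoose = require("mongoose");

// note the database name after the final slash
await mongoose.connect("mongodb+srv://<user>:<pass>@cluster0.xxxxx.mongodb.net/myDatabase");

// a model query such as Section.find({}) — model name illustrative — now targets
// myDatabase.sections instead of admin.sections
```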
MongoError: (Unauthorized) not authorized on admin to execute command { find: "sections", filter:
2023-02-19T14:41:20.568Z
MongoError: (Unauthorized) not authorized on admin to execute command { find: "sections", filter:
4,681
null
[]
[ { "code": "", "text": "I have a database where other collections’ performance is degrading my collections. I wonder if I should migrate my collections to a different database or a different cluster? Purely from the performance standpoint, what goes into this decision?As an off-topic, how does this affect the cost, on Atlas?", "username": "Irina_R" }, { "code": "performance$lookup", "text": "Hello @Irina_R,Welcome to The MongoDB Community Forums! To understand your use-case better, can you please confirm below details?I have a database where other collections’ performance is degrading my collections.Moving your collection to a new Cluster depends on your use-case and requirements. I don’t think migrating collection to a different database within the same deployment will make any difference as the hardware resources provided are same and database is just a name space, on the other hand, if the collections are somewhat related or needed some joining then moving them to a new database will create another challenges such as: $lookup cannot be used. If you require higher levels of performance that cannot be achieved within the current cluster, then a new cluster may be necessary. Migrating to a new cluster could change your billing situation, however it’s difficult to say how much this will change without further information.To learn more about performance in MongoDB, I would recommend you to go through below links:MongoDB performs best when the application's indexes and frequently accessed data fits in memory. Performance issues may indicate that the database is operating at capacity and that it is time to add additional capacity to the database.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Should I migrate to a new database or a new cluster?
2023-02-16T19:35:25.857Z
Should I migrate to a new database or a new cluster?
582
null
[ "golang", "field-encryption" ]
[ { "code": "", "text": "Is anyone facing this issue while trying to go build with cse tags for CSFLE ?CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build -installsuffix cgo -tags cse -o /Users/techno/go/src/github.com/100mslive/rpcaccounts/bin/server /Users/techno/go/src/github.com/100mslive/rpcaccounts/cmd/server/main.go…/…/…/…/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/crypt.go:44:35: undefined: mongocrypt.MongoCrypt\n…/…/…/…/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/crypt.go:82:25: undefined: mongocrypt.MongoCrypt\n…/…/…/…/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/crypt.go:250:79: undefined: mongocrypt.Context\n…/…/…/…/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/crypt.go:278:74: undefined: mongocrypt.Context\n…/…/…/…/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/crypt.go:297:71: undefined: mongocrypt.Context\n…/…/…/…/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/crypt.go:314:72: undefined: mongocrypt.Context\n…/…/…/…/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/crypt.go:334:50: undefined: mongocrypt.Context\n…/…/…/…/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/crypt.go:349:47: undefined: mongocrypt.KmsContext\n…/…/…/…/pkg/mod/go.mongodb.org/[email protected]/x/mongo/driver/crypt.go:453:79: undefined: mongocrypt.Contextgo 1.19\ngo.mongodb.org/mongo-driver v1.11.2", "username": "Aakash_Bajaj" }, { "code": "CGO_ENABLED=0CGO_ENABLED=1", "text": "Hey @Aakash_Bajaj welcome and thanks for the question! I suspect your issue is related to disabling CGO with the environment variable CGO_ENABLED=0 in your Go build command. The CSFLE feature in the MongoDB Go driver requires the libmongocrypt C extension, so requires CGO to be enabled.Are you able to build if you include CGO_ENABLED=1 instead?", "username": "Matt_Dale" }, { "code": "", "text": "Hey @Matt_Dale. Thanks for the response.\nYeah CGO_ENABLED=1 fixed the issue.", "username": "Aakash_Bajaj" } ]
Error with CSFLE mongodb golang drivers
2023-02-16T22:27:01.510Z
Error with CSFLE mongodb golang drivers
1,289
null
[ "queries", "time-series" ]
[ { "code": "", "text": "The count of the time series collection slows down. Is there a good way to optimize it?", "username": "yuan_wei" }, { "code": "", "text": "Hello @yuan_wei ,Welcome to The MongoDB Community Forums! The count of the time series collection slows down. Is there a good way to optimize it?Please refer Best Practices for Time Series Collections to improve performance and data usage for time series collections.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Timing Collection Minor Bugs
2023-02-13T02:41:45.887Z
Timing Collection Minor Bugs
865
null
[ "data-modeling" ]
[ { "code": "", "text": "I am a big fan of MongoDB, I have done couple of projects for my clients. Now, I need to design, (then may be implement) a closed-community social network…I cannot get get my mind around with data structure using MongoDB… Posts, likes, friends, permissions, privacy … so many relations… Do you think, is MongoDB good for creating a social network?Any Ideas? Any feedback is highly appreciatedThank you in advance.", "username": "coderkid" }, { "code": "", "text": "Hello @coderkid, I have some general comments about building a new application and using a database.One of the first steps in figuring the database you want to use in an application, is to figure the data and the application. The data with its attributes, the size, etc., and the relationships between various entities - the one-one, one-to-many and many-to-many. Then the application functionality - the queries (crud operations), and the associated user interface. These are some things one can do as a process or informally. These are often called as data modeling, application design, etc.The next part would be the tools, the database (e.g., MongoDB), the application platform, programming languages, etc.These are things one has to work with to build an application. Most of these aspects one cannot avoid, they show up in one form or other at different stages of application building.I hope you get a good start with all these processes and tools. MongoDB should be one of the top choices, as it provides flexibility in data design, deployments and allows quick development and prototyping.", "username": "Prasad_Saya" }, { "code": "", "text": "How you define “good” ? Do you mean the performance/applicability of joins across collections or application logic complexity?All those general purpose databases can be used to store the things you mentioned, but some are good at this and some are good at others. To name one, sql databases provide powerful joins and integrity checks (e.g. constraints) but normally nosql databases don’t, but on the other handle, it is said nosql databases without strong ACID in mind, can scale a bit better horizontally.It’s more important to understand what you want to do and what the underlying storage server can provide.", "username": "Kobe_W" } ]
Is MongoDB good for creating a social network?
2021-10-21T17:27:01.728Z
Is MongoDB good for creating a social network?
3,550
null
[ "aggregation" ]
[ { "code": "", "text": "I am working in an accounting related where project where I need query in five collection and get filtered data. Is this possible in MongoDB and I won’t use $lookup.", "username": "Moyen_Islam" }, { "code": "", "text": "Hey @Moyen_Islam ,I think we’ve chatted about Data Federation before, is there a reason it would not work here?You should be able to create a virtual collection in Atlas Data Federation and then set the 5 collections as “Data Sources” for that virtual collection.Best,\nBen", "username": "Benjamin_Flast" }, { "code": "", "text": "Thank you very much. I got what I actually wanted. Thank you for your help.", "username": "Moyen_Islam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to query in multiple collections by a single query
2023-02-21T13:15:46.084Z
How to query in multiple collections by a single query
518
null
[ "flutter" ]
[ { "code": "", "text": "Is realm.all() block UI because it is not an async function?", "username": "Th_Nga_Ninh" }, { "code": "", "text": "Not likely, but it depends on the use case - e.g. in what context is that being used? Can you supply some code?", "username": "Jay" }, { "code": "realm.all<T>()Trealm.query<T>('TRUEPREDICATE')RealmResultsrealm.all<T>().map((t) => t.readAllPropsAndCreateNonRealmObject()).toList()T", "text": "Yes, as @Jay says realm.all<T>() is fast. It does not, as you may think, fetch all Ts from the realm immediately. It sets up a query semantically similar to realm.query<T>('TRUEPREDICATE') and returns a RealmResults object. This is lazy so you can iterate over it, or index into it, paying only for the io-ops needed.Of course if you do something like realm.all<T>().map((t) => t.readAllPropsAndCreateNonRealmObject()).toList() then you are explicitly asking to eagerly fetch every single property of every single T in your database. Depending on the number of objects that may be expensive, so don’t do that.", "username": "Kasper_Nielsen1" }, { "code": "ListViewrealm.all<T>()", "text": "@Th_Nga_Ninh I previously gave an example of how to efficiently work with the results of a query in combination with a flutter ListView (in particular realm.all<T>()) here: Realm Query Pagination · Issue #1058 · realm/realm-dart · GitHub if that is useful.", "username": "Kasper_Nielsen1" }, { "code": "", "text": "Thanks for your answer.", "username": "Th_Nga_Ninh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does realm.all() block the UI because it is not an async function?
2023-02-18T17:43:50.386Z
Does realm.all() block the UI because it is not an async function?
1,262
null
[ "java", "mongocli" ]
[ { "code": "", "text": "I’m currently undertaking the developer path (JAVA) and on Atlas Search module for static index creation. On the lab exercise, I authenticated with my atlas account but while checking the solution it gives me error “Incorrect solution”, have tried several time without any luck.Any advise on how to correct this error and proceed ?", "username": "Vinod_Kartha" }, { "code": "", "text": "Hello. I have the exact issue. I’ve tried creating a new account (with new email address), but still, nothing seems to work.I’m also following the JAVA path.I was doing the tutorials over a year ago and noticed they updated the videos and the layout/format. It’s not as pleasing to the eye and user-friendly as it used to be… perhaps no one bothered to double check functionality of Lab 1.", "username": "Livia_28581" }, { "code": "", "text": "Hi Livia -I was able to resolve it and proceed.Turns out you need to create the aggregation pipeline as mentioned in the lab intro page. Once you have done and authenticated your account you’ll be able to continue with the lab exercise and continue without issues.Hope it helps!Thanks,\nVinod", "username": "Vinod_Kartha" } ]
Unable to proceed after authenticating atlas account with atlas cli
2023-01-29T02:55:24.226Z
Unable to proceed after authenticating atlas account with atlas cli
1,225
null
[ "data-modeling" ]
[ { "code": "", "text": "I will need to do a project where I will have to generate and track a lot of serialnumbers for physical items (millions).I am restricted to use 20 characters for the serial number (a-Z0-9). I would like to use the serial numbers as a natural id in MongoDb.As ObjectId is currently 24characters it will be too long to use directly. How would you suggest that I generate id’s that is efficient for mongodb to index? I’m guessing that if I just generate random strings (some subset of GUID), the index will be too scattered to be able to get indexed properly right?How would you suggest that I generate efficient id’s?\nMy current solution is:\n[itemType: 2byte][timestamp: 4bytes][random: 4byte]\n(There might be 10-20 different itemTypes)", "username": "Brian_Christensen1" }, { "code": "_id", "text": "Hi @Brian_Christensen1 welcome to the community!The _id field serves as the primary key for the collection, and if you don’t supply one, an ObjectId will be automatically generated for you.if I just generate random strings (some subset of GUID), the index will be too scattered to be able to get indexed properly right?If by “scattered” you mean that the documents will be stored inefficiently, then this is not the case. It doesn’t matter if the _id field is random or sequential. As long as they’re unique for the collection, that is.Hope this helps.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thank’s for your answer.\nA couple of years ago I wanted to use UUID’s as primary keys in mysql, but I discovered that it had a hard time indexing the keys efficiently because the values was so “random”. I don’t know how the indexing works in details, but from what you write, it seems like it shouldn’t be a problem?", "username": "Brian_Christensen1" }, { "code": "ObjectId_id", "text": "Hi @Brian_Christensen1I cannot comment on how MySQL does its data storage, but in MongoDB a random string primary key is not a problem. The default ObjectId is semi-random. In fact, in the documentation I linked to, UUID is one example of a common choice for _id.In the long term, schema design and how the data is used is more important, in my opinion.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
20 characters id for natural id (index question)
2023-02-20T07:12:30.606Z
20 characters id for natural id (index question)
685
null
[ "atlas-functions", "atlas", "app-services-cli" ]
[ { "code": "realm-cli push", "text": "I have an error message that I do know know who to fix regarding Mongodb Realm CLI. https://www.mongodb.com/docs/atlas/app-services/cli/realm-cli-apps-init/When I write the following command in Terminal:realm-cli apps init -n testI get the error message “app init setup failed: a project already exists”I have already had a project name “test” but I have deleted it (Simply deleting the folder which might have been the mistake) but I still get the error message. The error occurs always, no matter the name or path/folder at the moment.if realm-cli push is used it seems to use the old “test” application since the name is filled out when going through the [options] https://docs.mongodb.com/realm/cli/realm-cli-push/If I push the application it will deploy the test application and if deleted through either CLI or GUI it returns to the first problem mention at the start.Where to go from here? Is the application somehow stored as a draft or something making it impossible for me to create another before its discarded or am I missing something?", "username": "Sam_Lanza" }, { "code": "", "text": "Interestingly enough, this issue only happens on my D:\\ drive in Windows. No matter what directory I go to in the D drive, it gives me that “App init setup failed…” error. When I go to the C drive, however, I don’t have this issue. I can create multiple projects in different directories with the “realm-cli app init” command. Any idea what’s causing this behavior?", "username": "Sam_Lanza" }, { "code": "realm_config.json{\n \"config_version\": 20210101,\n \"name\": \"test\",\n \"location\": \"US-VA\",\n \"deployment_model\": \"GLOBAL\"\n}\nauth\ndata_sources\nenvironments\nfunctions\ngraphql\nhttp_endpoints\nservices\nsync\nvalues\n", "text": "when you use the command to init an app, in the folder you are in, a file named realm_config.json is created with content similar to this:This file marks the current folder and all descendant folders as part of the application. It seems you forgot to change to the folder you intended to use and used the command in the root of your D drive instead. To fix the problem, go to your D drive and remove this file.also, the following folders are created with the init command:create a new app and compare the contents of these folders, make sure they are not part of anything else, like another project, then remove them too to clear the mess. the emphasis here is on “make sure” not to delete precious unrelated data.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
"App init setup failed: a project already exists" MongoDB Realm App
2023-02-21T18:01:06.790Z
&ldquo;App init setup failed: a project already exists&rdquo; MongoDB Realm App
1,268
null
[]
[ { "code": "", "text": "Hi folks,Please need help resetting my MFA device because my cellphone stopped working and I lost access to my cluster. My email account is [email protected]", "username": "Damian_Rodriguez" }, { "code": "", "text": "Hi Damian,Although there are some MongoDB employees on here, this is primarily a community forum and not MongoDB Support, I would reach out to MongoDB support if this is an Atlas Cluster or Ops Manager Instance. Not sure anyone here would be able to help but maybe an employee could reach out.", "username": "tapiocaPENGUIN" } ]
MFA Lost Device
2023-02-21T16:28:55.367Z
MFA Lost Device
457
null
[ "queries", "atlas-cluster", "performance", "atlas-data-lake" ]
[ { "code": "", "text": "Hello, I m relatively new to MongoDB.\nI am working on an application and I have a certain use case where I have to use mongodb lookups on two different collections stored in two different databases. Since, lookups dont have support for cross databases, I decided to use MongoDB Data Federation.\nThe query works now using federated instance. I read up more about the limitation of Data federation by reading this article data-federation-limitations.One of the limitations state that federated instance can only have 60 guaranteed concurrent connections per region and only 30 concurrent queries. So if my application were to be used by thousands of users , would there be a performance issue??\nIf so, how can I do this task differently?", "username": "Manish_Bisht" }, { "code": "", "text": "Hello Manish,Ben here, I’m the PM for Atlas Data Federation. It’s true we have limits on concurrent queries and concurrent connections, but we can expand these limits depending on your requirements. Additionally we plan on offering more configuration options in the future that would help with this.That said, I’m curious if you can share more about your specific use case. Is the $lookup across databases a very common query? I wonder if there are more appropriate data modeling options that would alleviate the need for a cross DB $lookup.Best,\nBen", "username": "Benjamin_Flast" }, { "code": "", "text": "Hey Ben, Thanks for the reply.yes, the $lookup is a very common query for now. The user has a “favorites” section where they maintain their favorite items. The issue is the item and user collections are being maintained in different databases because of some business constraints. Do you think moving both of them to a same DB would be a better idea? Also, if we were to do so, how would it affect the scalability of the application?", "username": "Manish_Bisht" }, { "code": "", "text": "Got it. Can I ask how “long running” your $lookup queries are? Do you expect there to be contention over the 30 concurrent query limit and do you know how much we would need to raise the limit by to support your use case?", "username": "Benjamin_Flast" }, { "code": "", "text": "Hi @Benjamin_Flast\nI’ve got the same issue. Not that I’m doing $lookup, but here’s our use case:The issue here is that 30 simultaneous queries is way too low for our needs. That would mean that we can’t allow more than 30 customers to do this kind of query, which is ridiculously low.\nHow can we increase this to a more “usable” value ?\nThanks in advance and have a nice day", "username": "Roro" } ]
MongoDB Data federation limitation
2022-09-26T16:11:21.775Z
MongoDB Data federation limitation
3,008
null
[ "aggregation", "atlas-search" ]
[ { "code": "[\n {\n \"$search\": {\n \"index\": \"product_title_en\",\n \"text\": {\n \"query\": \"iphone 11 glass screen protectors full\",\n \"path\": \"product_title_en\"\n }\n }\n },\n {\n \"$facet\": {\n \"products\": [\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 20\n }\n ],\n \"products_count\": [\n {\n \"$count\": \"totalCount\"\n }\n ],\n \"available_categories\": [\n {\n \"$group\": {\n \"_id\": \"$category_en\"\n }\n }\n ]\n }\n },\n {\n \"$project\": {\n \"products_count\": \"$products_count.totalCount\",\n \"products\": \"$products\",\n \"available_categories\": \"$available_categories._id\"\n }\n }\n]\n", "text": "Hi Mongo community!I have around 300k docs and I want to run a query aggregation with diff stages.the query needs 2 to 5s to resolve, depend on the text query, let say if we do “iphone 10” the query will take 1s but if we do “iphone 11 glass screen protectors” will take 3 to 10s, is there any way to improve this pipeline :I did build atlas search index named “product_title_en”, can you help with with best practice in such scenario.\nI do not understand why the seach take much longer evertime we add new word to the query", "username": "chout_sport" }, { "code": "$facet", "text": "Can you try using Atlas Search’s facet operator? Instead of $facet", "username": "Elle_Shwer" }, { "code": "", "text": "Hi Elle_Shwer thank you for the response, I want to create an aggregation pipeline, but I dont know what’s the best practice specially I need a complex query, first we need to search by the title then we need to sort the data by price then limit to 20.\nthe result should return the docs + the availble categories in these docs + the count of each.docs suggest to use near instead of sort but I have no Idea how to use it with text/phrase operator.\nand how to use facets in my case.\nimage889×588 15 KB\nI’ll apreciate any help thanks", "username": "chout_sport" }, { "code": "[\n {\n '$search': {\n 'facet': {\n 'operator': {\n 'text': {\n 'path': 'title', \n 'query': '<query>'\n }\n }, \n 'facets': {\n 'roomtypeFacet': {\n 'type': 'string', \n 'path': 'category'\n }\n }\n }\n }\n }, {\n '$sort': {\n 'price': 1\n }\n }, {\n '$facet': {\n 'docs': [\n {\n '$limit': 20\n }\n ], \n 'meta': [\n {\n '$replaceWith': '$$SEARCH_META'\n }, {\n '$limit': 1\n }\n ]\n }\n }, {\n '$set': {\n 'meta': {\n '$arrayElemAt': [\n '$meta', 0\n ]\n }\n }\n }\n]\ntextnearsort", "text": "Something like this might work:(can swap text out for near and remove the sort)", "username": "Elle_Shwer" } ]
Atlas search index takes a lot of time when querying by text
2023-02-06T18:16:12.025Z
Atlas search index takes a lot of time when querying by text
853
null
[]
[ { "code": "", "text": "Hi Everyone!I am Priyanka Taneja. I am from India, I work as a Software Developer at SAP Labs. I have worked with organisations like JP Morgan and GE.When I’m not working, I like to spend my time in outdoor activities. I love to play badminton and practice yoga.I am really excited to be the part of Mongo DB User Group, Delhi. Looking forward to connect with you all. Excited to share my experiences and skills to this community.Feel free to connect with me:\nLinkedIn: https://www.linkedin.com/in/ptneja/", "username": "Priyanka_Taneja" }, { "code": "", "text": "Welcome to the MongoDB Community @Priyanka_Taneja Glad to have you lead the Delhi-NCR Community and excited for all the amazing skills and experiences you plan to share with the community! ", "username": "Harshit" }, { "code": "", "text": "Welcome Priyanka! Looking forward to seeing the awesome things you will do with the Delhi-NCR MUG!", "username": "Veronica_Cooley-Perry" } ]
Hello Everyone, I’m Priyanka Taneja - MUG Leader, Delhi
2023-02-21T13:11:59.455Z
Hello Everyone, I’m Priyanka Taneja - MUG Leader, Delhi
1,149
null
[]
[ { "code": "", "text": "Does the “text” operator make the query look into non-text fields as well?My requirement is for the search to return all documents that have the matching text (number/string/date/bool) in any field. Is this possible?Thanks,\nPrasad", "username": "Prasad_Kini" }, { "code": "textsample_airbnb> db.listingsAndReviews.aggregate([{ $search: { index: 'default', text: { query: '1', path: 'minimum_nights' } } },\n{$project:{\"_id\":0,\"name\":1,score:{$meta:\"searchScore\"}}}]) \n\n[ \n{ name: 'Double Room en-suite (307)', score: 0.49679940938949585 }, \n{ name: 'City center private room with bed', score: 0.49679940938949585 }, \n{ name: 'Friendly Apartment, 10m from Manly', score: 0.49679940938949585 }, \n{ name: 'Great studio opp. Narrabeen Lake', score: 0.49679940938949585 }, \n...\n}\nsample_airbnb> db.listingsAndReviews.aggregate([{ $search: { index: 'default', text: { query: 1, path: 'bathrooms' } } },\n{$project:{\"_id\":0,\"name\":1,score:{$meta:\"searchScore\"}}}]) \n\nMongoServerError: PlanExecutor error during aggregation :: caused by :: Remote error from mongot :: caused by :: \"text.query\" must be a string\ncompoundequals", "text": "Hey @Prasad_Kini,My requirement is for the search to return all documents that have the matching text (number/string/date/bool) in any field. Is this possible?I tried to check this using the text operator in Atlas(used dynamic indexing on sample_airbnb dataset). For string fields, it worked as expected.But when used for a numeric field, it didn’t return any documents and instead returned the following error.As per the text operator documentation, the query value needs be a string or an array of strings.You can, however, try using other Atlas Search operators based on your use case like using compound with equals for booleans and date fields. You can read more about these from the documentation:\nCompound Search\nEqualsPlease feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "Hi @Satyam,\nI haven’t been able to find any operator that would be able to do a “text” search on all the fields (string, number, boolean, date, arrays etc) in a document. Would you be able to help?Thanks,\nPrasad", "username": "Prasad_Kini" }, { "code": "textcompoundequalsequals", "text": "Hey @Prasad_Kini,As already pointed outAs per the text operator documentation, the query value needs be a string or an array of strings.using just the text operator to do a search for all the data types you mentioned is currently not supported. Hence, you may wish to consider using other operators like the compound operator with equals. 
equals supports querying the date, numeric as well as boolean fields.Regards,\nSatyam", "username": "Satyam" }, { "code": "{\n\t\"projects\": [\n\t\t{\n\t\t\t\"name\": \"2021 Enhancements\",\n\t\t\t\"release_date\": ISODate(\"2022-07-21T00:00:00.000Z\"),\n\t\t\t\"num_users\": 364\n\t\t},\n\t\t{\n\t\t\t\"name\": \"2022 Enhancements\",\n\t\t\t\"release_date\": ISODate(\"2023-03-31T00:00:00.000Z\"),\n\t\t\t\"num_users\": 178\n\t\t},\n\t\t{\n\t\t\t\"name\": \"2023 Enhancements\",\n\t\t\t\"release_date\": ISODate(\"2023-06-30T00:00:00.000Z\"),\n\t\t\t\"num_users\": 12022\n\t\t},\n\t\t{\n\t\t\t\"name\": \"2024 Enhancements\",\n\t\t\t\"release_date\": ISODate(\"2024-06-30T00:00:00.000Z\"),\n\t\t\t\"num_users\": 2022\n\t\t}\n\t]\n}\n", "text": "Hi @Satyam,\nI do know that text operators cannot be used for my requirement by itself and that is the reason for this post.Compound with equals will do an exact match. I need to support partial matches on any field.\ne.g. Searching for 2022 should pickup all documents with any field containing 2022 including strings, dates and numbers. In the sample collection below, the query should bring back all the documents.Thanks,\nPrasad", "username": "Prasad_Kini" }, { "code": "", "text": "Hey @Prasad_Kini,Searching for 2022 should pickup all documents with any field containing 2022 including strings, dates and numbers.Unfortunately, this won’t be possible by using one search operator, hence the suggestion was to use the text operator for strings and compound with equals for booleans, numbers, and dates.If you want to match string patterns in non-string data types, you will have to explore other data modeling approaches. For example, Atlas text operator search will work if all your fields are strings and so if you can model your data accordingly, you might be able to use that.Regards,\nSatyam", "username": "Satyam" }, { "code": "{\n\t\"projects\": [\n\t\t{\n\t\t\t\"name\": \"2021 Enhancements\",\n\t\t\t\"release_date\": ISODate(\"2022-07-21T00:00:00.000Z\"),\n\t\t\t\"num_users\": 364\n\t\t},\n\t\t{\n\t\t\t\"name\": \"2022 Enhancements\",\n\t\t\t\"release_date\": ISODate(\"2023-03-31T00:00:00.000Z\"),\n\t\t\t\"num_users\": 178\n\t\t},\n\t\t{\n\t\t\t\"name\": \"2023 Enhancements\",\n\t\t\t\"release_date\": ISODate(\"2023-06-30T00:00:00.000Z\"),\n\t\t\t\"num_users\": 120226\n\t\t},\n\t\t{\n\t\t\t\"name\": \"2024 Enhancements\",\n\t\t\t\"release_date\": ISODate(\"2024-06-30T00:00:00.000Z\"),\n\t\t\t\"num_users\": 2022\n\t\t}\n\t]\n}\n\n", "text": "Modeling the data in a way that won’t conform to the types that the data actually represents goes against basic data modeling principles. I am not sure how it would affect performance or other requirements. This is not an option for us at this time.Having said that, could you please share a query depicting how to search for the number 2022 in the sample collection below using compound with equals?Thanks,\nPrasad", "username": "Prasad_Kini" } ]
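For reference, the kind of compound query being discussed would look roughly like the sketch below (assuming dynamic index mappings over the sample documents above). Note that it does not answer the partial-match requirement: equals only matches num_users exactly equal to 2022, and nothing here matches 2022 inside a date or a longer number, which is the thread's open problem:

```js
{
  $search: {
    index: "default",
    compound: {
      should: [
        { text: { query: "2022", path: "projects.name" } },
        { equals: { path: "projects.num_users", value: 2022 } }
      ],
      minimumShouldMatch: 1
    }
  }
}
```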
Atlas Full Text Search
2023-02-08T20:45:23.872Z
Atlas Full Text Search
1,003
null
[ "aggregation", "database-tools" ]
[ { "code": "{ $match : {\"meterConsumptionDeltas\" : 0, \n \"meterId\" : \"22056.3\" \n ,\"receivedTime\" : { $gte: ISODate(\"2022-12-28T00:00:00.004Z\"),\n $lte: ISODate(\"2022-12-31T00:23:59.004Z\")}\n }}, \n{$project: {\"repeaterNumber\":1,\"meterConsumptionDeltas\" : 1,\"diameter\":1,\"externalName\":1,\"meterId\":1,\"receivedTime\": 1} }\n,{$out:\"cons_2022_second_part\"}\n]);\n", "text": "Hi,I perform a query which creates the collection “cons_2022_second_part”, so later I can use mongoexport to keep it as a csv file:The output table has 2 documents with these dates:{\n“meterId” : “22056.3”,\n“externalName” : “01-200021386-14”,\n“diameter” : 2,\n“receivedTime” : ISODate(“2022-12-29T00:00:00.000+02:00”),\n“meterConsumptionDeltas” : 0,\n“repeaterNumber” : NumberLong(162)\n},{\n“meterId” : “22056.3”,\n“externalName” : “01-200021386-14”,\n“diameter” : 2,\n“receivedTime” : ISODate(“2022-12-31T00:00:00.000+02:00”),\n“meterConsumptionDeltas” : 0,\n“repeaterNumber” : NumberLong(162)\n}However. when I use mongoexport, this is the dates shown:mongoexport --db mydb --collection cons_2022_second_part --fields “meterId”,“externalName”,“diameter”,“receivedTime”,“meterConsumptionDeltas”,“repeaterNumber”\n–username “myuser” --password “mypassword” --type csv --out cons_2022_second_part2.csvmeterId,externalName,diameter,customerName,receivedTime,meterConsumptionDeltas,repeaterNumber\n22056.3,01-200021386-14,2,2022-12-30T22:00:00.000Z,0,162\n22056.3,01-200021386-14,2,2022-12-28T22:00:00.000Z,0,162why is the difference between the dates in output table and the csv?Thanks", "username": "Tamar_Nirenberg" }, { "code": "+02:00mongoexportZzero offsetvar now = new Date();db.data.save( { date: now, offset: now.getTimezoneOffset() } );\nvar record = db.data.findOne();var localNow = new Date( record.date.getTime() - ( record.offset * 60000 ) );\n", "text": "Hello @Tamar_Nirenberg ,Can you answer few questions for better understanding of your use-case?Additionally, if you just concerned about the difference in dates between the output table and mongoexportz it is due to the difference in time zones. In the output table, the dates are shown with the timezone offset +02:00. However, when you export the data to a CSV file using mongoexport, the dates are shown in UTC time zone with the timezone offset Z, which means zero offset. So, the exported dates are actually the same as the original dates in the database, but shown in UTC time zone instead of the original time zone.As per documentation - Time Representations in MongoDBMongoDB stores times in UTC by default, and converts any local time representations into this form. Applications that must operate or report on some unmodified local time value may store the time zone alongside the UTC timestamp, and compute the original local time in their application logic.In the MongoDB shell, you can store both the current date and the current client’s offset from UTC.You can reconstruct the original local time by applying the saved offset:Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Thank you @Tarun_Gaur", "username": "Tamar_Nirenberg" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query dates, csv and time zone
2023-02-12T13:55:16.165Z
Query dates, csv and time zone
1,370
null
[]
[ { "code": "", "text": "Can I set a filter on a $search stage to filter out any results for a field that is an empty string? Maybe there’s a way to do an equals for this?", "username": "Matt_Jones1" }, { "code": "", "text": "Hi @Matt_Jones1,welcome to MongoDB and the Forums!I would recommend you to go through compound operator in which you can specify must and must not match of the output.Please let me know if that helps.Thanks,\nDarshan", "username": "DarshanJayarama" }, { "code": "", "text": "Darshan,Thank you for the reply. If I use compound with a must operator, then which operator under must will allow me to specify a match only if a field is an empty string?Thanks,\nMatt", "username": "Matt_Jones1" }, { "code": "", "text": "Hello,\nIs there any answer to this question?Thanks,\nPrasad", "username": "Prasad_Kini" }, { "code": "", "text": "Hi @Prasad_Kini ,Hope you are doing great.Can you please share the sample of valid documents, As this query needs tailored query, it will be helpful if I have the sample document and expected output.Thanks,\nDarshan", "username": "DarshanJayarama" }, { "code": "{\n\t\"projects\": [\n\t\t{\n\t\t\t\"name\": \"Payment System\",\n\t\t\t\"description\": \"Issue payments to clients\",\n\t\t\t\"category\": \"Budgeted\",\n\t\t\t\"estimate\": 12,\n\t\t\t\"cost_center\": 874,\n\t\t\t\"budget\": 600000,\n\t\t\t\"users\": [\n\t\t\t\t\"Sales\",\n\t\t\t\t\"HR\",\n\t\t\t\t\"Technology\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"name\": \"Admin System\",\n\t\t\t\"description\": \"Admin Portal Enhancements\",\n\t\t\t\"category\": \"\",\n\t\t\t\"estimate\": 4,\n\t\t\t\"cost_center\": null,\n\t\t\t\"budget\": 100000,\n\t\t\t\"users\": [\n\t\t\t\t\"Technology\"\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\t\"name\": \"Lights On\",\n\t\t\t\"description\": \"Business As Usual\",\n\t\t\t\"category\": \"\",\n\t\t\t\"estimate\": 52,\n\t\t\t\"cost_center\": null,\n\t\t\t\"budget\": 1000000,\n\t\t\t\"users\": []\n\t\t}\n\t]\n}\n", "text": "Hi Darshan,\nPlease find below some sample documents. My basic requirement is to be able to retrieve all documents with blank/null fields (e.g. catgory, cost_center, users). I will know the field names when building the query.Thanks,\nPrasad", "username": "Prasad_Kini" }, { "code": "", "text": "Hi @DarshanJayarama,\nWould you be able to help with this?Thanks,\nPrasad", "username": "Prasad_Kini" }, { "code": "movies{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"plot\": {\n \"type\": \"string\"\n },\n \"runtime\": {\n \"type\": \"number\"\n }\n }\n }\n}\nMongoDB Enterprise mflix-shard-0:PRIMARY> db.movies.aggregate([ {\n $search: {\n compound: {\n must: [{\n text: {\n query: \"titanic\",\n path: \"plot\"\n }\n }],\n mustNot: [{\n range: {\n \"gt\": 1,\n path: \"runtime\"\n }\n }]\n }\n }\n}, {\n $project: {\n countries: 1,\n _id: 1,\n runtime: 1,\n plot: 1\n }\n}])\n{ \"_id\" : ObjectId(\"573a1394f29313caabce0bcb\"), \"plot\" : \"An account of the ill-fated maiden voyage of RMS Titanic in 1912.\", \"runtime\" : \"\", \"countries\" : [ \"USA\" ] }\nMongoDB Enterprise mflix-shard-0:PRIMARY>\nruntime", "text": "Hi @Prasad_Kini,My apologies for the delay. I have created below search index in my sample_mflix movies collection:And with below search query, I got the empty record for the given field:Above query returned empty string for the runtime field.I hope this answer your question.Thanks,\nDarshan", "username": "DarshanJayarama" }, { "code": "", "text": "Thanks @DarshanJayarama. Could you please shed some light on the inner workings of gt? 
How does it know to pick up documents with the empty string when the condition is doing a comparison with a numeric value?Also, is there a similar hack to check for empty arrays as well?Regards,\nPrasad", "username": "Prasad_Kini" }, { "code": "MongoDB Enterprise mflix-shard-0:PRIMARY> null < 0\nfalse\nMongoDB Enterprise mflix-shard-0:PRIMARY> null < 1\ntrue\nMongoDB Enterprise mflix-shard-0:PRIMARY> \"\" < 0\nfalse\nMongoDB Enterprise mflix-shard-0:PRIMARY> \"\" < 1\ntrue\nMongoDB Enterprise mflix-shard-0:PRIMARY>\n", "text": "Hi Prasad,Thanks for your response.We are filtering out those records which are not greater than 1 for the runtime field. as runtime containing numeric values, if any field containing null or missing fields will not be adhere to the condition as null is less than 0:In this condition, filtering out those records which gt 1 in number field get us the missing or null field.I hope this clarify your doubts.Thanks,\nDarshan", "username": "DarshanJayarama" }, { "code": "", "text": "Hi @DarshanJayarama,Wouldn’t the query also pickup documents that have runtime<=1? i.e. values like 1, 0.9, 0.72 etc?Thanks,\nPrasad", "username": "Prasad_Kini" } ]
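On the last question: yes — the range trick also drops documents whose runtime is a real number less than or equal to 1, so it is not a faithful "empty string only" filter. If filtering inside the search index is not essential, an ordinary match stage after $search is an unambiguous way to keep only the empty-string documents:

```js
db.movies.aggregate([
  { $search: { index: "default", text: { query: "titanic", path: "plot" } } },
  { $match: { runtime: "" } }   // keeps only documents whose runtime is literally an empty string
])
```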
Atlas Search Index: Filter out docs with a field that has an empty string
2021-07-23T15:08:55.568Z
Atlas Search Index: Filter out docs with a field that has an empty string
4,343
null
[ "swift" ]
[ { "code": "", "text": "Hi Folks,\nSo with a iOS project with minimum target of iOS 16, all builds and runs well on simulator and device, but when I got to archive the app to deploy it I get an error about compiling for iOS11.Compiling for iOS 11.0, but module ‘Accessibility’ has a minimum deployment target of iOS 16.0.Has anyone encountered this and have a fix / workaround?", "username": "KrissBennettGE" }, { "code": "", "text": "Yeah - that’s frustrating. There are several places the deployment can be set. Obviously the Minimum Deployment target is one. Your project in the left hand navigator->Targets->General->Minimum DeploymentBut you can also check the Build Settings tab an use the Search Field for “deployment” and see if all of the results match up", "username": "Jay" }, { "code": "", "text": "Thanks for the reply, I think I found way forward. Just XCode giving me a totally incorrect error, when actually something else totally. Turns out it doesnn’t like my Custom package called Accessibility. Rename it and it appears to now show different issue. So Accessibility must conflict with something somewhere.Thanks", "username": "KrissBennettGE" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Xcode Archive Issue for iOS16 Project
2023-02-20T15:08:24.579Z
Xcode Archive Issue for iOS16 Project
1,209
null
[ "data-modeling" ]
[ { "code": "", "text": "Hey Team,\nCurrent document we store in Mongo has got _id field automatically created. But what we want is we want to provide the id field from one of our microservice.\nI went through some of blogs what I understood is we can provide hexadecimal string or int to ObjectId() and we can pass that id in document.\nIf we start providing _id while inserting new document will it work or not ??\nCan you help me if I change the schema for the collection to take the user provided _id, will it be performance cost and any impact on existing data ??", "username": "Gheri_Rupchandani1" }, { "code": "_id_idThe following are common options for storing values for _id:\n\n* Use an ObjectId.\n* Use a natural unique identifier, if available. This saves space and avoids an additional index.\n* Generate an auto-incrementing number.\n* Generate a UUID in your application code. For a more efficient storage of the UUID values in the collection and in the _id index, store the UUID as a value of the BSON BinData type.\n* Index keys that are of the BinData type are more efficiently stored in the index if:\n the binary subtype value is in the range of 0-7 or 128-135, and\n the length of the byte array is: 0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 20, 24, or 32.\n*Use your driver's BSON UUID facility to generate UUIDs. Be aware that driver implementations may implement UUID serialization and deserialization logic differently, which may not be fully compatible with other drivers. See your driver documentation for information concerning UUID interoperability.\n_idE11000 duplicate key error_id_id_id_id", "text": "Hello @Gheri_Rupchandani1 ,Welcome back to MongoDB Community Forums! what I understood is we can provide hexadecimal string or int to ObjectId() and we can pass that id in document.It is not mandatory that _id can only be ObjectId(), the _id field may contain values of any BSON data type, other than an array, regex, or undefined.Below blob is from the _id field documentation.If we start providing _id while inserting new document will it work or not ??It should work. However, you need to provide an _id that does not exist yet in the collection, as it’s supposed to be the document’s primary key. Otherwise the insert will fail with an E11000 duplicate key error message. For more information, see the _id field .Can you help me if I change the schema for the collection to take the user provided _id, will it be performance cost and any impact on existing data ??As mentioned above, you need to make sure that an existing _id is not getting reinserted else it will give an error. Other than that, custom “_id” will not create any performance issues.Note: Default _id IndexMongoDB creates a unique index on the _id field during the creation of a collection. The _id index prevents clients from inserting two documents with the same value for the _id field. You cannot drop this index on the _id field.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "See a little example of custom _id from another thread:", "username": "steevej" }, { "code": "", "text": "I have two fields in collection in production env. 
How to create composite unique index ??\nPlease help me with both options\nif its low volume traffic for some time in day i.e its okay for some performance degradation for some replicas??\nhigh traffic all time i.e no degradation is allowed??Also If I want to create one new field (x_field) in already exsiting collection if its going to impact performance ??Thanks,\nGheri.", "username": "Gheri_Rupchandani1" }, { "code": "", "text": "For creating a Unique Compound Index, kindly checkAs the follow-up questions are not in-line with the original post, I would recommend you to open a new topic and add relevant details such as: MongoDB version, Example documents and any other details that might be helpful in understanding and answering your query.", "username": "Tarun_Gaur" } ]
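A minimal mongosh sketch of the two points discussed above — supplying a custom _id on insert and creating a compound unique index. The collection and field names are hypothetical, not taken from the original posts.

```javascript
// Collection and field names are illustrative only.
// 1. Insert with a caller-supplied _id (any BSON type except array, regex, or undefined).
db.products.insertOne({ _id: "svc-000123", name: "widget" });

// Re-inserting the same _id fails with an E11000 duplicate key error,
// because the default unique index on _id rejects it.

// 2. Compound unique index across two fields (the follow-up question).
// Building it fails if existing documents already duplicate a key pair.
db.products.createIndex({ accountId: 1, sku: 1 }, { unique: true });
```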
How to provide _id for unique key and how to migrate existing records
2023-02-12T13:54:41.578Z
How to provide _id for unique key and how to migrate existing records
2,584
https://www.mongodb.com/…c54e244f45cc.png
[]
[ { "code": "", "text": "Hi, i have passed recently the Associate Developer Node Certification, and I got this results,\n\nimage690×521 15 KB\n\nI want to know how the score was calculated , and what i can do to improve my score. because i am not able to see the wrong questions.", "username": "KHEIREDDINE_AZZEZ1" }, { "code": "", "text": "Hey @KHEIREDDINE_AZZEZ1,Welcome to the MongoDB Community Forums and Congratulations on clearing the Developer Certification! In keeping with certification industry best practices, MongoDB continues to offer examinees a pass/fail result and topic-level performance percentages. The scores are not shared with the test-takers.Please let us know if you have further questions. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Associate Developer Node Certification
2023-02-17T13:53:58.828Z
Associate Developer Node Certification
1,167
null
[ "node-js" ]
[ { "code": "", "text": "Hi all, I am planning to take the associate certificate for nodejs. If I understand correctly, I cannot refer to the docs and atlas while taking the exam. Am I correct for both or one or the other is allowed?", "username": "Cjolaguera" }, { "code": "", "text": "Hey @Cjolaguera,Welcome to the MongoDB Community Forums! You are correct. One cannot refer to any documentation, change their tab or use Atlas during the exam session. You can read more on the rules from the Certification Program Guide and exam Study Guide.Wishing you all the very best for your exam. Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Clarification regarding what is allowed
2023-02-20T18:21:40.464Z
Clarification regarding what is allowed
1,151
null
[ "atlas-cluster" ]
[ { "code": "# for i in $(seq 0 2); do nc -vw2 mycluster-shard-00-0$i.2gbel.mongodb.net 27017 ; done\nmycluster-shard-00-00.2gbel.mongodb.net (192.168.xx.xx:27017) open\nmycluster-shard-00-01.2gbel.mongodb.net (192.168.xx.xx:27017) open\nmycluster-shard-00-02.2gbel.mongodb.net (192.168.xx.xx:27017) open\n", "text": "Hello,I’m completely new to mondb atlas. I use an aws eks cluster to connect to this mongodb cluster (dedicatec) M10) and I have a crucial question … What is the endpoint wich I have to use to connect to my cluster ??I have setup a peering connection between my vpc and mongodb vpc, created the route tables, and I have tested to reach each nodes of the cluster separatly (tcp test on 27017 port).But where could I find the global connection string to connect to the cluster using this peering ?Thanks for your help.", "username": "Reza_ISSANY" }, { "code": "", "text": "Hey @Reza_ISSANY,Welcome to the MongoDB Community Forums! You can get the connection string by clicking on ‘Connect’ in your Atlas account while you were setting up the peering connection. You can read more from the documentation: Configure an Atlas Network Peering ConnectionOther useful links that you can go through:\nWhich connection string my application uses\nMongoDB Atlas on AWSPlease feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
AWS peering to my first cluster from atlas
2023-02-17T21:46:30.454Z
AWS peering to my first cluster from atlas
744
https://www.mongodb.com/…b_2_1023x152.png
[ "app-services-cli" ]
[ { "code": "realm-clirealm-cli pushrealm-cli apps createrealm-cli pushrealm-cli apps create", "text": "We are working on creating Infrastructure as Code to deploy an Atlas cluster together with App Services.Atlas cluster deployment is automated using Terraform. App Service deployment is automated using realm-cli. We have all of the json configurations in the source control.There are 2 scenariosWe do not have any problems with the first option. However, the second option is causing some trouble when using it in out DevOps environment.The problem is whenever we try to using realm-cli push or realm-cli apps create command in the DevOps pipeline we get the following message\nimage2026×302 49.5 KB\nWe do have 3 different atlas projects for each environment. Obviously, we can’t interact with the automated deployment in the cloud. Thus, we need a way to specify the project ID in which we want to create the app.However, I couldn’t find that option either in the realm-cli push or in the realm-cli apps create command.So, how I can set the project ID when creating a new app?", "username": "Gagik_Kyurkchyan" }, { "code": "--project \"MY_PROJ_ID\"", "text": "I made a shot in the dark and just added --project \"MY_PROJ_ID\". It worked!!! Obviously, you need to update the documentation guys, it’s frustrating I had to spend so much time and “guess” the answer in the end!", "username": "Gagik_Kyurkchyan" }, { "code": "realm-cli--project", "text": "Thank you for raising this Gagik,I have created an internal ticket to have our documentation updated (where applicable) for the realm-cli regarding the --project flag.", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to specify Atlas project ID when creating a new app using realm CLI?
2023-02-19T15:37:53.157Z
How to specify Atlas project ID when creating a new app using realm CLI?
1,252
null
[ "queries", "database-tools" ]
[ { "code": "", "text": "I just learned about the cursor concept in this doc.\nBut what does “cursor opened within/not within a server session” mean?", "username": "111661" }, { "code": "result set", "text": "Hello @111661 ,Welcome to The MongoDB Community Forums! Cursor points to the set of documents known as result set that are matched to the query filter. A server session is created when a client connects to the server and begins an operation such as a query, insert, update, or delete. It includes information about the client’s transaction and can be used to execute multiple operations in a single transaction.When a cursor is opened within a server session, it is associated with that session and can be used to retrieve query results for as long as the session is active. This allows clients to iterate through the cursor results over multiple network round trips and across multiple database operations.On other hand, a cursor that is not opened within a server session is only active for the duration of the query operation that created it. Once the query completes and the cursor is exhausted or closed, the cursor’s results are no longer available for retrieval.Please go through Server Sessions and Cursor Behaviors documentation to get more detailed information.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is server session in mongodb?
2023-02-16T03:15:46.309Z
What is server session in mongodb?
1,111
null
[]
[ { "code": "", "text": "Hi!Since I updated from the free version to M2 shared my website stopped to work. I haven’t changed anything on my code just upgrading to shared M2 and it stopped.Ask to support and they didn’t give me any solution.Appreciate a lot your suggestions", "username": "Zaesar_Po" }, { "code": "Project Activity FeedM0M2/M5M10M0M2/M5M2M5", "text": "Hi @Zaesar_Po,Since I updated from the free version to M2 shared my website stopped to work. I haven’t changed anything on my code just upgrading to shared M2 and it stopped.Unfortunately there is not enough information here to assist further. Could you provide the following details:Additionally, as per the Modify a Cluster documentation:Changing the cluster tier requires downtime in the following scenarios:Ask to support and they didn’t give me any solution.Could you also clarify what support you contacted (in-app chat support?) and what you went through with support? This should give some insight into what they may have checked.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "I was using mongoose package on my NodeJS server and it was working perfectly fine on my free version.I created the model Schemas with mongoose. No issues. So fast I updated to M2 shared cluster it suddenly stopped.I showed to support all my logs and info I could provide. After waiting for hours they didn’t give any info or suggestion. Just told me to apply for a paid support.Have never encounter on my life such a type of marketing strategy to get more paid customers.Really surprised of such a type of behavior.Regards.\nZaesar", "username": "Zaesar_Po" }, { "code": "mongosh", "text": "Hi @Zaesar_Po sorry you’re seeing this issue. However could you help us understand the situation and your use case better?I was using mongoose package on my NodeJS server and it was working perfectly fine on my free version.By free version do you mean you’re using the M0 cluster? Could you show us the code that you’re having an issue with specifically?For troubleshooting purposes - are you able to connect via mongosh or MongoDB Compass?\nThis is one test you can generally do to check if the database is up and running.I created the model Schemas with mongoose. No issues. So fast I updated to M2 shared cluster it suddenly stopped.There are some scenarios where the connection string must be updated which are noted on the FAQ: Connection String Options documentation, you may wish to check the connection string being used post upgrade.Additionally, when you say it suddenly stopped, are you seeing any error or pattern? If you’re receiving any errors please post those here.I showed to support all my logs and info I could provide.Could you please post the relevant logs here so we can see what’s the problems are?With regards to any of the above information requested, please redact / remove any sensitive information before posting.", "username": "Jason_Tran" } ]
After upgrading from the free version to M2 shared my website went down
2023-02-19T13:54:36.878Z
After upgrading from the free version to M2 shared my website went down
413
null
[]
[ { "code": "", "text": "Something a hybrid deployment, the on-premise mongodb server can sync to atlas, while the mobile apps can access the data by realm and also the realm will sync to atlas.Is that possible? Can you point me out to the documentation", "username": "Gilbert" }, { "code": "", "text": "This is not currently possible though it is something we have thought about doing before. Can you explain a bit more about your use case and why this is necessary? Collecting this information on potential use-cases can help us better prioritize projects like this in the future.", "username": "Tyler_Kaye" }, { "code": "", "text": "My use case about health care system, connecting Outpatient Dept., Inpatient Dept. and the patient itself. The problem in country is that internet connectivity is, let say not that quite good. So the model is we have an app for the patient, and app for the doctor to view or can just do minor edit if they are not on-premise using Realm. Then if they are on-premise the transaction will be applied to a local MongoDB server, then were thinking using atlas to be the bridge between realm and the local MongodDB server, and possibly making MongoDB Atlas a main database.", "username": "Gilbert" }, { "code": "", "text": "There are still relevant case where clinics or hospitals do not want their data to leave their premises and Cloud based only architecture are excluded. A docker based architecture can still cover both scenarios, serving clients that will use SaaS services (Cloud Instance w/ dockerized backend + MongoDb Atlas) and on-premises only clients (dockerized backend image + dockerized MongoDb instance ).\nIf Realm cannot sync with a provided local network instance of MongoDb than it will generate more complexity for its adoption.", "username": "Marco_Falsitta1" } ]
Is it possible to sync Realm to Atlas and local mongodb server?
2022-05-25T08:41:42.778Z
Is it possible to sync Realm to Atlas and local mongodb server?
2,800
null
[ "node-js" ]
[ { "code": " const mongodb = require('mongodb').MongoClient;\n const connectionString = 'deletedforprivacy';\n \n \n mongodb.connect(connectionString, {useNewUrlParser: true, useUnifiedTopology: true}, function (err, client) {\n module.exports = client.db()\n const app = require('./app')\n app.listen(5000);\n})\n", "text": "Is there anything that I am doing wrong?", "username": "Tony_Guarino" }, { "code": "", "text": "Is there anything that I am doing wrong?Yes you are doing something wrong.You do not provide enough information for us to help you.Where is mongod running? On localhost, Atlas, … Do you manage the server yourself?You are hiding your connection string so it is hard to say anything about it. Is there any error message?", "username": "steevej" }, { "code": "const connectionString = 'mongodb+srv://username:[email protected]/ComplexApp?retryWrites=true&w=majority';app.listen(5000)mongodb.connect()app.listen(5000)connect()", "text": "Thanks for the reply. I manage the server myself using node express on localhost.const connectionString = 'mongodb+srv://username:[email protected]/ComplexApp?retryWrites=true&w=majority';When I put app.listen(5000) inside the anonymous function within mongodb.connect(), I dont get any connectivity to my site at all if I run node db.js in the terminal. But if I put app.listen(5000) back inside the app.js file, my site works.But the whole point is to connect to the database first which is why I want it inside of the connect() function on my db.js file.Sorry if I am not explaining this well, I hope this makes sense.", "username": "Tony_Guarino" }, { "code": "mongodb.connect", "text": "Oh and the only error message is a warning with the connect function on mongodb.connectIt says invalid number of arguments, expected 1 …2\nPromise returned from connect is ignoredBut I dont think that is whats causing a ERR_CONNECTION_REFUSED on my localhost:5000 page", "username": "Tony_Guarino" }, { "code": "", "text": "I downgraded to mongodb 4.13.0 and my code above works.", "username": "Tony_Guarino" }, { "code": "util.callbackifymongodb-legacy", "text": "Hi @Tony_GuarinoI downgraded to mongodb 4.13.0 and my code above works.I think you hit on a breaking change in Node driver 5.0, especially this part:This release removes support for callbacks in favor of a promise-based API. The following list provides some strategies for callback users to adopt this version:Since your code example is using a callback style, I believe that’s why it stops working with Node driver 5.0.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "That makes sense. Thanks for letting me know", "username": "Tony_Guarino" } ]
Ever since 5.0 I can't connect to my db
2023-02-19T22:49:48.719Z
Ever since 5.0 I can&rsquo;t connect to my db
546
null
[]
[ { "code": "struct MapView: View {\n @EnvironmentObject var user: UserAccount\n @ObservedResults(Beacon.self) var beacons: Results<Beacon>\n \n @MainActor\n var userBeacons: Results<Beacon> {\n beacons.filter(\"%@ IN participants\", user.account!) // Crashes here\n }\n...\nclass RealmManager: ObservableObject {\n \n let appId = \"beacon1-iuaze\"\n \n @Published var realm: Realm?\n static let shared = RealmManager()\n \n @MainActor\n func initialize() async throws {\n \n let app = App(id: appId)\n let user = try await app.login(credentials: Credentials.anonymous)\n \n var config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n if subs.first(named: \"all-accounts\") == nil {\n subs.append(QuerySubscription<Account>(name: \"all-accounts\"))\n }\n if subs.first(named: \"all-messages\") == nil {\n subs.append(QuerySubscription<Message>(name: \"all-messages\"))\n }\n if subs.first(named: \"beacons\") == nil {\n subs.append(QuerySubscription<Beacon>(name: \"beacons\"))\n }\n }, rerunOnOpen: true)\n \n // Pass object types to the Flexible Sync configuration\n // as a temporary workaround for not being able to add complete schema\n // for a Flexible Sync app\n config.objectTypes = [Account.self, Beacon.self, Message.self]\n realm = try await Realm(configuration: config, downloadBeforeOpen: .always)\n }\n}\n@main\nstruct MyApp: SwiftUI.App {\n @StateObject private var realmManager = RealmManager.shared\n @StateObject private var locationManager = LocationManager.shared\n \n var body: some Scene {\n \n WindowGroup {\n VStack {\n if let realm = realmManager.realm {\n MainView(user: UserAccount(realm: realm))\n .environmentObject(realmManager)\n .environmentObject(locationManager)\n }\n }.task {\n try? await realmManager.initialize()\n }\n }\n }\n}\n\nclass Account: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var name: String\n @Persisted var phone: String\n @Persisted var email: String\n @Persisted(originProperty: \"participants\") var beacons: LinkingObjects<Beacon>\n \n convenience init(name: String, phone: String, email: String) {\n self.init()\n \n self.name = name\n self.phone = phone\n self.email = email\n }\n}\nclass UserAccount: ObservableObject {\n private static let EMAIL_KEY = \"email\"\n \n @Published var account: Account?\n \n convenience init(realm: Realm) {\n self.init()\n \n self.account = UserAccount.getAccountFromDefaults(realm: realm)\n }\n \n func setAccount(account: Account) {\n let defaults = UserDefaults.standard\n defaults.set(account.email, forKey: UserAccount.EMAIL_KEY)\n \n self.account = account\n }\n \n static func getAccountFromDefaults(realm: Realm?) -> Account? {\n let defaults = UserDefaults.standard\n guard let email = defaults.string(forKey: UserAccount.EMAIL_KEY) else {\n return nil\n }\n \n let query = realm!.objects(Account.self).where {\n $0.email == email\n }\n \n if query.count > 0 {\n return query.first\n } else {\n defaults.set(\"\", forKey: UserAccount.EMAIL_KEY)\n }\n \n return nil\n }\n}\n\nstruct MainView: View {\n @EnvironmentObject var errorHandler: ErrorHandler\n @StateObject var user: UserAccount\n\n var body: some View {\n if user.account != nil {\n MapView()\n .environmentObject(user)\n } else {\n AccountView()\n .environmentObject(user)\n }\n }\n}\n", "text": "I’m running into what look like threading issues with Realm. I’m trying to filter a Results list of objects by another object that has a many to many relationship. 
I’m using a singleton RealmManager, initialized on startup, getting a user account (which is separate from a Realm.user) with that realm, and then trying filter @ObservedResults with that account object in a computed property.I’m getting a runtime crash, “Object must be from the Realm being queried.” This error is confusing since I’m only creating one Realm, and the Realm initialization code is tagged with @MainActor. What is going on? Is @ObservedResults returning on a different thread or on a different Realm?Also, as a side note, I cobbled this solution together from some outdated examples and pretty sparse documentation, so I’m sure I’m doing a lot wrong. Any other suggestions welcome. Thanks in advance for your help.The code that is crashing:RealmManager:InitializationAccountUserAccountMainView", "username": "Greg_Lee" }, { "code": "var userBeacons: Results<Beacon> {\n let realm = beacons.realm?.thaw()\n let account = realm!.object(ofType: Account.self, forPrimaryKey: user.account._id)\n beacons.filter(\"%@ IN participants\", account)\n}\n", "text": "I ended up going a different direction with the code, to use a subscription instead, but I think I found the issue. Caveat that I haven’t tried this code. Basically, you need to get the Realm from the list getting filtered, find the Account object in that Realm, and then use that Account to do the filtering.Something like:", "username": "Greg_Lee" } ]
Object must be from the Realm being queried
2023-02-19T06:35:02.951Z
Object must be from the Realm being queried
695
https://www.mongodb.com/…c_2_650x1024.png
[]
[ { "code": "", "text": "hi,I have a collection with nested documents array “categories”. to update a category, i do a bulk write with two operations :For an unknown reason, it appears that the pull has been performed after the push, despite the correct instruction (see oplog.rs screenshot).\nIf i’m right, the lsnid and txnNumber are equal which means they are part of the same request (the bulk write), the stmtId is in the corret order (0 for the pull, 1 for the push), but the timestamp shows that the pull has been performed after the push (25sec later !)\nscreenshot_oplogrs706×1112 42.7 KB\nI don’t know if i misunderstood something or if this is a bug. Any help would be appreciated to better understand what’s going on Edit : DB version is MongoDB 5.0.9 Community", "username": "Adrien_DESMOULES" }, { "code": "", "text": "The bulk write operations can be ordered or not.If you did your operations with the ordered:true then we will need to see the exact code you are using. Sample documents in text json will also be needed. Use $redact to hide sensible information.", "username": "steevej" }, { "code": "ordered:true{ advertiserId: 123456, categories: [{ categoryId: 9 }] }\n{ advertiserId: 123456 }\nmongoClient.GetDatabase(\"MyDatabase\")\n .GetCollection<BsonDocument>(\"advertisers\")\n .BulkWriteAsync(new WriteModel<BsonDocument>[]\n {\n new UpdateOneModel<BsonDocument>(\n filter: Builders<BsonDocument>.Filter.Eq(\"advertiserId\", 123456),\n update: Builders<BsonDocument>.Update.Pull(\n field: \"categories\",\n value: new Dictionary<string, object>{ { \"categoryId\", 9 } }))\n { IsUpsert = false },\n new UpdateOneModel<BsonDocument>(\n filter: Builders<BsonDocument>.Filter.Eq(\"advertiserId\", 123456),\n update: Builders<BsonDocument>.Update.Push(\n field: \"categories\",\n value: new Dictionary<string, object>{ { \"categoryId\", 9 } }))\n { IsUpsert = false }\n },\n options: new BulkWriteOptions { IsOrdered = true },\n cancellationToken: cancellationToken);\n{\n \"update\": \"advertisers\",\n \"ordered\": true,\n \"$db\": \"MyDatabase\",\n \"lsid\": {\n \"id\": CSUUID(\"03d3ef41-18a7-4d1b-8c52-e6ef8ee687a8\")\n },\n \"updates\": [{\n \"q\": {\n \"advertiserId\": 123456\n },\n \"u\": {\n \"$pull\": {\n \"categories\": {\n \"categoryId\": 9\n }\n }\n }\n }, {\n \"q\": {\n \"advertiserId\": 123456\n },\n \"u\": {\n \"$push\": {\n \"categories\": {\n \"categoryId\": 9\n }\n }\n }\n }]\n}\n", "text": "Hi, thanks for your answer.I confirm the ordered:true is passed. I’m using .Net MongDB.Driver package (2.18.0) to call the database.The “full” advertiser document looks likeThe document state when performing the request wasI can’t give you the exact code but here is the .Net equivalent code with hardcoded values :I logged the driver Command in the Console to see what is generated and here is what i get", "username": "Adrien_DESMOULES" }, { "code": "", "text": "hi again,We have an investigation in progress in our side and we recently found that a Primary node switch occured at the same time (#unlucky).Maybe this kind of query can be corrupted in such situation ? Is it a possible explanation ?But even in the situation of a primary switch, i would expect queries to be canceled, but not to be mixed up like that.", "username": "Adrien_DESMOULES" }, { "code": "", "text": "Thanks for the update. The thread was in my bookmark but could not come up with anything of value to write. 
Hopefully, someone with a more intimate knowledge could enlighten us.", "username": "steevej" }, { "code": "", "text": "Hi @Adrien_DESMOULES welcome to the community!but the timestamp shows that the pull has been performed after the push (25sec later !)I’m thinking that the important question here is: does it result in the correct result, or does it produce an unexpected result?In other words, do you see the data to be in an unexpected state after the bulkwrite is finished?If everything is as expected, I wouldn’t put too much worry on the exact detail on how the operation is performed exactly, especially by looking at the oplog. I would note that the oplog is for internal use, the format may change with no warning, and it may not record things in the manner you expect.We have an investigation in progress in our side and we recently found that a Primary node switch occured at the same time (#unlucky).Since MongoDB is a distributed database, it is expected that during normal operation, the topology may change due to network issues or maintenance. This is by design to give you high availability for your data. I would recommend you to check out Build a Resilient Application with MongoDB to cater for this situation.Also, have you tried this bulkwrite scenario on a steady state replica set, e.g. one with no unexpected topology change, and see the same or a different result?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin, thanks for your post.Unfortunately the result was inconsistent. This inconsistency triggered a sensor and resulted in an oplog investigation to better understand what append exactly on the replica set.I tried to reproduce locally without any issue, and when I check the oplog, I only see the array insert and not the delete (which seems ok as i guess deleting a non existing value results in a noop, so no oplog entry).The bulkwrite scenario also happens several times a day and we weren’t able to reproduce the issue so far.This makes me really think that it may come from the “unpredictable collision” between bulkwrite and primary switch.Without any reproductible case on our side, I totally understand this is almost impossible to investigate further and could remain an inexplicable issue.Best regards,", "username": "Adrien_DESMOULES" } ]
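The same ordered pull-then-push from the C# example, sketched with the Node.js driver for comparison; the hard-coded values are the ones from the thread, and `coll` is assumed to be a Collection handle.

```javascript
// Node.js driver sketch mirroring the C# bulk write above.
const result = await coll.bulkWrite(
  [
    {
      updateOne: {
        filter: { advertiserId: 123456 },
        update: { $pull: { categories: { categoryId: 9 } } },
      },
    },
    {
      updateOne: {
        filter: { advertiserId: 123456 },
        update: { $push: { categories: { categoryId: 9 } } },
      },
    },
  ],
  { ordered: true }   // with ordered: true the $pull is applied before the $push
);
```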
Bulk write operations order inverted
2023-02-13T10:16:48.892Z
Bulk write operations order inverted
676
null
[ "aggregation" ]
[ { "code": "db.serial.findAndModify({ query: {}, update: { $inc: {serial: { $floor: { $multiply: [ { $rand: {} }, 100 ]}}}}, new: true})\ndb.serial.findAndModify({ query: {}, update: { $inc: {serial: { $floor: 3.12 }}}, new: true})\n", "text": "I am trying to increment a value by some random amount.and I get Cannot increment with the non-numeric argumentI have even triedand get the same error.I expect the value to be incremented by 3 in the second example, and some random number between 0-100 in the first.Why? and how to fix it?", "username": "sv_savage" }, { "code": "db.serial.findAndModify({ query: {}, update: { $inc: {serial: { $floor: { $multiply: [ { $rand: {} }, 100 ]}}}}, new: true})\ndb.users.findAndModify( {query: {}, update: { $inc: { serial: Math.floor(Math.random() * 100) } }})\nMath.floor(Math.random() * 100)\"serial\"", "text": "Hi @sv_savage,Welcome to the MongoDB Community forums and I get Cannot increment with the non-numeric argumentThe error is because $inc requires the parameter to be a numeric value and not a document.To fix this, the update operation should use valid atomic operators that work for incrementing the field with a numeric value. One possible solution could be:Note: It is basically javascript, so the value of Math.floor(Math.random() * 100) will be calculated on the client side.Here, it will increment the \"serial\" field with the rounded value of a random number between 0 and 100.I hope it helps!Thanks,\nKushagra", "username": "Kushagra_Kesav" } ]
$inc value using $floor returns "Cannot increment with non-numeric argument"
2023-02-12T19:28:11.215Z
$inc value using $floor returns &ldquo;Cannot increment with non-numeric argument&rdquo;
793
null
[ "kotlin" ]
[ { "code": "\"App.create(APP_ID).currentUser\"", "text": "I’m using JWT provider for the authentication. Also from that JWT I’m getting some custom fields like: name, email, picture and I’m successfully saving them on my Mongo DB Atlas. I can see those information about the user when I open up: App Services > App Users.But my question is, how can I access those information from my own app? When I try to get the current user for example I use something like: \"App.create(APP_ID).currentUser\"But from that user I cannot access any data, except the ownerId/identity. Any solution?", "username": "111757" }, { "code": "", "text": "Can someone point me in the right direction on this?", "username": "111757" } ]
Access User's Data - Mongo Realm Kotlin SDK
2022-12-23T15:43:38.124Z
Access User&rsquo;s Data - Mongo Realm Kotlin SDK
1,487
null
[]
[ { "code": "", "text": "Attempting to load data for lab associated with Creating and Deploying Atlas Cluster. Entered name of data set “myAtlasCluster” Mongo returned error message indicating that cluster does not exist.", "username": "rich_fields" }, { "code": "", "text": "Hey @rich_fields,Welcome to the MongoDB Community Forums! Have you created a cluster before starting this lab? Kindly follow the steps to create and deploy your Altas Cluster as given in the lab instructions. Kindly note, if you have an existing account, we recommend that you create a new email address and use it to create your new Atlas account.Please let us know if this helps or if the issue still persists. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Training
2023-02-19T12:35:19.112Z
MongoDB Training
1,021
null
[ "java", "atlas-cluster" ]
[ { "code": "26-Jan-2023 09:02:43.395 WARNING [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [myapp] appears to have started a thread named [BufferPoolPruner-1-thread-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:\n [email protected]/jdk.internal.misc.Unsafe.park(Native Method)\n [email protected]/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)\n [email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1672)\n [email protected]/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)\n [email protected]/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)\n [email protected]/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1062)\n [email protected]/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1122)\n [email protected]/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n [email protected]/java.lang.Thread.run(Thread.java:833)\n26-Jan-2023 09:02:43.396 WARNING [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [myapp] appears to have started a thread named [cluster-ClusterId{value='XXX', description='null'}-mycluster-shard-00-00-server.mongodb.net:27017] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:\n [email protected]/sun.nio.ch.Net.poll(Native Method)\n [email protected]/sun.nio.ch.NioSocketImpl.park(NioSocketImpl.java:181)\n [email protected]/sun.nio.ch.NioSocketImpl.timedRead(NioSocketImpl.java:285)\n [email protected]/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:309)\n [email protected]/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:350)\n [email protected]/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:803)\n [email protected]/java.net.Socket$SocketInputStream.read(Socket.java:966)\n [email protected]/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:484)\n [email protected]/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:478)\n [email protected]/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)\n [email protected]/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1465)\n [email protected]/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1069)\n com.mongodb.internal.connection.SocketStream.read(SocketStream.java:113)\n com.mongodb.internal.connection.SocketStream.read(SocketStream.java:138)\n com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:716)\n com.mongodb.internal.connection.InternalStreamConnection.receiveMessageWithAdditionalTimeout(InternalStreamConnection.java:574)\n com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:413)\n com.mongodb.internal.connection.InternalStreamConnection.receive(InternalStreamConnection.java:372)\n com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:226)\n com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:158)\n [email 
protected]/java.lang.Thread.run(Thread.java:833)\n26-Jan-2023 09:02:43.397 WARNING [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [myapp] appears to have started a thread named [cluster-rtt-ClusterId{value='XXX', description='null'}-mycluster-shard-00-00-server.mongodb.net:27017] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:\n [email protected]/java.lang.Thread.sleep(Native Method)\n com.mongodb.internal.connection.DefaultServerMonitor.waitForNext(DefaultServerMonitor.java:448)\n com.mongodb.internal.connection.DefaultServerMonitor.access$1500(DefaultServerMonitor.java:65)\n com.mongodb.internal.connection.DefaultServerMonitor$RoundTripTimeRunnable.run(DefaultServerMonitor.java:420)\n [email protected]/java.lang.Thread.run(Thread.java:833)\n26-Jan-2023 09:02:43.398 WARNING [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [myapp] appears to have started a thread named [cluster-ClusterId{value='XXX', description='null'}-mycluster-shard-00-01-server.mongodb.net:27017] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:\n [email protected]/sun.nio.ch.Net.poll(Native Method)\n [email protected]/sun.nio.ch.NioSocketImpl.park(NioSocketImpl.java:181)\n [email protected]/sun.nio.ch.NioSocketImpl.timedRead(NioSocketImpl.java:285)\n [email protected]/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:309)\n [email protected]/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:350)\n [email protected]/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:803)\n [email protected]/java.net.Socket$SocketInputStream.read(Socket.java:966)\n [email protected]/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:484)\n [email protected]/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:478)\n [email protected]/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)\n [email protected]/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1465)\n [email protected]/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1069)\n com.mongodb.internal.connection.SocketStream.read(SocketStream.java:113)\n com.mongodb.internal.connection.SocketStream.read(SocketStream.java:138)\n com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:716)\n com.mongodb.internal.connection.InternalStreamConnection.receiveMessageWithAdditionalTimeout(InternalStreamConnection.java:574)\n com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:413)\n com.mongodb.internal.connection.InternalStreamConnection.receive(InternalStreamConnection.java:372)\n com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:226)\n com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:158)\n [email protected]/java.lang.Thread.run(Thread.java:833)\n26-Jan-2023 09:02:43.398 WARNING [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [myapp] appears to have started a thread named [cluster-rtt-ClusterId{value='XXX', description='null'}-mycluster-shard-00-01-server.mongodb.net:27017] but has failed to stop it. 
This is very likely to create a memory leak. Stack trace of thread:\n [email protected]/java.lang.Thread.sleep(Native Method)\n com.mongodb.internal.connection.DefaultServerMonitor.waitForNext(DefaultServerMonitor.java:448)\n com.mongodb.internal.connection.DefaultServerMonitor.access$1500(DefaultServerMonitor.java:65)\n com.mongodb.internal.connection.DefaultServerMonitor$RoundTripTimeRunnable.run(DefaultServerMonitor.java:420)\n [email protected]/java.lang.Thread.run(Thread.java:833)\n26-Jan-2023 09:02:43.399 WARNING [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [myapp] appears to have started a thread named [cluster-ClusterId{value='XXX', description='null'}-mycluster-shard-00-02-server.mongodb.net:27017] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:\n [email protected]/sun.nio.ch.Net.poll(Native Method)\n [email protected]/sun.nio.ch.NioSocketImpl.park(NioSocketImpl.java:181)\n [email protected]/sun.nio.ch.NioSocketImpl.timedRead(NioSocketImpl.java:285)\n [email protected]/sun.nio.ch.NioSocketImpl.implRead(NioSocketImpl.java:309)\n [email protected]/sun.nio.ch.NioSocketImpl.read(NioSocketImpl.java:350)\n [email protected]/sun.nio.ch.NioSocketImpl$1.read(NioSocketImpl.java:803)\n [email protected]/java.net.Socket$SocketInputStream.read(Socket.java:966)\n [email protected]/sun.security.ssl.SSLSocketInputRecord.read(SSLSocketInputRecord.java:484)\n [email protected]/sun.security.ssl.SSLSocketInputRecord.readHeader(SSLSocketInputRecord.java:478)\n [email protected]/sun.security.ssl.SSLSocketInputRecord.bytesInCompletePacket(SSLSocketInputRecord.java:70)\n [email protected]/sun.security.ssl.SSLSocketImpl.readApplicationRecord(SSLSocketImpl.java:1465)\n [email protected]/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:1069)\n com.mongodb.internal.connection.SocketStream.read(SocketStream.java:113)\n com.mongodb.internal.connection.SocketStream.read(SocketStream.java:138)\n com.mongodb.internal.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:716)\n com.mongodb.internal.connection.InternalStreamConnection.receiveMessageWithAdditionalTimeout(InternalStreamConnection.java:574)\n com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:413)\n com.mongodb.internal.connection.InternalStreamConnection.receive(InternalStreamConnection.java:372)\n com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:226)\n com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:158)\n [email protected]/java.lang.Thread.run(Thread.java:833)\n26-Jan-2023 09:02:43.399 WARNING [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [myapp] appears to have started a thread named [cluster-rtt-ClusterId{value='XXX', description='null'}-mycluster-shard-00-02-server.mongodb.net:27017] but has failed to stop it. This is very likely to create a memory leak. 
Stack trace of thread:\n [email protected]/java.lang.Thread.sleep(Native Method)\n com.mongodb.internal.connection.DefaultServerMonitor.waitForNext(DefaultServerMonitor.java:448)\n com.mongodb.internal.connection.DefaultServerMonitor.access$1500(DefaultServerMonitor.java:65)\n com.mongodb.internal.connection.DefaultServerMonitor$RoundTripTimeRunnable.run(DefaultServerMonitor.java:420)\n [email protected]/java.lang.Thread.run(Thread.java:833)\n26-Jan-2023 09:02:43.400 WARNING [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [myapp] appears to have started a thread named [MaintenanceTimer-3-thread-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:\n [email protected]/jdk.internal.misc.Unsafe.park(Native Method)\n [email protected]/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)\n [email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1672)\n [email protected]/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)\n [email protected]/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)\n [email protected]/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1062)\n [email protected]/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1122)\n [email protected]/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n [email protected]/java.lang.Thread.run(Thread.java:833)\n26-Jan-2023 09:02:43.400 WARNING [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [myapp] appears to have started a thread named [MaintenanceTimer-4-thread-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:\n [email protected]/jdk.internal.misc.Unsafe.park(Native Method)\n [email protected]/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)\n [email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1672)\n [email protected]/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)\n [email protected]/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)\n [email protected]/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1062)\n [email protected]/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1122)\n [email protected]/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n [email protected]/java.lang.Thread.run(Thread.java:833)\n26-Jan-2023 09:02:43.401 WARNING [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [myapp] appears to have started a thread named [MaintenanceTimer-6-thread-1] but has failed to stop it. This is very likely to create a memory leak. 
Stack trace of thread:\n [email protected]/jdk.internal.misc.Unsafe.park(Native Method)\n [email protected]/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)\n [email protected]/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1672)\n [email protected]/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182)\n [email protected]/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899)\n [email protected]/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1062)\n [email protected]/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1122)\n [email protected]/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n [email protected]/java.lang.Thread.run(Thread.java:833)\n26-Jan-2023 09:02:43.415 SEVERE [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [myapp] created a ThreadLocal with key of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1@39f1c29b]) and a value of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference@70f635bf]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.\n26-Jan-2023 09:02:43.416 SEVERE [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [myapp] created a ThreadLocal with key of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1@39f1c29b]) and a value of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference@1050149f]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.\n26-Jan-2023 09:02:43.417 SEVERE [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [myapp] created a ThreadLocal with key of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1@39f1c29b]) and a value of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference@2cbc7be9]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.\n26-Jan-2023 09:02:43.418 SEVERE [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [myapp] created a ThreadLocal with key of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1@39f1c29b]) and a value of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference@132a7a4d]) but failed to remove it when the web application was stopped. 
Threads are going to be renewed over time to try and avoid a probable memory leak.\n26-Jan-2023 09:02:43.419 SEVERE [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [myapp] created a ThreadLocal with key of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1@39f1c29b]) and a value of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference@5c8628f2]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.\n26-Jan-2023 09:02:43.419 SEVERE [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [myapp] created a ThreadLocal with key of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1@39f1c29b]) and a value of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference@8813712]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.\n26-Jan-2023 09:02:43.420 SEVERE [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [myapp] created a ThreadLocal with key of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1@39f1c29b]) and a value of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference@94726ef]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.\n26-Jan-2023 09:02:43.421 SEVERE [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [myapp] created a ThreadLocal with key of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1@39f1c29b]) and a value of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference@317a5145]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.\n26-Jan-2023 09:02:43.422 SEVERE [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [myapp] created a ThreadLocal with key of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1@39f1c29b]) and a value of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference@fb62641]) but failed to remove it when the web application was stopped. 
Threads are going to be renewed over time to try and avoid a probable memory leak.\n26-Jan-2023 09:02:43.422 SEVERE [http-nio-8080-exec-88] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [myapp] created a ThreadLocal with key of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference$1@39f1c29b]) and a value of type [com.oracle.truffle.api.nodes.EncapsulatingNodeReference] (value [com.oracle.truffle.api.nodes.EncapsulatingNodeReference@171eac92]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.\n@Override\n public void contextDestroyed(ServletContextEvent servletContextEvent) {\n logger.debug(\"Closing Connection to Mongo DB.\");\n MongoClient mongoClient = (MongoClient) servletContextEvent.getServletContext().getAttribute(EnvironmentConstant.MONGO_CLIENT);\n mongoClient.close();\n logger.debug(\"Closing Connection to Mongo DB successful.\");\n }\n", "text": "We use the current MongoDB Sync driver to establish database connections to a MongoDB in a web application. The web application is hosted in a Tomcat servlet container.These are our system specs:Apache Tomcat 10JDK 17.0.6mongodb-driver-sync 4.8.2When we undeploy an application, we get the following error message in catalina.out.The MongoDB connection is created in a ServletContextListener. In the contextDestroy method, the connection is also closed again.However, that doesn’t seem to work for some reason. The method itself is called, but the connections are not closed.", "username": "Stefan_Schweiger" }, { "code": "", "text": "osted in a Tomcat servletI’ve noticed the same thing, did you ever figure it out?", "username": "Stan_Ehm" }, { "code": "", "text": "Unfortunately, no. Maybe a MongoDB developer can give some feedback on this?", "username": "Stefan_Schweiger" } ]
MongoClient causes memory leak on undeploy in Tomcat servlet container
2023-01-26T11:04:34.167Z
MongoClient causes memory leak on undeploy in Tomcat servlet container
1,270
null
[ "node-js", "python", "php" ]
[ { "code": "", "text": "Hello Everyone, I am Sumanta Mukhopadhyay, the MongoDB User Group (MUG) Leader in Kolkata. I am a seasoned full-stack developer with extensive expertise in delivering cutting-edge web solutions using a variety of technologies, including MongoDB, Express, React, Node.js, Python, PHP, and Vue.js. I have a deep understanding of database architecture, performance tuning, and scalability, and I have several years of experience in developing, deploying, and managing applications.As a MUG Leader, I am passionate about sharing my knowledge and experience with others, and I am excited to be a part of this community. I look forward to contributing my skills and expertise to help others and learning from others as well. Let’s work together to build amazing things!", "username": "Sumanta_Mukhopadhyay" }, { "code": "", "text": "Hey @Sumanta_Mukhopadhyay,\nWelcome to MongoDB Community!We are glad for the passion for helping and learning you bring to the community and can’t wait to see all the amazing things you have in plans for the Kolkata Community ", "username": "Harshit" }, { "code": "", "text": "Hello @Sumanta_Mukhopadhyay and welcome to this community! I’m looking forward to seeing your next post ", "username": "Hubert_Nguyen1" }, { "code": "", "text": "Python is a popular programming language known for its simplicity and ease of use. It offers a wide range of libraries and functions that…\nReading time: 2 min read\nWrote this", "username": "Sumanta_Mukhopadhyay" }, { "code": "", "text": "Just a suggestion: people interested in sharing their knowledge should consider publishing in places other than Medium. Publishing on Medium may have advantages for you as an author, but does not benefit the community to the extent that it could if you were publishing as a freely accessible resource (for example as a LinkedIn or Github article)", "username": "Sanjay_Dasgupta" } ]
Hello Everyone Sumanta From Kolkata India
2023-02-09T19:14:54.769Z
Hello Everyone Sumanta From Kolkata India
1,501
null
[ "aggregation", "crud" ]
[ { "code": "\nvar products = [array with 2500 product barcodes];\n\n[\n {\n '$match': {\n 'productId': {\n '$in': [\n products\n ]\n }\n }\n }\n]\nmatchcountupdateMany()", "text": "Hello,In our place an operator needs to update 2500 products by scanning their barcodes. MongoDB then matches those scanned barcodes into its aggregation query like so:After matching it then performs necessary update operations.QUESTION 1) Immediate question: Given the amount of products, is this optimal? How many product barcodes are too many to pass for $in operator?Before performing the aforementioned step it is necessary to find out if all the scanned products exist (were already added into the system). The way it is configured now, a match query is run before the previously mentioned query together with the count stage to get the number of items matched. The retrieved count is then compared to the length of the products array supplied.So, in summary, the way it is now:QUESTION 2) is there a more optimal way to run this 2 step operation? Maybe it is possible to achieve the same result in the single updateMany query?Thank you very much", "username": "RENOVATIO" }, { "code": "matchcountupdateMany()", "text": "Hey @RENOVATIO,Before performing the aforementioned step it is necessary to find out if all the scanned products exist (were already added into the system). The way it is configured now, a match query is run before the previously mentioned query together with the count stage to get the number of items matched. The retrieved count is then compared to the length of the products array supplied.So, in summary, the way it is now:Can you please elaborate on this further? Why are you searching for productId in the array first instead of just searching in your collection and then updating the document that corresponds that that particular productId? It would be great if you can give us sample documents along with the query you are using and the output for us to better understand your current process and help you better.How many product barcodes are too many to pass for $in operator?There is no hard limit as such on how many elements are too many for an operator. You can try running .explain output on your queries which can help you better understand how your query is performing.Regards,\nSatyam", "username": "Satyam" } ]
$in operator array size question, verifying all products exist before updating them
2023-02-11T00:34:37.602Z
$in operator array size question, verifying all products exist before updating them
693
null
[ "atlas-cluster", "serverless" ]
[ { "code": "", "text": "We found out that we will gget disconnect sometime on2023/02/15 09:42:27.924017 wesync.go:143: Failed to process file, error: connection(ac-w84fdyl-lb.n1oyria.mongodb.net:27017[-116]) unable to write wire message to network: write tcp 10.237.75.0:56726->35.80.213.180:27017: write: broken pipeconnection(ac-w84fdyl-lb.n1oyria.mongodb.net:27017[-107]) unable to write wire message to network: write tcp 10.237.75.0:56704->35.80.213.180:27017: write: broken pipeis there any way to find the root cause ?", "username": "CJ_Wu" }, { "code": "", "text": "Hello @CJ_Wu ,Welcome to The MongoDB Community Forums! I would advise you to bring this up with the Atlas chat support team. They may be able to check if anything on the Atlas side could have possibly caused this broken pipe message. In saying so, if a chat support is raised, please provide them with the following:Regards,\nTarun", "username": "Tarun_Gaur" } ]
How to troubleshoot with the serverless plan
2023-02-15T19:52:51.322Z
How to troubleshoot with the serverless plan
1,140
null
[ "crud" ]
[ { "code": "{\n_id:\"...\",\n_artist:\"...\",\n_datecreated:\"...\",\n_gallery: [0] {\n\t _title: title,\n\t_description: description\n\t_datecreated: currentDate,\n\t_client: client,\n\t_path: displaypath,\n\t_artworkid: artworkid,\t\t\t\n},\n_shop:\"...\",\n_devdate:\"...\",\n_dev:\"...\",\n_username:\"...\",\n_password:\"...\",\n }\n\"DB.updateOne({\"_gallery._artworkid\": artid}, {$set: { \"_gallery.0._title\": newtitle }});\"", "text": "H, I have a document with an array, here is the structure:And I’ve got only one entry in the array, which is 0…I have succeeded in updating the “_title” field within array using:\n\"DB.updateOne({\"_gallery._artworkid\": artid}, {$set: { \"_gallery.0._title\": newtitle }});\"But I have a problem, thats using a prefixed “.0.” in the function, when really, I want to use that ‘field’ as a variable as in ‘i’ (depending on which ‘i’ is being chosen, the array entry will be updated accordingly), and I have tried all sorts, with no luck to simply include a variable into the function “field” option… Can someone shine a light on this?This example bellow illustrates what I mean with the ‘i’, there is the same: “array”.“entrynr”.“field”…I got errors of this sort:a bunch more, but this was as close as I got to it…How can I do this?HELP x)", "username": "Illimited_Co" }, { "code": "<id>\"DB.updateOne({\"_gallery._artworkid\": artid}, {$set: { \"_gallery.0._title\": newtitle }});\"\n{\n \"_id\": ObjectId(\"60103cc232ec3d6175e19185\"),\n \"name\": \"John Doe\",\n \"grades\": [\n { \"subject\": \"Maths\", \"score\": 85 },\n { \"subject\": \"English\", \"score\": 92 },\n { \"subject\": \"Science\", \"score\": 78 }\n ]\n}\n\ndb.students.updateOne(\n { \"_id\": ObjectId(\"60103cc232ec3d6175e19185\") }, // Filter by student ID\n { \n $set: { \n \"grades.$[elem].score\": 90 \n }\n },\n {\n arrayFilters: [\n { \"elem.subject\": \"Maths\" } // Filter the \"grades\" array to find the element with \"subject\" equal to \"Math\"\n ]\n }\n)\n{\n \"_id\": ObjectId(\"60103cc232ec3d6175e19185\"),\n \"name\": \"John Doe\",\n \"grades\": [\n { \"subject\": \"Math\", \"score\": 90 },\n { \"subject\": \"English\", \"score\": 92 },\n { \"subject\": \"Science\", \"score\": 78 }\n ]\n}\n_gallerydb.coll.updateOne(\n { \"_gallery._artworkid\": artid }, \n { \n $set: { \n \"_gallery.$[elem]._title\": newtitle \n }\n },\n {\n arrayFilters: [\n { \"elem._artworkid\": artid } // Filter the \"gallery\" array to find the element with \"_artworkid\" equal to \"artid\"\n ]\n }\n)\n", "text": "Hi @Illimited_Co,Welcome to the MongoDB Community forums I would suggest you use the $arrayFilter and $[<id>] operators.I am going to use a small example to illustrate this. Let’s say you have a collection name “student” that looks like this (resembling yours).To update the “score” field for Maths subject for a student, we can use the following query:It will return the following output: You can similarly approach the above question to update the field matching the specific object in your array of _gallery. Here is the query for your reference:Let us know if you have any further questions.Thanks,\nKushagra", "username": "Kushagra_Kesav" } ]
Update field within array - HELP
2023-02-12T20:30:03.924Z
Update field within array - HELP
1,352
null
[ "atlas-cluster" ]
[ { "code": "", "text": "Hello!I’m thinking of changing from Cluster to Global Cluster, but can you tell me if there will be any downtime?\nI would appreciate it if you could add some evidence such as official documents if you don’t mind.Best regard!!", "username": "Masaki_Miyamoto1" }, { "code": "", "text": "Hi @Masaki_Miyamoto1 - Welcome to the community.I’m thinking of changing from Cluster to Global Cluster, but can you tell me if there will be any downtime?I would contact the Atlas in-app chat support regarding this to try clarify whether downtime is required or not when converting to Global Cluster.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Downtime when changing from Cluster to Global Cluster
2023-02-17T06:27:46.013Z
Downtime when changing from Cluster to Global Cluster
581
null
[ "replication", "sharding" ]
[ { "code": "\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"mongodb-cluster-rs0-1.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\"}}\n\n\"s\":\"W\", \"c\":\"NETWORK\", \"id\":23235, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"SSL peer certificate validation failed\",\"attr\":{\"reason\":\"self signed certificate\"}}\n\n\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4333213, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM Topology Change\",\"attr\":{\"replicaSet\":\"rs0\",\"newTopologyDescription\":\"{ id: \\\"1cfeadb1-bb62-4744-912b-c19ce0c33385\\\", topologyType: \\\"ReplicaSetNoPrimary\\\", servers: { mongodb-cluster-rs0-0.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017: { address: \\\"mongodb-cluster-rs0-0.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, mongodb-cluster-rs0-1.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017: { address: \\\"mongodb-cluster-rs0-1.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\\\", topologyVersion: { processId: ObjectId('63a050118196af8adf3e85d9'), counter: 3 }, roundTripTime: 792, lastWriteDate: new Date(1676554432000), opTime: { ts: Timestamp(1676554432, 1), t: 32 }, type: \\\"RSSecondary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"mongodb-cluster-rs0-1.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\\\", setName: \\\"rs0\\\", setVersion: 108768, primary: \\\"mongodb-cluster-rs0-2.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\\\", lastUpdateTime: new Date(1676554439781), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"mongodb-cluster-rs0-0.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\\\", 1: \\\"mongodb-cluster-rs0-1.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\\\", 2: \\\"mongodb-cluster-rs0-2.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\\\" }, arbiters: {}, passives: {}, tags: { podName: \\\"mongodb-cluster-rs0-1\\\", serviceName: \\\"mongodb-cluster\\\" } }, mongodb-cluster-rs0-2.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017: { address: \\\"mongodb-cluster-rs0-2.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} } }, logicalSessionTimeoutMinutes: 30, setName: \\\"rs0\\\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId('7fffffff0000000000000020'), setVersion: 108768 } }\",\"previousTopologyDescription\":\"{ id: \\\"f3fd6d57-5393-4493-8c11-c37d7262fbcc\\\", topologyType: \\\"ReplicaSetNoPrimary\\\", servers: { mongodb-cluster-rs0-0.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017: { address: \\\"mongodb-cluster-rs0-0.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, mongodb-cluster-rs0-1.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017: { address: \\\"mongodb-cluster-rs0-1.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, mongodb-cluster-rs0-2.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017: { address: 
\\\"mongodb-cluster-rs0-2.mongodb-cluster-rs0.mongodb.svc.cluster.local:27017\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} } }, setName: \\\"rs0\\\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId('7fffffff0000000000000020'), setVersion: 108768 } }\"}}\n\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4333213, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM Topology Change\",\"attr\":{\"replicaSet\":\"cfg\",\"newTopologyDescription\":\"{ id: \\\"61b03df2-d4a2-453f-b841-8483b2c25bb7\\\", topologyType: \\\"ReplicaSetWithPrimary\\\", servers: { mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017: { address: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", topologyVersion: { processId: ObjectId('63ee2aac594f4fe7eab6c104'), counter: 5 }, roundTripTime: 997, lastWriteDate: new Date(1676554296000), opTime: { ts: Timestamp(1676554296, 4), t: 39 }, type: \\\"RSPrimary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", setName: \\\"cfg\\\", setVersion: 137515, electionId: ObjectId('7fffffff0000000000000027'), primary: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", lastUpdateTime: new Date(1676554296828), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", 1: \\\"mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", 2: \\\"mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\" }, arbiters: {}, passives: {}, tags: { podName: \\\"mongodb-cluster-cfg-0\\\", serviceName: \\\"mongodb-cluster\\\" } }, mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017: { address: \\\"mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", topologyVersion: { processId: ObjectId('63ee2a68c3b77cbea7abef9b'), counter: 80 }, roundTripTime: 765, lastWriteDate: new Date(1676554296000), opTime: { ts: Timestamp(1676554296, 4), t: 39 }, type: \\\"RSSecondary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", setName: \\\"cfg\\\", setVersion: 137515, primary: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", lastUpdateTime: new Date(1676554296920), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", 1: \\\"mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", 2: \\\"mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\" }, arbiters: {}, passives: {}, tags: { podName: \\\"mongodb-cluster-cfg-1\\\", serviceName: \\\"mongodb-cluster\\\" } }, mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017: { address: \\\"mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", topologyVersion: { processId: ObjectId('63b71bb65ff176f83ec5c1b1'), counter: 6 }, roundTripTime: 992, lastWriteDate: new Date(1676552843000), opTime: { ts: Timestamp(1676552843, 2), t: 38 }, type: \\\"RSOther\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", setName: \\\"cfg\\\", setVersion: 137515, lastUpdateTime: new Date(1676554295898), 
logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", 1: \\\"mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", 2: \\\"mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\" }, arbiters: {}, passives: {}, tags: { podName: \\\"mongodb-cluster-cfg-2\\\", serviceName: \\\"mongodb-cluster\\\" } } }, logicalSessionTimeoutMinutes: 30, setName: \\\"cfg\\\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId('7fffffff0000000000000027'), setVersion: 137515 } }\",\"previousTopologyDescription\":\"{ id: \\\"3da4bfe1-ae74-47ab-9598-a2b6aa66224a\\\", topologyType: \\\"ReplicaSetWithPrimary\\\", servers: { mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017: { address: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", topologyVersion: { processId: ObjectId('63ee2aac594f4fe7eab6c104'), counter: 5 }, roundTripTime: 997, lastWriteDate: new Date(1676554296000), opTime: { ts: Timestamp(1676554296, 4), t: 39 }, type: \\\"RSPrimary\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", setName: \\\"cfg\\\", setVersion: 137515, electionId: ObjectId('7fffffff0000000000000027'), primary: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", lastUpdateTime: new Date(1676554296828), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", 1: \\\"mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", 2: \\\"mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\" }, arbiters: {}, passives: {}, tags: { podName: \\\"mongodb-cluster-cfg-0\\\", serviceName: \\\"mongodb-cluster\\\" } }, mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017: { address: \\\"mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} }, mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017: { address: \\\"mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", topologyVersion: { processId: ObjectId('63b71bb65ff176f83ec5c1b1'), counter: 6 }, roundTripTime: 992, lastWriteDate: new Date(1676552843000), opTime: { ts: Timestamp(1676552843, 2), t: 38 }, type: \\\"RSOther\\\", minWireVersion: 13, maxWireVersion: 13, me: \\\"mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", setName: \\\"cfg\\\", setVersion: 137515, lastUpdateTime: new Date(1676554295898), logicalSessionTimeoutMinutes: 30, hosts: { 0: \\\"mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", 1: \\\"mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\", 2: \\\"mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\\\" }, arbiters: {}, passives: {}, tags: { podName: \\\"mongodb-cluster-cfg-2\\\", serviceName: \\\"mongodb-cluster\\\" } } }, logicalSessionTimeoutMinutes: 30, setName: \\\"cfg\\\", compatible: true, maxElectionIdSetVersion: { electionId: ObjectId('7fffffff0000000000000027'), setVersion: 137515 } }\"}}\n\n\"s\":\"I\", \"c\":\"SHARDING\", \"id\":471693, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Updating the shard registry with 
confirmed replica set\",\"attr\":{\"connectionString\":\"cfg/mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017,mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017,mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\"}}\n\n\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22846, \"ctx\":\"UpdateReplicaSetOnConfigServer\",\"msg\":\"Updating sharding state with confirmed replica set\",\"attr\":{\"connectionString\":\"cfg/mongodb-cluster-cfg-0.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017,mongodb-cluster-cfg-1.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017,mongodb-cluster-cfg-2.mongodb-cluster-cfg.mongodb.svc.cluster.local:27017\"}}\n", "text": "We are having problems from time to time in the same environment, where the mongo cluster (installed operator - percona), becomes unavailable.But until it’s time to re-establish the environment, we de-escalate the mongo and re-climb.Percona version 11.2.1\nMongo version 5.0.7Some bugs that caught my attention:mongo-cluster-cfg-0:cluster-mongoS-0Do you have any idea what’s going on?I didn’t put the log files here because I didn’t know if I could", "username": "Rafael_Carvalho2" }, { "code": "mongodmongos", "text": "Hi @Rafael_Carvalho2 welcome to the community!the mongo cluster (installed operator - percona), becomes unavailable.Percona version 11.2.1By “Percona” do you mean Percona Operator for MongoDB?If yes, then I’m afraid Percona is a separate entity not affiliated with MongoDB, and thus we cannot tell you what went wrong.From the logs you posted, it seems that you wanted to deploy a sharded cluster using TLS? For official MongoDB servers, you might find these links helpful:Note that deploying a sharded cluster is considered to be an advanced MongoDB topic. A sharded cluster is a great deployment when you need horizontal scaling and more parallelization, but it requires careful planning and more advanced operational skills vs. a more basic replica set deployment.Alternatively you might want to check out MongoDB Atlas if you prefer to offload the operational concerns.Best regards\nKevin", "username": "kevinadi" } ]
Mongo cluster availability issues
2023-02-16T20:20:33.297Z
Mongo cluster availability issues
943
null
[]
[ { "code": "", "text": "I have a MongoDB cluster with three config nodes and three shard nodes and work on pbm backup setup. I can start pbm-agent on config nodes but failed to start pbm-agent on shrad nodes with error.2023/02/17 22:21:42 Exit: connect to PBM: create mongo connection: mongo ping: server selection error: server selection timeout, current topology: { Type: Unknown, Servers: [{ Addr: mongo-shard-node2-preprod.apps-nonprod.abc.com:27045, Type: RSGhost, Average RTT: 809758 }, ] }Coudl some one hlease help if you met similar problem before? Thank you in advance!", "username": "Larry_Sun" }, { "code": "pbm-agent", "text": "Hi @Larry_Sun welcome to the community!If by pbm-agent you mean Percona Backup Manager, then I’m afraid it’s not a MongoDB product, thus we cannot provide knowledge nor support for it.I suggest you try contacting Percona Support for this product.Best regards\nKevin", "username": "kevinadi" } ]
PBM Agent Fail to Start on Shard Cluster Nodes
2023-02-20T01:09:32.973Z
PBM Agent Fail to Start on Shard Cluster Nodes
1,163
null
[]
[ { "code": "", "text": "Hello good afternoon, does anyone know where I can find the following bookstore?MongoDB.h", "username": "David_Davila_Barrios" }, { "code": "", "text": "Hi @David_Davila_Barrios welcome to the community!MongoDB.hCan I ask what you’re trying to do?Best regards\nKevin", "username": "kevinadi" } ]
Library MongoDB.h
2023-02-18T21:05:48.173Z
Library MongoDB.h
610
null
[ "node-js", "data-modeling", "schema-validation" ]
[ { "code": "assignee: {\n type: \"linkingObjects\",\n objectType: \"Task<>\",\n property: \"user\"\n}\nuser: \"User\"\n", "text": "Hello, I am referencing this article: https://docs.mongodb.com/realm/sdk/node/examples/define-a-realm-object-model/#define-an-inverse-relationship-propertyThe example in the article puts the backlink on the Many-to-One side of the relationship. I.e. you define per User which Tasks belong to it, and then each Task has a backlinked User. Being new to the NoSQL paradigm, I am used to the opposite. I would take each Task and assign it a User. There would then be a backlinked list (or better, set) of Tasks on each User.Is this possible, or am I thinking about this the wrong way? If it’s possible, what is the proper syntax to define the schema? Here’s a snippet of what I’ve tried (within the inverse relationship part within the User schema):and a snippet of the Task schema:I get a schema validation error: “- Property ‘User.assignee’ of type ‘linking objects’ has unknown object type ‘Task<>’”Thanks!", "username": "samaa" }, { "code": "", "text": "Bumping this - I’d appreciate any wisdom.", "username": "samaa" }, { "code": "objectType: \"Task<>\",<>unknown object type ‘Task<>’objectType: \"Task\"\n[]", "text": "objectType: \"Task<>\",Angle brackets, <>, are mostly used by TypeScript to denote types for generic classes. But I am guessing here you were not dealing with TS. But, actually, the code is part of the schema and this would be the wrong move, hence the error unknown object type ‘Task<>’Following the examples just a page below in the link you supplied, your code should be without any brackets:Or maybe you had an array in mind, but that requires square brackets, []. You haven’t supplied more of your code, so checking this is in on you ", "username": "Yilmaz_Durmaz" } ]
Invalid Schema with Inverse Relationship on One-To-Many Side
2021-09-29T22:02:55.863Z
Invalid Schema with Inverse Relationship on One-To-Many Side
4,139
null
[ "aggregation", "replication" ]
[ { "code": "", "text": "Need a ladge json collection which I can import into my mongodb database for checking oplog of replicaset working properly or not.", "username": "Sabbir_Sattar_Mukit" }, { "code": "", "text": "Hi @Sabbir_Sattar_Mukit - Welcome to the community Need a ladge json collection which I can import into my mongodb database for checking oplog of replicaset working properly or not.Could you describe the goal you’re trying to achieve with regards to the oplog? Please also provide your MongoDB version. Note that the oplog is for internal MongoDB use and the format and content can and will change with no notice. If you need to monitor changes in a collection/database/deployment then the supported method is to use change streams instead.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Mongodb replica set and oplog
2023-02-09T06:45:12.831Z
Mongodb replica set and oplog
514
null
[ "crud" ]
[ { "code": "db.createUser({\n user: \"mongoAdmin\", \n pwd: \"12345678\", \n roles: [\"readWrite\", \"dbAdmin\"]\n})\ndb.bios.insertMany([\n {\n \"_id\" : 1,\n \"name\" : {\n \"first\" : \"John\",\n \"last\" : \"Backus\"\n },\n \"birth\" : ISODate(\"1924-12-03T05:00:00Z\"),\n \"death\" : ISODate(\"2007-03-17T04:00:00Z\"),\n \"contribs\" : [\n \"Fortran\",\n \"ALGOL\",\n \"Backus-Naur Form\",\n \"FP\"\n ],\n \"awards\" : [\n {\n \"award\" : \"W.W. McDowell Award\",\n \"year\" : 1967,\n \"by\" : \"IEEE Computer Society\"\n },\n {\n \"award\" : \"National Medal of Science\",\n \"year\" : 1975,\n \"by\" : \"National Science Foundation\"\n },\n {\n \"award\" : \"Turing Award\",\n \"year\" : 1977,\n \"by\" : \"ACM\"\n },\n {\n \"award\" : \"Draper Prize\",\n \"year\" : 1993,\n \"by\" : \"National Academy of Engineering\"\n }\n ]\n }\n\n] );\n2023-02-17T17:47:37.991-0500 E QUERY [js] uncaught exception: WriteCommandError({\n\t\"ok\" : 0,\n\t\"errmsg\" : \"not authorized on bios to execute command { insert: \\\"bios\\\", ordered: true, lsid: { id: UUID(\\\"4daafa5f-1b95-460a-8c4b-a303ea05eb50\\\") }, $db: \\\"bios\\\" }\",\n\t\"code\" : 13,\n\t\"codeName\" : \"Unauthorized\"\n}) :\nWriteCommandError({\n\t\"ok\" : 0,\n\t\"errmsg\" : \"not authorized on bios to execute command { insert: \\\"bios\\\", ordered: true, lsid: { id: UUID(\\\"4daafa5f-1b95-460a-8c4b-a303ea05eb50\\\") }, $db: \\\"bios\\\" }\",\n\t\"code\" : 13,\n\t\"codeName\" : \"Unauthorized\"\n})\nWriteCommandError@src/mongo/shell/bulk_api.js:417:48\nexecuteBatch@src/mongo/shell/bulk_api.js:915:23\nBulk/this.execute@src/mongo/shell/bulk_api.js:1163:21\nDBCollection.prototype.insertMany@src/mongo/shell/crud_api.js:326:5\n@(shell):1:1\n\n", "text": "These are the commands I used. The issue is I can not properly insert any data. No sure if the issue is with user role or the construct of the insert.Thanks in advance.use biosGetting this ERROR", "username": "lindylex" }, { "code": "mongo -u \"mongoAdmin\" -p \"myPassword\" --authenticationDatabase \"admin\"\n\ndb.createUser({\n user: \"myUser\", \n pwd: \"anotherpassword\", \n roles: [\"readWrite\", \"bios\"]\n})\nquit()\nmongo -u \"myUser\" -p \"anotherpassword\" --authenticationDatabase \"bios\"\nuse bios\ndb.bios.insert({ x:1 })\ndb.bios.find()\n", "text": "] SOLVED SOLUTION [The tutorial I was following did not mention I needed to create a user “myUser” associate them with the database for example “bios” with a a password. Log out and then login to create the database and add collections to it.Step one login using admin user:Step two create the user for your desdired database:Step three logout :Step four login with the user.5: Create database and add data for testing.Hope this help others.Environment : Linux d64 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64 GNU/LinuxMongoDB version : 4.2.23Thanks", "username": "lindylex" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Insert Problems
2023-02-18T15:59:27.161Z
Insert Problems
945
null
[]
[ { "code": "", "text": "Hey thereI’m trying to build MongoDB from source because my CPU doesn’t support AVX.It went all well and it seems to be working, but I noticed when I connect with mongosh it says the version isUsing MongoDB: 7.0.0-alpha-174-g7c98e8bI am, not a fan of using alpha or beta versions when a stable version is avaliable.I am unsure which branch I should switch to build version 6.0.4 (stable as of now).Any help is appreciated.", "username": "daytona63" }, { "code": "", "text": "My best guess is:\nhttps://github.com/mongodb/mongo/commits/r6.0.4", "username": "chris" }, { "code": "", "text": "@chris much love, got it", "username": "daytona63" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Which branch is the stable release?
2023-02-18T06:16:27.676Z
Which branch is the stable release?
1,148
https://www.mongodb.com/…d11ec466ca00.png
[ "golang", "lebanon-mug" ]
[ { "code": " Meeting ID: 820 4969 7996\n Passcode: 222453\nBackend & System Engineer | Blockchain engineer | Project Manager | Mentor | Community builder ", "text": "\n_MongoDB posters960×540 58.3 KB\nThe MongoDB User Group in Lebanon is pleased to invite you to it’s second workshop in 2023. Let’s dive together in the world of system design & software engineering by building a complete Airline Reservation system like Amadeus powered by MongoDB and Golang.Event Type: OnlineLocation: Online via Zoom\nVideo Conferencing URLMeeting Credentials:Backend & System Engineer | Blockchain engineer | Project Manager | Mentor | Community builder \n1600×1200 400 KB\n", "username": "eliehannouch" }, { "code": "", "text": "After receiving many request from our amazing lebanese community to extend the registration time of our MongoDB & GoLang workshop.The MongoDB User Group Lebanon & Google Developer Groups - GDG North Lebanon extended the registration for the next couple of hours.New Date & Time: Monday, 20 Feb - 8:00pm (EET)\nLocation: Online Via ZoomRegister now !!, and invite your friends to save your free spots and to be eligble getting the certification after completing the PART 2.", "username": "eliehannouch" } ]
Lebanon MUG: Build & Design a complete Airline Reservation System with MongoDB/Golang PART 1
2023-01-27T06:20:57.686Z
Lebanon MUG: Build &amp; Design a complete Airline Reservation System with MongoDB/Golang PART 1
3,546
https://www.mongodb.com/…f83880b40bb0.png
[]
[ { "code": "", "text": "sir Ubuntu new version has released but i couldn’t fond any official doc for install MongoDB on Ubuntu 22.10 please make available MongoDB on Ubuntu 22>10\n\nScreenshot from 2023-02-16 22-18-23729×139 17.3 KB\n", "username": "Sarowar_Hosen" }, { "code": "", "text": "what is the latest you have used?it is not always feasible to compile for specific versions, so instead of one compiled for 22.10, I suggest using the one for 22.04. they shouldn’t differ too much in the libraries they use. after all, you can still use 10-20 years old things if libraries are compatible.But there might be a slight problem. I haven’t lately checked the installation page, but instructions for 22.04 was not on the page a month ago. If this is still the case, search this forum. we had discussions on it, and have solutions with how-to instructions.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "i hope mongodb make a supported guide for installing mongodb on ubuntu 22.10, just ubuntu new release and mongdb instalation is depreaceated its not done", "username": "Sarowar_Hosen" }, { "code": "", "text": "Welcome to the MongoDB community @Sarowar_Hosen!Official MongoDB packages only support Ubuntu LTS (Long Term Support) releases since interim releases like Ubuntu 22.10 have a much shorter support lifecycle from Canonical (9 months for an interim release versus 5 years for LTS releases). Interim and major O/S releases can introduce package or dependency changes that require further testing and validation. Ubuntu 22.10 will reach End of Life (EOL) in July 2023 which is 2 years before the July 2025 EOL date for MongoDB 6.0.Ubuntu 22.04 is supported for MongoDB 6.04+ (see Install MongoDB Community Edition on Ubuntu), but these packages are not validated with later releases like Ubuntu 22.10.If you prefer to use interim O/S releases rather than LTS, I would consider installing services like MongoDB in a container or virtualised environment with a supported O/S version.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Request mongodb for ubuntu 22.10
2023-02-16T16:38:25.655Z
Request mongodb for ubuntu 22.10
2,168
null
[ "queries" ]
[ { "code": "{\n _id:\n sender:\n receiver:\n message:\n createdAt:\n}\n", "text": "Hello,\nI am creating a project that requires a chat app and I used the following schemaBased on this schema, what would be the best way to get a list of latest conversations/chats. For more context, I want to have a sidebar that shows the list of chats like( (User_a (message) 2m ago). Sorry if I’m not framing the question well.", "username": "Oluwapelumi_Adegoke" }, { "code": "createdAt", "text": "Hello @Oluwapelumi_Adegoke\nWelcome to the community forum!!By looking at the schema design, I would recommend you to use aggregation and compare the createdAt time stamp value with the current time and sort the data, such that the list with the most recent chats will appear at the top.I tried to load a sample data and tried the following query which worked for my dataset:db.test.aggregate([{$addFields: {difference: {$subtract: [“$$NOW”, “$createdAt”]}}}, {$sort: {“difference”:-1}}])where you can add other validation as well to check is the receiver is same for all the messages.However, the date operators are available post MongoDB version 5.0 and would recommend to upgrade is you are using a lower version. MongoDB 5.0\nAlso, if the query does not seem to wok for you, could you provide a sample data for the above mentioned scheme so we could try to find the resultant for it.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Thank you. I was able to query all previous messages. However, the problem comes from sorting the latest message in a conversation. I can get all messages received or sent by user A but I don’t know how to sort it to give me the last message sent between User A and (User B, User C, User D) in order of time", "username": "Oluwapelumi_Adegoke" }, { "code": "const aggregation = [\n{ \n $match: {\n receiver: \"Robert Huff\"\n }\n},\n{\n $addFields: {\n duration: {$divide: [{$subtract: [\"$$NOW\", \"$createdAt\"]}, 360000]\n }\n }\n},\n{$project: {message: 1, duration: 1, _id: 0}}\n ];\ndb.test.aggregate(aggregation);\n", "text": "Hello @Oluwapelumi_Adegokelast message sent between User A and (User B, User C, User D) in order of timeRegarding the above sentence, could you please confirm that User A being the receiver while others ( User B , User C, User D) being the sender.\nBased on the above statement, the following query will sort the data based on the time difference calculated between the current time and createAt time stamp value.Please let me know if you have any questions with the above query.Also, if you could help me with a sample data which would help me understand the case in better way.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Regarding the above sentence, could you please confirm that User A being the receiver while others ( User B , User C, User D) being the sender.Thanks for your time. User A is not just the receiver, it could also be that User A sent a message. If User A sends user B a message then User B should be at the top of the query result. If User A then receives a message from user C, the query results should return User C, User B.I understand that I may not be communicating the problem well. The query is for a direct messages list. By sample data do you mean the message schema, example message documents or desired output? 
Sorry I’m still new to asking questions so thanks for your patience.", "username": "Oluwapelumi_Adegoke" }, { "code": "createdAt", "text": "Hello @Oluwapelumi_AdegokeApologies if the response has not been clear from our end. Let’s take a step back to understand more precisely.Could you help me with a few details:Thanks\nAasawari", "username": "Aasawari" }, { "code": " {\n \"sender\": \"senderA\",\n \"receiver\": \"senderB\",\n \"message\": \"Why would you say that?\",\n \"createdAt\": \"2022-03-13T03:13:38.740Z\"\n }\n {\n \"sender\": \"senderC\",\n \"receiver\": \"senderA\",\n \"message\": \"Don't know I was just testing something\",\n \"createdAt\": \"2022-03-13T03:14:51.502Z\"\n },\n {\n \"sender\": \"senderB\",\n \"receiver\": \"senderA\",\n \"message\": \"It's cool\",\n \"createdAt\": \"2022-03-13T04:08:11.178Z\"\n },\n", "text": "I apologize for the delay.1.Sample Documents: Sender A isAfter message 1: Sender A’s message list will have the message they sent to Sender B at the top of the list\nAfter message 2: Sender A’s message list will have the message they received from Sender C at the top\nAfter message 3: Sender A’s message list will have the message they recived from Sender B at the top.The special requirement is that the result should also include messages were the Sender A received message.I again apologize for the late response", "username": "Oluwapelumi_Adegoke" }, { "code": "const aggregation = [\n{ \n $match: {\n receiver: \"Robert Huff\"\n }\n},\n{\n $addFields: {\n duration: {$divide: [{$subtract: [\"$NOW\", \"$createdAt\"]}, 360000]\n }\n }\n},\n{$project: {message: 1, duration: 1, _id: 0}}\n ];\ndb.test.aggregate(aggregation);\n", "text": "Hi @Oluwapelumi_AdegokeThank you for sharing the sample document.To make the process of helping you efficiently, could you please help me in understanding, why the query below does not help?Also, if you could share your own aggregation query and an output that you have been expecting.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "If “Robert Huff” sent a message to John Doe, this query would not return the message sent at the top of the query. I’d use the aggregation you provided if I only wanted the received messages", "username": "Oluwapelumi_Adegoke" }, { "code": "db.test.aggregate([\n { \n $match: {\n $or: [\n {receiver: \"Robert Huff\"},\n {sender: \"Robert Huff\"}\n ]\n }\n },\n {\n $addFields: {\n duration: {\n $divide: \n [{\n $subtract: [\"$$NOW\", \"$createdAt\"]}, 36000]\n }\n }\n },\n {$sort: {duration: -1}},\n {\n $project: {\n message: 1, duration: 1, _id: 0}\n }\n ]);\n\n$match$addFields$sort$project", "text": "Hi @Oluwapelumi_AdegokeThe followingThis query comprise of three parts:\n$match with filter all the documents where sender and receiver are both “Robert Huff”\n$addFields will calculate the the duration based on the current time and createdAt timestamp value.\n$sort this will sort the most recent messages.\n$project will display only the messages and the duration calculated from the pipeline.This query is for only a sample data based on the above described schema.Please let us know if this aggregation query helps with the requirement.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "So it gives all the messages corresponding to Robert Huff. However, it should give only the latest message for each interaction. 
So if Robert Huff has correspondence with John Doe and Jane Doe, it should only return the latest message between (Robert Huff and Jane Doe) and (Robert Huff and John Doe).", "username": "Oluwapelumi_Adegoke" }, { "code": "", "text": "I am also stuck at that. For single user it is easy but for all it is bit complicated. i dont know why that lady from mongo teams left unanswered you", "username": "Saad_Tanveer" }, { "code": "", "text": "Hi , have you resolved this issue? if you did then could you pls share the query .", "username": "Fityst_chat" } ]
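For the still-open question above (the latest message per conversation partner, whether the user sent or received it), one common approach is to derive "the other participant" per message and group on it after sorting by time. A sketch in mongosh; the collection name `messages` and the literal user name follow the thread's examples and are placeholders.

```js
db.messages.aggregate([
  { $match: { $or: [ { sender: "Robert Huff" }, { receiver: "Robert Huff" } ] } },
  // Work out who the conversation partner is for each message.
  { $addFields: {
      partner: { $cond: [ { $eq: [ "$sender", "Robert Huff" ] }, "$receiver", "$sender" ] }
  } },
  // Newest first, then keep the first (most recent) message per partner.
  { $sort: { createdAt: -1 } },
  { $group: {
      _id: "$partner",
      lastMessage: { $first: "$message" },
      lastSender: { $first: "$sender" },
      lastAt: { $first: "$createdAt" }
  } },
  // Order the conversation list by recency.
  { $sort: { lastAt: -1 } }
]);
```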
Get latest DMs from chat collection
2022-03-31T20:16:24.122Z
Get latest DMs from chat collection
4,831
null
[]
[ { "code": "<iframe style=\"background: #FFFFFF;border: none;border-radius: 2px;box-shadow: 0 2px 10px 0 rgba(70, 76, 79, .2);\" width=\"640\" height=\"480\" src=\"https://charts.mongodb.com/charts-tola-park-sunshine-monito-ogkjo/embed/charts?id=73acecb0-1ada-4703-a1a6-5af360e865f9&maxDataAge=3600&theme=light&autoRefresh=true\"></iframe>\n", "text": "I’ve tried creating new iframe embedded links but same problem.\nAlso tried using the iframe in a local HTML page viewed from my hard disk in a browser but get same error.\nThis only started to fail this afternoon. I thought there might be a 6 month expiry on the embedded links or something.\nAny ideas as what to try? Here’s one of the links:", "username": "Tony_Walsh" }, { "code": "", "text": "Hi @Tony_Walsh -Sorry about that - we are doing an upgrade and there was a configuration issue. Thanks for pointing this out; we think we’ve fixed it now. Can you please confirm - the chart you referenced is now loading, although it isn’t showing any data, but maybe this is normal?Tom", "username": "tomhollander" }, { "code": "", "text": "Thanks Tom. It’s fixed Wasn’t expected a reply until Monday so thank-you again for the speedy remedy.", "username": "Tony_Walsh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
My previously working iframe embedded charts now show "Cannot retrieve data."
2023-02-19T00:32:13.294Z
My previously working iframe embedded charts now show "Cannot retrieve data."
989
https://www.mongodb.com/…05b0a63aa3d8.png
[ "queries" ]
[ { "code": "", "text": "Hi All,\nI want to filter documents with elements in the array that must satisfy the condition (using find()). I tried many ways, including $elemMatch, but still not correct.\nThanks for your advice.\nimage713×395 41.4 KB\n", "username": "Trung_Tran_The" }, { "code": "db.students.find(\n { grades: { $elemMatch: { $gte: 98 } } },\n {\n grades: {\n $filter: {\n input: \"$grades\",\n in: {\n $gte: [\"$$this\", 98]\n }\n }\n }\n }\n)\n", "text": "Hello @Trung_Tran_The, Welcome back to the MongoDB community developer forum,You need to filter it by $filter operator in projection,", "username": "turivishal" }, { "code": "db.<collectionName>.find( { < filter > }, { < project > } )db.students.find( \n { grades: { $elemMatch: { $gte: 98 } } }, \n { grades: { \n $filter: { \n input: \"$grades\",\n as: 'data',\n cond: { $gte: [\"$$data\", 98] } \n} } })\n", "text": "Thanks for your answer,I tried it but the result still exists _id: 2. It only removes element with value < 98 (=95), document with _id: 2 still exists.db.<collectionName>.find( { < filter > }, { < project > } )\nI want to find process in < filter >, not process in < project >My code:\nimage1030×415 29 KB\n", "username": "Trung_Tran_The" }, { "code": "$all$elemMatchdb.students.find({\n grades: {\n $elemMatch: {\n $all: [98]\n }\n }\n})\n", "text": "I tried it but the result still exists _id: 2. It only removes element with value < 98 (=95), document with _id: 2 still exists.What are you expecting? can you show the expected result, In your first post’s screenshot, you have strikeout the 95 in _id: 3, so I thought you want values that are greater than or equal to 98.Do you mean all the elements in grades should be greater than or equal to 98? then try $all operator inside $elemMatch to check for all elements,", "username": "turivishal" }, { "code": "", "text": "I tried it but the result still exists _id: 2. It only removes element with value < 98 (=95) , document with _id: 2 still exists .Sorry bro, I mean _id: 3.\nIf all elements are >= 98 then the current document will be retrieved. Otherwise, no", "username": "Trung_Tran_The" }, { "code": "$not$ltgrades.gradedb.students.find({\n grades: {\n $not: {\n $lt: 98\n }\n },\n \"grades.grade\": {\n $exists: false\n }\n})\n", "text": "You can try this, using $not with $lt operator,\nBut this will return grades.grade array of objects documents, so I have added a condition for that to don’t return.I will update you if there are any other possibilities to achieve this.", "username": "turivishal" }, { "code": "", "text": "This is a nice solution, thanks bro!", "username": "Trung_Tran_The" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to filter all elements in array that must satisfy the condition (using find())
2023-02-19T03:22:27.496Z
How to filter all elements in array that must satisfy the condition (using find())
3,335
null
[ "swift" ]
[ { "code": "let realm = try! Realm()\nclass ProjectServiceFloor: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var guid = UUID().uuidString\n @Persisted var projectService: ProjectService!\n @Persisted var floor: Floor!\n \n ...\n \n func createInspectionPoint(of ipType: InspectionPointType) {\n let realm = try! Realm()\n let type = realm.object(ofType: InspectionPointType.self, forPrimaryKey: ipType.id)!\n try! realm.write {\n let newIp = InspectionPoint(type: type, floor: self.floor)\n let newPip = ProjectInspectionPoint(ip: newIp, project: self.projectService.project)\n let newPsip = ProjectServiceInspectionPoint(psf: self, pip: newPip)\n realm.add(newPsip) // Error thrown here\n }\n }\n", "text": "First off, I love Realm for SwiftUI and it has some amazing, efficient features, which only seem to be expanding as new versions are released. Amazing work!Having said that, I am getting increasingly exasperated with thrown errors regarding realm instances. Most prominently, I am running into numerous frustrating instances of “Object is already managed by another Realm. Use create instead to copy it into this Realm.” I have also encountered an exception similar to “Objects must be from the realm being queried”.These exceptions seem to imply that there are other realm databases in my project, which is not the case. I use a single, local realm. No other realm is every being queried, written to, or read from. My datamodel has many connected relations (I am importing the data from an existing SQL Server) and I usually ‘reach’ objects or lists of objects through following those relations and filtering them through a query where necessary. If I need to explicitly query the realm database, for example with a primary key, I exclusively use:Nevertheless, I seem to run into objects that are seemingly managed by other realms, as indicated by the exception(s) I mentioned above.I suspect that this is, in fact, not another realm database, but either the same instance on a different thread, or another instance? (Not really sure what an instance means in this context) Or maybe a combination of the two, or maybe something else entirely? Either way, the exception is misleading, and on top of that there is very little (if any) documentation on this scenario.So my feedback would be to either clarify these exceptions and/or add a section in the documents; unless, of course, I’m wrong and there are in fact multiple realm databases that I’ve created inadvertently, in which case the document could also do with a little explanation on how to avoid this. In other words: I am fully ready and willing to accept that these errors are the result of my own mistakes or misuse of the framework, but the documentation is unable to either help me avoid these mistakes, or rectify them.For my specific use case, without going into too much detail, the exception is raised at the last line of the write block. The object is created literally right before that. The object has not been added, or created, to any instance of any realm anywhere, and the initializer only assigns the parameters to properties (for now). So how, where and why is this object possibly managed by another realm?Note that if the code doesn’t seem very optimal, it’s because this is just one of many iterations of debugging, trial-and-error and just generally metaphorically bashing my head in over this exception. 
(And also this data needs to be send back to an SQL Server that expects all these objects to be instantiated).Also, I specifically query for the type because this function is called from a UI thread, which I believe works with a separate instance? Please correct me if this is not the case.", "username": "Boris_Versluis" }, { "code": "func createFieldValue(for field: ipField) -> FieldValue {\n let fieldValue = FieldValue(point: self.point, service: self.service, field: ipField)\n \n let realm = try! Realm()\n try! realm.write {\n realm.add(fieldValue)\n // Below throws error \"Cannot modify managed RLMArray outside of a write transaction.\"\n self.fieldValues.append(ipfv) \n }\n\n return fieldValue\n}\n", "text": "Here’s another inexplicability:I am literally inside a write transaction? Again, I’m sure I’m using stuff wrong but these exception messages are not helpful (to say the least).", "username": "Boris_Versluis" }, { "code": "let realm = try! Realm()Realm.ConfigurationDispatchQueue(label: \"background\").async {\n autoreleasepool {\n let realm = try! Realm()\nlet newPsip = ProjectServiceInspectionPoint(psf: self, pip: newPip)\nrealm.add(newPsip)\n", "text": "I understand your frustration and feel we can help.The error is not indicating you have different Realm files per se - at a high level, it’s indicating an object created on one Realm instance/thread is trying to be modified on a different Realm instance/thread.You can think of it as an object on one thread is trying to be modified by code on a different thread.The main restriction is that you can only use an object on the thread which it was created.There are ways around that but it’s essential to the core functionality of Realm.The code in your question is a bit limited - and there could be a number of causes for the error.So, first question: Is this this only way you ever open a Realm?let realm = try! Realm()e.g. do you ever use a realm config Realm.Configuration anywhere in your code?The second question: Do you ever use background tasks? Often times when writing or reading a large amount of data you may want to do that in an autoreleasepool on a background thread to keep the UI fluid. It may look something like thisLast Question:How was self (the ProjectServiceFloor object) instantiated? Where did it come from? How is it being used?", "username": "Jay" }, { "code": "let realm = try! Realm()Realm.ConfigurationRealm?enum RealmMigrator {\n static private func migrationBlock(migration: Migration, oldSchemaVersion: UInt64) { \n // Not implemented yet\n }\n\n static func setDefaultConfiguration() {\n let config = Realm.Configuration(\n schemaVersion: 1,\n migrationBlock: migrationBlock,\n deleteRealmIfMigrationNeeded: true\n )\n Realm.Configuration.defaultConfiguration = config\n }\n}\nTaskstruct ProjectRequest {\n \n func store<T: ResponseRow>(_ objects: [T]) {\n let realm = try! Realm()\n debugPrint(\"Storing \\(String(describing: T.self))\")\n try! realm.write {\n for row in objects {\n row.validate()\n realm.add(row, update: .modified)\n }\n }\n }\n \n ...\n \n func doRequest() async throws {\n let response = try await NetworkService.shared.api.sendRequest(request: self)\n try await processResponse(response: response)\n }\n\n func processResponse(response: ProjectResponse) async throws {\n // Somewhat simplified\n try await store(response.projectServices)\n try await store(response.projectInspectionPoints)\n ...\n debugPrint(\"Response parsed\")\n \n let realm = try! await Realm()\n try! 
realm.write {\n for ps in realm.objects(ProjectService.self) {\n for floor in ps.floors {\n let psf = ProjectServiceFloor(ps: ps, floor: floor)\n ream.add(psf, update: .modified)\n \n for pip in ps.getPips(for: floor) {\n let psip = ProjectServiceInspectionPoint(psf: psf, pip: pip)\n realm.add(psip, update: .modified)\n }\n }\n }\n }\n }\n}\n\n", "text": "So, first question: Is this this only way you ever open a Realm?let realm = try! Realm()e.g. do you ever use a realm config Realm.Configuration anywhere in your code?Yes, I only ever use this way. I have tried in the past to use the Realm? properties attached to objects but found random errors so I figured I needed to stick to one way of accessing. Possibly those errors are similar to my current problem?As for the configuration, I do use a default configuration as seen below. The static function is called in the SceneDelegate:scene method during startup.I think I got this snippet from an example or tutorial somewhere, back when I was starting out with this project. I haven’t really looked at it since; because this project is not in production yet and the object schema is changing throughout development still, I haven’t felt a need to implement a migration block yet.The second question: Do you ever use background tasks?Yes, all of the data is downloaded from an API and decoded into realm objects. This is done with an async function which is called from the UI using Task.The ProjectServiceFloor objects (as well as all existing ProjectServiceInspectionPoint objects) are not downloaded, but created after the downloaded data is decoded and added to the realm. All of this is happening in the aforementioned Task. Below is a simplified version of the request object, but really it’s just a matter of downloading and storing the data, then creating and storing these secondary objects.Does this mean that they are added into a different ‘instance’ of Realm because it is done from an async function? I know that realm objects are thread-confined, but I thought that after this background task is closed, the database would go back to a single source of truth? Admittedly I don’t know much about threading in Swift yet (I know this is outside the scope of this forum but if you know of any guides out there to give a bit more detail into how/when threads are created it would be appreciated).Should also mention that I’m very impressed by Realm as a whole, so I don’t want to give off the vibe that I’m only complaining. When it works it’s truly amazing.", "username": "Boris_Versluis" }, { "code": "struct AddInspectionPointPopover: View {\n @ObservedRealmObject var psf: ProjectServiceFloor\n @ObservedResults(InspectionPointType.self) var types\n @State var selection: InspectionPointType?\n \n ...\n \n func createIp() {\n guard let selection = selection else {\n return\n }\n \n psf.createInspectionPoint(of: selection)\n dismiss()", "text": "How was self (the ProjectServiceFloor object) instantiated? Where did it come from? How is it being used?As for your last question, the createInspectionPoint method is called from a view:", "username": "Boris_Versluis" }, { "code": "let realm = try! await Realm()try awaitfunc processResponse(response: ProjectResponse) async throws { <- asynch\n let realm = try! 
await Realm() <-asynch\n", "text": "It’s challenging to get a good overview of the project flow but I would guess that the objects are being created one one thread but then accessed on a different thread.I know you posted simplified code but be sure to encapsulate your try await in a do: catch block so errors are trapped.Also, this may not needed let realm = try! await Realm(); await is generally for queries where you’re awaiting the data to be returned or long write transactions. Generally speaking, if this is a local Realm, you will rarely have to ‘wait’ for data. On the other hand - these kinds of functions are critical when working with MongoDB Realm Sync (data sync’d from the server).I am going to take a guess here but it seems like you’re using a number of asynchronous calls, background threads and try await syntax where much of that may not be needed - and may be the core cause of the issues as objects are on different threads at different times. In other words you may have asynchronous calls within asynchronous calls; this is an exampleBut again, without (a lot) more context, it’s hard to say so all of the above is a guess.", "username": "Jay" }, { "code": "", "text": "I’m also having this issue, and it’s also very opaque and frustrating. I’ll post my code in a new thread, but it appears that @ObservedResults is coming in on a different thread, which doesn’t make sense.", "username": "Greg_Lee" } ]
Object already managed by another realm
2022-03-16T12:54:38.211Z
Object already managed by another realm
4,676
null
[ "queries", "node-js" ]
[ { "code": "", "text": "Hi, I’m new using mongo , and I’m in need for helpthe result of the insertOne:\n{\nacknowledged: true,*\ninsertedId: new ObjectId(“63f15abc4a9576b16ec5649c”)*\n}\nI do have the collection created on my databasewhat am I missing?\nThere is no error shown, how can I understand what is happening?", "username": "Stephanie_G" }, { "code": "", "text": "Hello @Stephanie_G, Welcome to the MongoDB community forum,Possibility of incorrect database name or collection name, can you show the screenshot of your code and result in sequence,Where you are executing these commands? can you please try connecting the mongo shell and execute these commands?", "username": "turivishal" } ]
InsertOne result is acknowledged true and an inserted Id but nothing is persisted on the collection
2023-02-19T02:11:58.039Z
InsertOne result is acknowledged true and an inserted Id but nothing is persisted on the collection
719