Dataset columns: image_url (string, lengths 113-131), tags (sequence), discussion (list), title (string, lengths 8-254), created_at (string, length 24), fancy_title (string, lengths 8-396), views (int64, range 73 to 422k)
null
[]
[ { "code": "", "text": "When will the MongoDB Developer Associate exam be updated to MongoDB version 6.0?", "username": "Tarek_Hammami" }, { "code": "", "text": "Hi @Tarek_Hammami,Welcome to the MongoDB Communion forums MongoDB has instituted a regular review cadence for the training and certification content.\nProduct changes that impact the training and certification content are implemented in real-time, eliminating the need to associate the content with a particular release.I hope it answers your questions. If you have any follow-up questions feel free to reach out to us!Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hello,\nDoes this mean that if I sit for the exam and pass it, the certification I get will be on MongoDB version 6.0?", "username": "Tarek_Hammami" }, { "code": "", "text": "Hello @Tarek_Hammami,I sit for the exam and pass it, the certification I get will be on MongoDBUpon passing, you will become a certified MongoDB professional. We no longer tie the certification to a specific product version.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "@Kushagra_Kesav Thank you so much for your answer. ", "username": "Tarek_Hammami" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Exam update to latest MongoDB version
2023-03-19T23:39:48.728Z
Exam update to latest MongoDB version
977
null
[ "react-native" ]
[ { "code": "IntroTextTaskListFlatList<FlatList\n data={tasks}\n keyExtractor={task => task._id.toString()}\n renderItem={({ item }) => (\n <TaskItem\n task={item}\n onToggleStatus={() => onToggleTaskStatus(item)}\n onDelete={() => onDeleteTask(item)}\n // Don't spread the Realm item as such: {...item}\n />\n )}\n/>\nArray.isArray(tasks)<FlatList\n data={[...tasks]}\n...\n/>\ntasks.toString()[Function bound value]", "text": "I just thought I would share a couple pieces of information about the Expo template, since I know myself and others have encountered some difficulties with it.There was a problem where the template was installing some incompatible Realm/Expo versions. Since Expo 48 was officially released, I decided to try the template again and bump everything (realm, expo and friends, @realm/react) to latest, and it worked. Previously this had failed for me (using Expo 48 beta). I don’t know if the template was updated and that played a role, but either way good news!When I started the template app, everything looks like it should, but once you enter a todo, the IntroText disappears but no to-dos show up. I seem to recall this happening in previous attempts to use the template but can’t say for sure. I dug around a bit and the problem is in the TaskList component, it renders a FlatList as such:But the FlatList isn’t even receiving elements to render. If you run Array.isArray(tasks) it’s false, I’m not sure what tasks is, but it’s not an array. I just spread it into an array, egand that fixed it. If you call tasks.toString() it returns [Function bound value], not sure what that indicates.Thought I would share in case it helps the maintainers and other people trying to figure out how to use Realm.Cheers,\nBrian", "username": "Brian_Luther" }, { "code": "", "text": "Note: the rendering problem I describe is with the nonsync version, although I’m not sure if that effects it.", "username": "Brian_Luther" }, { "code": "useQueryuseQueryFlatList", "text": "Update: The same problem with FlatList rendering happens in a sync version as well, so that wasn’t the problem. The useQuery documentation says that the value returned from useQuery can be passed directly to a FlatList’s data prop but this seems to be broken. Easy to fix by spreading it into an array, just trying to point out some sticking points.", "username": "Brian_Luther" }, { "code": "package.json", "text": "@Brian_Luther , are you able to run the project with Expo Go? Could you possibly share a repo or a package.json?PS: Thank you for going out of your way to inform us of your success with this big problem.", "username": "Damian_Danev" }, { "code": "0.71.00.71.40.71.4package.json0.71.30.71.4", "text": "@Brian_Luther There was a restriction on FlatList introduced in React Native 0.71.0. We have brought this to the attention of Meta and they have made FlatList Realm compatible in 0.71.4. You should be able to upgrade the RN dependency to 0.71.4 in your package.json, as the upgrade path from 0.71.3 to 0.71.4 does not require any changes to native code.@Damian_Danev Realm is not compatible with Expo Go, as it is not included in the base SDK of Expo. One has to compile their own dev-client (similar to a custom Expo Go) either locally or through EAS.", "username": "Andrew_Meyer" }, { "code": "", "text": "@Andrew_Meyer , do you think there is a chance the Expo team will include Realm in the base SDK of Expo any time soon or at all?", "username": "Damian_Danev" }, { "code": "react-native-reanimated", "text": "@Damian_Danev They will not. 
They have even removed react-native-reanimated from the base SDK. My understanding is that they are pushing towards the dev-client if you need any third party library. They are also giving library authors, like us, the ability to provide configurations that can patch native parts, so that expo users can still have a much easier experience using React Native.\nIt makes sense, as each app has their own set of needs when it comes to what libraries they use. Any library that is adding native code to the app will increase the amount of disc space, so you would want to keep this light anyway.", "username": "Andrew_Meyer" }, { "code": "package.json{\n \"name\": \"gv2-r\",\n \"main\": \"index.js\",\n \"version\": \"1.0.0\",\n \"scripts\": {\n \"start\": \"expo start --dev-client\",\n \"android\": \"expo run:android\",\n \"ios\": \"expo run:ios\"\n },\n \"dependencies\": {\n \"@expo/vector-icons\": \"^13.0.0\",\n \"@react-native-community/masked-view\": \"^0.1.11\",\n \"@realm/react\": \"^0.4.3\",\n \"expo\": \"~48.0.6\",\n \"expo-constants\": \"~14.2.1\",\n \"expo-dev-client\": \"~2.1.5\",\n \"expo-linking\": \"~4.0.1\",\n \"expo-router\": \"^1.2.2\",\n \"expo-splash-screen\": \"~0.18.1\",\n \"expo-status-bar\": \"~1.4.4\",\n \"luxon\": \"^3.3.0\",\n \"react\": \"18.2.0\",\n \"react-native\": \"0.71.3\",\n \"react-native-gesture-handler\": \"~2.9.0\",\n \"react-native-get-random-values\": \"~1.8.0\",\n \"react-native-reanimated\": \"~2.14.4\",\n \"react-native-safe-area-context\": \"4.5.0\",\n \"react-native-screens\": \"~3.20.0\",\n \"realm\": \"^11.5.2\"\n },\n \"devDependencies\": {\n \"@babel/core\": \"^7.12.9\",\n \"@babel/plugin-proposal-decorators\": \"^7.19.0\",\n \"@realm/babel-plugin\": \"^0.1.1\",\n \"@types/react\": \"~18.0.14\",\n \"@types/react-native\": \"~0.70.6\",\n \"typescript\": \"^4.9.4\"\n },\n \"private\": true\n}\n", "text": "I haven’t used Expo Go yet, it should be possible to do so using a development build (see Create development builds - Expo Documentation under “On a device”) but I haven’t tried yet, I’ve been using the iOS simulator. Here’s a copy of my package.json:", "username": "Brian_Luther" }, { "code": "0.71.4npx expo install --fix0.71.30.71.4", "text": "That explains it, thanks. I was able to upgrade to react-native to 0.71.4 without issue so far. Running npx expo install --fix will downgrade back to 0.71.3 but I’m assuming that’s because something doesn’t official support 0.71.4?", "username": "Brian_Luther" }, { "code": "0.71.4", "text": "I’m assuming that’s because something doesn’t official support 0.71.4 ?@Brian_Luther Expo usually releases at the most current React Native at the time of the release. It’s possible that they haven’t seen it as a priority to update this internally, but there isn’t anything in this update that would be incompatible with Expo. I’ll ping the Expo team to make an update to the Expo SDK to support this patch release.", "username": "Andrew_Meyer" }, { "code": "", "text": "Expo says they will soon do a release and have a PR for this to be officially updated.", "username": "Andrew_Meyer" }, { "code": "", "text": "Nice, that was fast, thanks Andrew", "username": "Brian_Luther" } ]
React Native Expo template: problems and solutions
2023-03-18T00:34:07.761Z
React Native Expo template: problems and solutions
1,566
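Editor's note: a minimal sketch of the workaround discussed in this thread, assuming the `Task` Realm object class from the Expo template and the `useQuery` hook from `@realm/react`; field names and the import path are placeholders. The spread into a plain array is only needed on React Native 0.71.0 through 0.71.3, where FlatList rejected Realm's array-like results; on 0.71.4 and later, `data={tasks}` works directly.

```jsx
import React from 'react';
import {FlatList, Text} from 'react-native';
import {useQuery} from '@realm/react';
import {Task} from './models/Task'; // hypothetical Realm object class

export function TaskList() {
  const tasks = useQuery(Task);

  return (
    <FlatList
      // Workaround for RN 0.71.0-0.71.3: copy the Realm results into a plain
      // array so FlatList accepts them. On RN >= 0.71.4 `data={tasks}` works as-is.
      data={[...tasks]}
      keyExtractor={task => task._id.toString()}
      renderItem={({item}) => <Text>{item.description}</Text>}
    />
  );
}
```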
null
[ "replication", "monitoring" ]
[ { "code": "rs.printSecondaryReplicationInfo(); source: d-mipmdb-cfg-02:27019\n\tsyncedTo: Sat Apr 02 2022 20:01:01 GMT+0200 (W. Europe Daylight Time)\n\t0 secs (0 hrs) behind the primary \nsource: d-mipmdb-cfg-03:27019\n\tsyncedTo: Sat Apr 02 2022 20:00:58 GMT+0200 (W. Europe Daylight Time)\n\t3 secs (0 hrs) behind the primary \nrs.printSecondaryReplicationInfo()rs.status()rs.status()const members = rs.status().members\nconst primary = members.filter(x=> x.stateStr == \"PRIMARY\").shift().optimeDate;\nmembers.filter(x=> x.stateStr == \"SECONDARY\").map(x=> {return `${x.name} is ${(primary - x.optimeDate)/1000} Seconds behind the primary`});\n\n[\n\t\"d-mipmdb-cfg-02:27019 is 2 Seconds behind the primary\",\n\t\"d-mipmdb-cfg-03:27019 is 3 Seconds behind the primary\"\n]\n", "text": "When you run rs.printSecondaryReplicationInfo(); then the output is like this:The documentation says “Use rs.printSecondaryReplicationInfo() for manual inspection, and rs.status() in scripts.” but it does not tell me directly where to see this information.I guess with rs.status() you get the same information with a script like this:Is this correct or do I need to run something else?Kind Regards\nWernfried", "username": "Wernfried_Domscheit" }, { "code": "rs.status()mongoshmongodb-js/mongosh\n \n @apiVersions([1])\n async printShardingStatus(verbose = false): Promise<CommandResult> {\n this._emitDatabaseApiCall('printShardingStatus', { verbose });\n const result = await getPrintableShardStatus(await getConfigDB(this), verbose);\n return new CommandResult('StatsResult', result);\n }\n \n @returnsPromise\n @topologies([Topologies.ReplSet])\n @apiVersions([])\n async printSecondaryReplicationInfo(): Promise<CommandResult> {\n let startOptimeDate = null;\n const local = this.getSiblingDB('local');\n \n if (await local.getCollection('system.replset').countDocuments({}) !== 0) {\n const status = await this._runAdminCommand({ 'replSetGetStatus': 1 });\n // get primary\n let primary = null;\n for (const member of status.members) {\n if (member.state === 1) {\n primary = member;\n \n mongomongodb/mongo/src/mongo/shellmongo\n \n print(\"oplog last event time: \" + result.tLast);\n print(\"now: \" + result.now);\n };\n \n DB.prototype.printSlaveReplicationInfo = function() {\n print(\n \"WARNING: printSlaveReplicationInfo is deprecated and may be removed in the next major release. 
Please use printSecondaryReplicationInfo instead.\");\n this.printSecondaryReplicationInfo();\n };\n \n DB.prototype.printSecondaryReplicationInfo = function() {\n var startOptimeDate = null;\n var primary = null;\n \n function getReplLag(st) {\n assert(startOptimeDate, \"how could this be null (getReplLag startOptimeDate)\");\n print(\"\\tsyncedTo: \" + st.toString());\n var ago = (startOptimeDate - st) / 1000;\n var hrs = Math.round(ago / 36) / 100;\n var suffix = \"\";\n if (primary) {\n \n ", "text": "Hi @Wernfried_Domscheit,MongoDB shell helpers derive replication lag from rs.status(), so your approach looks correct.The implementations in the new and legacy MongoDB shells may be helpful references, although they have additional logic and error handling to try to cover broader usage scenarios than your example.The new MongoDB shell uses the MongoDB Node.js driver and TypeScript:The legacy mongo shell uses an embedded JavaScript engine:Regards,\nStennie", "username": "Stennie_X" }, { "code": "members.\nfilter(x=> x.stateStr == \"SECONDARY\").\nmap(x=> {\n return `${x.name} is ${(primary - x.optimeDate)/1000} Seconds behind the primary`\n});\n\"(not reachable/healthy)members.\nfilter(x=> x.stateStr != \"PRIMARY\").\nmap(x=> {\n return `${x.name} is ${(primary - x.optimeDate)/1000} Seconds behind the primary`\n});\n", "text": "Just a small remark - you won’t see unhealthy hosts with that query, for example in \"(not reachable/healthy) state. To fix it, just change this string to", "username": "Denis_Iusupov" } ]
printSecondaryReplicationInfo as JSON output
2022-04-02T22:08:34.446Z
printSecondaryReplicationInfo as JSON output
3,178
null
[]
[ { "code": "", "text": "while monitoring server observed high memory and cpu utilization.i am not able sort the issue can anybody help with this. hardware config is 32gb(RAM) and 630gb(storage)", "username": "vani_M_R" }, { "code": "", "text": "We are going to need more information to be able to start and help.\nWhat is considered “high memory”?\nWhat is the OS and Version?\nWhat is the MongoDB Version?\nWhat is the MongoDB Deployment Architecture? For example replica, sharded, standalone. How many MongoDB processes are running on a server", "username": "tapiocaPENGUIN" }, { "code": "", "text": "memory utilization by mongo is crossing 50%\nos= ubuntu 20.04\nmongodb version=5.0\nstandalone server\nsingle mongo process", "username": "vani_M_R" }, { "code": "", "text": "le sort the issue can anybody help with this. hardware config is 32gb(RAM) and 630gb(storage)In my experience, execution of sort depends on data size to be sorted, indexes, and query design too. Index Details and Query would help to answer your questions in better way.Use of proper indexing can make this task easy.", "username": "Monika_Shah" }, { "code": "0.5 * (4 GB - 1 GB) = 1.5 GB0.5 * (1.25 GB - 1 GB) = 128 MB < 256 MB", "text": "This is pretty expected, with the WireTiger Cache taking 50% memory by default.With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.Starting in MongoDB 3.4, the default WiredTiger internal cache size is the larger of either:For example, on a system with a total of 4GB of RAM the WiredTiger cache will use 1.5GB of RAM (0.5 * (4 GB - 1 GB) = 1.5 GB). Conversely, a system with a total of 1.25 GB of RAM will allocate 256 MB to the WiredTiger cache because that is more than half of the total RAM minus one gigabyte (0.5 * (1.25 GB - 1 GB) = 128 MB < 256 MB).", "username": "tapiocaPENGUIN" }, { "code": "", "text": "solution for that? 
, what action should be taken to minimize the memory utilization", "username": "vani_M_R" }, { "code": "topfree -mdb.serverStatus().mem\n\nvar mem = db.serverStatus().tcmalloc;\n\nmem.tcmalloc.formattedString\n", "text": "If you are not running high cost aggregate operation, then sometimes its due to wiredTiger cache.To start with, can you post output of below commands:top\nPress SHIFT + m to sort the output by memory utilization.\nPost the output of first few lines.free -mOn Mongo Shell:Please post the output from the above commands to investigate further.", "username": "Abdullah_Madani" }, { "code": "", "text": "For that, few more information required:db.coll.getIndexes()\ndb.coll.stats().indexSizes\ndb.coll.stats().storageSize\ndb.coll.stats().count", "username": "Monika_Shah" }, { "code": "mem.tcmalloc.formattedString", "text": "mem.tcmalloc.formattedString\nimage940×189 29.7 KB\n\n\nimage940×101 10.1 KB\n\n\nimage940×642 142 KB\n", "username": "vani_M_R" }, { "code": "cacheSizeGBdb.serverStatus().wiredTiger.cache\n\n'bytes currently in the cache': Long(\"13028203994\"),\n'maximum bytes configured': Long(\"16299065344\"), <<-- cacheSizeGB\ncacheSizeGB'bytes currently in the cache': Long(\"13028203994\"),\n'maximum bytes configured': Long(\"16299065344\"), <<-- cacheSizeGB\n", "text": "@vani_M_RSummary of the output:Total memory used by Mongo DB is 18229.2 MiB ≈ 18GB (55% of Total RAM), out of which 5.6 GB (30% of the MongoDB consumed memory) is in the free list (part of memory which is currently empty but held by MongoDB so that it can be immediately assigned to store data whenever required for upcoming requests).In other words, Free list is a way of enhancing performance by keeping some memory in hand so that it can be quickly assigned to new requests, instead of requesting OS each time. This enhances the overall execution by saving time and compute for allocation and deallocation of memory pages.So basically your MongoDB is actually consuming 11950 MB ≈ 11.6 GB ≈ 37% of your total RAMLet me give a brief on MongoDB memory management. By Default, the MongoDB uses Google tcmalloc as its memory allocator. The majority of memory gets allocated to wiredTiger (Default Storage Engine from MongoDB 3.2) for data processing. Now, to control the upper limit of the memory that should be used by the WiredTiger engine, MongoDB uses a parameter cacheSizeGB which is set to about 60% of the system’s usable memory by default. This is to prevent situations like system crash due MongoDB running out of Full System Memory to satisfy a given workload.You can check your current ‘cacheSizeGB’ settings through the below command:My Test Env. with 32GB RAMSo once the memory consumption reaches to a threshold (80% to 90% of cacheSizeGB ), wiredTiger begins to start eliminations of CLEAN and DIRTY PAGES. Cutting long story short, it is normal to have upto 60% of RAM consumed by Mongod process, if the MongoDB is the only Application running on the server.For the deployments with heavy workloads, you need to monitorThere is no major problem if MongoDB cache utilization is generally “0.8 * cacheSizeGB” and under or if it occasionally exceeds. BUT IF the increase persists for a considerable amount of time, then that means the memory elimination pressure is high, and you should really consider looking for top resource consuming user connections / queries to tune or increase memory or consider having faster SSDs to enhance SWAP IO throughput.", "username": "Abdullah_Madani" } ]
High memory utilization by mongodb
2023-03-17T12:10:10.770Z
High memory utilization by mongodb
2,827
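Editor's note: a small mongosh sketch pulling together the checks described in this thread, comparing the configured WiredTiger cache ceiling with current usage; the 80% figure is the rule of thumb mentioned in the reply, not a server setting.

```javascript
// Run in mongosh against the instance being investigated.
const cache = db.serverStatus().wiredTiger.cache;

const maxBytes = Number(cache['maximum bytes configured']);   // effective cacheSizeGB
const usedBytes = Number(cache['bytes currently in the cache']);
const dirtyBytes = Number(cache['tracked dirty bytes in the cache']);

print(`cache ceiling : ${(maxBytes / 1024 ** 3).toFixed(1)} GB`);
print(`cache used    : ${(usedBytes / 1024 ** 3).toFixed(1)} GB (${((usedBytes / maxBytes) * 100).toFixed(0)}%)`);
print(`dirty in cache: ${(dirtyBytes / 1024 ** 2).toFixed(0)} MB`);

// Rule of thumb from the discussion: sustained usage above ~80% of the ceiling
// suggests eviction pressure worth investigating further.
if (usedBytes > 0.8 * maxBytes) {
  print('Cache usage is above 80% of the configured ceiling.');
}
```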
https://www.mongodb.com/…_2_1024x236.jpeg
[ "mongodb-shell" ]
[ { "code": "", "text": "\nScreenshot 2023-03-20 at 2.50.45 PM1920×444 41.4 KB\n\nHi I am using Documentdb:4.0 (Mongodb Compatiblity)and Using Mongosh: 1.8.0. I was using admin command to enable change stream on a database/collection . I am getting the error as shown below:\nMongoServerError: Feature not supported: modifyChangeStreamsI am using db command :db.adminCommand({ modifyChangeStreams: 1, database: “cache_reservas”, collection: “”, enable: true });", "username": "Kunal_Shanbagh" }, { "code": "", "text": "So mongo employees also have to use this forum to ask questions?", "username": "Kobe_W" }, { "code": "mongosh", "text": "@Kunal_Shanbagh this seems to be a DocumentDB-specific problem. In general, mongosh and the other tools are not tested with DocumentDB so given that it is well known that it is far from being fully compatible with MongoDB it’s not surprising that some functionality breaks.In this case, however, the error is a server error, so it might be DocumentDB that in fact does not like the command you are sending it.", "username": "Massimiliano_Marcon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Not able to enable change streams on DocumentDB 4.0 (MongoDB compatibility) for a database and collection
2023-03-20T19:53:24.811Z
Not able to enable change streams on DocumentDB 4.0 (MongoDB compatibility) for a database and collection
796
https://www.mongodb.com/…1_2_1024x586.png
[]
[ { "code": "", "text": "\nimage1647×943 61.2 KB\n", "username": "Hung_Viet" }, { "code": "userData.save().then(err => {\n ...\n})\n", "text": "Hello @Hung_Viet,A little information would be great to debug your problem, your screenshot is not even opening on click,I guess, It looks like you are using mongoose npm and they have removed the callback function in many methods in the latest mongoose version 7,Read the migrating guide,\nhttps://mongoosejs.com/docs/migrating_to_7.html#dropped-callback-supportYou need to use .then() instead of the callback function,", "username": "turivishal" } ]
Can someone explain to me why this error occurs?
2023-03-21T10:17:48.343Z
Can someone explain to me why this error occurs?
360
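Editor's note: since the screenshot is unreadable, here is a hedged sketch of the migration the reply describes, assuming a hypothetical Mongoose model named `User`. Mongoose 7 throws when a callback is passed to `save()`, `find()` and similar methods, so the call has to use the returned promise instead.

```javascript
const mongoose = require('mongoose');

const User = mongoose.model('User', new mongoose.Schema({ name: String }));

async function createUser(name) {
  // Mongoose 7: `doc.save(callback)` throws; use the returned promise instead.
  const userData = new User({ name });
  try {
    const saved = await userData.save();
    console.log('saved', saved._id);
  } catch (err) {
    console.error('save failed', err);
  }
}

// Equivalent promise-chain form, as suggested in the reply:
// userData.save().then(saved => { /* ... */ }).catch(err => { /* ... */ });
```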
null
[ "node-js" ]
[ { "code": "", "text": "Reading large amounts of data from a collection using FindCursor in batches of 10k, and everything works really well. My problem is that I don’t know how many documents there will be in total, and the cursor size (https://www.mongodb.com/docs/manual/reference/method/cursor.size/) feature does not seem to be available in the node-js driver. I can do two separate queries, one to count, but then there is no guarantee that the count is correct.I would expect that the cursor size is known, and therefore would solve my issue.", "username": "Fredrik_Fager1" }, { "code": "QueryingCounting const Result = await cursor.toArray();\n console.log(\"Count: \" + Result.length);\n", "text": "Hello @Fredrik_Fager1 ,Welcome to The MongoDB Community Forums! A cursor fetches documents in batches to reduce both memory consumption and network bandwidth usage. Cursors are highly configurable and offer multiple interaction paradigms for different use cases.I don’t know how many documents there will be in total,Querying and Counting are typically two different operations. One can do a query to get data that matched the query criteria. Whereas, Count will tell you the number of documents that match the query criteria.If you want to count the documents via cursor then you can execute the cursor, add all return documents in an array and do a count. Below is a small snippet:I can do two separate queries, one to count, but then there is no guarantee that the count is correct.Why do you believe that you will not get an exact count?Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "const Result = await cursor.toArray();\ntoArrayfor await (const document of cursor) {\nbatchSizecursor.size", "text": "Welcome to The MongoDB Community Forums! Thank you!Why do you believe that you will not get an exact count?That does give the exact amount, but toArray fetches all documents into an array at once. This is what I want to avoid. I want to read really large amounts of documents, without using huge amounts of memory while parsing them. My test data set results in 4GiB of memory if I load it all, while reading it in batches of 10k memory usage is ~200 MiB.I’m reading using:and batchSize set to 10k.If I make two separate queries the data might change between the first counting query and the second fetching the data, i.e. not guaranteed to contain the same number of documents.Given that there is a cursor.size method available in MongoDB, I’d assume this is the reason for its existence. 
The cursor probably knows how many documents matched.", "username": "Fredrik_Fager1" }, { "code": "Cursor.size()cursor.count()\n \n @returnsPromise\n async size(): Promise<number> {\n return this._cursor.count();\n }\n \n cursor.count()Warning: cursor.count is deprecated and will be removed in the next major version, please use collection.estimatedDocumentCount or collection.countDocuments insteadcountDocuments()[ { $match: query }, { $group: { _id: 1, n: { $sum: 1 } } } ]\ncursor.size()db.collection.find(...)db.collection.find(...).count()", "text": "Cursor.size() is a mongosh method that basically calls cursor.count() in the node driver (mongosh uses the node driver), please referHowever, cursor.count() is deprecated in the recent versions of the Node driver.\nRunning that in current mongosh showsWarning: cursor.count is deprecated and will be removed in the next major version, please use collection.estimatedDocumentCount or collection.countDocuments insteadas per the error message in mongosh you can use collection.countDocuments().The countDocuments() implementation (please refer this source) in the node driver is basically an aggregation ofIn conclusion, the cursor.size() method actually executes the query, bringing back our earlier point that db.collection.find(...) and db.collection.find(...).count() are two separate commands which means you can either count the number of documents on a cursor, or you can return those documents then count them later.If I make two separate queries the data might change between the first counting query and the second fetching the data, i.e. not guaranteed to contain the same number of documents.If the count is important then you can try using MongoDB Transactions where you can do whole operation in single transaction.Lastly, can you clarify, why do you need the count, when you are processing the documents one at a time? Why does the count matter in this case?", "username": "Tarun_Gaur" }, { "code": "", "text": "A bit of background: I’m working on a multi-tenant service, where we introduce a feature to export all data according to a set of permissions and rules. When exporting the data there are about 15 collections from which data is exported and then compressed into a gzip stream, document by document.The simple task at hand is to provide progress information during this export operation. In the end I have about 15 queries which result in the final export, and I would like to know how many documents are matched for each query, when the operation begins. I.e I don’t want to count the documents in the collections, as that is not the count exported.That said, I can solve the issue, while I don’t like the options available, as I would assume the cursor in MongoDB must know how many documents matched when returning the cursor. I don’t want to have the DB do unnecessary work, and I think this is something that could be used when solving several other issues, and that is why I have spent some time trying to figure this out. Apparently there is something that I don’t understand about how MongoDB is working internally with the query cursor.", "username": "Fredrik_Fager1" } ]
FindCursor size not available
2023-03-02T09:06:18.231Z
FindCursor size not available
913
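Editor's note: a hedged Node.js sketch of the pattern discussed above, taking a `countDocuments()` snapshot for progress reporting and then streaming the matched documents in batches. Collection, database and field names are placeholders, and the count is only an estimate of what the cursor will return, since documents can change between the two operations unless a transaction or snapshot read is used.

```javascript
const { MongoClient } = require('mongodb');

async function exportCollection(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const collection = client.db('app').collection('events'); // placeholder names

  const query = { tenantId: 'tenant-1' };                    // placeholder filter

  // Separate count used purely for progress display; it is not guaranteed to
  // match the cursor exactly if writes happen in between.
  const expected = await collection.countDocuments(query);

  const cursor = collection.find(query).batchSize(10000);
  let processed = 0;
  for await (const doc of cursor) {
    // ... write `doc` to the gzip stream here ...
    processed += 1;
    if (processed % 10000 === 0) {
      console.log(`progress: ${processed}/${expected}`);
    }
  }
  await client.close();
}
```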
null
[ "react-native", "flexible-sync" ]
[ { "code": "", "text": "Hello,We use realm (device-sync) with flexible-sync in our react-native app. This app is in production and used by a bunch of customers. Sometimes we get notified by customers that they get a white screen when they open the app. I guess this has something to do with the sync process. Looking into the logs, we noticed the following Error:ClientFileExpired Error:\nending session with error: router encountered error: error performing history scan: error fetching next batch: access attempted to expired part of server history (ProtocolErrorCode=222)Sadly, we did not find anything about this error in the docs. We thought about schema mismatch, but we are pretty sure there was no mismatch. In my device sync settings, I have a Client Max Offline Time of 30 days. Could it be that this error occurred when a customer was not online for 30 days and somehow the local realm file was expired ?", "username": "Sami_Karak" }, { "code": "", "text": "Hello @Sami_Karak ,Welcome back to The MongoDB Community Forums! I would advise you to bring this up with the Atlas chat support team . They may be able to check if anything on the Atlas side could have possibly caused this issue. In saying so, if a chat support is raised, please provide them with the following:Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hello,I would also love to do that, but mongodb-atlas wants me to pay 150$ for reactivation of the support plan to only ask a question every 12 months. This does not make sense for us.Maybe you can point me to the documentation of this error? Since this is custom error name by mongodb-atlas there should be some docs about this right ? I could not find any …We start to get really frustrated about using realm device-sync for our project because by going further and further we only encounter more and more problems with this service. It just don’t seem to be stable enough as it is now. And we are starting to loose hope that it will be stable in the near future.thanks in advance", "username": "Sami_Karak" }, { "code": "ProtocolErrorCode=222", "text": "Unfortunately, I cannot check what the issue is and as the error is related to your application and realm sync hence only Atlas chat support team will have more insight inside your Atlas account and services.I would also love to do that, but mongodb-atlas wants me to pay 150$ for reactivation of the support plan to only ask a question every 12 months. This does not make sense for us.I believe you can ask questions even with the basic support plan which is free.In general, ProtocolErrorCode=222 means Client file has expired.You can check below link to learn more about sync error handling.", "username": "Tarun_Gaur" } ]
White screen - ClientFileExpired
2023-03-15T14:29:38.110Z
White screen - ClientFileExpired
1,267
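Editor's note: error 222 typically surfaces when a client has been offline longer than the configured Client Maximum Offline Time and its local history can no longer be reconciled, which normally requires a client reset. The sketch below is an assumption-laden illustration of wiring a sync error handler and an automatic client-reset mode with Realm JS / `@realm/react`; the App ID and schema are placeholders, and the right reset mode depends on whether unsynced local changes may be discarded.

```javascript
import Realm from 'realm';

const app = new Realm.App({ id: 'your-app-id' }); // placeholder App ID

async function openSyncedRealm(schema) {
  const user = app.currentUser ?? (await app.logIn(Realm.Credentials.anonymous()));

  return Realm.open({
    schema,
    sync: {
      user,
      flexible: true,
      // Surfaces protocol errors such as ClientFileExpired instead of a blank screen.
      onError: (_session, error) => {
        console.error('sync error', error.name, error.message);
      },
      // Automatic client reset: discards unsynced local changes and re-downloads
      // server state. A recovery mode may be preferable if local changes matter.
      clientReset: {
        mode: 'discardUnsyncedChanges',
        onBefore: realm => console.log('client reset starting', realm.path),
        onAfter: (_before, after) => console.log('client reset done', after.path),
      },
    },
  });
}
```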
null
[]
[ { "code": " destination: file\n logAppend: true\n path: /data/log/mongodb/mongod.log\n component:\n accessControl:\n verbosity: 1\n command:\n verbosity: 1\n # COMMENT some component verbosity settings omitted for brevity\n replication:\n verbosity: 1\n election:\n verbosity: 1\n heartbeats:\n verbosity: 1\n initialSync:\n verbosity: 1\n rollback:\n verbosity: 1\n storage:\n verbosity: 1\n journal:\n verbosity: 1\n recovery:\n verbosity: 1\n write:\n verbosity: 1```\n\n\nPlease suggest me the log verbosity level, by which I will have minimum logs.\nAnd one more question, Is more logs make any impact on mongoDb performance.\ne.g. If my mongoDb logs file will goes upto 30GB of size in a day, will it impact on mongodb performance?", "text": "Hello everyone,I am working on a project in which I have used mongoDb as database,\nI have setup 3 replicas on mongodb,\nMy question is:-I am facing a lot of mongoDb logs in the log file of mongodb, and I want to minimize the logs, to which verbosity level I should have to set the logs?My current setting for logs in config file is:-", "username": "sahil_garg1" }, { "code": "Please suggest me the log verbosity level, by which I will have minimum logs", "text": "Please suggest me the log verbosity level, by which I will have minimum logsA verbosity level can have values from 0 to 5. The level 0 is the default value for verbosity, which logs basic informational messages about the on-going component activities. The log levels 1 through 5 further includes additional debug messages as well in the log file. The higher the number the more verbose your debug messages are. The verbosity level of negative one (-1) means that the object inherits the verbosity level from its parent component.To retrieve the current log components from the database with their respective verbosity, run the below command:db.getLogComponents()To change the verbosity level of individual component, for instance the “INDEX” component, use the below command:db.setLogLevel(0, “index”)", "username": "Abdullah_Madani" }, { "code": "", "text": "Logs are append only so very fast. But it’s difficult to say when you have hundred GB. Better try it yourself.", "username": "Kobe_W" } ]
Need to minimize mongodb logs
2023-03-20T12:32:40.135Z
Need to minimize mongodb logs
374
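Editor's note: a small mongosh sketch of the runtime equivalents mentioned in the reply, inspecting the current per-component verbosity and dialling everything back to the default level 0; pair this with removing the `verbosity: 1` overrides from the config file so the setting survives restarts.

```javascript
// Show the effective verbosity of every log component (-1 means "inherit").
printjson(db.getLogComponents());

// Reset the global default and the components raised in the config file back to 0.
db.setLogLevel(0);                    // global default verbosity
['accessControl', 'command', 'replication', 'storage', 'write'].forEach(c =>
  db.setLogLevel(-1, c)               // -1 = inherit from the (now 0) parent level
);

// Optional: rotate the current log file so a fresh, smaller file starts immediately.
db.adminCommand({ logRotate: 1 });
```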
null
[ "database-tools" ]
[ { "code": ".\\mongoimport.exe --uri=\"mongodb://localhost/optmyzr\" --collection=openWeatherData --mode=upsert --upsertFields=city.id --file=\"daily_16.json\"\n", "text": "I want to import this json file in upsert mode in MongoDB.File: http://bulk.openweathermap.org/sample/daily_16.json.gzThis file is almost 1GB (the compressed version is 90MB as you can see in the hyperlinked file).Exporting this zip file and importing the 1GB takes >50 minutes.\nIt has taken 20 minutes to complete 25% of import.\nWhich is too time consuming, is there any faster way to do this?", "username": "Arun_S_R" }, { "code": "", "text": "Which is too time consuming, is there any faster way to do this?Since the input file is in a non-BSON format, it needs to be converted into BSON while importing, which is where most of the time is spent. Also, mongoimport is a single threaded operation. The simplest way to expedite is to split the file into multiple input chunks and run the import operation parallelly against each input chunk.Note: The no. of parallel processes should be less than the no. of logical CPU Cores on the machine, as each process will run on a single core.", "username": "Abdullah_Madani" }, { "code": "", "text": "As answered here, creating an index on the upsert field, really reduced the time to 2 minutes.", "username": "Arun_S_R" }, { "code": "mongoimport", "text": "Replace existing documents in the database with matching documents from the import file. mongoimport will insert all other documents.A query will be performed on the specified fields, so that’s why an index should exist for them. Otherwise it will be apparently slow.", "username": "Kobe_W" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to import JSON (1GB) into MongoDB faster?
2023-03-20T13:23:41.003Z
How to import JSON (1GB) into MongoDB faster?
1,302
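Editor's note: as a concrete follow-up to the accepted fix, a hedged mongosh snippet that creates the index on the upsert field before running mongoimport; the database, collection and field names follow the command shown in the question.

```javascript
// Run in mongosh before starting the import. Upsert mode issues a lookup per
// document on the --upsertFields value, so an index on it avoids collection scans.
const target = db.getSiblingDB('optmyzr');
target.openWeatherData.createIndex({ 'city.id': 1 });

// Confirm the index is in place.
printjson(target.openWeatherData.getIndexes());
```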
null
[ "indexes" ]
[ { "code": "", "text": "Running “db.collection.dropIndex” cause a resource lock, which means that all read / write operations to the target collection are locked until deletion is completed.\nDoes “db.collection.hideIndex” cause the same effect?", "username": "Gabriel_Ayres" }, { "code": "dropIndexhideIndexhideIndex", "text": "Hi @Gabriel_Ayres welcome to the community!While the locking behaviour of dropIndex is clearly documented, I don’t think hideIndex follows the same pattern.hideIndex, I believe, just puts a flag on the associated index so the query planner is not considering it. It’s typically a step before actually dropping the index, to ensure that there are no negative performance consequences without that index. The ticket is SERVER-9306 if you want to see more details on this.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" }, { "code": "hideIndexcollModdropIndex", "text": "Further details on hideIndex method: it’s an alias to the database command of the same name. It executes a collMod command in the background, which obtains an exclusive lock on the parent database of the specified collection for the duration of the operation. However in normal operations, this should be a quick process vs. the dropIndex command, thus the lock should only be held for an instant.Kevin", "username": "kevinadi" } ]
Does "db.collection.hideIndex" cause a resource lock?
2023-02-16T13:07:24.885Z
Does "db.collection.hideIndex" cause a resource lock?
970
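Editor's note: a short mongosh sketch of the hide-then-drop workflow implied by the answers; the namespace and index name are placeholders. Hiding takes effect immediately for the query planner, while the index itself keeps being maintained until it is actually dropped.

```javascript
const coll = db.getSiblingDB('app').orders;        // placeholder namespace and index name

// Hide the candidate index so the query planner stops considering it.
coll.hideIndex('status_1_createdAt_1');

// Observe workload performance for a while, then either:
// (a) drop it for good once nothing regresses:
//     coll.dropIndex('status_1_createdAt_1');
// (b) or bring it back instantly; hidden indexes stay fully maintained,
//     so no rebuild is needed:
coll.unhideIndex('status_1_createdAt_1');
```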
null
[ "aggregation" ]
[ { "code": "[{\n \"priority\":4,\n \"options\":{\n \"setting1\":{\n \"value\":10\n },\n \"setting2\":{\n \"value\":20\n },\n \"setting4\":{\n \"value\":10\n },\n }\n },\n {\n \"priority\":1,\n \"options\":{\n \"setting1\":{\n \"value\":90\n },\n \"setting2\":{\n \"value\":80\n }\n }\n },\n {\n \"priority\":2,\n \"options\":{\n \"setting1\":{\n \"value\":10\n },\n \"setting3\":{\n \"value\":50\n }\n }\n }]\n{\n options: {\n \"setting1\": 90, // From priority: 1\n \"setting2\": 80, // From priority: 1\n \"setting3\": 50, // From priority: 2\n \"setting4\": 10, // From priority: 4\n }\n}\n", "text": "Hi all,\ni have this data so far in my aggregation:and i want to merge these according to their priority number (1 being the highest, 4 being the lowest)The result should be:Any help ?", "username": "nicoskk" }, { "code": "{ \"$sort\" : {\n \"priority\" : 1\n} }\n{ \"$group\" : {\n \"_id\" : null ,\n \"options\" : {\n \"setting1\" : { \"$first\" : \"$options.setting1\" } ,\n \"setting2\" : { \"$first\" : \"$options.setting2\" } ,\n \"setting3\" : { \"$first\" : \"$options.setting3\" } ,\n \"setting4\" : { \"$first\" : \"$options.setting4\" }\n }\n} }\n", "text": "I do not think that we have enough information to supply a complete solution but you could try the following and provide more information if it is not working.I would start with a $sort on priority withThe I would $group with _id:null using the $first accumulator for the settings:", "username": "steevej" } ]
Merge documents based on "priority" value
2023-03-20T15:21:09.053Z
Merge documents based on "priority" value
412
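Editor's note: a hedged end-to-end variant of the suggested pipeline, using `$mergeObjects` instead of per-field `$first` so that settings missing from the highest-priority document still come from the next one that has them. It assumes the documents live in a collection called `configs` and unwraps the `{ value: ... }` objects so the output matches the shape asked for in the question.

```javascript
db.configs.aggregate([
  // Lowest priority number wins, so sort descending: higher-priority documents
  // are merged last and overwrite the values set by lower-priority ones.
  { $sort: { priority: -1 } },
  { $group: { _id: null, merged: { $mergeObjects: '$options' } } },
  // Flatten { setting1: { value: 90 } } into { setting1: 90 }.
  { $project: {
      _id: 0,
      options: {
        $arrayToObject: {
          $map: {
            input: { $objectToArray: '$merged' },
            in: { k: '$$this.k', v: '$$this.v.value' }
          }
        }
      }
  } }
]);
```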
null
[]
[ { "code": "gyp WARN EACCES current user (\"healthd\") does not have permission to access the dev dir \"/root/.cache/node-gyp/14.18.1\"\ngyp WARN EACCES attempting to reinstall using temporary dev dir \"/var/app/staging/node_modules/mongodb-client-encryption/.node-gyp\"\ngyp WARN install got an error, rolling back install\ngyp WARN install got an error, rolling back install\ngyp ERR! configure error \ngyp ERR! stack Error: EACCES: permission denied, mkdir '/var/app/staging/node_modules/mongodb-client-encryption/.node-gyp'\ngyp ERR! System Linux 4.14.301-224.520.amzn2.x86_64\ngyp ERR! command \"/opt/elasticbeanstalk/node-install/node-v14.18.1-linux-x64/bin/node\" \"/opt/elasticbeanstalk/node-install/node-v14.18.1-linux-x64/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js\" \"rebuild\"\ngyp ERR! cwd /var/app/staging/node_modules/mongodb-client-encryption\ngyp ERR! node -v v14.18.1\ngyp ERR! node-gyp -v v5.1.0\ngyp ERR! not ok \nnpm WARN The package @babel/preset-env is included as both a dev and production dependency.\nnpm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents):\nnpm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {\"os\":\"darwin\",\"arch\":\"any\"} (current: {\"os\":\"linux\",\"arch\":\"x64\"})\nmongo-client-encryption module not found", "text": "Hi all,\nI am using mongo client encryption library for some encryption. I have tested it locally and it works fine. However when deploying to aws beanstalk via cloudshell, I am hit with the error messageand at times getting the error mongo-client-encryption module not foundkindly help.", "username": "Emmanuel_langat" }, { "code": "", "text": "@kevinadi , any know-how on this? Kindly help.", "username": "Emmanuel_langat" }, { "code": "", "text": "Hello @Emmanuel_langat ,Welcome to The MongoDB Community Forums! From my understanding, the error was generated when you’re trying to create a Beanstalk deployment, hence it’s likely to be a Beanstalk specific issue rather than MongoDB. 
A quick search showed a StackOverflow post that may be useful for your caseIf you are facing any issues with your AWS deployments, I would recommend you try getting help from AWS discussion forums as they might have more insight into your deploying environment.re:Post is the only AWS-managed community where experts review answers and author articles to help with AWS technical questions.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "Mar 9 15:19:17 ip-172-31-21-84 web: MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27020\nMar 9 15:19:17 ip-172-31-21-84 web: at Timeout._onTimeout (/var/app/current/node_modules/mongodb/lib/sdam/topology.js:284:38)\nMar 9 15:19:17 ip-172-31-21-84 web: at listOnTimeout (internal/timers.js:557:17)\nMar 9 15:19:17 ip-172-31-21-84 web: at processTimers (internal/timers.js:500:7) {\nMar 9 15:19:17 ip-172-31-21-84 web: reason: TopologyDescription {\nMar 9 15:19:17 ip-172-31-21-84 web: type: 'Unknown',\nMar 9 15:19:17 ip-172-31-21-84 web: servers: Map(1) { 'localhost:27020' => [ServerDescription] },\nMar 9 15:19:17 ip-172-31-21-84 web: stale: false,\nMar 9 15:19:17 ip-172-31-21-84 web: compatible: true,\nMar 9 15:19:17 ip-172-31-21-84 web: heartbeatFrequencyMS: 10000,\nMar 9 15:19:17 ip-172-31-21-84 web: localThresholdMS: 15,\nMar 9 15:19:17 ip-172-31-21-84 web: setName: null,\nMar 9 15:19:17 ip-172-31-21-84 web: maxElectionId: null,\nMar 9 15:19:17 ip-172-31-21-84 web: maxSetVersion: null,\nMar 9 15:19:17 ip-172-31-21-84 web: commonWireVersion: 0,\nMar 9 15:19:17 ip-172-31-21-84 web: logicalSessionTimeoutMinutes: null\nMar 9 15:19:17 ip-172-31-21-84 web: },\nMar 9 15:19:17 ip-172-31-21-84 web: code: undefined,\nMar 9 15:19:17 ip-172-31-21-84 web: [Symbol(errorLabels)]: Set(0) {}\nMar 9 15:19:17 ip-172-31-21-84 web: }\n", "text": "Thanks @Tarun_Gaur , this was really helpful.I have now successfully deployed the application on aws elastic beanstalk, however when now calling my API that uses the encrypted client,\nI hit this errorI believe this is a mongocryptd issue.Kindly advise on way forward.\nThanks", "username": "Emmanuel_langat" }, { "code": "", "text": "The error message that you shared indicates a possibility that the MongoDB driver is unable to connect to the mongocryptd service running on localhost at port 27020. There could be several different reasons for such issue, below are some things that you can check at your end:You may also want to try connecting to mongocryptd directly using a command-line client, to see if you’re able to connect to the service. If you’re still having issues, I would recommend checking the MongoDB driver logs and the mongocryptd logs for more information about the error.Notably from Install and Configure mongocryptd — MongoDB ManualEnterprise Feature\nThe automatic feature of field level encryption is only available in MongoDB Enterprise 4.2 or later, and MongoDB Atlas 4.2 or later clusters.\nmongocryptd is installed with MongoDB Enterprise Server (version 4.2 and later).Since this is an Enterprise Advanced feature, if you’re evaluating this feature and need further help, please DM me your contact details so I can notify the relevant teams regarding your issue.", "username": "Tarun_Gaur" } ]
Mongodb-client-encryption module not found error
2023-02-24T09:07:38.691Z
Mongodb-client-encryption module not found error
2,249
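Editor's note: for reference, a heavily hedged sketch of how the Node.js driver is usually pointed at a reachable mongocryptd when automatic encryption is enabled; every value here (connection string, key vault namespace, local key, spawn path) is a placeholder, and automatic encryption itself requires MongoDB Enterprise or Atlas as noted above.

```javascript
const { MongoClient } = require('mongodb');
const crypto = require('crypto');

// Placeholder 96-byte local master key; in production use a real KMS provider.
const localMasterKey = crypto.randomBytes(96);

const client = new MongoClient('mongodb://db-host:27017', {
  autoEncryption: {
    keyVaultNamespace: 'encryption.__keyVault',
    kmsProviders: { local: { key: localMasterKey } },
    extraOptions: {
      // Where the driver expects mongocryptd to listen (default is localhost:27020);
      // adjust if mongocryptd runs elsewhere in the Beanstalk environment.
      mongocryptdURI: 'mongodb://localhost:27020',
      // Path to the mongocryptd binary if the driver should spawn it itself.
      mongocryptdSpawnPath: '/usr/bin/mongocryptd',
    },
  },
});
```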
null
[ "aggregation", "queries", "node-js", "mongoose-odm", "time-series" ]
[ { "code": "const TcpDataSchema = new mongoose.Schema({\n src_ip: String,\n dst_ip: String,\n dst_mac: String,\n src_mac: String,\n dst_port: String,\n src_port: String,\n packet_size: String,\n protocols: Array,\n timestamp: {type: Number, required: true},\n}, {\n autoIndex: false,\n})\nconst TimeTcpDataSchema = new mongoose.Schema({\n timestamp: Date,\n metadata: {\n src_ip: String,\n dst_ip: String,\n dst_mac: String,\n src_mac: String,\n dst_port: String,\n src_port: String,\n protocols: Array,\n },\n packet_size: String,\n}, {\n timeseries: {\n timeField: 'timestamp',\n metaField: 'metadata',\n },\n})\n{\n $match:\n {\n timestamp: {\n $gte: <start>,\n $lte: <end>,\n },\n },\n },\n", "text": "Hi, I stored some iot data in a normal collection and a time series collection, respectively.The schema for the normal collection:The schema for the time series collection:They both have an index on ‘timestamp’.I made a query like this in both collectionsThe result from the normal collection:\nexplain plan (normal collection)\nThe result from the time series collection:\nexplain plan (time series collection)How come the time series collection spent more time on this query?\nThe db version is 6.0.4Thank you.Best,\nXiyuan", "username": "xiyuan_tu" }, { "code": "39ms88ms{\n \"timestamp\": {\n \"$date\": \"2023-01-01T00:00:00.000Z\"\n },\n \"metadata\": {\n \"dst_ip\": \"String\",\n \"dst_mac\": \"String\",\n \"dst_port\": \"String\",\n \"protocols\": [\n 12,\n 12,\n 23\n ],\n \"src_ip\": \"String\",\n \"src_mac\": \"String\",\n \"src_port\": \"String\"\n },\n \"packet_size\": \"String\",\n \"_id\": {\n \"$oid\": \"64134e6b4086586d15956077\"\n },\n \"meta\": {\n \"device_id\": 123\n },\n \"val\": 1367\n}\ntest> db.test_ts.aggregate([\n {\n $match: {\n timestamp: {\n $gte: ISODate(\"2023-01-03\"),\n $lte: ISODate(\"2023-04-10\"),\n },\n },\n },\n]).explain(\"executionStats\")\n executionSuccess: true,\n nReturned: 8380801,\n executionTimeMillis: 10007,\n totalKeysExamined: 8380801,\n totalDocsExamined: 8380801,\n executionSuccess: true,\n nReturned: 11063,\n executionTimeMillis: 10804,\n totalKeysExamined: 11068,\n totalDocsExamined: 11063,\nexecutionTimeMillistest> db.test_ts.aggregate([\n {\n $match: {\n timestamp: {\n $gte: ISODate(\"2023-01-03\"),\n $lte: ISODate(\"2023-04-10\"),\n },\n },\n },\n {\n $group: { _id: null, avg: { $avg: \"$val\" } },\n },\n]).explain(\"executionStats\") \n executionSuccess: true,\n nReturned: 1,\n executionTimeMillis: 16667,\n totalKeysExamined: 0,\n totalDocsExamined: 10000000,\n executionSuccess: true,\n nReturned: 11063,\n executionTimeMillis: 7545, \n totalKeysExamined: 0,\n totalDocsExamined: 11069,\ndb.runCommand({ collStats: \"<collection_name>\" }) indexBuilds: [],\n totalIndexSize: 110505984,\n totalSize: 497184768, // 497MB\n indexSizes: { _id_: 110505984 },\n ...\n numOrphanDocs: 0,\n storageSize: 386678784, // 386MB\n indexBuilds: [],\n totalIndexSize: 0,\n totalSize: 22687744, // 22MB\n ...\n numOrphanDocs: 0,\n storageSize: 22687744, // 22MB\n", "text": "Hi @xiyuan_tu,Welcome to the MongoDB Community forums As per your shared information, the Non-TS collections take 39ms to return 30K documents where as the TS collection takes 88ms respectively.When considering the default case, it appears that the query time still falls well below the slow query threshold of 100ms. 
However, we recommend conducting more tests to determine whether using a TimeSeries collection is suitable for the specific use case.The TimeSeries collection is specifically designed to handle a large dataset of time series data. It optimizes disk usage once the total size exceeds the available RAM. In your case, the slow queries could be due to a small dataset of only 30K documents residing in cache memory. This could be the reason why the TimeSeries collection is slower, as it uses a compressed bucket format that requires an extra decompression/unpacking step to convert the data back to regular BSON.To illustrate the point, I created test data of 10 million documents and inserted them into a time series and a regular collection for comparison. Sharing the results for your reference:The sample document on which I ran the test:I ran a $match query on the TS and the Regular collections, and the results are as follows:Regular Collection:Time-Series Collection:As you can note the executionTimeMillis result from explainOutput for both collections are quite similar (about 10 sec).However, if I perform additional aggregations, such as the $match + $group query as shown below, the resulting output is as follows:Regular collection:Time-Series Collections:After aggregating the collections further, I found that the TS Collection takes only 7.5 seconds to return the result, while the Regular Collection takes 16.6 seconds. When I aggregate these collections, the performance of the TS Collection appears to be better.Further, the TS Collection has a clear advantage when considering the total collection size and secondary indexes.To gather more information, I executed db.runCommand({ collStats: \"<collection_name>\" }) to see the memory occupied by the collections.Regular Collection:Time-Series Collection:Here we can see that the total storage size (including collection and index) of the regular collection is much larger than that of the TS collection. Specifically, the regular collection occupies 497 MB, while the TS collection occupies only 22 MB.Even without considering the index, the regular collection still takes up 386 MB of space, which is over 17 times the size of the TS collection. Therefore, the TS collection is just 4.5% of the total size of the regular collection.I hope it gives you an understanding of the TimeSeries collection and its use-case suitability.Feel free to reach out if you have any further questions.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you very much for your help.", "username": "xiyuan_tu" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
The time series collection seems to be slower than a normal collection when dealing with IoT data
2023-03-12T00:58:55.880Z
The time series collection seems to be slower than a normal collection when dealing with IoT data
1,321
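Editor's note: for readers reproducing the comparison, a hedged mongosh sketch of how the time series collection from the question can be declared directly (Mongoose's `timeseries` option maps to this), including a granularity hint and a secondary index on a metadata field used for filtering. The collection names, the `seconds` granularity and the indexed field are assumptions for illustration.

```javascript
// Create the time series collection with the same time/meta layout as the schema.
db.createCollection('timetcpdatas', {
  timeseries: {
    timeField: 'timestamp',
    metaField: 'metadata',
    granularity: 'seconds',   // assumption: adjust to match ingest frequency
  },
});

// Secondary index on commonly filtered metadata plus time, to support $match stages.
db.timetcpdatas.createIndex({ 'metadata.src_ip': 1, timestamp: 1 });

// Compare on-disk footprint with the regular collection.
print(db.runCommand({ collStats: 'timetcpdatas' }).storageSize);
print(db.runCommand({ collStats: 'tcpdatas' }).storageSize);
```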
null
[ "python" ]
[ { "code": " Mongodb dates are milliseconds since epoch, a 64-bit integer value.\n Python datetime is seconds since epoch, a float value.\n in python datetime, milliseconds appear after a decimal point\nconvert mongodb date to python date: mongodbDate / 1000 = pythonDate\ntimezones are handled in the application layer (which means me)\n", "text": "How do I query for a date in a mongodb document that has been opened in pandas?Could someone refer me to an concise resource on handling datetime values from mongodb in python and pandas? There are confusing issues here that I am slowly figuring out.WhatI have learned:So when a mongodb document is read into a pandas dataframe, how is a date queried?\nFor example, select (iloc) data with a date of ‘2023-03-01’?Apologies for what is no doubt a dumb question. The diversity of standards in managing datetimes creates challenges that are difficult to reason through.", "username": "Philip_Chenette" }, { "code": "startDate = pd.Timestamp('2023-03-07')\nend__Date = pd.Timestamp('2023-03-09')\n\nmask = ((df['timestamp'] > startDate) & (df['timestamp'] <= end__Date))\ndf_masked = df.loc[mask]\n", "text": "Found the problem - it’s iloc\niloc is purely positional.\nfor a logical mask, use loc\nfor example:", "username": "Philip_Chenette" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Pandas datetime from mongodb
2023-03-20T17:45:03.670Z
Pandas datetime from mongodb
800
null
[]
[ { "code": "{\n _id: ObjectId(\"5fce0e137ff7401634bad2ac\")\n address: \"new york\"\n description: \"this is a great city\"\n image: \"images\\0.4644859674390589-gikiguk.jpg\"\n userId: \"5fcdcd8487e37216bc82171b\"\n}\n{\n _id: ObjectId(\"5fcdcd8487e37216bc82171b\")\n name: \"jack\"\n profile: {image: \"images\\0.4644859674390589-dofowfg.jpg\", description: \"be happy dude\"}\n email: \"[email protected]\"\n password: \"2a$12$gzRrY83QAjwLhdyvn3JQ2OcMj3XG65.BULva4cZqcuSxDhhMbSXCq\"\n}\n{\n _id: ObjectId(\"5fce0e191ff7301666kao2xc\")\n likes: {quantity: 1, likes: [{isActive: true, userId: \"5fcdcd8487c31216cc87886r\"}]}\n comments: {quantity: 1, comments: [{userId: \"5fcdcd8487c31216cc87886r\" , comment: \"awesome city\"}]}\n postId: \"5fce0e137ff7401634bad2ac\"\n}\nposts collection$lookupusers collectionpostId_idpostIdpostId$lookuplikeComments documents[] [\n {\n postId: '5fce0e137ff7401634bad2ac', //from post\n location: 'shiraz', // from post\n description: 'this is a greate city', // from post\n image: 'images\\\\0.4644859674390589-gikiguk.jpg', // from post\n userId: '5fcdcd8487e37216bc82171b', // from user\n name: 'mohammad', // from user\n profile: 'images\\\\0.6093033055735912-DSC_0002_2.JPG', // from user\n comments: { quantity: 1, comments: [{userId: \"...\", name: \"...\", profile: \"...\", comment: \"...\"}] }, // from likesComments and then lookup to users documets to get data about users that wrote comments\n likes: {quantity: 1, [{userId: \"...\", name: \"...\", profile: \"...\"}]} // from likesCommnets and then lookup to users documets to get data about users that liked the post\n }\n ]\ndb.collection(\"posts\")\n .aggregate([\n { $match: { userId: { $in: users.map(u => u) } } },\n {\n $project: {\n _id: 0,\n userId: { $toObjectId: \"$userId\" },\n postId: { $toString: \"$_id\" },\n location: \"$address\",\n description: \"$description\",\n image: \"$image\"\n }\n },\n {\n $lookup: {\n from: \"users\",\n localField: \"userId\",\n foreignField: \"_id\",\n as: \"userInfo\"\n }\n },\n { $unwind: \"$userInfo\" },\n {\n $project: {\n _id: 0,\n postId: 1,\n location: 1,\n description: 1,\n image: 1,\n userId: { $toString: \"$userId\" },\n name: \"$userInfo.name\",\n profile: \"$userInfo.profile.image\"\n }\n },\n {\n $lookup: {\n from: \"likes-comments\",\n localField: \"postId\",\n foreignField: \"postId\",\n as: \"likesComments\"\n }\n },\n { $unwind: \"$likesComments\" },\n {\n $project: {\n postId: 1,\n location: 1,\n description: 1,\n image: 1,\n userId: 1,\n name: 1,\n profile: 1,\n likes: {\n $map: {\n input: \"$likesComments.likes.likes\",\n as: \"item\",\n in: {\n $toObjectId: \"$item.userId\"\n }\n }\n },\n quantity: \"$likesComments.comments.quantity\",\n comments: {\n comments: {\n $map: {\n input: \"$likesComments.comments.comments\",\n as: \"item\",\n in: {\n $toObjectId: \"$item.userId\"\n }\n }\n }\n }\n }\n },\n {\n $lookup: {\n from: \"users\",\n localField: \"likes\",\n foreignField: \"_id\",\n as: \"likes\"\n }\n },\n { $unwind: \"$likes\" },\n {\n $lookup: {\n from: \"users\",\n localField: \"comments.comments\",\n foreignField: \"_id\",\n as: \"comments\"\n }\n },\n { $unwind: \"$comments\" },\n { $addFields: { quantity: \"$quantity\" } },\n {\n $project: {\n _id: 0,\n postId: 1,\n location: 1,\n description: 1,\n image: 1,\n userId: 1,\n name: 1,\n profile: 1,\n likes: [\n {\n userId: { $toString: \"$likes._id\" },\n name: \"$likes.name\",\n profile: \"$likes.profile.image\"\n }\n ],\n comments: {\n quantity: 1,\n comments: [\n {\n userId: { $toString: 
\"$comments._id\" },\n \n", "text": "let’s consider i have some collection and the structures of data is look like thesepost col:user col:likesComments col:what i want to do is i have a userId (maybe more than one) that i want to search through all of posts (that is belong to the user) and get some data from posts collection and then $lookup to users collection that again get some data about user and after that every post has own postId (i converted _id to postId ). with postId i can $lookup to likeComments documents and get some data about any posts, but if a post don’t have any likes or comments, all of my query that i wrote them gives me an empty array [] (not data at all just empty array) but if a post has at least one like and one comment, all of my query works find.what i expect is something like this:query:i know that my query is wrong but i can’t find what’s wrong here.", "username": "mohammad_nowresideh" }, { "code": "{\n postId: '5fce0e137ff7401634bad2ac', //from post\n location: 'shiraz', // from post\n description: 'this is a greate city', // from post\n image: 'images\\\\0.4644859674390589-gikiguk.jpg', // from post\n userId: '5fcdcd8487e37216bc82171b', // from user\n name: 'mohammad', // from user\n profile: 'images\\\\0.6093033055735912-DSC_0002_2.JPG', // from user\n comments: { quantity: 1, comments: [{userId: \"...\", name: \"...\", profile: \"...\", comment: \"...\"}] }, // from likesComments and then lookup to users documets to get data about users that wrote comments\n likes: {quantity: 1, [{userId: \"...\", name: \"...\", profile: \"...\"}]} // from likesCommnets and then lookup to users documets to get data about users that liked the post\n }\n", "text": "Hi @mohammad_nowresideh,Welcome to MongoDB community!The query is very complex and resembles more of a relationsional schema for relational databases. This examples exactly why your schema design is not optimal for MongoDB and will put you in lots of troubles and complex aggregation.If your application expects to get and obejct like the following why not to structure the data this way:Whenever a user writes a post add all data to that document. Update likes and push comments into the arrays. If the comments array grows big add an extended document holding the rest of the comments.You can index the userid field for searches.Fixing this aggregation now will not avoid you from running into issues in the future, fixing the schema will.Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks @Pavel_Duchovny,\nYes, you’r right this is very complex, but to have clear data i separated the additional information to another collection. in the next project i’ll do all of that in the one document.\nin my current project if i want to change my schema i have to change a lot of code and this is very cumbersom. but may i ask you what is the query of this data?\nplease.", "username": "mohammad_nowresideh" }, { "code": "", "text": "Hi @mohammad_nowresideh,Not sure I understand the question.If you cannot change the schema I would do 3 indexed queries rather than managing this very long 3 lookup query with all the transformation.Query posts, query users, query comments.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you for answering my question @Pavel_Duchovny", "username": "mohammad_nowresideh" }, { "code": "", "text": "Its an old Thread, but ima currently struggling with schemas like that (coming from relation dbs too).If I store data like that. 
How do I handle a user-name change?\nIf I have 10 or 15 collections with arrays of comments and likes, and a user generates, say, 100,000 likes, is this still the way to go?", "username": "paD_peD" } ]
How to use multiple $lookup with arrays of objects
2020-12-10T17:03:20.365Z
How to use multiple $lookup with arrays of objects
15,270
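A note on the open question at the end of the thread above (how to propagate a user-name change when the name is embedded in many collections): one common approach is to fan the rename out with one updateMany per embedding array. The sketch below is a minimal mongosh example under assumed names (a posts collection embedding comments[].name and likes[].name keyed by userId); it is not the original poster's actual schema.

    // Assumed shape: posts embed { comments: [{ userId, name, ... }], likes: [{ userId, name, ... }] }
    const userId = "5fcdcd8487e37216bc82171b"; // example id of the renamed user
    const newName = "new display name";

    // 1. Update the canonical user document.
    db.users.updateOne({ _id: ObjectId(userId) }, { $set: { name: newName } });

    // 2. Fan out to each embedded copy, matching array elements by userId.
    db.posts.updateMany(
      { "comments.userId": userId },
      { $set: { "comments.$[c].name": newName } },
      { arrayFilters: [{ "c.userId": userId }] }
    );
    db.posts.updateMany(
      { "likes.userId": userId },
      { $set: { "likes.$[l].name": newName } },
      { arrayFilters: [{ "l.userId": userId }] }
    );

If renames are frequent or the embedded copies number in the hundreds of thousands, the usual alternative is to embed only the userId and resolve the display name at read time (or cache it), trading one $lookup per read for a cheap rename.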
null
[]
[ { "code": "", "text": "Hi,\nWe are testing Mongodb 6.0.4 community edition in our Kubernetes environment. We have ruled out the use of the mongodb-kubernetes-operator till it is GA.We have a 90GB collection and are noticing that a standalone instance uses 14-16 GB of memory. There are three indexes and we are not running many queries against it currently, yet we see 14-16GB of memory usage.We intentionally did not apply resource limits to the mongodb statefulset to determine how it responds and are using mongodb out of the box without any tuning or additional parameters.We are also using the official mongo:6.0 image.Is this memory usage normal ?", "username": "Sean_Crasto" }, { "code": "", "text": "did you check https://www.mongodb.com/docs/manual/reference/configuration-options/#mongodb-setting-storage.wiredTiger.engineConfig.cacheSizeGB", "username": "Kobe_W" }, { "code": "", "text": "Thanks for the link. Yes, we did look at that.What we are observing is that mongo is respecting the Kubernetes limits when we put one in place. So for a 16GB limit, it is using the 50% of (RAM - 1 GB) limit while running, which is great.On mongo 2.6.12 with the same collection, we were only using 1GB without any limits in place. We also understand that the comparison is between two different database engines but the question I’m being asked is why is there such a big discrepancy. Which is why the question was raised to begin with.I guess we will have to try reducing the limit further to see how far we can go without degrading performance.", "username": "Sean_Crasto" } ]
Memory usage on a mongodb 6.0.4 standalone instance
2023-03-16T15:47:34.045Z
Memory usage on a mongodb 6.0.4 standalone instance
597
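The "50% of (RAM - 1 GB)" behaviour described above is the default WiredTiger cache ceiling, and since the pod respected the Kubernetes limit, the remaining question is usually just how big that ceiling actually is inside the container. A quick, hedged way to check from mongosh (standard serverStatus fields, converted to GB here):

    // Reports the configured WiredTiger cache ceiling and current usage.
    const cache = db.serverStatus().wiredTiger.cache;
    print("cache max (GB):   ", (cache["maximum bytes configured"] / 1024 ** 3).toFixed(2));
    print("cache in use (GB):", (cache["bytes currently in the cache"] / 1024 ** 3).toFixed(2));

If the default is too large for the pod, storage.wiredTiger.engineConfig.cacheSizeGB (the option linked above) caps it explicitly.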
null
[]
[ { "code": "[\n{\n \"_id\": \"6414ac1101f297b14f36ea42\",\n \"name\": \"Mens Casual Premium Slim Fit T-Shirts\",\n \"image\": \"https://fakestoreapi.com/img/71-3HjGNDUL._AC_SY879._SX._UX._SY._UY_.jpg\",\n \"category\": \"Men's clothing\",\n \"manufacturer\": \"Guleb Inc.\",\n \"manufacturerPrice\": 22,\n},\n{\n \"_id\": \"6414ad1601f297b14f36ea53\",\n \"name\": \"Fjallraven - Foldsack No. 1 Backpack, Fits 15 Laptops\",\n \"image\": \"https://fakestoreapi.com/img/81fPKd-2AYL._AC_SL1500_.jpg\",\n \"category\": \"Men's clothing\",\n \"manufacturer\": \"Guleb Inc.\",\n \"manufacturerPrice\": 109,\n},\n{\n \"_id\": \"6414ad2b01f297b14f36ea55\",\n \"name\": \"Mens Cotton Jacket\",\n \"image\": \"https://fakestoreapi.com/img/71li-ujtlUL._AC_UX679_.jpg\",\n \"category\": \"Men's clothing\",\n \"manufacturer\": \"Guleb Inc.\",\n \"manufacturerPrice\": 55,\n}\n]\n[\n{\n \"_id\": \"641364bf9cb7c37fab3b1f8f\",\n \"name\": \"Based\",\n \"products\": [\n {\n \"buyingPrice\": 30,\n \"sellingPrice\": 40,\n \"totalAmount\": 5,\n \"_id\": \"6414ac1101f297b14f36ea42\"\n },\n {\n \"buyingPrice\": 120,\n \"sellingPrice\": 140,\n \"totalAmount\": 10,\n \"_id\": \"6414ad1601f297b14f36ea53\"\n }\n ],\n]\n[\n{\n \"_id\": \"641364bf9cb7c37fab3b1f8f\",\n \"name\": \"Based\",\n \"products\": [\n {\n \"buyingPrice\": 30,\n \"sellingPrice\": 40,\n \"totalAmount\": 5,\n \"_id\": \"6414ac1101f297b14f36ea42\",\n \"name\": \"Mens Casual Premium Slim Fit T-Shirts\",\n \"image\": \"https://fakestoreapi.com/img/71-3HjGNDUL._AC_SY879._SX._UX._SY._UY_.jpg\",\n \"category\": \"Men's clothing\",\n \"manufacturer\": \"Guleb Inc.\",\n \"manufacturerPrice\": 22,\n },\n {\n \"buyingPrice\": 120,\n \"sellingPrice\": 140,\n \"totalAmount\": 10,\n \"_id\": \"6414ad1601f297b14f36ea53\",\n \"name\": \"Fjallraven - Foldsack No. 1 Backpack, Fits 15 Laptops\",\n \"image\": \"https://fakestoreapi.com/img/81fPKd-2AYL._AC_SL1500_.jpg\",\n \"category\": \"Men's clothing\",\n \"manufacturer\": \"Guleb Inc.\",\n \"manufacturerPrice\": 109,\n }\n ],\n]\n", "text": "I have a storages collection and a products collection.\nI need to supplement the information in the arrays with the products of the storages collection with the information obtained from the products collection.\nThanks in advance!Products collectionStorages collectionExpected output", "username": "Gleb_Ivanov" }, { "code": "$lookupproductDetails$project$mapproducts$filterproductDetails_id$first$arrayElemAt$mergeObjectsdb.storages.aggregate([\n {\n $lookup: {\n from: \"products\",\n localField: \"products._id\",\n foreignField: \"_id\",\n as: \"productDetails\"\n }\n },\n {\n $project: {\n name: 1,\n products: {\n $map: {\n input: \"$products\",\n as: \"product\",\n in: {\n $mergeObjects: [\n \"$$product\",\n {\n $first: {\n $filter: {\n input: \"$productDetails\",\n cond: { $eq: [\"$$this._id\", \"$$product._id\"] }\n }\n }\n }\n ]\n }\n }\n }\n }\n }\n])\n", "text": "Hello @Gleb_Ivanov, Welcome to the MongoDB community forum,You can try something like this,", "username": "turivishal" }, { "code": "", "text": "Thank you very much for your answer! I spent a lot of time trying to do this, but I couldn’t. You are a true master at this.", "username": "Gleb_Ivanov" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to supplement the information in the arrays of one collection with the information obtained from another collection?
2023-03-20T16:54:44.050Z
How to supplement the information in the arrays of one collection with the information obtained from another collection?
438
null
[]
[ { "code": "use admin\ndb.getUsers()\n...\n {\n \"_id\" : \"admin.userreader\",\n \"userId\" : UUID(\"3d9ae333-5bab-340b-85c1-fa2de5226685\"),\n \"user\" : \"userreader\",\n \"db\" : \"admin\",\n \"roles\" : [\n {\n \"role\" : \"readAnyDatabase\",\n \"db\" : \"admin\"\n }\n ],\n \"mechanisms\" : [\n \"SCRAM-SHA-1\",\n \"SCRAM-SHA-256\"\n ]\n },\n...\n", "text": "I want to create a read-only user in mongo 5.0.3.\nI tried with the readAnyDatabase role, or with the read role on each of the databases,\nIn both cases, the user is created, is able to log in and read the data,\nBUT also able to insert and delete documents.\nWhat user or server configuration can cause that?", "username": "Leiberman_Yuval" }, { "code": "[direct: mongos] admin> db.createUser({user:\"readOnly\",pwd:\"test\",roles:[{role:\"readAnyDatabase\",db:\"admin\"}]})\n{\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1679257081, i: 2 }),\n signature: {\n hash: Binary(Buffer.from(\"878756225d60b31ee03ef06a2139d811883bb0e5\", \"hex\"), 0),\n keyId: Long(\"7206364152367939607\")\n }\n },\n operationTime: Timestamp({ t: 1679257081, i: 2 })\n}\n\n\n[root@mongos ~]# mongosh -host ip_addr:port -authenticationDatabase admin -u readOnly -p test \nCurrent Mongosh Log ID: 64176e181782364e658fc610\nConnecting to: mongodb://<credentials>@ip_addr:port/?directConnection=true&authSource=admin&tls=true&tlsCertificateKeyFile=%2Fetc%2Fsecurity%2Fmongodb%2F....&tlsCAFile=%2Fetc%2Fsecurity%2Fmongodb%2F...&appName=mongosh+1.8.0\nUsing MongoDB: 5.0.15\nUsing Mongosh: 1.8.0\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n[direct: mongos] test> show collections\ncollection_test\n[direct: mongos] test> db.collection_test.insertOne({test:false})\nMongoServerError: not authorized on test to execute command { insert: \"collection_test\", documents: [ { test: false, _id: ObjectId('64176e39e972edeb1e236958') } ], ordered: true, lsid: { id: UUID(\"41a15864-77fe-448f-97d2-2371ef78fb06\") }, txnNumber: 1, $clusterTime: { clusterTime: Timestamp(1679257135, 2), signature: { hash: BinData(0, 863AAD840F5AF9BC7804B3619DBD98698BB7F6D5), keyId: 7206364152367939607 } }, $db: \"test\" }\n\n[direct: mongos] test> db.collection_test.find().pretty()\n[\n {\n _id: ObjectId(\"641383e8f2bb42d739875bac\"),\n nome: 'test',\n cognome: 'test'\n }\n]\n\n", "text": "Hi @Leiberman_Yuval,\nI have a slightly newer version than yours did and i did quick test to check, but everything seems to be working fine:maybe you didn’t authenticate with the correct username.Best Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Turns out authorization was disabled in the security configuration (https://www.mongodb.com/docs/manual/reference/configuration-options/#security-options).", "username": "Leiberman_Yuval" } ]
Creating a read-only user
2023-03-19T12:38:40.607Z
Creating a read-only user
3,037
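Since the root cause above turned out to be that access control was never enabled, a quick way to confirm how a running mongod was started is to ask it for its own startup options. A small mongosh sketch:

    // Look for security.authorization: "enabled" (or --auth in argv);
    // if it is absent, role checks such as readAnyDatabase are not enforced.
    const opts = db.adminCommand({ getCmdLineOpts: 1 });
    printjson(opts.parsed.security);
    printjson(opts.argv);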
https://www.mongodb.com/…cc6da3bb658f.png
[ "graphql" ]
[ { "code": "query {\n\tnav {\n\t\t_id\n\t\tkey\n\t\tpageId\n\t\tchildren {\n\t\t\talternateurl\n\t\t\tcontentType\n\t\t\tkey\n\t\t\tnaame\n\t\t\tpageId\n\t\t\tpageLevel\n\t\t\tshowonnav\n\t\t\tshowonsitemap\n\t\t\ttitle\n\t\t\turl\n\t\t\tchildren .... etc\n\t\t}\n\t}\n}\nquery {\n\tnav {\n\t\t_id\n\t\tkey\n\t\tpageId\n\t\tchildren {\n\t\t\talternateurl\n\t\t\tcontentType\n\t\t\tkey\n\t\t\tnaame\n\t\t\tpageId\n\t\t\tpageLevel\n\t\t\tshowonnav\n\t\t\tshowonsitemap\n\t\t\ttitle\n\t\t\turl\n\t\t\tinfoItem {\n\t\t\t\titemImageId\n\t\t\t\titemLinkObjectId\n\t\t\t\titemLinkText\n\t\t\t\titemLinkType\n\t\t\t\titemLinkUrl\n\t\t\t\titemText\n\t\t\t\titemTitle\n\t\t\t}\n\t\t\tchildren .... etc\n\t\t}\n\t}\n}\n", "text": "A specific object/json/graphql combo in our setup has worked on production for more than a year. I have added some more properties to this object in Mongodb json collection and updated the schema, added data, which i can see in the collection, yet graphql queries only show the new property (which is an array) as null. I have tried the application graphql (nextjs), Postman and the mongodb website tools to query app services, but the new property is always null in the returned json. Am i missing something?Here is the original query:Here is the updated query with the new array object “infoItem”:Here in the MongoDB collection you can see there are values for the new properties. The schema has been updated to suit the new props (by the “generate” tool)Here is a screengrab of the data returned from Postman - with the new property “infoItem” present, but with a null value.Can anyone shed any light on this? Why doesn’t the query pull in the data?", "username": "James_Houston" }, { "code": "\"infoItem\" [ {\n\t\"itemImageId\": null,\n\t\"itemLinkObjectId\": null,\n\t\"itemLinkText\": null,\n\t\"itemLinkType\": null,\n\t\"itemLinkUrl\": null,\n\t\"itemText\": null,\n\t\"itemTitle\": null\n}]\n", "text": "Hi @James_Houston,Did you find a solution?\nI’ve got almost the same issue but in my case I have something like this:", "username": "Julien_Chouvet" }, { "code": "", "text": "No luck so far - if i dont get one soon ill have to change from an array to a series of individual properties. There are (will be) only two items, but our next developer wanted it as an array", "username": "James_Houston" }, { "code": "", "text": "I Changed properties from an array to a set of 12 individual properties, updated the schema, yet all the values come through as null in the web interface:\nimage1209×523 34.6 KB\nThis is the data in the collection:", "username": "James_Houston" } ]
Puzzling null values returned from new property with updated schema
2023-02-14T10:39:02.128Z
Puzzling null values returned from new property with updated schema
1,355
null
[ "configuration" ]
[ { "code": "systemLog:\n logAppend: true\n logRotate: rename\nsystemLog:\n logAppend: true\n logRotate: rename\n logRotateTime: midnight\n logRotateSize: 100\n", "text": "As I understand it, in /etc/mongod.confWhen I set logRotate to “rename”, MongoDB will automatically save the file under a different name and open a new file when the rotate occurs.When I set logRotate to “reopen”, MongoDB just closes and opens the file when the rotate occurs. (because mongodb expect linux to take care of it)Is it right?Then, when set to “rename”, how do I make the log file size exceed a certain size or logRotate to occur automatically at a certain time?As with “reopen”, do I need to create /etc/logrotate.d/mongod file to configure it?I am using the latest version.\n/etc/mongod.conf file,the chatGPT says that I can specify options called logRotateSize and logRotateTime, but if I add those, an error occurs. Was this in an older version and then disappeared? or Is chatGPT telling the wrong information?Thanks.", "username": "gantodagee_N_A" }, { "code": "", "text": "At lease from latest manual, i don’t see such an option.", "username": "Kobe_W" }, { "code": "", "text": "Thanks. could you know about first question?I wonder if I understand the difference between rename and reopen correctly.", "username": "gantodagee_N_A" } ]
How can I specify the conditions under which logRotate occurs?
2023-03-20T00:41:58.607Z
How can I specify the conditions under which logRotate occurs?
871
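To fill in the practical half of the question above: mongod itself has no logRotateSize/logRotateTime settings, so size- or time-based rotation is normally driven from outside, for example a cron job (or an /etc/logrotate.d rule) that asks the server to rotate. The server-side command is just:

    // With systemLog.logRotate: rename, this renames the current log file with a
    // timestamp suffix and opens a fresh one; with reopen, it closes and reopens
    // the same path after an external tool has renamed it.
    db.adminCommand({ logRotate: 1 });

Sending SIGUSR1 to the mongod process has the same effect, which is what logrotate postrotate scripts typically use.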
null
[]
[ { "code": "", "text": "Hello guys, I’m running MongoDB Enterprise version v6.0.5, storage engine: inMemory.\nI’ve created a collection around 7GB in size, and after I removed all the documents, I noticed still my memory usage is around 7GB. After stopping MongoDB, I noticed the memory usage went to normal - so this data was purged only after restart. I’ve also tried to drop collection - and even waited over 24hrs to see if files will be cleaned from memory - but it was still stuck in memory, again after mongod restart it was cleaned and memory usage was ok.\nThis is a big issue, because I’m working with a large size of data, and I’m deleting some entires every few days, this will lead to memory leak - because I will delete my data, but memory usage will be still the same.Could you please test this and help?\nBest,\nDavid", "username": "David_David2" }, { "code": "", "text": "Once deleted , Mongodb may not just return those memory back to OS. Very likely those memory will be used by your new incoming data and releasing-then-reallocating memory takes time.The database may also pre-allocate memory before they are actually needed, this is purely to improve performance. Many software do this.Mongodb uses tcmalloc lib for memory management (based on manual), so check it out.", "username": "Kobe_W" }, { "code": "", "text": "Thanks a lotBest,\nDavid", "username": "David_David2" } ]
BUG: Dropped and removed documents, are not removed from memory! version v6.0.5
2023-03-19T15:45:20.520Z
BUG: Dropped and removed documents, are not removed from memory! version v6.0.5
396
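One way to see whether the memory observed above is held by the allocator (expected) rather than leaked is to read the tcmalloc counters that serverStatus exposes on builds using tcmalloc. A hedged mongosh sketch:

    // Freed-but-retained memory shows up as pageheap_free_bytes /
    // pageheap_unmapped_bytes rather than as allocated data.
    const t = db.serverStatus().tcmalloc;
    print("allocated (MB):    ", (t.generic.current_allocated_bytes / 1048576).toFixed(0));
    print("heap size (MB):    ", (t.generic.heap_size / 1048576).toFixed(0));
    print("pageheap free (MB):", (t.tcmalloc.pageheap_free_bytes / 1048576).toFixed(0));

For the in-memory storage engine specifically, storage.inMemory.engineConfig.inMemorySizeGB bounds how much data the engine itself may hold.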
null
[ "atlas-cluster", "serverless" ]
[ { "code": "", "text": "Hello there!I am planning to design a system for a small-to-medium-scale social medium platform and was unsure which cluster tier would best suit my use case.We are expecting to have around 25K users with roughly 10-15 million reads (~100k writes) on a daily basis. The database must reside within Canada, so serverless instances are not an option for me. When exploring the dedicated cluster options, I was confused about the max number of connections.For example, the M20 tier is stated to have 3K max connections. Is that referring to the connections formed for the length of a read/write request? Assuming there were 10 000 active users, would this max connections limit be met?Additionally, I have experience with auto-scaled serverless services that charge according to the number of read/write operations. However, I am unfamiliar with whether an M20 tier cluster can handle such a quantity of reads.I apologize for my lack of knowledge and would greatly appreciate any help!Best regards,\nAdrian Olesniewicz", "username": "Adrian_Olesniewicz" }, { "code": "", "text": "Hi @Adrian_Olesniewicz and welcome to the MongoDB Community forum!!We are expecting to have around 25K usersFirstly, can you help me understand if the users here specify the number of people using the application and are not the Atlas users.For example, the M20 tier is stated to have 3K max connections.As mentioned in the Connection Limits and the Atlas Tier documentations, the limit is for the concurrent connections. The connections here represents the connection pooling and not the users using the database on the Atlas.We are expecting to have around 25K users with roughly 10-15 million readsYou may want to experiment with setting readPreference to something other than Primary (the default) to other settings such as PrimaryPreferred, or Secondary, to see if any of them suit your use case best.Lastly, to plan a deployment of this magnitude, I’d suggest seeking Professional Advice . There are numerous considerations, and an experienced consultant can provide better guidance based on a more comprehensive knowledge of your needs. Some scalability decisions (such as shard key selection) become more difficult to course correct after a large quantity of production data has been collected.Let us know if you have further queries.Best Regards\nAasawari", "username": "Aasawari" } ]
Choosing the Correct Cluster Tier
2023-03-16T01:55:46.916Z
Choosing the Correct Cluster Tier
1,203
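To make the connection-limit point above concrete: the 3K figure counts concurrent sockets, and each application instance holds a pool of them, so the knob to watch is the driver's pool size times the number of instances. A hedged Node.js sketch with a placeholder URI:

    // Placeholder URI and numbers; keep (instances x maxPoolSize) comfortably
    // under the tier's connection limit.
    const { MongoClient } = require("mongodb");

    const client = new MongoClient(
      "mongodb+srv://user:[email protected]/?retryWrites=true&w=majority",
      {
        maxPoolSize: 50,                      // connections this process may open
        readPreference: "secondaryPreferred", // offload reads if stale reads are acceptable
      }
    );

    async function main() {
      await client.connect();
      const count = await client.db("app").collection("posts").estimatedDocumentCount();
      console.log("posts:", count);
      await client.close();
    }

    main().catch(console.error);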
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "createdAtconst mongoose = require('mongoose')\nconst bcrypt = require('bcryptjs')\n\nconst PostSchema = new mongoose.Schema(\n {\n title: {\n type: String,\n required: true,\n default: 'No title',\n },\n slug: {\n type: String,\n required: true,\n default: 'no-title',\n },\n text: {\n type: String,\n required: true,\n default: 'No text',\n },\n status: {\n type: String,\n required: [true, 'Please add a status option'],\n enum: ['draft', 'published', 'trash'],\n default: 'published',\n },\n postType: {\n type: String,\n required: true,\n enum: ['post', 'story'],\n default: 'post',\n },\n },\n {\n toJSON: { virtuals: true },\n toObject: { virtuals: true },\n timestamps: true,\n id: false,\n }\n)\nexpire", "text": "I’m trying to buld a model in which users will publish their own objects, however I want these to be deleted after 24rs have passed via a cron-job. I currently use mongoose to make my queries to my MongoDB.I’m assuming the query will be taking consideration of the createdAt field. Is there a way to make this possible?I found out that the best approach would be by adding an addedOn/createdAt field with expire on it which I believe would work great but what if the objects that I need to delete are those under the ‘story’ postType field only?", "username": "kirasiris" }, { "code": "", "text": "I believe another post here says mongodb ttl index only supports root level document. (it can only delete all but not a part of the doc).in the latter case you will have to implement your own logic by running a cron , for instance", "username": "Kobe_W" } ]
How to delete object after 24hrs based on the createdAt field?
2023-03-19T11:29:26.101Z
How to delete object after 24hrs based on the createdAt field?
836
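For the schema above, the createdAt field added by timestamps: true can drive a TTL index, and a partial filter keeps it from touching anything but stories, which answers the "only the 'story' postType" part without a cron job. A hedged mongoose sketch (it assumes a server version that allows combining TTL with partialFilterExpression):

    // Expire only 'story' documents 24 hours after creation; 'post' documents
    // never match the partial filter, so the TTL monitor ignores them.
    PostSchema.index(
      { createdAt: 1 },
      {
        expireAfterSeconds: 24 * 60 * 60,
        partialFilterExpression: { postType: 'story' },
      }
    );

If that combination is not available, the fallback is the cron approach already mentioned: a periodic deleteMany({ postType: 'story', createdAt: { $lt: new Date(Date.now() - 864e5) } }).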
null
[]
[ { "code": "exports.printMsg = function() {\n console.log(\"This is a message from the demo package\");\n}\n exports.myDateTime = function () {\n\tconsole.log(\"hello from myDateTime\");\n return Date();\n};\n{\n \"name\": \"first_mongo_module\",\n \"version\": \"1.0.1\",\n \"description\": \"my first try\",\n \"main\": \"myFirstModule.js\",\n \"scripts\": {\n \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\"\n },\n \"author\": \"[email protected]\",\n \"license\": \"ISC\",\n \"keywords\": [], \n \"dependencies\": { \n \"first_mongo_module\": \"^1.0.1\" \n }\n}\nexports = function(changeEvent) {\n const msg = require('first_mongo_module');\n msg.printMsg();\n}\n", "text": "I am trying to have a simple js trigger use and external npm moduleThe module has this bit of js in itUsing a basic package.jsonI ran the npm tools to create and then publish the package for public accessUsed the GUI to import the dependency successfully and it show the newest version.Inside the trigger\nI have this simple jsWhen the trigger fires I get an Error: Cannot find module ‘first_mongo_module’Something simple must be missing.BTW running this on a free tier on Azure, and the same simple module works using node.Thanks in advance for any help or pointers.", "username": "Harm_Sluiman" }, { "code": "{\"version\": \"1.0.0\",\"name\":\"my_package\",\"main\": \"myCustomCode.js\"}\nexports = function() {\n const { myFunc } = require(\"my_package/myCustomCode.js\");\n return myFunc();\n};\n", "text": "It works for me with this simple package.json:And then the structure of the folder that I gzip:And here’s my trigger function:", "username": "Florian_Bienefelt" }, { "code": "", "text": "Thank you so much Florian.I tried many variations with no success, and stopped trying to use the NPM public library and using the basic as you provided.When I upload the gzip the page refreshes without errors or the dependancy listed, but have this messageSome Node standard libraries are not supported yet, packages relying on these libraries may not work as expected. Learn more about supported libraries.It seems this is even more basic. I even use your exact package.json", "username": "Harm_Sluiman" }, { "code": "", "text": "Went back to square 1 one more time and finally got it to work and seems repeatable but only as the directory import.\nNot clear what the error was, and will post here if it ever emerges.Will get the open. NPM library working later.Thanks again", "username": "Harm_Sluiman" } ]
Trigger Function External Dependency not found
2023-03-05T02:07:24.264Z
Trigger Function External Dependency not found
836
null
[ "atlas-cluster" ]
[ { "code": "", "text": "I am trying to create the Cluster0 in a new free tier MongoDB project. It’s been over 30 minutes and still showing “your cluster is being created”. Also, I terminated a cluster to delete one of my old projects, and there also its showing “we are deploying your changes” for a long time.", "username": "Sahan_Mondal" }, { "code": "", "text": "This issue sounds very much unexpected and I’m sorry to see nobody has responded here sooner. Did it eventually work?", "username": "Andrew_Davidson" }, { "code": "", "text": "im currently having this problem as well. Tried creating a cluster and it says its on step 3 of 3 and then im also trying to delete and its just stuck in the process of deleting. been about 40 minutes now.", "username": "Jenny_Nguyen" }, { "code": "", "text": "Seems to be a current problem, i created a cluster about 30 min ago still stucked…", "username": "Pablo_Erhard" }, { "code": "", "text": "I’m experiencing the same problem too, it has been stuck for almost 30 min", "username": "X_Olona" } ]
MongoDB Cluster creation taking too long
2022-06-11T06:57:23.800Z
MongoDB Cluster creation taking too long
2,478
null
[ "queries", "transactions" ]
[ { "code": "", "text": "Hi,\nI have n number of services running and they are all connecting to same db … and I want to implement a lock kind of mechanism in mongo db How can I do that?I have a collection which contains some documents I want only one services should read a document at a time lock it so other process wont pick that document again. What is the better and correct way of doing this.Currently I am using findAndModify… as soon as 1 process will pick(it will get by criteria if status is waiting) a document it will change the status filed of that document from waiting to processing . will it work and if yes is it the best solution … will there be any case it will result in deadlock or race condition? What other things I should consider while using findAndModify in this scenario.", "username": "Aggregation_need_help" }, { "code": "", "text": "Given the guarantees of findAndModify, it works in MOST CASES.Consider this, what happens if after the modify is done, the response from mongo server fails to reach your app server ? (in this case, this doc is not read by your app code yet, but it will never be returned again by mongo).The best approach depends on your requirement and tolerance on edge cases. another e.g. is your app code written to process a duplicate read idempotent-ly?Probably you want to try something based on leasing. E.g. Building Distributed Locks with the DynamoDB Lock Client | AWS Database Blog", "username": "Kobe_W" } ]
How to implement lock mechanism in mongodb
2023-03-18T08:45:06.340Z
How to implement lock mechanism in mongodb
1,696
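A sketch of the leasing idea from the reply above, in mongosh/Node style with hypothetical field names: claim a document atomically, record who holds it and until when, and let expired leases be reclaimed so a crashed worker (or a lost reply, as described above) cannot strand a document forever.

    // Hypothetical fields: status, lockedBy, lockExpiresAt.
    const LEASE_MS = 5 * 60 * 1000;
    const now = new Date();
    const workerId = "worker-42"; // example worker identity

    const doc = db.jobs.findOneAndUpdate(
      {
        $or: [
          { status: "waiting" },
          { status: "processing", lockExpiresAt: { $lt: now } }, // stale lease, reclaim it
        ],
      },
      {
        $set: {
          status: "processing",
          lockedBy: workerId,
          lockExpiresAt: new Date(now.getTime() + LEASE_MS),
        },
      },
      { returnDocument: "after" }
    );

    // The same document can still be handed out twice in rare cases (e.g. a lost
    // reply followed by lease expiry), so the processing step should be idempotent.
    if (doc) {
      // ... do the work, then release:
      db.jobs.updateOne({ _id: doc._id, lockedBy: workerId }, { $set: { status: "done" } });
    }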
null
[ "connecting" ]
[ { "code": "[2020-11-12 11:13:17,790] 63027 [qtp2052573687-51] ERROR errors - CONNECTOR:Get_Capab:mongo:Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@c5db0d6. \n{type=REPLICA_SET, servers=[{address=qwertty1-shard-00-00.uxyz.mongodb.net:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketWriteException: Exception sending message}, caused by {javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target}, caused by {sun.security.validator.ValidatorException: PKIX path building failed:\nmdb-01-shard-00-00.uiboi.mongodb.net:27017,mdb-01-shard-00-01.uiboi.mongodb.net:27017,mdb-01-shard-00-02.uiboi.mongodb.net:27017/?replicaSet=atlas-myk-shard-0\n", "text": "Hi,We are able to Connect to Atlas DB from the local shell. However, when trying to connect to the Atlas from our Application, Mongo URI doesn’t seem to establish the connectionMongo Connection failed due to an invalid mongo server address or credentialsClient view of the cluster state isMongo URI :we have created two users one master(for admin db) and another user for a newly created database. we tried with both the credentials but still not working from the Application to Atlas environment, but we are able to connect from mongo shell.\nPlease help.Regards,\nRK", "username": "RK_S" }, { "code": "", "text": "Hi RK,Might your application be leveraging an older driver version? What language driver are you using?-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "I had this issue when trying to use Java 8. When I switched to Java 11, the issue fully resolved.", "username": "Will_Rowan" } ]
Unable to Connect to Atlas from Application
2020-11-17T17:52:12.284Z
Unable to Connect to Atlas from Application
3,886
null
[ "aggregation", "python" ]
[ { "code": "", "text": "Hi, as the topic header says, I’m looking for a way to write an aggregation of fields back to a corresponding (equal number and order) set of documents in the database.\nFor example in the pymongo driver, and I have three documents with a ‘field’ parameter. Is there a single operation to carry out:\ncollection.update_many(\nfilter = {“documents”: “documents_key”},\nvalues = {“$set”: {“value” : value1},\n{“value” : value2},\n{“value” : value3}}).I don’t want to do it in a loop because it feels inefficient and ‘unclean’ and I’m sure this is a fairly basic question but Google being how it be nowadays I cannot for the life of me find out how to do this. Or if it is even possible", "username": "Joseph_Hubbard" }, { "code": "{ \"$set\" : {\n \"field_one\" , value_1 ,\n \"field_two\" , value_2 ,\n \"field_three\" , value_3\n} }\n", "text": "What do you mean bythree documents with a ‘field’ parameterWhere is it used incollection.update_many(\nfilter = {“documents”: “documents_key”},\nvalues = {“$set”: {“value” : value1},\n{“value” : value2},\n{“value” : value3}}).You have too many parenthesis in your $set. The syntax isTo help you, we will need sample documents that we can cut-n-paste directly into our system. Expected resulting documents based on these sample documents are needed.", "username": "steevej" }, { "code": "", "text": "Hi Steeve,Let’s say I have a collection in my database that contains however many records.\nNow say I construct a filter that specifies 500 of those documents. And I need to set THE SAME FIELD in those 500 documents with a list of 500 values.I am not setting DIFFERENT fields in the same document, but the same field in different documents.I know I can use a loop but that doesn’t work with the filter unless I perform a bulk read first to gather the IDs, the ‘for’ loop also feels inefficient as the process jumps from executing business code to executing database code N times.I only wrote ‘update_many’ as a placeholder for what I meant. Perhaps a poor choice as it overlapped with something that already exists.", "username": "Joseph_Hubbard" }, { "code": "bulk = [ \n { \"updateOne\" : {\n \"filter\" : { /* filter for first document */ } ,\n \"update\" : { \"field_name\" : value_for_first_document }\n } } ,\n { \"updateOne\" : {\n \"filter\" : { /* filter for second document */ },\n \"update\" : { \"field_name\" : value_for_second_document }\n } } ,\n /* filter and value for other documents */\n { \"updateOne\" : {\n \"filter\" : { /* filter for last document */ },\n \"update\" : { \"field_name\" : value_for_last_document }\n } } \n]\n", "text": "For n documents, with n different values you need https://www.mongodb.com/docs/manual/reference/method/db.collection.bulkWrite/.To build the bulkWrite array you will need a loop to build something like:", "username": "steevej" } ]
Write a list of fields to an associated set of documents in the database
2023-03-18T01:16:25.769Z
Write a list of fields to an associated set of documents in the database
957
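One detail worth adding to the bulkWrite outline above: each updateOne entry needs an update operator such as $set, and the same list maps one-to-one onto PyMongo's bulk_write with pymongo.UpdateOne(filter, {"$set": {...}}) objects. A hedged mongosh-style sketch with hypothetical filter fields:

    // One entry per (document, value) pair, sent to the server as a single batch.
    const updates = [
      { filter: { documents: "documents_key", seq: 1 }, value: 10 },
      { filter: { documents: "documents_key", seq: 2 }, value: 20 },
      { filter: { documents: "documents_key", seq: 3 }, value: 30 },
    ];

    const ops = updates.map(u => ({
      updateOne: { filter: u.filter, update: { $set: { value: u.value } } },
    }));

    db.collection.bulkWrite(ops, { ordered: false }); // "collection" is a placeholder name

The loop still exists, but it only builds the operation list in memory; the server receives one round trip instead of one per document.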
null
[ "java", "transactions" ]
[ { "code": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<module xmlns=\"urn:jboss:module:1.9\" name=\"org.mongodb\">\n <resources>\n <resource-root path=\"mongodb-driver-core.4.9.0.jar\"/>\n <resource-root path=\"mongodb-driver-sync-4.9.0.jar\"/>\n </resources>\n <dependencies>\n <module name=\"javax.api\"/>\n <module name=\"javax.transaction.api\"/>\n <module name=\"javax.servlet.api\" optional=\"true\"/>\n </dependencies>\n</module>\n <subsystem xmlns=\"urn:jboss:domain:naming:2.0\">\n <bindings>\n <object-factory name=\"java:jboss/jndi/MongoDBClient\" module=\"org.mongodb\" class=\"com.mongodb.client.MongoClientFactory\">\n <environment>\n <property name=\"connectionString\" value=\"mongodb://localhost:27017\"/>\n </environment>\n </object-factory>\n </bindings>\n <remote-naming/>\n </subsystem>\nERROR [org.jboss.as.controller.management-operation] (ServerService Thread Pool -- 73) WFLYCTL0013: Operation (\"add\") failed - address: ([\n (\"subsystem\" => \"naming\"),\n (\"binding\" => \"java:jboss/jndi/MongoDBClient\")\n]) - failure description: \"WFLYNAM0052: Could not load module org.mongodb\"\n", "text": "I have been trying to create a connection in Wildfly 27.0.1 following the reference for the mongo java driver 4.9.Unfortunately I cannot go past an error sying that the module cannot be loaded.I have checked pretty much all I could and I am pretty sure I got it to work with a previous release of Wildfly.Here is the module.xml I put in $WILDFLY_HOME/modules/system/layer/base/org/mongodb/main/together with .jar files.I have tried with core, sync or both as in the last case.Here is the entry in standalone-full.xmland the error I getMany thanks", "username": "Gianluca_Elmo" }, { "code": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<module xmlns=\"urn:jboss:module:1.9\" name=\"org.mongodb\">\n <resources>\n <resource-root path=\"mongodb-driver-core.4.9.0.jar\"/>\n <resource-root path=\"mongodb-driver-sync-4.9.0.jar\"/>\n </resources>\n <dependencies>\n <module name=\"javax.api\"/>\n <module name=\"javax.transaction.api\"/>\n <module name=\"javax.servlet.api\" optional=\"true\"/>\n </dependencies>\n</module>\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<module xmlns=\"urn:jboss:module:1.9\" name=\"org.mongodb\">\n <resources>\n <resource-root path=\"bson.4.9.0.jar\"/>\n <resource-root path=\"mongodb-driver-core.4.9.0.jar\"/>\n <resource-root path=\"mongodb-driver-sync-4.9.0.jar\"/>\n </resources>\n <dependencies>\n <module name=\"javax.api\"/>\n <module name=\"javax.transaction.api\"/>\n <module name=\"javax.servlet.api\" optional=\"true\"/>\n </dependencies>\n</module>\n", "text": "I had to change the module.xml toin order to include also the bson dependency to get it to work.", "username": "Gianluca_Elmo" } ]
Integration with Wildfly 27.0.1, jndi
2023-03-12T21:04:24.074Z
Integration with Wildfly 27.0.1, jndi
923
null
[ "node-js" ]
[ { "code": "", "text": "Hi everyone,I am planning to take the MongoDB in Node.js Certification Exam soon and I am looking for some resources to help me prepare for it. Specifically, I am in need of practice questions that will help me assess my knowledge and identify areas where I need to improve.If anyone knows of any good resources for MongoDB in Node.js Certification Exam practice questions, please share them in this forum. I would greatly appreciate any suggestions, tips, or advice on how to best prepare for this certification exam.Thank you in advance!", "username": "Mohammad_Aazen" }, { "code": "", "text": "Hello @Mohammad_Aazen, Welcome to the MongoDB Community Forum,I have recently passed the node.js developer certification exam and it was an awesome experience, my post on linkedin.You just need to visit the MongoDB newly launched website for the learning and certifications.Direct link to the MongoDB Associate Developer Exam, you will find the course, exam guide, and practice questions as well.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.All the best in advance ", "username": "turivishal" }, { "code": "", "text": "Thankyou for your reply but i am looking for a platform where i could test my skills based on the targetted questions", "username": "Mohammad_Aazen" }, { "code": "", "text": "There are practice questions, If you have checked the direct link to the certification that I have shared,I am sharing a direct link to the practice questions of node.js,Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.I would suggest you refer to the course, they have provided the proper video tutorial and Q&A, just need to clear the concepts,Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.", "username": "turivishal" }, { "code": "", "text": "This one i have completed ", "username": "Mohammad_Aazen" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Looking for MongoDB in Node.js Certification Exam Practice Questions
2023-03-17T10:51:30.346Z
Looking for MongoDB in Node.js Certification Exam Practice Questions
1,187
null
[]
[ { "code": "Database Error: MongoServerSelectionError: C ::1:27017\n at Timeout._onTimeout (C:\\Users\\Dennis\\Documents\\MongoTest\\node_modules\\mongodb\\lib\\core\\sdam\\topology.js:438:30)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7) {\n reason: TopologyDescription {\n type: 'Single',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map(1) { 'localhost:27017' => [ServerDescription] },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: null\n }\n}\nMongoServerSelectionError: connect ECONNREFUSED ::1:27017\n at Timeout._onTimeout (C:\\Users\\Dennis\\Documents\\MongoTest\\node_modules\\mongodb\\lib\\core\\sdam\\topology.js:438:30)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7) {\n reason: TopologyDescription {\n type: 'Single',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map(1) { 'localhost:27017' => [ServerDescription] },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: null\n }\n}\n", "text": "Okay, where do I start. About five years ago I attended a Coding Bootcamp. My memory is fuzzy but I think I got some MongoJS applications to work. I’m coming back to it now, trying to practice, but when I try to start an old program I get this error:They say something like I have to update my ip address to 127.0.0.1 but how on Earth do I do that? I’ve tried the bind_ip command in mongod but that doesn’t work.They say I’m supposed to modify this in mongod.cfg but I don’t have one anywhere in my bin.I’m not Mongo expert but I’ve been Googling like crazy trying to figure this out. I need specifics. I just want to get a simple MongoJS program running.", "username": "Dennis_Markham" }, { "code": "", "text": "Replace localhost with 127.0.0.1 in your code", "username": "Ramachandra_Tummala" }, { "code": "// Dependencies\nvar express = require(\"express\");\nvar mongojs = require(\"mongojs\");\n\n// Initialize Express\nvar app = express();\n\n// Database configuration\n// Save the URL of our database as well as the name of our collection\nvar databaseUrl = \"zoo\";\nvar collections = [\"animals\"];\n\n// Use mongojs to hook the database to the db variable\nvar db = mongojs(databaseUrl, collections);\n\n// This makes sure that any errors are logged if mongodb runs into an issue\ndb.on(\"error\", function(error) {\n console.log(\"Database Error:\", error);\n});\n\n// Routes\n// 1. At the root path, send a simple hello world message to the browser\napp.get(\"/\", function(req, res) {\n res.send(\"Hello world\");\n});\n\n// 2. At the \"/all\" path, display every entry in the animals collection\napp.get(\"/all\", function(req, res) {\n // Query: In our database, go to the animals collection, then \"find\" everything\n db.animals.find({}, function(err, found) {\n // Log any errors if the server encounters one\n if (err) {\n console.log(err);\n }\n // Otherwise, send the result of this query to the browser\n else {\n res.json(found);\n }\n });\n});\n\n// 3. 
At the \"/name\" path, display every entry in the animals collection, sorted by name\napp.get(\"/name\", function(req, res) {\n // Query: In our database, go to the animals collection, then \"find\" everything,\n // but this time, sort it by name (1 means ascending order)\n db.animals.find().sort({ name: 1 }, function(err, found) {\n // Log any errors if the server encounters one\n if (err) {\n console.log(err);\n }\n // Otherwise, send the result of this query to the browser\n else {\n res.json(found);\n }\n });\n});\n\n// 4. At the \"/weight\" path, display every entry in the animals collection, sorted by weight\napp.get(\"/weight\", function(req, res) {\n // Query: In our database, go to the animals collection, then \"find\" everything,\n // but this time, sort it by weight (-1 means descending order)\n db.animals.find().sort({ weight: -1 }, function(err, found) {\n // Log any errors if the server encounters one\n if (err) {\n console.log(err);\n }\n // Otherwise, send the result of this query to the browser\n else {\n res.json(found);\n }\n });\n});\n\n// Set the app to listen on port 3000\napp.listen(3000, function() {\n console.log(\"App running on port 3000!\");\n});\n", "text": "Where? This is the sum total of my code:", "username": "Dennis_Markham" }, { "code": "", "text": "How do you activate your runtime environment?\nWhere is uri/mongodb connect string defined.You must be having an env file or node.js,app js etc\nCheck those files", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I have a server.js file, a node_modules folder, a package.json file, and a package-lock.json file. I activate my runtime environment by typing mongod and pressing enter, then opening another window and typing \"node server.js.", "username": "Dennis_Markham" } ]
Connection refused error
2023-03-17T15:32:38.698Z
Connection refused error
1,180
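Tying the fix above to the code in this thread: the ECONNREFUSED ::1:27017 in the error means Node resolved localhost to the IPv6 address ::1, which mongod is not listening on here, so spelling out 127.0.0.1 in the connection string forces IPv4. A hedged sketch of just the lines that change in server.js:

    // Pass a full connection string so mongojs doesn't fall back to localhost.
    var databaseUrl = "mongodb://127.0.0.1:27017/zoo";
    var collections = ["animals"];

    var db = mongojs(databaseUrl, collections);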
null
[ "mongodb-shell", "containers" ]
[ { "code": " mongoDB:\n image: mongo\n command: mongod --port 10511\n environment:\n MONGO_INITDB_ROOT_USERNAME: kek\n MONGO_INITDB_ROOT_PASSWORD: 1234\n MONGO_INITDB_DATABASE: dbName\n ports:\n - \"10511:10511\"\n\n\n fastAPI:\n build: /backend\n ports:\n - \"40050:80\"\n environment:\n - DB_NAME=dbName\n - MONGO_URL=mongodb://kek:1234@mongoDB:10511/dbName?authSource=admin\n depends_on:\n - mongoDB\nsudo docker exec -it <mongodb container> shmongodb://kek:1234@mongoDB:10511/dbName?authSource=adminmongosh --port 10511 -u kek -p 1234 --authenticationDatabase dbNamemongosh --port 10511use dbName\ndb.auth('kek','1234')\n", "text": "This is mongo docker-compose file;I created two container in AWS EC2. One for MongoDB and the other one is for Fast API.I can connect MongoDB container from Fast API container because when sent POST method it accepts end returns 201, but I did not create GET method to see if it is created or not.The problem is when connect to MongoDB container from the AWS EC2 with sudo docker exec -it <mongodb container> sh and try to connect mongo with my username and password that my Fast API container’s uses it, it throws “authorization failed” error.Here is the Fast API connection string to MongoDB; mongodb://kek:1234@mongoDB:10511/dbName?authSource=adminHere is the mongo connection string;mongosh --port 10511 -u kek -p 1234 --authenticationDatabase dbNameI can connect mongo with this; mongosh --port 10511 (in the MongoDB container), but I can’t do anything because of authorization issue.I tried;and same, still authentication error.", "username": "kek" }, { "code": "", "text": "I solved it.\nI was creating my env. variable with \" on AWS side.\nSo use ’", "username": "kek" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can not connect MongoDB container from in the same container on AWS EC2
2023-03-17T19:35:53.161Z
Can not connect MongoDB container from in the same container on AWS EC2
1,224
null
[ "production", "golang" ]
[ { "code": "RawCommandErrorWriteException", "text": "The MongoDB Go Driver Team is pleased to release version 1.11.3 of the MongoDB Go Driver.This release reduces memory usage under some query workloads and fixes a bug that can cause undefined behavior when reading the Raw field on database error types, including CommandError and WriteException. For more information please see the 1.11.3 release notes.You can obtain the driver source from GitHub under the v1.11.3 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,\nThe Go Driver Team", "username": "Matt_Dale" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.11.3 Released
2023-03-17T22:43:59.155Z
MongoDB Go Driver 1.11.3 Released
906
null
[ "queries", "transactions" ]
[ { "code": "", "text": "I have a collection that holds “status” for “transaction_id” and I want to find out which transaction_ids are missing certain statuses. Statuses can be - “sent”, “delivered”, “undelivered”, “failed”, “queued”. Most transaction_ids have multiple statuses so there are multiple documents with the same transaction_id but the status value is different. I want to find out transaction_ids that have these status values missing - “delivered” & “undelivered” & “failed”. How do I do this using a query? “transaction_id” is indexed and I can narrow down the documents using another indexed field “created_by” date.", "username": "Ram_Mulay" }, { "code": "", "text": "Hello @Ram_Mulay ,Welcome to The MongoDB Community Forums! To understand your use case better, please provide more details, such as:Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "MongoDB version - 4.4.16Requirements - I have given sample documents below. As you can see, they have different statuses. The collection has many documents with many transaction ids. I want a query that gives me transaction ids with missing statuses. - “delivered” & “undelivered” & “failed”. So I need to get transaction_id=2 as a result of this query since none of its statuses are “delivered” or “undelivered” or “failed”.Sample documents in the collection -\n{“_id”: “1”, “status”: “sent”, “transaction_id”: “1”}\n{“_id”: “2”, “status”: “queued”, “transaction_id”: “1”}\n{“_id”: “3”, “status”: “undelivered”, “transaction_id”: “1”}\n{“_id”: “4”, “status”: “sent”, “transaction_id”: “2”}\n{“_id”: “5”, “status”: “queued”, “transaction_id”: “2”}what I have tried - $ne or $nin do not work because there are other documents with the same transaction_id which have statuses that are not “delivered”, “undelivered”, “failed”. Hence the query returns both transaction_ids - “1” and “2”.Maybe what I need is a map-reduce function. First get all statues tied to a transaction_id and then have a reduce function to check if the list of statuses are missing delivered\", “undelivered”, “failed”. If they are, then choose that document. However, I could not find examples of map-reduce to do this type of query. Also, the mongoDB documentation says map-reduce is deprecated and wants us to use aggregations. I cannot figure out how to do this using an aggregation.", "username": "Ram_Mulay" }, { "code": "", "text": "Sample documents for all you use-cases including:there are other documents with the same transaction_id which have statuses that are not “delivered”, “undelivered”, “failed”. Hence the query returns both transaction_ids - “1” and “2”.Read Formatting code and log snippets in posts before supplying all documents we need to experiment.Share exactly the code youhave tried - $ne or $ninMy approach would be to group on transaction_id using $addToSet for the status. Then a $match a $nin on the $addToSet array.", "username": "steevej" }, { "code": "db.getCollection('statuses').aggregate ([\n{\n $group:\n {\n _id: { tran_id: \"$transaction_id\" },\n statuses: { $addToSet: \"$status\" }\n }\n},\n{\n $match: { statuses: { $nin: [\"failed\", \"delivered\", \"undelivered\"] } }\n}\n])\n", "text": "That worked, thanks @steevej ! Here is the query for anyone else who might run into this.", "username": "Ram_Mulay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to group documents by field1 and then query to see field2 is missing a value?
2023-03-15T21:04:35.223Z
How to group documents by field1 and then query to see field2 is missing a value?
1,325
null
[]
[ { "code": "export type List = {\n _id: Realm.BSON.ObjectId;\n allowedUserIds: Array<string>;\n allowedUserNotifications: Array<AllowedUserNotification>;\n date?: Date;\n isActive?: boolean;\n listItems: Realm.List<ListItem>;\n name: string;\n userId: string;\n};\n const list: RealmInsertionModel<List> = {\n _id: new ObjectId(),\n allowedUserIds: [],\n name,\n date: adjustedDate,\n isActive: true,\n listItems: [],\n userId: currentUser?.id ?? \"1\",\n allowedUserNotifications: []\n };\n", "text": "I have a Realm Model as follows:In a previous version of Realm, I used the following to instantiate an instance of this model:However, with the latest version of Realm, RealmInsertionModel not longer appears to be available.My question is how do I now instantiate an instance of a model that includes Realm.List?Thanks,Tony", "username": "Tony_Schilling" }, { "code": " const item = listRealm?.create<List>(\"List\",\n {\n _id: new ObjectId(),\n allowedUserIds: [],\n name,\n date: adjustedDate,\n isActive: true,\n listItems: [],\n userId: currentUser?.id ?? \"1\",\n allowedUserNotifications: [] \n },\n Realm.UpdateMode.All\n );\n", "text": "I resolved. It turns out I can just add the object as follows:", "username": "Tony_Schilling" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
RealmInsertionModel
2023-03-12T01:43:38.234Z
RealmInsertionModel
605
null
[ "replication", "devops" ]
[ { "code": "- name: MONGODB_ADVERTISED_HOSTNAME\n value: >-\n $(MONGODB_POD_NAME).mongodb-headless.mongodb-\napiVersion: v1\nkind: Service\nmetadata:\n name: mongodb-headless\n namespace: mongodb-replicaset\n selfLink: /api/v1/namespaces/mongodb-replicaset/services/mongodb-headless\n uid: ea2c62e8-2916-11e9-a6b7-0050568f5646\n resourceVersion: '578197287'\n creationTimestamp: '2019-02-05T07:23:20Z'\n labels:\n app: mongodb\n chart: mongodb-7.4.4\n heritage: Tiller\n io.cattle.field/appId: mongodb\n release: mongodb\nstatus:\n loadBalancer: {}\nspec:\n ports:\n - name: mongodb\n protocol: TCP\n port: 27017\n targetPort: 27017\n selector:\n app: mongodb\n release: mongodb\n clusterIP: None\n type: ClusterIP\n sessionAffinity: None\n\n\n\nmongo\napiVersion: v1\nkind: Service\nmetadata:\n name: mongodb\n namespace: mongodb-replicaset\n selfLink: /api/v1/namespaces/mongodb-replicaset/services/mongodb\n uid: ea2ff8be-2916-11e9-a6b7-0050568f5646\n resourceVersion: '574164609'\n creationTimestamp: '2019-02-05T07:23:20Z'\n labels:\n app: mongodb\n chart: mongodb-7.4.4\n heritage: Tiller\n io.cattle.field/appId: mongodb\n release: mongodb\n annotations:\n field.cattle.io/publicEndpoints: >-\n [{\"addresses\":[\"10.8.7.22\"],\"port\":27017,\"protocol\":\"TCP\",\"serviceName\":\"mongodb-replicaset:mongodb\",\"allNodes\":true}]\nstatus:\n loadBalancer: {}\nspec:\n ports:\n - name: mongodb\n protocol: TCP\n port: 27017\n targetPort: mongodb\n nodePort: 27017\n selector:\n app: mongodb\n component: primary\n release: mongodb\n clusterIP: 10.43.154.112\n type: NodePort\n sessionAffinity: None\n externalTrafficPolicy: Cluster\n\n\n\nPods:\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: mongodb-arbiter\n namespace: mongodb-replicaset\n selfLink: /apis/apps/v1/namespaces/mongodb-replicaset/statefulsets/mongodb-arbiter\n uid: ea31cd4e-2916-11e9-a6b7-0050568f5646\n resourceVersion: '578375651'\n generation: 146\n creationTimestamp: '2019-02-05T07:23:20Z'\n labels:\n app: mongodb\n chart: mongodb-7.4.4\n heritage: Tiller\n io.cattle.field/appId: mongodb\n release: mongodb\nstatus:\n observedGeneration: 146\n replicas: 1\n readyReplicas: 1\n currentReplicas: 1\n updatedReplicas: 1\n currentRevision: mongodb-arbiter-b5bd84ffc\n updateRevision: mongodb-arbiter-b5bd84ffc\n collisionCount: 0\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: mongodb\n component: arbiter\n release: mongodb\n template:\n metadata:\n creationTimestamp: null\n labels:\n app: mongodb\n chart: mongodb-7.4.4\n component: arbiter\n release: mongodb\n spec:\n containers:\n - name: mongodb-arbiter\n image: golem.ilntsur.loc:18080/bitnami/mongodb:4.0.13\n ports:\n - name: mongodb\n containerPort: 27017\n protocol: TCP\n env:\n - name: BITNAMI_DEBUG\n value: 'true'\n - name: MONGODB_ADVERTISED_HOSTNAME\n value: >-\n $(MONGODB_POD_NAME).mongodb-headless.mongodb-replicaset.svc.cluster.local\n - name: MONGODB_DISABLE_SYSTEM_LOG\n value: 'no'\n - name: MONGODB_ENABLE_DIRECTORY_PER_DB\n value: 'no'\n - name: MONGODB_ENABLE_IPV6\n value: 'no'\n - name: MONGODB_PRIMARY_HOST\n value: mongodb\n - name: MONGODB_REPLICA_SET_MODE\n value: arbiter\n - name: MONGODB_REPLICA_SET_NAME\n value: mongodb-replicaset\n - name: MONGODB_SYSTEM_LOG_VERBOSITY\n value: '1'\n - name: MONGODB_POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n - name: MONGODB_PRIMARY_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mongodb\n key: mongodb-root-password\n - name: MONGODB_REPLICA_SET_KEY\n valueFrom:\n secretKeyRef:\n name: 
mongodb\n key: mongodb-replica-set-key\n resources: {}\n livenessProbe:\n tcpSocket:\n port: mongodb\n initialDelaySeconds: 30\n timeoutSeconds: 5\n periodSeconds: 10\n successThreshold: 1\n failureThreshold: 6\n readinessProbe:\n tcpSocket:\n port: mongodb\n initialDelaySeconds: 5\n timeoutSeconds: 5\n periodSeconds: 10\n successThreshold: 1\n failureThreshold: 6\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n imagePullPolicy: IfNotPresent\n securityContext:\n runAsUser: 1001\n runAsNonRoot: true\n procMount: Default\n restartPolicy: Always\n terminationGracePeriodSeconds: 30\n dnsPolicy: ClusterFirst\n securityContext:\n fsGroup: 1001\n affinity: {}\n schedulerName: default-scheduler\n serviceName: mongodb-headless\n podManagementPolicy: OrderedReady\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n partition: 0\n revisionHistoryLimit: 10\n\n\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: mongodb-primary\n namespace: mongodb-replicaset\n selfLink: /apis/apps/v1/namespaces/mongodb-replicaset/statefulsets/mongodb-primary\n uid: ea3392bf-2916-11e9-a6b7-0050568f5646\n resourceVersion: '578202691'\n generation: 143\n creationTimestamp: '2019-02-05T07:23:20Z'\n labels:\n app: mongodb\n chart: mongodb-7.4.4\n heritage: Tiller\n io.cattle.field/appId: mongodb\n release: mongodb\n annotations:\n field.cattle.io/publicEndpoints: >-\n [{\"addresses\":[\"10.8.7.22\"],\"port\":27017,\"protocol\":\"TCP\",\"serviceName\":\"mongodb-replicaset:mongodb\",\"allNodes\":true}]\nstatus:\n observedGeneration: 143\n replicas: 1\n readyReplicas: 1\n currentReplicas: 1\n updatedReplicas: 1\n currentRevision: mongodb-primary-bcd5b684b\n updateRevision: mongodb-primary-bcd5b684b\n collisionCount: 0\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: mongodb\n component: primary\n release: mongodb\n template:\n metadata:\n creationTimestamp: null\n labels:\n app: mongodb\n chart: mongodb-7.4.4\n component: primary\n release: mongodb\n annotations:\n cattle.io/timestamp: '2021-07-19T14:05:32Z'\n field.cattle.io/ports: >-\n [[{\"containerPort\":27017,\"dnsName\":\"mongodb-primary\",\"kind\":\"ClusterIP\",\"name\":\"mongodb\",\"protocol\":\"TCP\"}]]\n field.cattle.io/publicEndpoints: >-\n [{\"addresses\":[\"10.8.7.22\"],\"allNodes\":true,\"port\":27017,\"protocol\":\"TCP\",\"serviceId\":\"mongodb-replicaset:mongodb\"}]\n spec:\n containers:\n - name: mongodb-primary\n image: golem.ilntsur.loc:18080/bitnami/mongodb:4.0.13\n ports:\n - name: mongodb\n containerPort: 27017\n protocol: TCP\n env:\n - name: BITNAMI_DEBUG\n value: 'true'\n - name: MONGODB_ADVERTISED_HOSTNAME\n value: >-\n $(MONGODB_POD_NAME).mongodb-headless.mongodb-replicaset.svc.cluster.local\n - name: MONGODB_DISABLE_SYSTEM_LOG\n value: 'no'\n - name: MONGODB_ENABLE_DIRECTORY_PER_DB\n value: 'no'\n - name: MONGODB_ENABLE_IPV6\n value: 'no'\n - name: MONGODB_REPLICA_SET_MODE\n value: primary\n - name: MONGODB_REPLICA_SET_NAME\n value: mongodb-replicaset\n - name: MONGODB_SYSTEM_LOG_VERBOSITY\n value: '0'\n - name: MONGODB_POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n - name: MONGODB_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mongodb\n key: mongodb-root-password\n - name: MONGODB_REPLICA_SET_KEY\n valueFrom:\n secretKeyRef:\n name: mongodb\n key: mongodb-replica-set-key\n resources: {}\n volumeMounts:\n - name: datadir\n mountPath: /bitnami/mongodb\n livenessProbe:\n exec:\n command:\n - pgrep\n - mongod\n initialDelaySeconds: 30\n timeoutSeconds: 5\n 
periodSeconds: 10\n successThreshold: 1\n failureThreshold: 6\n readinessProbe:\n exec:\n command:\n - mongo\n - '--eval'\n - db.adminCommand('ping')\n initialDelaySeconds: 5\n timeoutSeconds: 5\n periodSeconds: 10\n successThreshold: 1\n failureThreshold: 6\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n imagePullPolicy: IfNotPresent\n securityContext:\n runAsUser: 1001\n runAsNonRoot: true\n procMount: Default\n restartPolicy: Always\n terminationGracePeriodSeconds: 30\n dnsPolicy: ClusterFirst\n securityContext:\n fsGroup: 1001\n affinity: {}\n schedulerName: default-scheduler\n volumeClaimTemplates:\n - kind: PersistentVolumeClaim\n apiVersion: v1\n metadata:\n name: datadir\n creationTimestamp: null\n spec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 8Gi\n storageClassName: nfs-client\n volumeMode: Filesystem\n status:\n phase: Pending\n serviceName: mongodb-headless\n podManagementPolicy: OrderedReady\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n partition: 0\n revisionHistoryLimit: 10\n\n\n\n\napiVersion: apps/v1\nkind: StatefulSet\nmetadata:\n name: mongodb-secondary\n namespace: mongodb-replicaset\n selfLink: /apis/apps/v1/namespaces/mongodb-replicaset/statefulsets/mongodb-secondary\n uid: ea36259c-2916-11e9-a6b7-0050568f5646\n resourceVersion: '578203765'\n generation: 172\n creationTimestamp: '2019-02-05T07:23:20Z'\n labels:\n app: mongodb\n chart: mongodb-7.4.4\n heritage: Tiller\n io.cattle.field/appId: mongodb\n release: mongodb\nstatus:\n observedGeneration: 172\n replicas: 1\n readyReplicas: 1\n currentReplicas: 1\n updatedReplicas: 1\n currentRevision: mongodb-secondary-6f757c5bc\n updateRevision: mongodb-secondary-6f757c5bc\n collisionCount: 0\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: mongodb\n component: secondary\n release: mongodb\n template:\n metadata:\n creationTimestamp: null\n labels:\n app: mongodb\n chart: mongodb-7.4.4\n component: secondary\n release: mongodb\n annotations:\n cattle.io/timestamp: '2021-07-19T14:05:28Z'\n field.cattle.io/ports: >-\n [[{\"containerPort\":27017,\"dnsName\":\"mongodb-secondary\",\"kind\":\"ClusterIP\",\"name\":\"mongodb\",\"protocol\":\"TCP\"}]]\n spec:\n containers:\n - name: mongodb-secondary\n image: golem.ilntsur.loc:18080/bitnami/mongodb:4.0.13\n ports:\n - name: mongodb\n containerPort: 27017\n protocol: TCP\n env:\n - name: BITNAMI_DEBUG\n value: 'true'\n - name: MONGODB_ADVERTISED_HOSTNAME\n value: >-\n $(MONGODB_POD_NAME).mongodb-headless.mongodb-replicaset.svc.cluster.local\n - name: MONGODB_DISABLE_SYSTEM_LOG\n value: 'no'\n - name: MONGODB_ENABLE_DIRECTORY_PER_DB\n value: 'no'\n - name: MONGODB_ENABLE_IPV6\n value: 'no'\n - name: MONGODB_PRIMARY_HOST\n value: mongodb\n - name: MONGODB_REPLICA_SET_MODE\n value: secondary\n - name: MONGODB_REPLICA_SET_NAME\n value: mongodb-replicaset\n - name: MONGODB_SYSTEM_LOG_VERBOSITY\n value: '0'\n - name: MONGODB_POD_NAME\n valueFrom:\n fieldRef:\n apiVersion: v1\n fieldPath: metadata.name\n - name: MONGODB_PRIMARY_ROOT_PASSWORD\n valueFrom:\n secretKeyRef:\n name: mongodb\n key: mongodb-root-password\n - name: MONGODB_REPLICA_SET_KEY\n valueFrom:\n secretKeyRef:\n name: mongodb\n key: mongodb-replica-set-key\n resources: {}\n volumeMounts:\n - name: datadir\n mountPath: /bitnami/mongodb\n livenessProbe:\n exec:\n command:\n - pgrep\n - mongod\n initialDelaySeconds: 30\n timeoutSeconds: 5\n periodSeconds: 10\n successThreshold: 1\n failureThreshold: 6\n readinessProbe:\n exec:\n command:\n - 
mongo\n - '--eval'\n - db.adminCommand('ping')\n initialDelaySeconds: 5\n timeoutSeconds: 5\n periodSeconds: 10\n successThreshold: 1\n failureThreshold: 6\n terminationMessagePath: /dev/termination-log\n terminationMessagePolicy: File\n imagePullPolicy: IfNotPresent\n securityContext:\n runAsUser: 1001\n runAsNonRoot: true\n procMount: Default\n restartPolicy: Always\n terminationGracePeriodSeconds: 30\n dnsPolicy: ClusterFirst\n securityContext:\n fsGroup: 1001\n affinity: {}\n schedulerName: default-scheduler\n volumeClaimTemplates:\n - kind: PersistentVolumeClaim\n apiVersion: v1\n metadata:\n name: datadir\n creationTimestamp: null\n spec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 8Gi\n storageClassName: nfs-client\n volumeMode: Filesystem\n status:\n phase: Pending\n serviceName: mongodb-headless\n podManagementPolicy: Parallel\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n partition: 0\n revisionHistoryLimit: 10\n", "text": "Hello,\nWe recently deployed mongodb on k8s as a replicaset. Everything was working fine until we had to restart the pods.\nThe arbiter, primary and secondary are up, but they can’t communicate with each other and it’s impossible to connect to the db.\nIn the logs for the primary we see on startup that the getaddrinfo failed:I NETWORK [initandlisten] waiting for connections on port 27017\nW NETWORK [replexec-0] getaddrinfo(“mongodb-secondary-0.mongodb-headless.mongodb-replicaset.svc.cluster.local”) failed: Name or service not known\nD NETWORK [replexec-0] connected to server mongodb-arbiter-0.mongodb-headless.mongodb-replicaset.svc.cluster.local:27017\nW NETWORK [replexec-0] getaddrinfo(“mongodb-arbiter-0”) failed: Temporary failure in name resolution\nAfterwards there are constant messages on host unreachable:\n[Replication] Failed to connect to mongodb-secondary-0.mongodb-headless.mongodb-replicaset.svc.cluster.local:27017 - HostUnreachable: Error connecting to mongodb-secondary-0.mongodb-headless.mongodb-replicaset.svc.cluster.local:27017 :: caused by :: Could not find address for mongodb-secondary-0.mongodb-headless.mongodb-replicaset.svc.cluster.local:27017: SocketException: Host not found (authoritative)\nWe’re using MONGODB_ADVERTISED_HOSTNAME as such:replicaset.svc.cluster.local\nNetwork in the k8s wasn’t modified, other clusters/namespaces work fine. We tried using a different version of bitnami/mongodb but then reverted to the previous version and configuration which had\nworked.\nNetwork endpoints:", "username": "sc1231" }, { "code": "", "text": "Found any solution? Thanks in advance.", "username": "Sporti_Fies" }, { "code": "", "text": "Kerala Lottery Result Akshaya is one of the known lottery types in the Kerala state lottery.", "username": "zakria_maqsood" } ]
replicaSet pods giving errors on host unreachable and can't connect to DB
2022-05-09T08:44:34.638Z
replicaSet pods giving errors on host unreachable and can’t connect to DB
4,945
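The failures in the thread above are plain DNS resolution errors for the headless-service hostnames, so a quick first check is to resolve each advertised member name from a pod inside the same cluster. The sketch below mirrors the getaddrinfo() calls in the log; the secondary and arbiter names are taken from the log lines, the primary's is inferred from the StatefulSet name, and all three are assumptions to adjust for your namespace and pod names.

import socket

# Hostnames follow the log lines above; adjust for your namespace/pod names.
hosts = [
    "mongodb-primary-0.mongodb-headless.mongodb-replicaset.svc.cluster.local",
    "mongodb-secondary-0.mongodb-headless.mongodb-replicaset.svc.cluster.local",
    "mongodb-arbiter-0.mongodb-headless.mongodb-replicaset.svc.cluster.local",
]

for host in hosts:
    try:
        infos = socket.getaddrinfo(host, 27017)
        print(host, "->", sorted({info[4][0] for info in infos}))
    except socket.gaierror as exc:
        # Same failure mode as "Name or service not known" in the mongod log.
        print(host, "FAILED:", exc)

If these names fail from inside the cluster, the problem sits with the headless Service or cluster DNS rather than with MongoDB itself.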
null
[ "replication", "atlas-cluster", "connector-for-bi" ]
[ { "code": "mongosqld --mongo-uri \"mongodb://<URI-REPLICA-NODE-1:PORT>,<URI-REPLICA-NODE-2:PORT>,<URI-REPLICA-NODE-3:PORT>/?ssl=true&replicaSet=<URI-REPLICA-SET>&retryWrites=true&w=majority\" --auth -u <USERNAME> -p <PASSWORD>\n\n2023-03-16T15:04:14.101-0300 I CONTROL [initandlisten] mongosqld starting: version=v2.14.5 pid=8000 host=EC2AMAZ-OO1JQ7N\n2023-03-16T15:04:14.102-0300 I CONTROL [initandlisten] git version: 1ba4542957c4abb8b58cf242ebfd67f7805ef59f\n2023-03-16T15:04:14.102-0300 I CONTROL [initandlisten] OpenSSL version OpenSSL 1.0.2n-fips 7 Dec 2017 (built with OpenSSL 1.0.2s 28 May 2019)\n2023-03-16T15:04:14.102-0300 I CONTROL [initandlisten] options: {security: {enabled: true}, mongodb: {net: {uri: \"mongodb://xxx0.mongodb.net:27017,xxx1.mongodb.net:27017,xxx2.mongodb.net:27017/?ssl=true&replicaSet=atlas-xxx-shard-0&retryWrites=true&w=majority\", auth: {username: \"xxxUser\", password: \"<protected>\"}}}}\n2023-03-16T15:04:14.112-0300 I NETWORK [initandlisten] waiting for connections at 127.0.0.1:3307\n2023-03-16T15:04:19.127-0300 E NETWORK [initandlisten] unable to load MongoDB information: failed to create admin session for loading server cluster information: unable to execute command: server selection error: context deadline exceeded, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: xxx0.mongodb.net:27017, Type: Unknown, Average RTT: 0, Last error: connection() error occured during connection handshake: OCSP verification failed: no OCSP cache provided }, { Addr: xxx1.mongodb.net:27017, Type: Unknown, Average RTT: 0, Last error: connection() error occured during connection handshake: OCSP verification failed: no OCSP cache provided }, { Addr: xxx2.mongodb.net:27017, Type: Unknown, Average RTT: 0, Last error: connection() error occured during connection handshake: OCSP verification failed: no OCSP cache provided }, ] }\n", "text": "I’m trying to connect PowerBI to my Basic Plan project.\nI know there’s a paid option to run the BI Connector on your cloud, but we want to run locally on our Windows Server.I’m having a hard time connecting mongosqld.exe to our serverBI connector is already installed and showing up in ODBC\nBut making the connection with mongosqld.exe is not working.", "username": "Mundo_Invest" }, { "code": "", "text": "Check this link.Same error discussed", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Yeah, I saw that one.\nBut I already tried ssl=true, as you can see on the command string.No success =(", "username": "Mundo_Invest" }, { "code": "", "text": "May be due to authentication database missing in your connect string?", "username": "Ramachandra_Tummala" } ]
Free PowerBI Connector - issue connecting with mongosqld.exe
2023-03-16T19:20:24.484Z
Free PowerBI Connector - issue connecting with mongosqld.exe
1,379
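Before digging further into mongosqld itself, it can help to confirm that the same Atlas hosts and credentials work from a plain driver connection; the last reply hints at a missing authentication database, which corresponds to authSource in the URI. A minimal PyMongo sketch with placeholder hosts and credentials (not the poster's real values):

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Placeholder URI: substitute the real Atlas hosts, user and password.
# authSource=admin is the usual authentication database for Atlas users.
uri = (
    "mongodb://USERNAME:PASSWORD@"
    "xxx0.mongodb.net:27017,xxx1.mongodb.net:27017,xxx2.mongodb.net:27017/"
    "?ssl=true&replicaSet=atlas-xxx-shard-0&authSource=admin"
)

client = MongoClient(uri, serverSelectionTimeoutMS=5000)
try:
    print(client.admin.command("ping"))
except ServerSelectionTimeoutError as exc:
    print("Connection failed:", exc)

If this connects but mongosqld still fails with the OCSP error, the issue is on the BI Connector side rather than with the credentials.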
https://www.mongodb.com/…b9ca18be9a49.png
[]
[ { "code": "\"MATRICULA\" : {\n \"ID_TURMA\" : NumberInt(14246),\n \"NR_ANDAMENTO\" : 0.0,\n \"DT_CANCELAMENTO\" : null,\n \"ID_USUARIOVEC\" : NumberInt(15004),\n \"ID_STATUSAPROVEITAMENTO\" : NumberInt(1),\n \"NR_APROVPRESENCIAL\" : 0.0,\n \"NR_FREQUENCIA\" : 0.0,\n \"DT_EXCLUSAO\" : null,\n \"DT_SOLICITACAO\" : ISODate(\"2022-12-15T04:00:00.000+0000\"),\n \"ID_MOTIVO_CANCELAMENTO\" : null,\n \"DS_MOTIVO_CANCELAMENTO_OUTRO\" : null,\n \"CD_UNIDADE_CONCLUSAO\" : NumberInt(4338),\n \"NR_TENTATIVA\" : NumberInt(1),\n \"NR_CARGAHORARIA\" : NumberInt(30),\n \"DT_REALIZACAO\" : ISODate(\"2022-12-31T04:00:00.000+0000\"),\n \"ID_STATUSMATRICULA\" : NumberInt(0),\n \"NR_APROVEITAMENTO\" : 0.0,\n \"DT_FINALIZACAO\" : ISODate(\"2023-01-01T04:00:00.000+0000\"),\n \"DT_PRIMEIRO_ACESSO\" : null,\n \"ID_STATUSSITUACAO\" : NumberInt(0),\n \"DT_CONCLUSAO\" : ISODate(\"2023-01-01T04:00:00.000+0000\"),\n \"CD_MATRICULA\" : NumberInt(511095),\n \"UNIDADES\" : [\n NumberInt(4338)\n", "text": "I need to find all _id inside the DT_FINALIZACAO que contenham a hora 04:00:00\nScreenshot_1744×484 17.1 KB\n", "username": "Guilherme_amorim" }, { "code": "", "text": "Hi @Guilherme_amorim,Welcome to the MongoDB Community forums To understand your use case better, please provide further details, such as:Best,\nKushagra", "username": "Kushagra_Kesav" } ]
How to find _id with a specific ISODate hour
2023-03-16T17:38:51.009Z
How to find _id with a specific ISODate hour
305
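One way to answer the question above, matching documents whose embedded date has a given hour, is $expr with the $hour aggregation operator, which reads the hour in UTC. A minimal PyMongo sketch, assuming the document shape from the sample and a hypothetical database/collection name:

from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["test"]["matriculas"]  # names assumed

# Return the _id of every document whose MATRICULA.DT_FINALIZACAO falls on hour 04 UTC.
cursor = coll.find(
    {"$expr": {"$eq": [{"$hour": "$MATRICULA.DT_FINALIZACAO"}, 4]}},
    {"_id": 1},
)
for doc in cursor:
    print(doc["_id"])

Note that a comparison on a computed expression like $hour cannot use a regular index on the date field, so on large collections a stored hour field with its own index may be the better design.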
null
[]
[ { "code": "", "text": "I have a collection called ProductPrice which contains ProductCode, ProductPrice,CreatedDateTime and CreatedTimeZone along with other fields. When user retrieve this data I want to display ProductCode, ProductPrice, CreatedDateTime, CreatedTimeZone, UTCconvertedDateTime.Please help me convert the CreatedDateTime column into UTCDateTime using CreatedTimeZone column.", "username": "Niju_Jose" }, { "code": "{\n \"_id\": {\n \"$oid\": \"64142cfe6abcfbb8b232d1a1\"\n },\n \"date\": {\n \"$date\": \"2023-03-17T09:03:37.567Z\"\n }\n}\nCreatedTimeZoneCreatedDateTime", "text": "Hey @Niju_Jose,Welcome to the MongoDB Community forums The ISODate in MongoDB is already in UTC. If I insert a new document in MongoDB Collection, it will get saved in UTC only.To understand the question better, please provide further details, such as:Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Timezone Conversion from one to another
2023-03-17T08:06:49.462Z
Timezone Conversion from one to another
453
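If CreatedDateTime holds the local wall-clock time and CreatedTimeZone holds an Olson name or offset string, one way to derive a UTC value at read time is to decompose the date and rebuild it with $dateFromParts, passing the stored timezone. This is a sketch only; it assumes that interpretation of the two fields, which the thread never confirms, and the database/collection names are placeholders.

from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["shop"]["ProductPrice"]  # names assumed

pipeline = [
    {"$addFields": {
        "UTCconvertedDateTime": {
            "$dateFromParts": {
                # Re-interpret the stored wall-clock parts in CreatedTimeZone;
                # $dateFromParts then yields the corresponding UTC instant.
                "year": {"$year": "$CreatedDateTime"},
                "month": {"$month": "$CreatedDateTime"},
                "day": {"$dayOfMonth": "$CreatedDateTime"},
                "hour": {"$hour": "$CreatedDateTime"},
                "minute": {"$minute": "$CreatedDateTime"},
                "second": {"$second": "$CreatedDateTime"},
                "timezone": "$CreatedTimeZone",
            }
        }
    }},
    {"$project": {"ProductCode": 1, "ProductPrice": 1, "CreatedDateTime": 1,
                  "CreatedTimeZone": 1, "UTCconvertedDateTime": 1}},
]

for doc in coll.aggregate(pipeline):
    print(doc)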
null
[ "node-js", "mongoose-odm" ]
[ { "code": "const MongoClient = require('mongodb').MongoClient;\nconst uri = process.env.MONGO_URI;\nconst db_name = process.env.DB_NAME;\n\nconst client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });\n\n//insert document into mongodb atlas\nconst insertDocument = async (collectionName, document) => {\n\n try {\n await client.connect();\n const database = client.db(db_name);\n const collection = database.collection(collectionName);\n const result = await collection.insertOne(document);\n return result;\n\n } catch (error) {\n console.log(error);\n \n } finally {\n await client.close();\n }\n};\n", "text": "I am randomly getting an error:MongoServerClosedError: Server is closedI am using nodejs and mongodb, but NOT using Mongoose…Here is my basic MongoDBConnector.js code:…as you can see, I am creating the client.connect() every time I call insertDocument() DB operation and then client.close() at the end of the operation, yet I keep getting the:MongoServerClosedError: Server is closed errorI use the same approach for all my database operations, such as getDocument() and updateDocument(), where I useawait client.connect();to connect for every operation andawait client.close();at the end of every operation. Yet I am still sometimes getting the errorMongoServerClosedError: Server is closed errorTo me it seems like the asynchronous client.connnect() and client.close() commands should only be used ONCE, and the insertDocument() and other DB operations should be able to RESUSE the client connection.Can anyone offer an opinion on this? I followed a tutorail that said to do it the way I did, but I think there must be a better way to implement my client.connect() and reuse it for all my DB operations.Many thanks for anyone interested in replying\nJon", "username": "Jon_McGowan" }, { "code": "", "text": "Having the same issue. Any update on that?", "username": "Midnight_Vector" } ]
Should I make a NEW mongodb.client.connect() for every DB operation?
2023-01-04T15:08:02.766Z
Should I make a NEW mongodb.client.connect() for every DB operation?
1,236
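The pattern the thread converges on is a single long-lived client whose connection pool is shared by every operation, instead of connecting and closing per call. The thread is Node.js; the sketch below shows the same idea with PyMongo purely as an illustration, and the database/collection names are placeholders.

from pymongo import MongoClient

# Created once at import time; the client owns a connection pool and is
# safe to share across the whole application.
_client = MongoClient("mongodb://localhost:27017")
_db = _client["mydb"]

def insert_document(collection_name, document):
    # Reuse the shared client; no connect()/close() per call.
    return _db[collection_name].insert_one(document)

def get_document(collection_name, query):
    return _db[collection_name].find_one(query)

The Node.js equivalent has the same shape: create one MongoClient at startup, connect once, and reuse it everywhere.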
null
[ "queries", "node-js", "mongoose-odm", "atlas-search" ]
[ { "code": "$search: {\n index: 'ad-search',\n compound: {\n mustNot: [\n {\n equals: {\n path: 'userId',\n value: new mongoose.Types.ObjectId(userId),\n },\n },\n {\n equals: {\n path: 'isVisible',\n value: false,\n },\n },\n ],\n\n should: [\n {\n geoWithin: {\n circle: {\n center: {\n type: Lang.POINT,\n coordinates,\n },\n radius: radiusToBeSearchedAround,\n },\n path: 'pickUpDetails.location',\n \n },\n },\n {\n geoWithin: {\n circle: {\n center: {\n type: Lang.POINT,\n coordinates,\n },\n radius: radiusToBeSearchedAround,\n },\n path: 'dropOffDetails.location',\n \n },\n },\n\n {\n equals: {\n path: 'isUrgent',\n value: true,\n },\n },\n ],\n minimumShouldMatch: 1,\n },\n },\n", "text": "I need search query such that, the ad/product co-ordinates must be either pickUplocation or dropOffLocation and need to boost ads which are urgent that satisfied coordinate requirements.My search query is as follows:\nThe problem with this query is that: It return urgent ad which are not even inside the mentioned co-ordinates", "username": "Arun_Only1" }, { "code": "", "text": "Hi @Arun_Only1 and welcome to the MongoDB Community forumTo understand the requirement better and help you with the possible solution, could you help me a few details Best Regards\nAasawari", "username": "Aasawari" } ]
Need to boost urgent ads within a given radius from product/ad co-ordinates
2023-03-16T04:07:12.960Z
Need to boost urgent ads within a given radius from product/ad co-ordinates
704
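The behaviour described above, urgent ads returned outside the radius, follows from putting the geo clauses in should, where they only affect the score. One option is to move the location requirement into a filter clause (nesting a compound so either pick-up or drop-off may match) and keep only isUrgent in should. This is a sketch of that $search stage with placeholder values, reusing the index and field names from the thread:

from bson import ObjectId

coordinates = [77.5946, 12.9716]                # [lng, lat] placeholder
radius_in_meters = 5000                         # placeholder radius
user_id = ObjectId("64142cfe6abcfbb8b232d1a1")  # placeholder user id

def geo_clause(path):
    return {
        "geoWithin": {
            "circle": {
                "center": {"type": "Point", "coordinates": coordinates},
                "radius": radius_in_meters,
            },
            "path": path,
        }
    }

search_stage = {
    "$search": {
        "index": "ad-search",
        "compound": {
            # filter is mandatory but does not contribute to the score
            "filter": [{
                "compound": {
                    "should": [geo_clause("pickUpDetails.location"),
                               geo_clause("dropOffDetails.location")],
                    "minimumShouldMatch": 1,
                }
            }],
            "mustNot": [
                {"equals": {"path": "userId", "value": user_id}},
                {"equals": {"path": "isVisible", "value": False}},
            ],
            # should now only boosts documents that already passed the filter
            "should": [{"equals": {"path": "isUrgent", "value": True}}],
        },
    }
}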
null
[ "replication" ]
[ { "code": "", "text": "Greetings!!! I am working m312 course. As part of this course I did shutdown secondary replica using db.shutdownServer(). It is three node replica set as belowIt is on Vagrant virtualbox.\nBelow command was used to create replica set\nmlaunch init --name TIMEOUTS --replicaset --node 3 --dir timeouts --port 27000Once I shutdown secondary replica , I am not sure how to bring it backCan any one please guide or help meThanks", "username": "Mohammad_Alam1" }, { "code": "", "text": "I think you have to use mlaunch start with appropriate flag\nPlease check mlaunch commands for exact syntax from the GitHub link available in our forum threads", "username": "Ramachandra_Tummala" } ]
Unable to bring back online secondary replica
2023-03-16T21:12:24.540Z
Unable to bring back online secondary replica
601
null
[ "java", "spring-data-odm" ]
[ { "code": "@Document(\"items\")\npublic class GroceryItem {\n @Id\n private String id;\n\n private String name;\n private Integer quantity;\n private String category;\n\n private List<Product> products;\n...\npublic class Product {\n private String name;\n private List<Level> levels;\n ...\npublic class Level {\n private String name;\n...\nQuery query = new Query(Criteria.where(\"name\").is(name)\n .and(\"products.name\").is(productName));\n@Query(value = \"{ 'quantity' : ?0, 'products.name' : ?1, 'products.levels.name' : ?2 }\")\nStream<GroceryItem> findByQuantityProdNameLevelName(int quantity, String prodName, String level);\n\n", "text": "Hello, can enyone help me how to use MongoTemplate to find documents by embeded object value?I have a classes:Embeded class:andHow can I find document by embeded product “name” or level “name” field?It is not working with query:With @Query annotation it works fine:Thanks.", "username": "Tadeus_Kozlovski" }, { "code": "CodecRegistry pojoCodecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(),\n fromProviders(PojoCodecProvider.builder().automatic(true).build()));\n\n MongoCollection<GroceryItem> collection = MongoClients.create(dbConnString)\n .getDatabase(\"gettingstarted\").withCodecRegistry(pojoCodecRegistry)\n .getCollection(\"groceryitems\", GroceryItem.class);\n\n FindIterable<GroceryItem> items = collection.find(eq(\"products.levels.name\", \"level1\"), GroceryItem.class);\n", "text": "Solution found with MongoClient and Filters class approach:", "username": "Tadeus_Kozlovski" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Find by embedded object field value
2023-03-16T08:15:26.957Z
Find by embedded object field value
991
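For reference, the filter that worked above is plain dot notation on the embedded path, and it behaves the same from any driver. A small PyMongo illustration of the equivalent query (the database and collection names are taken from the thread's snippets and may differ in your setup):

from pymongo import MongoClient

items = MongoClient("mongodb://localhost:27017")["gettingstarted"]["items"]

# Grocery items having at least one product with a nested level named "level1".
for item in items.find({"products.levels.name": "level1"}):
    print(item.get("name"), item.get("quantity"))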
null
[ "aggregation", "python", "atlas-search" ]
[ { "code": "[{'dok_id': 'h50377',\n 'sender': 'Finansdepartementet',\n 'titel': 'Nya regler om betaltjänster',\n 'doctype': 'prop',\n 'subdocuments': [{'page': 1, 'nr': 0, 'embedding':[0.542523,..., 0.343321]},\n {'page': 2, 'nr': 1, 'embedding':[0.1455423,..., 0.543325]},\n {'page': 692, 'nr': 980, 'embedding':[0.1455423,..., 0.543325]}]},\n {'dok_id': 'h503185d2',\n 'sender': '',\n 'titel': 'prop 2017/18 185 d2',\n 'doctype': 'prop',\n 'subdocuments': [{'page': 1, 'nr': 0, 'embedding':[0.192523,..., 0.113321]},\n {'page': 2, 'nr': 1,'embedding':[0.5655423,..., 0.013325]},\n {'page': 645, 'nr': 864,'embedding':[0.522423,..., 0.145325]}]}\n]\n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"doctype\": {\n \"type\": \"string\"\n },\n \"embedding\": [\n {\n \"dimensions\": 768,\n \"similarity\": \"dotProduct\",\n \"type\": \"knnVector\"\n }\n ],\n \"nr\": {\n \"type\": \"number\"\n },\n \"sender\": {\n \"type\": \"string\"\n },\n \"year\": {\n \"type\": \"number\"\n }\n }\n }\n}\ncursor=collection.aggregate([\n {'$search': {\n 'knnBeta': {'vector': embedding, \n 'path': 'subdocuments.embedding', \n 'k': 10}}\n },\n {'$addFields': {'subdocuments.score': {'$meta': 'searchScore'}}},\n {'$project': {'_id': 0}},\n])\nresults=list(cursor)\nresults\n", "text": "Hi,Is it possible to use the search operator knnBeta on nested fields?I’m using the new search operator knnBeta to find and retrieve similar texts. I have a couple of thousand documents that I have split in to subdocuments in order to compute embeddings to each subdocument. I want to search and retrieve the k most similar subdocuments that are nested within their respective document. I have tried this on a flat database (and it worked) now I want to try this in a nested setting because I want to avoid repeating the meta data regarding each document for each subdocument. Here is a simplyfied (I have omitted some fields and sliced all but the beginning and end of the arrays, also the embeddings ar actually of 768 dimension) example of the structure of my data:And this is my search index:I’m using pymongo. This is the aggregation pipeline that unfortunately gives me an empty list:This is a simplyfied aggregation pipeline. I actually want to filter out subdocuments that does not score above a threshold and group the results on document-level and compute a max score per document. But since this simple pipeline does not work I suspect that vector search is not feasible on nested documents?", "username": "Joakim_Hveem" }, { "code": "", "text": "Hi Joakim,\nCould you please try using embeddedDocument to index and query your documents? Similar to other operators, it supports knnBeta.Let me know if that works for you!", "username": "Alexander_Lukyanchikov" }, { "code": "knn_dict={'knnBeta': {'vector': embedding, \n 'path': 'subdocuments.embedding', \n 'k': 100}}\n\ncursor=collection.aggregate([\n {'$search': {\n 'embeddedDocument':{\n 'path': 'subdocuments',\n 'operator': knn_dict,\n \"score\": {\n \"embedded\": {\n \"aggregate\": \"maximum\"\n }\n }\n }}},\n {'$addFields': {'score': {'$meta': 'searchScore'}}},\n {'$project': {'_id': 0,'dok_id':1,'score':1,'subdocuments.page':1,'subdocuments.nr':1}},\n])\n", "text": "Thank you Alexander! I actually also just found that solution. However, I still have a problem. I need to retrieve the score for each embedded document (my subdocument). As of now I only get a document aggregate (e.g. maximum). 
I would like that document score and the individual score (subdocument score).Here is my current pipeline:I tried {‘$addFields’: {‘subdocument.score’: {‘$meta’: ‘searchScore’}}} but I only got the aggregate score repeated on all subdocuments.Kind regards\nJoakim", "username": "Joakim_Hveem" }, { "code": "", "text": "Hi Joakim,\nToday we only support sum/max/min/mean scoring options for embedded documents, see the details here:Normalize or modify the score assigned to a returned document with the boost, constant, embedded, or function operator.Unfortunately it’s not possible to output individual embedded document score, feel free to request that on https://feedback.mongodb.com/forums/924868-atlas-search", "username": "Alexander_Lukyanchikov" }, { "code": "", "text": "Thank you Alexander for the timely response. I’ve made a feature request regarding this now. In our small application this is of strategic importance, because it determines the structure of our database (flat/long structure or thick/nested). If we go by the thick/nested structure we will have to calculate the dot-product a second time outside Mongo for the retrieved documents (naturally that is something I want to avoid).Kind regards\nJoakim", "username": "Joakim_Hveem" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
knnBeta on field nested in an array
2023-03-16T08:20:48.459Z
knnBeta on field nested in an array
1,526
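Since per-embedded-document scores are not exposed, the fallback mentioned above is to re-score the returned subdocuments client-side. A minimal sketch of that second dot-product pass, assuming each returned document still carries its subdocuments with their embedding arrays:

import numpy as np

def rescore_subdocuments(query_embedding, document):
    """Return (score, page, nr) per subdocument, best first."""
    q = np.asarray(query_embedding, dtype=float)
    scored = []
    for sub in document.get("subdocuments", []):
        score = float(np.dot(q, np.asarray(sub["embedding"], dtype=float)))
        scored.append((score, sub.get("page"), sub.get("nr")))
    return sorted(scored, key=lambda t: t[0], reverse=True)

# Usage after the $search pipeline above:
# for doc in results:
#     print(doc["dok_id"], rescore_subdocuments(embedding, doc)[:3])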
https://www.mongodb.com/…_2_1024x644.jpeg
[ "replication" ]
[ { "code": "", "text": "Hi, my cluster’s nodes going down repeatedly… I fix the problem, but its going down again after a few hours… How can i fix the problem \nEkran Resmi 2023-03-16 22.25.441862×1172 68.8 KB\n", "username": "Huseyin_U" }, { "code": "", "text": "", "username": "Huseyin_U" }, { "code": "", "text": "Hi @Huseyin_U,Welcome to the MongoDB Community forums!For issues like this, please login to your account and contact the Atlas Support team for assistance.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
Replica set has no primary
2023-03-16T19:26:55.873Z
Replica set has no primary
733
null
[]
[ { "code": "", "text": "HelloI’m currently working on a project with a collection of foods. Each food has a name field and I would like to use the Full Text Search capabilities to query the collection. For example, in the collection, there are 3 documents with the exact name “APPLE”. However, there are a few hundred other food documents which contain APPLE in the name field. Some of these documents have “APPLE” show up multiple times in the name. My question is how do I set up a search index and query to prioritize exact matches. So if I want the top 10 results, how do I ensure the 3 documents with exactly “APPLE” as the name show up at the top. To be clear I do not want to do only an exact match. I still want other documents that contain APPLE in it. The exact matches just need to be scored highest. Currently, based off a relevance, documents that contain APPLE in the name multiple times are scored higher.", "username": "Shivam_Patel" }, { "code": "", "text": "Hey @Shivam_Patel,Welcome to the MongoDB Community Forums! In order to better understand your use case, can you please provide us with the following details:This would help us understand your issue and help you better.Regards,\nSatyam", "username": "Satyam" }, { "code": "[\n {\n \"_id\": \"6407a138694fa2f8499bcf1f\",\n \"name\": \"BARRILITOS, APPLE SODA, APPLE, APPLE\"\n },\n {\n \"_id\": \"6407a174694fa2f8499c2458\",\n \"name\": \"JUMEX, APPLE NECTAR, APPLE, APPLE\"\n },\n {\n \"_id\": \"6407a0ba694fa2f8499aec10\",\n \"name\": \"APPLE\"\n },\n {\n \"_id\": \"6407a0ba694fa2f8499aed86\",\n \"name\": \"APPLE PIE\"\n },\n {\n \"_id\": \"6407a0ae694fa2f8499ad42e\",\n \"name\": \"APPLE CIDER\"\n }\n]\n[\n {\n $search: {\n index: \"foodSearch\",\n text: {\n query: \"APPLE\",\n path: \"name\"\n }\n }\n }\n]\n[\n {\n \"_id\": \"6407a0ba694fa2f8499aec10\",\n \"name\": \"APPLE\"\n },\n {\n \"_id\": \"6407a0ae694fa2f8499ad42e\",\n \"name\": \"APPLE CIDER\"\n },\n {\n \"_id\": \"6407a0ba694fa2f8499aed86\",\n \"name\": \"APPLE PIE\"\n },\n {\n \"_id\": \"6407a138694fa2f8499bcf1f\",\n \"name\": \"BARRILITOS, APPLE SODA, APPLE, APPLE\"\n },\n {\n \"_id\": \"6407a174694fa2f8499c2458\",\n \"name\": \"JUMEX, APPLE NECTAR, APPLE, APPLE\"\n }\n]\n", "text": "HiHere are some sample documents in my collection:I’m currently using the lucene.standard analyzer. 
This is the current query I’m using.The desired output isHow do I modify my query or search index to achieve something like this?", "username": "Shivam_Patel" }, { "code": "[{\n \"_id\": \"6412e7eff819dac5b09cabe2\",\n \"name\": \"BARRILITOS, APPLE SODA, APPLE, APPLE\"\n},\n{\n \"_id\": \"6412e809f819dac5b09cabe3\",\n \"name\": \"JUMEX, APPLE NECTAR, APPLE, APPLE\"\n},\n{\n \"_id\": \"6412e81ff819dac5b09cabe4\",\n \"name\": \"APPLE\"\n},\n{\n \"_id\": \"6412e834f819dac5b09cabe5\",\n \"name\": \"APPLE PIE\"\n},\n{\n \"_id\": \"6412e844f819dac5b09cabe6\",\n \"name\": \"APPLE CIDER\"\n},\n{\n \"_id\": \"6412f2849b835a5a29568133\",\n \"name\": \"APPLE\"\n},\n{\n \"_id\": \"6412f28f9b835a5a29568134\",\n \"name\": \"APPLE\"\n}]\nlucene.standard{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"name\": {\n \"type\": \"string\"\n }\n }\n }\n}\n{\n $search: {\n index: 'default',\n text: {\n query: 'APPLE',\n path: 'name'\n }\n }\n \n}\n{\n _id: ObjectId(\"6412e81ff819dac5b09cabe4\"),\n name: 'APPLE'\n},\n{\n _id: ObjectId(\"6412f2849b835a5a29568133\"),\n name: 'APPLE'\n},\n{\n _id: ObjectId(\"6412f28f9b835a5a29568134\"),\n name: 'APPLE'\n},\n{\n _id: ObjectId(\"6412e7eff819dac5b09cabe2\"),\n name: 'BARRILITOS, APPLE SODA, APPLE, APPLE'\n},\n{\n _id: ObjectId(\"6412e809f819dac5b09cabe3\"),\n name: 'JUMEX, APPLE NECTAR, APPLE, APPLE'\n},\n{\n _id: ObjectId(\"6412e834f819dac5b09cabe5\"),\n name: 'APPLE PIE'\n},\n{\n _id: ObjectId(\"6412e844f819dac5b09cabe6\"),\n name: 'APPLE CIDER'\n}\n", "text": "Hey @Shivam_Patel,My question is how do I set up a search index and query to prioritize exact matches. So if I want the top 10 results, how do I ensure the 3 documents with exactly “APPLE” as the name show up at the top.\n…\nThe exact matches just need to be scored highest. Currently, based off a relevance, documents that contain APPLE in the name multiple times are scored higher.So as per my understanding, you have a lot of documents containing the word ‘APPLE’. You want to do a search that should first display all the exact matches, followed by the rest of the matches.I created a sample collection using the sample documents you provided. My collection looked like this:I used the lucene.standard mapping with the following index search definition:For the query you provided:I got the following result:As we can see, the exact matches appear first followed by the rest.Please let me know if my understanding is correct here or not. If it’s not working as expected, please post more details such as your search result, index definition, etc. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Prioritize exact match using Search Indexes
2023-03-14T19:58:11.511Z
Prioritize exact match using Search Indexes
1,045
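When results do not rank the way you expect, it helps to look at the scores themselves. The same query can be run with the score projected via $meta, for example (index and field names as in the thread, collection name assumed):

from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["test"]["foods"]  # names assumed

pipeline = [
    {"$search": {"index": "foodSearch", "text": {"query": "APPLE", "path": "name"}}},
    {"$project": {"name": 1, "score": {"$meta": "searchScore"}}},
    {"$limit": 10},
]

for doc in coll.aggregate(pipeline):
    print(round(doc["score"], 3), doc["name"])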
null
[]
[ { "code": "", "text": "I am working on new projects which requires to store petabytes of images in DB. File size of each image is expected to be less than 5MB. I am planning to store textual data in SQL SERVER and images in MongoDB. is it good or i should store images on file system and keep reference in DB?\nPlease guide.", "username": "Nasir_Hussain" }, { "code": "", "text": "I always store them in the file system and save a filepath as a string in the db.\nMongoDB is not designed for very large BLOBs.", "username": "Jack_Woehr" }, { "code": "", "text": "there’s a good post about solutions to store such thing in this forum (not able to get the link though, you can try searching for it).Generally speaking, storing things like those in a distributed file system (e.g. s3) is a very common. One big benefit of this is they be best managed by your CDN servers.as sadi by Jack, database are not designed for this specific purpose. yes, they can use blob, but file systems are specifically designed to store large file data, So why not.", "username": "Kobe_W" } ]
Petabytes of images / pdf in mongodb
2023-03-16T16:09:10.213Z
Petabytes of images / pdf in mongodb
333
null
[ "aggregation", "java", "atlas", "spring-data-odm" ]
[ { "code": "reactor.bufferSize.x", "text": "Hey!While getting data from the Aggregation query in the Reactor approach using Spring Data I have observed that even if I’m getting a few hundred of results, the Mongo Atlas shows only 32 returned documents always.The 32 is a default reactor.bufferSize.x in Flux.All details can be found here: Spring Reactive Mongo @Aggregation causes Mongo Atlas alerts · Issue #4319 · spring-projects/spring-data-mongodb · GitHubDo you have any idea why it behaves like this? How it can be fixed? Is that issue on the driver or the Spring data side?Best regards,\nBartosz", "username": "Bartosz_Skorka" }, { "code": "", "text": "Hi @Bartosz_Skorka and welcome to the MongoDB Community Forum The alert message as mentioned:I have observed that one of my queries caused a lot of Mongo Atlas alerts related to Examined / Returned ratio - “Query Targeting: Scanned Objects / Returned has gone above 1000”.is explained in the Query Targeting Documentation, which says, the query is scanning more documents than the number of documents getting retuned from the query.This alert, however, may be resolved by using proper indexing in the collection. The Indexing Strategies documentation will be a good resource to start with.Further, it would be helpful if you could share some details regarding the error being seen:In addition, if you are an Atlas customer, would recommend you to contact Atlas Customer support for detailed assessment.Let us know if you have any further queries.Best Regards\nAasawari", "username": "Aasawari" } ]
Reactive @Aggregation causes Mongo Atlas alerts
2023-03-13T08:33:03.467Z
Reactive @Aggregation causes Mongo Atlas alerts
1,001
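Since the alert concerns the scanned-to-returned ratio, the practical follow-up is to index the fields the aggregation's $match stage filters on. A hedged PyMongo sketch; the database, collection and field names below are placeholders, not taken from the thread:

from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["mydb"]["events"]  # placeholders

# Index whatever fields the aggregation's $match stage actually filters on.
coll.create_index([("userId", 1), ("createdAt", -1)])  # placeholder field names

print(coll.index_information())  # verify the index was created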
null
[ "performance" ]
[ { "code": "", "text": "Hi,\nwe are using latest MongoDB recently. But after changing to the latest MongoDB we are facing issue with file system utilization. MongoDB file WiredTigerLAS.wt abruptly using more than the 70% of disk space. So, Utilization of the disk space become 100% sometime and downs DB automatically. Please help us to understand the issue and provide a solution for this issue.MongoDB Version : 4.2.2\nNo. of application point MongoDB Instance : 2\nFile Name: WiredTigerLAS.wtCase 1\nData Growth: 265GB in 20 HoursCase 2:\nData Growth: 395GB in 12 Hours", "username": "Visva_Ram" }, { "code": "WiredTigerLAS.wtmongodWiredTigerLAS.wtmongod", "text": "Not something I’ve come across. So this is just my 5 minute note:This is worth a read IMO\nFrom Percona:Note for readers coming here from web search etc: The typical symptoms are suddenly you notice that the WiredTigerLAS.wt file is growing rapidly. So long as the WT cache is filled to it’s maximum the file will grow approximately as fast as the Oplog GB/hr rate at the time. Disk utilization will be 100%, and the WiredTigerLAS.wt writes and read compete for the same disk IO as the normal db disk files. The WiredTigerLAS.wt never shrinks, not even after a restart. The only way to get rid of it is to delete all the files and restart the node for an initial sync (which you probably can’t do until the heavy application load stops).Don’t forget: The initial cause is not primarily the software issue - the initial cause is that the application load has overwhelmed replica sets node’s capacity to write all the document updates to disk. The symptom manifests on the primary, but it may be lag on a secondary that is the driving the issue.You have this option: https://docs.mongodb.com/manual/reference/configuration-options/#storage.wiredTiger.engineConfig.maxCacheOverflowFileSizeGBBut pay close attention to what happens when this is non-zero:If the WiredTigerLAS.wt file exceeds this size, mongod exits with a fatal assertion. You can clear the WiredTigerLAS.wt file and restart mongod .Hopefully someone else can give some knowledgeable input.", "username": "chris" }, { "code": "", "text": "Thanks much chris. I will try to control this file size by configuring maximum size as mentioned.", "username": "Visva_Ram" }, { "code": "WiredTigerLAS.wtmongodmongodWiredTigerLAS.wtmajoritymaxCacheOverflowFileSizeGB", "text": "Hi @Visva_Ram,What sort of deployment do you have (standalone, replica set, or sharded cluster)? If you have a replica set or sharded cluster, can you describe your the roles of your instances in terms of Primary, Secondary, and Arbiter and also confirm whether you are seeing the LAS growth on the Primary, Secondaries, or both?WiredTigerLAS.wt is an overflow buffer for data that does not fit in the WiredTiger cache but cannot be persisted to the data files yet (analogous to “swap” if you run out of system memory). 
This file should be removed on restart by mongod as it is not useful without the context of the in-memory WiredTiger cache which is freed when mongod is restarted.If you are seeing unbounded growth of WiredTigerLAS.wt, likely causes are a deployment that is severely underprovisioned for the current workload, a replica set configuration with significant lag, or a replica set deployment including an arbiter with a secondary unavailable.The last scenario is highlighted in the documentation: Read Concern majority and Three-Member PSA and as a startup warning in recent versions of MongoDB (3.6.10+, 4.0.5+, 4.2.0+).I will try to control this file size by configuring maximum size as mentioned.The maxCacheOverflowFileSizeGB configuration option mentioned by @chris will prevent your cache overflow from growing unbounded, but is not a fix for the underlying problem.Please provide additional details on your deployment so we can try to identify the issue.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks Stennie,I am using standalone setup, but the single MongoDB instance used by two homogenised application by pointing different database.We were used old version of MongoDB 3.4.17. In that, i haven’t seen this kind issues.In this, Are you saying? I am overloading the MongoDB?\nIf so, Performance can be degaraded, why the caching size is keep on increasing, That I don’t understand.What is the purpose of WiredTigerLAS.wt in MongoDB?I am not able to accept that disk usage will keep on increase and down automcatically.If So, then How i can calculate the load that can be given to MongoDB to avoid this issue. Please comment. Thanks.", "username": "Visva_Ram" }, { "code": "", "text": "Hey there,\nI have the exact same problem with a single mongo instance.I’m doing many bulkWrites one after the other and in parallel and for some cases the file size increases until the entire disk gets full and Mongo crash.I think me and many others need a more definite answer on how can I tell how much writing operation could cause this and how to limit my load on mongo, it’s really hard to just “guess” the numbers here. Also I think it’s weird that mongo can’t handle the load in a better way or clear this file after the load has finished.", "username": "Ziv_Glazer" }, { "code": "", "text": "Yeah, I also think the database should handle it in a more elegant way. DONOT let users see strange things, though it’s about internal implementation.", "username": "Lewis_Chan" }, { "code": "", "text": "I have some question about this case.\nwhen we do a an __wt_las_insert_block ,we can insert some page which is not WT_UPDATE_BIRTHMARK\n\n企业微信截图_167901918350651380×626 93.4 KB\n\nAnd when we do __wt_las_sweep,we only delete those page with WT_UPDATE_BIRTHMARK flag\n\n企业微信截图_167901919750901380×854 115 KB\n\nDoes it means there always some page can not delete from WiredTigerLAS.wt file? Because i notice that the WiredTigerLAS.wt file size nerver shirnk.\n\nimage1778×967 116 KB\n", "username": "zhangruian1997" } ]
MongoDB disk space increases abruptly - WiredTigerLAS.wt
2020-05-04T05:36:30.167Z
MongoDB disk space increases abruptly - WiredTigerLAS.wt
13,501
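For anyone watching this situation develop, cache-overflow activity is reported under the wiredTiger.cache section of serverStatus. The sketch below simply filters those counters by keyword instead of assuming exact metric names, since they vary between server versions:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
cache_stats = client.admin.command("serverStatus")["wiredTiger"]["cache"]

for key, value in sorted(cache_stats.items()):
    if "overflow" in key or "lookaside" in key:
        print(key, "=", value)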
https://www.mongodb.com/…4_2_1024x512.png
[ "compass", "swift", "atlas-device-sync" ]
[ { "code": "class AccessibleDataTest: Object {\n @Persisted(primaryKey: true) var _id: ObjectId = ObjectId.generate()\n @Persisted var place = \"\"\n @Persisted var accessibility: AccessTestItem?\n @Persisted var accessAddress: AccessTestAddress?\n}\n//\nclass AccessTestItem: EmbeddedObject {\n @Persisted var bell: String = \"\"\n @Persisted var ramp: String = \"\"\n}\n\nclass AccessTestAddress: EmbeddedObject {\n @Persisted var postcode:String = \"\"\n @Persisted var houseNumber: String = \"\"\n}\n{\n \"place\": \"Alice's Tea Cup\",\n \"accessibility\": {\n \"elevator\": \"elevator\",\n \"ramp\": \"ramp\",\n \"bell\": \"bell\",\n },\n \"address\": {\n \"postcode\": \"10023\",\n \"houseNumber\": \"102\"\n }\n}\n", "text": "Hi,I’m trying to embed two objects according to this explanationI’ve uploaded data via Atlas Compass.When I run device sync, the realm doesn’t sync the data on Atlas db into the local sync realm file. But it does sync a non-embedded object.Is there a step I am missing?", "username": "swar_leong" }, { "code": "", "text": "So the key here is that I am using Flexible Device Sync.After spending two days trawling the internet, reading the documentation about realm objects, changing the Embedded Object to a list of Embedded Objects, looking at the realm ios github examples, reading the documentation on subscriptions, by luck, I found the answer here in the fourth response:Is it possible to define a subscription with a query on a field that belongs to embedded object?Unfortunately it is not. See here: https://mongodb.prakticum-team.ru/docs/atlas/app-services/sync/data-access-patterns/flexible-sync/#eligible-field-typesClicking through the link to the documentation, there is one paragraph that says:Flexible Sync does not support embedded objects or arrays of objects as queryable fields.I hope this helps anyone who finds themselves in a similar situation.Back to the drawing board!", "username": "swar_leong" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot sync data with embedded realm objects
2023-03-16T16:47:21.446Z
Cannot sync data with embedded realm objects
1,010
null
[ "security" ]
[ { "code": "", "text": "While enabling encryption-at-rest on MongoDB Atlas, I consistently get an “Invalid Azure credentials” error. I’ve connected to Azure and can successfully access the key-vault key in PowerShell with the same credentials that are being used. We set up another cluster in the same org a few months ago. When we set up the encryption, it also got that same error, but the next day it just worked for the first instance. The current issue has been going on for a couple of days and still doesn’t work. Has anyone else seen this issue or know what may be causing the error?", "username": "Caycee_Cress" }, { "code": "", "text": "Hello @Caycee_Cress ,Welcome to The MongoDB Community Forums! I would advise you to bring this up with the Atlas chat support team . They may be able to check if anything on the Atlas side could have possibly caused this issue. In saying so, if a chat support is raised, please provide them with the following:Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "We have tried reaching out through chat support hoping to at least just get the detail of the exact error since it’s known the credentials are good. However, the only advice given was to either pay for Developer support or to try the forum. Seems silly to pay $800 to simply get an error message that is being masked incorrectly.", "username": "Caycee_Cress" }, { "code": "", "text": "Hello Caycee,It seems others have gotten this error when the Azure Key Vault Reader role was not assigned to the service principal. There is a list of prerequisite steps in the the Azure Key Vault documentation for Manage Customer Keys that you can review to see if there was perhaps a missed step.I hope this helps,Cynthia", "username": "Cynthia_Braund" }, { "code": "", "text": "Thank you, however, the service principal does have the role. As mentioned above we can use the az PowerShell module to authenticate using the same client and secret. Once authenticated we are also able to successfully retrieve the secret.", "username": "Caycee_Cress" } ]
Encryption at Rest Using Key Management
2023-03-15T20:14:40.387Z
Encryption at Rest Using Key Management
960
null
[ "replication", "sharding" ]
[ { "code": "chunkSizechunkSize[direct: mongos] zios> db.collection_name.getShardDistribution()\nShard shard3 at shard3/***IPs***\n{\n data: '428.79GiB',\n docs: 522351318,\n chunks: 4752,\n 'estimated data per chunk': '92.4MiB',\n 'estimated docs per chunk': 109922\n}\n\nShard shard1 at shard1/***IPs***\n{\n data: '429.1GiB',\n docs: 300330555,\n chunks: 975,\n 'estimated data per chunk': '450.67MiB',\n 'estimated docs per chunk': 308031\n}\n\nShard shard2 at shard2/***IPs***\n{\n data: '428.68GiB',\n docs: 290760604,\n chunks: 2720,\n 'estimated data per chunk': '161.38MiB',\n 'estimated docs per chunk': 106897\n}\n\nTotals\n{\n data: '1286.58GiB',\n docs: 1113442477,\n chunks: 8447,\n 'Shard shard3': [\n '33.32 % data',\n '46.91 % docs in cluster',\n '881B avg obj size on shard'\n ],\n 'Shard shard1': [\n '33.35 % data',\n '26.97 % docs in cluster',\n '1KiB avg obj size on shard'\n ],\n 'Shard shard2': [\n '33.31 % data',\n '26.11 % docs in cluster',\n '1KiB avg obj size on shard'\n ]\n}\n", "text": "Hello,\nBeen using MongoDB 6.0.4 with Ranged-Sharding.\nNoticed that the storage attached to those shards is imbalanced, which became a concern in terms of storage planning.In this case, we have a 3 shards cluster which in terms of shard size looks balanced but the actual size of the data on disk is very different, be it because of compress/dedup - when our system sees a disk is going to be fully used it will automatically start a new shard with a new replica set, disks and everything, even that the other shards may be 50% used in capacity.Tried setting the chunkSize to be 128MB on a different system but didn’t see it has any effects, the chunks seem to go beyond that (right now past 1GB), so not sure how I can use that to solve the issue, also saw this thread about it: Chunk size many times bigger than configure chunksize (128 MB)This is the first system with 3 shards (without any chunkSize restriction), each volume here has 512GB size:shard1:shard2:shard3:As you can see, the distribution between the shards looks ok, but the final disk usage is not balanced at all.\nIs there any suggestion we can follow to balance the disk capacities better?Thanks for your support.", "username": "Oded_Raiches" }, { "code": "chunkSizedb.<collection name>.stats()", "text": "Hi @Oded_Raiches and welcome to the MongoDB community forum!!Tried setting the chunkSize to be 128MB on a different system but didn’t see it has any effects,As mentioned in the documentation, staring from MongoDB Version 6.0.3, the shards are balanced based on the data size rather than the chunk size.For further understanding the issue, could you confirm, if the documents inside the collection have similar sizes?\nI tried to reproduce the above in a local sharded deployment with 3 shards and 5GB of data and I see as similar distribution between the shards because the documents in the collection have inconsistent sizes (i.e. 
some are much larger than others).Please provide us with the below details for more clear understandingLet us know if you have any further queries .Best Regards\nAasawari", "username": "Aasawari" }, { "code": "_id{\n\t_id: 'key1,prefix1/prefix2/folder/5b46247f-f797-45cf-9a61-35a62542c52d/b70a9a7c-1597-4d8b-a7ba-eb05ba9849ed/blocks/98f769b32035439d4345c8916c7a1d56/240075.b9875d2a27e9344a6031f6378af91951.00000000000000000000000000000000.blk',\n\tabout 20 additional keys, document length about 2K bytes.\n\tthese \"large\" document are less frequent.\n}\n\n{\n\t_id: '+prefix0+key1,+prefix1/prefix2/folder/5b46247f-f797-45cf-9a61-35a62542c52d/0d52826b-a804-4699-89bc-79d935f0af8e/blocks/c7cee0f0df504aa031b9300f5fa2f93d/24788.f7124aa6c880f4be9c037a759df46736.00000000000000000000000000000000.blk+8323786472.99638',\n\tabout 15 additional keys, document length about 700 bytes.\n\tthese documents are more frequent.\n}\ndb.<collection name>.stats()", "text": "Hi @Aasawari , thanks for the reply!We do have different documents sizes, but in general they are small in size.\nThe shard key is the _id and has to be this way since we use the prefixes to do do listing of certain groups of documents based on these prefixes (using regex).\nMain 2 document examples:\ndb.<collection name>.stats() call (to long for the reply, had to add as a file):collection_stats.txt (65.8 KB)Thanks!", "username": "Oded_Raiches" }, { "code": "", "text": "Hi @Aasawari !\nIs there anything new with regards to the info provided?", "username": "Oded_Raiches" }, { "code": " shards: {\n shard1: {\n ns: '<db_name>.<collection_name>',\n size: Long(\"429786275912\"), ~429.79 GB\n count: 258306027,\n avgObjSize: 1663,\n numOrphanDocs: 0,\n storageSize: Long(\"218921512960\"), ~218.92 GB\n freeStorageSize: Long(\"122932756480\"), ~122.93 GB\n ...\n shard2: {\n ns: '<db_name>.<collection_name>',\n size: Long(\"429519998496\"), ~429.52 GB\n count: 293937414,\n avgObjSize: 1461,\n numOrphanDocs: 0,\n storageSize: Long(\"143384731648\"), ~143.38 GB\n freeStorageSize: Long(\"59074117632\"), ~59.074 GB \n ...\n shard3: {\n ns: '<db_name>.<collection_name>',\n size: Long(\"429381962949\"), ~429.38 GB\n count: 486677674,\n avgObjSize: 882,\n numOrphanDocs: 0,\n storageSize: Long(\"103618985984\"), ~103.62 GB\n freeStorageSize: Long(\"6423724032\"), ~6.42 GB \n ...\nsizeavgObjSizestorageSizeshard1avgObjSizeshard3shard3shard2shard3shard1shard3shard1freeStorageSize", "text": "Hi @Oded_RaichesI believe here is the relevant snippet from the collection stats. Note that I rearranged this a little to have the shards in sequence, and I also added annotations to the sizes to make it easier to read:What I observed is that the size (uncompressed size) are quite similar across the three shards.However the avgObjSize and the storageSize are telling a different story here:Although the uncompressed size is balanced, the compressed size are not balanced across the shards. You mentioned that you have larger documents and smaller documents in general, but they seem to be concentrated on certain shards instead of evenly balanced across all the shards. This leads to unbalanced disk use in general.I think it may be caused by the shard key. The best shard key should spread the workload evenly across the shards, but that doesn’t seem to be the case here. See Uneven Load Distribution for more details.To mitigate this, I would consider picking a shard key that allow for a more balanced distribution of large and small documents. 
Note that from MongoDB 5.0, you can reshard a collection.Best regards\nKevin", "username": "kevinadi" }, { "code": "compactcompact", "text": "Hi @kevinadi , don’t think the shard key is the issue.\nI tried out compact , and a disk with an ~80% used capacity went down to ~40%.\nSeems that the DB is leaving the disk fragmented, why is defrag not ran from time to time when needed? is there a way to know how much of a the data is fragmented and if compact run is needed?", "username": "Oded_Raiches" }, { "code": "freeStorageSizefreeStorageSize", "text": "Seems that the DB is leaving the disk fragmented, why is defrag not ran from time to time when needed?This is sort of alluded to in the earlier reply I did:There are large numbers of freeStorageSize, I’m guessing because the workload also involve deleting/updating a large number of documentsWhat I forgot to explain is that: WiredTiger is a no-overwrite storage engine. When a document is deleted, it was marked as deleted, the space marked as reusable (freeStorageSize). This is because:Of course this is a general assumption and may not be true in all cases. If you find that you won’t need to reuse that space, then compacting it is the right thing to do.Hope this clears things up.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "thanks! the info helped to build my solution ", "username": "Oded_Raiches" } ]
Shard sizes on disk are imbalanced
2023-02-27T17:06:17.331Z
Shard sizes on disk are imbalanced
1,300
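The imbalance above only became visible once storageSize and freeStorageSize were compared per shard, so it is worth scripting that check. A small sketch that reads the per-shard sections of collStats through mongos (database and collection names are placeholders):

from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["zios"]  # connect via mongos; names assumed

stats = db.command("collStats", "collection_name")
gib = 1024 ** 3
for shard, s in stats.get("shards", {}).items():
    print(f"{shard}: size={s['size']/gib:.1f}GiB "
          f"storage={s['storageSize']/gib:.1f}GiB "
          f"free={s.get('freeStorageSize', 0)/gib:.1f}GiB")

A large and growing freeStorageSize relative to storageSize on one shard is the fragmentation signature discussed above.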
null
[ "dot-net" ]
[ { "code": "", "text": "I’m aware of limits on azure app services, like max outbound tcp connections.Are there recommended configuration settings for using the Mongo C# driver in a busy application hosted on an azure app service. Things like - min/max connection pool size, timeouts, wait queue multiple (although this seems set for deprecation)", "username": "Paul_Allington" }, { "code": "", "text": "Hi @Paul_Allington, thanks for your question!There’s nothing specific to the driver that needs to be addressed; the limits on Azure App Services still apply.One thing that is worth mentioning is to instantiate your MongoClient as a singleton in your application, as recommended by the docs. In this way, you can take advantage of connection pooling.", "username": "yo_adrienne" }, { "code": "", "text": "I have it as a singleton yeah. Occasionally we get a big spike in waitqueuefullexceptions. So I wondered what defaults are recommended based on the limits in azure. Like what should be the max connection pool size and the timeouts?", "username": "Paul_Allington" }, { "code": "", "text": "Just to add on, when doing same query with nodejs i send 1000 concurrent request with ramp up period 1 sec from Jmeter and less than 1% request errors out but the same query errors 30-35% request using net core 6, don’t know why", "username": "MANISH_RANJAN" }, { "code": "", "text": "Hi Paul, we have just started experiencing this ourselves on a similar set up. Did you get to the bottom of a sensible set of defaults?", "username": "Daniel_Charlton" } ]
Recommended configuration for MongoDB C# hosted on Azure app service
2021-05-05T08:53:22.674Z
Recommended configuration for MongoDB C# hosted on Azure app service
6,347
null
[ "compass" ]
[ { "code": "use db1\ndb.col1.insert( { \"x\":1 } )\n\n2 questions :\n1) is it the good process to use the \"Documents\" menu and to do \"ADD DATA\" and \"Insert document\"?\n2) I yes, when I run the query, I have the message \"Insert not permitted while document contains errors.\"\nCould you help please?\n\n\n\n\n\n", "text": "HiI try to create automatically a database from Compass", "username": "jip31" }, { "code": "use db1db.col1.insert( { \"x\":1 } ){ \"x\":1 }db1", "text": "Hi @jip31,Thanks,\nSahi", "username": "Sahi_Muthyala" }, { "code": "use db1db.col1.insert( { \"x\":1 } ){ \"x\":1 }", "text": "Hi Sahi\nThanks for the first point\nConcerning the second point, I try to run the use db1 and db.col1.insert( { \"x\":1 } ) in Compass\n\nSorry, because I am a rookie but except if I am mistaken, it’s not possible to do that in Compass but just in mongosh?Yes { \"x\":1 } is the exact syntax", "username": "jip31" }, { "code": "", "text": "Hi Jean @jip31,What you have pulled up in the screenshot is the query bar in Compass which can be used to filter documents based on specified criteria. You can find more information on how to use that query bar to find specific documents here: https://www.mongodb.com/docs/compass/current/query/filter/.Unfortunately, the query bar cannot be used for inserting documents. However, you can run those 2 commands in the embedded shell (mongosh) in Compass to add a document to db.col1. Once you run the commands in the embedded mongosh, you can click on the refresh button next to “Databases” to see the data you added via the embedded mongosh in Compass. I have attached a screenshot of what this should look like for your reference.\n\nScreenshot 2023-03-14 at 6.08.36 PM2684×1358 296 KB\nHope this helps! Let me know if you have any other questions.Thanks,\nSahi", "username": "Sahi_Muthyala" }, { "code": "", "text": "Thanks a lot Sahi!!!", "username": "jip31" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help to create a database from Compass
2023-03-13T15:33:52.518Z
Help to create a database from Compass
756
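The behaviour the embedded shell relies on is that MongoDB creates the database and collection lazily on the first insert, and the same two commands translate directly to any driver. For instance, a small PyMongo equivalent of what was run in mongosh:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# Neither db1 nor col1 needs to exist beforehand; the first insert creates both.
result = client["db1"]["col1"].insert_one({"x": 1})
print(result.inserted_id)
print(client.list_database_names())  # db1 now appears in the list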
null
[ "aggregation", "atlas-search" ]
[ { "code": "", "text": "Hi everyone,I am struggling to find a way to calculate distance between a user point (the user location) and points in a collection within Atlas Search. Neither “near” nor “geoWithin” offer any way to calculate distance in a 2dsphere (unlike $geoNear in aggregation). My app needs to conduct both text search (on fields like book title, description) and geospatial queries (proximity to user), and Atlas search seems to be capable to do both in one query, but there is no option to calculate the distance. Am I missing something as it is a substantial disadvantage? Many thanks for all help.", "username": "Gueorgui_58194" }, { "code": "score = pivot /pivot + distance\nscore(pivot +distance)=pivot\nscore*pivot + score*distance = pivot\ndistance = (pivot - score*pivot)/score\n", "text": "Hi @Gueorgui_58194 ,Thats an interesting question. i saw that near operator uses pivot and distance to compute score for geo data :If you use pivot as 1 you might be able to take a score and figure out a distance for each document.Learn how to search near a numeric, date, or GeoJSON point value.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "That’s very clever Pavel. Thanks very much. I will try it now.", "username": "Gueorgui_58194" }, { "code": "", "text": "Hi Pavel,Following up on this, I gave it a try but the problem is that the score for my searches is a combination of the scores for different clauses (not only the “near” operator in Atlas search). I don’t see any way I can extract the score just for “near”. I ended up calculating the distance using longitude and latitude in my app (rather than the database), which comes up pretty accurate. This does not completely meet my use case as I am not able to do a pure distance sort on the database (I can do it in the app, but that would mean getting all the results from the db, doing the sort and then getting the relevant page of the results). If the search is purely on distance, Mongo sorts automatically, but I cannot rely on this when the search includes other parameters. I think it would be good to expose the distance calculated by Mongo in the API (since it is done anyway by Mongo when using near or geoWithin in Atlas search).Thanks very much for loking into this.Kind regards,\nGueorgui", "username": "Gueorgui_58194" }, { "code": "", "text": "Hi @Gueorgui_58194 ,I’ve raised this with our Product managers for search.CC: @MarcusThanks for sharing\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "There is this feature request you can vote on to be notified when we release this feature.", "username": "Marcus" }, { "code": "", "text": "Hello,Also I am currently struggling with this. My idea currently is this:\n → Use “near” to calculate the distance and use the “function”-option in the “near”-score to calculate the distance like this: distance = (pivot - score*pivot)/score. Is this possible? Make sure that this is an integer value and add this number as “score” for the “near”\n → The other clause i use is a simple “text” to calculate the search match which is a value between 0-1. Am I correct?If I understood everything correctly, my final score should now consist of the integer value which is my distance and the desimals would be the final score.", "username": "Sakarias_Jarvinen" }, { "code": "", "text": "I ended up using a hybrid solution where I use the $search for filtering my query and calculating the “score” with using only “near”. 
Then on my aggregate, I use “$addFields” to calculate the distance knowing the “pivot” and the “score”. After that I have my other addFields to add more fields required to sort my query and finally I use normal “$sort”-stage and “$limit” and return my results.I was hoping to use $search to calculate also “Text match” for a few fields on my item but it seems like I need to wait for updates for $search to be able to solve my use case. I think I will go with Elastic search for now and connect my collection to elastic search and then give mongo atlas another try when their search index offers all the required features.In the end, compared to not using “$search” at all, I am left with a much more efficient solution to filter and calculate the distance than “$geoNear” that I was forced to use before. $geoNear scans the whole collection always and therefore querying more than 200k of items with this method resulted in horribly performance queries that were too much for the database cluster.-Sakarias", "username": "Sakarias_Jarvinen" } ]
Calculating distance in Atlas Search
2022-10-27T14:39:14.739Z
Calculating distance in Atlas Search
2,911
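A compact way to apply the workaround derived above is to run $search with only the near clause, project the score, and invert the pivot formula client-side: distance = pivot * (1 - score) / score. This sketch assumes the score comes solely from near (mixing in other clauses breaks the inversion, as noted in the thread), a geo-indexed location field, and placeholder names:

from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["test"]["places"]  # names assumed
pivot = 1000.0  # meters; must match the pivot used in the query below
user_point = {"type": "Point", "coordinates": [77.5946, 12.9716]}  # placeholder

pipeline = [
    {"$search": {"near": {"origin": user_point, "pivot": pivot, "path": "location"}}},
    {"$project": {"name": 1, "score": {"$meta": "searchScore"}}},
    {"$limit": 20},
]

for doc in coll.aggregate(pipeline):
    score = doc["score"]
    # Invert score = pivot / (pivot + distance) to recover the distance.
    doc["distance_m"] = pivot * (1 - score) / score if score else None
    print(doc)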
null
[ "golang" ]
[ { "code": "*event.CommandMonitor*event.CommandStartedEventbson.RawbsonCommand[evt.CommandName]", "text": "Hi,In our application, we are relying on the *event.CommandMonitor option when creating clients to log structured information about the command we are running, and one of the fields we would be interesting in accessing is the collection name. However the received *event.CommandStartedEvent struct doesn’t directly have this information (even though it contains the bson.Raw command where you could potentially access it).So my question is if there’s a reliable way to access the collection name within the scope of this function, or if it’s safe to assume bsonCommand[evt.CommandName] will always contain the collection name?Thanks in advance!", "username": "Rodrigo_Arguello" }, { "code": "bsonCommand[evt.CommandName]CommandStartedEvent", "text": "Hey @Rodrigo_Arguello welcome and thanks for the question!As far as I can tell, the collection name is always the value associated with the command name (i.e. bsonCommand[evt.CommandName]). Check out the Database Commands section of the MongoDB Manual for a more comprehensive answer.As far as the actual CommandStartedEvent (and other command events), the information provided is specified in the Events API section of the MongoDB driver specifications. That specification doesn’t currently include collection name, but I can definitely see a use case for it.", "username": "Matt_Dale" }, { "code": "", "text": "I’ve proposed a change to the drivers specification to add database name and collection name to all command logging and monitoring events (see DRIVERS-2575).@Rodrigo_Arguello can you tell me more about your use case and why you need access to the collection name so I can include that in the ticket?", "username": "Matt_Dale" }, { "code": "", "text": "Hi @Matt_Dale and thanks for your reply and your proposal!As for the use case, I work in Datadog’s APM product where we support tracing some libraries out of the box for different languages.We support the official mongodb drivers for some languages and we wanted to start adding more information to the spans we generate (in this case the collection when applies for the command the user is running).Also I recently noticed this limitation is also present in other official MongoDB drivers like the Ruby one (not sure about the rest), so it would be awesome if this change would be implemented across all of them.I hope this is helpful for your proposal. Also looking forward for it to be implemented!", "username": "Rodrigo_Arguello" } ]
How to reliably access the collection name when using event.CommandMonitor in Golang
2023-02-16T11:26:10.310Z
How to reliably access the collection name when using event.CommandMonitor in Golang
1,066
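The command-document shape discussed in the Go thread above can be illustrated with the Node.js driver, which is the closest JavaScript analogue; this is only an illustration of the pattern, not the Go API. The connection string, database and collection names are placeholders.

```javascript
const { MongoClient } = require("mongodb");

async function main() {
  // monitorCommands: true enables command monitoring events in the Node.js driver
  const client = new MongoClient("mongodb://localhost:27017", { monitorCommands: true });

  client.on("commandStarted", (event) => {
    // For collection-level commands the command document looks like
    // { find: "users", filter: {...}, ... }, so the value under the command
    // name key is the collection name.
    const target = event.command[event.commandName];
    const coll = typeof target === "string" ? target : "(not a collection-level command)";
    console.log(`${event.commandName} on ${event.databaseName}.${coll}`);
  });

  await client.connect();
  await client.db("test").collection("users").findOne({});
  await client.close();
}

main().catch(console.error);
```

The type check matters because some commands (for example hello or endSessions) carry a number or an array under the command name key rather than a collection name.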
null
[ "dot-net", "cxx" ]
[ { "code": "set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -O0 -g --coverage /EHsc\")>> cmake .. -G \"Visual Studio 17 2022\" -A x64 -DBUILD_VERSION=3.6.0 -DBOOST_ROOT=C:\\Users\\dgm55\\source\\repos\\boost_1_81_0 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus\" -DCMAKE_PREFIX_PATH=C:\\Users\\dgm55\\source\\repos\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver>> cmake --build .C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(305,1): warning C4530\n: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc [C:\\Users\\dgm55\\source\\repos\\mongo-cx\nx-driver\\build\\src\\bsoncxx\\test\\test_bson.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(298,1): message : whi\nle compiling class template member function 'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,s\ntd::char_traits<char>>::operator <<(unsigned int)' [C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\test\n\\test_bson.vcxproj]\nC:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\src\\bsoncxx/test_util/to_string.hh(53,57): message : see reference to func\ntion template instantiation 'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<\nchar>>::operator <<(unsigned int)' being compiled [C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\test\\\ntest_bson.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(671,75): message : se\ne reference to class template instantiation 'std::basic_ostream<char,std::char_traits<char>>' being compiled [C:\\Users\\\ndgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\test\\test_bson.vcxproj]\n Generating Code...\nC:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\src\\bsoncxx\\test\\bson_builder.cpp(1705): fatal error C1001: Internal comp\niler error. [C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\test\\test_bson.vcxproj]\n (compiler file 'D:\\a\\_work\\1\\s\\src\\vctools\\Compiler\\Utc\\src\\p2\\main.c', line 224)\n To work around this problem, try simplifying or changing the program near the locations listed above.\nC:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build>cmake .. 
-G \"Visual Studio 17 2022\" -A x64 -DBUILD_VERSION=3.6.0 -DBOOST_ROOT=C:\\Users\\dgm55\\source\\repos\\boost_1_81_0 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus\" -DCMAKE_PREFIX_PATH=C:\\Users\\dgm55\\source\\repos\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\n-- Selecting Windows SDK version 10.0.22000.0 to target Windows 10.0.19043.\n-- No build type selected, default is Release\n-- Auto-configuring bsoncxx to use C++17 std library polyfills since C++17 is active and user didn't specify otherwise\nbsoncxx version: 3.6.0\nfound libbson version 3.6.0\nmongocxx version: 3.6.0\nfound libmongoc version 3.6.0\n-- Build files generated for:\n-- build system: Visual Studio 17 2022\n-- instance: C:/Program Files/Microsoft Visual Studio/2022/Community\n-- instance: x64\n-- Configuring done\n-- Generating done\n-- Build files have been written to: C:/Users/dgm55/source/repos/mongo-cxx-driver/build\n\nC:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build>cmake --build .\nMSBuild version 17.4.1+9a89d02ff for .NET Framework\n bsoncxx_shared.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\Debug\\bsoncxx.dll\n bsoncxx_testing.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\Debug\\bsoncxx-testing.dll\n mongocxx_mocked.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\Debug\\mongocxx-mocked.dll\n mongocxx_shared.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\Debug\\mongocxx.dll\n array.cpp\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(305,1): warning C4530\n: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc [C:\\Users\\dgm55\\source\\repos\\mongo-cx\nx-driver\\build\\src\\bsoncxx\\test\\test_bson.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(298,1): message : whi\nle compiling class template member function 'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,s\ntd::char_traits<char>>::operator <<(unsigned int)' [C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\test\n\\test_bson.vcxproj]\nC:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\src\\bsoncxx/test_util/to_string.hh(53,57): message : see reference to func\ntion template instantiation 'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<\nchar>>::operator <<(unsigned int)' being compiled [C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\test\\\ntest_bson.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(671,75): message : se\ne reference to class template instantiation 'std::basic_ostream<char,std::char_traits<char>>' being compiled [C:\\Users\\\ndgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\test\\test_bson.vcxproj]\n bson_b_date.cpp\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(305,1): warning C4530\n: C++ exception handler used, but unwind semantics are not enabled. 
Specify /EHsc [C:\\Users\\dgm55\\source\\repos\\mongo-cx\nx-driver\\build\\src\\bsoncxx\\test\\test_bson.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(298,1): message : whi\nle compiling class template member function 'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,s\ntd::char_traits<char>>::operator <<(unsigned int)' [C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\test\n\\test_bson.vcxproj]\nC:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\src\\bsoncxx/test_util/to_string.hh(53,57): message : see reference to func\ntion template instantiation 'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_traits<\nchar>>::operator <<(unsigned int)' being compiled [C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\test\\\ntest_bson.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(671,75): message : se\ne reference to class template instantiation 'std::basic_ostream<char,std::char_traits<char>>' being compiled [C:\\Users\\\ndgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\test\\test_bson.vcxproj]\n main.cpp\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(544,1): warning C4530\n: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc [C:\\Users\\dgm55\\source\\repos\\mongo-cx\nx-driver\\build\\src\\bsoncxx\\test\\test_bson.vcxproj]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(536,1): message : whi\nle compiling class template member function 'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,s\ntd::char_traits<char>>::write(const _Elem *,std::streamsize)' [C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\b\nsoncxx\\test\\test_bson.vcxproj]\n with\n [\n _Elem=char\n ]\nC:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\src\\third_party\\catch\\include\\catch.hpp(13929,9): message : see reference\nto function template instantiation 'std::basic_ostream<char,std::char_traits<char>> &std::basic_ostream<char,std::char_\ntraits<char>>::write(const _Elem *,std::streamsize)' being compiled [C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\n\\src\\bsoncxx\\test\\test_bson.vcxproj]\n with\n [\n _Elem=char\n ]\nC:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.34.31933\\include\\ostream(671,75): message : se\ne reference to class template instantiation 'std::basic_ostream<char,std::char_traits<char>>' being compiled [C:\\Users\\\ndgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\test\\test_bson.vcxproj]\n Generating Code...\ntest_bson.dir\\Debug\\bson_builder.obj : fatal error LNK1136: invalid or corrupt file [C:\\Users\\dgm55\\source\\repos\\mongo-\ncxx-driver\\build\\src\\bsoncxx\\test\\test_bson.vcxproj]\n test_client_side_encryption_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Deb\n ug\\test_client_side_encryption_specs.exe\n test_command_monitoring_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\t\n est_command_monitoring_specs.exe\n test_crud_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_crud_specs\n .exe\n test_driver.vcxproj -> 
C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_driver.exe\n test_gridfs_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_gridfs_s\n pecs.exe\n test_instance.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_instance.exe\n test_logging.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_logging.exe\n test_mongohouse_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_mong\n ohouse_specs.exe\n test_read_write_concern_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\t\n est_read_write_concern_specs.exe\n test_retryable_reads_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test\n _retryable_reads_specs.exe\n test_transactions_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_tr\n ansactions_specs.exe\n test_unified_format_spec.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_u\n nified_format_spec.exe\n test_versioned_api.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_version\n ed_api.exe\n\nC:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build>cmake --build . --target install\nMSBuild version 17.4.1+9a89d02ff for .NET Framework\n bsoncxx_shared.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\Debug\\bsoncxx.dll\n bsoncxx_testing.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\bsoncxx\\Debug\\bsoncxx-testing.dll\n mongocxx_mocked.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\Debug\\mongocxx-mocked.dll\n mongocxx_shared.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\Debug\\mongocxx.dll\ntest_bson.dir\\Debug\\bson_builder.obj : fatal error LNK1136: invalid or corrupt file [C:\\Users\\dgm55\\source\\repos\\mongo-\ncxx-driver\\build\\src\\bsoncxx\\test\\test_bson.vcxproj]\n test_client_side_encryption_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Deb\n ug\\test_client_side_encryption_specs.exe\n test_command_monitoring_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\t\n est_command_monitoring_specs.exe\n test_crud_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_crud_specs\n .exe\n test_driver.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_driver.exe\n test_gridfs_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_gridfs_s\n pecs.exe\n test_instance.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_instance.exe\n test_logging.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_logging.exe\n test_mongohouse_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_mong\n ohouse_specs.exe\n test_read_write_concern_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\t\n 
est_read_write_concern_specs.exe\n test_retryable_reads_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test\n _retryable_reads_specs.exe\n test_transactions_specs.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_tr\n ansactions_specs.exe\n test_unified_format_spec.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_u\n nified_format_spec.exe\n test_versioned_api.vcxproj -> C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build\\src\\mongocxx\\test\\Debug\\test_version\n ed_api.exe\n\nC:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\\build>\n", "text": "I’m getting a fatal error and warnings during compile. Data security is going to be of v. high importance so even if I just ignored it and a linked-in app worked I’m not sure whether it would be safe?I’m compiling the c++ drivers under Win10 with VS2022 (having just built and installed the c drivers)\nHowever getting a fatal error from bson_builder.cpp (as per output below)\nand a warning for many other files\n“C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc”\nas below\nI did try and add EHsc to the flags in CMakeLists.txt, but it seem to be ignored\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -O0 -g --coverage /EHsc\")These are the commands entered via VS Command Prompt:\n>> cmake .. -G \"Visual Studio 17 2022\" -A x64 -DBUILD_VERSION=3.6.0 -DBOOST_ROOT=C:\\Users\\dgm55\\source\\repos\\boost_1_81_0 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus\" -DCMAKE_PREFIX_PATH=C:\\Users\\dgm55\\source\\repos\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\Users\\dgm55\\source\\repos\\mongo-cxx-driver\n>> cmake --build .This is the one of the warnings and the fatalThe full output is below", "username": "david_d" }, { "code": "\nC:\\Users\\dgm55\\source\\repos\\mongo-c-driver\\cmake-build>cmake .. 
-G \"Visual Studio 17 2022\" -A x64 -DBUILD_VERSION=3.6.0 -DBOOST_ROOT=C:\\Users\\dgm55\\source\\repos\\boost_1_81_0 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus\" -DCMAKE_PREFIX_PATH=C:\\Users\\dgm55\\source\\repos\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\Users\\dgm55\\source\\repos\\mongo-c-driver\n\n-- Selecting Windows SDK version 10.0.22000.0 to target Windows 10.0.19043.\n-- The C compiler identification is MSVC 19.34.31937.0\n-- Detecting C compiler ABI info\n-- Detecting C compiler ABI info - done\n-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe - skipped\n-- Detecting C compile features\n-- Detecting C compile features - done\n-- Looking for a CXX compiler\n-- Looking for a CXX compiler - C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe\n-- The CXX compiler identification is MSVC 19.34.31937.0\n-- Detecting CXX compiler ABI info\n-- Detecting CXX compiler ABI info - done\n-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.34.31933/bin/Hostx64/x64/cl.exe - skipped\n-- Detecting CXX compile features\n-- Detecting CXX compile features - done\nstoring BUILD_VERSION 3.6.0 in file VERSION_CURRENT for later use\n-- Build and install static libraries\n-- Found Python3: C:/Users/dgm55/AppData/Local/Programs/Python/Python311/python.exe (found version \"3.11.2\") found components: Interpreter\n -- Using bundled libbson\nlibbson version (from VERSION_CURRENT file): 3.6.0\n-- Looking for snprintf\n-- Looking for snprintf - found\n-- Performing Test BSON_HAVE_TIMESPEC\n-- Performing Test BSON_HAVE_TIMESPEC - Success\n-- struct timespec found\n-- Looking for gmtime_r\n-- Looking for gmtime_r - not found\n-- Looking for rand_r\n-- Looking for rand_r - not found\n-- Looking for strings.h\n-- Looking for strings.h - not found\n-- Looking for strlcpy\n-- Looking for strlcpy - not found\n-- Looking for stdbool.h\n-- Looking for stdbool.h - found\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD\n-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed\n-- Looking for pthread_create in pthreads\n-- Looking for pthread_create in pthreads - not found\n-- Looking for pthread_create in pthread\n-- Looking for pthread_create in pthread - not found\n-- Found Threads: TRUE\nlibmongoc version (from VERSION_CURRENT file): 3.6.0\n-- Searching for zlib CMake packages\n-- Could NOT find ZLIB (missing: ZLIB_LIBRARY ZLIB_INCLUDE_DIR)\n-- Enabling zlib compression (bundled)\n-- Looking for include file unistd.h\n-- Looking for include file unistd.h - not found\n-- Looking for include file stdarg.h\n-- Looking for include file stdarg.h - found\n-- Searching for compression library zstd\n-- Found PkgConfig: D:/StrawberryPerl/perl/bin/pkg-config.bat (found version \"0.26\")\n-- Checking for module 'libzstd'\n-- Can't find libzstd.pc in any of D:/StrawberryPerl/c/lib/pkgconfig\nuse the PKG_CONFIG_PATH environment variable, or\nspecify extra search paths via 'search_paths'\n-- Not found\n-- Looking for sys/types.h\n-- Looking for sys/types.h - found\n-- Looking for stdint.h\n-- Looking for stdint.h - found\n-- Looking for stddef.h\n-- Looking for stddef.h - found\n-- Check size of socklen_t\n-- Check size of socklen_t - done\n-- Looking for sched_getcpu\n-- Looking for sched_getcpu - not found\n-- Searching for compression library header snappy-c.h\n-- Not found (specify -DCMAKE_INCLUDE_PATH=/path/to/snappy/include 
for Snappy compression)\n-- No ICU library found, SASLPrep disabled for SCRAM-SHA-256 authentication.\n-- If ICU is installed in a non-standard directory, define ICU_ROOT as the ICU installation path.\nSearching for libmongocrypt\n-- libmongocrypt not found. Configuring without Client-Side Field Level Encryption support.\n-- Performing Test MONGOC_HAVE_SS_FAMILY\n-- Performing Test MONGOC_HAVE_SS_FAMILY - Failed\n-- Compiling against Secure Channel\n-- Compiling against Windows SSPI\n-- Building with MONGODB-AWS auth support\n-- Build files generated for:\n-- build system: Visual Studio 17 2022\n-- instance: C:/Program Files/Microsoft Visual Studio/2022/Community\n-- instance: x64\n-- Configuring done\n-- Generating done\nCMake Warning:\n Manually-specified variables were not used by the project:\n\n BOOST_ROOT\n\n\n-- Build files have been written to: C:/Users/dgm55/source/repos/mongo-c-driver/cmake-build\n", "text": "Looking back I wonder if I might be missing dependencies even earlier in the toolchain.\nI repeated the steps of recompiling the underlying C driver but even at the first step had missing stdlib headers (although not sure why cmake can’t find them as they’re definitly installed and the path seems to be comprehensive enough to cover them?)\nAlso despite a clean install of Strawberry Perl it still didn’t find the libzstd module which I presume is also related to the zlib error above that - although I can’t find any documentation on what needs to be installed there (it’s not listed as a dependency on the install page: Installing the MongoDB C Driver (libmongoc) and BSON library (libbson) — libmongoc 1.23.2)Is there a precompiled package for windows I can use to just install the library/drivers? The only ones I have been able to track down are for Linux or Mac which I can’t use for this project.", "username": "david_d" }, { "code": "", "text": "Hi @david_d ,Can you try the steps outlined in Getting Started with MongoDB and C++ | MongoDB ?\nC++ driver package is also available with vcpkg (vcpkg - Open source C/C++ dependency manager from Microsoft) however it may not have the latest version.", "username": "Rishabh_Bisht" }, { "code": "", "text": "Thanks @Rishabh_Bisht!\nI’ve installed with vcpkg - as at this point I just need to start with a basic test app\nvcpkg install mongo-c-driver\nvcpkg install mongo-cxx-driver\nNewbie confession: If there had been a link to vcpkg in Windows I would have probably used that as the first option - but despite programming c++ for years I’ve never needed one in windows so missed the reference.Assuming it works as we need I will try again following the steps in your article to ensure the latest build.", "username": "david_d" }, { "code": "#pragma once\n\n#include <mongocxx/client.hpp>\n#include <bsoncxx/builder/stream/document.hpp>\n#include <bsoncxx/json.hpp>\n#include <mongocxx/uri.hpp>\n#include <mongocxx/instance.hpp>\n\n#include <algorithm>\n#include <iostream>\n#include <vector>\n\nusing namespace std;\n\nstd::string getEnvironmentVariable(std::string environmentVarKey)\n{\n\tchar* pBuffer = nullptr;\n\tsize_t size = 0;\n\tauto key = environmentVarKey.c_str();\n\t\n", "text": "Getting closer…\nI’ve had a look at your article but since this directs to the original links which (as above) I tried and failed to compile the c and c++ drivers I’m not going to get any further with compiling the drivers from scratch using that route.\nAlso obviously (although not to vcpkg nubes) the correct package manager command for a modern PC was not the 
‘obvious’ command (as I listed above) (does anyone still develop for x86?!?) but:\nvcpkg install mongo-c-driver:x64-windows\nvcpkg install mongo-cxx-driverx64-windowsIf I understand it correctly\nhas an undefined variable (s_Cluster0_uri) so presumably isn’t expected to compile as is. I replaced this with: mongoURImongoURI is constructed from mongoURIStr which looks something looking like\nstd::string mongoURIStr = \"mongodb+srv://usermaster:@mytestcluster.bnmwf3b.mongodb.net/?retryWrites=true&w=majority[I have no idea why putting a database password in a plaintext environmental variable would be more secure than one inside a compiled executable, although perhaps it’s just because people might not think to look there, so I simplified that out for the test code.]the article lists “v_noabi” directories as required but they aren’t created by the vcpkg\nso I will need to have a bit more of a play before it compiles, but hopefully this is forward movement", "username": "david_d" }, { "code": "mongoURImongoURIStr./vcpkg integrate installpackagesinstalled", "text": "Hi @david_d ,I’ve had a look at your article but since this directs to the original links\nThe links are for reference only. The article actually is independent in itself and does everything step by step.Regarding the get-started-cxx/studentRecordsConsoleApp.cpp at main · mongodb-developer/get-started-cxx · GitHub,\nThanks for the catch. It was a typo. You did the right thing. It’s meant to be replaced with mongoURI. It should be fixed now in the github repo.If you look at the code, mongoURIStr is actually fetched from an environment variable.Regarding v_noabi directories - those are just includes that you should not need with vcpkg if you run\n./vcpkg integrate install. The content of the v_noabi should however still be present under vcpkg folder - either in packages folder or installed folder.", "username": "Rishabh_Bisht" }, { "code": ">cmake --build . --config RelWithDebInfo --target install\n...\n -- Installing: C:/Program Files/mongo-c-driver/lib/cmake/libmongoc-1.0/libmongoc-1.0-config-version.cmake\n -- Installing: C:/Program Files/mongo-c-driver/lib/cmake/libmongoc-static-1.0/libmongoc-static-1.0-config.cmake\n -- Installing: C:/Program Files/mongo-c-driver/lib/cmake/libmongoc-static-1.0/libmongoc-static-1.0-config-version.cma\n ke\n ****** B A T C H R E C U R S I O N exceeds STACK limits ******\n Recursion Count=289, Stack Usage=90 percent\n ****** B A T C H PROCESSING IS A B O R T E D ******\n -- Installing: C:/Program Files/mongo-c-driver/share/mongo-c-driver/uninstall.cmd\n", "text": "Thanks @Rishabh_Bisht\nTried your tutorial and ultimately didn’t get any further than I had without it (although it was clearer and I wish I’d initially gone through that rather than the original one)\nI did see that your code loads the uri from an environmental variable. I wasn’t clear what that adds? There don’t seem to be any security benefits and its an extra manual step to go wrong and it’s definitely not something which would be secure in a production environment. Given its a tutorial why not just have a string variable and just paste the autogenerated string into that (ie the one from the database deployment page? Cloud: MongoDB Cloud ). Much simpler than manually create an env variable with my database pasword exposed to any program which even happened to glance at env. So you’re teaching people a really bad! 
security habit.Anyway I’ve now built with your tutorial and still get errorsI assume this one when building for the c-driver doesn’t matter?Exception Unhandled:\nWhen executing following the VS build get the following (very uninformative) error (this is the same as the furtherst point I managed when using the vkpkg files\nUnhandled exception at 0x00007FF811A4CD29 in MongoCXXGettingStarted.exe: Microsoft C++ exception: mongocxx::v_noabi::operation_exception at memory location 0x000000ABE1CFE1A0.\nimage1794×1878 280 KB\n", "username": "david_d" }, { "code": "", "text": "Microsoft C++ exception: mongocxx::v_noabi::operation_exceptionYeah, the “batch recursion exceeds limit” is a known issue and should be benign.The exception you are seeing seem to be coming from the server. See the documentation here for reference - MongoDB C++ Driver: mongocxx::operation_exception Class ReferenceCan you try wrapping the getDatabases method in a try-catch and check what’s the error code/message you are getting in the exception?", "username": "Rishabh_Bisht" }, { "code": "vector<string> getDatabases(mongocxx::client& client)\n{\n\ttry {\n\t\tvector<string> cldn = client.list_database_names();\n\t\tfor (auto& dn : cldn)\n\t\t\tcout << dn << \"\\n\";\n\t\treturn cldn;\n\t}\n\tcatch (const std::exception& e) {\n\t\tstd::cout << \"ERROR: getDatabases: \" << e.what() << std::endl;\n\t\treturn {};\n\t}\n}\n#include <mongocxx/exception/operation_exception.hpp>\nvector<string> getDatabases(mongocxx::client& client)\n{\n\n\tcout << \"password: \" << client.uri().password() << std::endl;\n\tcout << \"username: \" << client.uri().username() << std::endl;\n\tcout << \"auth_source: \" << client.uri().auth_source() << std::endl;\n//\tcout << \"appname: \" << client.uri().appname().value() << std::endl;\n\n\tbool contains_err_info{ false };\n\tauto err_info = bsoncxx::builder::basic::document{}; \n\ttry {\n\t\tvector<string> cldn = client.list_database_names();\n\t\tfor (auto& dn : cldn)\n\t\t\tcout << dn << \"\\n\";\n\t\treturn cldn;\n\t}\n\tcatch (mongocxx::operation_exception const& e) {\n\t\tstd::cout << \"ERROR: getDatabases: \" << e.what() << std::endl;\n//\t\tauto error = e.raw_server_error()->view();\n//\t\tauto result = error[\"writeConcernErrors\"][0][\"errInfo\"];\n//\t\tcontains_err_info = (err_info == result.get_document().view());\n\t\treturn {};\n\t}\n}\n", "text": "Oh seriously! talk about amateur hour \nWhen I copied the Atlas URI I just replaced the word password with my password. I didn’t delete the bracketing <> so it was actually submitting the password as “”\nI didn’t realise until I printed out the client variables and thought it was a bit odd that only the password was bracketed. So after deleting the bracketing <> it’s working now. For reference here are the quick t/c functions I used. I’ll expand these later - presumably the specific invalid password error is returned in raw_server_error?With the simple t/c below I get:-\nERROR: getDatabases: bad auth : authentication failed: generic server errorWith the mongoDB specific one:-\npassword: \nusername: myUsr\nauth_source: admin\nERROR: getDatabases: bad auth : authentication failed: generic server error", "username": "david_d" }, { "code": "e.code()raw_server_error", "text": "Glad to hear it’s working for you!You could query e.code() and define a corresponding error in your application. 
Here is a map for error codes thrown by server - mongo/error_codes.yml at master · mongodb/mongo · GitHub\nFor reference, the situation you faced returns error code 11 (I reproduced it on my end) - UserNotFoundraw_server_error returns an optional. In case you want to make use of it, here’s a sample code - mongo-cxx-driver/transactions.cpp at master · mongodb/mongo-cxx-driver · GitHub", "username": "Rishabh_Bisht" }, { "code": "static const mongocxx::uri mongoURI = mongocxx::uri{ mongoURIStr };\nDebug Assertion Failed!\n\nProgram: C:\\mongo-cxx-driver\\bin\\bsoncxx.dll\nFile: C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.35.32215\\include\\xstring\nLine: 1258\n\nExpression: non-zero size null string_view\n1>C:\\Users\\dgm55\\source\\repos\\MongoCXXGettingStarted\\MongoCXXGettingStarted\\MongoCXXGettingStarted.cpp(159,42): warning C4927: illegal conversion; more than one user-defined conversion has been implicitly applied\n1>C:\\Users\\dgm55\\source\\repos\\MongoCXXGettingStarted\\MongoCXXGettingStarted\\MongoCXXGettingStarted.cpp(159,42): message : while calling the constructor 'bsoncxx::v_noabi::view_or_value<bsoncxx::v_noabi::document::view,bsoncxx::v_noabi::document::value>::view_or_value(View)'\n1> with\n1> [\n1> View=bsoncxx::v_noabi::document::view\n1> ]\n1>C:\\mongo-cxx-driver\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp(60,5): message : see declaration of 'bsoncxx::v_noabi::view_or_value<bsoncxx::v_noabi::document::view,bsoncxx::v_noabi::document::value>::view_or_value'\n// Find the document with given key-value pair.\nvoid findDocument(mongocxx::collection& collection, const string& key, const string& value)\n{\n\t// Create the query filter\n\tauto filter = bsoncxx::builder::stream::document{} << key << value << bsoncxx::builder::stream::finalize;\n\n\t//Add query filter argument in find\n\tauto cursor = collection.find({ filter });\n\n\tfor (auto&& doc : cursor)\n\t{\n\t\tcout << bsoncxx::to_json(doc) << endl;\n\t}\n}\n// Find the document with given key-value pair.\nvoid findDocument(mongocxx::collection& collection, const string& key, const string& value)\n{\n\t// Create the query filter\n\tbsoncxx::document::view_or_value filter = bsoncxx::builder::stream::document{} << key << value << bsoncxx::builder::stream::finalize;\n\n\t//Add query filter argument in find\n\tmongocxx::v_noabi::cursor cursor = collection.find({ filter });\n\n\tfor (auto&& doc : cursor)\n\t{\n\t\tcout << bsoncxx::to_json(doc) << endl;\n\t}\n}\n", "text": "@Rishabh_Bisht\nSorry another crash with the tutorial - hopefully it’s a quick fix for you to answer.\nThe tutorial works well until I switch to a release build when it crashes on execution with a failed assert as the uri is contructed.\nIt builds with only one warning (as below - but this isn’t different for debug version).\n[incidentally why did you make mongoURI global? There seems no need for this and it would seem to have been better to have declared it local to main.\nimage1829×1770 354 KB\n\n\nimage1829×1770 379 KB\nThe build warning doesn’t appear to have any functional impact but is easily fixed by explicitly declaring the autosfromto", "username": "david_d" }, { "code": "", "text": "I gave it a try on my end but can’t seem to reproduce with a release build. Is your code exactly same as tutorial for creating URI object from mongoURIStr or do you have done any local modifications?\nYou could move the URI creation inside main also if that helps. 
It was kept outside because I was going to do some checks on it in different part of the code but I later removed that part to keep the tutorial simple.", "username": "Rishabh_Bisht" } ]
Fatal error building C++ drivers with VS 2022 under Win10 - are errors OK?
2023-03-04T11:56:09.694Z
Fatal error building C++ drivers with VS 2022 under Win10 - are errors OK?
2,112
null
[ "java", "app-services-user-auth", "android" ]
[ { "code": "", "text": "Hi,\nI’m new to MongoDB and I’m wondering how I can best implement email/password authentication with MongoDB for the Login activity of an Android app using the Realm Java SDK. I have already looked at the guides available, however, the guides assume that users are only contained within one class and have no other information besides credentials. For the app that I am developing, we have two classes, each representing a different role with different permissions/access levels, and each user type has other data associated to it; their first and last name for example. Currently, I am considering merging these two classes into one User class (possibly keeping those other two classes to extend the User class) and using the new User class for authentication. I am also aware that MongoDB supports custom user data. My question is: is that the best way of going about it? Or is there a more efficient solution out there that doesn’t use App Services’s Users tab (for example, retrieving input, querying the database which already has the credentials there, seeing if the information matches the input, then logging into the app if it does)?", "username": "Samir_Saidi" }, { "code": "", "text": "Not a java developer, so not sure I can help you. What I would suggest is to explore the Custom Data that can be set up and also used to define rules on the device sync.", "username": "Damian_Danev" } ]
Email/password authentication with MongoDB
2023-03-14T11:47:13.926Z
Email/password authentication with MongoDB
1,092
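A small sketch of the custom-user-data suggestion from the thread above, written with the Realm JS SDK purely for illustration (the Java SDK exposes the same email/password credentials and custom user data concepts). The App ID, email, password and custom-data fields are placeholders.

```javascript
const Realm = require("realm");

async function logInAndReadProfile() {
  const app = new Realm.App({ id: "myapp-abcde" }); // placeholder App ID

  // Email/password authentication against App Services
  const credentials = Realm.Credentials.emailPassword("[email protected]", "s3cr3t");
  const user = await app.logIn(credentials);

  // Role, first/last name, etc. can live in custom user data instead of a
  // separate "user type" class; sync rules can then key off these fields.
  await user.refreshCustomData();
  console.log(user.customData); // e.g. { role: "teacher", firstName: "Jane", lastName: "Doe" }
  return user;
}

logInAndReadProfile().catch(console.error);
```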
null
[ "queries", "node-js", "crud", "mongoose-odm" ]
[ { "code": "const structure = new mongoose.Schema({\n Name: {\n type:String,\n required:[true,\"Please Enter Task\"],\n },\n Completed: {\n type:Boolean,\n default:false\n }\n})\n\nlet work = mongoose.model('tasks',structure) \n\nmodule.exports = work\nconst work = require(\"../models/schema.js\")\n\nasync function updateone(req,res)\n{ \n console.log(req.body)\n try {\n let treat = {\"First\" :\"Michael\" }\n let newtask = await \n work.findOneAndUpdate({_id:req.params.id},treat,{ overwrite: true }) \n res.status(200).json(newtask)\n } \n catch (error) {\n res.status(500).json({status:\"Fail\",msg:\"Wrong Parameter\"})\n }\n}\n\nCompleted$setoverwrite:trueNameCompletedlet treat = {\"Job\" :\"Pilot\" }\nlet newtask = await work.findOneAndUpdate({_id:req.params.id},treat,{ overwrite: true }) \n", "text": "Below is my Schema and APISchema.js fileI have imported it in the api file and wrote an api to update recordsNow , I want that upon updating , only name shall remain and Completed field should not be there. I know that mongoose by default wraps it in $set therefore I used overwrite:true so that my entire document will be replaced.However , I am seeing that the value Name is replaced but the Completed field is still present. Also I noticed that if I provide a totally new field then nothing changes.\nIt means if :-In this case nothing changes , no new field is added at all.What am I doing wrong here ?", "username": "Brijesh_Roy" }, { "code": "overwrite:trueoverwritefindOneAndUpdatestrict: falseconst structure = new mongoose.Schema({\n Name: {\n type:String,\n required:[true,\"Please Enter Task\"],\n },\n Completed: {\n type:Boolean,\n default:false\n }\n},\n{ strict: false })\n", "text": "Hello @Brijesh_Roy, Welcome to the MongoDB Community Forum ,I used overwrite:true so that my entire document will be replaced.First of all, there is no overwrite property in the options of findOneAndUpdate method, you can refer to the mongoose documentationIf you want to replace the document then you can use replaceOne method.In this case nothing changes , no new field is added at all.What am I doing wrong here ?mongoose schema is strict by default, you can’t insert or retrieve except specified properties in the schema.You can use strict: false property in schema options, refer to the documentation.", "username": "turivishal" }, { "code": "", "text": "Hello,\nThank you very much for your reply. Actually I came across overwrite proerty in documentation only .\nHere it is :- Mongoose documentationCan you please help me decide.I am unable to implement it.\n\nimage1114×871 89.6 KB\n", "username": "Brijesh_Roy" }, { "code": "findOneAndReplace", "text": "I am not sure about it but there is an alternate function findOneAndReplace as well, did you tried it?\nhttps://mongoosejs.com/docs/api/model.html#model_Model-findOneAndReplace", "username": "turivishal" } ]
Overwrite : true not working in FindOneAndUpdate in MongoDB
2023-03-16T08:12:11.966Z
Overwrite : true not working in FindOneAndUpdate in MongoDB
2,004
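Since the thread above lands on replacing the document rather than updating it, here is a minimal sketch using the "work" model from the thread's schema.js; the require path and the returned-document option are assumptions.

```javascript
// Replace instead of update: the whole document (except _id) is swapped out.
const work = require("./models/schema.js"); // path assumed

async function replaceTask(id) {
  // Fields like "Completed" are not carried over from the old document.
  // Note that the strict schema still strips keys that are not declared in it,
  // and schema defaults may be applied to the replacement document.
  return work.findOneAndReplace(
    { _id: id },
    { Name: "Michael" },
    { new: true } // return the replaced document
  );
}

module.exports = replaceTask;
```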
null
[]
[ { "code": "", "text": "OS version - centos7\nmongo version - Community - 5.0.14\nHere is my mongo.conf filenet:\nhttp:\nenabled: trueWhen I am trying to start services with above configuration it is failing with unrecognized net.http.enable optioncan someone please help", "username": "Jagan_62817" }, { "code": "net.httpnet.http", "text": "Hi @Jagan_62817,\nfrom the documentation:net.httpChanged in version 3.6: MongoDB 3.6 removes the deprecated net.http options. The options have been deprecated since version 3.2.Best Regards", "username": "Fabio_Ramohitaj" } ]
How to enable http in mongodb
2023-03-16T09:54:18.747Z
How to enable http in mongodb
787
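Because the net.http block was removed from the server, the fix for the thread above is simply to delete that block from the config file. After restarting, the options the server actually parsed can be checked from mongosh; the output shown in the comment is abbreviated and depends on your config file.

```javascript
// Shows the command line and the parsed config of the running mongod
db.adminCommand({ getCmdLineOpts: 1 });
// -> { argv: [ ... ], parsed: { config: "/etc/mongod.conf", net: { port: 27017, ... }, ... }, ok: 1 }
```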
null
[]
[ { "code": "", "text": "As specified here MongoDB 5.0.15 is released - #4 by Andrea_PerniciLooks like from the latest versione Mongo started to change the Service File without any warning. Breaking the restart.Any help on how to avoid it?", "username": "Andrea_Pernici" }, { "code": "", "text": "I have the same thing in my server and I couldn’t figure what is happen, the service is break with no reason. Happened in 5.0.x and 6.0.xhttps://www.mongodb.com/community/forums/t/mongodb-6-0-crushes-always-centos/217740We hopefully someone help us.", "username": "Mina_Ezeet" } ]
Starting from 5.0.15, the mongo upgrade changes the systemd service file and breaks mongo
2023-03-16T08:08:12.442Z
Starting from 5.0.15, the mongo upgrade changes the systemd service file and breaks mongo
1,068
null
[ "crud" ]
[ { "code": "", "text": "Mongodb support special character ?\nHow many special character support mongoDb", "username": "Arjun_Maurya" }, { "code": "", "text": "Welcome to the MongoDB Community Forums @Arjun_Maurya!Can you provide some examples of what you mean by special characters?MongoDB uses UTF-8 character encoding which is part of the Unicode Standard.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "A post was split to a new topic: How to seach M&M in Atlas search", "username": "Kushagra_Kesav" } ]
Special character
2021-08-20T05:26:34.543Z
Special character
7,340
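A tiny mongosh illustration of the UTF-8 point made in the thread above; the collection name is an assumption.

```javascript
db.samples.insertOne({ name: "Müller & Çelik", note: "emoji 👍 and symbols ©®™ are stored as-is" });
db.samples.findOne({ name: "Müller & Çelik" });
// Field values may contain any UTF-8 text. Only field *names* have restrictions
// (no null character; "$" and "." carry extra rules that depend on the server version).
```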
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "Gedmatchesdb.gedmatches.find({ kit1: 'BkHXGDn4z', kit2: 'S1B52MvnVz' },{kit1:1,kit2:1}){ \"_id\" : \"w8WSXAjyvC8Ncxers\", \"kit1\" : \"BkHXGDn4z\", \"kit2\" : \"S1B52MvnVz\" }KitskitUserBkHXGDn4znameDoc.runningNo1184{ \"_id\" : \"qfTzZ4TSaGuBhCuuK\", \"names\" : [ 1183, 1184, 1185, 1186 ], \"name\" : \"Sheridan Lugo\", \"kit23\" : \"S1B52MvnVz\" }const pipeline = [\n {\n // get gedmatches with both kitUser on kit1/kit2\n $match: { $or: [{ kit1: kitUser }, { kit2: kitUser }] },\n },\n { $project: { kit1: 1, kit2: 1 } },\n {\n // let's create arrays for both kit1/kit2 and push kits\n $group: {\n _id: null,\n kits1: { $push: '$kit1' },\n kits2: { $push: '$kit2' },\n },\n }, {\n // merge both arrays into 1\n $project: { 'cousins': { '$setUnion': ['$kits1', '$kits2'] } },\n }, {\n // filter out kitUser from the array\n $project: {\n 'cousins': {\n $filter: {\n 'input': '$cousins',\n 'as': 'cousin',\n 'cond': { $ne: ['$$cousin', kitUser] },\n },\n },\n },\n }, {\n // let's get docs from kits from our array\n $lookup: {\n 'from': 'kits',\n 'localField': 'cousins',\n 'foreignField': 'kit23',\n 'as': 'cousins',\n },\n }, {\n // unwind array so we have all as objects\n $unwind: {\n 'path': '$cousins',\n 'preserveNullAndEmptyArrays': false,\n },\n }, {\n // run our searchTerm using regexFindAll to get results that match it\n $project: { 'kit23': '$cousins.kit23', 'match': { names: nameDoc.runningNo } },\n }, {\n // unwind our results as now we have a nested array, we want only 1\n $unwind: { 'path': '$match', 'preserveNullAndEmptyArrays': false },\n }, {\n // get the kits docs that matched `Kits` query\n $lookup: {\n 'from': 'kits',\n 'localField': 'kit23',\n 'foreignField': 'kit23',\n 'as': 'doc',\n },\n }, {\n // unwind doc field, so we get the object and not an array with the object inside\n $unwind: {\n 'path': '$doc',\n 'preserveNullAndEmptyArrays': false,\n },\n }, {\n // only send back kit23 and doc on our object\n $project: {\n '_id': 0,\n 'kit23': '$kit23',\n 'doc': '$doc',\n },\n },\n ];\n const gedMatchesArray = await GedmatchesRaw.aggregate(pipeline, { session: mongoSession }).toArray();\n", "text": "Please excuse if I made a newbie error in my code here.I’m trying to merge the results of two queries (against different collections).One is the Gedmatches collection and this is the expected doc that should join with the 2nd collection:the query:\ndb.gedmatches.find({ kit1: 'BkHXGDn4z', kit2: 'S1B52MvnVz' },{kit1:1,kit2:1})the result:\n{ \"_id\" : \"w8WSXAjyvC8Ncxers\", \"kit1\" : \"BkHXGDn4z\", \"kit2\" : \"S1B52MvnVz\" }The other one is from the Kits collection and this doc should come up as a result of the aggregation query (kitUser is BkHXGDn4z):the match query in my code:\nnameDoc.runningNo is 1184{ \"_id\" : \"qfTzZ4TSaGuBhCuuK\", \"names\" : [ 1183, 1184, 1185, 1186 ], \"name\" : \"Sheridan Lugo\", \"kit23\" : \"S1B52MvnVz\" }Here’s my query, language is MeteorJS (nodeJS):Result is an empty array Any help is highly appreciated!", "username": "Andreas_West" }, { "code": "Kits{\n \"_id\": \"qfTzZ4TSaGuBhCuuK\",\n \"names\": [\n 1183,\n 1184,\n 1185,\n 1186\n ],\n \"name\": \"Sheridan Lugo\",\n \"kit23\": \"S1B52MvnVz\"\n}\nGedmatches{\n \"_id\": \"w8WSXAjyvC8Ncxers\",\n \"kit1\": \"BkHXGDn4z\",\n \"kit2\": \"S1B52MvnVz\"\n}\n", "text": "Hi @Andreas_West,Welcome to the MongoDB Community forums As per my understanding of your above question, you have two collections that contain the following documents:Please correct me if I’m wrong and share the sample document from both 
collections.I’m trying to merge the results of two queries (against different collections).Please elaborate on what you mean by merging two queries. Are you trying to lookup up other collections, then what is the sample output you are expecting?Also, share the workflow of the code and the MongoDB version you are using.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "[{\"kit23\":\"S1B52MvnVz\",\n\"doc\":\n{\"_id\":\"qfTzZ4TSaGuBhCuuK\",\"names\":[1183,1184,1185,1186],\"name\":\"Sheridan Lugo\",\"kit23\":\"S1B52MvnVz\"},\n}]\nkit23docworkflow of the code", "text": "Please correct me if I’m wrong and share the sample document from both collections.That’s correct.Please elaborate on what you mean by merging two queries. Are you trying to lookup up other collections, then what is the sample output you are expecting?Sorry, merging is the wrong term, it’s a classic join via $lookup.The sample output that I’m expecting is:Sorry for the bad formatting, it’s an array of objects (as there can be more results than just one but in this case I expect exactly 1 object to be returned.It has the kit23 field and the doc object.Also, share the workflow of the code and the MongoDB version you are using.MongoDb 4.2 and the I’m not sure what you mean by workflow of the code as I posted the relevant code in my original post.Thanks you,Andreas", "username": "Andreas_West" }, { "code": "Kitsdb.Kits.aggregate([\n {\n $lookup: {\n from: \"books\",\n localField: \"kit23\",\n foreignField: \"kit2\",\n as: \"matches\",\n },\n },\n {\n $unwind: \"$matches\",\n },\n {\n $project: {\n _id: 1,\n names: 1,\n name: 1,\n kit23: 1,\n matches: {\n $cond: {\n if: {\n $eq: [\"$matches.kit2\", \"$kit23\"],\n },\n then: \"$matches\",\n else: null,\n },\n },\n },\n },\n {\n $project: {\n kit23: 1,\n doc: \"$$ROOT\",\n },\n },\n {\n $project: {\n _id: 0,\n kit23: 1,\n doc: {\n _id: 1,\n names: 1,\n name: 1,\n kit23: 1,\n },\n },\n },\n])\nKitsGedmatcheskit23Kitskit2Gedmatchesmatcheskit2kit23matches{\n \"kit23\": \"S1B52MvnVz\",\n \"doc\": {\n \"_id\": \"qfTzZ4TSaGuBhCuuK\",\n \"names\": [\n 1183,\n 1184,\n 1185,\n 1186\n ],\n \"name\": \"Sheridan Lugo\",\n \"kit23\": \"S1B52MvnVz\"\n }\n}\n", "text": "Hi @Andreas_West,Thanks for sharing the information.To obtain the desired result, I ran an aggregation query on the Kits collections, sharing it for your reference:Here I’ve used $lookup stage joins the Kits collection with the Gedmatches collection based on the kit23 field from Kits and the kit2 field from Gedmatches.Next, I used the $unwind to deconstruct the resulting array of matches from the previous stage.After that used $project stage to project only the fields needed from both collections and also created a new field called matches that contains only the match where kit2 is equal to kit23.Following up on that I’ve used $match to filter out any documents where matches is null and finally used $project to get the desired output, which satisfies your expected result:I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregation with $lookup and $unwind between two collections doesn't deliver expected results
2023-03-15T20:37:34.958Z
Aggregation with $lookup and $unwind between two collections doesn’t deliver expected results
1,381
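A condensed mongosh sketch of the join shape the thread above settles on; collection and field names come from the posts, while the full pipeline in the thread also restricts gedmatches to the user's own kit before the join.

```javascript
db.kits.aggregate([
  { $match: { names: 1184 } }, // the runningNo filter from the question
  {
    $lookup: {
      from: "gedmatches",
      localField: "kit23",
      foreignField: "kit2",
      as: "matches"
    }
  },
  { $unwind: "$matches" }, // drops kits with no matching gedmatches document
  {
    $project: {
      _id: 0,
      kit23: 1,
      doc: { _id: "$_id", names: "$names", name: "$name", kit23: "$kit23" }
    }
  }
]);
```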
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Hi there,We have MongoDB 5.0.2 installed through Ubuntu 20.04 package manager (apt) and running as a systemd service (mongod.service). We would like to know which one of the below is the proper method to restart the mongo Instance in our scenario:ORThanks and Regards", "username": "Abdullah_Madani" }, { "code": "", "text": "Hi @Abdullah_Madani,\nI think is the same thing.Best Regards.", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Hi @Fabio_Ramohitaj,Thanks for the reply. So if I use the option 1 i.e. “db.shutdownServer()” command then does it stops the mongod service as well? Then can I start the DB by simply starting the mongod service? Please advise", "username": "Abdullah_Madani" }, { "code": "", "text": "None of the options you presented willrestart the mongo InstanceHowever both are valid ways to stop the server.Both offers different security models.db.shutdownServerRequires a valid database user with specific privileges on the admin database. See https://www.mongodb.com/docs/manual/reference/method/db.shutdownServer/#access-control for more details.You may run this command from any other machines, given the appropriate credentials, that can connect to the database. The database user issuing this command might not be able to restart mongod if he does not have access, as an OS user with sudo privileges, to the machine where mongod is running.Systemctl stop mongodThis can be done by any OS admin that has access with sudo privileges to the machine where mongod is running. This OS user will also be able to start mongod.So depending of the security model you have. A DB user can terminate mongod using shutdownServer and only an OS user can start/restart it. The same person may have both DB user and OS user credentials.I think is the same thingNot entirely, especially for replica set members, see https://www.mongodb.com/docs/manual/reference/method/db.shutdownServer/#db.shutdownserver---on-replica-set-members.", "username": "steevej" }, { "code": "sudo systemctl restart <application_name>\n", "text": "OS admin that has access with sudo privileges to the machine where mongodHi @Abdullah_Madani ,For any application, you can usee.g. sudo systemctl restart mongod", "username": "Monika_Shah" }, { "code": "", "text": "Hi there,\n@steevej My ultimate goal is to reboot the MongoDB server. For that, I want to shutdown the MongoDB gracefully followed by the Server reboot and then start the MongoDB Instance.So what I understand from the responses is, if I shutdown using “db.shutdownServer()” command the DB instance will stop gracefully. However the service “mongod.service” will only shutdown if the DB User has the privilege on the OS User. Please correct me if I am wrong!While my understanding is that the DB users has no relation with the OS user. Their scope is limited to the Database only. The OS service “mongod.service” is owned by “mongodb” OS user which is also the owner of MogoDB file structure at OS Level.So in short, if I stop MongoDB by “db.shutdownServer()” command using a user that has dbAdmin privilege on admin database, then it should in turn trigger “Mongodb” OS user to stop the service as well. 
After rebooting the machine, the MongoDB should directly start once the OS Service “mongod.service” is started.Is my understanding is correct?", "username": "Abdullah_Madani" }, { "code": "", "text": "My ultimate goal is to reboot the MongoDB server.The solution issudo systemctl restart mongodI also saw that sometimes the service is mongodb rather than simply mongod.I shutdown using “db.shutdownServer()” command the DB instance will stop gracefullyYesHowever the service “mongod.service” will only shutdown if the DB User has the privilege on the OS User.No. If mongod is terminated with shutdownServer, systemd knows it is down systemctl status will indicate that mongod is not running.After rebooting the machine, the MongoDB should directly start once the OS Service “mongod.service” is started.Not exactly. mongod.service will restart at reboot if the service is enabled. A service has many states. Read systemd - Debian Wiki for more details.If the mongod is started with systemctl, then you have nothing to do when you shutdown your computer in order to make sure that it terminates gracefully.If you want mongod to start automatically when starting the computer, the service has to be enabled.", "username": "steevej" }, { "code": "", "text": "Thanks @steevej for your active participation and bringing your expertise and experience here. Your inputs were very helpful.Thanks @Monika_Shah/Fabio_Ramohitaj for engaging and providing your proficient views.So the simple takeaway is:Both “sudo systemctl stop mongod.service” and \"db.shutdownServer()” command shutdowns the MongoDB gracefully.After rebooting the machine; issing “sudo systemctl start mongod.service” at OS level will bring the MongoDB up and running (in case the service was not “enabled” in systemctl for auto start)Thanks and Regards,\nAbdullah Madani", "username": "Abdullah_Madani" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to gracefully restart MongoDB installed through Ubuntu package manager
2023-03-14T07:22:09.610Z
How to gracefully restart MongoDB installed through Ubuntu package manager
4,367
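To make the "graceful shutdown from mongosh" half of the discussion above concrete (the service is then started again at the OS level, for example with sudo systemctl start mongod):

```javascript
// Requires a user with the shutdown privilege on the admin database.
db.getSiblingDB("admin").shutdownServer({ timeoutSecs: 10 });
// The shell connection is expected to drop with a network error right after this,
// and `systemctl status mongod` on the host will report the process as stopped.
```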
null
[ "aggregation" ]
[ { "code": "", "text": "For example: my SQL query can be SELECT * FROM (SELECT ID, SUM(revenue) as total_revenue FROM TABLE2 GROUP BY ID) as TABLE3, (SELECT MAX(revenue) as max_revenue FROM TABLE3) WHERE total_revenue = max_revenue; I was trying to lookup online but did not find any solution that targets a complex nested query like this.", "username": "Libin_Zhou" }, { "code": "", "text": "Hello @Libin_Zhou ,Welcome to The MongoDB Community Forums! To understand your use case better, please provide more details, such as:Regards,\nTarun", "username": "Tarun_Gaur" } ]
How to select all documents with a max value on a field where the field is calculated at runtime?
2023-03-15T16:32:53.251Z
How to select all documents with a max value on a field where the field is calculated at runtime?
441
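A mongosh sketch of one way to express that SQL in the aggregation pipeline; the collection name ("orders") and the field names ("ID", "revenue") are assumptions mapped from the question.

```javascript
db.orders.aggregate([
  // SELECT ID, SUM(revenue) AS total_revenue ... GROUP BY ID
  { $group: { _id: "$ID", total_revenue: { $sum: "$revenue" } } },
  // compute MAX(total_revenue) across the groups while keeping the groups
  { $group: { _id: null, groups: { $push: "$$ROOT" }, max_revenue: { $max: "$total_revenue" } } },
  { $unwind: "$groups" },
  // WHERE total_revenue = max_revenue (keeps ties)
  { $match: { $expr: { $eq: ["$groups.total_revenue", "$max_revenue"] } } },
  { $replaceRoot: { newRoot: "$groups" } }
]);
```

Pushing every group into a single document is fine for modest group counts but is bounded by the 16 MB document limit; $setWindowFields (MongoDB 5.0+) is an alternative for large cardinalities.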
null
[ "crud" ]
[ { "code": "[\n{_id: 0, status: 'pending'},\n{_id: 1, status: 'pending'}\n]\nfindOneAndUpdate({ status: 'pending' }, { $set: { status: 'processing' } })\n", "text": "Consider the following collectionIf I run the following operation twice concurrently I would expect to get both documents set to processing and returnedHowever what I experience is that the first document is updated and returned twice. Am I misunderstanding something?", "username": "Andreas_Hald" }, { "code": "db.collectionA.findOneAndUpdate({ status: 'pending' }, { $set: { status: 'processing' } })\n{\n _id: 0,\n status: 'pending'\n}\ndb.collectionA.findOneAndUpdate({ status: 'pending' }, { $set: { status: 'processing' } })\n{\n _id: 1,\n status: 'pending'\n}\n", "text": "Hello @Andreas_Hald ,Welcome to The MongoDB Community Forums! I added the mentioned documents and ran the same query twice below is the response I gotSo, it is working as expected, if the two queries run one after another.If I run the following operation twice concurrently I would expect to get both documents set to processing and returnedBy concurrently , do you mean you’re trying to run/update the same document with two queries at the same time? Could you post an example code that can reproduce what you’re seeing?Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "FindandUpdate should be atomic operation. Maybe explain a bit on how you run your tests.", "username": "Kobe_W" } ]
How to ensure multiple findOneAndUpdate does not return the same document
2023-03-13T11:19:13.883Z
How to ensure multiple findOneAndUpdate does not return the same document
609
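A Node.js sketch of the concurrency experiment being discussed above, using the official driver; the connection string and collection name are assumptions.

```javascript
const { MongoClient } = require("mongodb");

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const coll = client.db("test").collection("jobs");

  await coll.deleteMany({});
  await coll.insertMany([{ _id: 0, status: "pending" }, { _id: 1, status: "pending" }]);

  const claim = () =>
    coll.findOneAndUpdate(
      { status: "pending" },
      { $set: { status: "processing" } },
      { returnDocument: "after" } // return the claimed document, not the pre-image
    );

  // findOneAndUpdate is atomic per document, so two genuinely concurrent calls
  // should each claim a different document. Depending on driver version the
  // document is returned directly or nested under `value`.
  const [a, b] = await Promise.all([claim(), claim()]);
  console.log(a, b);

  await client.close();
}

main().catch(console.error);
```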
null
[]
[ { "code": "unregisterReceiveronDestroyRealm.init(this)onCreate2020-08-19 21:09:14.617 18966-18966/com.example.realm E/ActivityThread: Activity com.example.realm.MainActivity has leaked IntentReceiver io.realm.internal.network.NetworkStateReceiver@805ae25 that was originally registered here. Are you missing a call to unregisterReceiver()?\nandroid.app.IntentReceiverLeaked: Activity com.example.realm.MainActivity has leaked IntentReceiver io.realm.internal.network.NetworkStateReceiver@805ae25 that was originally registered here. Are you missing a call to unregisterReceiver()?\noverride fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n Realm.init(this) #Exception refers to this line of code\n}\n", "text": "Hello everyone,I’m using realm database for android and kept getting unregisterReceiver error when press back button. I’m closing my realm during onDestroy and initializing Realm.init(this) during onCreate.Is there something I’m missing to close here.This is the Exception I’m getting", "username": "Safik_Momin" }, { "code": "", "text": "someone knows the answer?", "username": "SirSwagon_N_A" }, { "code": "", "text": "I got this error too.And I solved it later by moving the ‘Realm.init()’ to the ‘onCreate’ method of a subclass of Application.", "username": "Arno_Dorian" } ]
unregisterReceiver error
2020-08-20T02:21:16.804Z
unregisterReceiver error
2,510
null
[ "queries" ]
[ { "code": "{\n \"_id\" : ObjectId(\"63f8a22ae22b80196a09d688\"),\n \"workspace_id\" : NumberInt(1),\n \"data\" : [\n {\n \"k\" : \"first_name\",\n \"v\" : \"Berneice\",\n \"t\" : NumberInt(1)\n },\n {\n \"k\" : \"last_name\",\n \"v\" : \"Adams\",\n \"t\" : NumberInt(1)\n },\n {\n \"k\" : \"email\",\n \"v\" : \"[email protected]\",\n \"t\" : NumberInt(1)\n },\n {\n \"k\" : \"phone\",\n \"v\" : \"(201) 205-4629\",\n \"t\" : NumberInt(1)\n },\n {\n \"k\" : \"address\",\n \"v\" : \"1627 General Center Apt. 481\\nNaderberg, OH 73926-4376\",\n \"t\" : NumberInt(1)\n },\n {\n \"k\" : \"city\",\n \"v\" : \"Millertown\",\n \"t\" : NumberInt(1)\n },\n {\n \"k\" : \"state\",\n \"v\" : \"Wisconsin\",\n \"t\" : NumberInt(1)\n },\n {\n \"k\" : \"zip\",\n \"v\" : \"32184\",\n \"t\" : NumberInt(1)\n },\n {\n \"k\" : \"country\",\n \"v\" : \"India\",\n \"t\" : NumberInt(1)\n },\n {\n \"k\" : \"company\",\n \"v\" : \"Jakubowski-Prosacco\",\n \"t\" : NumberInt(1)\n },\n {\n \"k\" : \"remove_from_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"consequatur\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"add_to_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"http://www.ondricka.com/voluptas-voluptatem-accusamus-nisi\"\n },\n {\n \"k\" : \"price\",\n \"v\" : 56.07\n },\n {\n \"k\" : \"quantity\",\n \"v\" : NumberInt(47)\n },\n {\n \"k\" : \"size\",\n \"v\" : \"atque\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"quos\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"viewed_page\",\n \"v\" : [\n {\n \"k\" : \"url\",\n \"v\" : \"http://www.wiza.org/\"\n },\n {\n \"k\" : \"page_title\",\n \"v\" : \"excepturi\"\n },\n {\n \"k\" : \"page_type\",\n \"v\" : \"quisquam\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"add_to_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"http://www.wiza.net/dicta-corrupti-est-atque-quia-sit\"\n },\n {\n \"k\" : \"price\",\n \"v\" : 94.77\n },\n {\n \"k\" : \"quantity\",\n \"v\" : NumberInt(88)\n },\n {\n \"k\" : \"size\",\n \"v\" : \"nisi\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"deserunt\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"purchase\",\n \"v\" : [\n {\n \"k\" : \"category\",\n \"v\" : \"optio\"\n },\n {\n \"k\" : \"revenue\",\n \"v\" : NumberInt(788)\n },\n {\n \"k\" : \"product\",\n \"v\" : \"sint\"\n },\n {\n \"k\" : \"size\",\n \"v\" : \"quo\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"purchase\",\n \"v\" : [\n {\n \"k\" : \"category\",\n \"v\" : \"earum\"\n },\n {\n \"k\" : \"revenue\",\n \"v\" : NumberInt(102)\n },\n {\n \"k\" : \"product\",\n \"v\" : \"eaque\"\n },\n {\n \"k\" : \"size\",\n \"v\" : \"provident\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"viewed_page\",\n \"v\" : [\n {\n \"k\" : \"url\",\n \"v\" : \"http://murray.biz/quisquam-et-ea-similique-consequatur-laboriosam-ab-vel\"\n },\n {\n \"k\" : \"page_title\",\n \"v\" : \"ea\"\n },\n {\n \"k\" : \"page_type\",\n \"v\" : \"qui\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"purchase\",\n \"v\" : [\n {\n \"k\" : \"category\",\n \"v\" : \"ad\"\n },\n {\n \"k\" : \"revenue\",\n \"v\" : NumberInt(152)\n },\n {\n \"k\" : \"product\",\n \"v\" : \"ab\"\n },\n {\n \"k\" : \"size\",\n \"v\" : \"ut\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"purchase\",\n \"v\" : [\n {\n \"k\" : \"category\",\n \"v\" : \"aliquam\"\n },\n {\n \"k\" : \"revenue\",\n \"v\" : NumberInt(326)\n },\n {\n \"k\" : 
\"product\",\n \"v\" : \"aperiam\"\n },\n {\n \"k\" : \"size\",\n \"v\" : \"ipsa\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"purchase\",\n \"v\" : [\n {\n \"k\" : \"category\",\n \"v\" : \"voluptatem\"\n },\n {\n \"k\" : \"revenue\",\n \"v\" : NumberInt(209)\n },\n {\n \"k\" : \"product\",\n \"v\" : \"eum\"\n },\n {\n \"k\" : \"size\",\n \"v\" : \"eius\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"remove_from_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"eos\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"remove_from_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"neque\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"viewed_page\",\n \"v\" : [\n {\n \"k\" : \"url\",\n \"v\" : \"http://kunde.com/unde-et-deleniti-veniam-dolore-aliquam-possimus-amet-dolores.html\"\n },\n {\n \"k\" : \"page_title\",\n \"v\" : \"illo\"\n },\n {\n \"k\" : \"page_type\",\n \"v\" : \"nesciunt\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"add_to_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"http://kiehn.com/\"\n },\n {\n \"k\" : \"price\",\n \"v\" : 19.43\n },\n {\n \"k\" : \"quantity\",\n \"v\" : NumberInt(75)\n },\n {\n \"k\" : \"size\",\n \"v\" : \"aperiam\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"quam\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"search\",\n \"v\" : [\n {\n \"k\" : \"query\",\n \"v\" : \"in\"\n },\n {\n \"k\" : \"results_count\",\n \"v\" : NumberInt(17)\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"viewed_page\",\n \"v\" : [\n {\n \"k\" : \"url\",\n \"v\" : \"http://beahan.com/autem-commodi-facilis-quia.html\"\n },\n {\n \"k\" : \"page_title\",\n \"v\" : \"libero\"\n },\n {\n \"k\" : \"page_type\",\n \"v\" : \"sequi\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"add_to_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"http://www.prohaska.info/vitae-fuga-voluptatem-mollitia-natus-ea-consectetur-et-est\"\n },\n {\n \"k\" : \"price\",\n \"v\" : 97.38\n },\n {\n \"k\" : \"quantity\",\n \"v\" : NumberInt(57)\n },\n {\n \"k\" : \"size\",\n \"v\" : \"occaecati\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"temporibus\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"remove_from_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"sapiente\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"remove_from_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"quo\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"remove_from_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"officiis\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"search\",\n \"v\" : [\n {\n \"k\" : \"query\",\n \"v\" : \"qui\"\n },\n {\n \"k\" : \"results_count\",\n \"v\" : NumberInt(27)\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"add_to_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"https://www.cartwright.biz/aut-eos-blanditiis-est-voluptas-eius\"\n },\n {\n \"k\" : \"price\",\n \"v\" : 93.39\n },\n {\n \"k\" : \"quantity\",\n \"v\" : NumberInt(51)\n },\n {\n \"k\" : \"size\",\n \"v\" : \"nihil\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"rerum\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"search\",\n \"v\" : [\n {\n \"k\" : \"query\",\n \"v\" : \"deleniti\"\n },\n {\n \"k\" : \"results_count\",\n \"v\" : NumberInt(94)\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"add_to_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : 
\"http://www.baumbach.org/est-illo-similique-nostrum-perspiciatis-sint-itaque-facere.html\"\n },\n {\n \"k\" : \"price\",\n \"v\" : 4.78\n },\n {\n \"k\" : \"quantity\",\n \"v\" : NumberInt(19)\n },\n {\n \"k\" : \"size\",\n \"v\" : \"ratione\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"esse\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"add_to_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"http://batz.com/qui-ab-eaque-aut-neque-ad\"\n },\n {\n \"k\" : \"price\",\n \"v\" : 66.02\n },\n {\n \"k\" : \"quantity\",\n \"v\" : NumberInt(8)\n },\n {\n \"k\" : \"size\",\n \"v\" : \"nobis\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"dolores\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"viewed_page\",\n \"v\" : [\n {\n \"k\" : \"url\",\n \"v\" : \"http://schowalter.com/\"\n },\n {\n \"k\" : \"page_title\",\n \"v\" : \"assumenda\"\n },\n {\n \"k\" : \"page_type\",\n \"v\" : \"debitis\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"search\",\n \"v\" : [\n {\n \"k\" : \"query\",\n \"v\" : \"quas\"\n },\n {\n \"k\" : \"results_count\",\n \"v\" : NumberInt(48)\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"viewed_page\",\n \"v\" : [\n {\n \"k\" : \"url\",\n \"v\" : \"http://www.kautzer.com/consectetur-repellat-sit-doloremque-possimus-dolorum.html\"\n },\n {\n \"k\" : \"page_title\",\n \"v\" : \"et\"\n },\n {\n \"k\" : \"page_type\",\n \"v\" : \"necessitatibus\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"viewed_page\",\n \"v\" : [\n {\n \"k\" : \"url\",\n \"v\" : \"http://www.renner.com/consequatur-labore-ducimus-minus\"\n },\n {\n \"k\" : \"page_title\",\n \"v\" : \"recusandae\"\n },\n {\n \"k\" : \"page_type\",\n \"v\" : \"harum\"\n }\n ],\n \"t\" : NumberInt(2)\n },\n {\n \"k\" : \"add_to_cart\",\n \"v\" : [\n {\n \"k\" : \"product\",\n \"v\" : \"http://zulauf.net/delectus-qui-nihil-quia-officia-reprehenderit\"\n },\n {\n \"k\" : \"price\",\n \"v\" : 76.76\n },\n {\n \"k\" : \"quantity\",\n \"v\" : NumberInt(16)\n },\n {\n \"k\" : \"size\",\n \"v\" : \"aut\"\n },\n {\n \"k\" : \"color\",\n \"v\" : \"quaerat\"\n }\n ],\n \"t\" : NumberInt(2)\n }\n ]\n}\ndb.contats.createIndex({ \"workspace_id\": 1, \"data.v\": 1, \"created_at\": -1 )db.getCollection(\"contactst\").find({\n \"workspace_id\": 1,\n \"data\": {\n $elemMatch: {\n \"k\": \"viewed_page\", \"v\": { $elemMatch: { \"k\": \"page_title\", \"v\": \"excepturi\" } }, \"t\": 2\n }\n }\n}).limit(25)\npage_titleexcepturidb.getCollection(\"contacts\").find({\n \"workspace_id\": 1,\n \"data\": {\n $elemMatch: {\n \"k\": \"viewed_page\", \"v\": { $elemMatch: { \"k\": \"page_title\", \"v\": \"SOME-NON-EXISTING RECORD-HERE\" } }, \"t\": 2\n }\n }\n}).limit(25)\n{\n \"explainVersion\" : \"1\",\n \"queryPlanner\" : {\n \"namespace\" : \"test.contacts\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [\n {\n \"data\" : {\n \"$elemMatch\" : {\n \"$and\" : [\n {\n \"v\" : {\n \"$elemMatch\" : {\n \"$and\" : [\n {\n \"k\" : {\n \"$eq\" : \"page_title\"\n }\n },\n {\n \"v\" : {\n \"$eq\" : \"NON-EXISTING\"\n }\n }\n ]\n }\n }\n },\n {\n \"k\" : {\n \"$eq\" : \"viewed_page\"\n }\n },\n {\n \"t\" : {\n \"$eq\" : 2.0\n }\n }\n ]\n }\n }\n },\n {\n \"workspace_id\" : {\n \"$eq\" : 1.0\n }\n }\n ]\n },\n \"queryHash\" : \"8A0FD575\",\n \"planCacheKey\" : \"0D8E423A\",\n \"maxIndexedOrSolutionsReached\" : false,\n \"maxIndexedAndSolutionsReached\" : false,\n \"maxScansToExplodeReached\" : false,\n \"winningPlan\" : {\n \"stage\" : \"LIMIT\",\n \"limitAmount\" : 25.0,\n \"inputStage\" : {\n \"stage\" : 
\"FETCH\",\n \"filter\" : {\n \"data\" : {\n \"$elemMatch\" : {\n \"$and\" : [\n {\n \"k\" : {\n \"$eq\" : \"viewed_page\"\n }\n },\n {\n \"t\" : {\n \"$eq\" : 2.0\n }\n },\n {\n \"v\" : {\n \"$elemMatch\" : {\n \"$and\" : [\n {\n \"k\" : {\n \"$eq\" : \"page_title\"\n }\n },\n {\n \"v\" : {\n \"$eq\" : \"NON-EXISTING\"\n }\n }\n ]\n }\n }\n }\n ]\n }\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"workspace_id\" : 1.0,\n \"data.v\" : 1.0,\n \"created_at\" : -1.0\n },\n \"indexName\" : \"workspace_id_1_data.v_1_created_at_-1\",\n \"isMultiKey\" : true,\n \"multiKeyPaths\" : {\n \"workspace_id\" : [\n\n ],\n \"data.v\" : [\n \"data\",\n \"data.v\"\n ],\n \"created_at\" : [\n\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2.0,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"workspace_id\" : [\n \"[1, 1]\"\n ],\n \"data.v\" : [\n \"[MinKey, MaxKey]\"\n ],\n \"created_at\" : [\n \"[MaxKey, MinKey]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\" : [\n\n ]\n },\n \"command\" : {\n \"find\" : \"contact_tests\",\n \"filter\" : {\n \"workspace_id\" : 1.0,\n \"data\" : {\n \"$elemMatch\" : {\n \"k\" : \"viewed_page\",\n \"v\" : {\n \"$elemMatch\" : {\n \"k\" : \"page_title\",\n \"v\" : \"NON-EXISTING\"\n }\n },\n \"t\" : 2.0\n }\n }\n },\n \"limit\" : 25.0,\n \"$db\" : \"jellyreach_backup\"\n },\n \"serverInfo\" : {\n \"host\" : \"MacBook-Pro.local\",\n \"port\" : 27017.0,\n \"version\" : \"6.0.1\",\n \"gitVersion\" : \"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\"\n },\n \"serverParameters\" : {\n \"internalQueryFacetBufferSizeBytes\" : 104857600.0,\n \"internalQueryFacetMaxOutputDocSizeBytes\" : 104857600.0,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\" : 104857600.0,\n \"internalDocumentSourceGroupMaxMemoryBytes\" : 104857600.0,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\" : 104857600.0,\n \"internalQueryProhibitBlockingMergeOnMongoS\" : 0.0,\n \"internalQueryMaxAddToSetBytes\" : 104857600.0,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : 104857600.0\n },\n \"ok\" : 1.0\n}\n", "text": "I created ~500,000 documents with fake data. This is how one document looks:After that, I created the following index:db.contats.createIndex({ \"workspace_id\": 1, \"data.v\": 1, \"created_at\": -1 )It is really fast when searching something like this:But, this is ONLY fast when there is page_title with value of excepturi.If I search this for example:then it takes ~2 minutes to query.This is explain:Why? It looks like event when searching for non-existing fields is using index, but it is still so much slow. Our data is pretty dynamic so I’ve selected this structure.", "username": "jellyx" }, { "code": "", "text": "Bumping this.I created one simple collection array and tested. 
Happens there too.When array element does not exist, it is slow.", "username": "jellyx" }, { "code": "dataworkspace_iddata", "text": "Adding wildcard index to data seems to be working.However, I’m curious why didn’t work compound index on workspace_id and data (multikey index)?", "username": "jellyx" }, { "code": "workspace_id_1_data.v_1_created_at_-1workspace_id_1_data.v_1_created_at_-1db.testing.find({\n \"workspace_id\": 1,\n \"data\": {\n $elemMatch: {\n \"k\": \"viewed_page\", \"v\": { $elemMatch: { \"k\": \"page_title\", \"v\": \"excepturi\" } }, \"t\": 2\n }\n }\n}).limit(25).explain('executionStats')\n{\n executionStats: {\n executionSuccess: true,\n nReturned: 25,\n executionTimeMillis: 0,\n totalKeysExamined: 53,\n totalDocsExamined: 5,\n\n}\nexecutionTimeMillistotalKeysExamineddb.testing.find({\n \"workspace_id\": 1,\n \"data\": {\n $elemMatch: {\n \"k\": \"viewed_page\", \"v\": { $elemMatch: { \"k\": \"page_title\", \"v\": \"something\" } }, \"t\": 2\n }\n }\n}).limit(25).explain('executionStats')\n\n executionStats: {\n executionSuccess: true,\n nReturned: 0,\n executionTimeMillis: 85,\n totalKeysExamined: 101611,\n totalDocsExamined: 1104\n\n\nexecutionTimeMillistotalKeysExaminedtotalDocsExaminedworkspace_id_1_data.v_1_created_at_-1workspace_id_1_data.k_1_created_at_-1workspace_id_1_data.k_1_created_at_-1\nexecutionStats: {\n executionSuccess: true,\n nReturned: 25,\n executionTimeMillis: 0,\n totalKeysExamined: 37,\n totalDocsExamined: 37\nexecutionStats{\n executionStats: {\n executionSuccess: true,\n nReturned: 0,\n executionTimeMillis: 10,\n totalKeysExamined: 1055,\n totalDocsExamined: 1055\n\n}\ntotalKeysExaminedexecutionTimeMillis", "text": "Hey @jellyx,The reason why your index on workspace_id_1_data.v_1_created_at_-1 is not performing well when searching for a non-existing value is that queryPlanner has to search through all the index keys to perform the operation. You can check this by reading the explain output of your queries. To confirm this, I created a collection of 2100 documents from the sample document you provided. I created an index workspace_id_1_data.v_1_created_at_-1.\nThen I used your first query:The execution stats wereNotice in the executionStats, executionTimeMillis is 0, and totalKeysExamined is 53 while documents returned is 25 (query targeting ratio is about 0.47: 0.47 document returned for each index key examined.Now running this for the second query:we got:As you can see, executionTimeMillis becomes 85, and the totalKeysExamined is 101611 with totalDocsExamined being 1104 while the query returns nothing. This shows when a value does not exist, the queryPlanner has to search through a big part of the indexes and hence the time.Additionally, why are you using index workspace_id_1_data.v_1_created_at_-1, instead of workspace_id_1_data.k_1_created_at_-1 ie, instead of data.v, use data.k in our index. This should give a better performance based on the two queries you provided. I tested this on my end as well. Changed the index to workspace_id_1_data.k_1_created_at_-1. For the first query, the explain output was:Notice in the executionStats, only 37 keys had to be examined against 53 originally, while returning 25 documents (query targeting a ratio of 0.67, which is already much better than the earlier 0.47).\nFor the second query, the explain output was:Here too, the totalKeysExamined reduces to 1055 instead of 101611 with the original key. 
The executionTimeMillis also comes down to 10 from 85.In conclusion, I believe by modifying your index, you can achieve better query-targeting ratio by allowing the server to zoom in to the relevant part of the collection quickly, eliminating a lot of unnecessary work. I would note that in the best case above, the query targeting ratio is still not close to 1 (where 1 index key scan returns 1 document). The server is doing the work at maximum efficiency when this ratio is 1.Please let us know if there are any doubts about this. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Slow query when searching for non-existing values
2023-02-24T12:44:45.779Z
Slow query when searching for non-existing values
600
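The index change suggested in this thread is straightforward to try in mongosh. A minimal sketch, assuming the same contacts collection and field names used in the question (the index name is illustrative):

```javascript
// Index the attribute keys (data.k) instead of the values, as suggested in the reply.
db.contacts.createIndex(
  { workspace_id: 1, "data.k": 1, created_at: -1 },
  { name: "workspace_id_1_data.k_1_created_at_-1" }
);

// Re-run the $elemMatch query from the question and compare totalKeysExamined
// against nReturned in the executionStats output.
db.contacts.find({
  workspace_id: 1,
  data: {
    $elemMatch: {
      k: "viewed_page",
      v: { $elemMatch: { k: "page_title", v: "excepturi" } },
      t: 2
    }
  }
}).limit(25).explain("executionStats");
```

The closer the keys-examined-to-documents-returned ratio gets to 1, the less wasted index scanning the server is doing, for both existing and non-existing values.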
null
[ "sharding", "time-series" ]
[ { "code": "{\n _id: \"Int64\"\n name: \"string\"\n details:\n prop1: \"string\"\n prop2: \"string\"\n propN: \"string\"\n lastUpdated: \"DateTime\"\n latestPositionTime: \"DateTime\"\n}\n{\n \"_id\": number\n trackId: number\n originalTrackId: number\n timeOfPosition: DateTime\n # We tried create a range base shared based on time\n timestampRangeSharedKey = minuteOfPosition + (hourOfPosition * 60) + ((dayOfYearOfPosition -1) * 24 * 60 )\n sensor: string\n positGeom:\n geometry:\n coordinates: [ \"longitude: number\", \"latitude: number\"]\n properties:\n speed: number\n heading: number\n positScore: number\n columnName: positScore\n type: number\n details:\n prop1: \"string\"\n prop2: \"string\"\n prop15: \"string\"\n}\ntimestampRangeSharedKeytimeseries.metaFieldtimeseries.granularity", "text": "We could use some advice on how to configure mongodb sharding for a collection that needs to be able to store around 150 millions documents per day. Our near term goal is to retain 1 years worth of data (~54 billion documents) and long term store 5 years worth of data. We have deployed a 3 node mongodb (v4.) cluster using kubernetes as a test bed on AWS.Our primary use case is for two collections one which has tracks and the other that stores position updates for tracks.The tracks collection will contain around 300K to 1M document and each document is around 1.2 KB. Track roughly look documents look like this:The positions collections will be loaded with around 150 millions positions updates for tracks EACH DAY. This is the collection we are trying to configure to be sharded. Position document are roughly 1KB each and look something this:We have setup index on _id, timeOfPosition, sensor, 2dsphere(positGeom.geometry\")There are four main type of queries that we need to support (NOTE time is always included the position collection criteria):Below are the key questions:Thanks in advance for any help!!", "username": "Bryan_Golding" }, { "code": "explain('executionStats')", "text": "Hi @Bryan_Golding and thank you for reaching out to the MongoDB Community Forum.Firstly, to answer your questions point wise:Can mongodb scale to support our use case?Among the different key features of MongoDB Atlas available, scalability is one feature that makes the MongoDB Atlas more efficient.\nSince you’re already deploying your workload on AWS, you might want to consider using Atlas to simplify your ops and allow you to scale quickly and easily if you find that you need more (or less) hardware to handle your workload.The documentation on Atlas Tiers and Clusters would be a good reference point to analyse the cluster and tier need for your use case.Recommendable on how to configure sharding for the positions collection?The following blog post on Best Practices to Shard your Cluster would be a good document to start with to know about what components make the shard cluster performance better.Hence, for the positions collection, you can select the field as the shard key which distributes the data evenly between the shards.Should we use a time series collection?Since for your collection, you need to use the geospatial data in the document, I would not recommend using the time series collection.Depending on the cluster you choose, you can refer to the Cluster Limits documentation to understand the number of shards allowed on each of the clusters present.As I understand it, you have 4 queries that you need to run all the time, and most of them involve geo queries. Is this correct? 
At this early design phase, it’s difficult to say what the problems would be, so I would encourage you to experiment with the schema design and the queries, be familiar with the query explain output to determine the efficiency of the queries, and use a simulated workload (inserts + queries) using a semi-random data. These can be created using mgeneratejs, and these random dataset in combination with explain('executionStats') should be able to tell you how efficient your queries are.Lastly. to plan a deployment of this magnitude, I’d suggest seeking Professional Advice. There are numerous considerations, and an experienced consultant can provide better guidance based on a more comprehensive knowledge of your needs. Some scalability decisions (such as shard key selection) become more difficult to course correct after a large quantity of production data has been collected.Let us know if you have any other concerns.Best Regards\nAasawari", "username": "Aasawari" } ]
At this early design phase, it’s difficult to say what the problems would be, so I would encourage you to experiment with the schema design and the queries, be familiar with the query explain output to determine the efficiency of the queries, and use a simulated workload (inserts + queries) using semi-random data. These can be created using mgeneratejs, and this random dataset in combination with explain('executionStats') should be able to tell you how efficient your queries are.Lastly, to plan a deployment of this magnitude, I’d suggest seeking Professional Advice. There are numerous considerations, and an experienced consultant can provide better guidance based on a more comprehensive knowledge of your needs. Some scalability decisions (such as shard key selection) become more difficult to course correct after a large quantity of production data has been collected.Let us know if you have any other concerns.Best Regards\nAasawari", "username": "Aasawari" } ]
Scale MongoDB to support collection with 150 million documents per day
2023-03-11T03:35:34.064Z
Scale MongoDB to support collection with 150 million documents per day
1,227
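The replies above stop short of recommending a specific shard key, so the snippet below only illustrates the mechanics in mongosh. The database name, collection name, and the trackId/timeOfPosition key are assumptions that would need to be validated against the real query patterns first.

```javascript
// Illustration only: enable sharding and pick a compound key whose leading field
// has high cardinality and appears in most queries.
sh.enableSharding("tracking");
sh.shardCollection("tracking.positions", { trackId: 1, timeOfPosition: 1 });

// Verify that a typical per-track, time-bounded query is targeted to few shards
// and examines a reasonable number of index keys.
db.positions.find({
  trackId: 42,
  timeOfPosition: { $gte: ISODate("2023-03-01"), $lt: ISODate("2023-03-02") }
}).explain("executionStats");
```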
null
[]
[ { "code": "", "text": "I’m currently having 3 peering connections, and each of them has the same ‘347958916767’ owner id of requester vpc created by mongodb ATLAS.They were established by the request from mongodb altas console, and accepted by AWS console side.And I’d like to know what 347958916767 means.Whether 347958916767 is my aws account generated by ATLAS, or ATLAS uses the 347958916767 aws account always, or it is a AWS account for each ATLAS user in behalf of ATLAS service itself.Can anyone explain to me what the owner id of mongodb atlas vpc means?", "username": "Joonghun_Park" }, { "code": "", "text": "Hello @Joonghun_Park ,Please correct me if I am wrong, your MongoDB Atlas deployment used AWS as cloud provider and while using their infrastructure, I believe those numbers are AWS designation used by Atlas internally, thus those numbers are a part of the automation information used by Atlas to manage your deployment.Is there an issue that you’re facing that you have traced to the use of those numbers?Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "There was an request to report who is the owner id(which turns out to be ATLAS internal number) from internal security audit in my company. That owner id isn’t one of our maintained aws account list, so that’s why I asked about it.\nThank you for your help!", "username": "Joonghun_Park" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What's the meaning of owner id of requester vpc of mongodb atlas?
2023-03-09T04:26:24.359Z
What’s the meaning of owner id of requester vpc of mongodb atlas?
535
null
[ "aggregation", "queries", "python", "atlas-search" ]
[ { "code": "< collections = collection_name.aggregate(\n [\n {\n \"$search\": {\n \"index\": 'name_id',\n \"compound\": {\n \"should\": [\n {\n \"autocomplete\": {\n \"path\": \"name\",\n \"query\": search_key,\n }\n },\n {\n \"autocomplete\": {\n \"path\": \"contractId\",\n \"query\": search_key,\n\n }\n }\n ],\n \"minimumShouldMatch\": 1\n }\n }\n },\n {\n '$match': {\"_id\": {'$gt': ObjectId(start_id)}}\n },\n {\n \"$limit\": int(70000)\n },\n {\n \"$sort\": {'_id': 1}\n },\n {\n \"$limit\": int(limit)\n },\n ]) />\n", "text": "This Is the method which i used.Is there a better solution than this.(accuracy and efficient)", "username": "LiveRoom" }, { "code": "", "text": "Hi @LiveRoom,Welcome to the MongoDB Community forums To better understand the question, can you share the following details:Is there a better solution than this.(accuracy and efficient)Can you please clarify what you mean by “accuracy” and “efficiency” in this context? Also, what are the existing numbers, and what do you expect them to be?Best,\nKushagra", "username": "Kushagra_Kesav" } ]
Pagination With Full Text Search In Pymongo
2023-03-13T11:00:30.173Z
Pagination With Full Text Search In Pymongo
802
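For the pagination question above, one common approach is to keep the $search stage as-is and page with $sort plus $skip/$limit. The mongosh sketch below is a rough equivalent of the Python pipeline (the collection name, search term, and page size are placeholders) and maps one-to-one onto collection.aggregate() in PyMongo; note that deep $skip values get progressively more expensive.

```javascript
const page = 0;       // zero-based page number (placeholder)
const pageSize = 25;  // placeholder page size

db.customers.aggregate([
  {
    $search: {
      index: "name_id",
      compound: {
        should: [
          { autocomplete: { path: "name", query: "berni" } },
          { autocomplete: { path: "contractId", query: "berni" } }
        ],
        minimumShouldMatch: 1
      }
    }
  },
  { $sort: { _id: 1 } },       // stable order for paging; relevance order is discarded
  { $skip: page * pageSize },
  { $limit: pageSize }
]);
```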
null
[]
[ { "code": "", "text": "I’m running a production instance with version 4.2 and I have not had sufficient time to prepare for an upgrade. If Atlas forces an upgrade and my application breaks, I will be an unhappy customer. My application is IoT and the solution has been running with nearly 100% uptime for 5 years, so I’m not really interested in upgrading. I was forced to upgrade once in the past, and there was considerable effort invoved. Now I must do it again, and the application has grown, so it requires even more effort.Is there any way to get an extension on the Apri 30, 2023 deadline?", "username": "Dennis_Kornbluh" }, { "code": "", "text": "Hi @Dennis_Kornbluh welcome to the community!Since MongoDB 4.2 series will be out of support in April 2023, it’s actually best if you can prepare to update the application to use MongoDB 4.4 series. Out of support means that it will receive no further updates, including any bugfixes.Unfortunately at this moment this is the reality of software development. It takes considerable resource to keep supporting older versions, and since MongoDB first released the 4.2 series in August 2019, it’s been supported for almost 4 years.In many cases, older driver versions still support newer server versions to some extent, although you’ll be missing out on newer server features. We try to keep backward compatibility as much as possible so as to allow smooth upgrade experience, so I encourage you to at least try connecting your existing app to a testing deployment running 4.4.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin,Thanks for your reply. I need an additional month to be able to prepare for a proper upgrade to 5.0. My application has about one hundred queries, and we’re a small team (me). I’m actively working on a port to 5.0, so having to test for 4.4 on top of that will increase my effort.I don’t recall getting into this fix in all the decades that I spent working with SQL databases. There were standards like ODBC and ANSI, which were not perfect, to be sure, but backward compatibility was considered an important goal. The MongoDB “Strict API” is a move in the right direction, but it came along too late for me to take advantage, unfortunately.I’m fine with not having bug fixes for a month. As a paying Atlas customer, I’m just hoping for a little consideration. I can get the job done by end of May, but not by end of April.Respectfully,\nDennis", "username": "Dennis_Kornbluh" } ]
Not ready to upgrade 4.2 to 4.4
2023-03-15T22:39:29.128Z
Not ready to upgrade 4.2 to 4.4
507
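Most of the work in a forced upgrade like this is testing the application against the newer server, which is easiest against a throwaway 4.4 deployment. The two admin commands below are a minimal sketch for checking and, once tests pass, raising the feature compatibility version on a self-managed test instance; Atlas handles this step itself during its upgrades.

```javascript
// Which feature compatibility version is the cluster currently pinned to?
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });

// After the application test suite passes against 4.4, opt in to 4.4 features.
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" });
```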
null
[ "queries" ]
[ { "code": "", "text": "Hello,I defined a collection with 3 level arrays. I’m able to insert value into the 3rd level array, but having trouble to view the inserted element value since find() always shows as [object] for the 3rd level array. Is this expected behavior in MongoDB? How can I display it to verify?Thanks!", "username": "Linda_Peng" }, { "code": "", "text": "See example:\ndb.col_1.find({“tp_id”: “tp-1”},{“arr_1.t_period”:1, “arr_1.arr_2.txid”:1, “arr_1.arr_2.arr_3.OLIid”:1, _id:0});\n[\n{\narr_1: [\n{\nt_period: ‘2020’,\narr_2: [\n{ txid: ‘tx1’, arr_3: [ [Object] ] },\n{ txid: ‘tx2’, arr_3: }\n]\n},\n{\nt_period: ‘2021’,\narr_2: [ { txid: ‘tx1’ }, { txid: ‘tx2’, arr_3: } ]\n}\n]\n}\n]", "username": "Linda_Peng" }, { "code": "", "text": "Hi Linda,Based on your question, I believe you are using mongosh to query data.You will need to change the mongosh config file in order to print all the objects in a document for single line or Depth of the object.The parameters are: InspectDepth and inspectCompact.Please have a look on: https://www.mongodb.com/docs/mongodb-shell/reference/configure-shell-settings-global/#std-label-configure-settings-globalMore info on the location of the config file is also in this link.Best,", "username": "Adamo_Tonete" } ]
Find() doesn't display 3rd level array element data, only show as [object]
2023-03-15T00:05:38.622Z
Find() doesn’t display 3rd level array element data, only show as [object]
465
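The reply above points at mongosh's inspection settings. A minimal sketch of the two usual options, reusing the field names from the example query: raise the shell's print depth for the session, or serialize the documents yourself.

```javascript
// Option 1: let mongosh expand nested arrays/objects to any depth for this session.
config.set("inspectDepth", Infinity);

// Option 2: bypass the pretty-printer and serialize each document explicitly.
db.col_1.find(
  { tp_id: "tp-1" },
  { "arr_1.arr_2.arr_3.OLIid": 1, _id: 0 }
).forEach(doc => print(JSON.stringify(doc, null, 2)));
```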
null
[ "dot-net" ]
[ { "code": "", "text": "What is the best way to insert data from csv file to Mondodb using .Net Core.\nAny sample code snippet provided will be great", "username": "Samrat_Basu" }, { "code": "", "text": "There are a number of solutions depending on whether or not you need to execute the upload in C#. If you don’t need to import the CSV in C# application code, the simplest solution is using mongoimport as discussed in this thread.If you do need to execute the solution in C#, I recommend reviewing this discussion on Stack Overflow for a number of solutions to this problem, including utilizing the CSVHelper package to parse a CSV and insert records to a database in mongo and calling mongoimport from a C# console app.Does this answer your question @Samrat_Basu?", "username": "Patrick_Gilfether1" } ]
Best way to push CSV to MongoDB using C#
2023-03-15T16:33:17.483Z
Best way to push CSV to MongoDB using C#
887
null
[ "node-js" ]
[ { "code": " const app = new Realm.App({\n id: 'XXXXXXX-YYYY',\n });\n\n const credentials = Realm.Credentials.apiKey(\n 'apikey1'\n );\n\n await app.logIn(credentials);\n const realm = await Realm.open({\n schema: [\n Products,\n Categories,\n Variants,\n Images,\n TaxCategories,\n TaxRates,\n SubRates,\n Channels,\n Prices,\n Zones,\n InventoryEntires,\n InventoryMessages\n ],\n sync: {\n flexible: true,\n user: app.currentUser,\n initialSubscriptions: {\n update: (subs, realm) => {\n subs.add(realm.objects('Products'));\n }\n }\n }\n });\nconst products = realm.objects(Products.name);\n\nConnection[1]: Session[1]: Begin processing pending FLX bootstrap for query version 0. (changesets: 1, original total changeset size: 1335)\nConnection[1]: Session[1]: Integrated 1 changesets from pending bootstrap for query version 0, producing client version 8 in 10 ms. 0 changesets remaining in bootstrap\n", "text": "Hi,\nI just trying to configure an initial Flexible Device Sync, but is not working, is not syncing data between the mongo atlas database and the local realm data base.When i init my app i get the following messageBut the Products collections is never sync there is no documentsSomebody can tell me if i’m missing something?", "username": "xema_yebenes" }, { "code": "", "text": "Hi, generally speaking when we see things like this is means that permissions are filtering out documents. Can you send a link to your application (the url in the App Service UI)?", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi, thanks for the reply\nFinally i figured out the issue. Was a problem with the schema in realm and schema in mongo not matching with the dataWe can close the ticket", "username": "xema_yebenes" }, { "code": "", "text": "Got it, glad you worked it out.Best,\nTyler", "username": "Tyler_Kaye" } ]
Initial Device Sync with Flexible is not working
2023-03-15T12:51:26.730Z
Initial Device Sync with Flexible is not working
775
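Since the resolution above was a mismatch between the client model and the server-side schema, a sketch of what has to line up may help future readers. The model below is illustrative (the property names and types are assumptions, not the poster's actual schema); the object name, primary key, and every property type must match the App Services schema, and verbose sync logging makes mismatches visible.

```javascript
// Illustrative Realm JS object model; name, primaryKey and property types must
// match the App Services schema for "Products" exactly or documents won't sync.
const ProductsSchema = {
  name: "Products",
  primaryKey: "_id",
  properties: {
    _id: "objectId",
    name: "string",
    price: "double",
    categoryIds: "objectId[]",
  },
};

// While debugging, turn up sync logging (Realm JS 10/11 style API; `app` is the
// Realm.App instance from the snippet in the question).
Realm.App.Sync.setLogLevel(app, "debug");
```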
https://www.mongodb.com/…491e700a9ab.jpeg
[]
[ { "code": "", "text": "Hi, I’m trying to see where my fails are happening in my Requests under the Metrics section in my Atlas home page. When I check my logs there aren’t any errors.", "username": "Fritzroy_Thompson" }, { "code": "https://realm.mongodb.com/groups/<group-id>/apps/<app-id>", "text": "Hi,Could you share your app URL (looks like https://realm.mongodb.com/groups/<group-id>/apps/<app-id>)? The team will then be able to look into it and see if there’s anything unexpected going on.Jonathan", "username": "Jonathan_Lee" }, { "code": "", "text": "https://realm.mongodb.com/groups/63a08fd9b199471056a446fd/apps/data-kllbw", "username": "Fritzroy_Thompson" }, { "code": "", "text": "Hi,Sorry for the late follow up here. We found a bug in our sync metrics that was causing a subset of the successful requests to be misreported as failed requests. This should be fixed in the upcoming release.Jonathan", "username": "Jonathan_Lee" } ]
Trying to find 'Metrics' Request Fail Details
2023-03-08T18:47:20.071Z
Trying to find ‘Metrics’ Request Fail Details
745
null
[ "app-services-cli" ]
[ { "code": "", "text": "I am having an issue, that my sync/config.json file is being ignored by realm-cli.\nWhen I change anything in functions/ I will get the correct behaviour when using realm-cli push.\nHowever, when I change said config.json file, all I get is:Deployed app is identical to proposed version, nothing to doIn case I update via the Realm UI, it saves correctly and next time I deploy, it pushes an “old version” of the config.json file.Anyone experienced that issue before?", "username": "Thomas_Anderl" }, { "code": "client_max_offline_daysclient_max_offline_days", "text": "Hi @Thomas_Anderl,\nYes, I managed to reproduce the same error. The issue is with changing client_max_offline_days when Max Offline Time is not enabled. Once you change it’s value you can not push the app anymore.\nThe workaround is to click “Pause Sync” button at the Device Sync UI and then to deploy again using realm-cli push. On this way you will be able at least to apply the changes in the permissions.\nWe will check what is the issue with client_max_offline_days. You can leave it by default for now if it works for you.", "username": "Desislava_St_Stefanova" }, { "code": "client_max_offline_daysDeployed app is identical to proposed version, nothing to do", "text": "Thank you for the response. It is actually likely that this started to happen when I first tried to change client_max_offline_days, but I am not 100% sure. But even if I don’t touch this field in my .json file, it behaves that way. Is this due to the same reason?Edit: I pressed “Pause Sync”, tried to push my config and am still getting\nDeployed app is identical to proposed version, nothing to do", "username": "Thomas_Anderl" }, { "code": "", "text": "Hm, It worked for me. Could you send your config.json? Is this option (max offline days) enabled or not on your app service device sync?", "username": "Desislava_St_Stefanova" }, { "code": "", "text": "I sent yout the json via message. As I am not sure if I should post that here (my app is already in production). Offline time is according to the web UI disabled.", "username": "Thomas_Anderl" }, { "code": "\"client_max_offline_days\": 30", "text": "You have to set client_max_offline_days to the default value. Otherwise, it won’t work.\nSet \"client_max_offline_days\": 30 in sync\\config.json?\nThen click “Pause Sync” and then deploy again using realm-cli push.\nIt should work.", "username": "Desislava_St_Stefanova" }, { "code": "\"client_max_offline_days\": 30", "text": "I added\n\"client_max_offline_days\": 30\nand it still does not work. It keeps ignoring my config.", "username": "Thomas_Anderl" }, { "code": "", "text": "You will continue to receive this message “Deployed app is identical to proposed version, nothing to do” if the json settings are the same as the app service settings. Only changing client_max_offline_days is not considered as a change (it will be investigated).\nIf you change anything else in the config it will be uploaded. If there is no new changes different from the last upload, then the upload will be ignored.\nSo, I’m wondering whether your changes are not already uploaded. Or, are you sure that you are uploading to the correct App service? 
Could you try to login with realm-cli again with the app keys of the relevant Atlas project?\nYou can try to pull your app (realm-cli pull ) to another folder and then to compare the json files, whether they are identical indeed.", "username": "Desislava_St_Stefanova" }, { "code": "D:\\GitHub\\myApp\\server\\myApp-dev>git diff\ndiff --git a/server/myApp-dev/functions/onMessage.js b/server/myApp-dev/functions/onMessage.js\nindex f87b4c3..ec11c54 100644\n--- a/server/myApp-dev/functions/onMessage.js\n+++ b/server/myApp-dev/functions/onMessage.js\n@@ -34,7 +34,7 @@ exports = async function (changeEvent) {\n );\n\n await context.functions.execute(\"sendNotification\", receiverId, {\n- text: message.senderName+ \": \" + message.text,\n+ text: message.senderName + \": \" + message.text,\n type: \"MESSAGE\",\n });\n-};\n\\ No newline at end of file\n+};\ndiff --git a/server/myApp-dev/sync/config.json b/server/myApp-dev/sync/config.json\nindex 5e43a07..4b6538d 100644\n--- a/server/myApp-dev/sync/config.json\n+++ b/server/myApp-dev/sync/config.json\n@@ -24,6 +24,9 @@\n },\n \"createdAt\": {\n \"write\": false\n+ },\n+ \"test\": {\n+ \"write\": true\n }\n }\n }\n\nD:\\GitHub\\myApp\\server\\myApp-dev>realm-cli push\nDetermining changes\nThe following reflects the proposed changes to your Realm app\n--- functions/onMessage.js\n+++ functions/onMessage.js\n@@ -34,7 +34,8 @@\n );\n\n await context.functions.execute(\"sendNotification\", receiverId, {\n- text: message.senderName+ \": \" + message.text,\n+ text: message.senderName + \": \" + message.text,\n type: \"MESSAGE\",\n });\n-};\n+};\n+\n\n? Please confirm the changes shown above (y/N)\n\"client_max_offline_days\": 30,", "text": "you change anything else in the config it will be uploaded. If there is no new changes different from the last upload, then the upload will be ignored.I am 99% confident, that I am pushing to the correct project. I also tried enable the automatic deployment from GitHub via the App Services UI and experienced the same issue. I just checked again by changing something in my functions and the function is changed correctly.I can add my console output:As you can see, I changed the config.json and the realm-cli push does not detect the changes. I changed my functions afterwards too, and those are detected correctly. The git diff shows the differences I really made.I have \"client_max_offline_days\": 30, in that file.", "username": "Thomas_Anderl" }, { "code": "", "text": "@Thomas_Anderl we will do some more investigations. Any additional details will be helpful if you have some.\nThanks, for your report.", "username": "Desislava_St_Stefanova" }, { "code": "realm-cli apps list", "text": "I can give you any details that you need. What would help you? A “strange” behaviour is: When I change it in the UI it appears correctly. When I then run realm-cli pull, I however get the “old” version again. It seems that that version is stuck somewhere, or it somehow connects to the wrong project (I have prod and dev, but both projects have the same issue).\nrealm-cli apps list prints the correct project.", "username": "Thomas_Anderl" }, { "code": "", "text": "Hello @Thomas_Anderl,Thank you so much for your patience. This has been discovered as a bug in the code and the team is investigating. 
I have escalated this again.I will keep you updated as and when I have information.Your patience is appreciated.Cheers,\nHenna", "username": "henna.s" }, { "code": "", "text": "Hello @Thomas_Anderl ,Could you please let me know if directly making changes from the UI would suffice at this moment? The team is working on releasing a new version of the CLI soon and it may take time to look at the issue in question.I look forward to your response.Cheers,", "username": "henna.s" }, { "code": "", "text": "Hello @henna.s ,\nchanging in the UI only solved the problem until I make any other change again and the redeploy would “reset” it to the old version. Also I would prefer testing the configuration first extensively on dev (where I deploy stuff quite often), before I make such a big change on prod.", "username": "Thomas_Anderl" }, { "code": "", "text": "@henna.s @Desislava_St_Stefanova\nI just did some additional testing (because I had to add a new Entity to my sync rules). I figured out that this one was correctly added to the sync. So I investigated a bit more and figured out, that field-level permissions are ignored. Document-level permissions are taken over correctly. Is this a known issue?That means the max-offline-time and field level permissions are the ones that are (apparently) being ignored. Hope this helps in the investigation. Copying those manually, also works until they are overwritten the next time by my json. I copied my entire json manually into the UI and it was accepted and saved. With the next deployment however, the fields and max-offline-days will be removed.", "username": "Thomas_Anderl" }, { "code": "client_max_offline_days", "text": "Hi @Thomas_Anderl,\nDue to your report and investigations, we have already two issues created and the App service team is on them. The first issue is about client_max_offline_days and the second one is about the permissions fields.\nThank you about your contribution!", "username": "Desislava_St_Stefanova" }, { "code": "", "text": "Hey @Thomas_Anderl,With regards to the issue with flexible sync field-level permissions not being push/pulled via the CLI - the team will be changing how flexible sync permissions are defined in the near future (within the upcoming release or two). More specifically, flexible sync permissions are being integrated into the Rules that are defined on the “Rules” page in the UI. In the App Configuration structure, this corresponds to the data source default rules and collection rules.For a bit more context, the integration is being performed in order to make those rules be the single source of truth for all app services involving a MongoDB cluster (GraphQL, Flexible Sync, DataAPI, etc.). We’ll be performing a migration to convert everything over, so that shouldn’t require any additional action from your end.In the meantime, if you need to deploy changes to the existing field-level permissions configuration for flexible sync, I’m afraid the UI is the best way to go about doing that; as you alluded to already, the local sync configuration will have to be manually kept in sync with the one deployed via the UI in order to develop locally off the latest version of the config (assuming there are changes that set field-level permissions). Sorry for the inconvenience.Jonathan", "username": "Jonathan_Lee" }, { "code": "", "text": "Thank you @Jonathan_Lee for the insight. I was anyways wondering why these are two seperate things with a similar purpose. I am always happy about overall simplifications anf alignments. 
I assume however that the migration you mentioned will not affect me if my field rules are not manually deployed to the UI when the migration happens.", "username": "Thomas_Anderl" }, { "code": "", "text": "The migration will not “auto-pull” the app for you, so it should not immediately have an effect on anything locally (unless you pull, but that would be the same behavior as if you were to pull the app right now). To go into a bit more detail, the migration will look at whatever is defined under the “permissions” key in the flexible sync configuration at the time of running, and convert it to an equivalent permissions setup across the relevant data source default_rule.json / rules.json file(s). Thus, after the migration is ran, if you pull the app, you’ll be able to see those new file(s) in which permissions are stored, and should be able to modify the field-level permissions there.One last thing to note: the “permissions” key in the flexible sync configuration will still exist after the migration (it just won’t be used anywhere); after all apps are migrated, we intend on removing that field completely.Jonathan", "username": "Jonathan_Lee" }, { "code": "", "text": "Good to know, so there is more files than that I should sync between dev and prod.I am deploying automatically to dev whenever my config changed (and prod on demand), but my field level permissions are ignored due to the issue you mentioned. That means when the migration happened I should have my config synced via the UI so the migration happens on them? Or are the field level permissions stored correctly behind the scenes and will migrate correctly?As I redeploy my config pretty often on dev, it would overwrite it everytime. So currently I work without the field level permissions, so I dont have to update via the UI every time I change something.", "username": "Thomas_Anderl" } ]
Deployed app is identical to proposed version, nothing to do
2023-01-06T09:45:19.050Z
Deployed app is identical to proposed version, nothing to do
2,875
null
[ "serverless" ]
[ { "code": "", "text": "Are there any updates on serverless instances being used as a database trigger source?", "username": "clueless_dev" }, { "code": "", "text": "+1\nneed this too. As well as serverless as Charts data source.", "username": "andrefelipe" }, { "code": "", "text": "+1\nReally important feature !", "username": "Sami_Karak" } ]
MongoDB Atlas Serverless Instance Trigger Support
2022-05-28T08:06:29.478Z
MongoDB Atlas Serverless Instance Trigger Support
2,713
null
[ "queries", "react-native", "flexible-sync" ]
[ { "code": "Rules", "text": "Hello, I am building a react native app using Realm with MongoDB Device sync (flexible sync). I want to know how can I limit the number of documents on the mobile client side without deleting them from the database. For example, if I have a chat app with 1 million messages inside a chat. I want all of them to be persisted on to MongoDB Atlas, but only the last 100 to be present on the mobile device. Basically, the mobile will be pushing documents to the cloud, but will only see a subset of them - last N count or only the documents from the past month, etc. Should I be using Rules (I already have rules set up and am not sure if this is combinable with them) or should I be using a Realm Query? The point is to avoid storing all of the documents on the mobile device. Could you also provide a simple example?", "username": "Damian_Danev" }, { "code": "", "text": "Hi, you should lean on “Subscriptions” in flexible sync for this. Unfortunately, we don’t allow subscriptions on a “limit” due to technical limitations (we would constantly need to re-evaluate the query changes to one object can affect if another object moves in/out of view). Most of our users who want to mimic the limit behavior instead use a time-based query. So you can add a time field to each message and subscribe to all queries that are from the last 7 days let’s say. Then have your app periodically refresh the subscription with a new time whenever it is opened or after some amount of time.", "username": "Tyler_Kaye" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Sync: How to avoid storing all documents on mobile device?
2023-03-15T09:52:59.655Z
Realm Sync: How to avoid storing all documents on mobile device?
999
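A minimal Realm JS sketch of the time-window approach recommended above; the Message class, its timestamp field, and the seven-day window are assumptions to adapt.

```javascript
const WINDOW_DAYS = 7; // assumed retention window on the device

function refreshRecentMessages(realm) {
  const cutoff = new Date(Date.now() - WINDOW_DAYS * 24 * 60 * 60 * 1000);
  realm.subscriptions.update(mutableSubs => {
    // Re-adding a subscription with the same name replaces the old query, so
    // older messages age out of the device while remaining in Atlas.
    mutableSubs.add(
      realm.objects("Message").filtered("timestamp >= $0", cutoff),
      { name: "recent-messages" }
    );
  });
}
```

Calling this on app start (or on a timer) keeps the on-device set bounded without deleting anything server-side.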
null
[ "python", "atlas-cluster" ]
[ { "code": "Traceback (most recent call last):\n ~Something here~\ndns.resolver.NoNameservers: All nameservers failed to answer the query _mongodb._tcp.clusterai.7zs4x3d.mongodb.net. IN SRV: \n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n ~Something here again~\npymongo.errors.ConfigurationError: All nameservers failed to answer the query _mongodb._tcp.clusterai.7zs4x3d.mongodb.net. IN SRV:\nimport pymongo\nconnection_pass = \"MyPass\"\nconnection_string = f\"mongodb+srv://ansh-admin:{connection_pass}@clusterai.7zs4x3d.mongodb.net/test\"\n\nif __name__ == \"__main__\":\n client = pymongo.MongoClient(connection_string)\nimport pymongo \nconnection_pass = \"MyPass\"\nconnection_string = f\"mongodb://ansh-admin:{connection_pass}@clusterai.7zs4x3d.mongodb.net/test\"\n\nif __name__ == \"__main__\":\n client = pymongo.MongoClient(connection_string)\n\n db = client['test-database']\n\n collection = db['test-col']\nimport pymongo \nconnection_pass = \"MyPass\"\nconnection_string = f\"mongodb://ansh-admin:{connection_pass}@clusterai.7zs4x3d.mongodb.net/test\"\n\nif __name__ == \"__main__\":\n client = pymongo.MongoClient(connection_string)\n\n db = client['test-database']\n\n collection = db['test-col']\n\n docs = {\"name\" : \"test\"}\n data_id = collection.insert_one(docs).inserted_id\n print(data_id)\nTraceback (most recent call last):\n ~something here~\npymongo.errors.ServerSelectionTimeoutError: clusterai.7zs4x3d.mongodb.net:27017: [Errno 11001] getaddrinfo failed, Timeout: 30s, Topology Description: <TopologyDescription id: 640f56ba89e1802b2f068d6a, topology_type: Unknown, servers: [<ServerDescription ('clusterai.7zs4x3d.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('clusterai.7zs4x3d.mongodb.net:27017: [Errno 11001] getaddrinfo failed')>]>\n", "text": "I am creating a Python-based program and wanted to store data in MongoDB Atlas but when I created a cluster and tried connecting it using a Module named pymongo, which is used to access MongoDB Atlas. It didn’t connect and gave this error :My Code was this and all credentials were correct! :When I tried removing +srv from the connection string as ChatGPT asked me to do. It gave me no error with this code :but when I tried writing data it gave me an error. My code was :The error it gave was :Help me out, please! Thanks in advance!", "username": "Ansh_Tanwar" }, { "code": "", "text": "Do not trust ChatGPTWhen I tried removing +srv from the connection string as ChatGPT asked me to doAn Atlas cluster must have +srv.Try to change your nameservers to Google’s 8.8.8.8 or 8.8.44.", "username": "steevej" }, { "code": "", "text": "How, can you help me?\ntell me in brief!", "username": "Ansh_Tanwar" }, { "code": "", "text": "Can you connect with Compass or mongosh to your cluster?", "username": "steevej" } ]
MongoDB Atlas not connecting and Giving Errors in Python using PyMongo Module
2023-03-13T17:32:57.642Z
MongoDB Atlas not connecting and Giving Errors in Python using PyMongo Module
1,318
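One quick way to narrow down an SRV/DNS failure like the one above is to try a different client from the same machine. The Node.js driver sketch below is only for that sanity check (the thread itself uses Python); it reuses the cluster host from the question and expects the real password in place of the placeholder.

```javascript
const { MongoClient } = require("mongodb");

// Same cluster host as in the question; replace <password> before running.
const uri = "mongodb+srv://ansh-admin:<password>@clusterai.7zs4x3d.mongodb.net/test";

async function ping() {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    // If this succeeds, SRV resolution and authentication work on this machine,
    // which points the original error back at the Python environment's DNS resolver.
    console.log(await client.db("admin").command({ ping: 1 }));
  } finally {
    await client.close();
  }
}

ping().catch(console.error);
```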
null
[ "cxx" ]
[ { "code": "", "text": "I compiled the driver, after connecting the library, the following error occurs during assemblyAutoMoc subprocess errorThe moc process failed to compile“SRC:/controller.h”", "username": "leogoleogoleogo" }, { "code": "", "text": "Hi @leogoleogoleogo , can you share more details about your build environment? What OS, driver version, IDE and compiler are you using? Are you using Qt by any chance - AutoMoc sounds related to it.", "username": "Rishabh_Bisht" }, { "code": "", "text": "Yes, I am using QT libraries, I have compiled c++ driver with visual studio 2019 and working with c++14 and boost 1.60.0", "username": "leogoleogoleogo" }, { "code": "", "text": "I am not sure if this is really related to the MongoDB driver. Off hand it feels like an issue with Qt where it is failing to include some of the generated files (?)\nAre you able to build successfully if you don’t include the C++ driver? Are you generating your solution also via Visual Studio or are you using a build generator like Ninja or CMake?", "username": "Rishabh_Bisht" }, { "code": "", "text": "In this case, the problem is really with CT, if I connect mongoDB in the .cpp file, the project is built. However, if I connect MongoDB in the .h file, an error appears. The question is what could be causing this problem?", "username": "leogoleogoleogo" }, { "code": "", "text": "I am no QT expert and it’s even more difficult to say what’s going wrong without taking a look at the code. \nWould you be able to share code snippets on how you are “connecting MongoDB” in .h vs .cpp?", "username": "Rishabh_Bisht" }, { "code": "#include <mongocxx/uri.hpp>\n#include <bsoncxx/json.hpp>\n#include <mongocxx/instance.hpp>\n#include <mongocxx/client.hpp>\n#include <mongocxx/collection.hpp>\n", "text": "if this code is in a .h file, an error occurs \nand if the same code is in the .cpp file, there is no error\nI have no idea how it works\nXD", "username": "leogoleogoleogo" } ]
AutoMoc error after adding the C++ driver include
2023-03-10T12:02:29.650Z
AutoMoc error after adding the C++ driver include
1,162
null
[]
[ { "code": "{\n \"_id\": ObjectId(\"640c64740fb9216e3a1bd565\"),\n \"outer_field\": {\n \"some_field_1\": [\"apple\", \"orange\"],\n \"some_field_2\": [\"lemon\", \"orange\"],\n \"some_field_3\": [],\n \"some_field_4\": [\"apple\"],\n \"some_field_5\": [\"orange\"],\n ...\n } \n}\n$pullappleouter_fielddb.coll.update({}, {\n \"$pull\": {\n \"outer_field.*\": \"apple\"\n }\n})\n*{\n \"_id\": ObjectId(\"640c64740fb9216e3a1bd565\"),\n \"outer_field\": {\n \"some_field_1\": [\"orange\"],\n \"some_field_2\": [\"lemon\", \"orange\"],\n \"some_field_3\": [],\n \"some_field_4\": [],\n \"some_field_5\": [\"orange\"],\n ...\n } \n}\n", "text": "Hi everyone,I have the following document schema:I would like to $pull a speicfic value, let’s say apple, from all the arrays in the inner fields of outer_field but I’m unsure how to reference the inner fields since they’re unknown to me.My initial thoughts are this query:where * would match any/all fields and update all of them accordingly.The expected result would be something like this:Thank you.", "username": "loay_khateeb" }, { "code": "$objectToArray$map$filter$arrayToObjectdb.coll.update({},\n[\n {\n $set: {\n outer_field: {\n $arrayToObject: {\n $map: {\n input: { $objectToArray: \"$outer_field\" },\n in: {\n k: \"$$this.k\",\n v: {\n $filter: {\n input: \"$$this.v\",\n cond: {\n $ne: [\"$$this\", \"apple\"]\n }\n }\n }\n }\n }\n }\n }\n }\n }\n])\n{\n \"_id\": ObjectId(\"640c64740fb9216e3a1bd565\"),\n \"outer_field\": [\n {\n \"field\": \"some_field_1\",\n \"value\": [\"apple\", \"orange\"]\n },\n {\n \"field\": \"some_field_2\",\n \"value\": [\"lemon\", \"orange\"]\n },\n {\n \"field\": \"some_field_3\",\n \"value\": []\n },\n {\n \"field\": \"some_field_4\",\n \"value\": [\"apple\"]\n },\n {\n \"field\": \"some_field_5\",\n \"value\": [\"orange\"]\n }\n ]\n }\ndb.coll.update({},\n{\n $pull: {\n \"outer_field.$[].value\": \"apple\"\n }\n})\n", "text": "Hello @loay_khateeb,There is no straight way, but you can use an update with aggregation pipeline starting from MongoDB4.2,Out of the question, I would suggest you improve your schema as below, which is called attribute pattern, and this will cover almost straight operations,And your query would be:", "username": "turivishal" }, { "code": "", "text": "After some research, I’ve decided to use the attribute pattern.Thank you for the reply, @turivishal.", "username": "loay_khateeb" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to match any field name in a query?
2023-03-11T14:56:25.842Z
How to match any field name in a query?
1,066
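One practical follow-up to the attribute-pattern answer above: a single multikey index then covers lookups on any attribute name/value pair. A small sketch, with the collection name from the thread and an illustrative index name:

```javascript
// One index serves queries on every attribute, instead of one index per field name.
db.coll.createIndex(
  { "outer_field.field": 1, "outer_field.value": 1 },
  { name: "outer_field_kv" }
);

// Example lookup: documents where some_field_4 still contains "apple".
db.coll.find({
  outer_field: { $elemMatch: { field: "some_field_4", value: "apple" } }
});
```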
null
[ "php" ]
[ { "code": " try { Schema::table('invoice_services', function (Blueprint $collection) { $collection->index('company_id'); $collection->index('numRps'); $collection->index('serieRps'); $options['sparse'] = true; $options['unique'] = true; $collection->index(['company_id', 'numRps', 'serieRps'], 'uniques', null, $options); }); } catch (\\Exception $ex) { Log::debug($ex->getMessage()); }Index build failed: fd1868d8-db3c-4367-b818-42939da3bd5d: Collection invoice_hmlg.invoice_services ( 34a28ecd-882d-49b4-8e8b-e6e90b08b932 ) :: caused by :: E11000 duplicate key error collection: invoice_hmlg.invoice_services index: company_id_1_numRps_1_serieRps_1 dup key: { company_id: \"5b6da812e014611c866da693\", numRps: null, serieRps: null }", "text": "Laravel version: 9\nPHP version: 8.2\njenssegers/mongodb package version: 3.9Estou tentando criar um índice único, segue meu código:I’m trying to create an unique index, code follows: try { Schema::table('invoice_services', function (Blueprint $collection) { $collection->index('company_id'); $collection->index('numRps'); $collection->index('serieRps'); $options['sparse'] = true; $options['unique'] = true; $collection->index(['company_id', 'numRps', 'serieRps'], 'uniques', null, $options); }); } catch (\\Exception $ex) { Log::debug($ex->getMessage()); }But even using sparse to ignore null value entries, I get the following error message:Index build failed: fd1868d8-db3c-4367-b818-42939da3bd5d: Collection invoice_hmlg.invoice_services ( 34a28ecd-882d-49b4-8e8b-e6e90b08b932 ) :: caused by :: E11000 duplicate key error collection: invoice_hmlg.invoice_services index: company_id_1_numRps_1_serieRps_1 dup key: { company_id: \"5b6da812e014611c866da693\", numRps: null, serieRps: null }Any help appreciated. Thanks in advance.", "username": "Kleber_Marioti" }, { "code": "E11000 duplicate key error collection: invoice_hmlg.invoice_services index: company_id_1_numRps_1_serieRps_1 dup key: { company_id: \"5b6da812e014611c866da693\", numRps: null, serieRps: null }\n", "text": "It looks like you are building a unique index but there are multiple documents with the same key. With unique indexes you can’t have the same values for the keys that you defined. So to resolve this issue you will need to make sure there are no duplicates and all keys are unique.", "username": "tapiocaPENGUIN" }, { "code": "sparseuniquenull", "text": "Cross-referencing with Erro ao criar índice com valor nulo · jenssegers laravel-mongodb · Discussion #2524 · GitHub where I’ve answered the question already.@Kleber_Marioti please cross-link discussions that you create in multiple forums so people can check if the question has been answered.@tapiocaPENGUIN yes, that is correct. The fact that OP is using sparse and unique at the same time indicates that they have fields that shouldn’t count towards the uniqueness constraint if they are null. In that case, removing duplicates is impossible, but it is possible to work around the problem using partial indexes.", "username": "Andreas_Braun" } ]
Error creating index with null value
2023-03-14T20:57:40.509Z
Error creating index with null value
1,251
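The workaround mentioned in the reply (a partial index instead of a sparse one) looks roughly like this in mongosh; it assumes numRps and serieRps are stored as strings, so adjust the $type filter to the real field types. The same options array can be passed through Laravel's schema builder.

```javascript
// Only documents where numRps and serieRps are actually set (as strings) count
// toward the uniqueness constraint, so rows with nulls no longer collide.
db.invoice_services.createIndex(
  { company_id: 1, numRps: 1, serieRps: 1 },
  {
    unique: true,
    partialFilterExpression: {
      numRps: { $type: "string" },
      serieRps: { $type: "string" }
    }
  }
);
```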
null
[ "node-js", "data-modeling", "mongoose-odm", "api" ]
[ { "code": "const express = require(\"express\");\nconst app = express();\nconst mongoose = require(\"mongoose\");\nconst dotenv = require(\"dotenv\");\nconst authRoute = require(\"./routes/auth\");\nconst bodyParser = require(\"body-parser\");\nconst cookieParser = require(\"cookie-parser\");\nconst passport = require(\"passport\");\nconst flash = require(\"express-flash\");\nconst session = require(\"express-session\");\nconst cors = require(\"cors\");\nrequire(\"./config/passport\");\nrequire(\"./config/google-config\");\nrequire(\"./config/facebook-config\");\n\ndotenv.config();\n\nmongoose.set(\"strictQuery\", false);\nmongoose\n .connect(process.env.MONGO_URL)\n .then(() => console.log(\"connected to db\"))\n .catch((e) => console.log(e));\n\napp.use(bodyParser.json());\napp.use(bodyParser.urlencoded({ extended: true }));\napp.use(\n session({\n secret: \"***\",\n resave: false,\n saveUninitialized: true,\n })\n);\napp.use(cors());\napp.use(passport.initialize());\napp.use(passport.session());\napp.use(flash());\napp.use(\"/api/user\", authRoute);\napp.listen(3000, () => console.log(\"Server up and running\"));\nconst express = require(\"express\");\nconst router = express.Router();\nconst User = require(\"../model/users\");\nconst jwt = require(\"jsonwebtoken\");\nconst bcrypt = require(\"bcrypt\");\nconst { registerValidation, loginValidation } = require(\"../validation\");\nconst passport = require(\"passport\");\nrequire(\"../config/passport\");\nrequire(\"../config/google-config\");\nrequire(\"../config/facebook-config\");\n\n//register-user\nrouter.post(\"/register\", async (req, res) => {\n const { error } = registerValidation(req.body);\n if (error) return res.status(400).send(error.details[0].message);\n //check if user is registered\n const emailExist = await User.findOne({ email: req.body.email });\n if (emailExist) return res.status(400).send(\"Email already exist\");\n //hashpassword\n const salt = await bcrypt.genSalt(10);\n const hashedPassword = await bcrypt.hash(req.body.password, salt);\n //createUser\n const user = new User({\n name: req.body.name,\n email: req.body.email,\n password: hashedPassword,\n phoneNumber: req.body.phoneNumber,\n });\n try {\n const savedUser = await user.save();\n res.send({ user: user._id });\n } catch (err) {\n res.status(400).send(err);\n }\n});\n//login\nrouter.post(\"/login\", async (req, res) => {\n const { error } = loginValidation(req.body);\n if (error) return res.status(400).send(error.details[0].message);\n const userExist = await User.findOne({ email: req.body.email });\n if (!userExist) return res.status(400).send(\"Email or Password Invalid\");\n const validPassword = await bcrypt.compare(\n req.body.password,\n userExist.password\n );\n if (!validPassword) return res.status(400).send(\"Invalid Password\");\n //create and assign a token\n const token = jwt.sign({ _id: User._id }, process.env.TOKEN_SECRET);\n res.header(\"auth-token\", token).send(token);\n res.send(\"Signed In Successfully\");\n});\nrouter.get(\n \"/auth/google\",\n passport.authenticate(\"google\", {\n scope: [\"profile\", \"email\"],\n })\n);\nrouter.get(\n \"/auth/google/callback\",\n passport.authenticate(\"google\", {\n failureRedirect: \"/failed\",\n }),\n function (req, res) {\n res.redirect(\"/success\");\n }\n);\nrouter.get(\"/auth/facebook\", passport.authenticate(\"facebook\"));\nrouter.get(\n \"/auth/facebook/callback\",\n passport.authenticate(\"facebook\", { failureRedirect: \"/login\" }),\n (req, res) => {\n res.redirect(\"/\");\n 
}\n);\nconst isLoggedIn = (req, res, next) => {\n req.user ? next() : res.sendStatus(401);\n};\nrouter.get(\"/failed\", (req, res) => {\n res.send(\"Failed\");\n});\nrouter.get(\"/success\", isLoggedIn, (req, res) => {\n res.send(`Welcome ${req.user.email}`);\n});\n\n\nrouter.post(\"/:_id/books/current-reading\", async (req, res) => {\n const {\n bookTitle,\n bookAuthor,\n totalPages,\n pagesLeft,\n daysLeft,\n bookGenre,\n bookCompleted,\n } = req.body;\n const user = await User.findById(req.params._id);\n if (!user) return res.status(404).send(\"User not found\");\n user.bookReading.currentReading = {\n ...user.bookReading.currentReading,\n bookTitle: bookTitle,\n bookAuthor: bookAuthor,\n totalPages: totalPages,\n pagesLeft: pagesLeft,\n daysLeft: daysLeft,\n bookGenre: bookGenre,\n bookCompleted: bookCompleted,\n };\n const savedUser = await user.save();\n res.status(200).json(savedUser);\n});\nmodule.exports = router;\nconst mongoose = require(\"mongoose\");\nconst Schema = mongoose.Schema;\n\nconst bookReadingSchema = new Schema({\n pagesLeft: {\n type: Number,\n default: 0,\n },\n bookCompleted: {\n type: Boolean,\n default: false,\n },\n daysLeft: {\n type: Number,\n default: 0,\n },\n bookTitle: {\n type: String,\n default: \"\",\n },\n totalPages: {\n type: Number,\n default: 0,\n },\n bookAuthor: {\n type: String,\n default: \"\",\n },\n bookGenre: {\n type: String,\n default: \"\",\n },\n});\n\nconst bookReadingDefault = {\n pagesLeft: 0,\n bookCompleted: false,\n daysLeft: 0,\n bookTitle: \"\",\n totalPages: 0,\n bookAuthor: \"\",\n bookGenre: \"\",\n};\n\nconst userSchema = new Schema(\n {\n name: {\n type: String,\n minlength: 6,\n maxlength: 255,\n },\n email: {\n type: String,\n maxlength: 255,\n unique: true,\n },\n phoneNumber: {\n type: String,\n },\n password: {\n type: String,\n minlength: 6,\n maxlength: 1024,\n },\n bookReading: {\n currentReading: {\n type: bookReadingSchema,\n default: bookReadingDefault,\n },\n },\n },\n { timestamps: true }\n);\n\nmodule.exports = mongoose.model(\"User\", userSchema);\n", "text": "When i make a post request from my postman, it sends the request, returns a 201 success code, but nothing is saved on my database, it returns an empty value. Here is my App.js Code,My Auth CodeAnd my schemaWill appreciate any help greatly.", "username": "Ojochogwu_Dickson" }, { "code": "/register try {\n const savedUser = await user.save();\n res.send({ user: user._id });\nsavedUseruser_idconst savedUser = await user.save();\nres.send({ user: savedUser._id });\n", "text": "Hi @Ojochogwu_Dickson,Welcome to the MongoDB Community forums I preassume that you are hitting the /register route to create a user.it returns an empty valueThe above code will return an empty response instead of the expected JSON object containing the user ID because the savedUser variable is never used in the response. Instead, the response uses the user variable, which is the new user instance that doesn’t contain the _id.So, to resolve this error, please modify the code to:If it doesn’t work as expected, please post more details regarding the error, and the workflow you’re doing.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "ok, hi. Thanks. 
The user register is going well, the data is getting saved, i think the problem is with my schema, i need to create a different schema to save the book reading records and not append it to the main user profile schema, since the book reading records is supposed to be independent, that’s what i am thinking.", "username": "Ojochogwu_Dickson" }, { "code": "", "text": "Hi, i hope you can help me with this please, here is what i’m trying to achieve with the schema, the user should be able to add a book records to their account. Do i really need to create a new schema for the book records? looking at my current schema referenced in the main post, what do you think i should do?. I’m still very much confused.", "username": "Ojochogwu_Dickson" }, { "code": "\"independent\"", "text": "Hi @Ojochogwu_Dickson,Before answering your question, may I ask how you intend to access the data?The general rule of thumb when modeling your data in MongoDB is - “Data that is accessed together should be stored together.”I think the problem is with my schema, I need to create a different schema to save the book reading records and not append it to the main user profile schema, since the book reading records are supposed to be independent, that’s what I am thinking.Could you please clarify why you think it’s not supposed to be added to the user profile, and what you mean by \"independent\" in this context?Rather than jumping straight into the solution, could you explain the typical scenario and workflow of this app?Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "ok, here is how it is supposed to work. The user can create an account using the profile details, the user should be able to create, and add books he is currently working on to his account. He can add as much books as possible, hence an array of books. For the “independent”, i probably mean, the book addition to the user account should not be part of the register schema.", "username": "Ojochogwu_Dickson" }, { "code": "userSchemaconst userSchema = new Schema(\n {\n name: {\n type: String,\n minlength: 6,\n maxlength: 255,\n },\n email: {\n type: String,\n maxlength: 255,\n unique: true,\n },\n phoneNumber: {\n type: String,\n },\n password: {\n type: String,\n minlength: 6,\n maxlength: 1024,\n },\n books: [\n {\n type: Schema.Types.ObjectId,\n ref: 'Book'\n }\n ],\n },\n { timestamps: true }\n);\n\nmodule.exports = mongoose.model(\"User\", userSchema);\nuserSchema_idbookSchemaconst bookSchema = new Schema({\n title: {\n type: String,\n required: true\n },\n author: {\n type: String,\n required: true\n },\n genre: {\n type: String,\n required: true\n },\n totalPages: {\n type: Number,\n required: true\n },\n pagesLeft: {\n type: Number,\n default: 0,\n },\n completed: {\n type: Boolean,\n default: false,\n },\n});\n\nmodule.exports = mongoose.model(\"Book\", bookSchema);\n", "text": "Hi @Ojochogwu_Dickson,Thanks for explaining the workflow of the app.add books he is currently working on to his account. He can add as many books as possible, hence an array of books.As I understand you frequently need to access the books and their progress for a particular user.Considering this it can be useful to design your schema to store the book details as an array within the userSchema and using .populate() functionality you can access the book’s details if you need.In the above schema books field is part of the userSchema which is storing the book _id in an array format as a reference to the book schema.Here is the bookSchema. 
Sharing this for your reference:This is just an example solution, and its effectiveness depends on the specific use case.To determine the optimal solution, it is recommended to evaluate the performance with expected workloads. One approach is to use mgeneratejs to generate randomized data for testing purposes.I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Nodejs Data not saving on MongoDB
2023-03-07T08:26:40.078Z
Nodejs Data not saving on MongoDB
5,418
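Following up on the schema discussion in the thread above: a minimal sketch of how a book could be created in its own collection, linked to the user by `_id`, and read back with `populate()`. It assumes the User/Book models suggested in the answer; the Book model path, ids and field values are placeholders.

```js
// Assumes the User model with books: [{ type: ObjectId, ref: "Book" }] and the
// separate Book model suggested above. Paths below are illustrative only.
const User = require("../model/users");
const Book = require("../model/books"); // hypothetical path

async function addBookForUser(userId, bookData) {
  // 1. Create the book in its own collection.
  const book = await Book.create(bookData);

  // 2. Store only the book's _id on the user document.
  await User.findByIdAndUpdate(userId, { $push: { books: book._id } });

  // 3. Read the user back with the referenced books resolved.
  return User.findById(userId).populate("books");
}

// Example usage (placeholder values):
// const user = await addBookForUser("640f...", {
//   title: "Clean Code", author: "Robert C. Martin",
//   genre: "Software", totalPages: 464
// });
```

Keeping only ObjectId references on the user keeps the user document small even when a reader accumulates many books.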
https://www.mongodb.com/…8_2_1024x576.png
[ "replication" ]
[ { "code": "", "text": "\nops1920×1080 203 KB\nwhat can be the error?", "username": "Amit_Faibish" }, { "code": "", "text": "call https://www.mongodb.com/docs/manual/reference/method/rs.initiate/", "username": "Kobe_W" }, { "code": "", "text": "I dont want to invite the replica manually.\nI work with ops manager in ubuntu server.\nThere is connection between the windows server, its works that i deploy 3 standalone servers.\nI want to deploy replica set with rhe 3 server in windows, and the deploy got stuck (photo attach).", "username": "Amit_Faibish" }, { "code": "", "text": "Hello @Amit_Faibish ,Welcome to The MongoDB Community Forums! MongoDB Ops Manager is part of Enterprise Advanced, MongoDB Ops Manager is part of Enterprise Advanced, which is a product requiring a subscription to use.I would advise you to bring this up with the Enterprise Advanced Support | MongoDB as typically these issues will require detailed knowledge into your deployment infrastructure. Alternatively, if you’re evaluating Ops Manager and would like more insight, please DM me and I should be able to connect you to the right team.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Replica set on 3 Windows servers
2023-03-09T07:08:03.408Z
Replica set on 3 Windows servers
796
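For the replica set thread above: if the Ops Manager automation stays stuck, the manual route pointed to in the first reply (`rs.initiate()`) looks roughly like the sketch below, run once in mongosh against one of the three members started with the same replica set name. The set name and hostnames are placeholders.

```js
// Run once against any one of the three mongod instances
// (all started with the same --replSet name). Hostnames are placeholders.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "win-srv-1:27017" },
    { _id: 1, host: "win-srv-2:27017" },
    { _id: 2, host: "win-srv-3:27017" }
  ]
});

// Then confirm the members reach PRIMARY/SECONDARY state:
rs.status();
```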
null
[ "aggregation", "queries", "data-modeling", "indexes" ]
[ { "code": "{\n tests: [\n { grade: 90 },\n { grade: 80 },\n { grade: 100 },\n ...\n ]\n}\ntests.grade$elemMatch$match[\n { $sort: { \"tests.grade\", 1 } }\n { $limit: 10 }\n]\ntests.gradeelemMatch", "text": "This is the schema of my collection.I have an index on tests.grade.In my aggregation pipeline, after $elemMatch inside of a $match, there will beMy question is, will this work, given that tests.grade is a field inside of an array of embedded documents? Would it know to use the embedded doc that was matched in the elemMatch? What if there are multiple matches? Would this be optimized as in ESR rule?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "tests.grade$elemMatch$match$sort> db.test.find({}).limit(5)\n{\n _id: ObjectId(\"6410499d19ad274fe51c6c18\"),\n tests: [\n {\n grade: 90\n },\n {\n grade: 80\n },\n {\n grade: 100\n }\n ]\n}\n{\n _id: ObjectId(\"64104fd819ad274fe51c6c1b\"),\n tests: [\n {\n grade: 40\n },\n {\n grade: 50\n },\n {\n grade: 10\n }\n ]\n}\n{\n _id: ObjectId(\"6410500819ad274fe51c6c1c\"),\n tests: [\n {\n grade: 75\n },\n {\n grade: 80\n },\n {\n grade: 92\n }\n ]\n}\n{\n _id: ObjectId(\"6410499d19ad274fe51c6c19\"),\n tests: [\n {\n grade: 70\n },\n {\n grade: 80\n },\n {\n grade: 90\n }\n ]\n}\n{\n _id: ObjectId(\"6410499d19ad274fe51c6c1a\"),\n tests: [\n {\n grade: 60\n },\n {\n grade: 70\n },\n {\n grade: 80\n }\n ]\n}\ntests.grade$elemMatch$match[\n { $sort: { \"tests.grade\", 1 } }\n { $limit: 10 }\n]\n[\n {\n $match:\n {\n tests: {\n $elemMatch: {\n grade: {\n $gte: 50,\n $lt: 95,\n },\n },\n },\n },\n },\n {\n $sort:\n {\n \"tests.grade\": 1,\n },\n },\n {\n $limit:10,\n }\n]\n{\n _id: ObjectId(\"64104fd819ad274fe51c6c1b\"),\n tests: [\n {\n grade: 40\n },\n {\n grade: 50\n },\n {\n grade: 10\n }\n ]\n}\n{\n _id: ObjectId(\"6410499d19ad274fe51c6c1b\"),\n tests: [\n {\n grade: 50\n },\n {\n grade: 60\n },\n {\n grade: 70\n }\n ]\n}\n{\n _id: ObjectId(\"6410499d19ad274fe51c6c1a\"),\n tests: [\n {\n grade: 60\n },\n {\n grade: 70\n },\n {\n grade: 80\n }\n ]\n}\n{\n _id: ObjectId(\"6410499d19ad274fe51c6c1f\"),\n tests: [\n {\n grade: 60\n },\n {\n grade: 65\n },\n {\n grade: 70\n }\n ]\n}\n{\n _id: ObjectId(\"6410499d19ad274fe51c6c19\"),\n tests: [\n {\n grade: 70\n },\n {\n grade: 80\n },\n {\n grade: 90\n }\n ]\n}\n{\n _id: ObjectId(\"6410499d19ad274fe51c6c1e\"),\n tests: [\n {\n grade: 70\n },\n {\n grade: 75\n },\n {\n grade: 80\n }\n ]\n}\n{\n _id: ObjectId(\"6410499d19ad274fe51c6c22\"),\n tests: [\n {\n grade: 70\n },\n {\n grade: 80\n },\n {\n grade: 90\n }\n ]\n}\n{\n _id: ObjectId(\"6410500819ad274fe51c6c1c\"),\n tests: [\n {\n grade: 75\n },\n {\n grade: 80\n },\n {\n grade: 92\n }\n ]\n}\n{\n _id: ObjectId(\"6410499d19ad274fe51c6c18\"),\n tests: [\n {\n grade: 90\n },\n {\n grade: 80\n },\n {\n grade: 100\n }\n ]\n}\n{\n _id: ObjectId(\"6410499d19ad274fe51c6c1d\"),\n tests: [\n {\n grade: 80\n },\n {\n grade: 85\n },\n {\n grade: 90\n }\n ]\n}\nexecutionStats: {\n executionSuccess: true,\n nReturned: 10,\n executionTimeMillis: 0,\n totalKeysExamined: 0,\n totalDocsExamined: 13,\n executionStages: {\n stage: 'SORT',\n nReturned: 10,\n executionTimeMillisEstimate: 0,\n works: 26,\n advanced: 10,\n needTime: 15,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n sortPattern: {\n 'tests.grade': 1\n },\n memLimit: 104857600,\n limitAmount: 10,\n type: 'simple',\n totalDataSizeSorted: 1476,\n usedDisk: false,\n spills: 0,\n inputStage: {\n stage: 'COLLSCAN',\n filter: {\n tests: {\n '$elemMatch': {\n '$and': [\n {\n grade: {\n 
'$lt': 95\n }\n },\n {\n grade: {\n '$gte': 50\n }\n }\n ]\n }\n }\n },\n nReturned: 13,\n executionTimeMillisEstimate: 0,\n works: 15,\n advanced: 13,\n needTime: 1,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n direction: 'forward',\n docsExamined: 13\n }\n }\n }\nexecutionStats: {\n executionSuccess: true,\n nReturned: 10,\n executionTimeMillis: 0,\n totalKeysExamined: 21,\n totalDocsExamined: 10,\n executionStages: {\n stage: 'LIMIT',\n nReturned: 10,\n executionTimeMillisEstimate: 0,\n works: 22,\n advanced: 10,\n needTime: 11,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n limitAmount: 10,\n inputStage: {\n stage: 'FETCH',\n filter: {\n tests: {\n '$elemMatch': {\n '$and': [\n {\n grade: {\n '$lt': 95\n }\n },\n {\n grade: {\n '$gte': 50\n }\n }\n ]\n }\n }\n },\n nReturned: 10,\n executionTimeMillisEstimate: 0,\n works: 21,\n advanced: 10,\n needTime: 11,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 0,\n docsExamined: 10,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 10,\n executionTimeMillisEstimate: 0,\n works: 21,\n advanced: 10,\n needTime: 11,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 0,\n keyPattern: {\n 'tests.grade': 1\n },\n indexName: 'tests.grade_1',\n isMultiKey: true,\n multiKeyPaths: {\n 'tests.grade': [\n 'tests'\n ]\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n 'tests.grade': [\n '[MinKey, MaxKey]'\n ]\n },\n keysExamined: 21,\n }\n }\n }\n }\n", "text": "Hey @Big_Cat_Public_Safety_Act,The $sort aggregation stage should work as expected, even if tests.grade is a field inside an array of embedded documents. When using $elemMatch inside $match, it returns the first element that matches the specified condition in the array, and only that element is used in the next pipeline stages.Regarding multiple matches, $sort should work as expected, sorting all the matched embedded documents The order of the documents after the sort operation will depend on the sort order specified in the $sort stage. To confirm this, I tried to make a sample collection from the sample document you provided. This is what the documents looked like:tests.grade has an index.Based on the information you provided:In my aggregation pipeline, after $elemMatch inside of a $match, there will beI created an aggregation pipeline:The result was as we expected, the output was shown in increasing order :Regarding the optimization of the query, if you have an index on “tests.grade”, then the query should perform well. MongoDB’s query optimizer should use the index to speed up the sort operation, which will improve the query’s performance. However, the specific optimization strategy may depend on the size of the collection, the number of matches, and the sort order. You can use explain output to check this all. The explain output for the above aggregation query without any index looked like this:As we can see, it had to do a collection scan (COLLSCAN). With indexes, the explain output looked like this:As we can see, the inputStage has no COLLSCAN this time and there is an IXSCAN happening. Of course, there may be some changes to your explain output based on the full structure of your documents and the exact aggregation query that you’re using. I would suggest using explain output to understand which indexes your query is using. 
Also, using Compass for writing aggregation queries can help a lot too, since one can easily see the output after each stage and analyze accordingly.\n\nRegards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
ESR rule when sorting is done on a field inside of an embedded document
2023-03-08T17:07:17.983Z
ESR rule when sorting is done on a field inside of an embedded document
1,030
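A short companion to the explain comparison in the ESR thread above, showing how the multikey index and the executionStats check can be reproduced; the collection name is an assumption.

```js
// Create the multikey index on the embedded array field.
db.students.createIndex({ "tests.grade": 1 });

// Re-run the pipeline in explain mode to confirm IXSCAN is used
// instead of COLLSCAN once the index exists.
db.students.explain("executionStats").aggregate([
  { $match: { tests: { $elemMatch: { grade: { $gte: 50, $lt: 95 } } } } },
  { $sort: { "tests.grade": 1 } },
  { $limit: 10 }
]);
```

With the index in place the plan switches from COLLSCAN to IXSCAN, as in the second explain output quoted above.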
null
[ "atlas-search" ]
[ { "code": "{\n \"analyzer\": \"lucene.standard\",\n \"searchAnalyzer\": \"lucene.standard\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"alternate_names\": {\n \"type\": \"string\"\n },\n \"platform\": {\n \"dynamic\": false,\n \"fields\": {\n \"_id\": {\n \"representation\": \"int32\",\n \"type\": \"number\"\n },\n \"alias\": {\n \"type\": \"string\"\n },\n \"name\": {\n \"type\": \"string\"\n }\n },\n \"type\": \"document\"\n },\n \"reviewed\": {\n \"type\": \"boolean\"\n },\n \"title\": {\n \"type\": \"string\"\n }\n }\n }\n}\nplatform{\n \"_id\": 1868,\n \"coop\": \"No\",\n \"platform\": {\n \"_id\": 1,\n \"alias\": \"PC\",\n \"name\": \"PC\"\n },\n \"publishers\": [\n {\n \"_id\": 113,\n \"keyword\": \"Accolade, Inc.\"\n }\n ],\n \"title\": \"Test Drive II: The Duel\",\n \"reviewed\": true,\n \"references\": 1,\n \"slug\": \"pc/test-drive-ii-the-duel\",\n\n}\n[\n {\n $search: {\n index: \"name_idx\",\n equals: {\n path: \"platform._id\",\n value: 1\n }\n }\n }\n]\nplatform._idmustin", "text": "Hi there, I have an index configured as:The index was working as expected before the addition of the platform nested document.And here’s a sample document from the collection:Running a sample search to filter by platform is not working however:Not sure if I understand why this query is not working.On the same subject, when creating the pipeline to search for the platform._id do I need to add one must entry for each platform selected by the user, or is there a in like operator?Thank you", "username": "Vinicius_Carvalho1" }, { "code": "Your index could not be built: \n\"mappings.fields.platform.fields._id.representation\" must be one of [double, int64].\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"platform\": {\n \"fields\": {\n \"_id\": {\n \"representation\": \"int64\",\n \"type\": \"number\"\n },\n \"alias\": {\n \"type\": \"string\"\n },\n \"name\": {\n \"type\": \"string\"\n }\n },\n \"type\": \"document\"\n },\n \"reviewed\": {\n \"type\": \"boolean\"\n },\n \"title\": {\n \"type\": \"string\"\n }\n }\n }\n}\n{\n index: 'default',\n equals: {\n value: 1,\n path: 'platform._id'\n }\n\nplatform._idmustin", "text": "Hey @Vinicius_Carvalho1,Welcome to the MongoDB Community Forums! When I tried to reproduce your mappings on my end, I got an error sayingie. it does not accept int32 as a value. Hence, I would suggest you, either skip this field or edit the value.I edited the value to test this and my mapping looked like this:Used the sample search you providedand I got the sample document as the output.On the same subject, when creating the pipeline to search for the platform._id do I need to add one must entry for each platform selected by the user, or is there a in like operator?I suggest exploring compound operators with equals to try and achieve this.Please let us know if there’s anything else you need help with. Please feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Question on indexing nested documents
2023-03-14T01:54:08.689Z
Question on indexing nested documents
750
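On the open question in the Atlas Search thread above about an in-like filter over several platform._id values: one approach is a compound operator with one equals clause per value and minimumShouldMatch: 1, which behaves like an OR. A sketch assuming the corrected name_idx index; the collection name and id values are placeholders.

```js
db.games.aggregate([
  {
    $search: {
      index: "name_idx",
      compound: {
        should: [
          { equals: { path: "platform._id", value: 1 } },
          { equals: { path: "platform._id", value: 3 } },
          { equals: { path: "platform._id", value: 7 } }
        ],
        minimumShouldMatch: 1 // at least one of the platform ids must match
      }
    }
  }
]);
```

More recent Atlas Search releases also document an `in` operator that accepts an array of values, which may be worth checking as a shorter alternative.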
null
[ "aggregation", "queries", "python" ]
[ { "code": "", "text": "I have test database of documents with average document size is 32942 Bytes only.I am getting exceed memory error on below mentioned very simple query and small database (@ 600MB).\nCode:\nresultSet=collection.find({“cs.c_id”:{“$in”: list}})\nreccnt=0\nfor rec in resultSet:\nreccnt=reccnt+1\nError: pymongo.errors.DocumentTooLarge: BSON document too large (50853059 bytes) - the connected server supports BSON document sizes up to 16777216 bytes.Can anyone help me to resolve it?", "username": "Monika_Shah" }, { "code": "", "text": "In principal, only small enough documents are stored.But a query is also a JSON document. May be your query is simply too big. What is the size of your variable list? How many ids do you have in it?", "username": "steevej" }, { "code": "", "text": "Yes, you are right. Query parameter list may be high (approximate 10^6) .Thank you to help in cause identification.\nActually, it was result to avoid expensive Join operation using $lookup.It is in two part:\npart 1) identified all references from documents satisfying critieria,\nThese criteria are merged, then\npart 2) Identify documents of ID return by part 1What is solution for such case?", "username": "Monika_Shah" }, { "code": "", "text": "You are right. Query document may become large by large list.\nBut, It shows error for line ‘for rec in resultSet’ not for collection.find", "username": "Monika_Shah" }, { "code": "", "text": "it was result to avoid expensive Join operation using $lookup.So to avoid a $lookup entirely done with one access to the server by1 - doing a find that downloads the list of ids to lookup, join or find\n2 - doing a second access to the server by uploading the list of ids you got in step 1 to find the documentsSo you basically implement your own $lookup in a less efficient way using more access to the server, using more I/O between the client and the server and more CPU on the client which in principal is less powerful than the server.What is solution for such case?$lookupBut, It shows error for line ‘for rec in resultSet’ not for collection.findI do not know how accurate your python environment is terms of showing where is the error line but I suspect it is wrong. As far as I know, resultSet is a cursor so I am pretty confident that pymongo will return a valid cursor object. And each record is a document stored in the server, so I don’t see how any single rec from resultSet could be too big.", "username": "steevej" }, { "code": "", "text": "I think $lookup would be less efficient. You may correct it.Query here is to filter record from first collection. Collect references from Array from these documents. For this reference ID’s , search documents from second collection.Two collections are used here, which may be have different distribution on shards.\nSo, there will be much network I/O between shards to perform $lookup stage.On other side, two query are used to perform this task. First query is to find reference IDs from first collection by querying it. This process will be done in parallel to all applicable shards. Almost balanced workload . Only IDs are returned from first query. It will be used to filter second collection in balanced workload . 
No document transfer among cluster nodes will be required except result.", "username": "Monika_Shah" }, { "code": "", "text": "PyMongo’s find() cursor is executed lazily on the first iteration, this is why the exception is raised on the “for rec in resultSet” line.", "username": "Shane" }, { "code": "insert_one_result = temp_collection.insert_one( { \"cs.c_id\" : { \"$in\" : list } } )\n", "text": "I think $lookup would be less efficient. You may correct it.I could by I will not because I do not personally have the resources to test it and I will not use the resources of my customers to test it.To test if the BSON document too large error is the size of the query, like I think, versus the processing of the result set what you can do is to try insert the query (using the same list) into a temporary collection rather than calling find.So tryIf you get a DocumentTooLarge error then you will know that the query is too big. If it works with exactly the same list that generated the error with find then I have no clue.", "username": "steevej" }, { "code": "", "text": "Yes, query document is large. Thank you.", "username": "Monika_Shah" } ]
pymongo.errors.DocumentTooLarge: BSON document too large (50853059 bytes) - the connected server supports BSON document sizes up to 16777216 bytes
2023-03-13T13:38:58.034Z
pymongo.errors.DocumentTooLarge: BSON document too large (50853059 bytes) - the connected server supports BSON document sizes up to 16777216 bytes
2,893
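The root cause found in the thread above is that the query document itself (an $in list of roughly 10^6 ids) exceeds the 16 MB BSON limit, not the documents being returned. A common workaround is to split the id list into chunks and run several smaller queries; a sketch in mongosh-style JavaScript (the same chunking applies to the PyMongo find() in the thread), with the collection name, id array and chunk size as assumptions.

```js
// ids: a large array of c_id values that would blow past 16 MB in a single $in.
const CHUNK = 50000; // chunk size is a tunable assumption
let total = 0;

for (let i = 0; i < ids.length; i += CHUNK) {
  const batch = ids.slice(i, i + CHUNK);
  // Each query document now stays well under the 16 MB BSON limit.
  total += db.docs.countDocuments({ "cs.c_id": { $in: batch } });
}

print(`matched ${total} documents`);
```

The other option discussed above is to keep the whole join on the server with $lookup instead of shipping the id list back and forth.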
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "This option is generally only useful in combination with the \nprocessManagement.fork\n setting.\nLinux\nOn Linux, PID file management is generally the responsibility of your distro's init system: usually a service file in the /etc/init.d directory, or a systemd unit file registered with systemctl. Only use the \nprocessManagement.pidFilePath\n option if you are not using one of these init systems.\nWARNING\nIf you upgrade an existing instance of MongoDB to MongoDB 4.4.19, that instance may fail to start if fork: true is set in the mongod.conf file.\n\nThe upgrade issue affects all MongoDB instances that use .deb or .rpm installation packages. Installations that use the tarball (.tgz) release or other package types are not affected. For more information, see \nSERVER-74345.\n\nTo remove the fork: true setting, run these commands from a system terminal:\n\nsystemctl stop mongod.service\nsed -i.bak '/fork: true/d' /etc/mongod.conf\nsystemctl start mongod.service\n\nThe second systemctl command starts the upgraded instance after the setting is removed.\n", "text": "Hi All,I am a bit confused on MongoDB documentation.(MongoDB on LINUX)I am using a unit file customized for my purpose (not using built in mongod.service).\nBut when it comes to specifying processManagement.pidFilePath either in mongod.conf or in the unit file related to mongodb service(I want to start mongodb using systemd ) or in both.What is the significance of pidfile.\nIs it necessary/mandatory/compulsory to use it.could you please summarize the below:processManagement.pidFilePathThanks and Regards\nsatya", "username": "Satya_Oradba" }, { "code": "mongodmongod --forkmongod", "text": "Hi @Satya_Oradba welcome to the community!What is the significance of pidfile.\nIs it necessary/mandatory/compulsory to use it.In short, you’ll generally need it if you’re forking the mongod process using the mongod --fork option. If you don’t, then you can ignore this setting, as far as I know.If you need help with setting up the mongod server using systemd, there’s a Gist with configuration example provided by a community member that may be useful for you.If this doesn’t work for you, could you please provide more details on what you’re trying to achieve, your MongoDB version, and what you have tried but haven’t been working so far?Best regards\nKevin", "username": "kevinadi" }, { "code": "In short, you’ll generally need it if you’re forking the mongod process using the mongod --fork option. If you don’t, then you can ignore this setting, as far as I know.\n\nroot@ubuntu-002:~# su mongodb\nThis account is currently not available.\nroot@ubuntu-002:~#\nroot@ubuntu-002:~# cat /etc/passwd|grep mongodb\n***mongodb:x:113:65534::/home/mongodb:/usr/sbin/nologin***\nroot@ubuntu-002:~#\nroot@ubuntu-002:~#\n\n\nubuntu@ubuntu-002:~$ mongo\nMongoDB shell version v4.4.13\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n\n", "text": "Hi @kevinadi ,Thanks for the quick update.I am testing replica set migration from Windows to Ubuntu.\nI really want to know the need of pid file specification either in unit file or mongod.conf or both.But as you said :I need this confirmation to move forward with my migration. Now I can avoid PID file specification.\nThanks for the help.One last question is :On Linux when we install MongoDB using the default method specified in documentation.\nMongoDB on UbuntuIt creates ‘mongodb’ user and required DIR structure.\nBut the user ‘mongodb’ does not have a login or shell. 
Of course, Start , stop , status check , disable and enable of service is taken care by systemd.But if we want to login/connect to MongoDB using ‘mongo’ or ‘mongosh’ on local server where mongod is running to insert some documents, do we need to have ‘sudo’ privilege to ‘mongodb’ OS user or we need to use some other OS user with sudo privilege . Could you please let me know.Incase if we need to use some other OS user from which we connect to MongoDB on local server , does it not pose a risk or cause problem.If we are connecting to MongoDB using a OS user other than ‘mongodb’ then why the shell or login is disabled for ‘mongodb’ OS user. It could have been ‘mongodb’ too rather than some other OS user.But read and write permissions on MongoDB data folder are with ‘mongodb’ OS user. Then\nhow can we use any other OS user on the local server to connect to MongoDB.-rw------- 1 mongodb mongodb 21 Mar 14 16:02 WiredTiger.lock\n-rw------- 1 mongodb mongodb 50 Mar 14 16:02 WiredTiger\n-rw------- 1 mongodb mongodb 4096 Mar 14 16:02 WiredTigerHS.wt\ndrwx------ 2 mongodb mongodb 4096 Mar 14 16:02 journal\n-rw------- 1 mongodb mongodb 114 Mar 14 16:02 storage.bson\n-rw------- 1 mongodb mongodb 6 Mar 14 16:02 mongod.lock\ndrwx------ 2 mongodb mongodb 4096 Mar 14 16:02 admin\ndrwx------ 2 mongodb mongodb 4096 Mar 14 16:02 local\ndrwx------ 2 mongodb mongodb 4096 Mar 14 16:02 config\n-rw------- 1 mongodb mongodb 20480 Mar 14 16:03 _mdb_catalog.wt\n-rw------- 1 mongodb mongodb 20480 Mar 14 16:04 sizeStorer.wt\n-rw------- 1 mongodb mongodb 69632 Mar 14 16:05 WiredTiger.wt\n-rw------- 1 mongodb mongodb 1466 Mar 14 16:05 WiredTiger.turtle\ndrwx------ 2 mongodb mongodb 4096 Mar 14 16:06 diagnostic.dataCould you please let me know.Thanks and regards\nSatya", "username": "Satya_Oradba" }, { "code": "sudomongodb", "text": "do we need to have ‘sudo’ privilege to ‘mongodb’ OS user or we need to use some other OS user with sudo privilegeYou should not need to use sudo for things other than general server maintenance. In terms of connecting to MongoDB and securing it, you might want to have a look at Enable Access Control and the Security Checklist. The mongodb user and group are created for a specific reason, which is answered in your next question:If we are connecting to MongoDB using a OS user other than ‘mongodb’ then why the shell or login is disabled for ‘mongodb’ OS user. It could have been ‘mongodb’ too rather than some other OS user.This is a best practice with regard to daemon or server software. For more details, please see this question and answer on StackExchange: Why is it recommended to create a group and user for some applicationsIn short, that setup follows best practice for UNIX server/daemon software with regard to OS security. MongoDB security should be setup separately, and that involves setting up users and privileges in the database itself. This is the practice of other database servers installed in UNIX, not just MongoDB.Best regards\nKevin", "username": "kevinadi" } ]
What is the use of mongodb pid file and what is its relevance in regards to fork option
2023-03-12T10:18:20.971Z
What is the use of mongodb pid file and what is its relevance in regards to fork option
1,929
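On the second half of the pid-file thread above (connecting without the mongodb service account): any regular OS user can run mongosh against the local server, and access should instead be gated by MongoDB's own access control. A minimal sketch of the first step from the Security Checklist mentioned in the answer; the user name and role choice are placeholder assumptions.

```js
// Connect locally with mongosh (no sudo needed) and create the first admin user.
db.getSiblingDB("admin").createUser({
  user: "admin",                 // placeholder name
  pwd: passwordPrompt(),         // prompts instead of hard-coding the password
  roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
});

// Then enable authorization in /etc/mongod.conf and restart the service:
//   security:
//     authorization: enabled
//
// From that point on, connect as that user from any OS account:
//   mongosh -u admin -p --authenticationDatabase admin
```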
null
[ "queries", "golang" ]
[ { "code": "type A struct {\n Id primitive.ObjectID \n Random1 string\n Parents []B\n Random2 int\n}\n\ntype B struct {\n Id primitive.ObjectID \n Random3 string\n Children []C\n Random4 int\n}\n\ntype C struct {\n Random5 string\n Name Name\n Random6 int\n}\n\ntype Name struct {\n FirstName string\n LastName string\n}\nfilter1 := bson.M{\n\t\t\"parents.0.chilren.0.name\": bson.M{\n\t\t\t\"first_name\": \"Mike\",\n\t\t\t\"last_name\": \"Anderson\",\n\t\t},\n}\nfilter2 := bson.M{\n\t\t\"parents.0.chilren.0.name\": bson.D{\n\t\t\t{Key: \"first_name\", Value: \"Mike\"},\n\t\t\t{Key: \"last_name\", Value: \"Anderson\"},\n\t\t},\n}\n\nfilter3 := bson.M{\n\t\t\"parents.0.chilren.0.name.first_name\": \"Mike\",\n\t\t\"parents.0.chilren.0.name.last_name\": \"Anderson\",\n}\n", "text": "Hi,Need some help to figure out why nested bson.M doesn’t work occasionally.For the following Golang structs stored in a MongoDb collection for type A:The following filter for FindOne(), which uses two bson.M, worked in most situations but failed to find a match in about 10% runsThe following two filters alway work, where filter 2 uses bson.D inside bson.M, and filter 3 just uses one bson.MI found a similar question in https://jira.mongodb.org/browse/GODRIVER-877 but still don’t understand the differences or root cause. Thanks for the help!", "username": "Tianjun_Fu" }, { "code": "", "text": "Bump the thread. Hope to find an answer in the new year, thanks!", "username": "Tianjun_Fu" }, { "code": "Iteration Orderfilter1filter3filter3 := bson.M{\n\t\t\"parents.0.chilren.0.name.first_name\": \"Mike\",\n\t\t\"parents.0.chilren.0.name.last_name\": \"Anderson\",\n}\nbson.Dbson.Mfilter2filter2 := bson.M{\n\t\t\"parents.0.chilren.0.name\": bson.D{\n\t\t\t{Key: \"first_name\", Value: \"Mike\"},\n\t\t\t{Key: \"last_name\", Value: \"Anderson\"},\n\t\t},\n}\nbson.Dbson.M", "text": "Hi @Tianjun_Fu,Welcome to the MongoDB Community forums In Go, maps are intentionally non-deterministic. This is mentioned in the article Go maps in action specifically in the section Iteration Order where it is stated:When iterating over a map with a range loop, the iteration order is not specified and is not guaranteed to be the same from one iteration to the next.So, while querying in MongoDB, in most cases the order of keys does not matter, so we take advantage of the concise syntax of Go maps.In filter1 you are matching on a BSON document where field order matters. Instead, you should use the approach from filter3 which worked well for you. Also, it is a shorter and clearer filter declaration:Whereas when you use bson.D instead of bson.M, you will see deterministic behavior as you noted in filter2.Also as per the comment in the ticket - GODRIVER-877:The only two cases where the order is significant are for generic commands(where the first key has to be the command name) and index specifications(where the order determines the structure of the index), and in those cases, it’s recommended to use bson.D instead of bson.M.I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you for the reply, Kushagra. 
Can you clarify why field order matters in filter1 only but not filter3?", "username": "Tianjun_Fu" }, { "code": "filter1filter1 := bson.M{\n\t\t\"parents.0.chilren.0.name\": bson.M{\n\t\t\t\"first_name\": \"Mike\",\n\t\t\t\"last_name\": \"Anderson\",\n\t\t},\n}\nbson.M\"parents.0.chilren.0.name\"filter3\"parents.0.chilren.0.name.first_name\"\"parents.0.chilren.0.name.last_name\"replset [direct: primary] test> db.test.find()\n[\n { _id: 0, name: { first: 'aaa', last: 'bbb' } },\n { _id: 1, name: { last: 'bbb', first: 'aaa' } }\n]\n\nreplset [direct: primary] test> db.test.find({name:{first:'aaa',last:'bbb'}})\n[ { _id: 0, name: { first: 'aaa', last: 'bbb' } } ]\n\nreplset [direct: primary] test> db.test.find({name:{last:'bbb',first:'aaa'}})\n[ { _id: 1, name: { last: 'bbb', first: 'aaa' } } ]\n", "text": "Hi @Tianjun_Fu,Can you clarify why field order matters in filter1 only but not filter3?In filter1, the query looks like the following:Here if you note, the query is using nested bson.M. Specifically, in the \"parents.0.chilren.0.name\" field, the matching order is crucial and needs to be definite whereas in the filter3 it doesn’t matter which comes first because it directly points out to the specific keys which are \"parents.0.chilren.0.name.first_name\" and \"parents.0.chilren.0.name.last_name\"To explain it further, consider the following example:It’s worth noting that when you want to match the sub-document, field order does matter.I hope it answers your questions.Feel free to reach out if you have any further questions.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[Golang] FindOne() with two Bson.M failed for 10% attempts, Bson.M+Bson.D or a single Bson.M always work, why?
2022-12-09T19:51:10.001Z
[Golang] FindOne() with two Bson.M failed for 10% attempts, Bson.M+Bson.D or a single Bson.M always work, why?
2,054
null
[ "queries" ]
[ { "code": "", "text": "Does system check PlanCache before every query execution?\nIs there any other cache on which query performance is depending?", "username": "Monika_Shah" }, { "code": "", "text": "Hi @Monika_Shah,Does system check PlanCache before every query execution?MongoDB checks in the PlanCache to see if an optimal index has been chosen before or not for the given queryIf not, then itIs there any other cache on which query performance is depending?There are some aggregation stages such as $lookup and $graphLookup which have their own document caches for their internal pipelines.I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "ueryI am getting reduced execution time in simple Range query using find operation.", "username": "Monika_Shah" }, { "code": "\"reduced execution time\"explain('executionStats')", "text": "Hi @Monika_Shah,I am getting reduced execution time in simple Range query using find operation.Can you provide more details about what you mean by \"reduced execution time\" in this context? It seems like you may be comparing two queries, one of which has a faster execution time than the other. Could you please provide the following information:to better understand the requirements of the question.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
How cache is used in query execution?
2023-02-07T08:11:36.810Z
How cache is used in query execution?
746
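As a follow-up to the plan-cache question above, the cache contents can be inspected directly, which shows whether a query shape already has a cached plan before it runs again. The collection name below is an assumption.

```js
// List every cached plan for the collection, including queryHash/planCacheKey
// and whether the entry is active.
db.orders.aggregate([{ $planCacheStats: {} }]);

// Equivalent shell helper:
db.orders.getPlanCache().list();

// Clear the cache for this collection only:
db.runCommand({ planCacheClear: "orders" });
```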
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "const branchSchema = mongoose.Schema(\n {\n office: [\n {\n ...multiple office detail\n },\n ],\n },\n { timestamps: true }\n)\n\nconst Branch = mongoose.model(\"Branch\", branchSchema)\nexport default Branch\nconst companySchema = mongoose.Schema(\n {\n office: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Branch.office\",\n },\n status: {\n type: String,\n required: true,\n default: \"active\",\n },\n createdBy: {\n type: mongoose.Schema.Types.ObjectId,\n required: true,\n ref: \"User\",\n },\n },\n { timestamps: true },\n)\n\nconst Company = mongoose.model(\"Company\", companySchema)\nexport default Company\nconst offices = await Company.find().populate(\"office\")\nERROR I GET: \n{\n \"error\": \"failed\",\n \"message\": \"MissingSchemaError: Schema hasn't been registered for model \\\"Branch.office\\\".\\nUse \n mongoose.model(name, schema)\"\n}\n", "text": "Hi, am new here.\nwhat i am trying to do is to reference a sub-document and then populate on the .find() function.My branch schemaMy company schemaQuery and error:", "username": "Rickal_Hamilton" }, { "code": "ERROR I GET: \n{\n \"error\": \"failed\",\n \"message\": \"MissingSchemaError: Schema hasn't been registered for model \\\"Branch.office\\\".\\nUse \n mongoose.model(name, schema)\"\n}\nofficecompanySchemacompanySchemarefofficeofficebranchSchemaBranchofficeoffice: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Branch\",\n},\nofficeBranchBranchCompanyimport Branch from \"./path_to_branchModel\"\n", "text": "Hi @Rickal_Hamilton,Welcome to the MongoDB Community forums This error is occurring because the schema for the office field in your companySchema has not been registered with the mongoose model. In your companySchema, you have set the ref field of the office property to “Branch.office”. This is not the correct way to reference the office field in the branchSchema. Instead, you should reference the Branch model and its office field as follows:This will tell Mongoose to populate the office field with the data from the Branch model.To read more about it refer to the Mongoose documentation link.Also, make sure that you have imported the Branch model into the file where your Company model is defined. You can do this by adding the following line at the top of your file:I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": " Branch{offices:[\n{....1000 office sub doc}]\n\n", "text": "i know of that way but that returns the big array of offices stored in branch i don’t want the entire array of offices i am trying to get to a single office by referencing that branch _id Branch.office is there no way to do this? what if i have 1000 office stored in the array of offices—>", "username": "Rickal_Hamilton" }, { "code": "mongoose.Schema(\n {\n office: [\n {\n ...multiple office detail\n },\nofficebranchoffice: {\n type: mongoose.Schema.Types.ObjectId,\n ref: \"Branch\",\n},\nrefbranchbranch _idCompany.find({ office: { $elemMatch: { _id: 'abc123' } } })\n .populate(\"office\") \n .exec((err, offices) => {\n if (err) {\n console.log(err);\n return;\n }\n console.log(offices);\n });\n$elemMatch_id", "text": "Hi @Rickal_Hamilton,Here office is a field as per your branch model which contains an array of sub-documents.The ref option is what tells Mongoose which model to use during population. So, the model is branch which will be used to populate the office field. 
To read more refer to this link.I am trying to get to a single office by referencing that branch _idIf you want to populate the sub-document from the large array set by referencing that branch _id, you can do as following:Here the $elemMatch operator is used to search for the _id of the branch you wish to populate.Please note this is not the tested code, it’s just for your reference please test this as per your requirement.I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Reference sub document and then populate that subdocument on .find()
2023-03-06T00:57:36.013Z
Reference sub document and then populate that subdocument on .find()
2,906
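For the open question in the populate thread above (returning one office out of a Branch's embedded array instead of all 1000): since the offices are embedded subdocuments rather than separate documents, a positional projection on the Branch model avoids pulling the whole array back. A sketch, assuming each office subdocument carries its own `_id`; the id variable is a placeholder.

```js
// Find the branch that contains the office and project only the matching
// array element using the positional $ operator.
const result = await Branch.findOne(
  { "office._id": officeId },   // officeId is a placeholder ObjectId
  { "office.$": 1 }             // return just the matched office subdocument
);

// result.office is an array with exactly one element here.
const office = result ? result.office[0] : null;
```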
null
[]
[ { "code": "", "text": "I’m evaluating MongoDB change stream or trigger for supporting auditing. Any changes to the database MUST be logged into an audit history collection. How reliable is it if using change streams or Atlas triggers to log changes to the audit history? In what circumstances it may cause data loss such that committed data changes would not able to be logged?", "username": "Linda_Peng" }, { "code": "", "text": "Hi @Linda_PengIn terms of change stream, it should be very reliable. It works using the oplog as a data source.In terms of Atlas triggers, it also should fire when specified.In my opinion, I would encourage you to test both solutions thoroughly with your workload. If you find that there are missing events, you should definitely report it as a possible bug.Best regards\nKevin", "username": "kevinadi" }, { "code": "resumeAfterstartAfterdb.collection.watch()", "text": "If the operation identified by the resume token passed to the resumeAfter or startAfter option has already dropped off the oplog, db.collection.watch() cannot resume the change stream.So it is theoretically possible to lose events since opLogs are not persisted forever. For example, your change stream connection is broken and before you get a chance to reestablish it, the oplog is gone in the too busy server.", "username": "Kobe_W" }, { "code": "", "text": "So it is theoretically possible to lose events since opLogs are not persisted forever.Correct, but this is not the issue of change stream or Atlas triggers. This basically means that the oplog was not sized correctly for the workload, or if it does, there was a catastrophic issue that went unresolved for a long time, enough time until the oplog gets rolled over. At this point, the whole replica set would have issues, not only the change stream Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "that’s a fair enough point.But I would personally prefer ingesting the change stream event to a message queue (e.g. kafka) and set up consumers on the other end of the queue. (and look like mongodb already has its support)By doing this, the event will not be lost before the application processes it, and the “change stream traffic” will never swamp the observer side (imagine there are too many events and the simple change stream cursor from mongo shell is not able to catch up with it)", "username": "Kobe_W" }, { "code": "", "text": "Thank you all for the great suggestions!", "username": "Linda_Peng" } ]
How reliable is the MongoDB change stream for auditing purpose?
2023-03-09T15:30:12.165Z
How reliable is the MongoDB change stream for auditing purpose?
824
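To make the reliability discussion above concrete: an audit consumer built on change streams usually persists the resume token after each event it has written, so that after a restart it can continue from the same point (as long as that point is still in the oplog). A hedged Node.js sketch; database, collection and field names are assumptions.

```js
const { MongoClient } = require("mongodb");

async function runAuditTail(uri) {
  const client = new MongoClient(uri);
  await client.connect();

  const db = client.db("test");
  const source = db.collection("tpayer");
  const audit = db.collection("audit_history");
  const state = db.collection("audit_state");

  // Resume from the last token we managed to persist, if any.
  const saved = await state.findOne({ _id: "tpayer" });
  const options = saved ? { resumeAfter: saved.token } : {};

  const stream = source.watch([], options);
  for await (const event of stream) {
    // Persist the token only after the audit write, so a crash in between
    // re-processes the event rather than skipping it.
    await audit.insertOne({ op: event.operationType, event });
    await state.updateOne(
      { _id: "tpayer" },
      { $set: { token: event._id } },
      { upsert: true }
    );
  }
}
```

As noted above, this still depends on the oplog being sized so that the saved token has not rolled off during an outage.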
null
[]
[ { "code": "", "text": "I’m trying to create a trigger in MongoDB Atlas M0, but can’t link data source for it to work. Does MongoDB Atlas M0 supports trigger creation?", "username": "Linda_Peng" }, { "code": "", "text": "Hi @Linda_Peng - Welcome to the community!You should be able to create triggers and link them with a M0 tier cluster data source. Are you trying to create a Database trigger?If so, what is the issue you are encountering when trying to link the data source? (i.e. No options, greyed out dropdown menus, error messages?)Fwiw my own M0 test environment has 2 database triggers associated with it.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "When I try to link the data source, it is sort of greyed out with a red circle to prevent the linking. In the next section of trigger source details, I can select the cluster though. I don’t know if this is supposed behavior where link button doesn’t link. I’m not seeing expected behavior after defining the function, so I’m not sure if it’s linking issue preventing the trigger working or code issue itself.", "username": "Linda_Peng" }, { "code": "", "text": "I checked the trigger status is enabled so it maybe created already without the link button working. After I define the function and click ‘run’, it generated following output:result:\n{\n“$undefined”: true\n}\nresult (JavaScript):\nEJSON.parse(‘{“$undefined”:true}’)What does that mean? The trigger is not working as expected. So the above output is saying something wrong with the function body.", "username": "Linda_Peng" }, { "code": "const fullDoc = changeEvent.fullDocument;\nconst updateDescription = changeEvent.updateDescription;\n\nconst mongodb = context.services.get(\"Cluster20434\");\nconst db = mongodb.db(\"test\");\n\nif (changeEvent.operationType == \"insert\") {\n db.collection(\"history_col\").insertOne({\"operation\": changeEvent.operationType, \"Full Document\": fullDoc})\n .then(result => console.log(\"Inserted\"));\n return result;\n}\n", "text": "Here is the trigger function code, where I expect to see record inserted into history_col after I do some inserts to tpayer collection. But nothing inserted to history_col after the inserts.exports = function(changeEvent) {};", "username": "Linda_Peng" }, { "code": "Link Data Source(s)\"Inserted\" const fullDoc = changeEvent.fullDocument;\n const updateDescription = changeEvent.updateDescription;\n\n const mongodb = context.services.get(\"Cluster0\");\n const db = mongodb.db(\"test\");\n\n if (changeEvent.operationType == \"insert\") {\n db.collection(\"history_col\").insertOne({\"operation\": changeEvent.operationType, \"Full Document\": fullDoc})\n .then(result => console.log(\"Inserted\"));\n }\nreturn result\"Cluster20434\"\"Cluster0\"\"insert\"", "text": "When I try to link the data source, it is sort of greyed out with a red circle to prevent the linking. In the next section of trigger source details, I can select the cluster though. I don’t know if this is supposed behavior where link button doesn’t link.\nimage2436×398 56.6 KB\nCan you go into the trigger itself and see if a data source is already linked to it? They generally appear at the top of the Link Data Source(s) dropdown menu as shown above.What does that mean? The trigger is not working as expected. 
So the above output is saying something wrong with the function body.I did brief testing only but managed to get \"Inserted\" logged with the following function:Note: I removed the return result line and changed \"Cluster20434\" to \"Cluster0\" to match my test environmentI think if you’re running it with the “Run” button as you say, no documents are being inserted so the trigger is running with no document to work against. I tested by inserting 2 documents (with the Operation Type set to \"insert\" for this trigger) and it recorded the following the trigger logs:\nimage1292×788 47.4 KB\nNote: I inserted a total of 2 times, 1 document each time.", "username": "Jason_Tran" }, { "code": "\"test.history_col\"test> db.history_col.find()\n[\n {\n _id: ObjectId(\"6407e5a2e4f987c6ca2b010d\"),\n operation: 'insert',\n 'Full Document': { _id: ObjectId(\"6407e5a0f3015b4b893b868a\"), a: 1 }\n },\n {\n _id: ObjectId(\"6407e5a3e4f987c6ca2b02a5\"),\n operation: 'insert',\n 'Full Document': {\n _id: ObjectId(\"6407e5a2f3015b4b893b868b\"),\n a: 'this is a test string for inserting a document'\n }\n }\n]\n", "text": "This is the contents of the \"test.history_col\" namespace after the 2 inserts:", "username": "Jason_Tran" }, { "code": "", "text": "Jason,\nThank you so much! I removed “return result;” this time and saved the trigger. Then inserted records into the collection and found a row gets inserted into history_col.So the linked data source is actually already linked since it appears on the top already not at the link button line. Wondering why “return result;” would cause trigger not working as expected?", "username": "Linda_Peng" }, { "code": "return result;resultresultonFulfilled.then(...)", "text": "Wondering why “return result;” would cause trigger not working as expected?At the level where you have return result; I’m pretty sure result is an undefined variable. (I know JS only extremely vaguely).It looks like you are using result as a parameter name inside the then which makes it receive the value onFulfilled but only inside the scope of .then(...) and doesn’t push a declaration up to a higher level.", "username": "Andy_Dent" }, { "code": "", "text": "Thanks Andy! I’m not familiar with JS either ", "username": "Linda_Peng" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does free tier MongoDB Atlas M0 support trigger creation?
2023-03-07T19:17:58.563Z
Does free tier MongoDB Atlas M0 support trigger creation?
920
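A small follow-up to the trigger function discussed above: the same logic reads more safely with async/await, returning only after the insert completes, which also avoids the out-of-scope result variable that caused the confusion. The linked service name Cluster0 and the namespace are assumptions.

```js
exports = async function (changeEvent) {
  // Service name is a placeholder for whatever data source the trigger links.
  const db = context.services.get("Cluster0").db("test");

  if (changeEvent.operationType === "insert") {
    const result = await db.collection("history_col").insertOne({
      operation: changeEvent.operationType,
      fullDocument: changeEvent.fullDocument,
    });
    console.log(`Inserted audit row ${result.insertedId}`);
    return result.insertedId;
  }
};
```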
null
[]
[ { "code": "\nSystem: CPU (User) % has gone above 95\n\nEnsure no index is missing and scale up. Please navigate to the [System CPU metrics page]) to see usage details.\n", "text": "I have a cluster on Mongo Atlas, and I constantly receive these alerts:Checking this CPU usage I reviewed the metrics about System CPU, Normalized Process CPU, and Normalized System CPU but none of those has usage over 80% (I did zoom on on the metrics for the day that I received those alerts).¿Where can I find information about the 95% overconsumption?\n¿Is there a way to see which query or process Is consuming this 95%?Thanks in advance.\nRegards,\nVíctor.", "username": "Victor_Merino" }, { "code": "", "text": "Hello @Victor_Merino ,Welcome to The MongoDB Community Forums! ¿Where can I find information about the 95% overconsumption?It might be possible by checking the timestamp of the alerts you received and check your metrics around the same timestamp.¿Is there a way to see which query or process Is consuming this 95%?I would recommend you to check the logs around the timestamp of the alert. There could be a slow query or similar alert that could be a starting point for the investigation.You can use the MongoDB Atlas Performance Advisor (Only available on M10+ clusters and serverless instances). This tool provides detailed analysis and recommendations for improving the performance of your cluster.In addition to using the Performance Advisor, you can also run the db.currentOp() command in the MongoDB shell to view information about currently running operations. This can help you identify any long-running queries or processes that may be contributing to high CPU usage.Finally, if you are unable to identify the root cause of the high CPU usage, you may want to consider scaling up your cluster. Adding more resources, such as additional CPU cores or memory, can help alleviate performance issues caused by high CPU usage.Lastly, I would advise you to bring this up with the Atlas chat support team. They may be able to check if anything on the Atlas side could have possibly caused this broken pipe message. In saying so, if a chat support is raised, please provide them with the following:Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hi, @Tarun_GaurFirst of all, thanks for the response. I review the metrics again, and I don’t know if I missed something the first time, but now I can see the metrics and the logs correctly. Finally, we detect the query that uses most of the CPU and it is possible to optimize, so we will do it.Thanks for your time, I really appreciate it.Regards,\nVíctor", "username": "Victor_Merino" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
About the metrics and alerts
2023-03-07T20:19:15.439Z
About the metrics and alerts
606
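To complement the advice above on tracking down the query behind a CPU alert: currentOp can be filtered for long-running active operations straight from mongosh. The 5-second threshold below is an arbitrary assumption.

```js
// Show active operations that have been running for more than 5 seconds.
db.currentOp({
  active: true,
  secs_running: { $gt: 5 }
});

// The same information is available as an aggregation stage, which allows
// further filtering and projection:
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: false } },
  { $match: { active: true, secs_running: { $gt: 5 } } },
  { $project: { op: 1, ns: 1, secs_running: 1, command: 1 } }
]);
```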
null
[ "aggregation", "queries", "crud" ]
[ { "code": "{\n \"_id\": \"1234\",\n \"created_at\": 1678787680\n}\nupdated_atcreated_at{\n \"_id\": \"1234\",\n \"created_at\": 1678787680,\n \"updated_at\": \"2023-03-14 15:39:18.767232\"\n}\nupdateMany", "text": "I have a document as follows:I want to modify the document and add a new key updated_at which will be a datetime equivalent of the created_at UNIX timestamp.Is there a way to perform this operation using updateMany?", "username": "Anuj_Panchal1" }, { "code": "db.test.updateMany(\n {},\n [\n {\n $set: { updated_at: { $toDate: { $multiply: ['$created_at', 1000] } } }\n }\n ]\n);\n1000{\n \"_id\": \"1234\",\n \"created_at\": 1678787680,\n \"updated_at\": 2023-03-14T09:54:40.000+00:00\n}\n", "text": "Hey @Anuj_Panchal1,Welcome to the MongoDB Community forums You can use the following aggregation pipeline to do so:Here I’ve used $toDate to convert a value to a date by multiplying it with 1000 to convert it into milliseconds.Here UNIX timestamp is the the number of seconds between a particular date and the Unix Epoch on January 1st, 1970 at UTCSo, to convert it to BSON Date we have to multiply by 1000 because in MongoDB - “Date is a 64-bit integer that represents the number of milliseconds since the Unix epoch on January 1st, 1970 at UTC”After executing, it will return the following output:I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "You can use update query with $set operator to add new key.\ncollection.update(\n{ ‘_id’ : “1234”},\n{ “$set” : { “updated_at”: “2023-03-14 15:39:18.767232”} } )", "username": "Monika_Shah" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to add a key in a mongo document on the basis of an existing key?
2023-03-14T10:25:28.474Z
How to add a key in a mongo document on the basis of an existing key?
1,547
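One small refinement to the updateMany pipeline shown in the thread above: filtering on documents that do not yet have the field makes the backfill safe to re-run. The collection name is an assumption.

```js
db.test.updateMany(
  { updated_at: { $exists: false } }, // only touch documents not yet backfilled
  [
    { $set: { updated_at: { $toDate: { $multiply: ["$created_at", 1000] } } } }
  ]
);
```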
null
[ "queries" ]
[ { "code": "", "text": "Why MongoDB logs show PlanCacheKey after clearing PlanCache using db.commaond(“PlanCacheClear”:collectionnanme)?Is there any way to stop caching query plans?", "username": "Monika_Shah" }, { "code": "PlanCacheKeyPlanCacheKeyPlanCacheClear// running the plan cache list returns a list of cached plans\ndb.orders.getPlanCache().list()\n[\n {\n version: '1',\n queryHash: '8545567D',\n planCacheKey: 'F66BA4BD',\n isActive: false,\n works: Long(\"4\"),\n timeOfCreation: ISODate(\"2023-02-24T16:55:47.486Z\"),\n createdFromQuery: { query: { item: 'abc' }, sort: {}, projection: {} },\n cachedPlan: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { item: 1 },\n//redacted the rest of the list for readability\n\n// executed the planCacheClear command\nEnterprise replset [direct: primary] test> db.runCommand( { planCacheClear: \"orders\" })\n{\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1677258251, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1677258251, i: 1 })\n}\n\n// checking the plan cache list again to verify and it returns an empty\n// array indicating that the plan cache is cleared\nEnterprise replset [direct: primary] test> db.orders.getPlanCache().list()\n[]\n", "text": "Hello,In the log, it’s expected to always see a PlanCacheKey as the query is executed already and even if the cache was cleared, since the query is executed a plan cache key is generated.For example if I run a query for the first time in explain mode, I will notice a PlanCacheKey is created for that particular query shape.If you want to verify if the PlanCacheClear command successfully cleared cached plans, you can list the current plans for a collection using PlanCache.list().To demonstrate this idea, please check the below test:For more information about PlanCache.list(), please check this documentation link.Regards,\nMohamed Elshafey", "username": "Mohamed_Elshafey" }, { "code": "", "text": "I agree that PlanCacheKey is key of new entry to PlanCache .Then, what would be reason of less execution time when the query is executed again after clearing PlanCache ?", "username": "Monika_Shah" } ]
Why log shows PlanCacheKey even after clearing PlanCache
2023-02-23T05:15:13.769Z
Why log shows PlanCacheKey even after clearing PlanCache
524
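On the open question above about why the query is faster on the second run even after planCacheClear: the plan cache stores only the winning plan, not data, so the speed-up usually comes from the WiredTiger and OS caches being warm. A quick way to observe this from mongosh; the collection name, filter and timing approach are assumptions.

```js
// Clear the plan cache for the collection first.
db.runCommand({ planCacheClear: "mycoll" });

// Time the same query twice; the second run is typically faster because the
// touched data and index pages are now in the WiredTiger and OS caches.
for (let run = 1; run <= 2; run++) {
  const start = Date.now();
  db.mycoll.find({ value: { $gte: 100, $lt: 200 } }).toArray();
  print(`run ${run}: ${Date.now() - start} ms`);
}
```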
null
[ "production", "cxx" ]
[ { "code": "", "text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.7.1.Please note that this version of mongocxx requires MongoDB C Driver 1.22.1 or higher.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.NOTE: The mongocxx 3.7.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team", "username": "Kevin_Albertson" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB C++11 Driver 3.7.1 Released
2023-03-14T18:12:31.543Z
MongoDB C++11 Driver 3.7.1 Released
974
null
[ "containers", "atlas", "field-encryption" ]
[ { "code": "", "text": "I have gotten the sample code here mongodb-university/docs-in-use-encryption-examples/blob/main/queryable-encryption/go/local/reader to run on macOS (using libmongocrypt installed with brew, and the shared_lib). No issues there.I’m now trying to run the same code in an amd64/debian:bullseye container connecting to an atlas cluster (M10, version 6.0.4).It compiles, but hangs when it reaches this line: docs-in-use-encryption-examples/make-data-key.go at 8d823bba56e62a429b330072a6b364bc0e097cf1 · mongodb-university/docs-in-use-encryption-examples · GitHub\nI’ve tried setting timeouts in the client options but it just hangs until I stop the container, never returning any error or printing anything. The prior db commands are successful and I can see the data keys created in atlas.Adding in prints myself, I can see that this is the last line that runs: mongo-go-driver/client.go at b504c38406a5e7c45e1f77a8d2d9e938358cc695 · mongodb/mongo-go-driver · GitHub before the application hangs.Can anyone see what I might be doing wrong, or provide methods to debug?", "username": "danny_fry" }, { "code": "", "text": "@danny_fry The issue seems to be that the Go application is hanging when it reaches a certain line of code while trying to connect to an Atlas cluster. The last line that runs successfully has been identified, but no error or output is being returned.\nSome possible issues could be:Some suggested methods to debug the issue could be:", "username": "Deepak_Kumar16" }, { "code": ".SetBypassQueryAnalysis(true)", "text": "I resolved this issue by adding .SetBypassQueryAnalysis(true) to the options in this line: docs-in-use-encryption-examples/make-data-key.go at 8d823bba56e62a429b330072a6b364bc0e097cf1 · mongodb-university/docs-in-use-encryption-examples · GitHub but I didn’t look more closely at what the problem was or why this fixed it.", "username": "danny_fry" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Go app in Debian container hangs when connecting to Atlas with automatic encryption
2023-03-11T00:35:15.208Z
Go app in Debian container hangs when connecting to Atlas with automatic encryption
902