Dataset columns: image_url (string, 113–131 chars), tags (sequence), discussion (list), title (string, 8–254 chars), created_at (string, 24 chars), fancy_title (string, 8–396 chars), views (int64, 73–422k). Each record below lists its fields in this order.
null
[ "crud" ]
[ { "code": "", "text": "Hi All,I am learning MongoDB had have a very basic question.What is the difference between the following Collection methods, since they appear to do the same things:\ndb.collection.insert() vs. db.collection.insertMany()\ndb.collection.update() vs. db.collection.updateMany()Thanks,\nDave", "username": "David_Geyer" }, { "code": "updateupdateOneupdateManyinsertinsertOneinsertManyinsertOneinsertManyinsertupdateinsert", "text": "Hello @David_Geyer,Good to know about learning data operations on MongoDB! Here is some info.The update method is from older versions of MongoDB (for example, in v2.2 or earlier; and current version is 4.4), and by default it updates one document only; the first document that matches the query filter. There is an option to specify that the update can happen in multiple documents matching the query filter. The updateOne and updateMany methods are introduced in MongoDB v3.2 and their method names clearly state their function.Similar is the case with the insert, insertOne and insertMany methods. With insertOne you can insert one document into the collection. insertMany accepts an array of documents and these are inserted. The insert (the method from older versions) takes a single document by default, and there is an option to insert multiple documents supplied as an array.Note that the update and insert methods also have the newer features, and can be used as you like.You can refer (syntax and examples) the above methods in the MongoDB database server documentation for the latest as well as the older (a.k.a. legacy) versions at:", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks Prasad!Regarding the 4.4 Manual, I have been relying heavily on it, but it is not very user friendly. Do you have any suggestions regarding books, tutorials, etc. that would help to make the learning process easier?Thanks,\nDave", "username": "David_Geyer" }, { "code": "", "text": "Hi @David_Geyer,For the first time users the manual can be little overwhelming - the size, the number of topics and the product features. After little bit of exposure it will be alright (I feel the documentation is quite comprehensive).Another good way to learn is from the MongoDB University - the courses are free and online video based. There are basic / entry level courses one can start with. In addition, on the top of this page there are links to various resources - tutorials, webinars, blog posts, etc., you can benefit from. That said, one of the things I find useful in learning is by trying the code examples and studying posts/questions (for example, on this forum).", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Insert vs. InsertMany and Update vs. UpdateMany
2021-06-11T16:01:06.315Z
Insert vs. InsertMany and Update vs. UpdateMany
11,385
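The thread above contrasts the legacy insert()/update() helpers with the newer insertOne/insertMany/updateOne/updateMany methods. As a rough illustration (not taken from the thread itself), here is how the newer methods map onto PyMongo; the connection string, database, collection and field names are invented for the example.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
coll = client["test"]["people"]                    # hypothetical collection

# insert_one: a single document
coll.insert_one({"name": "Dave", "status": "learning"})

# insert_many: an array of documents in one round trip
coll.insert_many([{"name": "A"}, {"name": "B"}])

# update_one: updates only the first document matching the filter
coll.update_one({"name": "Dave"}, {"$set": {"status": "active"}})

# update_many: updates every document matching the filter
coll.update_many({"status": "learning"}, {"$set": {"status": "active"}})
```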
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "", "text": "How can I find out what version of MongoDB is used on MongoDB Atlas?https://docs.mongodb.commanual/", "username": "Ping_Pong" }, { "code": "", "text": "Hi Ping, the version is available on the cluster card within the Atlas UICheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Thanks.", "username": "Ping_Pong" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What version of MongoDB is used on MongoDB Atlas?
2021-06-11T19:50:53.318Z
What version of MongoDB is used on MongoDB Atlas?
3,454
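The accepted answer points to the cluster card in the Atlas UI. As a small aside not mentioned in the thread, the server version can also be read from any driver; for example, in PyMongo (the URI is a placeholder):

```python
from pymongo import MongoClient

client = MongoClient("<your Atlas connection string>")  # placeholder URI
print(client.server_info()["version"])                  # e.g. "4.4.x"
```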
https://www.mongodb.com/…4_2_1024x512.png
[ "atlas", "monitoring" ]
[ { "code": "MongoDB Enterprise atlas-9n69k4-shard-0:PRIMARY> use admin\nswitched to db admin\nMongoDB Enterprise atlas-9n69k4-shard-0:PRIMARY> db.createRole({\n... role: \"listCollections\",\n... privileges: [{\n... resource: {db:\"\",collection:\"\"},\n... actions: [\"listCollections\"]\n... }],\n... roles: []\n... })\n2020-06-02T15:40:05.859+0000 E QUERY [js] uncaught exception: Error: not authorized on admin to execute command { createRole: \"listCollections\", privileges: [ { resource: { db: \"\", collection: \"\" }, actions: [ \"listCollections\" ] } ], roles: [], writeConcern: { w: \"majority\", wtimeout: 600000.0 }, lsid: { id: UUID(\"b90ffac4-046f-4e4a-b13e-e1de65370a15\") }, $clusterTime: { clusterTime: Timestamp(1591112384, 4), signature: { hash: BinData(0, 48EA20F0F871C4EC7887C3BDE628F8BA00400601), keyId: 6807798656247267329 } }, $db: \"admin\" } :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.createRole@src/mongo/shell/db.js:1654:15\n@(shell):1:1\n", "text": "I have configured 3rd-party service integration with New Relic with help of this documentation:Set ID and keys, test connections works ok, but I can’t find any data from Atlas in New Relic.\nMy cluster tier is M20 type.\nAlso there is document from NR:\nhttps://docs.newrelic.com/docs/integrations/host-integrations/host-integrations-list/mongodb-monitoring-integration\nwhich has a reference of Atlas:Our integration is compatible with MongoDB v3.0+. MongoDB Atlas is supported for tiers M10 and above.so I suppose that this dos should work for Atlas, but instructions doesn’t work. They advice to create new role listCollections via MongoDB shell, but I can’t create it with because of error:Any advices?", "username": "Roman_Tkachenko" }, { "code": "", "text": "ongoDB Atlas is supported for tiers M10 and above.Getting the same issue… Has support ever responded to you about this?", "username": "Alexander_Janckila" }, { "code": "", "text": "Hi @Alexander_Janckila ,I cannot speak to why this issue arose 1 year ago, but I can confirm that New Relic is phasing out Plugins support, which is how MongoDB’s integration with New Relic works. This transition has rendered some users unable to view data in New Relic, and the existing MongoDB integration with New Relic will be deprecated on June 16th, 2021. Users of this integration have been notified of its deprecation via email.I would recommend transitioning to New Relic’s remote-agent based integration with MongoDB, which is actually linked earlier in this thread. This integration is maintained by New Relic, and acts as an alternative to MongoDB’s existing integration. Please ensure that you transition to a new monitoring solution before the June 16th date.–Julia Oppenheim, Product Manager", "username": "Julia_Oppenheim" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Integration with New Relic
2020-06-03T09:51:28.414Z
Integration with New Relic
6,027
null
[ "queries", "python" ]
[ { "code": "query = {'$and': [{\"status_derived\" : \"approved\"},{\"gss_code\":\"/^E/\"}]}", "text": "Using a simple query, where I filter by a specific value and look for text that start with E. The query works within robo 3t but doesn’t in pymongo, I know it’s due to “/^E/” but why is that?\nWhat’s a potential work around :\nquery = {'$and': [{\"status_derived\" : \"approved\"},{\"gss_code\":\"/^E/\"}]}Any suggestions much appreciated ", "username": "Edward_Burroughes" }, { "code": "", "text": "You have to use the $regex operator. See https://docs.mongodb.com/manual/reference/operator/query/regex/", "username": "Bernie_Hackett" } ]
Query regex issue in Pymongo
2021-06-11T16:58:30.433Z
Query regex issue in Pymongo
2,090
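Following Bernie's pointer to the $regex operator, here is a minimal PyMongo sketch of how the filter from the question could be rewritten; field names come from the question, the collection name is invented, and folding both conditions into one filter document is equivalent to the explicit $and in the original query.

```python
import re
from pymongo import MongoClient

coll = MongoClient()["test"]["mycollection"]  # hypothetical collection

# Option 1: the $regex query operator
query = {"status_derived": "approved", "gss_code": {"$regex": "^E"}}

# Option 2: a compiled Python pattern, which PyMongo encodes as a BSON regex
query = {"status_derived": "approved", "gss_code": re.compile(r"^E")}

docs = list(coll.find(query))
```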
https://www.mongodb.com/…b617bfb0d9a8.png
[]
[ { "code": "", "text": "Is it possible to change from Dark mode to White mode? Because Dark mode is very hard on my eyes.\nimage929×557 34.6 KB\n", "username": "Ping_Pong" }, { "code": "", "text": "Yes it is available\nWhen you click on Dark Mode drop down list you will see light mode\nChoose it and save changes", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Great, thanks.I clicked it, I could not see it. This is how hard the Dark mode caused to my eyes.", "username": "Ping_Pong" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Community UI theme: Is White mode available
2021-06-11T14:46:32.137Z
Community UI theme: Is White mode available
4,067
null
[]
[ { "code": "", "text": "Hello all,I have a small application that I would like to add a payment feature to. So far I am hosting all app components on Atlas as it is all nicely in one place etc.I would like to use Stripe as the payment provider. Stripe however requires that some code is run on your own backend for security reasons. They offer certain libraries to do that - but it seems like none of them are really usable out of the box on MongoDB realm / atlas. For example, there is a node.js library - but function dependencies are still in beta, there is a package limit of 10MB so that really cant be used. Looking at their example I would have though I could just implement a few webhooks that my frontent app can call, the webhooks would communicate with Stripe APIs to e.g. create a payment intent etc. I can however find no examples on the web either.Has anyone on here have any experience of integrating stripe with MongoDB Realm without any additional infrastructure? … any pointers welcome!Cheers,\nSteve", "username": "stephan_uk" }, { "code": "", "text": "Hi @stephan_uk , welcome to the community forum!It’s not been touched in a couple of years, but I did create a sample e-commerce app where the backend Realm app used the Stripe API: GitHub - mongodb-appeng/eCommerce-Realm: The backend portion of the MongoDB eCommerce reference app", "username": "Andrew_Morgan" } ]
MongoDB realm and Stripe integration
2021-06-11T08:35:36.640Z
MongoDB realm and Stripe integration
3,405
null
[ "sharding" ]
[ { "code": "", "text": "Hi mongodb guru out there:I have a scenerio related to tag sharding that need some assistance. Or maybe there are some better ways to handle what we wants.For example, our mongo nodes configuration has following setup:Data nodes:US replica set: NODE_US_A, NODE_US_B, NODE_US_CEU replica set: NODE_EU_A, NODE_EU_B, NODE_EU_CAsia replica set: NODE_ASIA_A, NODE_ASIA_B, NODE_ASIA_CConfig Server nodes (replica set):US: CS_US\nEU: CS_EU\nAsia: CS_ASIAApplication access to mongodb data is through mongos in the local region.Because of the geographally seperation, network latency sometimes is high. If we replicated the whole database (over 750GB) between the continents, it will be impossible. Therefore we think of using a tag sharding setup to distribute data specific to each region to stay in its region. However, there are some common collections that all 3 locations need to share.Example collections:I used the tag sharding example provided in the mongodb documentation on our test environment. Tag sharding seems to be able to do what I want. customer records are sharded into different\nshards US, EU, and Asia. Only Asia customer records are available in Asia replicaset.But what I cannot figure out is how to handle collections like “market” & “company”. We want these collections to be available in all regions.How can we get these type of common tables to be available in all regions? From what I understand, sharding means split the data based on shard keys. If they are sharded, some data will\nnot be available in all regions.The reason I want the common data collections to be available everywhere is, in case of high network latency between the geo regions, when a local mongos talks to local mongo config server to query common data collections. If the “company” collection is primary in US, and network latency between Asia and US so too high that proper mongo communication is not fesible, then “company” will not be available in Asia as there is no “company” collection in the Asia shard.Therefore, I want to see if you guys have any advice on this.Thanks in advance.", "username": "Eric_Wong" }, { "code": "", "text": "Hello @Eric_Wong, just couple of comments here.But what I cannot figure out is how to handle collections like “market” & “company”. We want these collections to be available in all regions.How can we get these type of common tables to be available in all regions? From what I understand, sharding means split the data based on shard keys. If they are sharded, some data will not be available in all regions.The non-sharded collection data will reside on the Primary shard only (and, the primary shard is going to be local to one of the regions, in your setup). This is because the unsharded collections data is not distributed among the shards. This data will be available to all regions (US, EU and Asia). And, your queries related to these collections will hit the primary shard.", "username": "Prasad_Saya" } ]
Availability of non-sharded data
2021-06-11T10:09:28.908Z
Availability of non-sharded data
2,179
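For readers who want to see what the region-pinning part of such a setup can look like, below is a rough PyMongo sketch of zone (tag) sharding, assuming a shard key that leads with a region field — the thread does not specify the key, and the shard, database and collection names here are illustrative. The unsharded "market" and "company" collections would simply live on the primary shard, as Prasad describes.

```python
from pymongo import MongoClient
from bson.min_key import MinKey
from bson.max_key import MaxKey

admin = MongoClient("mongodb://mongos-host:27017").admin  # connect through a mongos (placeholder host)

# Associate each replica-set shard with a zone (shard names are illustrative)
admin.command("addShardToZone", "rs_us", zone="US")
admin.command("addShardToZone", "rs_eu", zone="EU")
admin.command("addShardToZone", "rs_asia", zone="ASIA")

# Shard the customer collection on a region-prefixed key
admin.command("enableSharding", "appdb")
admin.command("shardCollection", "appdb.customer", key={"region": 1, "customer_id": 1})

# Pin shard-key ranges to zones ("ASIA" < "EU" < "US" lexicographically)
admin.command("updateZoneKeyRange", "appdb.customer",
              min={"region": MinKey(), "customer_id": MinKey()},
              max={"region": "EU", "customer_id": MinKey()}, zone="ASIA")
admin.command("updateZoneKeyRange", "appdb.customer",
              min={"region": "EU", "customer_id": MinKey()},
              max={"region": "US", "customer_id": MinKey()}, zone="EU")
admin.command("updateZoneKeyRange", "appdb.customer",
              min={"region": "US", "customer_id": MinKey()},
              max={"region": MaxKey(), "customer_id": MaxKey()}, zone="US")
```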
https://www.mongodb.com/…bc86feb8f423.png
[]
[ { "code": "", "text": "Captura de pantalla 2021-06-10 a las 10.21.16821×58 6.38 KBI’ve been trying to access my cluster for half an hour, but it remains pending. Since I started studying your courses 3 weeks ago, I have never had a problem connecting.what’s going on?\ncan anybody help me?\nI have tried to generate the ip again\ndisconnect my account, re-enter … I don’t know what else to do.", "username": "Veronica_Moreno_Flor" }, { "code": "", "text": "I still have the same problem, it does not load the cluster ", "username": "Veronica_Moreno_Flor" }, { "code": "", "text": "I think I had a similar problem today. It started working again when I switched browsers from chrome to edge. Good luck!", "username": "CantCode" }, { "code": "", "text": "Thanks for answering.\nThe truth is that I have tried what you just told me in case, but I still have the same problem.**Could someone from the Mongo team help me?**", "username": "Veronica_Moreno_Flor" }, { "code": "", "text": "There are some issues going on with Free Tier clusters for past 2 daysI am not sure if that is the cause.Mongodb staff can confirmYou can check this linkWelcome to MongoDB Cloud's home for real-time and historical data on system performance.I tried to add a new IP to my cluster but getting same pending status", "username": "Ramachandra_Tummala" }, { "code": "", "text": "thank you very much for the info", "username": "Veronica_Moreno_Flor" }, { "code": "", "text": "We are very sorry about the inconvenience folks", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't connect to my Cluster in Atlas
2021-06-10T08:23:00.914Z
Can't connect to my Cluster in Atlas
2,349
null
[ "data-modeling", "swift", "atlas-device-sync" ]
[ { "code": "", "text": "In many situations it will be a bad UX to sync/save every single change the user makes. Because of this I open a temp(In memory) realm and then I copy that object from the temp realm to the synced realm. I am unsure if this however Is considered good practice, not to mention the mess it creates, having to keep a temp object for each object (user, tempUser, playlist, tempPlaylist, etc.).I would really appreciate if you can point me to the right direction.", "username": "dimo" }, { "code": "", "text": "Does the temporary object need to be stored in Realm at all, or could it just be an in-memory object that you work on and then save to Realm when you’re done (that’s how I’d normally do it for new objects, a bit trickier when changing an existing object)?", "username": "Andrew_Morgan" }, { "code": "let myUnmanagedObject = MyObject(value: managedObjectToCopy)", "text": "Creating a temporary object is how we do it -let myUnmanagedObject = MyObject(value: managedObjectToCopy)creates an unmanaged, editable object. Then, once done, you can save it in Realm.The downside is that any child objects, like objects in List, also need to have unmanaged copies made and then there are a couple of issues if it has embedded objects.But overall, yes, that’s a common practice.That being said, we don’t sustain them in memory very long - they stay around while the user is actively editing it, like on a sheet or detail view but when that’s completed, they are saved and disposed of.", "username": "Jay" }, { "code": "", "text": "OK, what you describe here is how I’d do it. I got confused by your original description about storing them in a temporary realm. As unmanaged Objects, I don’t believe that they’re stored in any realm.", "username": "Andrew_Morgan" }, { "code": "", "text": "The OP was creating an in-memory realm and copying the objects to it, which would also be unmanaged objects.However, it doesn’t seem like that’s really needed for this use case if just the individual objects are being worked with.Creating in-memory copies (as I indicated above) should suffice for most situations including when a user is making changes to a specific object. I would not suggest leveraging in-memory realms unless there’s a really a need it.", "username": "Jay" }, { "code": "func addToTempRealm(newValue: String) {\n try! state.tempRealm.write {\n let tempUser = state.tempRealm.create(type(of: state.user!), value: state.tempUser, update: .modified)\n tempUser.card!.label = newValue\n state.tempUser = tempUser\n }\n}\n", "text": "Thanks for your replies! If I don’t create an in-memory (temp realm), I get an error for trying to write outside a write transaction when I try to change a value.Here is an exampleMaybe I am doing something wrong.I think it will be an amazing feature to have a local and synced state per object. So if I try to edit an object outside a write transaction, Realm will save it locally but if I change a value within a write transaction, it will sync the changes.", "username": "dimo" }, { "code": "Object", "text": "You shouldn’t get an error if you create and modify an Object that hasn’t yet been added to Realm.", "username": "Andrew_Morgan" }, { "code": "create(type(of: state.user!)", "text": "I am curious why you’re creating an in-memory realm to start with. 
If you’re just editing an object, creating a copy of it will suffice - there’s not need to also add it to a Realm until you’re ready to persist it.Also, I am a little suspicious of thiscreate(type(of: state.user!)If you want to create an unmanaged copy of a Realm object, you would use that object class per my above post. Does your .user class contain any relationships? If so, that’s not supported with .create.", "username": "Jay" }, { "code": "", "text": "You are right, having a in-memory realm is just an overkill. However I am still confused about what’s the best practice to avoid realtime sync (when let’s say a user is editing their profile and they are not sure if they want to save the changes).Thanks for your attention", "username": "dimo" }, { "code": "self.contactResults = realm.objects(ContactClass.self) //self.contacts is the tableView dataSource\nself.contactTableView.reloadData()\nlet detailView = DetailView() //create the detail view\nlet managedContact = self.contactResults[selectedRow] //get the selected contact\nlet unmanagedContact = ContactClass(value: managedContact) //create an unmanaged, editable copy\ndetailView.populateWith(contact: unmanagedContact)\n", "text": "Let me provide a high level scenario:Suppose your app has a listing of items, like an address book.The contact list is a Results object populated from Realm that acts as a datasource for a tableView.Then suppose when a contact is double clicked (macOS) or selected and Edit button tapped (iOS) another view is shown that allows that contacts details to be edited. For that process you want to pass an unmanaged Realm object to the sheet or details view to populate it and be editable.From there the user can edit the contact info on the detailView, updating the unmanaged contact object along the way and when complete, write it to Realm.This gives the app the ability to not sync or attempt to update a managed object outside a write transaction - and the user can click ‘Cancel’ and no data will be affectedNow, you may say to yourself“self, why not just pass the managed object and update it within a write transaction”The answer is that in some cases, you want to maintain the data in an actual object instead of creating arrays and other vars to hold the data while editing. For example suppose the contact had a bunch of embedded Addresses - keeping those in a tidy list within a the ContactClass object makes them easier to work with; adding, editing and removing addresses can be done right within the object because it’s unmanaged.", "username": "Jay" }, { "code": "@StateObservableObjectEditProfileView.onAppear {\n let unmanagedUser = User(value: state.user!) \n tempUser = unmanagedUser\n }", "text": "EDIT: Sadly this doesn’t work if the object has nested objects(list, embedded). I would have to go back to the in-memory solution. Or maybe deconstruct the object and use local @State to deal with the changes before syncing.@Jay Thanks for the explanation, it worked! I actually tried this before but it wasn’t working (changes were persisted) because I had my unmanaged object in an ObservableObject.This is what I have now when the user goes to EditProfileView,", "username": "dimo" }, { "code": "", "text": "It actually does work but you need to also make a copy, known as a deep copy, of the embedded objects. 
I had a question about that topic last yearand there’s a standing issue as wellCrash during upsert of an existing object with a list of EmbeddedObject inside.\n…\n\n## Goals / Expected Results\nI can upsert a record.\n\n## Actual Results\nException: ** Terminating app due to uncaught exception 'RLMException', reason: 'Cannot add an existing managed embedded object to a List.'**\n\n## Steps for others to Reproduce\nCreate unmanaged copy of existing object with the property - list of EmbeddedObject's children objects.\nTry to add it with `UpdatePolicy.update`\n\n## Workaround\nCreate a deep copy of a target object.\n\n## Code Sample\n```\nclass Address: EmbeddedObject {\n @objc dynamic var street: String? = nil\n @objc dynamic var city: String? = nil\n @objc dynamic var country: String? = nil\n @objc dynamic var postalCode: String? = nil\n}\n\n// Define an object with an array of embedded objects\nclass Business: Object {\n @objc dynamic var _id = ObjectId.generate()\n @objc dynamic var name = \"\"\n let addresses = List<Address>() // Embed an array of objects\n \n override static func primaryKey() -> String? {\n return \"_id\"\n }\n \n convenience init(name: String, addresses: [Address]) {\n self.init()\n self.name = name\n self.addresses.append(objectsIn: addresses)\n }\n}\n\n let b = realm.objects(Business.self).first!\n \n //make an unmanaged copy of a business\n let someBusiness = Business(value: b)\n someBusiness.name = \"New Business Name\"\n\n try! realm.write {\n realm.add(someBusiness, update: .modified)\n }\n```\n## Stack trace\n```\n*** First throw call stack:\n(\n\t0 CoreFoundation 0x00007fff2043a126 __exceptionPreprocess + 242\n\t1 libobjc.A.dylib 0x00007fff20177f78 objc_exception_throw + 48\n\t2 delcrash 0x00000001036918c5 _ZN18RLMAccessorContext12createObjectEP11objc_objectN5realm12CreatePolicyEbNS2_6ObjKeyE + 3125\n\t3 delcrash 0x00000001036ea20d RLMAddObjectToRealm + 285\n\t4 delcrash 0x00000001038876c4 $s10RealmSwift0A0V3add_6updateySo0aB6ObjectC_AC12UpdatePolicyOtF + 1252\n\t5 delcrash 0x00000001034fc24b $s8delcrash14ViewControllerC9addActionyyFyyXEfU_ + 251\n\t6 delcrash 0x00000001034fb84f $ss5Error_pIgzo_ytsAA_pIegrzo_TR + 15\n\t7 delcrash 0x00000001034fc2a4 $ss5Error_pIgzo_ytsAA_pIegrzo_TRTA.1 + 20\n\t8 delcrash 0x00000001038866cb $s10RealmSwift0A0V5write16withoutNotifying_xSaySo20RLMNotificationTokenCG_xyKXEtKlF + 299\n\t9 delcrash 0x00000001034fbfb8 $s8delcrash14ViewControllerC9addActionyyF + 1112\n\t10 delcrash 0x00000001034fbace $s8delcrash14ViewControllerC6runAddyyF + 46\n\t11 delcrash 0x00000001034fb0d3 $s8delcrash14ViewControllerC11viewDidLoadyyF + 723\n\t12 delcrash 0x00000001034fba8b $s8delcrash14ViewControllerC11viewDidLoadyyFTo + 43\n\t13 UIKitCore 0x00007fff23f37de3 -[UIViewController _sendViewDidLoadWithAppearanceProxyObjectTaggingEnabled] + 88\n\t14 UIKitCore 0x00007fff23f3c6ca -[UIViewController loadViewIfRequired] + 1084\n\t15 UIKitCore 0x00007fff23f3cab4 -[UIViewController view] + 27\n\t16 UIKitCore 0x00007fff246ac28b -[UIWindow addRootViewControllerViewIfPossible] + 313\n\t17 UIKitCore 0x00007fff246ab978 -[UIWindow _updateLayerOrderingAndSetLayerHidden:actionBlock:] + 219\n\t18 UIKitCore 0x00007fff246ac93d -[UIWindow _setHidden:forced:] + 362\n\t19 UIKitCore 0x00007fff246bf950 -[UIWindow _mainQueue_makeKeyAndVisible] + 42\n\t20 UIKitCore 0x00007fff248fa524 -[UIWindowScene _makeKeyAndVisibleIfNeeded] + 202\n\t21 UIKitCore 0x00007fff23ace736 +[UIScene _sceneForFBSScene:create:withSession:connectionOptions:] + 1671\n\t22 UIKitCore 0x00007fff2466ed47 
-[UIApplication _connectUISceneFromFBSScene:transitionContext:] + 1114\n\t23 UIKitCore 0x00007fff2466f076 -[UIApplication workspace:didCreateScene:withTransitionContext:completion:] + 289\n\t24 UIKitCore 0x00007fff2415dbaf -[UIApplicationSceneClientAgent scene:didInitializeWithEvent:completion:] + 358\n\t25 FrontBoardServices 0x00007fff25a6a136 -[FBSScene _callOutQueue_agent_didCreateWithTransitionContext:completion:] + 391\n\t26 FrontBoardServices 0x00007fff25a92bfd __94-[FBSWorkspaceScenesClient createWithSceneID:groupID:parameters:transitionContext:completion:]_block_invoke.176 + 102\n\t27 FrontBoardServices 0x00007fff25a77b91 -[FBSWorkspace _calloutQueue_executeCalloutFromSource:withBlock:] + 209\n\t28 FrontBoardServices 0x00007fff25a928cb __94-[FBSWorkspaceScenesClient createWithSceneID:groupID:parameters:transitionContext:completion:]_block_invoke + 352\n\t29 libdispatch.dylib 0x00000001054dfa88 _dispatch_client_callout + 8\n\t30 libdispatch.dylib 0x00000001054e29d0 _dispatch_block_invoke_direct + 295\n\t31 FrontBoardServices 0x00007fff25ab88f1 __FBSSERIALQUEUE_IS_CALLING_OUT_TO_A_BLOCK__ + 30\n\t32 FrontBoardServices 0x00007fff25ab85d7 -[FBSSerialQueue _targetQueue_performNextIfPossible] + 433\n\t33 FrontBoardServices 0x00007fff25ab8a9c -[FBSSerialQueue _performNextFromRunLoopSource] + 22\n\t34 CoreFoundation 0x00007fff203a8845 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17\n\t35 CoreFoundation 0x00007fff203a873d __CFRunLoopDoSource0 + 180\n\t36 CoreFoundation 0x00007fff203a7c81 __CFRunLoopDoSources0 + 346\n\t37 CoreFoundation 0x00007fff203a23f7 __CFRunLoopRun + 878\n\t38 CoreFoundation 0x00007fff203a1b9e CFRunLoopRunSpecific + 567\n\t39 GraphicsServices 0x00007fff2b793db3 GSEventRunModal + 139\n\t40 UIKitCore 0x00007fff2466d40f -[UIApplication _run] + 912\n\t41 UIKitCore 0x00007fff24672320 UIApplicationMain + 101\n\t42 libswiftUIKit.dylib 0x00007fff53c487b2 $s5UIKit17UIApplicationMainys5Int32VAD_SpySpys4Int8VGGSgSSSgAJtF + 98\n\t43 delcrash 0x000000010350100a $sSo21UIApplicationDelegateP5UIKitE4mainyyFZ + 122\n\t44 delcrash 0x0000000103500f7e $s8delcrash11AppDelegateC5$mainyyFZ + 46\n\t45 delcrash 0x0000000103501059 main + 41\n\t46 libdyld.dylib 0x00007fff20257409 start + 1\n\t47 ??? 0x0000000000000001 0x0 + 1\n)\nlibc++abi.dylib: terminating with uncaught exception of type NSException\n*** Terminating app due to uncaught exception 'RLMException', reason: 'Cannot add an existing managed embedded object to a List.'\nterminating with uncaught exception of type NSException\nCoreSimulator 732.18 - Device: iPhone 8 (A8CA0A6C-C943-4C70-8EC4-EF9FC5E0F5F5) - Runtime: iOS 14.1 (18A8394) - DeviceType: iPhone 8\n```\n\n## Version of Realm and Tooling\nRealm framework version: 10.1.2\nXcode version: 12\niOS/OSX version: 14\nDependency manager + version: SPMWhat we do it when we create an unmanaged copy of an object we also use another function within that object that iterates over the embedded object list and creates unmanaged copies of each (that was suggested in the link I included above)", "username": "Jay" }, { "code": "", "text": "Thanks @Jay! It’s a shame that this isn’t included out of the box, sounds like an essential feature as not every user action requires instant sync. I will try to experiment with this deep copy route.Thanks again for your time", "username": "dimo" }, { "code": "", "text": "After scouting the web for solution I found these:Converting the realm object to JSON. 
However I can’t conform to Codable (on latest RealmSwift).Another way it to .detach() the object but it’s not working with EmbeddedObjectI have an object in Realm. I would like to retrieve that object and then work wi…th an unmanaged version of that object. Unfortunately, it doesn't seem like Realm for iOS has any good copy options, so I tried to followed the \"detachable\" workaround by \"anlaital\" outlined in issue #3381.\n\nMy code is as follows:\n![screen shot 2017-10-27 at 3 53 33 pm](https://user-images.githubusercontent.com/16312918/32124854-075f95da-bb2f-11e7-9af4-23d40451424a.png)\n\nThis works for the most part. When on a property that is a list and has values, `detachable.detached()` does return the expect list copy. Unfortunately, `detached.setValue(detachable.detached(), forKey: property.name)` does not set the value for any Lists my object has. \n![screen shot 2017-10-27 at 3 54 39 pm](https://user-images.githubusercontent.com/16312918/32125059-d43904a6-bb2f-11e7-8ae3-bd3389ab000a.png)\n\nAny thoughts on how I can properly assign my \"detached\"/unmanaged list to my object?So I guess there is no “best practice” because Realm ignores this use case altogether.", "username": "dimo" }, { "code": "", "text": "Ok! Managed to find a solution for using unmanaged objects with nested lists and embedded objects.Just make sure that every object and embedded object has Codable or you will get errors.", "username": "dimo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Best practice to deal with "clone" objects? (sync on tap/click)
2021-05-31T17:54:01.080Z
Best practice to deal with "clone" objects? (sync on tap/click)
5,143
null
[ "performance", "monitoring" ]
[ { "code": "MALLOC: 2091358456 ( 1994.5 MiB) Bytes in use by application\\nMALLOC: + 96292864 ( 91.8 MiB) Bytes in page heap freelist\\nMALLOC: + 43007552 ( 41.0 MiB) Bytes in central cache freelist\\nMALLOC: + 4904832 ( 4.7 MiB) Bytes in transfer cache freelist\\nMALLOC: + 21930312 ( 20.9 MiB) Bytes in thread cache freelists\\nMALLOC: + 19816704 ( 18.9 MiB) Bytes in malloc metadata\\nMALLOC: ------------\\nMALLOC: = 2277310720 ( 2171.8 MiB) Actual memory used (physical + swap)\\nMALLOC: + 345247744 ( 329.3 MiB) Bytes released to OS (aka unmapped)\\nMALLOC: ------------\\nMALLOC: = 2622558464 ( 2501.1 MiB) Virtual address space used\\nMALLOC:\\nMALLOC: 273100 Spans in use\\nMALLOC: 86 Thread heaps in use\\nMALLOC: 4096 Tcmalloc page size\\n------------------------------------------------\\nCall ReleaseFreeMemory() to release freelist memory to the OS (via madvise()).\\nBytes released to the OS take up virtual address space but no physical memory.\t\t\n", "text": "Hi All,At this moment I am experiencing issues with a Mongo replicaset version 4.0.12 and slowly increasing memory usage only on the primary node. Over the course of 3-4 weeks memory slowly grows from 30% to 99% after which the server becomes unresponsive and at a certain point steps down as primary. Data size does not change as all collections are cleaned up periodically using a client side script or capped collection. A similar setup which is processing more data is not having this issue. Restarting the node solves the issue for the next couple of weeks. Is there anything I should check or change?Some more detailed information:replicas have 2GiB memory and around 50GiB of data. Indexes are around 30MiB. No slow queries reported by Mongo. 60-70 connections are open at the same time. TcMalloc looks like this before crashing. wiredtiger cache only uses 500MiB.A possible related issue is https://jira.mongodb.org/browse/SERVER-43632", "username": "Kees" }, { "code": "", "text": "Hi @Kees,Welcome to MongoDB community.It sounds like Wired Tiger memory is overwhelmed. This can be happening even if queries are under the 100ms slow threshold for logging…I suggest to first upgrade to latest 4.0.24 as a minimum.Additional please add resources to the host or at least grow cache to 1GB (~50% ram)Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "\"bytes currently in the cache\" : 296423058", "text": "Hi Pavel,Thanks a lot for the quick response.Wired Tiger memory usage grows slowly over 3-4 weeks while data size and query patterns stay the same. This doesn’t exactly look like overwhelmed to me. Could you explicate a bit on this?Indeed I’ve been thinking of upgrading to a newer version. Are there any particular ‘fixes’ in 4.0.24 compared to 4.0.12 that I should be aware of?Note that \"bytes currently in the cache\" : 296423058. Not sure if it needs more?Best,Kees", "username": "Kees" }, { "code": "", "text": "Yes, there is around ~2 years of fixes between the versions specifically around menory consumptionIt doesn’t make alot of seance to investigate memory with 500mb of cache its too small for 50gb of data…", "username": "Pavel_Duchovny" }, { "code": "", "text": "Okay thanks, upgrading can be tried.", "username": "Kees" } ]
Memory slowly increasing over several weeks
2021-06-08T15:39:20.699Z
Memory slowly increasing over several weeks
4,557
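One way to follow the trend Kees and Pavel are discussing is to poll serverStatus and compare the TCMalloc heap against the WiredTiger cache over time. A small monitoring sketch follows; the host is a placeholder and the field names are as they appear in 4.x serverStatus output.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://primary-host:27017")  # placeholder host
status = client.admin.command("serverStatus")

wt_cache = status["wiredTiger"]["cache"]
tcm = status["tcmalloc"]["generic"]

print("WT cache bytes in use :", wt_cache["bytes currently in the cache"])
print("WT cache max bytes    :", wt_cache["maximum bytes configured"])
print("tcmalloc allocated    :", tcm["current_allocated_bytes"])
print("tcmalloc heap size    :", tcm["heap_size"])
```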
null
[ "crud", "performance" ]
[ { "code": "{ttnr: \"ttnr\", name: \"name\", 'location._id':ObjectId(\"location._id\") };\n{ \"$set\": {ttnr: \"ttnr\", name: \"name\", 'location._id': ObjectId(\"location._id\") } }\n", "text": "Hello dear Community,i am using Mongo db_version: 4.2.8 and connect to it via C 1.17 latest driver from a remote ETL tool (Ab Initio).In the tool, there is a special component which is used to communicate with the DB. Basically, it executes\nupdate / insert commands. The task is to load ~300000 documents in update/upsert mode. Below, there is a criteria to decide on update or insert :action:\nthenbatch =10000 : Number of records to submit in batch (does not really affect the udpate/upsert mode, it updates about 10 records per second → quite slow with respect to 300000 items)data looks as follows:{\"_id\":{\"$oid\":“60c2fbd3627f8b06d03c98b5”},“name”:“EL-; AR14-C”,“type”:“Product”,“ttnr”:“01215555501”,“location”:{\"_id\":“60c23939898cb1d1168e4551”},“parents”:[],“versions”:[],“check_sum”:{\"$numberLong\":“0”}}The problem is that it takes hours to load these unique 300000 records in there; however in the \"insert \" mode it takes second. The insert mode, though, does not care about duplicates of data regarding “ttnr” and “location” values. That is why i use update in upsert mode. Does anybody know how to improve performance ? Can any settings be adjusted on the DB side to increase processing speed (it is clear that DB should check the criteria for the every incoming record. ) ? What may be improved for the query itself?Thank you in advanceBest regards\nIgor", "username": "igor_insights" }, { "code": "", "text": "Hi @igor_insightsThanks for raising this interesting question, I’m not sure how this pertains to M201 and I’m unfamiliar with the tool you mention. I’d suggest reposting this in the Working with Data category as you may be able to get additional help and people who may be more familiar with the ETL tool you are using.Is there a specific lesson or exercise in M201 that you have an issue with or question about that I can help you with?Kindest regards,\nEoin", "username": "Eoin_Brazil" } ]
Performance in update upsert mode from ETL tool
2021-06-11T09:02:55.950Z
Performance in update upsert mode from ETL tool
2,687
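The thread above ends with a redirect rather than a fix, but the usual levers for this upsert pattern — not confirmed by the thread — are an index on the match fields and unordered bulk upserts. A hedged PyMongo sketch, with field names and the ObjectId taken from the question and the collection name invented:

```python
from bson import ObjectId
from pymongo import MongoClient, UpdateOne

coll = MongoClient()["etl"]["products"]  # hypothetical collection

# Without an index on the match fields, every upsert is a collection scan.
coll.create_index([("ttnr", 1), ("location._id", 1)])

# Sample input shaped like the document in the question.
records = [
    {"ttnr": "01215555501", "name": "EL-; AR14-C",
     "location_id": ObjectId("60c23939898cb1d1168e4551")},
]

# Batch the upserts instead of sending them one at a time.
requests = [
    UpdateOne(
        {"ttnr": r["ttnr"], "name": r["name"], "location._id": r["location_id"]},
        {"$set": {"ttnr": r["ttnr"], "name": r["name"],
                  "location._id": r["location_id"]}},
        upsert=True,
    )
    for r in records
]
result = coll.bulk_write(requests, ordered=False)
print(result.upserted_count, result.modified_count)
```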
https://www.mongodb.com/…0a70fc386bec.png
[]
[ { "code": "", "text": "Hi there,I am encountering a problem building a kind of simple chart (count of something over time, filtered on one value). This works in Sample Mode but not on the full collection the collection is large and no index on the filter field. so the assumption is close to run for an index.\nBut this is not the case here, the problem already comes up when chart does the initial sample of the full collection.Questions here are:Thanks a lot\nMichael", "username": "michael_hoeller" }, { "code": "system.profile", "text": "Hi Michael. I have a couple of suggestions:You are correct that indexes are mainly useful for filters, and those do not apply when the fields are sampled. If it’s slow, I suspect one or more of the following apply:In all of these cases, the pipeline will apply when the data is sampled. Depending on the size of your data and the indexing strategy, this could result in a slow query.Also FWIW we are doing some work to eliminate the 90 second query timeout, but that will take us some time as it’s a fairly significant architectural change.HTH\nTom", "username": "tomhollander" }, { "code": "", "text": "Hello Tom,You are correct that indexes are mainly useful for filters, and those do not apply when the fields are sampled. If it’s slow, I suspect one or more of the following apply:In this case we have the collection as is. First thing is to use sample mode -> limited to the 1000 docs, this works. With switching off the “Sample Mode” charts runs into a timeout error. Though the profiler is switched on I don’t see the sampling in the Atlas Profiler UI. I do see other queries to take care off but noting related to the charts issue.I could get the schema exploration in Compass to finish after 20 min (!)\nI also have an aggregation (count of items grouped by year/month) which runs 10s and touches all records. This is what I what to visualize but I can not pass the sampling step …Any thoughts how to get that to work? On a mid term the schema will be optimized, but since the aggregation is acceptable performant I hope to get this visualized asap.Michael", "username": "michael_hoeller" }, { "code": "sample modesampling mode", "text": "Hello @tomhollanderI like to take complexity out of this question. The core problem is as follows:In case I switch on the sample mode the sampling is reduces to 1000 documents (I assume) and this passes the sampling finishes with no error. But what ever I do I can not switch off the sampling mode with out getting a timeout. So I never can run any query against the full dataset.I want now to understand what’s the reason for the timeout while sampling. Other (small) collections work fine. So I assume it is data/schema related, the profiler is switched on bit I do not see any specific spike. I would expect this since the timeout seems to be 90 sec.Any thoughts, hints on that?Michael", "username": "michael_hoeller" }, { "code": "", "text": "Hi @michael_hoeller -Apologies for the slow response. This isn’t really a Charts question anymore; it’s a MongoDB query optimisation question and I’m trying to find someone more qualified than me to answer it.But in the meantime I’ll help as much as I can. If I understand correctly, your first post was about a timeout while building a chart, but now you’re asking about a timeout while sampling the collection and you’re not even able to see the fields?When you build a chart without a filter (or a filter on an unindexed field), the query must scan every document, even if you are doing a simple count. 
If the collection is large enough (and/or the cluster is underpowered), this scanning can take more than the 90 seconds that Charts is able to wait for a result.The field sampling process involves looking at a random 50 documents from the first 10,000 documents in the collection. This should be quick, even on a large collection, unless it is a view or a data source with a pipeline. If this is a simple collection and is timing out, I don’t have an immediate explanation - it may need further investigation by our support team or someone with better knowledge of query optimisation.Sorry I know it’s not a complete answer, but I hope it helps a bit.\nTom", "username": "tomhollander" }, { "code": "\"allowDiskUse\": true,", "text": "Hello @tomhollanderthanks a lot for your response.If I understand correctly, your first post was about a timeout while building a chart, but now you’re asking about a timeout while sampling the collection and you’re not even able to see the fields?This is correct, I want to simplify the issue and could break it down to the fact that the very initial sampling immediately after adding a source collection to a brand new chart creates the problem.When you build a chart without a filter (or a filter on an unindexed field), the query must scan every document, even if you are doing a simple count.No chart here, just the initial sampling. I do not see any peaks / spikes in the profiler, how can we get further information where the sampling runs into problems? I am quite sure that it is a schema/data issue but to fix that I’d like to learn more about the root cause.If the collection is large enough (and/or the cluster is underpowered), this scanning can take more than the 90 seconds that Charts is able to wait for a result.The collection has ~290k Documents with an avg. document size of 3.3 MB, so not very large.\nThe document size might be an issue but since there are many candidates to work on I like to lean more about the root cause and how to check that in a tool/log. So something beyond assumptions.The machine is M60, running on a kind of low IOPs (completely fitting for the general day to day taks - is there a significant higher demand with charts? Also I saw while testing that the disk IO went high, do you use \"allowDiskUse\": true,? That might make things slow…The field sampling process involves looking at a random 50 documents from the first 10,000 documents in the collection. This should be quick, even on a large collection, unless it is a view or a data source with a pipeline.As mentioned in the prev. posting the sampling works fine, but at some time you have to switch off the sampling - than the problem arises.So it is all down to get to know what and where to check to find out why the pure and initial sampling of a collection runs into a timeout.thanks a lot for looking into it\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Hi @michael_hoeller,The collection has ~290k Documents with an avg. document size of 3.3 MB, so not very large.\nThe document size might be an issue but since there are many candidates to work on I like to lean more about the root cause and how to check that in a tool/logCould you try creating smaller Views of the documents in the collection and see if that would help with the timeout issue ?Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hello @wan\nI will set a view up however as mentioned the db is not huge. And it seems to be a glitch dealing with docs which are not in the kbyte range. 
Sure Mbyte docs are considered an anti pattern but the db allows for up to 16 MB…\nThe aggregation I want to run and visualize returns in seconds outside of charts. The issues is that charts completely new, no aggregation noting, fails with the first click on the collection when it wants to sample. Going to Sample Mode and work on only 50 docs is fine. Going back to “full data” mode failsRegards,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Thanks for the extra info @michael_hoeller! It’s possible we haven’t tested this with very large documents. Can you give more info about the docs you’re dealing with? How big exactly are they? Do they have a large number of fields, or a small number of fields with a lot of data?If the data isn’t sensitive, it would be helpful if you could send a dump of the data so we can try reproing the issue.Tom", "username": "tomhollander" }, { "code": "", "text": "Hello @tomhollanderI will send you a DM with sensitive data. Let’s publish the results here when the issues is solved.Regards,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Chart gets a timeout - how to debug?
2021-05-13T08:27:41.226Z
Chart gets a timeout - how to debug?
4,923
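Wan's suggestion of creating smaller views over the multi-megabyte documents can be expressed with the create command and a $project stage. A rough sketch in PyMongo; the connection string, database, view, collection and field names are all placeholders.

```python
from pymongo import MongoClient

db = MongoClient("<connection string>")["reports"]  # placeholder names

# A view that keeps only the fields the chart needs, so field sampling and
# the date/count aggregation no longer touch 3+ MB documents.
db.command({
    "create": "orders_slim",
    "viewOn": "orders",
    "pipeline": [{"$project": {"created_at": 1, "status": 1}}],
})
```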
null
[ "replication", "monitoring" ]
[ { "code": "db.myCollectionName.aggregate([{ \"$sample\" : { \"size\" : 100}}, { \"$project\" : { \"myFieldName\" : 1, \"_id\" : 0}}])\n$sampleReadPreference.secondaryPrefererd()localThresholdMS", "text": "Hi,I have a 3 node mongodb replica set cluster, with one node handles write requests and two others handle read requests.I have also a Spring Boot web server (with Spring Data MongoDB 3.0.6.REALEASE and mongodb-driver-sycn:4.0.5 java), which exposes a simple READ operation over a collection:This operation use $sample operator to randomly select 100 documents over a collection having about 100m documents, and project one field.I use JMeter to do pressure test over the application, with ReadPreference.secondaryPrefererd() configured, it turns out that each secondary node can handle about 600 ops. However, the strange thing is:One secondary node has 100 more connection count over another, whereas can only handles the same #Ops. The node which bears less connection count also has much less active reads.We can always repeat this problem if we try to retest more times.Each secondary node has exactly the same software configuration and hardware setting.Can anyone give some tips ?BTW, I notice there is a server selecting algorithm: specifications/server-selection.rst at master · mongodb/specifications · GitHubI tried to increase localThresholdMS however it does not work.", "username": "Shu_SHANG" }, { "code": "localThresholdMSlatency fastest node + localThreadholdMSfastest node + localThreadholdMS", "text": "Hi @Shu_SHANG and welcome in the MongoDB community !I don’t have a proper answer to your question but rather a few comments and a potential answer.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "secondaryPreferred()nearest()secondaryPreferred()nearest()localThresholdMS", "text": "@MaBeuLux88 Thanks for the reply !I was thinking, by chaning readPreference, workload can be distributed over a replicas set, you told me that it’s not the case, thanks again for this usable enlightenment.In fact, during the performance test, the MongoDB cluster does not handle any write operations, thus we can think the two secondary nodes can focus on read operations, while the master node stays idle.I’ve tried several things:Two secondary nodes do the work, when the concurrency is 80, the performance reaches the best, TPS reaches about 700+, rt is about 100ms and #ops reaches 400 for each secondary node.Two secondary nodes as well as the primary node do the work, when the concurrency is 100, the performance reaches the best, TPS reaches about 1000+, rt is about 100ms and #ops reaches 400 for each node.Adding one more secondary node to the cluster (offline copy mode, without degrading the performance), when the concurrency is 100, the performance reaches the best, TPS reaches about 1000+, rt is about 100ms and #ops reaches 400 for each secondary node.3 secondary ndoes as well as the primary node do the work, when the concurrency is 120, the performance reaches the best, TPS reaches about 1500+, rt is about 80ms and #ops reaches for each node.By changing readPreference from secondaryPreferred() to nearest(), the primary node begins come to share the workload, best number of concurrency climbs up by ~20. For each scenario, as the number of concurrency exceeds the best point, rt begins to degrade and TPS remains the same.Also, I’ve changed the localThresholdMS to a fairly large value (5000ms) for each scenario, hoping that all server nodes can serve as a candidate. 
However, the strange thing still remains for each of the scenario, as the performance test goes on, one secondary node still has much more connection count than the other one (two in 3 secondary node case) and the primary master node whereas has the same number of operations. The node bearing more connection count starts to lag behind (as you have said), but as far as I am concerned, it’s not much, as you can see below:mongo-community-ask-question1734×2933 510 KB", "username": "Shu_SHANG" }, { "code": "_idfind", "text": "Adding an extra secondary will indeed boost your read performances overall, but if this node isn’t setup properly, you now increased your majority from 2 to 3 and have an even number of nodes which isn’t ideal for the High Availability again .How many clients to do have and which language are you using? In Java I think one client can generate maximum about 100 connections. My guess it that one client decides to send its 100 connections to one node for a bit and then re-evaluate at some point if this node is still the fastest node available. I guess that makes the clients a little sticky to one secondary potentially until the driver re-evaluate the best node to send the query to?I was thinking, by chaning readPreference, workload can be distributed over a replicas set, you told me that it’s not the case, thanks again for this usable enlightenment.You can distribute using the readPreference like you did. But the round robin distribution won’t be absolutely perfect and even because of the server selection algorithm and also because it’s not initially designed for this kind of use and breaks the first reason why RS are built for: HA. Sharding is design for this kind of operations though.\nUsually readPreference is used to target a specific node for ad hoc queries or analytics workload (using tags) and the writeConsistency is used to enforce a strong data resilience to avoid rollbacks (== acknowledge writes on the majority of the voting members of the RS).By the way, what limit are you reaching? What’s making you say that you cannot handle more queries? Doubling the number of clients doesn’t allow more performance? What is saturating? CPU ? RAM ?It doesn’t look like you are using Atlas here so there are many things that could have been overlooked in the configuration or OS setup that could improve the performances. And the hardware itself is another entire discussion…Another random idea that I have about your $sample query. I’m not sure how it’s implemented low level. Maybe it’s super efficient. But maybe it could be better.\nYou mentioned that you have no write operations during this $sample storm.\nSo maybe you could load in memory all the _id of all the documents in RAM in your clients and then just send find queries with _id $in [list of ids] and maybe this could perform even better with the use of the _id index.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "_idfind$sample$sample", "text": "How many clients to do have and which language are you using? In Java I think one client can generate maximum about 100 connections. My guess it that one client decides to send its 100 connections to one node for a bit and then re-evaluate at some point if this node is still the fastest node available. 
I guess that makes the clients a little sticky to one secondary potentially until the driver re-evaluate the best node to send the query to?The application is stateless web server, on the application side, the workload is not too much, only 20 ~ 40% of cpu and mem has been used, and TPS remains the same either we use 2 instances or 3 or 4. Number of client is equal to the number application instance, as the client is initialized as singleton.I am using Spring boot (with Spring Data MongoDB) to expose the API, version of the Java driver is mongodb-driver-sync:4.0.5. Max number of connecton pool has been configured to 1000.By the way, what limit are you reaching? What’s making you say that you cannot handle more queries? Doubling the number of clients doesn’t allow more performance? What is saturating? CPU ? RAM ?As I add one more application instance (the Spring web server), total TPS remains the same, #ops for each MongoDB node remains 400 and cannot saturate more queries as far as I can notice, slow queries begins to apear in the secondary node with more #connections. Doubling the number of client seems does not boost the performance, the bottleneck should be on MongoDB side.Another random idea that I have about your $sample query. I’m not sure how it’s implemented low level. Maybe it’s super efficient. But maybe it could be better.\nYou mentioned that you have no write operations during this $sample storm.\nSo maybe you could load in memory all the _id of all the documents in RAM in your clients and then just send find queries with _id $in [list of ids] and maybe this could perform even better with the use of the _id index.This is a good suggestion. I was thinking about migrating $sample to the application side, however, as the collection needs to be updated frequently, sampling on application is not as simple as on MongoDB, we have to randomly generate the ids whereas those may have already been updated, thus I have chosen to put the $sample operator on the MongoDB side. I will look if there is better workaround for this.", "username": "Shu_SHANG" }, { "code": "", "text": "the bottleneck should be on MongoDB sideBut what’s reaching 100% then? Is it disk IOPS? RAM? CPU?\nThe problem with your use case, is that you are constantly loading all the docs in RAM randomly so your entire collection is the working set… So I guess the RAM is the bottleneck, no? Therefore, you are probably maxing out your IOPS as many docs can’t possibly be in RAM, unless the entire data set fits in RAM?By the way, what’s your performance target? 400 isn’t enough apparently then I guess?MongoDB Atlas could be a great platform for your performance testing. It’s easy to set up an infra, even a sharded one for a few minutes and perform your test, then discard it.$sample on the app side is a bit desperate indeed. The list of valid _ids could be maintained in RAM with a change stream… But that sounds overly complicated indeed and $sample is much MUCH more simple to implement.What’s does your hardware look like for this RS? How much RAM, IOPS, CPU? What about the collection? How big is it? If you can’t scale vertically anymore. Then sharding is the next step for better perf.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "$sample$sample", "text": "The secondary node with more #connection reaches 100%cpu ( ~350 vs ~220) , whereas the other two reaches about 50%. 
Here is the monitoring results:zcpu-load2731×1336 390 KBAs you can see, secondary node 03 reaches nearly 100% cpu, memory usgae and disk IOPS is farily low as far as I can understand.The bottleneck is the CPU, it gives me an impression that all connections went to 03 and saturated its CPU. In the meantime, #connection of 01 and 02 is farily low (with a slightly climbing up), all three nodes have the same #ops, 03 has a little bit more queued queries, but not too much.It seems that $sample is very CPU-expensive, causing the application and mongodb not able to scale vertically anymore.400 ops is OK for me currently, but what I want is a proof of scaling. However, in current setting, with $sample operator, scaling out on the MongoDB side with Replica Set seems to be pretty expensive.Total collection has 930k docuemnts, datasize is about 162MB.", "username": "Shu_SHANG" }, { "code": "", "text": "MongoDB Atlas could be a great platform for your performance testing. It’s easy to set up an infra, even a sharded one for a few minutes and perform your test, then discard it.Thanks for the advice, I will reach our system adminstrators to see if MongoDB Atlas would be a better choice for our global service deployments.", "username": "Shu_SHANG" }, { "code": "", "text": "162MB is too small to justify a sharded environment though if this is the only workload running on this cluster.\nVertical scaling should be enough to scale this.\nThe entire dataset fits in RAM so RAM and IOPS shouldn’t be an issue. So I guess only the CPU or the network can be the bottlenecks here.Can you share a sample document and query just so I can give it a quick dry run on Atlas?Maybe 1000 connections per client isn’t a good idea as a client might decide to send all of them to the same single node.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "{ \"name\" : \"John\", \"tag\" : [ 1,2 ], \"gender\" : 1, \"length\" : 4 }\n{ \"name\" : \"Lucilla\", \"tag\" : [ 4 ], \"gender\" : 0, \"length\" : 7}\nnamelengthgendertagabcdb.abc.aggregate([{ \"$sample\" : { \"size\" : 100}}, { \"$project\" : { \"name\" : 1, \"_id\" : 0}}])\n", "text": "Sample document:name is a String of length length, gender can only either be 1 or 0, tag is a multiple-valued int field, value can can be 0 ~ 9.Query:Supposing collection name is abcMaybe 1000 connections per client isn’t a good idea as a client might decide to send all of them to the same single node.I’ve tried leaving the max pooling connections to the default value 100, however the same problem remains.", "username": "Shu_SHANG" } ]
Connection count not equally created over different Mongodb replicaset secondary nodes
2021-06-07T08:40:50.320Z
Connection count not equally created over different Mongodb replicaset secondary nodes
5,208
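To make the client-side alternative discussed in this thread concrete, here is a minimal PyMongo sketch of the "keep the _ids in RAM and query with $in" idea. It is illustrative only: the connection URI, database name, and the refresh strategy are assumptions, not details from the thread.

```python
import random
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["test"]["abc"]  # collection name taken from the sample shared in the thread

# Load every _id once; refresh periodically (or via a change stream) if the
# collection is updated while the sampling workload runs.
all_ids = [doc["_id"] for doc in coll.find({}, {"_id": 1})]

def random_names(n=100):
    # Pick n ids client-side, then fetch them with an index-backed $in query,
    # replacing the server-side $sample stage.
    picked = random.sample(all_ids, n)
    return list(coll.find({"_id": {"$in": picked}}, {"name": 1, "_id": 0}))
```

Whether this beats $sample depends on how often the id list goes stale, which is exactly the trade-off raised above.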
null
[]
[ { "code": "", "text": "Hi everyone,\nMy startup weDstll uses a MERN stack. We use mongoDB atlas with AWS and we have our backend and frontend code on AWS, we also have AWS activate credits. I was wondering if we could use those credits with mongoDB Atlas even though we created our cluster through MongoDB Atlas directly. Thank you!", "username": "Sonya_Denton" }, { "code": "", "text": "Hi @Sonya_Denton ,\nThanks a lot for reaching out! I just emailed your colleague Laura all the details.\nPlease let me know if you have any other questions and we’re excited to work with you as part of the MongoDB for Startups program!", "username": "Manuel_Meyer" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
AWS activate with MongoDB Atlas
2021-06-10T23:20:57.904Z
AWS activate with MongoDB Atlas
5,099
https://www.mongodb.com/…4_2_1024x512.png
[ "monitoring" ]
[ { "code": "", "text": "Hello,using mongodb community edition + datadog integration, I am trying to simulate the “Scanned / Returned” metrics offered by Atlas.\nI was trying to use\nmongodb.metrics.queryexecutor.scannedps / mongodb.metrics.document.returnedpsBut when re-reading the documentation, it seems to me that the scannedps metrics actually corresponds to amount of items read from the index, not from the collection. So this would be equivalent to IXSCAN, but as far as I understand, to achieve the metric provided by Atlas, I would need the COLLSCAN value.Is there an alternative metric. Have I misunderstood the meaning of the metrics provided by Atlas?Datadog, the leading service for cloud-scale monitoring.", "username": "Nuno_Pinheiro" }, { "code": "", "text": "hey Nuno,Did you get the metrics formula required to simulate Query targeting metrics of Cloud Atlas on datadog?.", "username": "Priyesh_Patel" }, { "code": "scannedpsquerypsexecutionStatsexplain()", "text": "Welcome to the MongoDB Community @Priyesh_Patel!I’m not familiar with how metrics are surfaced in Datadog, but looking at the description for scannedps and queryps in the link shared by @Nuno_Pinheiro, these appear to be aggregate activity metrics calculated by Datadog (both measures are per-second, not per-query).The query targeting alert for Atlas is based on the keys and documents examined for a specific query, which is available via server logs and the executionStats in explain(). I’m not sure if Datadog collects detailed per-query metrics, but I would reach out to their support team for assistance.Note: if you want metrics from both Atlas and Datadog and have an M10+ Atlas cluster, you have the option of configuring Datadog Integration.Regards,\nStennie", "username": "Stennie_X" } ]
Scannedps metric clarification/alternative
2020-07-08T05:41:38.824Z
Scannedps metric clarification/alternative
1,855
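For readers who want a per-query equivalent of the Atlas query-targeting ratio rather than a Datadog aggregate, here is a rough PyMongo sketch using the explain command's executionStats. The URI, collection name, and filter are placeholders, not values from this thread.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
db = client["test"]

# Run explain with executionStats verbosity for one representative query.
stats = db.command(
    "explain",
    {"find": "mycoll", "filter": {"status": "A"}},  # placeholder query
    verbosity="executionStats",
)["executionStats"]

returned = max(stats["nReturned"], 1)  # avoid division by zero
print("keys examined / returned:", stats["totalKeysExamined"] / returned)
print("docs examined / returned:", stats["totalDocsExamined"] / returned)
```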
null
[ "documentation" ]
[ { "code": "", "text": "I was wondering how to submit correction in the documentation. The wikipedia resource linked at this section for AEAD should be replaced with a current version: Authenticated encryption - Wikipedia", "username": "Student_al_St" }, { "code": "", "text": "Hi! I created https://jira.mongodb.org/browse/DOCS-14536 for this issue.", "username": "Sheeri_Cabral" }, { "code": "", "text": "Hi @Student_al_St,Thank for you for the feedback! It seems unusual that the documentation is linking to a specific version of this Wikipedia article rather than the canonical link for the latest version, but I found a few similar references which should be corrected.The MongoDB manual includes an About MongoDB Documentation reference with more information on reporting issues and making change requests, but the TL;DR is:You can create issues directly in the DOCS project in the MongoDB issue tracker (jira.mongodb.org) by signing on with the same account you use in the MongoDB Community Forums (or other MongoDB cloud services like Atlas).If you are interested you can also Contribute to the Documentation by making a GitHub pull request referencing a Jira issue.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Great!!\nI wasn’t aware that one could submit a PR given that I looked though some links where it would explicitly say “Edit this Page in GitHub” thus one would just click that and submit a PR. May be something to add in each page of the documentation? This would just be a footer note update.Thank you for the info, though.Alain.", "username": "Student_al_St" }, { "code": "", "text": "Hi Alain,We used to have “edit on GitHub” links on most documentation pages but it isn’t always straightforward to figure out how to make a change. The mapping of pages to source files often isn’t 1:1 because pages can be built with shared blocks of content, and some changes require building a preview version of the documentation for review.The strong preference is to have tracking issues in Jira to help triage and organise issues. A single change to the latest version of the server documentation (eg 4.4) may be applicable to multiple server release versions and end up ported with PRs to each non end-of-life docs release (currently 5.0, 4.4, 4.2, and 4.0). The DOCS issue is a common reference for these PRs.I ended up raising a PR for DOCS-14536 which has been merged into the MongoDB 5.0 manual. Since this is a minor change, I don’t think it needs to be backported to older versions of the documentation.In the process I found a mix of versioned & latest Wikipedia links in the server manual, and updated these to the latest links in my PR for consistency. One of our docs leads mentioned the rationale for using versioned Wikipedia links was to try to ensure destination links remained accurate & relevant. Revisions to Wikipedia articles may alter the page links or content that we expected to be linking to, but versioned links would not be affected. For example, the Wikipedia link to AEAD is a named anchor that relies on the title of the section remaining “Authenticated encryption with associated data (AEAD)”. If the section title changes in a newer revision of the page, the link will still lead to the expected article but without focusing on the referenced section. 
However, since we previously had mixed usage (more often to the “latest” version of articles) my PR was approved and we are now consistently linking to the latest article versions.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update external resource in the documentation
2021-06-02T08:50:12.808Z
Update external resource in the documentation
4,287
null
[ "upgrading" ]
[ { "code": "", "text": "Dear Team,We need an advise on the below topics.1)Is there any security/bug fix patches is available from mongodb side?\n2) If yes, How can we apply in centos servers. Kindly suggest.Thanks\nBala", "username": "Balakrishnan_Karuppu" }, { "code": "", "text": "Hi @Balakrishnan_Karuppu,The current version of MongoDB is 4.4.6. If this is what you are running with, you are good to go from MongoDB’s point of view.Everything else is OS specific and depends on your packages installed, etc.I would recommend to have a look to MongoDB production notes though and also the security checklist:Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "X.Y.ZX.Yyum update/etc/yum.confyum versionlock", "text": "Hi @Balakrishnan_Karuppu,I would definitely follow @MaBeuLux88’s suggestions to tune and secure your production deployment. I’ve added some further detail below in regards to updates.1)Is there any security/bug fix patches is available from mongodb side?Bug fixes & security updates are included in minor/patch releases (X.Y.Z) which are associated with a specific major release version (X.Y). Minor releases do not introduce any backward-breaking compatibility or behaviour changes, so upgrading or downgrading between patch releases for the same major version of MongoDB only differs by the server binaries that are deployed.The MongoDB Release Notes include a list of changes in each release. You can also find critical alerts and advisories via the MongoDB Alerts page and subscribe to Enterprise Release Announcements for news of production releases (Enterprise & Community server versions are released concurrently).Assuming you have installed via RPM packages and the normal Installation on Redhat/CentOS, you would orchestrate doing yum update on the members of your cluster and restart the MongoDB processes after upgrading the binaries.To avoid accidentally pulling a major version upgrade, I would include the major version numbers when installing and pin the packages by excluding in /etc/yum.conf or using yum versionlock.Borrowing an example from the documentation to install the latest version of MongoDB 4.4 server and tools:sudo yum install -y mongodb-org-4.4 mongodb-org-server-4.4 mongodb-org-shell-4.4 mongodb-org-mongos-4.4 mongodb-org-tools-4.4Regards,\nStennie", "username": "Stennie_X" } ]
Mongodb security/bug fix patches
2021-06-01T06:46:13.948Z
Mongodb security/bug fix patches
3,898
null
[ "aggregation", "queries" ]
[ { "code": "mydb1.mongodbbucketnocpu2index.aggregate([\n\n {\n \"$match\": {\n \"samples.timestamp1\": {\"$gte\": datetime.strptime(\"2010-01-01 00:00:00\", \"%Y-%m-%d %H:%M:%S\"),\n \"$lte\" :datetime.strptime(\"2020-12-31 00:55:00\", \"%Y-%m-%d %H:%M:%S\")},\n \"samples.id13\": {\n \"$gt\": 5\n }\n\n }\n },\n { \"$unwind\": \"$samples\" },\n{\n \"$match\": {\n \"samples.id13\": {\n \"$gt\": 5\n }\n }\n },\n\n{\n \"$group\": {\n \"_id\": {\"$dateToString\": { \"format\": \"%Y-%m-%d \", \"date\": \"$samples.timestamp1\" }},\n\n\n \"avg_id13\": {\n \"$avg\": \"$samples.id13\"\n }\n }\n },\n{\"$sort\": {\"_id\": -1}}\n {\n \"$project\": {\n \"_id\": 0,\n \"day\":\"$_id\",\n \"avg_id13\": 1\n }\n }\n \n])\ndb.mongodbbucketnocpu2index.explain(true).aggregate(agg_pipeline);\n{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"mongodbtime.mongodbbucketnocpu2index\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\"$lte\" : ISODate(\"2020-12-31T00:55:00Z\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"samples.id13\" : {\n\t\t\t\t\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"2010-01-01T00:00:00Z\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"queryHash\" : \"E9B4DE5C\",\n\t\t\t\t\t\"planCacheKey\" : \"C7A0292D\",\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\"samples\" : 1,\n\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"2010-01-01T00:00:00Z\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : -1,\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"samples.timestamp1_-1_samples.id13_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\"[new Date(1609376100000), new Date(-9223372036854775808)]\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : 
\"PROJECTION_SIMPLE\",\n\t\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\t\"samples\" : 1,\n\t\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$lte\" : ISODate(\"2020-12-31T00:55:00Z\")\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : -1,\n\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : 1\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"indexName\" : \"samples.timestamp1_-1_samples.id13_1\",\n\t\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"[new Date(9223372036854775807), new Date(1262304000000)]\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"executionStats\" : {\n\t\t\t\t\t\"executionSuccess\" : true,\n\t\t\t\t\t\"nReturned\" : 63960,\n\t\t\t\t\t\"executionTimeMillis\" : 5261,\n\t\t\t\t\t\"totalKeysExamined\" : 1156908,\n\t\t\t\t\t\"totalDocsExamined\" : 96409,\n\t\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\t\t\t\"nReturned\" : 63960,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 1985,\n\t\t\t\t\t\t\"works\" : 1156909,\n\t\t\t\t\t\t\"advanced\" : 63960,\n\t\t\t\t\t\t\"needTime\" : 1092948,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 1271,\n\t\t\t\t\t\t\"restoreState\" : 1271,\n\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\"samples\" : 1,\n\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"2010-01-01T00:00:00Z\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"nReturned\" : 63960,\n\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 1979,\n\t\t\t\t\t\t\t\"works\" : 1156909,\n\t\t\t\t\t\t\t\"advanced\" : 63960,\n\t\t\t\t\t\t\t\"needTime\" : 1092948,\n\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\"saveState\" : 1271,\n\t\t\t\t\t\t\t\"restoreState\" : 1271,\n\t\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\t\"docsExamined\" : 
96409,\n\t\t\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"nReturned\" : 96409,\n\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 109,\n\t\t\t\t\t\t\t\t\"works\" : 1156909,\n\t\t\t\t\t\t\t\t\"advanced\" : 96409,\n\t\t\t\t\t\t\t\t\"needTime\" : 1060499,\n\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\"saveState\" : 1271,\n\t\t\t\t\t\t\t\t\"restoreState\" : 1271,\n\t\t\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : -1,\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"samples.timestamp1_-1_samples.id13_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\"[new Date(1609376100000), new Date(-9223372036854775808)]\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"keysExamined\" : 1156908,\n\t\t\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\t\t\"dupsTested\" : 1156908,\n\t\t\t\t\t\t\t\t\"dupsDropped\" : 1060499\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"allPlansExecution\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"nReturned\" : 101,\n\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 20,\n\t\t\t\t\t\t\t\"totalKeysExamined\" : 2725,\n\t\t\t\t\t\t\t\"totalDocsExamined\" : 228,\n\t\t\t\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\t\t\t\t\t\"nReturned\" : 101,\n\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 20,\n\t\t\t\t\t\t\t\t\"works\" : 2725,\n\t\t\t\t\t\t\t\t\"advanced\" : 101,\n\t\t\t\t\t\t\t\t\"needTime\" : 2624,\n\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\"saveState\" : 7,\n\t\t\t\t\t\t\t\t\"restoreState\" : 6,\n\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\t\t\"samples\" : 1,\n\t\t\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"2010-01-01T00:00:00Z\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"nReturned\" : 101,\n\t\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 20,\n\t\t\t\t\t\t\t\t\t\"works\" : 2725,\n\t\t\t\t\t\t\t\t\t\"advanced\" : 101,\n\t\t\t\t\t\t\t\t\t\"needTime\" : 2624,\n\t\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\t\"saveState\" : 7,\n\t\t\t\t\t\t\t\t\t\"restoreState\" : 6,\n\t\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\t\"docsExamined\" : 228,\n\t\t\t\t\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\t\t\t\t\"inputStage\" : 
{\n\t\t\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\t\t\"nReturned\" : 228,\n\t\t\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\t\t\t\"works\" : 2725,\n\t\t\t\t\t\t\t\t\t\t\"advanced\" : 228,\n\t\t\t\t\t\t\t\t\t\t\"needTime\" : 2497,\n\t\t\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\t\t\"saveState\" : 7,\n\t\t\t\t\t\t\t\t\t\t\"restoreState\" : 6,\n\t\t\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : -1,\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : 1\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"indexName\" : \"samples.timestamp1_-1_samples.id13_1\",\n\t\t\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"[new Date(1609376100000), new Date(-9223372036854775808)]\"\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"keysExamined\" : 2725,\n\t\t\t\t\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\t\t\t\t\"dupsTested\" : 2725,\n\t\t\t\t\t\t\t\t\t\t\"dupsDropped\" : 2497\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"nReturned\" : 90,\n\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\"totalKeysExamined\" : 2725,\n\t\t\t\t\t\t\t\"totalDocsExamined\" : 228,\n\t\t\t\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\t\t\t\t\t\"nReturned\" : 90,\n\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\t\"works\" : 2725,\n\t\t\t\t\t\t\t\t\"advanced\" : 90,\n\t\t\t\t\t\t\t\t\"needTime\" : 2635,\n\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\"saveState\" : 1271,\n\t\t\t\t\t\t\t\t\"restoreState\" : 1271,\n\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\t\t\"samples\" : 1,\n\t\t\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$lte\" : ISODate(\"2020-12-31T00:55:00Z\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"nReturned\" : 90,\n\t\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\t\t\"works\" : 2725,\n\t\t\t\t\t\t\t\t\t\"advanced\" : 90,\n\t\t\t\t\t\t\t\t\t\"needTime\" : 2635,\n\t\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\t\"saveState\" : 1271,\n\t\t\t\t\t\t\t\t\t\"restoreState\" : 1271,\n\t\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\t\"docsExamined\" : 
228,\n\t\t\t\t\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\t\t\"nReturned\" : 228,\n\t\t\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\t\t\t\"works\" : 2725,\n\t\t\t\t\t\t\t\t\t\t\"advanced\" : 228,\n\t\t\t\t\t\t\t\t\t\t\"needTime\" : 2497,\n\t\t\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\t\t\"saveState\" : 1271,\n\t\t\t\t\t\t\t\t\t\t\"restoreState\" : 1271,\n\t\t\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : -1,\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : 1\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"indexName\" : \"samples.timestamp1_-1_samples.id13_1\",\n\t\t\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"[new Date(9223372036854775807), new Date(1262304000000)]\"\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"keysExamined\" : 2725,\n\t\t\t\t\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\t\t\t\t\"dupsTested\" : 2725,\n\t\t\t\t\t\t\t\t\t\t\"dupsDropped\" : 2497\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : NumberLong(63960),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(3826)\n\t\t},\n\t\t{\n\t\t\t\"$unwind\" : {\n\t\t\t\t\"path\" : \"$samples\"\n\t\t\t},\n\t\t\t\"nReturned\" : NumberLong(767520),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(3887)\n\t\t},\n\t\t{\n\t\t\t\"$match\" : {\n\t\t\t\t\"samples.id13\" : {\n\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : NumberLong(749342),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(4563)\n\t\t},\n\t\t{\n\t\t\t\"$group\" : {\n\t\t\t\t\"_id\" : {\n\t\t\t\t\t\"$dateToString\" : {\n\t\t\t\t\t\t\"date\" : \"$samples.timestamp1\",\n\t\t\t\t\t\t\"format\" : {\n\t\t\t\t\t\t\t\"$const\" : \"%Y-%m-%d \"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"avg_id13\" : {\n\t\t\t\t\t\"$avg\" : \"$samples.id13\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : NumberLong(3178),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(5193)\n\t\t},\n\t\t{\n\t\t\t\"$sort\" : {\n\t\t\t\t\"sortKey\" : {\n\t\t\t\t\t\"_id\" : -1\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : NumberLong(3178),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(5193)\n\t\t},\n\t\t{\n\t\t\t\"$project\" : {\n\t\t\t\t\"avg_id13\" : true,\n\t\t\t\t\"day\" : \"$_id\",\n\t\t\t\t\"_id\" : false\n\t\t\t},\n\t\t\t\"nReturned\" : NumberLong(3178),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(5193)\n\t\t}\n\t],\n\t\"serverInfo\" : {\n\t\t\"host\" : \"xaris-MS-7817\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.4.6\",\n\t\t\"gitVersion\" : \"72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7\"\n\t},\n\t\"ok\" : 1\n}\n\"executionSuccess\" : 
true,\n\t\t\t\t\t\"nReturned\" : 63960,\n\t\t\t\t\t\"executionTimeMillis\" : 5261,\n\t\t\t\t\t\"totalKeysExamined\" : 1156908,\n\t\t\t\t\t\"totalDocsExamined\" : 96409,\n\t\t\t\t\t\"executionStages\" : {\n.\n.\n.\n\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 1979,\n\t\t\t\t\t\t\t\"works\" : 1156909,\n\t\t\t\t\t\t\t\"advanced\" : 63960,\n\t\t\t\t\t\t\t\"needTime\" : 1092948,\n\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\"saveState\" : 1271,\n\t\t\t\t\t\t\t\"restoreState\" : 1271,\n\t\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\t\"docsExamined\" : 96409,\n\t\t\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\n", "text": "Hello guys.i cant understand something in explain plan.\nMy data consist of 1.157.000 rows.I have nested documents.Every documents consist of 12 subdocuments inside so we have around 96000 documents.I use combound index on samples.timestamp1,samples.id13\nMy query look like this:and the explain plan is this:I see here that the query use index scan although it scan the whole collectionCan someone explain to me why does the planner does an index scan although it scan the whole collection?wouldnt be better if it did collection scan?", "username": "harris" }, { "code": "\"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 96409,\n \"executionTimeMillisEstimate\" : 109,\n \"works\" : 1156909,\n \"advanced\" : 96409,\n \"needTime\" : 1060499,\nsamples.id13", "text": "Index scan is always better than a collection scan.It’s only 109 ms of your 5s pipeline so… Negligible.In your case here, you see a large number of “works” in your index scan because the selectivity of your index is poor. Your first field in the compound index is the date and you are selecting from 2010 to 2020 and I guess this is more or less everything in your collection.\nIf the samples.id13 is filtering documents more efficiently (==eliminating more), then it should go first in the order of the compound index.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Explain plan on query, Index scan instead of collscan
2021-06-09T20:47:00.588Z
Explain plan on query, Index scan instead of collscan
2,479
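To illustrate the advice above about index order, here is a hedged PyMongo sketch that creates the compound index with samples.id13 first and re-checks executionStats. The URI is a placeholder; the namespace and filter values come from the explain output in the thread.

```python
from datetime import datetime
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["mongodbtime"]["mongodbbucketnocpu2index"]

# More selective field first, as suggested above.
coll.create_index([("samples.id13", ASCENDING), ("samples.timestamp1", DESCENDING)])

# Re-check how many index keys are read per returned document.
stats = coll.database.command(
    "explain",
    {"find": coll.name,
     "filter": {"samples.id13": {"$gt": 5},
                "samples.timestamp1": {"$gte": datetime(2010, 1, 1),
                                       "$lte": datetime(2020, 12, 31)}}},
    verbosity="executionStats",
)["executionStats"]
print(stats["totalKeysExamined"], "keys examined for", stats["nReturned"], "documents")
```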
null
[ "aggregation", "performance" ]
[ { "code": " [{\n $match: {\n $and: [\n {\n epoch: {\n $gt: start,\n $lt: end\n }\n }, {\n site: \"my site\"\n }\n ]\n }\n },{\"$group\" : {_id:\"$file\", count:{$sum:1}}}]\n{\"$group\" : {_id:\"$file\", count:{$sum:1}}}\n", "text": "This is the pipline I’m trying to run:The above is slower than:The $match operator slows it down. To note: epoch, file, and site are all indexed descending. Maybe I am misunderstanding something, but intuitively, an indexed match before a group by operation should be faster than a singular group by.Is this just a performance issue? Speed doesn’t matter a whole lot in my particular application. I just want to learn what MongoDB is doing underneath the hood.", "username": "Dogunbound_hounds" }, { "code": "db.coll.aggregate([...], {explain: true})\n{site: 1, epoch: 1}\n$and[{\n $match: {\n epoch: {\n $gt: start,\n $lt: end\n },\n site: \"my site\"\n }\n}, {\n \"$group\": {\n _id: \"$file\",\n count: {\n $sum: 1\n }\n }\n}]\n", "text": "Hi @Dogunbound_hounds,In this pipeline, only the match operation can use an index (singular). You can confirm which index is used by running an explain:See doc: https://docs.mongodb.com/manual/core/aggregation-pipeline/#pipeline-operators-and-indexesThe best compound index you can use for this query is:The order is important here because I’m respecting the ESR rule (Equality, Sort, Range).Also, on a side note, that won’t make a difference in the performances, but the $and isn’t necessary here as it’s the default system in place already.So your query is equivalent to:Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Why does $match on indexed variables in aggregation pipeline slow down the query
2021-06-10T15:40:05.048Z
Why does $match on indexed variables in aggregation pipeline slow down the query
2,024
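A small PyMongo sketch of the suggestion above, assuming a placeholder collection and epoch values: it builds the {site: 1, epoch: 1} index (equality field before range field, per the ESR rule) and runs the simplified pipeline, with an explain call to confirm the index scan.

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["test"]["events"]  # placeholder collection

# Equality field (site) before the range field (epoch), per the ESR rule.
coll.create_index([("site", ASCENDING), ("epoch", ASCENDING)])

start, end = 1609459200, 1640995200  # placeholder epoch bounds
pipeline = [
    {"$match": {"site": "my site", "epoch": {"$gt": start, "$lt": end}}},
    {"$group": {"_id": "$file", "count": {"$sum": 1}}},
]
counts = list(coll.aggregate(pipeline))

# Ask the server which plan the $match stage used.
plan = coll.database.command("aggregate", coll.name, pipeline=pipeline, explain=True)
```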
null
[ "python" ]
[ { "code": "", "text": "We are using MongoDB in our project (with a Python backend) and every once in awhile there’s a need to run migrations on the production environment. As far as I can see there are a few migration tools for NodeJS and other frameworks/languages, but there lacks a good solution for Python. The existing libraries are not well supported and lack a few key functionalities.I’m wondering if anyone here has experience with migrating on Python. What libraries did you use? Any best practices?I’m also thinking about creating an open-source tool for it. Is it worth the effort?", "username": "Henry_Harutyunyan" }, { "code": "", "text": "Hi Henry! Welcome to the MongoDB Community Forums, and thank you for your question. To help us better assist you, can you please share what Driver/ODM/ORM you are using to access MongoDB from Python?By ‘migration’ I assume here that you mean a tool like alembic (Welcome to Alembic’s documentation! — Alembic 1.9.4 documentation)? As far as we know, no such tool exists for MongoDB’s Python ODMs though people have tried to roll their own as you are planning (e.g. Bitbucket).In general, migrations aren’t something that have much importance in the MongoDB ecosystem because of the inherently schema-free nature of the database. The use of migrations implies that your app is written in a way that assumes a rigid schema which means it cannot take advantage of MongoDB’s flexible data model going forward. Instead of a migration, you might want to consider using the schema versioning in your data-model - Building with Patterns: The Schema Versioning Pattern | MongoDB Blog. When using this kind of a pattern, your application can lazily migrate documents i.e. when a document with an outdated schema version is retrieved, it can be passed through ‘migration code’ that migrates it to the newest schema version before carrying on with applying the application logic.", "username": "Prashant_Mital" }, { "code": "", "text": "Following up on this answer, we have been looking for a client ODM tool (preferably in Python) that can actually implement the schema versioning pattern. So far, we have not found one. Neither Mongoengine nor PyMODM seem to support it naturally and the one we use (Mongoengine) makes this pattern especially difficult.Does anyone have any recommendations here?", "username": "Adam_Sussman" }, { "code": "", "text": "We have same problem and found pymongo-migrate thats looks good. Going to use it in prod.", "username": "Dmitry_Shevelev" } ]
Migration tool for Python
2020-07-06T20:27:38.509Z
Migration tool for Python
11,834
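Since no ready-made Python tool came up in the thread, here is a minimal sketch of the lazy, read-time migration pattern described above, written with plain PyMongo. The field names and version steps are invented for the example.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
users = client["app"]["users"]  # placeholder collection

CURRENT_VERSION = 2

def upgrade(doc):
    """Bring a document up to CURRENT_VERSION one step at a time."""
    version = doc.get("schema_version", 1)
    if version < 2:
        # Hypothetical change: v1 stored a single "phone" string, v2 a list.
        doc["phones"] = [doc.pop("phone")] if "phone" in doc else []
        version = 2
    doc["schema_version"] = version
    return doc

def get_user(user_id):
    doc = users.find_one({"_id": user_id})
    if doc and doc.get("schema_version", 1) < CURRENT_VERSION:
        doc = upgrade(doc)
        users.replace_one({"_id": user_id}, doc)  # persist the migrated shape
    return doc
```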
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Hi guys, I want to put Realm into Development mode so any function changes will not take effect on Production Realm until switch it back to Production. Is there any way to make this or am I missing something?", "username": "Tuan_Nguyen1" }, { "code": "", "text": "Could you explain a bit more about what you’re trying to do?Is this a single Realm app or are there multiples?By “Development Mode” to you mean the Realm Sync mode or the Environment setting, or something else?", "username": "Andrew_Morgan" }, { "code": "", "text": "Hi, thank you for reaching back so quickly. I want to program and test the functions before release to make sure it stable. But now, when I make changes to functions, I need to ask my customer stop using the App. It’s not convenient for them. So I want something like ‘Stagging mode’, you can perform any changes without effecting the application. When making sure everything is OK, then release it.", "username": "Tuan_Nguyen1" }, { "code": "", "text": "Have you looked at the Drafts feature? https://docs.mongodb.com/realm/deploy/deploy-ui/You could make all of your changes to the app and then only deploy once they’re complete (though you’d probably want to test the changes in a development Realm app[ instance first).", "username": "Andrew_Morgan" }, { "code": "", "text": "I need to interact with API outside so it only best if we test inside Realm, that’s what I do for now. Is there any additional suggestions?", "username": "Tuan_Nguyen1" } ]
How to put Realm into Development mode without effecting Production
2021-06-10T10:01:12.805Z
How to put Realm into Development mode without effecting Production
2,083
null
[ "swift", "atlas-device-sync" ]
[ { "code": "owner=<id_of_the_owner>", "text": "Hi,I’m developing an iOS app using Swift and Realm Sync.\nEach document stored in Atlas have the following partition key (string):owner=<id_of_the_owner>,category=<id_of_the_category>So far I only needed to access the documents of one category at a time. To do so, I open a Realm like this:try! Realm(configuration: user!.configuration(partitionValue: “owner=<id_of_the_owner>,category=<id_of_the_category>”))But now I would like to access all the documents of an owner, thus matching owner=<id_of_the_owner>.I know I can do it with an Atlas Search Index but is there a way to do it directly in my app? For example, can I use regular expression in the partition key to do something like this:owner=<id_of_the_owner>,category=(.*)Thanks for your help!", "username": "Julien_Chouvet" }, { "code": "class TaskClass: Object {\n @objc dynamic var _id = ObjectId.generate()\n @objc dynamic var _partitionKey = \"\"\n}\n\"owner=julien,category=cat_a\"%22owner%3Djulien%2Ccategory%3Dcat_a%22let config = user.configuration(partitionValue: \"owner=julien,category=cat_a\")\nRealm.asyncOpen(configuration: config) { result in\nclass TaskClass: Object {\n @objc dynamic var _id = \n @objc dynamic var _partitionKey = \"\" //the users uid\n @objc dynamic var category = \"\"\n}\n", "text": "The partition key is the ‘name’ of a Realm - both on the server and locally as well. If you look at the actual Realm filenames, they are the specific name of the partitionEach unique partition value, or value of a partition key, maps to a different realm. Documents that share the same partition value belong to the same realm. All documents in a realm share the same read and write permissions.As far as the client SDK goes, no that can’t directly be done.So if your object looks like thisand the _partitionKey is\"owner=julien,category=cat_a\"there will be a local filename matching that string%22owner%3Djulien%2Ccategory%3Dcat_a%22When creating the connection to Realm, the _partitionKey would have to match thatOtherwise Realm wouldn’t know which file to open.I would suggest changing the object structure to have the category as a propertyIf the categories are hard coded or stored in a list, you could concatenate the category with the owners id to have them all open at one time - that would give you access to all of them but they would each be in separate Results objects.One other option is to denormalize your data - essentially keeping duplicate data in another Realm using the users uid as the _partitionKey. Upside is you can query across all categories (if that is the use case) but downside is more code to maintain and duplicate data.", "username": "Jay" }, { "code": "", "text": "Hi @Jay,Thanks for your answer!However I don’t understand when you said:If the categories are hard coded or stored in a list, you could concatenate the category with the owners id to have them all open at one time - that would give you access to all of them but they would each be in separate Results objects.Can you please give me an example?", "username": "Julien_Chouvet" }, { "code": "for catName in [\"cat_0\", \"cat_1\"] {\n let partition = \"\\(userId),category=\\(catName)\"\n //open a realm using the partition string\n}", "text": "My comment was a bit unclear - sorry. 
I was simply stating that if you needed to open a connection to 5 realms you could iterate over an array of collection names and open each one.", "username": "Jay" }, { "code": "", "text": "Yes I was first thinking of doing it that way but I though that maybe there was a more efficient way I didn’t know.Thanks for your help ", "username": "Julien_Chouvet" } ]
Realm Swift - Get all objects of a collecting matching part of the partition key
2021-06-09T06:55:54.375Z
Realm Swift - Get all objects of a collecting matching part of the partition key
2,898
null
[ "performance", "atlas-functions" ]
[ { "code": "", "text": "Hi, We’re thinking of shifting our Node.js based REST API to Realm.Let’s say a function performs one document read operation behind an endpoint of the http service.What would be the expected/typical response time of that endpoint ?", "username": "Timey_AI_Chatbot" }, { "code": "", "text": "Hi @Timey_AI_Chatbot, welcome to the community.It’s going to depend on a bunch of things such as:It’s probably simplest to create a small PoC and try it out.", "username": "Andrew_Morgan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Typical response time of http service
2021-06-09T15:07:09.514Z
Typical response time of http service
2,493
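As a stand-in for the PoC suggested above, here is a rough client-side timing sketch. The endpoint URL is a placeholder, and network distance between client and the Realm region will dominate the numbers.

```python
import statistics
import time
import requests

URL = "https://example-realm-app.example.com/api/hello"  # placeholder endpoint

timings_ms = []
for _ in range(50):
    t0 = time.perf_counter()
    requests.get(URL, timeout=10)
    timings_ms.append((time.perf_counter() - t0) * 1000)

timings_ms.sort()
print(f"median: {statistics.median(timings_ms):.1f} ms, "
      f"p95: {timings_ms[int(0.95 * len(timings_ms))]:.1f} ms")
```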
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Hi, I have a self hosted mongoDB on a server on Microsoft Azure. I want to link this server with the MongoDB Realm for sync functionality. I know about we can link MongoDB Atlas with MongoDB Realm. But i donot find way to connect self hosted cloud mongoDB to MongoDB Realm. Is there any way to connect? Thanks!", "username": "Tech_Work" }, { "code": "", "text": "Hi @Tech_Work, welcome to the community!MongoDB Realm (including Sync) is only able to connect to Atlas clusters.", "username": "Andrew_Morgan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to link self hosted mongoDB to Realm Cloud?
2021-06-10T04:34:04.746Z
How to link self hosted mongoDB to Realm Cloud?
2,585
null
[]
[ { "code": "", "text": "HI Experts,Seek your best advices and Ideas for migrating MongoDB databases data from GCP to Azure cloud. Size of data is nears to a petabyte. Please share your valuable suggestion based on your experiences and knowledgeThanking you", "username": "nkishorb" }, { "code": "", "text": "best advices and Ideas for migrating MongoDB databases data from GCP to Azure cloudDo not do it, without looking at Atlas, because it allows you to run transparently on both.Cost might even be lower. See Not able to restore indexes using mongorestore - #2 by MaBeuLux88", "username": "steevej" }, { "code": "", "text": "Thanks Steeve for the reply…btw , I have verified using MongoBD Atlas there is a possibility to connect to GCP to fetch the data and copy data to Azure. Is this you are trying convey ? ", "username": "nkishorb" }, { "code": "", "text": "Yup, that’s totally possible indeed.See Live Migrate Your Replica Set to Atlas.With the live migration, you can directly create your new cluster in Azure, then activate the live migration to pump the data from your current cluster (where ever it is) and start the sync of your new cluster. It might take a bit of time with a PB of data… But as long as it can copy faster than you are writing in the prod cluster, it will eventually be in sync and you will be able to switch.There is a specific doc to migrate sharded cluster: https://docs.atlas.mongodb.com/import/live-import-sharded/. As I guess you are not running a PB of data on a single Replica Set…If your cluster was already on Atlas, it would be trivial (like 3/4 clics…) to migrate from GCP to Azure. Or you could even run a Multi-Cloud cluster across the 2 cloud provider and benefit from both if you wanted to.We briefly explain multi-cloud, the benefits and challenges posed by multi-cloud, multi-cloud strategy, and management.Something to consider though: it might be easier to migrate from GCP (self service) to GCP on Atlas first (same region) so the sync with the live migration can benefit from the full transfer speed and reduce the time of sync. Then, once you are in Atlas, using the UI to migrate to Azure will be more simple and can be done step by step with the Multi-Cloud option to ease the transition.I could also be completely wrong… I’ll try to find someone smarter than me to see what they think… !Let us know how it goes!! I want the end of this story !Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Migrate MongoDB from Google Cloud Platform to Microsoft Azure
2021-06-08T15:03:47.065Z
Migrate MongoDB from Google Cloud Platform to Microsoft Azure
3,470
null
[ "dot-net" ]
[ { "code": "", "text": "I am using the latest version of the C# Mongo Driver with a v4.2 database. When I call BulkWriteAsync (IMongoCollection(TDocument).BulkWriteAsync Method (IClientSessionHandle, IEnumerable(WriteModel(TDocument)), BulkWriteOptions, CancellationToken)) with a CancellationToken, if the cancellation token gets cancelled while the BulkWriteAsync is running, no OperationCanceledException gets thrown. This is a problem because the transaction can take a long time making the database hold a lot of locks and interfere with other requests. I need to interrupt any database commands as soon as the cancellation token cancels to reduce the risk of deadlocks caused by transactions continuing to run for too long.", "username": "Alejandro_Carrazzoni" }, { "code": "", "text": "Hi @Alejandro_Carrazzoni, welcome!Could you provide a minimal reproducible code snippet to reproduce the behaviour that you’re seeing ?Regards,\nWan.", "username": "wan" }, { "code": "", "text": "I hit the same problem with Aggregate.\nI think the root cause is this issue:\nhttps://jira.mongodb.org/browse/CSHARP-1200\nThe issue was created Mar 20 2015 and is still unresolved", "username": "john_m" } ]
BulkWriteAsync does not interrupt when CancellationToken gets canceled
2020-04-22T19:13:45.141Z
BulkWriteAsync does not interrupt when CancellationToken gets canceled
2,686
null
[ "aggregation" ]
[ { "code": "[\n {\"$lookup\": {\n \"from\": \"base_model\",\n \"localField\": \"suboptions\",\n \"foreignField\": \"_id\",\n \"as\": \"suboptions\"\n }},\n {\"$project\": {\"name\": \"$name\",\n \"uuid\": \"$uuid\",\n \"is_required\": \"$is_required\",\n \"is_suboption\": \"$is_subption\",\n \"suboptions\": \"$suboptions\",\n \"option_uuid\": \"$option_uuid\",\n \"is_in_stock\": \"$is_in_stock\",\n \"created_at\": \"$created_at\",\n \"updated_at\": \"$updated_at\",\n \"_id\": 0,\n }},\n {\"$addFields\": {\n \"suboptions.id\": {\"$toString\": \"$suboptions._id\"}\n }},\n {\"$project\": {\n \"suboptions._id\": 0,\n }}\n\n ]\n", "text": "I have this pipeline:If I remove the $addFields stage it works fine, but I need to get the result in JSON serializable format, so I need to change the type of _id field in suboptions to string, but this is returning Unsupported conversion from array to string in $convert with no onError value.\nHow can I fix this?", "username": "Mehdi_Khlifi" }, { "code": "", "text": "The result of a lookup, field suboptions, is an array not a string.Most likely, you will need to use https://docs.mongodb.com/manual/reference/operator/aggregation/map/", "username": "steevej" } ]
Convert field from list of objectids to list of string
2021-06-09T09:44:20.282Z
Convert field from list of objectids to list of string
9,126
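Spelling out the $map suggestion from the last reply above: convert each element's _id inside the array rather than applying $toString to the array itself. A hedged PyMongo version follows; the client, database, and collection handles are placeholders, while the stage contents mirror the pipeline in the question.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["mydb"]["items"]  # placeholder collection running the pipeline

pipeline = [
    {"$lookup": {"from": "base_model", "localField": "suboptions",
                 "foreignField": "_id", "as": "suboptions"}},
    {"$addFields": {
        "suboptions": {
            "$map": {
                "input": "$suboptions",
                "as": "s",
                "in": {"$mergeObjects": [
                    "$$s",
                    {"id": {"$toString": "$$s._id"}},  # per-element conversion
                ]},
            }
        }
    }},
    {"$project": {"_id": 0, "suboptions._id": 0}},
]
results = list(coll.aggregate(pipeline))
```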
null
[ "mongodb-shell", "configuration", "devops" ]
[ { "code": "", "text": "I have MongoDB community installed on VMs and I was wondering if there is there a way to lock user accounts after x number of failed logins on MongoDB Community when using SCRAM authentication?I looked in the documentation and didn’t see anything on this topic.Any help would be appreciated.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Hi @tapiocaPENGUIN,There is no built in way to do it whithin the server.What our customers usually do is integrating LDAP to fulfill this task using enterprise versionBut I am not certain why do you need this? Do you think someone will brute force your password? Why would anyone have access to do so?Anyway you can crawl the logs with script and remove user permissions if necessary affectively locking him…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks Pavel for the answer.But I am not certain why do you need this? Do you think someone will brute force your password? Why would anyone have access to do so?Our security team proposed the question so I wanted to verify.", "username": "tapiocaPENGUIN" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Limit failed logins from MongoDB shell
2021-06-08T21:03:34.686Z
Limit failed logins from MongoDB shell
4,275
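A rough sketch of the log-crawling idea mentioned above, assuming the structured (JSON) log format of MongoDB 4.4+. The log path, the threshold, and the exact message text are assumptions that may differ by version, and the actual locking step (for example, revoking roles) is left out.

```python
import json
from collections import Counter

LOG_PATH = "/var/log/mongodb/mongod.log"  # placeholder path
THRESHOLD = 5  # placeholder policy

failures = Counter()
with open(LOG_PATH) as log:
    for line in log:
        try:
            entry = json.loads(line)
        except ValueError:
            continue  # skip non-JSON lines (older plain-text log format)
        if entry.get("msg") == "Authentication failed":  # message text assumed
            user = entry.get("attr", {}).get("user", "unknown")
            failures[user] += 1

for user, count in failures.items():
    if count >= THRESHOLD:
        print(f"{user}: {count} failed logins - candidate for locking")
```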
null
[ "java" ]
[ { "code": "findAll()findAll()findAll()", "text": "In the latest version of Realm doc, using findAll() on main thread is not recommended, disabled by default, and can possibly cause ANR for heavy read state in the doc (RealmQuery (Realm 10.10.1))However, I’m using an older version of Realm and the older doc (RealmQuery (Realm 6.0.0)) doesn’t say that it would drop frame or create ANRs.My questions are:Thanks a lot!", "username": "D_c_Le_Tr_n_Anh" }, { "code": "findAllAsync()RealmConfiguration.allowQuriesOnUiThread()", "text": "HiThe problem has always existed. For small datasets, the queries are generally fast enough that you do not notice, but it is hard to give exact guidance as to when it becomes “too slow” as that depends on a lot of factors like the size of the data and what device you are running on.It is for that reason we decided to disable it by default, it is safer, and for most cases, it should be a trivial refactor to change to findAllAsync().If for some reason you still want the old behaviour you can just enable RealmConfiguration.allowQuriesOnUiThread() to get the old behaviour RealmConfiguration.Builder (Realm 10.10.1)", "username": "ChristanMelchior" } ]
Is using findAll() safe on main thread in older version?
2021-06-09T10:22:28.980Z
Is using findAll() safe on main thread in older version?
1,932
null
[ "crud" ]
[ { "code": "{\n _id: someID\n array: [\n {\n title: someTitle\n data: someOtherData\n ...\n },\n {\n title: someTitle\n data: someOtherData\n ...\n }\n ]\n}\n_idexample.com/some_path/userID/0{\n _id: someID\n object: {\n {\n _id: someUniqueID,\n title: someTitle\n data: someOtherData\n ...\n },\n {\n _id: someUniqueID,\n title: someTitle\n data: someOtherData\n ...\n }\n }\n}\n", "text": "Hi, so I have an entry in my collection as such:Is there any way for me to create a unique and non-changing _id field for each element in this array? I understand that the array indexing is already unique, and I don’t have a constraint of duplicates as the data will never result in duplicates (consists of unique hashes).The reason why I seek the indexing to be unique and non-changing is because I’m using the index to generate a link to some data. Right now it directs to example.com/some_path/userID/0 and the last digit is the index in the array. However, users can modify this data and delete data at an arbitrary index.That means that if one day you navigate to that link, and the user deletes their item at index 0, the next day when you navigate to that link you’ll be seeing something different (the element at the previous index 1 instead). Or, if you navigate to the last element in the array and the user deletes something before that, it’ll result in a 404.If I can have a unique ID, then links are preserved over time. I thought of one way of doing this by using an object instead of an array:But I’d prefer to use an array as it seems much more logical than using an object of objects, as I still have the choice of accessing things by index and all the other useful sorting, etc that can be done with an array.Appreciate any help, thank you!", "username": "Ajay_Pillay" }, { "code": "", "text": "Hi @Ajay_PillayWelcome back to MongoDB community.Why not to set a new objectId for each array element, you can potentially index it and query it .Creating objectIds is the preferred bson way of generating ids in MongoDB, will that work or you have to have an ability to recreate it from some other document data?The array makes sense as object can have multiple sub objects only under a fields (what you presented is not a legal json representation)Hopefully this helps.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi, thank you for your response.So I realized the best way for me to do this is to use data I already have generated.In my sub-fields, I actually have unique hashed IDs that are being generated, and I figured I could just use the very first unique hashed ID as the ID for the entire object.And yes I realize that I made a mistake in the JSON representation, the _id field should’ve been hoisted up one level.Thanks for pointing out the ObjectId() function, it should be useful in some other areas I’m developing.", "username": "Ajay_Pillay" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can you create a unique hash for each element in an array?
2021-06-05T07:13:57.837Z
Can you create a unique hash for each element in an array?
6,040
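To make the ObjectId suggestion above concrete, here is a small PyMongo sketch that stamps each array element with its own ObjectId at insert time and then addresses an element by that id. The URI and collection are placeholders; the field names mirror the thread's example.

```python
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["test"]["docs"]  # placeholder collection

items = [
    {"_id": ObjectId(), "title": "someTitle", "data": "someOtherData"},
    {"_id": ObjectId(), "title": "otherTitle", "data": "moreData"},
]
coll.insert_one({"array": items})

# A link can now carry the element's _id instead of its array index, so it
# keeps resolving even after other elements are deleted or re-ordered.
element_id = items[0]["_id"]
doc = coll.find_one({"array._id": element_id}, {"array.$": 1})
```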
null
[ "installation", "devops" ]
[ { "code": "sudo mongod --port 27017 --dbpath /var/lib/mongodb", "text": "Hello. I’m still using mongo 3.6 version. I’m running mongod by sudo mongod --port 27017 --dbpath /var/lib/mongodb . But after closing the terminal, mongodb instance stops running and I can’t access to the connection via mongo shell.", "username": "Patrick_Edward" }, { "code": "--fork", "text": "Hi @Patrick_Edward, welcome to the community .Add the --fork flag to your command along with a logpath, therefore your command would look something like:sudo mongod --port 27017 --dbpath /var/lib/mongodb --logpath /var/log/mongodb/mongod.log --forkAnd hopefully this would successfully fork a child-mongod-process and you can close the terminal and it won’t stop the process from runningIdeally, in production, you might want to create a mongod-config file. The configuration file contains settings that are equivalent to the mongod command-line options. Hence you don’t have to worry about the length and any typos/syntax in your command. Learn more about mongod configuration file through our awesome documentation about Configuration File Options.\nAlso, checkout Configuration File Options for v3.6.Having said that, I would highly recommend going through our free MongoDB University course on Basic Cluster Administration.In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer", "username": "SourabhBagrecha" }, { "code": "sudo mongod --port 27017 --dbpath /var/lib/mongodb --logpath /var/log/mongodb/mongod.log --fork\nsudo", "text": "@SourabhBagrecha Hello. I know that I should learn mongodb from the zero but I’m having tight deadline. So I hope you don’t mind me. I would like to ask you a few more questions.\nWhen I runI have an issue that the mongod instance should not be initiated as root user.So I run the same cmd without sudo, I got permission denied. What’s the practice of doing this process?Thanks for your time and support.Best,\nPatrick", "username": "Patrick_Edward" }, { "code": "", "text": "Hi @Patrick_Edward, whay do you mean when you say:I have an issue that the mongod instance should not be initiated as root user.Can you please explain more about what issues you are facing?Thanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer", "username": "SourabhBagrecha" }, { "code": "sudo mongod --port 27017 --dbpath /var/lib/mongodb --logpath /var/log/mongodb/mongod.log --forkWARNING: You are running this process as the root user, which is not recommended.", "text": "sudo mongod --port 27017 --dbpath /var/lib/mongodb --logpath /var/log/mongodb/mongod.log --forkI’m having this warning message in mongo shell.\nWARNING: You are running this process as the root user, which is not recommended.", "username": "Patrick_Edward" }, { "code": "", "text": "Hi @Patrick_Edward, can you please follow the instriction mentioned in this guide: Procedure to Allow Non-Root Users to Stop/Start/Restart “mongod” Process.Hopefully you’ll be able to get rid of that warning.In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer", "username": "SourabhBagrecha" }, { "code": "Unit mongod.service could not be found.service mongod status", "text": "@SourabhBagrecha Thanks for the guide. I will read it tonight. I have another question to ask you if it’s ok for you.\nIs it normal to get Unit mongod.service could not be found. 
when I run service mongod status, even though I can access mongod through the mongo shell?", "username": "Patrick_Edward" }, { "code": "unit mongodb.servicesudo systemctl unmask mongodsudo service mongod start", "text": "No worries @Patrick_Edward, usually the unit mongodb.service is masked.\nUse the following command to unmask it: sudo systemctl unmask mongod and re-run sudo service mongod start. In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer", "username": "SourabhBagrecha" }, { "code": "", "text": "", "username": "SourabhBagrecha" } ]
How to keep mongod instance running on Ubuntu 20.04 server
2021-06-09T07:47:02.416Z
How to keep mongod instance running on Ubuntu 20.04 server
6,152
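For reference, a minimal mongod configuration file roughly equivalent to the flags used in this thread. It is a sketch, not a full production configuration, and the paths are the ones quoted above.

```yaml
# /etc/mongod.conf (minimal sketch)
storage:
  dbPath: /var/lib/mongodb
net:
  port: 27017
  bindIp: 127.0.0.1
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
processManagement:
  fork: true
```

Started with mongod --config /etc/mongod.conf (or through the mongod service), this replaces the long command line and keeps the process running after the terminal closes, the same way --fork does.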
null
[ "queries", "dot-net" ]
[ { "code": "public class Book\n{\n [BsonId]\n public string Id { get; set; }\n\n public string Author { get; set; }\n\n [BsonExtraElements]\n public BsonDocument Metadata { get; set; }\n}\n{\n \"name\":\"John 2\",\n \"age\":30,\n \"car\":null\n}\n", "text": "BsonDocument filtration does not work as expected. I need to filter BsonDocument (I used the BsonDocument field to store JSON object). I need to filter using that json property.As a examplebelow json save in Metadata field (BsonDocument)now I need to filter using name or age field. how can I do that in .net core ? any idea or support", "username": "Lilan_Silva" }, { "code": "", "text": "Seems like this is working. let me know if anyone know better solution than this\n_books.Find(Builders.Filter.Eq(“user.name”, “Test User”)).ToList()", "username": "Lilan_Silva" }, { "code": "", "text": "It seems you could also tryvar filter = Builders.Filter.Eq(“fieldName.nestedFieldName”, “value”)", "username": "David_Thompson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filter value from BsonDocument (dynamic field) in C#
2021-06-07T14:55:18.560Z
Filter value from BsonDocument (dynamic field) in C#
13,382
null
[ "ops-manager" ]
[ { "code": "/mongodb-mms start\n\nGenerating new Ops Manager private key...\nStarting pre-flight checks\nAn unexpected error occurred during pre-flight checks: null\njava.lang.reflect.InvocationTargetException\n at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)\n at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)\n at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)\n at com.xgen.svc.core.PreFlightCheck.main(PreFlightCheck.java:204)\nCaused by: com.google.inject.CreationException: Unable to create injector, see the following errors:\n\n1) An exception was caught and reported. Message: Failed to initialize connection to App Settings Database.\n at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:137)\n\n1 error\n at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:543)\n at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:159)\n at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:106)\n at com.google.inject.Guice.createInjector(Guice.java:87)\n at com.mycila.inject.jsr250.Jsr250.createInjector(Jsr250.java:73)\n at com.mycila.inject.jsr250.Jsr250.createInjector(Jsr250.java:47)\n at com.mycila.inject.jsr250.Jsr250.createInjector(Jsr250.java:43)\n at com.xgen.svc.core.PreFlightCheck.<init>(PreFlightCheck.java:88)\n at com.xgen.svc.mms.MmsPreFlightCheck.<init>(MmsPreFlightCheck.java:68)\n ... 5 more\nCaused by: java.lang.RuntimeException: Failed to initialize connection to App Settings Database.\n at com.xgen.svc.core.AppSettings.getAppPropertyDao(AppSettings.java:472)\n at com.xgen.svc.core.AppSettings.<init>(AppSettings.java:294)\n at com.xgen.svc.core.AppSettings.<init>(AppSettings.java:229)\n at com.xgen.svc.core.PreFlightCheck$1.configure(PreFlightCheck.java:244)\n at com.google.inject.AbstractModule.configure(AbstractModule.java:61)\n at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:344)\n at com.google.inject.spi.Elements.getElements(Elements.java:103)\n at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:137)\n at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:103)\n ... 11 more\nCaused by: java.lang.IllegalStateException: Failed to decrypt mongodb tokens from: mongodb:// e8f0a8be706d7a9772108f78314c91c66b4fbca0c38c3d1d52761386b61fa983-f87eab10f73c688e70f7f97cccf89625-e4622851994424c653f1a23345160551f1af0df4c541592db145b5ddcea2ade7:e8f0a8be706d7a9772108f78314c91c66b4fbca0c38c3d1d52761386b61fa983-6fe17f68567b095eb564a6e54327425d-16defa3d43559f0a248b40c633af069f@ mongod-vm2.az.3pc.mbt.com,mongod-vm0.az.3pc.mbt.com,mongod-vm1.az.3pc.mbt.com:8543/?replicaSet=rs0&maxPoolSize=150 - Check that your gen.key file at /root/.mongodb-mms/gen.key has not changed.\n at com.xgen.svc.mms.dao.mongo.MongoSvcUtils.decryptedMongoUri(MongoSvcUtils.java:77)\n at com.xgen.svc.mms.dao.mongo.MongoSvcUriImpl$Config.getUri(MongoSvcUriImpl.java:394)\n at com.xgen.svc.core.AppSettings.getAppPropertyDao(AppSettings.java:469)\n ... 
19 more\nCaused by: java.lang.IllegalStateException: CipherInfo with signature e8f0a8be706d7a9772108f78314c91c66b4fbca0c38c3d1d52761386b61fa983 is not found!\n at com.google.common.base.Preconditions.checkState(Preconditions.java:588)\n at com.xgen.svc.security.util.CipherManager.getDecCipherInfo(CipherManager.java:87)\n at com.xgen.svc.mms.util.EncryptionUtils.genDecryptStr(EncryptionUtils.java:135)\n at com.xgen.svc.mms.dao.mongo.MongoSvcUtils.decryptedMongoUri(MongoSvcUtils.java:64)\n ... 21 more\nPreflight check failed.\n", "text": "Hi Team,During start OPsmanager service getting below error , what was mistake please elaborate to us.", "username": "hari_dba" }, { "code": "", "text": "Hi @hari_dba,The initial error seems to be with application database connection failure so I will suggest verify if the Ops Manager application hostbis configured with correct connection and able to access the app db.Having said that, Ops Manager is an enterprise licences software which you should have support for and I recommend opening a support ticket for this.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
MongoDB opsmanager unable to start
2021-06-07T17:21:42.264Z
MongoDB opsmanager unable to start
4,148
null
[ "queries", "java" ]
[ { "code": "\"FI_CLOB_ORDER_RATE_LIMIT_DUPLICATE_CHILD\": {\n \"orderLimitName\": \"Duplicate_Child_Order\",\n \"windowSize\": 1000,\n \"rateLimit\": 100,\n \"actionEnumKey\": \"FI_CLOB_ORDER_RATE_LIMIT_DUPLICATE_CHILD\"\n },\n \"FI_CLOB_ORDER_RATE_LIMIT_CHILD_NOTIONAL\": {\n \"orderLimitName\": \"Child_Notional\",\n \"windowSize\": 1000,\n \"rateLimit\": 20000000000.0,\n \"actionEnumKey\": \"FI_CLOB_ORDER_RATE_LIMIT_CHILD_NOTIONAL\"\n }\n\nWhen I retrieve it I get:\n\"FI_CLOB_ORDER_RATE_LIMIT_DUPLICATE_CHILD\": {\n \"orderLimitName\": \"Duplicate_Child_Order\",\n \"windowSize\": 1000,\n \"rateLimit\": 100,\n \"actionEnumKey\": \"FI_CLOB_ORDER_RATE_LIMIT_DUPLICATE_CHILD\"\n },\n \"FI_CLOB_ORDER_RATE_LIMIT_CHILD_NOTIONAL\": {\n \"orderLimitName\": \"Child_Notional\",\n \"windowSize\": 1000,\n \"rateLimit\": {\"$numberLong\": \"20000000000\"}\n \"actionEnumKey\": \"FI_CLOB_ORDER_RATE_LIMIT_CHILD_NOTIONAL\"\n }\n", "text": "Hi, I am new to mongo and I love the fact I could use it without knowing much! I am storing a json and it has data like:As you can see the second rateLimit is returned with $numberLong. Is there a way to get simply the original json back?My java mongo driver is 3.11.0Any help will be greatly appreciated.", "username": "Navin_Jha" }, { "code": "", "text": "Hi @Navin_Jha and welcome in the MongoDB Community !The current version of the MongoDB Java driver is 4.2.3. Please make sure to use the correct version of the driver and also not the legacy one. But this won’t solve this “issue”.Your number is 20 billions. It’s greater than 2,147,483,647 which is the maximum positive value for a 32-bit signed binary integer. The only way for MongoDB (or any computer for that matter) to store this value is in a 64-bit integer == a long.MongoDB is a BSON database. So it’s capable to handle basic JSON… But also more complex data types like dates, decimal128, … and longs that JSON can’t handle.So the reason you get a long back using Java, it’s because Java is kind enough to transform automatically for you your 20000000000 into a long and avoid an integer overflow. And I guess you probably have a warning in your code that says that you should actually type 20000000000L instead.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi Maxime,Thanks for the prompt response.Driver updates happen at a slow pace in large firms as you know.The data is kept in json files that get loaded to mongo. When retrieved from mongo it is sent to consumers as json. 
So ideally I would like to keep json intact.Is there a way to tell mongo:I am sending\n“rateLimit”: 20000000000please give me back the same in in the returned json and not\n“rateLimit”: {\"$numberLong\": “20000000000”}I simply do document.toJson() when retrieving", "username": "Navin_Jha" }, { "code": "coll.insertOne(new Document(\"integer\", 20).append(\"long\", 20000000000L));\ntoJson()import com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport org.bson.Document;\n\npublic class LongsAreLong {\n public static void main(String[] args) {\n try (MongoClient mongoClient = MongoClients.create(\"mongodb://localhost/test\")) {\n MongoCollection<Document> coll = mongoClient.getDatabase(\"test\").getCollection(\"coll\");\n coll.drop();\n coll.insertOne(new Document(\"integer\", 20).append(\"long\", 20000000000L));\n System.out.println(coll.find().first().toJson());\n }\n }\n}\n{\"_id\": {\"$oid\": \"60bf8bba9e723258bdc82f04\"}, \"integer\": 20, \"long\": 200000000000000000}\n200000000000000000toJson()", "text": "You are sending 20000000000L, not 20000000000. You are sending a long, it’s stored in MongoDB as a long so it’s returned as a long.\nimage976×83 7.53 KB\nIt’s not a warning actually, it doesn’t even compile if you try to send a number larger than MAX_INTEGER.This compiles though:You can hack the final string that is returned by toJson() before you are sending it, but I’m not even sure this is legit JSON that you are sending in the end.If you want the same experience for integers and longs, maybe you could use their respective string values instead? With this solution, I’m sure the JSON is actually legit and it’s your client’s problem to deal with the parsing of this value into the right type.I’m now trying this with Java 4.2.3Output:Sooo apparently 200000000000000000 IS a legit “number” value for a JSON file and it’s also the default behaviour for toJson().Here is my doc in Compass where we can clearly see the types:\nimage1335×183 16.1 KB\nCheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "String jsonString = \"{ \"product\": \"2Y\", \"buyLimit\": 1000000, \"sellLimit\": 20000000000 , \"primaryRatio\": 100 }\"\nDocument document = Document.parse(jsonString );\ndocument.put(\"_id\", \"somekey\"));\nmongoCollection.insertOne(document);\ndocuments = mongoCollection.find();\nString jsonStringBack = document.toJson();\n{ \"product\": \"2Y\", \"buyLimit\": 1000000, \"sellLimit\": {\"$numberLong\": \"20000000000\"}, \"primaryRatio\": 100 }\n", "text": "Thank you writing a test code for me! Something I should have done right away instead of dealing with an elaborate unit test.So I did the same.I insert:usingI read like this:I get back:I understand your argument for int versus long but shouldn’t I get back the same as I sent?Any other solution besides sending data as string e.g. “sellLimit”: “20000000000” ?Thanks Maxime.", "username": "Navin_Jha" }, { "code": "System.out.println(parse.toJson(JsonWriterSettings.builder().outputMode(JsonMode.RELAXED).build())); -\n{ “product”: “2Y”, “buyLimit”: 1000000, “sellLimit”: 20000000000, “primaryRatio”: 100 }\nSystem.out.println(parse.toJson());\n{ “product”: “2Y”, “buyLimit”: 1000000, “sellLimit”: {\"$numberLong\": “20000000000”}, “primaryRatio”: 100 }\n", "text": "This one works! what is the reason?This one print $numberLong", "username": "Navin_Jha" }, { "code": "", "text": "Nice. I didn’t know that RELAXED mode. 
But looks like this is the default behaviour in 4.2.3 because I didn’t get the $numberLong in my example. Update !", "username": "MaBeuLux88" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Inserted a json got $numberLong back in the returned json
2021-06-08T13:37:29.932Z
Inserted a json got $numberLong back in the returned json
12,458
null
[ "queries", "charts" ]
[ { "code": "{\n 'n': 1,\n subdocuments_array: [sub_1, sub_2,...,sub_n],\n},\n{\n 'n':2,\n subdocuments_array:[sub_1, sub_2,...,sub_n],\n} \n{\n 'value': some number\n 'flag1': 0 or 1,\n 'flag2': 0 or 1\n}\n{'sub_n.flag': {$ne: 0}}", "text": "Hi,I want to filter subdocuments in Mongo Charts Query.I have a document with a field and a field of array of documents like:and so on.These sub_n have the following form:I want to use only the subdocuments with flag1 and flag 2 equal to 1.I’ve tried to use {'sub_n.flag': {$ne: 0}} but this is filtering the entire document when there is a subdocument with flag equal to 0.I hope you can help me.Thank you.", "username": "Fryderyk_Chopin" }, { "code": "$unwind$match[\n { $unwind: \"$subdocuments_array\" },\n { $match: { \"subdocuments_array.flag1\":1, \"subdocuments_array.flag2\":1 } }\n]", "text": "Hi @Fryderyk_Chopin -The trick here is to use $unwind on the array before you filter. That will result in a new document being created for each array element, which you can then filter with a $match stage.Based on your sample docs, the full pipeline would be something like:", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filter subdocuments by conditions in Mongo Charts Query
2021-06-08T19:38:27.958Z
Filter subdocuments by conditions in Mongo Charts Query
4,690
null
[ "python", "connecting" ]
[ { "code": "", "text": "I am trying to connect to my Atlas cluster using python/pymongo on my Windows 10 PC. I am using the same uri (excluding +srv) that Compass on my PC uses to connect to the cluster but I am getting [Errno 11001] getaddrinfo failed. Code snippet follows:uri = “mongodb://m220-student:[email protected]/test?authSource=admin&replicaSet=atlas-13r6jd-shard-0”\ndbproc = MongoClient(uri)Any help would be greatly appreciated.", "username": "mark_rehert" }, { "code": "", "text": "Whyexcluding +srvYou should use the same URI. The address m220.c3djr.mongodb.net is the address of a replica set cluster.", "username": "steevej" }, { "code": "", "text": "I am using the same UIR but dropped off the +srv because it requires another module that I have not installed", "username": "mark_rehert" }, { "code": "", "text": "Well, actually mongodb://… and mongodb+srv://… are 2 very different URI even if you use the same string for the dot dot dot part.The module that you have not install maps the mongodb+srv into its mongodb counter part.You have 2 choices:The latter will contains 3 host addresses that looks like m220-shard-99-99-c3djr.mongodb.net.", "username": "steevej" }, { "code": "", "text": "Thanks very much. I thought the +srv just created a secure connection. I have installed dnspython and it now works.", "username": "mark_rehert" }, { "code": "", "text": "For more information you may want to look at:\n\t\t\tPages for logged out editors learn more\n\n\t\t \nA Service record (SRV record) is a specification of data in the Domain Name System defining the location, i.e., the hostname and port number, of servers for specified services. It is defined in RFC 2782, and its type code is 33. Some Internet protocols such as the Session Initiation Protocol (SIP) and the Extensible Messaging and Presence Protocol (XMPP) often require SRV support by network elements.\n A SRV record has the form:\n An example SRV reco...", "username": "steevej" }, { "code": "", "text": "I had read that once before but my interpretation was that I would need to create that srv entry on my PC, which I have not done. If you don’t mind, what am I not understanding?", "username": "mark_rehert" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to connect from pymongo to Atlas using Compass URI
2021-06-08T17:44:38.335Z
Unable to connect from pymongo to Atlas using Compass URI
4,150
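A short PyMongo sketch of the two options described above; the hostname is the one from the thread and the credentials are placeholders. `dns.resolver.resolve` requires dnspython 2.x (older releases use `dns.resolver.query`):

```python
# pip install pymongo dnspython
import dns.resolver              # provided by the dnspython package
from pymongo import MongoClient

host = "m220.c3djr.mongodb.net"  # SRV hostname from the thread; replace with your own

# Option 1: with dnspython installed, the mongodb+srv URI from Compass works as-is.
client = MongoClient(f"mongodb+srv://user:password@{host}/test?authSource=admin")
print(client.admin.command("ping"))

# Option 2: resolve the SRV record yourself to see the per-node hostnames that a
# plain mongodb:// URI would have to list explicitly.
for record in dns.resolver.resolve(f"_mongodb._tcp.{host}", "SRV"):
    print(record.target.to_text(), record.port)
```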
null
[ "queries" ]
[ { "code": "0.1Pythontwo_dim = [[0.01, 1.0], [1.0, 0.01], [0.6, 0.8]]\nprint(list(filter(lambda row: min(row) < 0.1, two_dim)))\n[[0.01, 1.0], [1.0, 0.01]]MongoDB", "text": "I need to filter the collection by the following condition: the minimum value of two fields in each document must be less than 0.1.Example:Expected result:This is done very easily in Python:[[0.01, 1.0], [1.0, 0.01]]Is it possible to do it in MongoDB?", "username": "Platon_workaccount" }, { "code": "{ \"$or\" : [ { \"fdrp_bh_ref\" : { \"$lt\" : 0.1 } } , { \"drp_bh_alt\" : { \"$lt\" : 0.1 } } ] }\n", "text": "The following should work:Untested of course.In the future, if you could provide your simple documents as JSON strings, we could cut-n-paste them and test.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Apply filter only to min elements of two fields
2021-06-08T18:32:43.490Z
Apply filter only to min elements of two fields
1,727
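The "minimum of the two fields" condition from the question can also be written literally with `$expr` and `$min`, mirroring the Python lambda. A sketch with placeholder field and collection names (substitute the real ones, e.g. the two fields used in the answer above):

```python
from pymongo import MongoClient

coll = MongoClient()["test"]["mycoll"]   # placeholder database/collection names

# Matches documents where min(field_a, field_b) < 0.1 -- same result as the
# $or-of-$lt filter suggested above.
query = {"$expr": {"$lt": [{"$min": ["$field_a", "$field_b"]}, 0.1]}}
print(list(coll.find(query)))
```

Note that `$expr` generally cannot use a regular index on those fields, so the `$or` form above is usually the better choice on large collections.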
null
[ "backup", "security" ]
[ { "code": "", "text": "if I am downloading a snapshot (I am using mongo encryption mechanism only) and somehow the server the snapshot it was stored was compromised - does it in plain text or the data keep encrypt when I am downloading it?", "username": "Yaron_Chelouche" }, { "code": "", "text": "Hi @Yaron_Chelouche and welcome in the MongoDB Community !I’m sorry but I don’t understand the question.Here you can find a bunch of examples and tutorials that are using CSFLE. In the one I wrote, I explain how to use CSFLE with MongoDB Community Edition only.What’s a snapshot for you?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88I will try to explain in a different way, If I am downloading the backup I created on Atlas to my laptop .\ndoes the file is store as an encrypted file? ( I am assuming Atlas is encrypting all data at rest)\nis this the case?", "username": "Yaron_Chelouche" }, { "code": "", "text": "Only if you are using Encryption at Rest: https://docs.atlas.mongodb.com/security-kms-encryption/\nimage2526×1139 253 KB\nDoc about the Cloud Backup: https://docs.atlas.mongodb.com/backup/cloud-backup/overview/#encryption-at-rest-using-customer-key-managementCheers,\nMaxime.", "username": "MaBeuLux88" } ]
Encryption for backup snapshots downloaded from Atlas
2021-06-06T17:56:55.730Z
Encryption for backup snapshots downloaded from Atlas
2,574
null
[ "aggregation" ]
[ { "code": "", "text": "I’m trying to get the size of an array so that I can use the number that it returns in a for loop. I’m using mongodb compass. I’m trying to use something like the projection below where 0 is an object inside of path and “here” is an array with 2 items.\n$project\n{\n“alias” : {$size : “$this.is.the.field.path.0.here”}\n}\nHowever, this keeps returning an array size of 0. It works fine for field paths that don’t contain a number in their path but returns 0 if the path does contain a number. Is there a way to properly get the correct size of the array here[] which has a size of 2?", "username": "science_cam" }, { "code": "", "text": "Hi @science_camWelcome to MongoDB community.So your field name is “0”? Or its an array under “path” field.If its 0 its ambiguous to with an operator looking for first element in an array.Maybe try `“$this.is.the.field.path.‘0’.here”, but if it doesn’t work I would suggest to rename the field to “zero” or do the length calc on application side…Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Aggregation $size: error when fieldpath field name contains a number
2021-06-08T01:56:15.558Z
Aggregation $size: error when fieldpath field name contains a number
1,468
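The likely cause here is that aggregation field paths do not support positional components: `.0.` is read as a field literally named "0", no array element has such a field, so the traversal yields an empty array and `$size` returns 0. A sketch of a workaround with `$arrayElemAt`, reusing the field names from the question (they may need adjusting to the real schema):

```python
# Aggregation sketch: "path" is an array whose first element holds the "here" array.
pipeline = [
    {"$project": {
        "alias": {
            "$size": {
                # "$this.is.the.field.path.here" collects one "here" array per
                # element of "path"; take the first of those and measure it.
                "$arrayElemAt": ["$this.is.the.field.path.here", 0]
            }
        }
    }}
]
```

The same stage can be pasted into the Compass aggregation builder. If some documents might be missing the field, wrap the inner expression in `$ifNull` with an empty-array default to avoid a `$size` error.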
null
[ "aggregation" ]
[ { "code": "[{'_id': ObjectId('6080d0a3bea947d567d9afba'), '_cls': 'BaseModel.LoyaltyCardScanModel', 'uuid': UUID('af35fe28-a309-11eb-bc8e-2ad4b4403f63'), 'place_uuid': UUID('483c8bb0-9e95-11eb-821a-42f2f9ada1ca'), 'user_uuid': UUID('35f831f8-a308-11eb-a237-2ad4b4403f63'), 'admin_uuid': UUID('8a022d9b-96e1-11eb-a58d-bc5ff48c075c'), 'user': ObjectId('6080ce2ad8c7ca00ececa252'), 'place':ObjectId('6079575cec0b3b2d62dbbcce'), 'scan_dates': ['2021-04-22 01:25:55.445332', '2021-04-22 01:29:37.231813'], 'last_scan_date': datetime.datetime(2021, 4, 22, 1, 29, 37, 231000), 'expiration_date': datetime.datetime(2021, 5, 22, 1, 29, 37, 231000), 'scans_count': 2, 'created_at': datetime.datetime(2021, 4, 22, 1, 25, 55, 451000), 'updated_at': datetime.datetime(2021, 4, 22, 1, 29, 37, 242000)}]\n[\n{\"$lookup\": {\n \"from\": \"place_model\",\n \"let\": {\"uuid\": \"$place_uuid\"},\n \"pipeline\": [\n {\"$match\": {\"name\": \"/whateverstring/\"}},\n {\"$match\": {\"$expr\": {\"$eq\": [\"$uuid\", \"$uuid\"]}}},\n {\"$project\": {\"uuid\": 1, \"name\": 1}}\n ],\n \"as\": \"place\"\n}},\n{\n '$skip': 10\n}, {\n '$limit': 1\n}\n]\n", "text": "My MongoDB trasanction collection has data represented like this:My goal is to get transaction documents from places that contain a string in their name.I have tried the following aggregation pipeline and it worked:But I also want get the user data from its ObjectID, but the users document are in another database. How can I do that?", "username": "Mehdi_Khlifi" }, { "code": "", "text": "Hi @Mehdi_KhlifiWelcome to MongoDB community.You can’t do a lookup between databases so your options is to copy data to this source database, you can sync it using $merge operations in latest mongo version.Or perform another query gathering all the users information on the application side.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Populate field from another database in aggregation pipeline
2021-06-07T21:57:12.675Z
Populate field from another database in aggregation pipeline
9,180
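A rough Python sketch of the "second query on the application side" option mentioned above, since `$lookup` cannot reach into another database. The database/collection names and the `user` field are assumptions based on the documents shown in the question:

```python
from pymongo import MongoClient

client = MongoClient()
transactions = client["loyalty_db"]["transactions"]   # assumed names
users = client["users_db"]["users"]                   # assumed names (other database)

pipeline = [
    # ... the $lookup / $skip / $limit pipeline from the question ...
]
scans = list(transactions.aggregate(pipeline))

# Gather the referenced user ObjectIds from the page of results
user_ids = {doc["user"] for doc in scans if "user" in doc}

# One extra round trip against the other database
users_by_id = {u["_id"]: u for u in users.find({"_id": {"$in": list(user_ids)}})}

# Stitch the user documents onto each transaction in application code
for doc in scans:
    doc["user_doc"] = users_by_id.get(doc.get("user"))
```

The alternative mentioned above is to periodically copy or sync the user collection into the source database with `$merge`, after which an ordinary `$lookup` works.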
null
[ "queries", "performance" ]
[ { "code": "{ \n \"stages\" : [\n { \n \"$cursor\" : { \n \"queryPlanner\" : { \n \"plannerVersion\" : NumberInt(1), \n \"namespace\" : \"SquidexContent.States_Contents_All3\", \n \"indexFilterSet\" : false, \n \"parsedQuery\" : { \n \"$and\" : [\n { \n \"_ai\" : { \n \"$eq\" : \"ff0c76e5-0459-416a-8680-601ab07fdb72\"\n }\n }, \n { \n \"_si\" : { \n \"$eq\" : \"3abca025-a594-4d13-8836-acd4da20d0b1\"\n }\n }, \n { \n \"id\" : { \n \"$gt\" : \"00000000-0000-0000-0000-000000000000\"\n }\n }, \n { \n \"mt\" : { \n \"$gt\" : ISODate(\"1970-01-01T00:00:00.000+0000\")\n }\n }, \n { \n \"dl\" : { \n \"$not\" : { \n \"$eq\" : true\n }\n }\n }\n ]\n }, \n \"queryHash\" : \"FD73CE49\", \n \"planCacheKey\" : \"5216DDAA\", \n \"winningPlan\" : { \n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : { \n \"mt\" : NumberInt(-1), \n \"id\" : NumberInt(1), \n \"_ai\" : NumberInt(1), \n \"_si\" : NumberInt(1), \n \"dl\" : NumberInt(1), \n \"rf\" : NumberInt(1)\n }, \n \"indexName\" : \"mt_-1_id_1__ai_1__si_1_dl_1_rf_1\", \n \"isMultiKey\" : true, \n \"multiKeyPaths\" : { \n \"mt\" : [\n\n ], \n \"id\" : [\n\n ], \n \"_ai\" : [\n\n ], \n \"_si\" : [\n\n ], \n \"dl\" : [\n\n ], \n \"rf\" : [\n \"rf\"\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : NumberInt(2), \n \"direction\" : \"forward\", \n \"indexBounds\" : { \n \"mt\" : [\n \"[new Date(9223372036854775807), new Date(0))\"\n ], \n \"id\" : [\n \"(\\\"00000000-0000-0000-0000-000000000000\\\", {})\"\n ], \n \"_ai\" : [\n \"[\\\"ff0c76e5-0459-416a-8680-601ab07fdb72\\\", \\\"ff0c76e5-0459-416a-8680-601ab07fdb72\\\"]\"\n ], \n \"_si\" : [\n \"[\\\"3abca025-a594-4d13-8836-acd4da20d0b1\\\", \\\"3abca025-a594-4d13-8836-acd4da20d0b1\\\"]\"\n ], \n \"dl\" : [\n \"[MinKey, true)\", \n \"(true, MaxKey]\"\n ], \n \"rf\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }, \n \"rejectedPlans\" : [\n { \n \"stage\" : \"FETCH\", \n \"filter\" : { \n \"$and\" : [\n { \n \"id\" : { \n \"$gt\" : \"00000000-0000-0000-0000-000000000000\"\n }\n }, \n { \n \"mt\" : { \n \"$gt\" : ISODate(\"1970-01-01T00:00:00.000+0000\")\n }\n }\n ]\n }, \n \"inputStage\" : { \n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : { \n \"_ai\" : NumberInt(1), \n \"dl\" : NumberInt(1), \n \"_si\" : NumberInt(1)\n }, \n \"indexName\" : \"_ai_1_dl_1__si_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : { \n \"_ai\" : [\n\n ], \n \"dl\" : [\n\n ], \n \"_si\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : NumberInt(2), \n \"direction\" : \"forward\", \n \"indexBounds\" : { \n \"_ai\" : [\n \"[\\\"ff0c76e5-0459-416a-8680-601ab07fdb72\\\", \\\"ff0c76e5-0459-416a-8680-601ab07fdb72\\\"]\"\n ], \n \"dl\" : [\n \"[MinKey, true)\", \n \"(true, MaxKey]\"\n ], \n \"_si\" : [\n \"[\\\"3abca025-a594-4d13-8836-acd4da20d0b1\\\", \\\"3abca025-a594-4d13-8836-acd4da20d0b1\\\"]\"\n ]\n }\n }\n }, \n { \n \"stage\" : \"FETCH\", \n \"filter\" : { \n \"$and\" : [\n { \n \"_ai\" : { \n \"$eq\" : \"ff0c76e5-0459-416a-8680-601ab07fdb72\"\n }\n }, \n { \n \"id\" : { \n \"$gt\" : \"00000000-0000-0000-0000-000000000000\"\n }\n }\n ]\n }, \n \"inputStage\" : { \n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : { \n \"_si\" : NumberInt(1), \n \"dl\" : NumberInt(1), \n \"mt\" : NumberInt(-1)\n }, \n \"indexName\" : \"_si_1_dl_1_mt_-1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : { \n \"_si\" : [\n\n ], \n \"dl\" : [\n\n ], \n \"mt\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, 
\n \"indexVersion\" : NumberInt(2), \n \"direction\" : \"forward\", \n \"indexBounds\" : { \n \"_si\" : [\n \"[\\\"3abca025-a594-4d13-8836-acd4da20d0b1\\\", \\\"3abca025-a594-4d13-8836-acd4da20d0b1\\\"]\"\n ], \n \"dl\" : [\n \"[MinKey, true)\", \n \"(true, MaxKey]\"\n ], \n \"mt\" : [\n \"[new Date(9223372036854775807), new Date(0))\"\n ]\n }\n }\n }\n ]\n }\n }\n }, \n { \n \"$group\" : { \n \"_id\" : { \n \"$const\" : NumberInt(1)\n }, \n \"n\" : { \n \"$sum\" : { \n \"$const\" : NumberInt(1)\n }\n }\n }\n }\n ], \n \"serverInfo\" : { \n \"host\" : \"b0d41a1197f0\", \n \"port\" : NumberInt(27017), \n \"version\" : \"4.4.6\", \n \"gitVersion\" : \"72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7\"\n }, \n \"ok\" : 1.0\n}\n{ \n \"op\" : \"command\", \n \"ns\" : \"SquidexContent.States_Contents_All3\", \n \"command\" : {\n \"aggregate\" : \"States_Contents_All3\", \n \"pipeline\" : [\n {\n \"$match\" : {\n \"mt\" : {\n \"$gt\" : ISODate(\"1970-01-01T00:00:00.000+0000\")\n }, \n \"id\" : {\n \"$gt\" : \"00000000-0000-0000-0000-000000000000\"\n }, \n \"_ai\" : \"ff0c76e5-0459-416a-8680-601ab07fdb72\", \n \"_si\" : {\n \"$in\" : [\n \"3abca025-a594-4d13-8836-acd4da20d0b1\"\n ]\n }, \n \"dl\" : {\n \"$ne\" : true\n }\n }\n }, \n {\n \"$group\" : {\n \"_id\" : NumberInt(1), \n \"n\" : {\n \"$sum\" : NumberInt(1)\n }\n }\n }\n ], \n \"cursor\" : {\n\n }, \n \"allowDiskUse\" : false, \n \"$db\" : \"SquidexContent\", \n \"lsid\" : {\n \"id\" : UUID(\"ebb255c8-9140-4f66-986f-4e787fd70a5b\")\n }\n }, \n \"keysExamined\" : NumberInt(1271036), \n \"docsExamined\" : NumberInt(0), \n \"cursorExhausted\" : true, \n \"numYield\" : NumberInt(1271), \n \"nreturned\" : NumberInt(1), \n \"queryHash\" : \"FD73CE49\", \n \"planCacheKey\" : \"5216DDAA\", \n \"locks\" : {\n \"ReplicationStateTransition\" : {\n \"acquireCount\" : {\n \"w\" : NumberLong(1274)\n }\n }, \n \"Global\" : {\n \"acquireCount\" : {\n \"r\" : NumberLong(1274)\n }\n }, \n \"Database\" : {\n \"acquireCount\" : {\n \"r\" : NumberLong(1273)\n }\n }, \n \"Collection\" : {\n \"acquireCount\" : {\n \"r\" : NumberLong(1273)\n }\n }, \n \"Mutex\" : {\n \"acquireCount\" : {\n \"r\" : NumberLong(2)\n }\n }\n }, \n \"flowControl\" : {\n\n }, \n \"storage\" : {\n \"data\" : {\n \"bytesRead\" : NumberLong(2374652), \n \"timeReadingMicros\" : NumberLong(10476)\n }\n }, \n \"responseLength\" : NumberInt(148), \n \"protocol\" : \"op_msg\", \n \"millis\" : NumberInt(1301), \n \"planSummary\" : \"IXSCAN { mt: -1, id: 1, _ai: 1, _si: 1, dl: 1, rf: 1 }\", \n \"ts\" : ISODate(\"2021-06-07T19:32:44.014+0000\"), \n \"client\" : \"172.18.0.1\", \n \"allUsers\" : [\n\n ], \n \"user\" : \"\"\n}\n", "text": "Hi,I have issues to understand why counts are so slow. I am using the C# driver with CountDocumentAsync. My collection has 1Mio+ records with around 1,5 GB of data. Not that much actually.In my use case I count the number of documents covered by a filter. The filter can be fulfilled by an index. 
When I let Mongo explain the query I get the following result:So my understanding is that the query can be fulfilled by an index.But when I check the result in the profiler I get the following document:So it is actually reading from storage, even though the index for this collection should fit into RAM.", "username": "Sebastian_Stehle" }, { "code": "{ _ai : 1, _si : 1, id : 1 , mt : 1, dl : 1}\n", "text": "Hi @Sebastian_Stehle,Welcome to MongoDB community.Although the query can use an index it uses a non optimal one for this perticular count predicts.Our indexing guidelines suggest the order of fields to fit Equality Sort and finally Range order called the ESR rule.In your case a better index isPlease read more hereBest practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Why is count so slow?
2021-06-07T19:41:21.019Z
Why is count so slow?
10,874
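A sketch of creating the index suggested above and re-running the same count, shown with PyMongo for brevity (the key order is what matters and carries over to the C# driver unchanged); the filter values are taken from the profiler document in the question:

```python
from datetime import datetime
from pymongo import MongoClient, ASCENDING

coll = MongoClient()["SquidexContent"]["States_Contents_All3"]

# Index proposed above ({_ai:1, _si:1, id:1, mt:1, dl:1}):
# the equality fields lead, the range/inequality fields follow.
coll.create_index([
    ("_ai", ASCENDING),
    ("_si", ASCENDING),
    ("id", ASCENDING),
    ("mt", ASCENDING),
    ("dl", ASCENDING),
])

count = coll.count_documents({
    "_ai": "ff0c76e5-0459-416a-8680-601ab07fdb72",
    "_si": {"$in": ["3abca025-a594-4d13-8836-acd4da20d0b1"]},
    "dl": {"$ne": True},
    "id": {"$gt": "00000000-0000-0000-0000-000000000000"},
    "mt": {"$gt": datetime(1970, 1, 1)},
})
print(count)
```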
null
[ "queries" ]
[ { "code": "", "text": "How to loop through collection from MongoDB Realm?\nI want to loop through each collection to get data from them.\nSomething like getCollectionNames() in MongoShell.\nHas anyone had a solution for this?", "username": "CSKH_ePlus" }, { "code": "", "text": "Hi @CSKH_ePlus,I don’t think it’s currently possible to do this, based on the different “MongoDB Actions” I see in the doc: Sign in to GitHub · GitHub (left panel).If you know the names of the collections, I guess it wouldn’t be a problem to loop through them with JS.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "The question is a little vague.How to loop through collection from MongoDB Realm?It’s pretty straight forward to loop (iterate) over a collection but we would need to know what your platform is and what kind of data you’re after. Generally iterating over a large quantity of data can be a inefficient but again, more details would enable us to assist you.Can you update the question with more information? What do your objects look like? What kind of data are you after? Examples?", "username": "Jay" }, { "code": "db.a.drop();\ndb.b.drop();\ndb.c.drop();\n\ndb.a.insertOne({a:1});\ndb.b.insertOne({b:1});\ndb.c.insertOne({c:1});\n\nlet colls = db.getCollectionNames();\nprint(colls);\n\nfor (let coll of colls) {\n printjson(db.getCollection(coll).find().toArray());\n}\n> load(\"test.js\")\n[ 'a', 'b', 'c' ]\n[ { _id: ObjectId(\"60be5a4b457aaba0111734ca\"), a: 1 } ]\n[ { _id: ObjectId(\"60be5a4b457aaba0111734cb\"), b: 1 } ]\n[ { _id: ObjectId(\"60be5a4b457aaba0111734cc\"), c: 1 } ]\n", "text": "I guess the OP wants to do something like this, but inside a Realm Function.My JS script executed in mongosh.Mongosh output:There isn’t, to my knowledge, an equivalent of getCollectionNames() in Realm Functions that would allow this kind of algo.Cheers,\nMax.", "username": "MaBeuLux88" } ]
How to loop through collection from MongoDB Realm?
2021-05-29T03:30:40.916Z
How to loop through collection from MongoDB Realm?
6,171
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "", "text": "This is a patch release that addresses some issues reported since 2.12.3 was released.The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.12.4%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:There are no known backwards breaking changes in this release.", "username": "Robert_Stam" }, { "code": "", "text": "", "username": "system" } ]
.NET Driver 2.12.4 Released
2021-06-07T16:55:15.482Z
.NET Driver 2.12.4 Released
2,171
null
[]
[ { "code": "", "text": "I’m part of Testing Team and completely new to MongoDB and cloud migration. There will be different clusters for different environmentWould need your help on what are the testing scope for MongoDB migration to MongoDB Atlas (Azure).\nBelow are the few points I came up:\n•\tCollection Names\n•\tNo. of docs in the collection\n•\tAvg. size of each doc & Total size of docs in the collection respectively\n•\tNo. of Indexes on the collection\n•\tTotal size of all the indexes on the collectionAbove are just at collection level, but there will be hell lot to cover as part of overall testing. So appreciate if any one has more views on testing coverage.Thanks\nSat", "username": "Satish_Shinde" }, { "code": "", "text": "Hi Sat,It’s a little unclear what you’re really asking here: are you working on migrating a large-scale self-managed MongoDB deployment over to MongoDB Atlas and aiming to validate that your workload is successful on Atlas before cutting over, that kind of thing?How large is your database cluster today? Are you working with anyone from MongoDB to help you with this? Depending on the mission criticality and scale of the workload, we might suggest different levels of test coverage.Cheers\n-Andrew", "username": "Andrew_Davidson" } ]
What testing to be covered for MongoDB migration to MongoDB (4.4) Atlas (Azure)
2021-06-01T03:34:37.063Z
What testing to be covered for MongoDB migration to MongoDB (4.4) Atlas (Azure)
1,611
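For the collection-level checks listed in the question (names, document counts, sizes, index counts and sizes), a rough comparison sketch between the source deployment and the Atlas target could look like this. The connection strings and database list are placeholders; note that on-disk sizes rarely match exactly across deployments even when the data does, so strict equality is only meaningful for names, counts and index definitions:

```python
from pymongo import MongoClient

source = MongoClient("mongodb://source-host:27017")                          # placeholder
target = MongoClient("mongodb+srv://user:[email protected]")       # placeholder

def collection_report(client, db_name):
    db = client[db_name]
    report = {}
    for name in db.list_collection_names():
        stats = db.command("collStats", name)
        report[name] = {
            "count": stats.get("count"),
            "avgObjSize": stats.get("avgObjSize"),
            "size": stats.get("size"),
            "nindexes": stats.get("nindexes"),
            "totalIndexSize": stats.get("totalIndexSize"),
            "indexes": sorted(db[name].index_information().keys()),
        }
    return report

for db_name in ["mydb"]:                                                      # placeholder list
    src = collection_report(source, db_name)
    tgt = collection_report(target, db_name)
    for name in sorted(set(src) | set(tgt)):
        if src.get(name) != tgt.get(name):
            print(f"check {db_name}.{name}: source={src.get(name)} target={tgt.get(name)}")
```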
null
[ "python" ]
[ { "code": " uri = f\"mongodb+srv://{DB_USER}:{DB_PASSWORD}@{DB_URL}/{DB_NAME}\"\n collection = pymongo.MongoClient(uri)[DB_NAME]['testcollection']\n my_dict_with_dots = {\"my.key\": \"myValue\"}\n collection.insert_one({\"dict_with_dots\": my_dict_with_dots})\nbypass_document_validation", "text": "hey all =]\nI’m working with python and trying to insert a dict into one of my fields but I get an error message when trying to do so…\nI was able to insert the same data via the mongo shell, so It’s probably not a server issue.my python code looks like this:The exception thrown iskey ‘my.key’ must not contain ‘.’I also tried using bypass_document_validation but that didn’t produce any difference…what am I missing?server version - Atlas 4.4\npymongo version - 3.11.4", "username": "Mr_Nun" }, { "code": "my_dict_with_dots = {“my.key”: “myValue”}\nmy_dict_with_dots = { \"my\" : { \"key\" : \"myValue\" } }\n", "text": "Replacewith", "username": "steevej" }, { "code": "", "text": "hey steevej, thanks for your answer, but this is not what I’m trying to achieve =[", "username": "Mr_Nun" }, { "code": "", "text": "I was able to insert the same data via the mongo shell,Please show the resulting document.", "username": "steevej" }, { "code": "{ \"_id\" : ObjectId(\"60be0b745c6f5a0ce036387d\"), \"dict_with_dots\" : { \"my.key\" : \"myValue\" } }", "text": "there it is:\n{ \"_id\" : ObjectId(\"60be0b745c6f5a0ce036387d\"), \"dict_with_dots\" : { \"my.key\" : \"myValue\" } }", "username": "Mr_Nun" }, { "code": ".mongo", "text": "In general, it is not recommended to use dot (.) within a field name for MongoDB document (see Dot Notation - Document Field Access). Though it is permissible in some cases (like in mongo shell), PyMongo is not allowing it (as per your own try and the resulting error). There is a related JIRA issue to learn something about it: PyMongo - dots allowed in field names when updating.", "username": "Prasad_Saya" }, { "code": "$.", "text": "This is indeed the expected behaviour. Dots in field names is a bad practice and should be avoided.Until support is added in the query language, the use of $ and . in field names is not recommended and is not supported by the official MongoDB drivers.", "username": "MaBeuLux88" } ]
Unable to insert a dot '.' in a nested key via pymongo
2021-06-07T10:24:39.023Z
Unable to insert a dot ‘.’ in a nested key via pymongo
11,299
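If the dotted keys come from external JSON and cannot be restructured into nested documents as suggested above, one workaround is to escape the dots in key names before inserting (and reverse the mapping when reading back). A small sketch; the replacement character is an arbitrary choice:

```python
def escape_keys(value, old=".", new="\uff0e"):  # U+FF0E FULLWIDTH FULL STOP, arbitrary
    """Recursively replace dots in dict keys so PyMongo will accept the document."""
    if isinstance(value, dict):
        return {k.replace(old, new): escape_keys(v, old, new) for k, v in value.items()}
    if isinstance(value, list):
        return [escape_keys(v, old, new) for v in value]
    return value

my_dict_with_dots = {"my.key": "myValue"}
collection.insert_one({"dict_with_dots": escape_keys(my_dict_with_dots)})  # "collection" as in the question
```

Querying on escaped keys stays awkward, so the nested-document reshape suggested earlier is usually the better long-term fix.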
null
[ "aggregation", "python" ]
[ { "code": "totalDocsExaminedexecution statsagg_pipeline=[\n{\"$match\": {\n\"timestamp1\": {\"$gte\": datetime.strptime(\"2020-01-01 00:00:00\",\n \"%Y-%m-%d %H:%M:%S\"),\n\"$lte\" :datetime.strptime(\"2020-12-31 01:55:00\", \"%Y-%m-%d %H:%M:%S\")},\n\"id13\": {\"$gt\": 5}}},\n{\"$group\": {\n\"_id\": {\"$dateToString\": {\"format\": \"%Y-%m-%d %H\",\n \"date\": \"$timestamp1\"}}}},\n{\"$sort\": {\"_id\": -1}},\n{ \"$limit\": 5},\n{\"$project\": {\n\"_id\": 0,\n\"hour\":\"$_id\"}}\n\n]\n\nexplain_output = mydb1.command('aggregate', 'mongodb2indextimestamp1', pipeline=agg_pipeline, explain=True)\npprint(explain_output)\n{'ok': 1.0,\n 'serverInfo': {'gitVersion': '72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7',\n 'host': 'xaris-MS-7817',\n 'port': 27017,\n 'version': '4.4.6'},\n 'stages': [{'$cursor': {'queryPlanner': {'indexFilterSet': False,\n 'namespace': 'mongodbtime.mongodb2indextimestamp1',\n 'parsedQuery': {'$and': [{'timestamp1': {'$lte': datetime.datetime(2020, 12, 31, 1, 55)}},\n {'id13': {'$gt': 5}},\n {'timestamp1': {'$gte': datetime.datetime(2020, 1, 1, 0, 0)}}]},\n 'planCacheKey': '3A0C9E84',\n 'plannerVersion': 1,\n 'queryHash': 'DC05E87A',\n 'rejectedPlans': [],\n 'winningPlan': {'inputStage': {'direction': 'forward',\n 'indexBounds': {'id13': ['(5, '\n 'inf.0]'],\n 'timestamp1': ['[new '\n 'Date(1609379700000), '\n 'new '\n 'Date(1577836800000)]']},\n 'indexName': 'timestamp1_-1_id13_1',\n 'indexVersion': 2,\n 'isMultiKey': False,\n 'isPartial': False,\n 'isSparse': False,\n 'isUnique': False,\n 'keyPattern': {'id13': 1,\n 'timestamp1': -1},\n 'multiKeyPaths': {'id13': [],\n 'timestamp1': []},\n 'stage': 'IXSCAN'},\n 'stage': 'PROJECTION_COVERED',\n 'transformBy': {'_id': 0,\n 'timestamp1': 1}}}}},\n {'$group': {'_id': {'$dateToString': {'date': '$timestamp1',\n 'format': {'$const': '%Y-%m-%d '\n '%H'}}}}},\n {'$sort': {'limit': 5, 'sortKey': {'_id': -1}}},\n {'$project': {'_id': False, 'hour': '$_id'}}]}\n \"executionTimeMillis\" \"totalKeysExamined\"\"totalDocsExamined\"", "text": "Hello guys.I am trying to see how many totalDocsExamined with explain on my query,but i dont get this kind of information.I need to run execution stats but i dont know how\nThis is my queryAnd this is the the explain output:Is it possible to get information about \"executionTimeMillis\" \"totalKeysExamined\" \"totalDocsExamined\"?\nThanks in advance!", "username": "harris" }, { "code": "", "text": "@Pavel_Duchovny Hello .Can you help me with that?", "username": "harris" }, { "code": " agg_pipeline=[\n{\"$match\": {\n\"timestamp1\": {\"$gte\": ISODate(\"2020-01-01 00:00:00\",\n \"%Y-%m-%d %H:%M:%S\"),\n\"$lte\" :ISODate(\"2020-12-31 01:55:00\", \"%Y-%m-%d %H:%M:%S\")},\n\"id13\": {\"$gt\": 5}}},\n{\"$group\": {\n\"_id\": {\"$dateToString\": {\"format\": \"%Y-%m-%d %H\",\n \"date\": \"$timestamp1\"}}}},\n{\"$sort\": {\"_id\": -1}},\n{ \"$limit\": 5},\n{\"$project\": {\n\"_id\": 0,\n\"hour\":\"$_id\"}}\n];\n\ndb.mongodb2indextimestamp1.explain(true).aggregate(agg_pipeline);\n", "text": "Hi @harris,Perhaps try using the shell:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "\"timestamp1\": {\"$gte\": datetime.strptime(\"2020-01-01 00:00:00\",\n \"%Y-%m-%d %H:%M:%S\"),\n\"$lte\" :datetime.strptime(\"2020-12-31 01:55:00\", \"%Y-%m-%d %H:%M:%S\")},\n\"id13\": {\"$gt\": 5}}},\nagg_pipeline=[\n{\"$match\": {\n\"timestamp1\": {\"$gte\": datetime.strptime(\"2020-01-01 00:00:00\",\n \"%Y-%m-%d %H:%M:%S\"),\n\"$lte\" :datetime.strptime(\"2020-12-31 01:55:00\", \"%Y-%m-%d %H:%M:%S\")},\n\"id13\": 
{\"$gt\": 5}}},\n{\"$group\": {\n\"_id\": {\"$dateToString\": {\"format\": \"%Y-%m-%d %H\",\n \"date\": \"$timestamp1\"}}}},\n{\"$sort\": {\"_id\": -1}},\n{ \"$limit\": 5},\n{\"$project\": {\n\"_id\": 0,\n\"hour\":\"$_id\"}}\n]\n\nmydb1.mongodb2indextimestamp1.explain('true').aggregate(agg_pipeline)\n'Collection' object is not callable. If you meant to call the 'explain' method on a 'Collection' object it is failing because no such method exist", "text": "Hello\nYes i i did thatBut the output says:'Collection' object is not callable. If you meant to call the 'explain' method on a 'Collection' object it is failing because no such method exist", "username": "harris" }, { "code": "explain('true')explain(true)", "text": "I do not think that explain('true') is the same as explain(true).", "username": "steevej" }, { "code": "name 'true' is not definedmydb1.mongodb2indextimestamp1.explain(True).aggregate(agg_pipeline)'Collection' object is not callable. If you meant to call the 'explain' method on a 'Collection' object it is failing because no such method exists.", "text": "Yes but i code the queries with python.if dont use ‘true’ it gives me error name 'true' is not defined\nand if i use mydb1.mongodb2indextimestamp1.explain(True).aggregate(agg_pipeline) the output says\n'Collection' object is not callable. If you meant to call the 'explain' method on a 'Collection' object it is failing because no such method exists.", "username": "harris" }, { "code": "", "text": "The method I provided is via a mongo shell its not for python.I am not certain if its even possible in a python code to get it this way…", "username": "Pavel_Duchovny" }, { "code": "agg_pipeline=[\n{\"$match\": {\n\"samples.timestamp1\": {\"$gte\": ISODate(\"2010-01-01 00:00:00\",\n \"%Y-%m-%d %H:%M:%S\"),\n\"$lte\" :ISODate(\"2020-12-31 01:55:00\", \"%Y-%m-%d %H:%M:%S\")},\n\"id13\": {\"$gt\": 5}}},\n{\"$unwind\": \"$samples\"},\n{\"$match\": {\n\"samples.id13\": {\"$gt\": 5}}},\n{\"$group\": {\n\"_id\": {\"$dateToString\": {\"format\": \"%Y-%m-%d %H\",\n \"date\": \"$samples.timestamp1\"}},}},\n{\"$sort\": {\"_id\": -1}},\n{ \"$limit\": 5},\n{\"$project\": {\n\"_id\": 0,\n\"hour\":\"$_id\"}}\n];\n db.mongodbbucketnocpu2index.explain(true).aggregate(agg_pipeline);\n{\n\t\"stages\" : [\n\t\t{\n\t\t\t\"$cursor\" : {\n\t\t\t\t\"queryPlanner\" : {\n\t\t\t\t\t\"plannerVersion\" : 1,\n\t\t\t\t\t\"namespace\" : \"mongodbtime.mongodbbucketnocpu2index\",\n\t\t\t\t\t\"indexFilterSet\" : false,\n\t\t\t\t\t\"parsedQuery\" : {\n\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\"$lte\" : ISODate(\"2020-12-31T01:55:00Z\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"id13\" : {\n\t\t\t\t\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"2010-01-01T00:00:00Z\")\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"queryHash\" : \"D8B2A1E8\",\n\t\t\t\t\t\"planCacheKey\" : \"322C4E92\",\n\t\t\t\t\t\"winningPlan\" : {\n\t\t\t\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\"samples\" : 1,\n\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"id13\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gt\" : 
5\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"2010-01-01T00:00:00Z\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : -1,\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"samples.timestamp1_-1_samples.id13_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\"[new Date(1609379700000), new Date(-9223372036854775808)]\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"rejectedPlans\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\t\"samples\" : 1,\n\t\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"id13\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\"$lte\" : ISODate(\"2020-12-31T01:55:00Z\")\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : -1,\n\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : 1\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"indexName\" : \"samples.timestamp1_-1_samples.id13_1\",\n\t\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"[new Date(9223372036854775807), new Date(1262304000000)]\"\n\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"executionStats\" : {\n\t\t\t\t\t\"executionSuccess\" : true,\n\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\"executionTimeMillis\" : 582,\n\t\t\t\t\t\"totalKeysExamined\" : 
1156920,\n\t\t\t\t\t\"totalDocsExamined\" : 96410,\n\t\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 54,\n\t\t\t\t\t\t\"works\" : 1156921,\n\t\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\t\"needTime\" : 1156920,\n\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\"saveState\" : 1186,\n\t\t\t\t\t\t\"restoreState\" : 1186,\n\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\"samples\" : 1,\n\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t},\n\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"id13\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"2010-01-01T00:00:00Z\")\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 52,\n\t\t\t\t\t\t\t\"works\" : 1156921,\n\t\t\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\t\t\"needTime\" : 1156920,\n\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\"saveState\" : 1186,\n\t\t\t\t\t\t\t\"restoreState\" : 1186,\n\t\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\t\"docsExamined\" : 96410,\n\t\t\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\"nReturned\" : 96410,\n\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 47,\n\t\t\t\t\t\t\t\t\"works\" : 1156921,\n\t\t\t\t\t\t\t\t\"advanced\" : 96410,\n\t\t\t\t\t\t\t\t\"needTime\" : 1060510,\n\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\"saveState\" : 1186,\n\t\t\t\t\t\t\t\t\"restoreState\" : 1186,\n\t\t\t\t\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : -1,\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : 1\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"indexName\" : \"samples.timestamp1_-1_samples.id13_1\",\n\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\"[new Date(1609379700000), new Date(-9223372036854775808)]\"\n\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"keysExamined\" : 1156920,\n\t\t\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\t\t\"dupsTested\" : 1156920,\n\t\t\t\t\t\t\t\t\"dupsDropped\" : 1060510\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t\"allPlansExecution\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\"totalKeysExamined\" : 28929,\n\t\t\t\t\t\t\t\"totalDocsExamined\" : 2411,\n\t\t\t\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 
0,\n\t\t\t\t\t\t\t\t\"works\" : 28929,\n\t\t\t\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\t\t\t\"needTime\" : 28929,\n\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\"saveState\" : 58,\n\t\t\t\t\t\t\t\t\"restoreState\" : 57,\n\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\t\t\"samples\" : 1,\n\t\t\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"id13\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$gte\" : ISODate(\"2010-01-01T00:00:00Z\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\t\t\"works\" : 28929,\n\t\t\t\t\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\t\t\t\t\"needTime\" : 28929,\n\t\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\t\"saveState\" : 58,\n\t\t\t\t\t\t\t\t\t\"restoreState\" : 57,\n\t\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\t\"docsExamined\" : 2411,\n\t\t\t\t\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\t\t\"nReturned\" : 2411,\n\t\t\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 0,\n\t\t\t\t\t\t\t\t\t\t\"works\" : 28929,\n\t\t\t\t\t\t\t\t\t\t\"advanced\" : 2411,\n\t\t\t\t\t\t\t\t\t\t\"needTime\" : 26518,\n\t\t\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\t\t\"saveState\" : 58,\n\t\t\t\t\t\t\t\t\t\t\"restoreState\" : 57,\n\t\t\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : -1,\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : 1\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"indexName\" : \"samples.timestamp1_-1_samples.id13_1\",\n\t\t\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"[new Date(1609379700000), new Date(-9223372036854775808)]\"\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"keysExamined\" : 28929,\n\t\t\t\t\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\t\t\t\t\"dupsTested\" : 28929,\n\t\t\t\t\t\t\t\t\t\t\"dupsDropped\" : 26518\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 3,\n\t\t\t\t\t\t\t\"totalKeysExamined\" : 28929,\n\t\t\t\t\t\t\t\"totalDocsExamined\" : 2412,\n\t\t\t\t\t\t\t\"executionStages\" : {\n\t\t\t\t\t\t\t\t\"stage\" : \"PROJECTION_SIMPLE\",\n\t\t\t\t\t\t\t\t\"nReturned\" : 
0,\n\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 3,\n\t\t\t\t\t\t\t\t\"works\" : 28929,\n\t\t\t\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\t\t\t\"needTime\" : 28929,\n\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\"saveState\" : 1186,\n\t\t\t\t\t\t\t\t\"restoreState\" : 1186,\n\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\"transformBy\" : {\n\t\t\t\t\t\t\t\t\t\"samples\" : 1,\n\t\t\t\t\t\t\t\t\t\"_id\" : 0\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\t\t\t\t\t\"filter\" : {\n\t\t\t\t\t\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"id13\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : {\n\t\t\t\t\t\t\t\t\t\t\t\t\t\"$lte\" : ISODate(\"2020-12-31T01:55:00Z\")\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\"nReturned\" : 0,\n\t\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 3,\n\t\t\t\t\t\t\t\t\t\"works\" : 28929,\n\t\t\t\t\t\t\t\t\t\"advanced\" : 0,\n\t\t\t\t\t\t\t\t\t\"needTime\" : 28929,\n\t\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\t\"saveState\" : 1186,\n\t\t\t\t\t\t\t\t\t\"restoreState\" : 1186,\n\t\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\t\"docsExamined\" : 2412,\n\t\t\t\t\t\t\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\t\t\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\t\t\t\t\t\"nReturned\" : 2412,\n\t\t\t\t\t\t\t\t\t\t\"executionTimeMillisEstimate\" : 3,\n\t\t\t\t\t\t\t\t\t\t\"works\" : 28929,\n\t\t\t\t\t\t\t\t\t\t\"advanced\" : 2412,\n\t\t\t\t\t\t\t\t\t\t\"needTime\" : 26517,\n\t\t\t\t\t\t\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\t\t\t\t\t\t\"saveState\" : 1186,\n\t\t\t\t\t\t\t\t\t\t\"restoreState\" : 1186,\n\t\t\t\t\t\t\t\t\t\t\"isEOF\" : 0,\n\t\t\t\t\t\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : -1,\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : 1\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"indexName\" : \"samples.timestamp1_-1_samples.id13_1\",\n\t\t\t\t\t\t\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"samples\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\t\t\t\t\t\"samples.timestamp1\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"[new Date(9223372036854775807), new Date(1262304000000)]\"\n\t\t\t\t\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\t\t\t\t\"samples.id13\" : [\n\t\t\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t\t\t\"keysExamined\" : 28929,\n\t\t\t\t\t\t\t\t\t\t\"seeks\" : 1,\n\t\t\t\t\t\t\t\t\t\t\"dupsTested\" : 28929,\n\t\t\t\t\t\t\t\t\t\t\"dupsDropped\" : 26517\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : NumberLong(0),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(552)\n\t\t},\n\t\t{\n\t\t\t\"$unwind\" : {\n\t\t\t\t\"path\" : \"$samples\"\n\t\t\t},\n\t\t\t\"nReturned\" : 
NumberLong(0),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(552)\n\t\t},\n\t\t{\n\t\t\t\"$match\" : {\n\t\t\t\t\"samples.id13\" : {\n\t\t\t\t\t\"$gt\" : 5\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : NumberLong(0),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(552)\n\t\t},\n\t\t{\n\t\t\t\"$group\" : {\n\t\t\t\t\"_id\" : {\n\t\t\t\t\t\"$dateToString\" : {\n\t\t\t\t\t\t\"date\" : \"$samples.timestamp1\",\n\t\t\t\t\t\t\"format\" : {\n\t\t\t\t\t\t\t\"$const\" : \"%Y-%m-%d %H\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : NumberLong(0),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(552)\n\t\t},\n\t\t{\n\t\t\t\"$sort\" : {\n\t\t\t\t\"sortKey\" : {\n\t\t\t\t\t\"_id\" : -1\n\t\t\t\t},\n\t\t\t\t\"limit\" : NumberLong(5)\n\t\t\t},\n\t\t\t\"nReturned\" : NumberLong(0),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(552)\n\t\t},\n\t\t{\n\t\t\t\"$project\" : {\n\t\t\t\t\"hour\" : \"$_id\",\n\t\t\t\t\"_id\" : false\n\t\t\t},\n\t\t\t\"nReturned\" : NumberLong(0),\n\t\t\t\"executionTimeMillisEstimate\" : NumberLong(552)\n\t\t}\n\t],\n\t\"serverInfo\" : {\n\t\t\"host\" : \"xaris-MS-7817\",\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"4.4.6\",\n\t\t\"gitVersion\" : \"72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7\"\n\t},\n\t\"ok\" : 1\n}\n", "text": "Yes you are right.Can i ask you one last thing?My table contains 1.157.000 rows but i have used bucket pattern so i have 1 document that contains 12 subdocuments inside.I use the explain stat and i see something weird.It says that the planner used indexscan but it scanned the whole table.isnt this sequence scan?I post the query and the execution plan below.and the explain is here", "username": "harris" }, { "code": "explainaggregatedb.commandmongorunCommandexecutionStatsallPlansExecution", "text": "Hello @harris,In PyMongo, there is only one way to specify the explain on aggregate method - that is via the db.command (that is same as mongo shell’s runCommand). And, there are no option to specify the executionStats and allPlansExecution modes. This is an earlier post discussing the syntax:", "username": "Prasad_Saya" }, { "code": "\t\t\t\t\t\"totalKeysExamined\" : 1156920,\n\t\t\t\t\t\"totalDocsExamined\" : 96410,\n", "text": "Thank you @Prasad_Saya.If you have spare time can you take a look on my explain above.The explain says that we do an index scan,but it scanned all the rows of the table(1.157.000).Is there an explanation for that ?", "username": "harris" }, { "code": "winningPlan\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"samples.timestamp1\" : -1,\n\t\t \"samples.id13\" : 1\n\t\t },\n \"indexName\" : \"samples.timestamp1_-1_samples.id13_1\"\n \n\"executionStats\" : \n \"executionSuccess\" : true,\n \"nReturned\" : 0,\n \"executionTimeMillis\" : 582,\n \"totalKeysExamined\" : 1156920,\n \"totalDocsExamined\" : 96410,\n", "text": "Hello harriLooks like the index was used,1156920 index keys examined,and from them were FETCHED\n96410 documents. 
(a fetch is considered examined as well)\nEven if an index is used, a FETCH is still needed to get the other information from the documents.\nIf it were a collection scan it would say COLLSCAN, not IXSCAN, and totalDocsExamined would equal\nthe collection size (ndocs, 1 million+). This page from the Docs is very useful", "username": "Takis" }, { "code": "", "text": "@harris,This query is not very selective as it does only a range query over a multikey index, which is not considered selective, therefore lots of keys are scanned.See the following documentation for more:Thanks\nPavel", "username": "Pavel_Duchovny" } ]
How to get totalDocsExamined with explain in pymongo
2021-06-07T09:16:16.010Z
How to get totalDocsExamined with explain in pymongo
4,564
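PyMongo has no explain() helper for aggregation cursors, so the executionStats discussed in the thread above have to be requested through the server's explain command via db.command, as Prasad notes. A minimal sketch under assumptions follows: the connection string, the database name mydb and the collection name mycollection are placeholders, and the pipeline only mirrors the stages visible in the pasted plan.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local server
db = client["mydb"]  # hypothetical database name

pipeline = [
    {"$unwind": {"path": "$samples"}},
    {"$match": {"samples.id13": {"$gt": 5}}},
    {"$group": {"_id": {"$dateToString": {"date": "$samples.timestamp1",
                                          "format": "%Y-%m-%d %H"}}}},
]

# Wrap the aggregate command in an explain command; "verbosity" plays the
# role of the shell's explain("executionStats") / explain("allPlansExecution").
result = db.command({
    "explain": {"aggregate": "mycollection", "pipeline": pipeline, "cursor": {}},
    "verbosity": "executionStats",
})

# In the 4.4-style output pasted above, the interesting counters live under
# the first $cursor stage; the exact nesting can differ between versions.
stats = result["stages"][0]["$cursor"]["executionStats"]
print(stats["totalKeysExamined"], stats["totalDocsExamined"])
```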
null
[ "aggregation", "queries" ]
[ { "code": "$dayOfYear$dayOfYear$group", "text": "Hi guys!I’m trying to group documents with the operator $dayOfYear. Obviously, all dates are stored in UTC. The problem is if a document is recorded at 02/02/2021 02:00 AM UTC, in local time is 01/02/2021 11:00 PM, 4 hours of offset,So, how I can handle this? We don’t want to store timezone on every document because two users can make the same query but on different timezones, independently where was created the record.In short, how can set the timezone when using $dayOfYear with $group?", "username": "Matias_Lopez" }, { "code": "[{$group: {\n _id: { \"day\" : {$dayOfYear : {\"date\" : \"$saleDate\",\n \"timezone\" : \"GMT\"\n }}},\n count: {\n $sum : 1\n }\n}}]\n", "text": "Hi @Matias_Lopez,Welcome back to MongoDB Community.Why don’t you use the timezone in $dayOfYear syntax?Example for GMTBest regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "db.collection.aggregate([\n { \n $group: { _id: { day_of_year_local: { $dayOfYear: { date: \"$dateField\", timezone: \"-04:00\" } } } }\n }\n])\ntimezone: \"-04:00\"-4 hours", "text": "Hello @Matias_Lopez, you can try this:The timezone: \"-04:00\" is the -4 hours offset (or 4 hours behind UTC).", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks @Pavel_Duchovny and @Prasad_Saya!", "username": "Matias_Lopez" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Timezone with $dayOfYear on $group?
2021-06-06T18:12:56.629Z
Timezone with $dayOfYear on $group?
4,524
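The replies above show the shell form; for completeness, a hedged PyMongo version of the same timezone-aware grouping follows. The collection and field names (events, saleDate) are placeholders, and the timezone value is only an example; an Olson name such as America/New_York also tracks daylight saving, which a fixed -04:00 offset does not.

```python
from pymongo import MongoClient

coll = MongoClient()["mydb"]["events"]  # hypothetical database/collection

pipeline = [
    {"$group": {
        # Date operators such as $dayOfYear accept an optional timezone,
        # given either as an offset ("-04:00") or an Olson name.
        "_id": {"$dayOfYear": {"date": "$saleDate",
                               "timezone": "America/New_York"}},
        "count": {"$sum": 1},
    }},
    {"$sort": {"_id": 1}},
]

for doc in coll.aggregate(pipeline):
    print(doc)
```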
null
[ "data-modeling" ]
[ { "code": "$in", "text": "We are investigating if we can use mongodb as a cache where we fetch 100 keys everytimeStorage:\nWe have 1000 customers and upto 100000 of keys for each customer where values are 5KB JSON. We are expecting total size of data to be around 200GB.Access pattern:\nIn each request, we fetch 100 keys together (but all of them belong to same customer)Question 1:\nIf we use Hash(_id) as shard key, each request will need mongos router aggregating the data from multiple shards. Is that ok?\nIs mongos router efficient when I use $in clause with multiple _ids which belong in multiple physical nodes when sharded.Question 2:\nIs there a pattern of sharding which makes access more efficient?if it was redis, I could have use customerId has cluster hash tag so I can use MGET to fetch multiple keys together\nif it was cassandra, I could make (customer_id, key) as primary key with key portion as sort key to ensure queries go to same node for efficient retrievalII am new to MongoDB. Say, if I am sharding by using customerId here, wondering if that is optimal as it can lead to large chunks (which I read somewhere in docs that it is bad)", "username": "Hasan_Kumar" }, { "code": "{\ncustomerId : xxxxx,\nkeys : [\n { \"k\" : \"key1\" , \"v\" : \"value1\"},\n...\n { \"k\" : \"key100\" , \"v\" : \"value100\"}\n}\n}\n{ \"keys.k\" : 1, \"keys.v\" : 1}", "text": "Hi @Hasan_KumarWelcome to MongoDB Community.First I am not certain that 200GB of data is worth having a sharded cluster, why do you expect having a sharded cluster?Now when looking into the use case if you need to query 100 keys per customer together why not having them in the same document, for example:Will that work for you? Than you can index the custmerId and query one document which will be the best performance to fetch the 100 keys.If you need to update the keys or query by a specific key you can use the attribute pattern indexing { \"keys.k\" : 1, \"keys.v\" : 1}Learn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.For updates please look at array filters:In case you will need to shard this collection you may consider has sharding by “customerId” but than each customerId fetch will target a specific shard.I suggest to read the following blogs:Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks @Pavel_DuchovnyThanks for the input that sharding may not be needed for 200GB data Question: Each customer can have upto 100K keys with each value upto 5KB. If i am storing all keys for a single customer in one document, that will mean a single document of 500MB document. Isnt that a problem?But while accessing we need only 100 of those keys (assume filtered pagination).", "username": "Hasan_Kumar" }, { "code": "", "text": "@Hasan_Kumar,Limit the number of keys per document to be 100 only and bucket them into 1000 documents resulting in overall 100k keys (100 X 1000 docs).So each customer will have 1000 documents in a collection which is totally fine.Will that work?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "{customerId:1, keyId:1}", "text": "I think the answer depends on whether you are fetching specific 100 keys always or arbitrary 100 keys (for a particular customer). Can you tell us more about the use case? Are you adding new keys for each customer over time? 
Are you getting most recent 100 keys or using some other way to choose them?You mention customerId as a cluster hash tag, in MongoDB you can have a secondary index on any field or combination of fields (like on {customerId:1, keyId:1} for instance. Is there a reason you’re looking to store things as key-values rather than using full power of documents?Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Thanks @Asya_Kamsky & @Pavel_Duchovny\nEach customer (organization) has upto100000 tasks (created by different users) created over a period of few months.I need 100 arbitrary keys everytime so storing them in multiple documents may not be ideal as I don’t know before hand which document will it be in.\nI intend to store documents as keys but given the current access pattern, I will need the full document everytime (hence calling it a key-value store).\ne.g., give me documents with ids 1, 23, 56, 799, …, 100212 (all belonging to same customer who owns 100000 such other documents). (Note: 100 ids to fetched are not completely random and are determined by a query to our posgres database)Also what is the max recommended document size?", "username": "Hasan_Kumar" }, { "code": "{\ncustomerId : ...,\nkeyId : ...,\nValue : .... ,\n... \n}\n{customerId : 1, keyId : 1}Coll.find({customerId : \"xxx\", keyId : { $in : [ 1 , 23 ... ]})\n", "text": "Hi @Hasan_Kumar,The document limit is 16MB , while you potentially can have documents near that size its not really recommend due to the risk of hitting the limit and moving 16mb over network to the client per document will need an extreme justification…As @Asya_Kamsky mentioned why not storing keys data clustered by customer and keyId :Index : {customerId : 1, keyId : 1}\nYou can than potentially query all documents for a customer per set of key ids:Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you! Will try the suggestions here.", "username": "Hasan_Kumar" } ]
Using MongoDB as a key-value store which fetches multiple keys together
2021-05-26T12:50:12.363Z
Using MongoDB as a key-value store which fetches multiple keys together
14,286
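To make the flat customerId/keyId layout discussed above concrete, a rough PyMongo sketch follows. The collection name kv and the field names keyId/value are illustrative only; the point is the compound index plus a single $in query per batch of keys.

```python
from pymongo import MongoClient, ASCENDING

kv = MongoClient()["mydb"]["kv"]  # hypothetical key-value collection

# Compound index so each (customer, set of keys) fetch is index-driven.
kv.create_index([("customerId", ASCENDING), ("keyId", ASCENDING)])

def get_values(customer_id, key_ids):
    """Fetch an arbitrary batch of keys for one customer in a single query."""
    cursor = kv.find(
        {"customerId": customer_id, "keyId": {"$in": list(key_ids)}},
        {"_id": 0, "keyId": 1, "value": 1},
    )
    return {doc["keyId"]: doc["value"] for doc in cursor}
```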
null
[]
[ { "code": " realm?.syncSession?.resume();\n realm?.syncSession?.addProgressNotification(\n 'upload', \n 'forCurrentlyOutstandingWork',\n (transferred, transferable) => {\n if (transferred >= transferable) {\n realm?.syncSession?.pause();\n }\n }\n );\n realm?.syncSession?.resume();\n realm?.syncSession?.uploadAllLocalChanges().then(() => {\n console.log('pausing transferring (fulfilled)');\n realm?.syncSession?.pause();\n }, (reason) => {\n console.log(`pausing transferring (rejected: ${reason})`);\n realm?.syncSession?.pause();\n });\n", "text": "I’m developing an app where user can do numerous updates to the same data within a short period of time. On top of that, I would like to reduce syncing for not premium users to once per day. My motivation is to reduce the number of requests to the server.\nSimilar topics were raised already in this forum:\nHow can I avoid syncing on every commit?\nHow to optimize Realm Sync for performance\nand the only solution suggested was to handle that manually and put data into synced realm once a while.\nIn my case that adds a lot of complexity to my code. And it seems it should be easily avoided if I would pause the sync and resume it once in a some period, then pause again.I have done my experiments already and here is what I got (I’m using React Native).\nOption 1. This is just for upload, but I suppose it can be coupled with download callback. It has a drawback though – resume does not happen instantly – it waits for the next change in realm:Option 2. This also has a drawback – requests to the server are sent even when there were no local changes:Please advice and share your ideas. Do you know of any plans to support this “delayed” sync out of the box?", "username": "Maxim_Novoseltsev" }, { "code": "", "text": "The realm guys can correct me if I am wrong, but from what I understand, pausing/resuming won’t affect the number of requests at all. Each commit, will still count, even if the sync is paused, so I don’t think this will be a valid strategy for cutting sync costs.What I will have to do for my app is to use a “draft object”, i.e. a local copy of the object to edit. This will be stored in a local realm and I can do thousands of commits without triggering requests to the server. When done editing, I will copy the draft object over to the synced realm which should result in a single commit.Totally agree that this adds a lot of complexity, but it is the only solution I have found so far.", "username": "Simon_Persson" }, { "code": "", "text": "I have done another experiment (compared usage data) and unfortunately it seems you are correct about “pausing won’t affect the number of tracked requests”. However, pausing/resuming should (I couldn’t check that easily) affect Sync Runtime time, which is also billable.I would appreciate any additional thoughts and opinions regarding this topic.", "username": "Maxim_Novoseltsev" }, { "code": "", "text": "Would the sync time contribute significantly to your costs? I haven’t gone live with sync yet, but when I did the math, it didn’t look like this was something worth optimizing for.In my app I have roughly 15k DAU and an average of 10 min of app usage per user per day. In this case this would mean that the monthly usage would be roughly 15 000 * 10 * 30 = 4 500 000 minutes. At a cost of $0.08/1 000 000min, this would only be $0.36/month… i.e. 
not worth pausing sync for in my case.For me it will definitely be the Realm requests I need to be careful of, not sync time and probably not data transfer either.", "username": "Simon_Persson" } ]
Sync changes in batch once in a minute, an hour, a day
2021-06-05T08:39:47.718Z
Sync changes in batch once in a minute, an hour, a day
2,480
null
[ "react-native" ]
[ { "code": "", "text": "Hi, can someone better explain to me realm with react native and how it works, and the advantages? thank you.\nI would also like to know if it is possible to use a database by calling it with only code written by me without external packages.", "username": "Samuele_Cervietti" }, { "code": "", "text": "Hi @Samuele_Cervietti.Realm’s React Native SDK lets you store your data locally within your app. Realm is an object database and so it’s very straightforward to code against. There are also SDKs available for other platforms – including iOS, Android, and web.You can optionally use MongoDB Realm Sync to synchronize data between mobile app instances and with MongoDB Atlas in the backend.If you want to work with MongoDB Atlas data from an app without using an SDK, one option is to use the GraphQL API available in MongoDB Realm.You can find a lot of Realm material in the MongoDB Developer Hub.", "username": "Andrew_Morgan" } ]
Database MongoDB with React Native
2021-06-06T22:00:09.687Z
Database MongoDB with React Native
2,043
null
[ "server", "configuration" ]
[ { "code": "", "text": "I am an user of MongoDB Community Edition 4.0.8 version on my local platform Windows 7, 64 bit. I also use MongoDB Atlas Cluster as a network service user of MongoDB Community Edition. While I use the following command line from mongoshell, db.enableFreeMonitoring() , I get the following errorUnable to get response from the cloud monitoring service. We will continue to retry in the background. Please check your firewall settings to ensure that mongod can communicate with \"https: // MongoDB Free Monitoring.", "username": "Arindam_Biswas2" }, { "code": "", "text": "Hi @Arindam_Biswas2, specific apps like MongoDB server can be allowed through firewall on Windows. Check the settings. Also, if you are on a corporate network or corporate VPN, consult with network admins to find out if the outgoing traffic is being blocked to cloud.mongodb.com. Maybe, they need to explicitly add it to the allowed list.Let me know if any of this helps.Mahi", "username": "mahisatya" }, { "code": "", "text": "Thank you for your reply. Sorry for delay. I was using the older version of MongoDB that too on Windows 7 whose EOL has already been declared. I uninstalled both Windows 7 and older version of MongoDB and reinstalled MongoDB 4.4.6 on Windows 10.\nNow, I am getting cloud free monitoring.", "username": "Arindam_Biswas2" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error enabling free monitoring on MongoDB Community Server Version 4.0.8
2021-05-24T03:29:10.021Z
Error enabling free monitoring on MongoDB Community Server Version 4.0.8
3,051
null
[]
[ { "code": "mongosh> @mongosh/[email protected] evergreen-release\n> ts-node -r ../../scripts/import-expansions.js src/index.ts \"compile\"\n\n\n/work/jwoehr/MongoDB/mongosh/node_modules/bindings/bindings.js:126\n err = new Error(\n ^\nError: Could not locate the bindings file. Tried:\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/build/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/build/Debug/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/build/Release/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/out/Debug/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/Debug/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/out/Release/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/Release/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/build/default/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/compiled/16.3.0/linux/x64/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/addon-build/release/install-root/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/addon-build/debug/install-root/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/addon-build/default/install-root/deasync.node\n → /work/jwoehr/MongoDB/mongosh/node_modules/deasync/lib/binding/node-v93-linux-x64/deasync.node\n at bindings (/work/jwoehr/MongoDB/mongosh/node_modules/bindings/bindings.js:126:9)\n at Object.<anonymous> (/work/jwoehr/MongoDB/mongosh/node_modules/deasync/index.js:30:31)\n at Module._compile (node:internal/modules/cjs/loader:1109:14)\n at Module._extensions..js (node:internal/modules/cjs/loader:1138:10)\n at Object.require.extensions.<computed> [as .js] (/work/jwoehr/MongoDB/mongosh/node_modules/ts-node/src/index.ts:1045:43)\n at Module.load (node:internal/modules/cjs/loader:989:32)\n at Function.Module._load (node:internal/modules/cjs/loader:829:14)\n at Module.require (node:internal/modules/cjs/loader:1013:19)\n at require (node:internal/modules/cjs/helpers:93:18)\n", "text": "Been months since I tried to build mongoshNow it’s bottoming out here, any tips?", "username": "Jack_Woehr" }, { "code": "", "text": "Hi Jack!This is something I occasionally run into as well - it most likely means that you’re trying to run the build step with Node.js 16, and have run a previous install step with another Node.js version.We’re currently doing all of our work on mongosh with Node.js 14, so I’d recommend you do the same. 
Hope this helps!", "username": "Anna_Henningsen" }, { "code": "mongoshnpm run bootstrap\nnpm run compile-exec\n$ nvm ls\n v12.18.4\n v14.2.0\n-> v14.5.0\n v14.15.3\n v16.3.0\ndefault -> v14.5.0\niojs -> N/A (default)\nunstable -> N/A (default)\nnode -> stable (-> v16.3.0) (default)\nstable -> 16.3 (-> v16.3.0) (default)\nlts/* -> lts/fermium (-> N/A)\nlts/argon -> v4.9.1 (-> N/A)\nlts/boron -> v6.17.1 (-> N/A)\nlts/carbon -> v8.17.0 (-> N/A)\nlts/dubnium -> v10.24.1 (-> N/A)\nlts/erbium -> v12.22.1 (-> N/A)\nlts/fermium -> v14.17.0 (-> N/A)\n", "text": "Thanks @Anna_Henningsen … as I noted, it’s been a while since I was following the evolution of mongosh … is it still correct to build using these 2 command lines?BTW …", "username": "Jack_Woehr" }, { "code": "0 info it worked if it ends with ok\n1 verbose cli [\n1 verbose cli '/home/jwoehr/.nvm/versions/node/v14.5.0/bin/node',\n1 verbose cli '/home/jwoehr/.nvm/versions/node/v14.5.0/bin/npm',\n1 verbose cli 'run',\n1 verbose cli 'evergreen-release',\n1 verbose cli '--',\n1 verbose cli 'compile'\n1 verbose cli ]\n2 info using [email protected]\n3 info using [email protected]\n4 verbose run-script [\n4 verbose run-script 'preevergreen-release',\n4 verbose run-script 'evergreen-release',\n4 verbose run-script 'postevergreen-release'\n4 verbose run-script ]\n5 info lifecycle @mongosh/[email protected]~preevergreen-release: @mongosh/[email protected]\n6 info lifecycle @mongosh/[email protected]~evergreen-release: @mongosh/[email protected]\n7 verbose lifecycle @mongosh/[email protected]~evergreen-release: unsafe-perm in lifecycle true\n8 verbose lifecycle @mongosh/[email protected]~evergreen-release: PATH: /home/jwoehr/.nvm/versions/node/v14.5.0/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/work/jwoehr/MongoDB/mongosh/packages/build/node_modules/.bin:/home/jwoehr/.nvm/versions/node/v14.5.0/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/work/jwoehr/MongoDB/mongosh/node_modules/.bin:/home/jwoehr/.nvm/versions/node/v14.5.0/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/work/jwoehr/MongoDB/mongosh/node_modules/.bin:/home/jwoehr/.nvm/versions/node/v14.5.0/bin:/home/jwoehr/.local/bin:/home/jwoehr/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/jwoehr/work/jwoehr/gopath/bin\n9 verbose lifecycle @mongosh/[email protected]~evergreen-release: CWD: /work/jwoehr/MongoDB/mongosh/packages/build\n10 silly lifecycle @mongosh/[email protected]~evergreen-release: Args: [\n10 silly lifecycle '-c',\n10 silly lifecycle 'ts-node -r ../../scripts/import-expansions.js src/index.ts \"compile\"'\n10 silly lifecycle ]\n11 silly lifecycle @mongosh/[email protected]~evergreen-release: Returned: code: 1 signal: null\n12 info lifecycle @mongosh/[email protected]~evergreen-release: Failed to exec evergreen-release script\n13 verbose stack Error: @mongosh/[email protected] evergreen-release: `ts-node -r ../../scripts/import-expansions.js src/index.ts \"compile\"`\n13 verbose stack Exit status 1\n13 verbose stack at EventEmitter.<anonymous> (/home/jwoehr/.nvm/versions/node/v14.5.0/lib/node_modules/npm/node_modules/npm-lifecycle/index.js:332:16)\n13 verbose stack at EventEmitter.emit (events.js:314:20)\n13 verbose stack at ChildProcess.<anonymous> (/home/jwoehr/.nvm/versions/node/v14.5.0/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)\n13 verbose stack at ChildProcess.emit (events.js:314:20)\n13 verbose stack at maybeClose (internal/child_process.js:1051:16)\n13 verbose stack at 
Process.ChildProcess._handle.onexit (internal/child_process.js:287:5)\n14 verbose pkgid @mongosh/[email protected]\n15 verbose cwd /work/jwoehr/MongoDB/mongosh/packages/build\n16 verbose Linux 5.11.14-200.fc33.x86_64\n17 verbose argv \"/home/jwoehr/.nvm/versions/node/v14.5.0/bin/node\" \"/home/jwoehr/.nvm/versions/node/v14.5.0/bin/npm\" \"run\" \"evergreen-release\" \"--\" \"compile\"\n18 verbose node v14.5.0\n19 verbose npm v6.14.10\n20 error code ELIFECYCLE\n21 error errno 1\n22 error @mongosh/[email protected] evergreen-release: `ts-node -r ../../scripts/import-expansions.js src/index.ts \"compile\"`\n22 error Exit status 1\n23 error Failed at the @mongosh/[email protected] evergreen-release script.\n23 error This is probably not a problem with npm. There is likely additional logging output above.\n24 verbose exit [ 1, true ]\n", "text": "Here’s the failure log.", "username": "Jack_Woehr" }, { "code": "node_modules", "text": "is it still correct to build using these 2 command lines?Yes – that should still work and is essentially what we do in CI as well.Is it still failing with the same error (“Could not locate the bindings file”) when using Node.js 14? If so, you may need to remove the top-level node_modules folder first and re-bootstrap using Node.js 14 as well.", "username": "Anna_Henningsen" }, { "code": "export SEGMENT_API_KEY=\"dummy\"", "text": "Build successfully!\n1 mistake and 1 omission on my part:", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error building mongosh
2021-06-05T23:19:54.754Z
Error building mongosh
4,179
null
[ "kafka-connector" ]
[ { "code": "copy.existingtruecopy.existing.pipeline", "text": "Hi team,\nI am using MongoDb Kafka Source connector to produce the existing document in Atlas DB into the Kafka topic. I want to exclude a collection from the database from which the documents are fetched.\nI set the pipeline option as:[{\"$match\": {“ns.coll”: {\"$regex\": /^(?!collection_to_be_excluded).*/}}}]along with copy.existing set to true. But I am not seeing any existing records from the allowed collections.\nUsing pipeline works fine when not copying existing data. So the newly added records in the allowed collections will be produced into the Kafka topic.I tried setting copy.existing.pipeline similar to pipeline value above, but still no record is getting into the topic.\nHow can I filter the collections for existing documents?Many thanks.cc: @Robert_Walters", "username": "Rajendra_Dangwal" }, { "code": "copy.existing.namespace.regex", "text": "Setting copy.existing.namespace.regex as:^(database_to_watch.(?!collection_to_be_excluded$)).+$will work. But is there a way we can make use of pipeline here?", "username": "Rajendra_Dangwal" }, { "code": "", "text": "Are you using the free tier or shared tier MongoDB Atlas size?", "username": "Robert_Walters" }, { "code": "", "text": "It is not working for both the free as well as dedicated (M30) tier.", "username": "Rajendra_Dangwal" }, { "code": "", "text": "Will the regex work for you or do you need to use a pipeline for copy.existing? If the latter can you describe the use case?", "username": "Robert_Walters" }, { "code": "", "text": "Never mind. The version 1.5.1 of the MongoDb connector fixed the issue.\nThis was the bug in previous version: https://jira.mongodb.org/browse/KAFKA-131Thanks a lot for your help.", "username": "Rajendra_Dangwal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Get the existing documents from selective collections from a database
2021-05-25T18:25:26.383Z
Get the existing documents from selective collections from a database
2,507
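For readers who want the working configuration from this thread in one place, the sketch below registers the source connector through Kafka Connect's REST API (connector version 1.5.1 or later, per the fix above). The host/port, credentials and connector name are placeholders; the two filter settings are copied from the thread.

```python
import requests  # assumes Kafka Connect's REST API is reachable on :8083

connector = {
    "name": "mongo-source-filtered",  # hypothetical connector name
    "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb+srv://user:[email protected]",  # placeholder
        "database": "database_to_watch",
        # Live change stream: drop events from the excluded collection.
        "pipeline": "[{\"$match\": {\"ns.coll\": {\"$regex\": \"^(?!collection_to_be_excluded).*\"}}}]",
        # Initial copy: same exclusion, expressed as a namespace regex.
        "copy.existing": "true",
        "copy.existing.namespace.regex": "^(database_to_watch.(?!collection_to_be_excluded$)).+$",
    },
}

resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()
print(resp.json())
```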
null
[ "data-modeling" ]
[ { "code": "", "text": "We have a collection - Advertisement. Like to add an additional parameter - advertiserId.\nHave another collection - deal which has advertiserId and advertisementId parameters.\nLike to advertiserId to Advertisement based on the deal collection where the documents has both advertiserId and advertisementId.\nHow do we do this ?", "username": "prasad_mokkapati" }, { "code": "db.advertisment.update({_id : .... }, {$set : { advertiserId: \"YYY\" }})\n\ndb.deal.update({_id : .... }, {$set : { advertisementId : \"XXX\", \"advertiserId : \"YYY\" }})\n", "text": "Hi @prasad_mokkapati,Welcome to MongoDB community.Hopefully I understand your intentions correctly and you wish to populate two collection with documents that have “relationship” fields between them which is a normal design in MongoDB.However, since MongoDB has a flexible schema we don’t enforce this relationships and application should either populate the documents correctly or use a $merge aggregation to copy data from one set of documents to another by specifying the merging conditions.You can populate the collections during your application CRUD or via a script…What you can restrict in MongoDB is the structure of you documents to enforce specific fields presence using the Json schema validationThis JSON schema tutorial will walk through the basics of setting JSON schema standards and using schema for validation in MongoDB Atlas and MongoDB Compass.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Adding field with relationship to another collection
2021-06-05T17:59:22.685Z
Adding field with relationship to another collection
4,270
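The $merge route mentioned above can be spelled out roughly as below in PyMongo. It assumes deal.advertisementId holds the _id of the matching advertisement document and that the collections are named advertisement and deal; if an advertisement has several deals, the last one processed wins, so real code may need a de-duplication step first.

```python
from pymongo import MongoClient

db = MongoClient()["mydb"]  # hypothetical database name

db.deal.aggregate([
    # Keep only the linking fields, renaming advertisementId to _id so that
    # $merge can match it against advertisement._id.
    {"$project": {"_id": "$advertisementId", "advertiserId": 1}},
    {"$merge": {
        "into": "advertisement",
        "on": "_id",
        "whenMatched": "merge",       # copy advertiserId onto the match
        "whenNotMatched": "discard",  # ignore deals with no advertisement
    }},
])
```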
null
[ "compass" ]
[ { "code": "", "text": "I am running OSX 11.4. I am going through the introductory tutorials on MongoDB’s site. I can set up the organization, project, and clusters and load example files without problem. But I cannot connect with Compass. I have copied and pasted the information from MongoDB.com directly into Compass and changed the password to match the tutorial and I get a “bad auth” error and no connection.", "username": "Philo_Calhoun" }, { "code": "mongodb+srv://m001-student:<m001-mongodb-basics>@sandbox.kffoe.mongodb.net/test", "text": "mongodb+srv://m001-student:<m001-mongodb-basics>@sandbox.kffoe.mongodb.net/test\nis the generated link that does not work in compass (copied and pasted directly from the intro lesson)", "username": "Philo_Calhoun" }, { "code": "", "text": "Remove < > from password in your string and try again.It should work", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thank you!! That fixed it.", "username": "Philo_Calhoun" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Cannot connect with compass
2021-06-05T00:25:10.593Z
Cannot connect with compass
1,513
null
[ "swift", "atlas-device-sync" ]
[ { "code": "", "text": "My application uses a freemium model, wherein free users can interact with the application locally without an account.Currently data is stored in local files on-device.I would like to migrate these local files to default Realm, and use Realm for persistence.I would then like to offer the ability to backup & synchronize data for paying users of my application. However, this requires using “Sign in With Apple” in my case to authenticate them, and allow me to create a persistent, synced Realm.However, a user can “Stop using Apple ID” with my application via Settings, which effectively signs them out. Alternatively, their subscription could lapse, in which case they should no longer be able to sync their data, and revert to local-only modifications.This model is predicated on my ability to open a default Realm, operate within it, then convert it to a synced Realm, and at any point, stop syncing and revert back to a local-only Realm, much like the default Realm.Is this possible? I’m trying to figure it out from the docs and from my own sample apps but I’m coming up short on conclusive answers.", "username": "Majd_Taby" }, { "code": "", "text": "The only other solution I could think of is to manually copy all objects from one realm to another and back whenever the user “upgrades” or “downgrades”, but oof, that sounds scary.In my case, once a subscription lapses or an account is disabled, then the app should behave as if it doesn’t have a network connection, and all operations are local-only, unless the user renews their subscription or re-links the app, in which case changes are synced back up to the server.", "username": "Majd_Taby" }, { "code": "", "text": "Your proposed solution is the one we would recommend - you will need to copy data between sync and non-sync realms. That is the only option right now.", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upgrading a Default Realm to a Synced Realm and Vice Versa
2021-06-04T23:20:05.877Z
Upgrading a Default Realm to a Synced Realm and Vice Versa
2,339
null
[ "production", "cxx" ]
[ { "code": "", "text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.6.5.Please note that this version of mongocxx requires the MongoDB C driver 1.17.0.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.NOTE: The mongocxx 3.6.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C++11 Driver 3.6.5 Released
2021-06-04T20:43:51.632Z
MongoDB C++11 Driver 3.6.5 Released
1,945
null
[ "java", "spring-data-odm" ]
[ { "code": "com.mongodb.MongoException: java.lang.NoClassDefFoundError: jdk/net/ExtendedSocketOptions\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:157) ~[mongodb-driver-core-4.1.2.jar:na]\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.lookupServerDescription(DefaultServerMonitor.java:188) ~[mongodb-driver-core-4.1.2.jar:na]\n\tat com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:144) ~[mongodb-driver-core-4.1.2.jar:na]\n\tat java.lang.Thread.run(Thread.java:744) [na:1.8.0]\nCaused by: java.lang.NoClassDefFoundError: jdk/net/ExtendedSocketOptions\n\tat com.mongodb.internal.connection.SocketStreamHelper.setExtendedSocketOptions(SocketStreamHelper.java:83) ~[mongodb-driver-core-4.1.2.jar:na]\n\tat com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:53) ~[mongodb-driver-core-4.1.2.jar:na]\n\tat com.mongodb.internal.connection.SocketStream.initializeSocket(SocketStream.java:79) ~[mongodb-driver-core-4.1.2.jar:na]\n\tat com.mongodb.internal.connection.SocketStream.open(SocketStream.java:65) ~[mongodb-driver-core-4.1.2.jar:na]\n\tat com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:143) ~[mongodb-driver-core-4.1.2.jar:na]\n\t... 3 common frames omitted\nCaused by: java.lang.ClassNotFoundException: jdk.net.ExtendedSocketOptions\n\tat java.net.URLClassLoader$1.run(URLClassLoader.java:372) ~[na:1.8.0]\n\tat java.net.URLClassLoader$1.run(URLClassLoader.java:361) ~[na:1.8.0]\n\tat java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0]\n\tat java.net.URLClassLoader.findClass(URLClassLoader.java:360) ~[na:1.8.0]\n\tat java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[na:1.8.0]\n\tat sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.8.0]\n\tat java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[na:1.8.0]\n\t... 8 common frames omitted\n", "text": "Hello everyone,While running my Springboot application am getting below error. Please help me to resolve this issue.", "username": "Ripal_Bhagat" }, { "code": "", "text": "Hi there.This issue is tracked in https://jira.mongodb.org/browse/JAVA-4005 and a fix is available in the 4.2.2 release.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Thank you so much Jeff. I will go through the link and fix my issue.", "username": "Ripal_Bhagat" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
java.lang.NoClassDefFoundError: jdk/net/ExtendedSocketOptions | SpringBoot | MongoDB atlas
2021-06-04T13:49:01.028Z
java.lang.NoClassDefFoundError: jdk/net/ExtendedSocketOptions | SpringBoot | MongoDB atlas
3,235
null
[ "production", "cxx" ]
[ { "code": "", "text": "The MongoDB C++ Driver Team is pleased to announce the availability of mongocxx-3.6.4.Please note that this version of mongocxx requires the MongoDB C driver 1.17.0.See the MongoDB C++ Driver Manual and the Driver Installation Instructions for more details on downloading, installing, and using this driver.NOTE: The mongocxx 3.6.x series does not promise API or ABI stability across patch releases.Please feel free to post any questions on the MongoDB Community forum in the Drivers, ODMs, and Connectors category tagged with cxx. Bug reports should be filed against the CXX project in the MongoDB JIRA. Your feedback on the C++11 driver is greatly appreciated.Sincerely,The C++ Driver Team", "username": "Kevin_Albertson" }, { "code": "r3.6.4$ git tag -d r3.6.4\nDeleted tag 'r3.6.4' (was c585e14fa)\n$ git pull\nFrom github.com:mongodb/mongo-cxx-driver\n * [new tag] r3.6.4 -> r3.6.4\nAlready up to date\n", "text": "The initial 3.6.4 release had incorrectly tagged the commit c585e14fabf865c76c916bc1300bd59454ac0f4d with the tag r3.6.4. However, the release tarball was built from the correct commit. The tag was corrected the following day to refer to the commit 8a9ce93234f020f250c6dea1434865984c64e2c0. We apologize for any disruption this may have caused.If you updated tags during that time, you can correct the tag in your cloned copy as follows:", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C++11 Driver 3.6.4 Released
2021-06-04T01:01:36.747Z
MongoDB C++11 Driver 3.6.4 Released
2,145
null
[ "aggregation", "performance" ]
[ { "code": "> db.mycollection.aggregate ( \n [ { $group: {_id: \"$eventType\", applicationCount: {\"$sum\": 1} } } ], \n{ cursor: { batchSize: 32 }, allowDiskUse: false} )\nwinningPlan: { stage: 'COLLSCAN', direction: 'forward' },\n executionStats: {\n executionSuccess: true,\n nReturned: 786389,\n executionTimeMillis: 10409,\n totalKeysExamined: 0,\n totalDocsExamined: 786389,\ndb.mycollection.aggregate ( \n [ { $group: {_id: \"$eventType\", applicationCount: {\"$sum\": 1} } } ],\n { cursor: { batchSize: 32 }, allowDiskUse: false, hint: { eventType: 1 } } \n)\n stage: 'PROJECTION_COVERED',\n transformBy: { eventType: 1, _id: 0 },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { eventType: 1 },\n indexName: 'eventType_1',\n isMultiKey: false,\n executionStats: {\n executionSuccess: true,\n nReturned: 786389,\n executionTimeMillis: 5994,\n totalKeysExamined: 786389,\n", "text": "Hello,My query is something like that :The collection has above 780k items, so I think it is not huge; I have an index on the “eventType” field - but in spite of that it is very slow. With an explain I haveHowever, if I modify my query adding a hint, such asexplain(“executionStats”) shows an improvement:However, I still find the execution time long! (almost 6 seconds).\nMy questions are:", "username": "John_Me" }, { "code": "", "text": "why is it not using the index directly and needs to specify it?Most probably because you do not have $sort or $match stage.how could the query be improvedIt depends of your documents. If your entire collection does not fit in RAM then documents must be read from disk.I would try to $sort on eventType and then $project { _id:0 , eventType:1 }, then the index should be used and luckily it fits in memory and no documents will be read from disk.", "username": "steevej" }, { "code": "> db.myCollection.aggregate ( [ { $group: {_id: \"$eventType\", applicationCount: {\"$sum\": 1} } }, {$project: {_id:1}} ], { cursor: { batchSize: 32 }, allowDiskUse: false } ).explain(\"executionStats\")\n{\n stages: [\n {\n '$cursor': {\n query: {},\n fields: { eventType: 1, _id: 0 },\n queryPlanner: {\n plannerVersion: 1,\n namespace: 'mydb.myCollection',\n indexFilterSet: false,\n parsedQuery: {},\n queryHash: '8B3D4AB8',\n planCacheKey: '8B3D4AB8',\n winningPlan: { stage: 'COLLSCAN', direction: 'forward' },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 786389,\n executionTimeMillis: 10144,\n totalKeysExamined: 0,\n totalDocsExamined: 786389,\n executionStages: {\n stage: 'COLLSCAN',\n nReturned: 786389,\n executionTimeMillisEstimate: 1735,\n...\n", "text": "Either I’m missing something, or it really doesn’t improve the result:", "username": "John_Me" }, { "code": "", "text": "I would try to $sort on eventType and then $project { _id:0 , eventType:1 }, then the index should be used and luckily it fits in memory and no documents will be read from disk.You do that before the group stage. The sort will enforce the use of the index. 
And the projection will not fetch the documents from disk since you only project fields in the index.", "username": "steevej" }, { "code": ".aggregate ( [{ $sort: { eventType: 1 } }, \n{ $project: { eventType: 1, _id: 0 } }, \n{ $group: { _id: \"$eventType\", count: { $sum: 1 } } }] , \n{ cursor: { batchSize: 32 }, allowDiskUse: false } )\nwinningPlan: {\n stage: 'PROJECTION_COVERED',\n transformBy: { eventType: 1, _id: 0 },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { eventType: 1, productId: 1 },\n indexName: 'eventType_1_productId_1',\n...\nexecutionStats: {\n executionSuccess: true,\n nReturned: 786389,\n executionTimeMillis: 16786,\n totalKeysExamined: 786389,\n...\n {\n v: 2,\n key: { eventType: 1, productId: 1 },\n name: 'eventType_1_productId_1',\n ns: '...'\n },\n...\n {\n v: 2,\n key: { eventType: 1, workspaceName: 1, productId: 1 },\n name: 'eventType_1_workspaceName_1_productId_1',\n ns: '...'\n },\n...\n {\n v: 2,\n key: { eventType: 1, 'session.serviceName': 1 },\n name: 'eventType_1_session.serviceName_1',\n ns: '...'\n },\n {\n v: 2,\n key: { eventType: 1 },\n name: 'eventType_1',\n ns: '...'\n },\n...\n", "text": "Yes, it seems now it is using an index -> I am usingHowever, explain show that:So I would rather say that it is using a suboptimal key, that spans over 2 fields -> getIndexes () showsIs that a good explanation?", "username": "John_Me" }, { "code": "", "text": "So I would rather say that it is using a suboptimal keyI do not think it matters. As long as you get an index scan and it is covered.I think that what is sub-optimal, is to have an index that is a prefix of another one. You are just using more RAM for no real benefit as using the index eventType_1 will not give any benefit, in most case, because any query involving eventType can be served by the other indexes. I wrote in most case because may be, just may be, if you are low in RAM and eventType_1 is the only index that fits in RAM, then all others require disk access. 
But if you have this scenario then you have more serious problem anyway so you should not have this scenario.", "username": "steevej" }, { "code": "> db.myCollection.getIndexes ()\n[\n {\n v: 2,\n key: { _id: 1 },\n name: '_id_',\n ns: '...'\n },\n {\n v: 2,\n key: { eventType: 1 },\n name: 'eventType_1',\n ns: '...'\n }\n]\ndb.myCollection.aggregate ( [ { $sortByCount: \"$eventType\" } ], { allowDiskUse: false}).explain (\"executionStats\"){\n stages: [\n {\n '$cursor': {\n query: {},\n fields: { eventType: 1, _id: 0 },\n queryPlanner: {\n plannerVersion: 1,\n namespace: '...',\n indexFilterSet: false,\n parsedQuery: {},\n queryHash: '8B3D4AB8',\n planCacheKey: '8B3D4AB8',\n winningPlan: { stage: 'COLLSCAN', direction: 'forward' },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 783254,\n executionTimeMillis: 9941,\n totalKeysExamined: 0,\n totalDocsExamined: 783254,\n executionStages: {\n stage: 'COLLSCAN',\n nReturned: 783254,\n executionTimeMillisEstimate: 1181,\n works: 783256,\n advanced: 783254,\n needTime: 1,\n needYield: 0,\n saveState: 6217,\n restoreState: 6217,\n isEOF: 1,\n direction: 'forward',\n docsExamined: 783254\n }\n }\n }\n },\n {\n '$group': { _id: '$eventType', count: { '$sum': { '$const': 1 } } }\n },\n { '$sort': { sortKey: { count: -1 } } }\n ],\n serverInfo: {\n host: 'mongo-mongodb-56c7dffc8c-f5mmj',\n port: 27017,\n version: '4.2.4',\n gitVersion: 'b444815b69ab088a808162bdb4676af2ce00ff2c'\n },\n ok: 1\n}\n{\n stages: [\n {\n '$cursor': {\n query: {},\n fields: { eventType: 1, _id: 0 },\n queryPlanner: {\n plannerVersion: 1,\n namespace: '...',\n indexFilterSet: false,\n parsedQuery: {},\n queryHash: 'EFB2EDD9',\n planCacheKey: 'EFB2EDD9',\n winningPlan: {\n stage: 'PROJECTION_COVERED',\n transformBy: { eventType: 1, _id: 0 },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { eventType: 1 },\n indexName: 'eventType_1',\n isMultiKey: false,\n multiKeyPaths: { eventType: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { eventType: [ '[MinKey, MaxKey]' ] }\n }\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 783254,\n executionTimeMillis: 6770,\n totalKeysExamined: 783254,\n totalDocsExamined: 0,\n executionStages: {\n stage: 'PROJECTION_COVERED',\n nReturned: 783254,\n executionTimeMillisEstimate: 729,\n works: 783255,\n advanced: 783254,\n needTime: 0,\n needYield: 0,\n saveState: 6206,\n restoreState: 6206,\n isEOF: 1,\n transformBy: { eventType: 1, _id: 0 },\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 783254,\n executionTimeMillisEstimate: 550,\n works: 783255,\n advanced: 783254,\n needTime: 0,\n needYield: 0,\n saveState: 6206,\n restoreState: 6206,\n isEOF: 1,\n keyPattern: { eventType: 1 },\n indexName: 'eventType_1',\n isMultiKey: false,\n multiKeyPaths: { eventType: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { eventType: [ '[MinKey, MaxKey]' ] },\n keysExamined: 783254,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n }\n }\n },\n {\n '$group': { _id: '$eventType', count: { '$sum': { '$const': 1 } } }\n },\n { '$sort': { sortKey: { count: -1 } } }\n ],\n", "text": "OK, I have copied my collection to a new one and I have added a single index (on my interest field) so now I haveBut when I execute my query (db.myCollection.aggregate ( [ { $sortByCount: \"$eventType\" } ], { allowDiskUse: false}).explain 
(\"executionStats\")) it still does a COLLSCAN only !!!The only way to force index usage is to add the hint; I wouldn’t mind it too much, but even in this case it is very slow (for a 783k items collection), and there are about 143 different values for eventType:Why totalKeysExamined is equal with the number of documents instead of being equal with the number of distinct values?", "username": "John_Me" }, { "code": "> db.mycollection.explain(\"executionStats\").aggregate ( [{ $sort: { eventType: 1 } }, \n{ $project: { eventType: 1, _id: 0 } }, \n{ $group: { _id: \"$eventType\", count: { $sum: 1 } } }] ,\n{ cursor: { batchSize: 320 }, allowDiskUse: false } )\n{\n stages: [\n {\n '$cursor': {\n query: {},\n sort: { eventType: 1 },\n fields: { eventType: 1, _id: 0 },\n queryPlanner: {\n plannerVersion: 1,\n namespace: '...',\n indexFilterSet: false,\n parsedQuery: {},\n queryHash: '34AFD5A6',\n planCacheKey: '34AFD5A6',\n winningPlan: {\n stage: 'PROJECTION_COVERED',\n transformBy: { eventType: 1, _id: 0 },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { eventType: 1 },\n indexName: 'eventType_1',\n isMultiKey: false,\n multiKeyPaths: { eventType: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { eventType: [ '[MinKey, MaxKey]' ] }\n }\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 783254,\n executionTimeMillis: 7728,\n totalKeysExamined: 783254,\n totalDocsExamined: 0,\n executionStages: {\n stage: 'PROJECTION_COVERED',\n nReturned: 783254,\n executionTimeMillisEstimate: 991,\n works: 783255,\n advanced: 783254,\n needTime: 0,\n needYield: 0,\n saveState: 6213,\n restoreState: 6213,\n isEOF: 1,\n transformBy: { eventType: 1, _id: 0 },\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 783254,\n executionTimeMillisEstimate: 868,\n works: 783255,\n advanced: 783254,\n needTime: 0,\n needYield: 0,\n saveState: 6213,\n restoreState: 6213,\n isEOF: 1,\n keyPattern: { eventType: 1 },\n indexName: 'eventType_1',\n isMultiKey: false,\n multiKeyPaths: { eventType: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { eventType: [ '[MinKey, MaxKey]' ] },\n keysExamined: 783254,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n }\n }\n },\n {\n '$group': { _id: '$eventType', count: { '$sum': { '$const': 1 } } }\n }\n ],\n... \n", "text": "The same thing is is using your suggestion:And in all cases the query duration is something above 6-7 seconds, that I consider too much (and I’am afraid that it will depend of number of items in the collection).", "username": "John_Me" } ]
How to accelerate an aggregate query with $sum?
2021-06-03T09:37:07.225Z
How to accelerate an aggregate query with $sum?
6,043
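For anyone driving this from an application rather than the shell, the hint that produced the covered IXSCAN above can be passed straight to aggregate in PyMongo; the sketch below uses placeholder names and simply mirrors the thread's query. Even the covered plan still examines one index key per document, which is why totalKeysExamined in the output above matches the document count rather than the 143 distinct values.

```python
from pymongo import MongoClient

coll = MongoClient()["mydb"]["mycollection"]  # hypothetical names

pipeline = [
    {"$group": {"_id": "$eventType", "applicationCount": {"$sum": 1}}},
]

# hint is an option of the aggregate command (MongoDB 3.6+); forcing the
# single-field index keeps the plan a covered IXSCAN instead of a COLLSCAN.
cursor = coll.aggregate(pipeline, hint={"eventType": 1}, allowDiskUse=False)
for doc in cursor:
    print(doc)
```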
null
[ "performance", "cxx" ]
[ { "code": "", "text": "Hello,I’m working with mongocxx driver version 3.4 and I experienced a performance issue. When I try to query from the big collection (using pipeline::match() method) app hangs for a second and closes. It happens only on big collections. I use exactly the same method to fetch documents from other collections and there is no such issue.The collection that causes issues has 49.9 K documents.\nThe biggest collection from the rest has 15 K documents and it works fine.Is it known? Maybe compiling the driver to a higher version will help?Code:Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.\nFYI bsonQuery is empty. I don’t look for any specific field values.", "username": "Lukasz_Kosinski" }, { "code": "explain", "text": "Hi @Lukasz_Kosinski,Your application shouldn’t just crash like this. I suspect that you are not handling errors & exceptions correctly here and because you are hitting a timeout, your application stops.Regarding the performances themselves, what’s the query exactly? Which index is baking this query? Can you share the explain output with the execution stats if you still have an issue despite using an index for this query?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88,Thanks for your answer.Regarding handling exceptions and timeout:\nThat can happen, but you can see that there is a try {} catch in the code snippet I linked to. It handles different exceptions, but not this.\nI tried adding socket timeout to the connection URI, but it didn’t change anything.\nQString(“mongodb://%1:%2/?socketTimeoutMS=1200000”).arg(host).arg(port);Regarding the query itself:\nMaybe the word “performance” is not the right one here. I thought that it was a performance issue because it works for smaller collections. Actually, pipeline.match doesn’t return anything and just cause crash, so it’s hard to say if it’s slow or not.\nBut saying, about the query, it’s empty in this case and I use bsoncxx::builder::core{false}.extract_document().Regarding indexes. I wasn’t aware of that feature. Maybe that’s the case. What I do, is archiving some stuff I get from Rest API. I wasn’t really thinking about what I store (that’s the flexibility we have with MongoDB).\nThree main collections are:What would you propose me to do, then? I guess I should create some indexes.", "username": "Lukasz_Kosinski" }, { "code": "", "text": "All the fields that you use in a query (find or match in the first stage of a pipeline) should be indexed in a perfect world.MongoDB offers many different type of indexes. If you aren’t sure, take this training:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.The doc is also a good source:I’m not a C# developer so I’m struggling to read the code to be honest. Maybe someone else will be able to help.Also one thing that could be an issue: if you don’t consume the aggregation (read the document from it) the pipeline won’t execute. It’s lazy by default.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Performance issue when querying a big collection
2021-06-02T12:14:45.956Z
Performance issue when querying a big collection
3,426
null
[ "node-js", "swift", "react-native", "realm-web", "stitch" ]
[ { "code": "", "text": "Hi Folks,If you have an existing app built with the MongoDB Stitch SDKs, you should migrate your app to use the new Realm SDKs. While much of the logic and flow of information hasn’t changed, there are a few important changes in the way your app connects to the realm backend.Our documentation team has put together a detailed guide on how to Migrate Your App from Stitch to Realm SDKs.This includes:If you have any feedback on guide documentation, you can share directly with the docs team via the “Give Feedback” button on the bottom right of any docs page (or comment on this forum topic).If you have more specific questions about migrating or using a Realm SDK, you are probably best starting a new discussion topic including details of your environment (eg SDK version, code sample, expected results, actual results). For pointers to more resources, see: About the Realm category.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "Stennie_X" }, { "code": "", "text": "5 posts were split to a new topic: Migrating from Stitch to Realm (under iOS and Swift)", "username": "Stennie_X" } ]
New Guide: Migrate Your App from Stitch to Realm
2020-12-18T20:30:14.271Z
New Guide: Migrate Your App from Stitch to Realm
4,687
null
[]
[ { "code": "", "text": "I am working mongo import using Java application. My requirement is very large file around 500MB csv file .these needs to be import into DB. I am not find much resource related like mongo client or mongo driver.\nfollowing like of code I am using but not working\nval r = Runtime.getRuntime()\nvar p: Process? = null\nval command = “C:\\Program Files\\MongoDB\\Server\\4.2\\bin\\mongoimport.exe --uri mongodb://admin:[email protected]:27017/osmolytics?authSource=admin --collection my_collection --drop --type csv --headerline”+filePath\ntry {\np = r.exec(command)\n//p.waitFor()\n//var buf: BufferedReader= BufferedReader(InputStreamReader(p.inputStream))\n//var line=“”\n//while ((line==buf.readLine())!= null && buf.readLine()!=“”)\n//{\n//\tprintln(line)\n//}\nprintln(“Reading csv into Database”)\nPlease guide us", "username": "Basayya_Kulkarni" }, { "code": "", "text": "Hi @Basayya_Kulkarni,Welcome to the MongoDB Community.Can you share the error you are seeing when you run this program?What happens when you run the mongoimport command standalone from the command line?Joe.", "username": "Joe_Drumgoole" }, { "code": " val waitFor: Int = process.waitFor()\n", "text": "val process = Runtime.getRuntime().exec(command)process.waitFor() -This waitfor is not returning and it hangs", "username": "Basayya_Kulkarni" }, { "code": "", "text": "@ Joe_Drumgoole\nThank you so much for your help", "username": "Basayya_Kulkarni" }, { "code": "", "text": "//p.waitFor()Joe_Drumgoole\nWhen I run the same command in command standalone from the command line and it works fine. when use the same command in JAVA .process.waitFor() it hangs for large data set(Excel)", "username": "Basayya_Kulkarni" } ]
Mongo Import csv file
2021-05-13T10:58:44.522Z
Mongo Import csv file
3,547
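A hang in waitFor() on a large import is commonly caused by never draining the child process's stdout/stderr, so the pipe buffer fills and mongoimport blocks. As a rough illustration of the fix in another language, the Python sketch below runs the same kind of mongoimport command while capturing its output; the URI, collection and file path are placeholders. The equivalent cure in Kotlin/Java is to consume (or redirect) the process's streams before calling waitFor().

```python
import subprocess

cmd = [
    "mongoimport",
    "--uri", "mongodb://admin:secret@localhost:27017/mydb?authSource=admin",  # placeholder
    "--collection", "my_collection",
    "--type", "csv",
    "--headerline",
    "--drop",
    "--file", "/path/to/large_file.csv",  # placeholder path
]

# capture_output=True keeps stdout/stderr drained, so a 500 MB import cannot
# deadlock on a full pipe the way an unread Runtime.exec() stream can.
result = subprocess.run(cmd, capture_output=True, text=True, timeout=3600)
if result.returncode != 0:
    raise RuntimeError("mongoimport failed: " + result.stderr)
print(result.stderr)  # mongoimport reports progress on stderr
```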
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "After today’s meet-up about Cosync JWT, I modified some of my code to use the Custom JWT that they provide and am now saving the JWT token in the keychain during login and using that to reauthenticate when a user returns to the app.The issue I’m having is that if the user really wants to be able to do offline first, the realm needs to be able to open without signal. If the app is open and the user is logged in and then loses signal, the app continues to work.But if there is no signal when the app is opened, I check for a stored token and then attempt to log in. Obviously, that fails.Is there a way to open the realm locally with a stored token without reconnecting to the server to truly do offline first?", "username": "Kurt_Libby1" }, { "code": "", "text": "If you do not call logOut() then the user is cached and the app can open the realm by passing in the cached user reference to the realm without signal. If you want to use the app offline I’d recommend not calling logOut()", "username": "Ian_Ward" }, { "code": "", "text": "Great, so then we don’t really need to log the email/password/jwt-token.But where is that user cached?Is there somewhere in the docs that shows how to pass in the cached user reference?I’m not using logOut() unless they specifically tap a button to log out.Thanks.", "username": "Kurt_Libby1" }, { "code": "if app.currentUser != nil {\n self.loggedIn = true\n } \n", "text": "Hey @Ian_Ward,I did try it in a SwiftUI app and this seemed to work:Wondering if there are any drawbacks to using this. Does this cache timeout eventually? Is there anything else I need to think about with this?Thanks.–Kurt", "username": "Kurt_Libby1" }, { "code": "app.currentUser", "text": "Is there anything like this for React Native?I’m attempting to check for app.currentUser, but it doesn’t appear to be finding the cached user like it does in the SwiftUI app.@Ian_Ward @Kenneth_GeisshirtThanks.–Kurt", "username": "Kurt_Libby1" }, { "code": "\n \n import { useAuth } from \"../providers/AuthProvider\";\n import styles from \"../stylesheet\";\n \n export function WelcomeView({ navigation }) {\n const [email, setEmail] = useState(\"\");\n const [password, setPassword] = useState(\"\");\n const { user, signUp, signIn } = useAuth();\n \n useEffect(() => {\n // If there is a user logged in, go to the Projects page.\n if (user != null) {\n navigation.navigate(\"Projects\");\n }\n }, [user]);\n \n // The onPressSignIn method calls AuthProvider.signIn with the\n // email/password in state.\n const onPressSignIn = async () => {\n console.log(\"Press sign in\");\n try {\n await signIn(email, password);\n \n ", "text": "Yes - this is how our tutorial works -", "username": "Ian_Ward" }, { "code": "", "text": "Thanks!I’m assuming this should work for Anonymous login as well as Email/Password, right?–Kurt", "username": "Kurt_Libby1" } ]
Opening a *previously* Synced Realm without a Connection
2021-05-20T17:57:11.386Z
Opening a *previously* Synced Realm without a Connection
2,280
null
[]
[ { "code": "", "text": "Is possible use Sharding to route the computation for my clients?", "username": "Jose_Maria_Anacleto" }, { "code": "", "text": "This question is not related to M220P. It is also a duplicate, formulated differently, of\nhttps://www.mongodb.com/community/forums/t/is-possible-route-computation-for-clients/110046/2?u=steevej", "username": "steevej" } ]
Remote Sharding is possible?
2021-06-04T12:39:06.155Z
Remote Sharding is possible?
1,530
null
[ "queries", "performance" ]
[ { "code": "{\n\n $lookup: {\n\n from: \"listings\",\n\n as: \"listings\",\n\n let: { listingId: \"$_id\" },\n\n pipeline: [\n\n {\n\n $match: {\n\n $expr: {\n\n $and: [\n\n { $eq: [\"$project\", \"$$listingId\"] },\n\n { $eq: [\"$status\", \"available\"] },\n\n ],\n\n },\n\n },\n\n },\n\n ],\n\n },\n\n },\n", "text": "This lookup query is very slow. Can someone help to find an alternative or improve this query performance?", "username": "Rakshith_HR1" }, { "code": "", "text": "You only shared the $lookup stage. What do you have before and after? The issue might not be the $lookup.What are your indexes?What is your installation? RAM vs working set size.", "username": "steevej" } ]
Help me to improve this lookup query
2021-06-04T08:35:49.468Z
Help me to improve this lookup query
1,481
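On the index question raised in the reply above: if the sub-pipeline's equality matches are the bottleneck, a compound index on the foreign collection is usually the first thing to try. The collection and field names come from the posted stage; whether the $expr match can use the index depends on the server version, so it is worth confirming with explain.

```javascript
// Support the $lookup sub-pipeline's equality filters
// { project: <listingId>, status: "available" }.
db.listings.createIndex({ project: 1, status: 1 });

// Check the winning plan of the whole aggregation afterwards.
// "parentCollection" is a placeholder for the collection the pipeline
// runs on, and "pipeline" is the full array of stages from the question.
db.parentCollection.explain("executionStats").aggregate(pipeline);
```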
null
[ "database-tools" ]
[ { "code": "2021-06-02T18:06:29.958+0200 writing phoenix.audit_log to /mnt/backup/mongodump/phoenix.dmp/phoenix/audit_log.bson\n2021-06-02T18:06:32.211+0200 phoenix.audit_log 0\n2021-06-02T18:06:35.211+0200 phoenix.audit_log 0\n2021-06-02T18:06:38.211+0200 phoenix.audit_log 0\n2021-06-02T18:06:41.211+0200 phoenix.audit_log 0\n2021-06-02T18:06:44.211+0200 phoenix.audit_log 0\n2021-06-02T18:06:47.211+0200 phoenix.audit_log 0\n2021-06-02T18:06:50.211+0200 phoenix.audit_log 0\n2021-06-02T18:06:53.211+0200 phoenix.audit_log 0\n2021-06-02T18:06:56.211+0200 phoenix.audit_log 0\n2021-06-02T18:06:59.211+0200 phoenix.audit_log 0\n2021-06-02T18:07:02.211+0200 phoenix.audit_log 0\n2021-06-02T18:07:05.211+0200 phoenix.audit_log 0\n2021-06-02T18:07:08.211+0200 phoenix.audit_log 0\n2021-06-02T18:07:11.211+0200 phoenix.audit_log 0\n2021-06-02T18:07:14.211+0200 phoenix.audit_log 0\n2021-06-02T18:07:17.211+0200 phoenix.audit_log 0\n2021-06-02T18:07:20.211+0200 phoenix.audit_log 0\n2021-06-02T18:07:23.211+0200 phoenix.audit_log 0\n2021-06-02T18:07:26.211+0200 phoenix.audit_log 0\n", "text": "Hello,I’m trying to do a mongodate with date range:mongodump --port 27005 --db phoenix --collection audit_log --query ‘{“timestamp” :{ “$gte”: { “$date”: “2020-01-01T00:00:00.000Z” } } }’ --out /mnt/backup/mongodump/But output is:and still running.The collection was created at 2016, could you confirm if mongodump is checking all documents and it will write when arrive to 2020?Thanks in advance.Regards", "username": "Agus_Luesma" }, { "code": "", "text": "Hi @Agus_Luesma, it does look like this query is doing a collection scan. Do you have an index on the timestamp field? If there is no index that the query can use, it will have to check all documents which is inefficient. You can see here for more information on indexes: https://docs.mongodb.com/manual/indexes/", "username": "Tim_Fogarty" } ]
Mongodump with --query date
2021-06-02T16:14:58.105Z
Mongodump with --query date
6,308
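Following up on the collection-scan point above, a sketch of adding the index before re-running the dump; the database and collection names are taken from the thread, and an index build on a large collection has a one-time cost of its own.

```javascript
// Let the --query on "timestamp" walk an index instead of reading
// every document back to 2016.
db.getSiblingDB("phoenix").audit_log.createIndex({ timestamp: 1 });

// Optional sanity check that the same date-range filter is now an
// IXSCAN rather than a COLLSCAN:
db.getSiblingDB("phoenix").audit_log
  .find({ timestamp: { $gte: ISODate("2020-01-01T00:00:00.000Z") } })
  .explain("executionStats");
```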
null
[ "data-modeling" ]
[ { "code": "<fieldX>: null {\n usesFieldX: false,\n fieldX: null \n }\nuserpermissionsread[read][]null", "text": "I am in a bit of a dilemma. In my model an array field can have some values, be empty or not be used at all. I am wondering what is the best way to represent the situation when the field is not used. Would it be enough just to have the <fieldX>: null or would it be too confusing, and a flag should be used to simplify things. Namely:This seems cleaner, but the way I see it, we might end up having two sources of truth and the data structure might be less self-explanatory - one needs to know about the relationship between both fields to be able to work with the data properly, while in the first case is more fool-proof.Example:\nA user has permissions to read ([read]), no permissions ([]) or permissions are not applicable (null?).Can you share your thoughts with me? From your experience, what are the pros and cons of each solution?", "username": "_alex" }, { "code": "permissions_applicable: true / falsepermissions_applicabletruepermissionspermissions_applicablepermissions_applicable: truepermissions_applicabledb.users.insertOne( ... ){ _id: 1, name: \"John Doe\", permissions_applicable: true, other_fields: ... }\n{ _id: 1, name: \"John Doe\", permissions_applicable: true, permissions: [ \"read\", \"write\" ], other_fields: ... }\n{ _id: 1, name: \"John Doe\", permissions_applicable: false, other_fields: ... }\ndb.users.updateOne( { _id: 1, permissions_applicable: true }, { $pull: { permissions: \"write\" } } )\ndb.users.updateOne( { _id: 1, permissions_applicable: true }, { $push: { permissions: \"delete\" } } )\ndb.users.find( { permissions_applicable: true, ... } )$push$push$pushnullnull.null$pull", "text": "Hello @_alex, here are some thoughts:…an array field can have some values, be empty or not be used at all. I am wondering what is the best way to represent the situation when the field is not used.The array field has CRUD operations, and these depend upon the application functionality. This is the main consideration. What are the scenarios, in your code / application, you use this array field?Initially, when a user is created you might know that he or she may have permissions or permissions may not be applicable. In such a case, create a field with permissions_applicable: true / false. If permissions_applicable is true, and you know the permission value(s), include the array field (lets call it, permissions) along with the values in the newly created document. If you don’t know the permission values, do not create the field. At this stage, the field permissions_applicable is enough for further usage in other operations.Then you update the user’s array field (either push into or pull elements from the array), later. This can happen, only for documents with permissions_applicable: true.So, whenever you work with this field (users permissions functionality), always use the permissions_applicable field.When the user is newly created with db.users.insertOne( ... ), these are possible:When a user is updated with a new permission, or a permission is removed:Finally, while querying:db.users.find( { permissions_applicable: true, ... } )How the updates on an array field function using $push:Also, note that if you use the $pull operator on a non-existing array field, nothing happens (it is not an error).", "username": "Prasad_Saya" }, { "code": "", "text": "@Prasad_Saya Thank you for sharing your thoughts! This seems like a healthy approach.", "username": "_alex" } ]
<fieldX>: null OR <hasFieldX>: false flag?
2021-06-03T12:23:59.835Z
<fieldX>: null OR <hasFieldX>: false flag?
1,923
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi Everyone,May I know any good data model tools to draw Entity diagram for Mongo? Thanks in advance.Foon Lui", "username": "Foon_Lui" }, { "code": "", "text": "Hi\ntools which are very light and completely text based (so you can check them in with your code in /doc) are Mermaid and Plantuml. Both have plugins in vscode. Personally I use them depending on the usecase, Mermaid is very light and pure javascript, planuml is supported in a wide range of tools but requires java and graphviz\ngrafik521×786 21.6 KB\n\ngrafik704×549 30.6 KB\nRegards,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "MermaidThank you for your advice.", "username": "Foon_Lui" } ]
Data Modeling tools
2021-06-01T02:35:38.497Z
Data Modeling tools
3,711
null
[ "queries" ]
[ { "code": "{ \"_id\" : 3, \"results\" : [ { \"product\" : \"xyz\", \"score\" : 9 },\n { \"product\" : \"xyz\", \"score\" : 8 } ] }\n", "text": "Hi There,I need your help, is it possible to collect only documents which all array values matches the query ?In the example here https://docs.mongodb.com/manual/reference/operator/query/elemMatch/#array-of-embedded-documentsI need a result like this only.An idea ?\nThanks", "username": "muama" }, { "code": "{ \"_id\" : 3, \"results\" : [ { \"product\" : \"xyz\", \"score\" : 9 },\n { \"product\" : \"xyz\", \"score\" : 8 } ] }\n", "text": "Hello @muama, Welcome to MongoDB developer forum,I need a result like this only.Three is no document with score: 9 in the documentation example.Your question is not clear can you please elaborate with more examples and details.", "username": "turivishal" }, { "code": "", "text": "Hi,I know that there is no such example.In the example we have this query :\ndb.survey.find(\n{ results: { $elemMatch: { product: “xyz”, score: { $gte: 8 } } } }\n)I need to have a result only if all element of the array are true, in the example only if product=xyz and scrore=gte(8).Normally in the example we should have no result.Thanks", "username": "muama" }, { "code": "\"results.product\": { $ne: \"xyz\" }{ \"results.score\": { $lt: 8 } }db.survey.find({\n $nor: [\n { \"results.product\": { $ne: \"xyz\" } },\n { \"results.score\": { $lt: 8 } }\n ]\n})\n", "text": "I am not sure is there any straight approach,You an try opposite and negative conditions,", "username": "turivishal" }, { "code": "", "text": "Finnaly, I used this :db.survey.find({“results”: {\n$not: { $elemMatch: {\n…\n}}\n}})Thanks for your help", "username": "muama" } ]
How to collect only documents which match all values of an array
2021-06-03T18:41:19.019Z
How to collect only documents which match all values of an array
6,025
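For readers landing here, the two working shapes from this thread side by side, using the collection and field names from the documentation example. The second form fills in the elided part of the final post only as an illustration of the pattern, not as the poster's exact query; note the small difference on empty or missing arrays.

```javascript
// "Every element matches" expressed as "no element violates the condition".

// $nor form from the earlier reply: reject documents where any element
// has a different product or a score below 8.
db.survey.find({
  $nor: [
    { "results.product": { $ne: "xyz" } },
    { "results.score": { $lt: 8 } }
  ]
});

// $not + $elemMatch form (illustrative): same idea, but this one also
// matches documents whose "results" array is empty or missing.
db.survey.find({
  results: {
    $not: {
      $elemMatch: {
        $or: [
          { product: { $ne: "xyz" } },
          { score: { $lt: 8 } }
        ]
      }
    }
  }
});
```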
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi there,\nfor a project I need to migrate a SQL DB to MongoDB in Python. Since SQL is using joins etc. i need to transform the schema. But I did not be able to do it…Does anyone has a idea how to do it in a propper way?\nI would be very thankful if there is anyone who can help me via teamviewer.For taking your time I will pay you since I really need to upload the SQL files to MongoDB.", "username": "Steffen_Hillmann" }, { "code": "", "text": "There is a course on data modelling with the MongoDB university.Data ModellingAlong with another course on use of python with MongoDB.Python and MongoDBThough it sounds you need it done quickly.", "username": "NeilM" } ]
From SQL files to MongoDB
2021-05-28T13:31:35.044Z
From SQL files to MongoDB
2,115
null
[ "queries" ]
[ { "code": "{\n _id: 758ab35de3a258foo,\n Arr: [\n { str: \"I am txt\", bool: true },\n { str: \"I am txt\", bool: false},\n ...\n ]\n}\n collection.find({ \"Arr.bool\": false }\n collection.find({ \"Arr\": { $elemMatch: {\"bool\": false} }\n collection.find({ \"Arr\": { \n $elemMatch: { \n $ne: { \"bool\": true }\n }\n } \n", "text": "I am trying to find all Documents/objects which contain false as one of their field’s values.The Collection in question has a single Document:Using collection.find() I have only been able to return the entire Document instead of returning only the Documents inside of the Arr array.I have tried:So my questions are:Is this the best structure/model I should be using for this Collection? [There will probably only be a few more Documents added]Is it possible to get the results I am looking for using collection.find() alone, or is it necessary to filter/map the results I am getting afterwards?Thank you for your time, and to anyone willing to help!! I really do appreciate it <3", "username": "lemme_lurk" }, { "code": " collection.find({ \"Arr.bool\": false }\n collection.find({ \"Arr\": { $elemMatch: {\"bool\": false} }\ncollection.find({ \"Arr\": { \n $elemMatch: { \n $ne: { \"bool\": true }\n }\n } \n$elemMatch<array>$elemMatchfind()$filterArrboolcollection.find(\n { \"Arr.bool\": false },\n {\n Arr: {\n $filter: {\n input: \"$Arr\",\n cond: { $ne: [\"$$this.bool\", true] }\n }\n }\n }\n);\n", "text": "Using collection.find() I have only been able to return the entire Document instead of returning only the Documents inside of the Arr array.Both queries are the same for match single field in an array and yes this will return the entire document because this is the match query part, it can not filter results in an array.When you see $elemMatch documentation,The $elemMatch operator limits the contents of an <array> field from the query results to contain only the first element matching the $elemMatch condition.I am not sure what is the description of your project, but make sure The maximum BSON document size is 16 megabytes. as per this documentation,Is it possible to get the results I am looking for using collection.find() alone, or is it necessary to filter/map the results I am getting afterwards?You can use aggregation projection operators in find() projection starting from MongoDB 4.4,", "username": "turivishal" } ]
Find Nested Documents in Top-Level Array
2021-06-04T04:01:54.080Z
Find Nested Documents in Top-Level Array
3,716
https://www.mongodb.com/…34bec1887fb6.png
[ "atlas-functions" ]
[ { "code": "", "text": "\nThis is how a double value from an array of nested documents gets printed out in SendGrid. I was wondering if anyone has had a similar experience? I am sending the array in my request to SendGrid. Then in the template, I am looping through the array and accessing properties.I have seen similar strange things happen with MongoDB collection values when they get sent over http.Anyone had similar experiences? Know a remedy?", "username": "Lukas_deConantseszn1" }, { "code": "JSON.parse(JSON.stringify(doc))\n", "text": "Hi @Lukas_deConantseszn1,This looks like a response from a realm webhook? Correct me if I am wrong.If so this is sent as extended json standard where types are represented this way.You can parse them into plain json before you return them :Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This data is happening in a realm function that is querying the collection with its atlas service.", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "But how is the client access the data? Is it a web sdk or a webhook url?", "username": "Pavel_Duchovny" }, { "code": "", "text": "Realm function runs once a day, queries data, then sends it in an API request to SendGrid. SendGrid email template then reads the data like this.", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "You use http service to send data? Have you used the encodeBodyAsJson flag,?Have you tried to parse it to json before ?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I use context.http.post and I use encodeBodyAsJson set to true.I tried the JSON.Stringify JSON.Parse thing and it sadly didn’t work. I’m going to try to parseFloat on the value itself next.", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "I switched to using axios in my realm function. context.http just has problems ", "username": "Lukas_deConantseszn1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo Collection double values coming to SendGrid template weirdly
2021-05-30T12:59:01.857Z
Mongo Collection double values coming to SendGrid template weirdly
2,571
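A sketch of the parse-before-posting step suggested mid-thread, written as a Realm function; as the last posts note, the original poster eventually switched to axios instead, so treat this only as an illustration of that suggestion. The database, collection, secret name and the trimmed SendGrid payload are placeholders.

```javascript
exports = async function () {
  // Placeholder database/collection names.
  const coll = context.services
    .get("mongodb-atlas")
    .db("mydb")
    .collection("stats");

  const docs = await coll.find({}).toArray();

  // Round-trip through JSON to turn BSON types (Double, ObjectId, Date, ...)
  // into plain JSON values before they reach the email template.
  const plain = JSON.parse(JSON.stringify(docs));

  return context.http.post({
    url: "https://api.sendgrid.com/v3/mail/send",
    headers: { Authorization: [`Bearer ${context.values.get("sendgridApiKey")}`] },
    // Payload trimmed to the relevant part; a real request also needs
    // template_id, from, to, etc.
    body: { personalizations: [{ dynamic_template_data: { rows: plain } }] },
    encodeBodyAsJson: true,
  });
};
```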
null
[]
[ { "code": "", "text": "I am very new using mongo and coding in general. I am taking a udemy course learning a little about mongodb. The initial section worked fine and I had no issues installing in the terminal. After a few sections it just stopped working when I typed in Mongod a bunch of errors were popping up and it was shutting down when initiated. I tried googling all day thursday until I eventually just tried to remove it from my Mac (Big Sur) and reinstall it because I have no clue to what to even ask. That some how made it much worse because now when I type in commands it says the commands don’t exist. Mongo config file doesn’t exist and I don’t know how to get all the files back. I am on day 3 now and I tried leaving a comment on the course video and even on their discord but no one was able to help me. I don’t even know where to begin with what is wrong. I followed along a website to uninstall mongo and after everything seemed to be uninstalled, when I went to go reinstall mongodb it says it is already installed on my computer but I think it is missing some config files and the data folder. I know this is super vague but if anyone could help me that would be awesome, I don’t know who to talk to about my problem or if this is even the correct place to seek help.", "username": "base_az" }, { "code": "", "text": "Welcome to the community!It could be PATH issue if you are getting command not found assuming mongo is installed\nCheck if mongodb/bin shows up in your PATH or not\necho $PATHcd to mongodb/bin directory and run mongo --version", "username": "Ramachandra_Tummala" }, { "code": "mongod", "text": "Hi @base_az, welcome to the community!\nCan you please post a screenshot of errors that you are getting while running the mongod command?\nAlso, can you please run the following command in your terminal and paste the output as well?pgrep mongodI am not sure, but maybe your mongod is already running, and you are not able to launch a new mongod(Mongo Daemon) process on your machine since the port is already busy.In case you have any doubts, please feel free to reach out to us and we will be more than happy to help you.Thanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer", "username": "SourabhBagrecha" }, { "code": "", "text": "Hello, thank you for your help. I ended up getting it to start loading again but it still wont boot up. I typed in pgrep Mongod and nothing came back. Here is the result of trying to boot up Mongod now.\nScreen Shot 2021-06-02 at 6.03.41 PM1246×439 146 KB", "username": "base_az" }, { "code": "", "text": "It says /data/db dir not found\nGive another dir path and checkmongod --dbpath new_path (say your home dir)or change your config file to use a path where mongod can read/write\nAlso in Macos they removed access to root folder /data/db\nPlease check documentation.You have to use a different path for dbpath", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I will give it a go. Thank you for your help.", "username": "base_az" }, { "code": "", "text": "I am still getting some errors. I switched the dbpath and I made sure it was changed in the mongo.conf file. I also have this “disabled” in the mongo shell when trying to connect. This is what was happening to me last week before I uninstalled and had trouble even booting. I googled so much and tried a lot of stuff but I am so new to mongo or even using the terminal it is very overwhelming. 
Screen Shot 2021-06-02 at 7.39.59 PM1429×312 23.5 KB", "username": "base_az" }, { "code": "", "text": "Screen Shot 2021-06-02 at 7.41.14 PM1425×369 126 KB", "username": "base_az" }, { "code": "", "text": "The error says address already in useWhen you run mongod without --port param it tries to bring up on default port 27017\nLooks like you have another instance already running on this portYou can try to bring up your mongod on a different portmongod --port 28000 --dbpath your_path bind_ip 127.0.0.1\nPlease note when you try to connect to mongod you have to specify the port if you brought up mongod on a different port other than defaultmongo --port your_portPlease check documentation for the warning messages you got like access control etc\nTo secure your mongod you have to use auth parameterSuggest you to enroll to Mongo university free online courses", "username": "Ramachandra_Tummala" } ]
Newbie - Please help I broke mongodb
2021-05-30T00:41:41.988Z
Newbie - Please help I broke mongodb
2,684
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.0.25-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.0.24. The next stable release 4.0.25 will be a recommended upgrade for all 4.0 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.0.25-rc0 is released
2021-06-03T21:21:08.186Z
MongoDB 4.0.25-rc0 is released
2,625
null
[]
[ { "code": "", "text": "Heyy, I had connected my MongoDB Collection to store data of my discord bot but from last 10 days it’s acting weird. Weird in the sense, when I update any data after few hours later it roll back to that position what was 10 days ago. Data is updating but for only few hours. I’d verified my code there isn’t error from my side. My users had lost their records.I’d personally verified that when I got feedback from my users! For example I have a key which store no. of commands used yesterday it was 13490 but today it rolled back to 10489 which was 10 days ago. The problem is with every document. Nothing get updated for long time. After few time it rolled back to that position what was around 10 days ago.", "username": "Smiling_August" }, { "code": "", "text": "How do you expect anyone to answer this question with little to no context? All we know is that your DB is rolling back. What queries are you running etc…", "username": "Dogunbound_hounds" } ]
Need help about Database
2021-06-03T13:35:54.970Z
Need help about Database
1,553
null
[ "mongoose-odm", "indexes", "typescript" ]
[ { "code": " interface IStoreDB extends IStore, Document {}\n\n const storeSchema = new mongoose.Schema({\n name:{\n type: String,\n required:true,\n index: true\n },\n mobile: {\n type: String,\n required: false,\n unique: true,\n sparse: true,\n index: true\n },\n\n }, {\n timestamps: true\n });\n\n storeSchema.index({ name: 'text', mobile: 'text' });\n\n const model = createModel<IStoreDB>('Store', storeSchema);\n\n export default model;\n", "text": "We have an existing mongo cluster on mongodb.com, which we are using with a nodejs server, by means of mongoose.Over time, we realised that we needed to add indexes to a number of fields and now we have added ‘text’ indexes. When we do a search we running into the error that the field we are searching on need to be indexed as text, but it was set up as such in mongoose and local testing worked, but not since using our dev DB in mongo cloud.Looking in the admin panel of out cluster and in the indexes for the collection we do see it is not created, but we aren’t sure why. We did manually remove the indexes from the question in collection and restarted our nodejs server, but no change.We are able to create all the indexes manually in Mongo Cloud, but I would rather avoid any manual steps if possible.Can anyone indicate how I can get the fields to be indexed, if they weren’t marked as being indexed before?Follows is a reduced sample of the schema definition we are using (code is in Typescript):versions:", "username": "Andre-John_Mas" }, { "code": "", "text": "The code you mention here is capable of creating indexes for new products you are going to insert. To create indexes for existing documents in your cluster you need to manually call “createIndexes”. Create an API to call “createIndexes” in all your existing documents.", "username": "Ankit_Saini" } ]
Mongoose, creating new indexes on existing cluster?
2020-07-24T22:05:50.089Z
Mongoose, creating new indexes on existing cluster?
19,648
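To make the reply above concrete: Mongoose only builds the indexes it is told to, and if autoIndex is disabled (common in production) nothing happens automatically for an existing collection. A sketch assuming the Store model from the question; the import path is a placeholder.

```javascript
import Store from "./models/Store"; // placeholder path to the model above

async function ensureStoreIndexes() {
  // Creates any missing indexes declared on storeSchema (including the
  // text index) and drops ones that are no longer in the schema.
  await Store.syncIndexes();

  // Alternative that only creates, never drops:
  // await Store.createIndexes();

  console.log(await Store.listIndexes());
}
```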
null
[]
[ { "code": "", "text": "Hi guys,I am new to MONGODB ATLAS, i am trying to figure out how we can create a standalone DB in MONGODB-ATLAS.The UI always ends up creating replica set.", "username": "Ankit_Rathi" }, { "code": "", "text": "Welcome to the MongoDB Community @Ankit_Rathi!MongoDB Atlas is a highly available managed database service, so the minimum deployment is currently a three member replica set.Per the MongoDB Atlas FAQ:Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it possible to create a standalone DB / single node in MONGO ATLAS
2021-06-03T09:23:47.027Z
Is it possible to create a standalone DB / single node in MONGO ATLAS
3,300
null
[ "connecting", "golang" ]
[ { "code": "mongodb://<username>:<password>@mongodbIP:27017mongo.CommandError connection(mongodbIP:27017[-12]) failed to write : context deadline exceeded", "text": "I am using the mongo-go-driver to perform several operations on the Atlas MongoDB database.I am able to connect to the instance: mongodb://<username>:<password>@mongodbIP:27017After several minutes and several update operations on the same collection I get this error every time: mongo.CommandError connection(mongodbIP:27017[-12]) failed to write : context deadline exceededThe MongoDB Atlas cluster has no alerts and neither do the MongoDB logs. And I didn’t find this type of problem in my searches.I would like to get some advice on how to move forward on this.golang", "username": "ortizbje" }, { "code": "dig any mongodbIP\n", "text": "Unless doing something very specific to a particular member of a replica set, you should connect with the SRV connection string or the old long version of a replica set connection string.Since, you mention Atlas, it is very unlikely that you are doing something very specific to a particular member of the replica set.Most likely, your given node became a secondary node with no write permission.I would be interested to see the output of the commandbecause unless you are running a DNS resolver to hide the real Atlas cluster address from your application configuration, then it is possible that mongodbIP is not really an Atlas cluster.", "username": "steevej" } ]
'Failed to write' error on update operations
2021-06-03T09:13:31.049Z
‘Failed to write’ error on update operations
4,364
null
[ "node-js", "crud" ]
[ { "code": "async function insertManyCallback(collectionName, query, options, callback) {\n \n try {\nawait MongoClient.connect(\n dburl,\n defaultOptions,\n function(err, client) {\n if (err) logger.error(err);\n\n if (!err) {\n client\n .db(dbName)\n .collection(collectionName)\n .insertMany(query, options, function(err, res) => {\n client.close();\n callback();\n });\n }\n }\n);\n\n } catch (err) {\nlogger.error(err);\n }\n\n return true;\n}\n", "text": "I’m inserting approximately 65K documents for each bulk write operation, and I understand it will be slow. My problem stems from me querying the MongoDB collection for count documents. I can see the number go up and up until it has “visibly” finished inserting all the documents. Then, the insertMany just freezes there for a minute without any visible changes to the collection before calling my callback function.Does anyone know why it freezes for a minute and anything I can do about it?I also tried .then instead of function(err, res) to try and bypass it returning a result; the same outcome in terms of performance.", "username": "Dogunbound_hounds" }, { "code": "", "text": "Hi @Dogunbound_hounds,I believe this requires a server side performance investigation as it might be that the MongoDB cluster is overwhelmed with the write workloads and stalls to reclaim resources.What is the monogo version of server and driver?Have you tried adding resources to the cluster or monitoring their utilisation?I recommend reading this article and its referenceLearn how to monitor a MongoDB instance and which metrics you should consider to optimize performance.What is the writeConcern you are using? Try using w majority if this is a replica set…To set the expectations here, I am not sure how community can deeply investigate server performance issues and I strongly recommend case contacting support who specialise in those areas…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "So this is a server performance issue. I was just trying to pinpoint what I was doing wrong. It isn’t a replica set, nor can I edit writeConcerns because the database is locally running with the backend.Thanks for telling me this is purely a performance issue.", "username": "Dogunbound_hounds" }, { "code": "", "text": "@Dogunbound_hounds,Without looking into the specific environment details and diagnostics I cannot tell anything, however, usually those are related to performance issues…Thanks,\nPavel", "username": "Pavel_Duchovny" } ]
MongoDB insertMany stalls at callback
2021-06-01T17:00:07.874Z
MongoDB insertMany stalls at callback
4,368
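One mitigation worth trying for the stall described above, sketched with the Node.js driver: hand the server smaller unordered batches instead of one 65K-document insert. The batch size and write concern here are illustrative, not tuned values.

```javascript
// Insert a large array of documents in smaller unordered batches.
async function insertInBatches(collection, docs, batchSize = 1000) {
  for (let i = 0; i < docs.length; i += batchSize) {
    const batch = docs.slice(i, i + batchSize);
    await collection.insertMany(batch, {
      ordered: false,         // keep going past individual document errors
      writeConcern: { w: 1 }, // illustrative; match your durability needs
    });
  }
}

// Usage with the existing connection code:
// const coll = client.db(dbName).collection(collectionName);
// await insertInBatches(coll, query);
```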
null
[ "configuration", "capacity-planning" ]
[ { "code": "", "text": "Hi!I have downloaded mongodb community server version 4.4. I have run according to the instructions, created several databases with collections. I wanted to ask if it is possible to limit each database to a size like 2GB? So that the sum of size all collections in the database < my limit.I looked through the forums but only found a solution involving “storage.mmapv1.smallFiles” which was removed from version 4.4.", "username": "Mateusz_Gajewski" }, { "code": "", "text": "Hi @Mateusz_Gajewski, welcome to the community. Glad to have you here.Why do you want to limit db size? I’m curious to know more about your use case.Mahi", "username": "mahisatya" }, { "code": "", "text": "I share the databases with other users, I dont want anyone to use all the memory allocated for the mongodb. I want to protect the server from filling up all the memory\nFor example, I made a mistake in my script and put n (while true) documents in a collection - each weighing 15Mb.", "username": "Mateusz_Gajewski" }, { "code": "mongod", "text": "There are couple ways to limit the size of a collection, and in turn the database it’s part of.First. Capped collection. If the upper-bound limit of the data to be stored is known ahead of it’s creation time then a fixed-size capped collection type would ensure the size will always be less than the max size. Capped collection achieves this by replacing the oldest documents when the collection gets full.Second. TTL indexes. It’s a type of index that uses date field, or an array of date values to expire/delete documents. A background thread in mongod process will make sure the documents are deleted when they are past their expiration date. This is helpful if the number of writes are predictable, or else, the storage could fill up based on the frequency of the writes and bursting.There are some nuances and trade-offs with the above two methods. For more details, please take a look at the documentation. Hopefully, they fit your use case.Thanks,\nMahi", "username": "mahisatya" } ]
Can I limit the size of each database on my server?
2021-06-01T16:27:13.723Z
Can I limit the size of each database on my server?
3,216
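A shell sketch of the two options described in the reply above. The names, sizes and expiry are examples only, and note that both mechanisms cap individual collections rather than a whole database.

```javascript
// Option 1: capped collection - once the byte limit is reached, the
// oldest documents are overwritten by new ones.
db.createCollection("activity_log", {
  capped: true,
  size: 2 * 1024 * 1024 * 1024 // ~2 GB upper bound for this collection
});

// Option 2: TTL index - documents are removed once "createdAt" is older
// than expireAfterSeconds, so disk use depends on the write rate.
db.events.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 60 * 60 * 24 * 30 } // ~30 days
);
```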
null
[ "c-driver", "alpha" ]
[ { "code": "", "text": "Announcing 1.18.0-alpha2 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.This is an unstable prerelease and is unsuitable for production applications.Bug fixes:This is an unstable prerelease and is unsuitable for production applications.Improvements:Thanks to everyone who contributed to this release.", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C driver 1.18.0-alpha2 released
2021-06-03T01:03:39.803Z
MongoDB C driver 1.18.0-alpha2 released
3,150
null
[ "production", "c-driver" ]
[ { "code": "", "text": "Announcing 1.17.6 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.No changes since 1.17.5; release to keep pace with libmongoc’s version.Bug fixes:Thanks to everyone who contributed to this release.", "username": "Kevin_Albertson" }, { "code": "", "text": "", "username": "system" } ]
MongoDB C driver 1.17.6 released
2021-06-02T22:31:15.933Z
MongoDB C driver 1.17.6 released
2,518
null
[ "atlas-functions" ]
[ { "code": "", "text": "i saw that we cannot install then “handly” but they told by drag a node_modules.zip or tar it will install automaticlly, after my dragging i get this message which i don’t understant’ please help .Failed to upload node_modules.tar.gz: unknown: static is a reserved word in strict mode (1:6) > 1 | const static = require(‘node-static’); | ^ 2 | const server = new static.Server(’.’, {cache: 0}); 3 | 4 | require(‘http’).createServer(function(req, res) {thank you very much", "username": "Mu_Hallumi" }, { "code": "package.json", "text": "Hi @Mu_Hallumi, welcome to the community forum!If you can share the contents of your package.json file then I can see if I hit the same problem.", "username": "Andrew_Morgan" }, { "code": "", "text": "Hi @Mu_Hallumi and welcome in the MongoDB Community !Yes, it’s a known issue. That’s why Realm Dependencies are still in Beta. Dependencies need to be transpiled in Realm and this step can go wrong. This is the error you get: the transpiler fails to interpret from code from the module.Some dependencies are built-in:This is a beta feature and it’s still not ready for mass prod as you can see.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "thnak you, that mean with my node_modules i cannot declare my functions to deploy them to my app WebSite, any way do you know maybe if there is another way to connect my nodjs server when have mongoose schemas models and controllers of each collection, in generally how could connect my webste domin to nodejs that connected to mongodb by mongoose.is there is a way?thank you", "username": "Mu_Hallumi" } ]
Install my Node.js dependencies, please help
2021-05-31T12:53:24.140Z
Install my Node.js dependencies, please help
2,376
null
[ "installation" ]
[ { "code": "processManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1, 162.144.146.93 # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\nsecurity:\n authorization: \"enabled\"\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: failed (Result: exit-code) since Wed 2021-06-02 08:24:18 MDT; 15min ago\n Docs: https://docs.mongodb.org/manual\n Process: 13698 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=48)\n Process: 13695 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 13692 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 13690 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\n Main PID: 8732 (code=exited, status=0/SUCCESS)\n\nJun 02 08:24:17 server.abelardolg.com systemd[1]: Starting MongoDB Database Server...\nJun 02 08:24:18 server.abelardolg.com mongod[13698]: about to fork child process, waiting until server is ready for connections.\nJun 02 08:24:18 server.abelardolg.com mongod[13698]: forked process: 13701\nJun 02 08:24:18 server.abelardolg.com mongod[13698]: ERROR: child process failed, exited with 48\nJun 02 08:24:18 server.abelardolg.com mongod[13698]: To see additional information in this output, start without the \"--fork\" option.\nJun 02 08:24:18 server.abelardolg.com systemd[1]: mongod.service: control process exited, code=exited status=48\nJun 02 08:24:18 server.abelardolg.com systemd[1]: Failed to start MongoDB Database Server.\nJun 02 08:24:18 server.abelardolg.com systemd[1]: Unit mongod.service entered failed state.\nJun 02 08:24:18 server.abelardolg.com systemd[1]: mongod.service failed.\nroot 14756 0.0 0.0 112816 980 pts/0 S+ 08:45 0:00 grep --color=auto mongod\[email protected] [log]# kill -9 14756\n-bash: kill: (14756) - No such process\n", "text": "Hi there,My .conf file:How the process runsThe output for:systemctl status mongod.service -lThe output for:ps aux | grep “mongod”When I want to kill this process:Do you have at your doc a section to solve the code errors, please?Thanks in advance.Brs.", "username": "ABELARDO_GONZALEZ" }, { "code": "", "text": "Hi @ABELARDO_GONZALEZ,Can you please share the relevant part from the log file?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "I am not sure if these lines are fairly enough:\nJun 02 08:26:25 server.abelardolg.com sshd[13796]: pam_unix(sshd:auth): check pass; user unknown\nJun 02 08:26:25 server.abelardolg.com sshd[13796]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=95-165-131-164.static.spd-mgts.ru\nJun 02 08:26:27 server.abelardolg.com sshd[13796]: Failed password for invalid user bowei from 95.165.131.164 port 58940 ssh2\nJun 02 08:26:28 server.abelardolg.com sshd[13796]: Received disconnect from 95.165.131.164 port 58940:11: Bye Bye [preauth]\nJun 02 08:26:28 server.abelardolg.com sshd[13796]: Disconnected from 95.165.131.164 port 58940 [preauth]Tell me if you would like to see more lines, please.Brs.", "username": "ABELARDO_GONZALEZ" }, { "code": "", "text": "OFF-TOPIC: have you thought to add a feature to attach logs to this form?", "username": "ABELARDO_GONZALEZ" }, { "code": "like 
this\nmongod", "text": "The forum supports markdown.So, yes, you can include code.The logs you sent are about a failed connection. So I guess the mongod was running at that time. Now you can’t start it, right?", "username": "MaBeuLux88" }, { "code": "", "text": "Yes, I can’t start it.", "username": "ABELARDO_GONZALEZ" }, { "code": "", "text": "So can you share the relevant logs then plz? The one when you try to start and get the error?", "username": "MaBeuLux88" }, { "code": "", "text": "systemctl status mongod.service -l\n● mongod.service - MongoDB Database Server\nLoaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\nActive: failed (Result: exit-code) since Wed 2021-06-02 08:24:18 MDT; 2h 26min ago\nDocs: https://docs.mongodb.org/manual\nProcess: 13698 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=48)\nProcess: 13695 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\nProcess: 13692 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\nProcess: 13690 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\nMain PID: 8732 (code=exited, status=0/SUCCESS)Jun 02 08:24:17 systemd[1]: Starting MongoDB Database Server…\nJun 02 08:24:18 mongod[13698]: about to fork child process, waiting until server is ready for connections.\nJun 02 08:24:18 mongod[13698]: forked process: 13701\nJun 02 08:24:18 mongod[13698]: ERROR: child process failed, exited with 48\nJun 02 08:24:18 mongod[13698]: To see additional information in this output, start without the “–fork” option.\nJun 02 08:24:18 systemd[1]: mongod.service: control process exited, code=exited status=48\nJun 02 08:24:18 systemd[1]: Failed to start MongoDB Database Server.\nJun 02 08:24:18 systemd[1]: Unit mongod.service entered failed state.\nJun 02 08:24:18 systemd[1]: mongod.service failed.", "username": "ABELARDO_GONZALEZ" }, { "code": "mongodJun 02 08:24:18 mongod[13698]: To see additional information in this output, start without the “–fork” option.\n", "text": "These are the log of systemctl. Not mongod.Also they recommend you try this:", "username": "MaBeuLux88" }, { "code": "", "text": "I didn’t add that parameter.What is the path of that log you need to see, please?\nWhat is the command I should execute to generate that log?", "username": "ABELARDO_GONZALEZ" }, { "code": "mongod\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.888-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.897-06:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.897-06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.901-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":21841,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"\"}}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.901-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.6\",\"gitVersion\":\"72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7\",\"openSSLVersion\":\"OpenSSL 1.0.1e-fips 11 Feb 2013\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"rhel70\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.901-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"CentOS Linux release 7.9.2009 (Core)\",\"version\":\"Kernel 3.10.0-1160.25.1.el7.x86_64\"}}}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.901-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.904-06:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.904-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":10000}}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free 
monitoring\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2021-06-02T11:02:45.905-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n", "text": "", "username": "ABELARDO_GONZALEZ" }, { "code": "", "text": "{“t”:{“$date”:“2021-06-02T11:02:45.904-06:00”},“s”:“E”, “c”:“STORAGE”, “id”:20557, “ctx”:“initandlisten”,“msg”:“DBException in initAndListen, terminating”,“attr”:{“error”:“NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the ‘storage.dbPath’ option in the configuration file.”}}Config file format: https://docs.mongodb.com/manual/reference/configuration-options/#file-formatStorage options: https://docs.mongodb.com/manual/reference/configuration-options/#storage-options", "username": "MaBeuLux88" }, { "code": "", "text": "storage:\ndbPath: /var/lib/mongoThis path exists:\nroot@ [mongo]# pwd\n/var/lib/mongo", "username": "ABELARDO_GONZALEZ" }, { "code": "mongod/data/db/var/lib/mongo", "text": "Data directory /data/db not found.You are then either not using the config file you think you are using or the file is not formatted correctly because mongod is looking for the default /data/db and not /var/lib/mongo.There is definitely a problem around that config file or the one used by systemctl.", "username": "MaBeuLux88" }, { "code": "", "text": "That output:\n{“t”:{\"$date\":“2021-06-02T11:02:45.904-06:00”},“s”:“E”, “c”:“STORAGE”, “id”:20557, “ctx”:“initandlisten”,“msg”:“DBException in initAndListen, terminating”,“attr”:{“error”:“NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the ‘storage.dbPath’ option in the configuration file.”}}is coming from Mongo after executing “mongod” command, not from systemctl.I am going to google where the conf file is indicated to Mongo.My mongod.conf is located is under /etc directory.What’s wrong? ", "username": "ABELARDO_GONZALEZ" }, { "code": "", "text": "From /var/log/mongodb/mongod.log:\n{“t”:{\"$date\":“2021-06-02T12:26:49.775-06:00”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“initandlisten”,“msg”:“Shutting down”,“attr”:{“exitCode”:48}}", "username": "ABELARDO_GONZALEZ" }, { "code": "", "text": "Do you have a section where error cods are showed along with their solutions, please?", "username": "ABELARDO_GONZALEZ" }, { "code": "mongodmongod -f /etc/mongod.confmongod", "text": "The command that systemctl is running is not just mongod. 
It’s most probably more something like mongod -f /etc/mongod.conf.If you run mongod without any parameter, it’s a different problem because you aren’t using the config anymore but all the default parameters.", "username": "MaBeuLux88" }, { "code": "", "text": "Is there a way to know what ¡s the command executed by systemctl?Where could I edit the path to mongod.conf?", "username": "ABELARDO_GONZALEZ" }, { "code": "", "text": "If you want to understand, you definitively want to take a look atIn the first two articles in this series, I explored the Linux systemd startup sequence.Otherwise\n`Unit files are stored in the /usr/lib/systemd directory and its subdirectories, while the /etc/systemd/ directory and its subdirectories contain symbolic links to the unit files necessary to the local configuration of this host.`", "username": "steevej" } ]
Code status = 48
2021-06-02T14:41:36.239Z
Code status = 48
15,615
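On the last question in the thread - what command systemd actually runs and which config file mongod loaded - once the server does start, the running process can report its own parsed options from the mongo shell:

```javascript
// Shows the command line (e.g. "mongod -f /etc/mongod.conf") and every
// parsed option, including storage.dbPath and systemLog.path.
db.adminCommand({ getCmdLineOpts: 1 });

// Shell helper with the same output:
db.serverCmdLineOpts();
```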
null
[ "unity" ]
[ { "code": "var app = App.Create(realmAppId);\nvar credentials = Credentials.ApiKey(apiKey);\nvar user = await app.LogInAsync(credentials);\n\n try\n {\n client = user.GetMongoClient(mongoClient);\n database = client.GetDatabase(databaseName);\n}\ncatch(Exception e){ }\n", "text": "There seems to be strange behaviour when I run my app in the editor. The first time I run it, it’s fine. The second time I run it, it crashes the unity application. Here is a sample of my code:I am wondering if I have to handle dispose or close the connection? I am unsure.Thank you,", "username": "theresaj" }, { "code": "", "text": "Hey, the current public version of the Unity SDK is a very early preview where only the local database was tested and verified to be working. We have identified and resolved a number of issues related to the sync/remote MongoDB part of the SDK and hope to release those next week.", "username": "nirinchev" }, { "code": "", "text": "Nice! thanks for responding.", "username": "theresaj" }, { "code": "", "text": "@theresaj We’d love to hear more about the game you are looking to build with the Realm Unity SDK - drop me a line at [email protected] and I can let you fill you in on our gaming roadmap for Realm", "username": "Ian_Ward" }, { "code": "", "text": "Hey folks. I stumbled on this thread and wanted to add my observations as we are experiencing the same behavior the OP describes.We are doing some testing with Realm Sync and Unity as it seems like an attractive option for a future project.Currently using 10.1.4 of the Realm Unity SDK and seeing the following:–Logging in with emaill/pwd creds, syncing, and making changes seems to work as intended the first time after opening Unity and pressing play\n----A second play always freezes the editor–Based on what I see in the documentation (and some experimenting) it seems we need to specify a filename when we call GetInstanceAsync to get things to work as intended rather than a new .realm file being created on each sync and a resulting resync of all data\n----This works properly if I only specify a filename (subsequent syncs no longer show ‘for the first time’ and current data is not resynced\n----If I try to append a path (to the persistentDataPath folder for example) the editor crashes when trying to sync whether the file exists or not–Testing falling back to offline mode (via Realm.GetInstance) after a successful sync doesn’t seem to work\n----I receive ‘Incompatible histories. Expected a Realm with no or in-realm history, but found history type 3’ when attempting to open the .realm fileHoping this is useful in some way and wondering if the bug-fixes mentioned above were included in 10.1.4?Looking forward to efforts on this continuing as I think it would potentially be a great option for Unity developers and would like to give it a more through test-run myself.Thanks!", "username": "Snakes" }, { "code": "", "text": "The editor freezes are something we’re actively working on. You can try installing this WIP package and see if it fixes those for you.Regarding falling back to offline mode - this is probably something we should explain more explicitly in the docs, but there’s no need to fall back to offline mode - you should continue using SyncConfiguration when opening the Realm, even if you’re offline as the data will still be on your device.", "username": "nirinchev" }, { "code": "", "text": "Understood. I’ll give the file and the advice a try. 
Much appreciated!", "username": "Snakes" }, { "code": "", "text": "@nirinchevThe WIP package is a HUGE improvement. It occasionally stops recompiling the script after a change for some reason, and I’ve had it lock up the editor a couple times, but it works flawlessly 95% of the time. More than enough to keep testing this out. It also corrected the issue I was having when adding the data path. Very much appreciated!On the other front, thanks for the advice. After the changes I’m now able to access my Realm both on and offline.", "username": "Snakes" } ]
Strange Behaviour - Unity Realm SDK
2021-05-01T19:55:27.770Z
Strange Behaviour - Unity Realm SDK
4,529
null
[ "aggregation", "queries" ]
[ { "code": "\"_id\" : ObjectId(\"60b3576fb220dae53d75c995\"),\n\"Book_name\" : \"blablabla\",\n\"authors\" : \"Buddy\",\n\"the_book\" : \"this is what this book is about\"\n\"the_book\":\"SUPPOSING that Truth is a woman--what then?\"{_id:”1”, _value:”1”}, {_id:”2”, _value:”1”}, {_id:”4”, _value:”3”}, {_id:”5”, _value:”2”}, {_id:”9”, _value:”1”}\n", "text": "I have a database called books,this database contains several documents such as Book_name, authors, etc…One of these documents is called “the_book” and it’s value is basically a string which represents the content of the book itself.For example:What I am trying to do is to code a mapReducer that returns a pair of <_id,_value> where _id is equal to the size of each word in the_book and where _value is equal to the number of words of this size.For example, if we only had one book such has \"the_book\":\"SUPPOSING that Truth is a woman--what then?\" we should get:since we have one word “a” of size 1, one word “is” of size 2, three words “then”, “what” and “that” of size 4, two words “truth” and “woman” of size 5 and one word “supposing” of size 1.If we had more books then it should have return the total sum of words for all the books.I guess the mapper has to emit a pair of <word,size_of_word> for each word in “the_book” value and the reducer has to sum this up to get the <_id,_value> requested.I’m experiencing troubles with the way of splitting the string into an array of words (since the delimiter isn’t always just a space as in the example there are --)Thanks for the help !", "username": "buddy_jewgah" }, { "code": "[\n {\n '$addFields': {\n 'sizes': {\n '$map': {\n 'input': {\n '$filter': {\n 'input': {\n '$regexFindAll': {\n 'input': '$the_book', \n 'regex': '[a-z]*', \n 'options': 'i'\n }\n }, \n 'as': 'val', \n 'cond': {\n '$ne': [\n '$$val.match', ''\n ]\n }\n }\n }, \n 'as': 'val', \n 'in': {\n '$strLenCP': '$$val.match'\n }\n }\n }\n }\n }, {\n '$unwind': {\n 'path': '$sizes'\n }\n }, {\n '$group': {\n '_id': '$sizes', \n 'count': {\n '$sum': 1\n }\n }\n }\n]\n{ \"_id\" : 9, \"count\" : 1 }\n{ \"_id\" : 2, \"count\" : 1 }\n{ \"_id\" : 1, \"count\" : 1 }\n{ \"_id\" : 4, \"count\" : 3 }\n{ \"_id\" : 5, \"count\" : 2 }\n$reduce$unwind", "text": "Hi @buddy_jewgah,Here is my attempt.In action in Compass on the complex example you provided.image2266×1112 141 KBThe trick of my solution is to use a regex expression to identify the different words. The good thing is that the regex can be as simple or complex as you need.That’s the result I get:It’s most probably not the most optimized solution. I guess it’s possible to use $reduce in the first stage but it was too much for my small head…It would be a lot faster if you don’t have to use $unwind.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
MapReducer returning the number of words of each size from a document value
2021-05-31T09:12:43.054Z
MapReducer returning the number of words of each size from a document value
2,298
https://www.mongodb.com/…_2_1024x283.jpeg
[ "dot-net" ]
[ { "code": "", "text": "hi everyone, I’ve been trying for several days now to create a user using the createUser command in a mongodb atlas cluster database.\nI try to summarize the theory of the error.Through c # mongo.diriver I connect my client through a connection string in which I have the permissions of atlas admin and then I launch a “command” “” createUser \"but nothing tells me that I do not have the necessary permissions.if I try to do the same thing with mongo compass the exact same thing happens, I searched on google in the various forums but nothing I can not understand the extent of the problem I am attaching a screen where I hope you understand the matter better.image2076×574 159 KB", "username": "Salvatore_Lorello" }, { "code": "mongo", "text": "Hello @Salvatore_Lorello, welcome to the MongoDB Community forum!It is generally recommended that you perform MongoDB Administration tasks (like, security, in your case) using the command-line interface like mongo shell. You will have interactive response to the command response at each step of what you are trying.You can try the steps from this documentation to create MongoDB users:", "username": "Prasad_Saya" }, { "code": "", "text": "of course I understand, but I have a registration form where through C # driver I should create a new user who has read and write privileges only in a specific database that is dynamically created by registering the user on the form. Peeking around a bit I had read that the only way you can create a new user is either from api or from ui. I am trying to access from api but nothing I can not just could you give me some advice?", "username": "Salvatore_Lorello" }, { "code": "", "text": "through C # driver I should create a new user who has read and write privileges only in a specific databaseTo create a new user (e.g., “new-user-1”) with specific role (like read and write privileges on a specific database), you need to be a user (e.g., “super-1”) with user administrator privileges. This “super-1” user can create other users like “new-user-1” as required. This means you need to authenticate yourself as “super-1” - and then create the new user.Here are some relevant documentation / links to run the Create User command from MongoDB C# Driver:", "username": "Prasad_Saya" }, { "code": "", "text": "You cannot createUser with Atlas.See:", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
createUser with MongoDB C# driver
2021-06-01T18:19:50.377Z
createUser with MongoDB C# driver
4,040
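For reference, the shape of the command that a driver's RunCommand call sends, shown in the mongo shell. As the last reply notes, this only succeeds on a self-managed deployment where the authenticated user has user-administration privileges; Atlas database users have to be created through the Atlas UI or Admin API instead. The database, user and password below are placeholders.

```javascript
// Self-managed MongoDB only - Atlas rejects createUser sent from clients.
const tenantDb = db.getSiblingDB("tenant_db_1"); // placeholder per-customer db

// Shell helper:
tenantDb.createUser({
  user: "tenant_user_1",
  pwd: "change-me",
  roles: [ { role: "readWrite", db: "tenant_db_1" } ]
});

// Equivalent raw command (what a driver-level RunCommand would send):
tenantDb.runCommand({
  createUser: "tenant_user_1",
  pwd: "change-me",
  roles: [ { role: "readWrite", db: "tenant_db_1" } ]
});
```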
null
[ "aggregation", "queries" ]
[ { "code": "‘$push used too much memory and cannot spill to disk. Memory limit: 104857600 bytes’, ‘code’: 146, ‘codeName’: ‘ExceededMemoryLimit’\ndb.adminCommand({setParameter: 1, internalQueryMaxPushBytes:2048576000});\ndb.adminCommand({setParameter: 1, internalQueryMaxBlockingSortMemoryUsageBytes:2048576000});\n{ \n \"ok\" : 0.0, \n \"errmsg\" : \"BufBuilder attempted to grow() to 67108865 bytes, past the 64MB limit.\", \n \"code\" : 13548.0, \n \"codeName\" : \"Location13548\"\n} \n \"$match\": {\n \"$and\": \n [\n\t\t\t { \"TID\": /XP/ },\n { \"timestamp\": { \"$gte\": \"2021-05-15 00:00:00\" } },\n { \"timestamp\": { \"$lte\": \"2021-05-25 23:59:59\" } },\n { \"Col\": /^\\d.*$/ },\n { \"MT0\": /^\\d.*$/ },\n { \"Row\": /^\\d.*$/ },\n { \"TRM\": /^\\d.*$/ },\n { \"WRLFTR_0\": /^\\d.*$/ },\n { \"WRLFTR_1\": /^\\d.*$/ }\n ]\n }\n \"$match\": {\n \"$and\": \n [\n\t\t\t { \"TID\": /XP/ },\n { \"timestamp\": { \"$gte\": \"2021-05-27 00:00:00\" } },\n { \"timestamp\": { \"$lte\": \"2021-05-27 23:59:59\" } },\n { \"Col\": /^\\d.*$/ },\n { \"MT0\": /^\\d.*$/ },\n { \"Row\": /^\\d.*$/ },\n { \"TRM\": /^\\d.*$/ },\n { \"WRLFTR_0\": /^\\d.*$/ },\n { \"WRLFTR_1\": /^\\d.*$/ }\n ]\n }\n db.collection.aggregate(\n [\n {\n \"$match\": {\n \"$and\": \n [\n\t\t\t { \"TID\": /XP/ },\n { \"timestamp\": { \"$gte\": \"2021-05-15 00:00:00\" } },\n { \"timestamp\": { \"$lte\": \"2021-05-25 23:59:59\" } },\n { \"Col\": /^\\d.*$/ },\n { \"MT0\": /^\\d.*$/ },\n { \"Row\": /^\\d.*$/ },\n { \"TRM\": /^\\d.*$/ },\n { \"WRLFTR_0\": /^\\d.*$/ },\n { \"WRLFTR_1\": /^\\d.*$/ }\n ]\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"TID\": \"$TID\",\n \"Opt\": \"$Opt\",\n \"DSN1\": \"$DSN1\",\n \"DSN2\": \"$DSN2\",\n \"Col\": \"$Col\",\n \"Row\": \"$Row\",\n \"CSN\": \"$CSN\",\n\t\t\t\t\t\"PID\": {\n\t\t\t\t\t\t\"$switch\": {\n\t\t\t\t\t\t\t\"branches\": [\n\t\t\t\t\t\t\t\t{ \n\t\t\t\t\t\t\t\t\t\"case\": { \"$eq\": [ \"$Opt\",\"S5\" ] }, \"then\": \"6350\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"case\": { \"$eq\": [ \"$Opt\",\"FUNC\" ] }, \"then\": \"6400\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"case\": { \"$eq\": [ \"$Opt\",\"STR\" ] }, \"then\": \"6600\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"case\": { \"$eq\": [ \"$Opt\",\"FIN\" ] }, \"then\": \"6800\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{ \n\t\t\t\t\t\t\t\t\t\"case\": { \"$eq\": [ \"$Opt\", \"FTG\" ] }, \"then\": \"9000\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"default\": \"$_id.Opt\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}, \n \"CID\": {\n \"$switch\": {\n \"branches\": [\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ {\"$toDecimal\": \"$Col\"}, 1.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 12.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 1.0\n },\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ {\"$toDecimal\": \"$Col\"}, 13.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 24.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 2.0\n },\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ {\"$toDecimal\": \"$Col\"}, 25.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 36.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 3.0\n },\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ 
{\"$toDecimal\": \"$Col\"}, 37.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 48.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 4.0\n },\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ {\"$toDecimal\": \"$Col\"}, 49.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 60.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 5.0\n },\t\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ {\"$toDecimal\": \"$Col\"}, 61.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 72.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 6.0\n },\t\t\t\t\t\t \n ],\n \"default\": 0.0\n }\n }, \n },\n \"details\": {\n \"$push\": { // partition over\n \"MT0\": \"$MT0\",\n \"WRLFTR_0\": \"$WRLFTR_0\",\n \"WRLFTR_1\": \"$WRLFTR_1\",\n\t\t\t\t\t\t\"TRM\": \"$TRM\",\n \"timestamp\": \"$timestamp\"\n }\n },\n }\n },\n {\n \"$sort\": {\n \"_id\": 1.0,\n \"timestamp\": 1.0 // order by timestamp\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$details\",\n \"includeArrayIndex\": \"array_idx\" // row_number\n }\n },\n {\n \"$match\": { // select only order 30 - 180 in array\n \"$and\": [\n {\n \"array_idx\": {\n \"$gte\": 30.0\n }\n },\n {\n \"array_idx\": {\n \"$lte\": 180.0\n }\n }\n ]\n }\n },\n {\n \"$sort\": { // sort TRM for percentile calculation\n \"details.TRM\": 1.0\n }\n }, \n {\n \"$group\": { // group parameter back to array by partition \"_id\" \n \"_id\": \"$_id\",\n \"timestamp\": { \"$push\": {\"$toDate\":\"$details.timestamp\"} },\n \"MT0\": { \"$push\": {\"$toDouble\":\"$details.MT0\"} },\n \"TRM\": { \"$push\": {\"$toDouble\":\"$details.TRM\"} },\n \"WRLFTR_0\": { \"$push\": {\"$toDouble\":\"$details.WRLFTR_0\"} },\n \"WRLFTR_1\": { \"$push\": {\"$toDouble\":\"$details.WRLFTR_1\"} },\n }\n },\t\t\t\n { // reporting \"_id\" and calculating \n \"$project\": {\n \"_id\": 0,\n \"TID\": \"$_id.TID\",\n \"Col\": {\n \"$toDecimal\": \"$_id.Col\"\n },\n \"Row\": {\n \"$toDecimal\": \"$_id.Row\"\n },\n \"CSN\": {\n \"$substr\": [\n \"$_id.CSN\",\n 2.0,\n -6.0\n ]\n },\n \"Opt\": \"$_id.Opt\",\n \"PID\": \"$_id.PID\", \n \"DSN1\": \"$_id.DSN1\",\n \"DSN2\": \"$_id.DSN2\",\n \"CID\": \"$_id.CID\", \n \"MAX_TRM\": {\n \"$max\": \"$TRM\"\n },\n \"Q3_TRM\": { \n \"$arrayElemAt\": [ \"$TRM\", {\"$floor\": { \"$multiply\": [0.75,{\"$size\": \"$TRM\"}] }} ] \n },\n \"AVG_TRM\": {\n \"$avg\": \"$TRM\"\n },\n \"MAX_WRLFTR_0\": {\n \"$max\": \"$WRLFTR_0\"\n },\n \"MAX_WRLFTR_1\": {\n \"$max\": \"$WRLFTR_1\"\n },\n \"MAX_MT0\": {\n \"$max\": \"$MT0\"\n },\n \"MAX_timestamp\": {\n \"$max\": \"$timestamp\"\n },\n \"DAY_ID\":{\"$concat\":\n\t\t\t\t\t\t\t\t[\n\t\t\t\t\t\t\t\t\t{\"$substr\":[{\"$toString\":{\"$max\": \"$timestamp\"}}, 0, 4]}, \n\t\t\t\t\t\t\t\t\t{\"$substr\":[{\"$toString\":{\"$max\": \"$timestamp\"}}, 5, 2]},\n\t\t\t\t\t\t\t\t\t{\"$substr\":[{\"$toString\":{\"$max\": \"$timestamp\"}},8,2]} \n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n }\n },\n ],\n {\n \"allowDiskUse\": true\n }\n);\n", "text": "HiI need some advise. I have aggregation pipeline in MongoDB. I try to run this pipeline with small number of document like 200K documents, it’s seem to ok.Then I try it with bigger sample ( longer timestamp period in $match) and I found that there is an error saidSo I google and see that $push is limited memory to 100 MB. 
Then I increased memory by using this commandThen run the same query then it gave me longer running time then different error message likeIt’s not always and permanent error, in same period of timestamp as thiswhich 43,000,000 documents from $match stage I can get result from this code with no error message.But In some period of timestampwhich 7,000,000 documents from $match stage I got this error “BufBuilder attempted to grow() to 67108865 bytes, past the 64MB limit.”Here is the code, it’s everything I just change parameter name from actual to something else. Could you please give some acvise if spot some typo or same items that need to improve. thanks.", "username": "Preutti_Puawade" }, { "code": "{ \n \"_id\" : ObjectId(\"60934f0aaa31cda568956999\"), \n \"TID\" : \"MX17\", \n \"component\" : \"ABC-DEF\", \n \"unique_key\" : \"XP05ZAVF_20210405_063305_FEAT_none_A0000001_none_A0000002_660835-20210405063327\", \n \"timestamp\" : \"2021-04-05 06:33:27\", \n \"CSN\" : \"0x660835\", \n \"Col\" : \"14\", \n \"DefaultMHT\" : \"40\", \n \"DefaultNRR\" : \"2\", \n \"DefaultPRR\" : \"6\", \n \"DF_MRPM\" : \"2787\", \n \"DSN1\" : \"A0000001\", \n \"DSN2\" : \"A0000002\", \n \"EF_MRPM\" : \"2517\", \n \"FT0\" : \"-1\", \n \"FT1\" : \"-1\", \n \"FFT0\" : \"-1\", \n \"FFT1\" : \"-1\", \n \"FMT0\" : \"-1\", \n \"FMT1\" : \"-1\", \n \"FMT0\" : \"-1\", \n \"FMT1\" : \"-1\", \n \"IAVOk\" : \"1\", \n \"ICOnTemp\" : \"0\", \n \"IDFanSpeedOk\" : \"1\", \n \"IHO\" : \"1\", \n \"IOTO\" : \"1\", \n \"IS0PI\" : \"1\", \n \"IS1PI\" : \"1\", \n \"MT0\" : \"-1\", \n \"MT1\" : \"-1\", \n \"MT\" : \"15\", \n \"Operation\" : \"FEAT\", \n \"Row\" : \"12\", \n \"SHTemp\" : \"35\", \n \"SOn1\" : \"FEAT\", \n \"SOn2\" : \"FEAT\", \n \"TATT\" : \"-1\", \n \"TATT\" : \"-1\", \n \"TCCC\" : \"34.8\", \n \"TContMode\" : \"Ramp To Temperature\", \n \"TRef\" : \"25\", \n \"TRAux\" : \"29.9\", \n \"TRMain\" : \"32.1\", \n \"WRFT_Active_now\" : \"-1\", \n \"WRLFTR_0\" : \"-1\", \n \"WRLFTR_1\" : \"-1\", \n \"WRLRATTC\" : \"-1\", \n \"WRLRTFTA\" : \"-1\", \n \"WR_GNRR\" : \"-1\", \n \"WR_GPRR\" : \"-1\", \n \"WR_IOPWCBFT\" : \"-1\", \n \"WR_SIOWCBFT\" : \"-1\", \n \"WOT\" : \"-1\"\n}\n{ \n \"TID\" : \"MX17\", \n \"Col\" : NumberDecimal(\"70\"), \n \"Row\" : NumberDecimal(\"16\"), \n \"CSN\" : \"107D81\", \n \"Opt\" : \"FEAT\", \n \"PID\" : \"9000\", \n \"DSN1\" : \"A0000001\", \n \"DSN2\" : \"A0000002\", \n \"CID\" : 6.0, \n \"MAX_TRM\" : 29.1, \n \"Q3_TRM\" : 25.5, \n \"AVG_TRM\" : 25.26026490066225, \n \"MAX_WRLFTR_0\" : 38.0, \n \"MAX_WRLFTR_1\" : 0.0, \n \"MAX_MT0\" : 67.0, \n \"MAX_timestamp\" : ISODate(\"2021-04-05T04:48:58.000+0000\"), \n \"DAY_ID\" : \"20210405\"\n}\n", "text": "Additional Information :Example of original document :Example of output :", "username": "Preutti_Puawade" } ]
MongoDB: Memory issue in aggregation pipeline, need advice
2021-06-02T03:35:51.842Z
MongoDB: Memory issue in aggregation pipeline, need advice
4,010
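The pipeline in the thread above pushes five fields per document into a per-group array and only uses array positions 30-180 later on, so two things are often worth checking in this situation; neither is a guaranteed fix, and the snippet below is only a sketch that reuses the thread's own collection and field names (db.collection, details):

// Verify what the relevant memory knob is currently set to
db.adminCommand({ getParameter: 1, internalQueryMaxPushBytes: 1 });

// After the first $group, trim the pushed array to the slice that later stages
// actually use (elements 30..180, i.e. 151 elements). This does not lower what
// $group itself accumulates, but it shrinks every document that the following
// $unwind and second $group have to handle; the later array_idx filter would
// then need adjusting, since the sliced array starts at index 0 again.
db.collection.aggregate([
  /* ...$match and first $group stages as in the thread... */
  { $set: { details: { $slice: ["$details", 30, 151] } } },
  /* ...remaining stages... */
], { allowDiskUse: true });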
null
[ "swift", "performance" ]
[ { "code": "", "text": "For context, imagine a Realm containing about 100 of “Person” objects each with a “lastName” property.Creating a dictionary the following way:\nlet dict = Dictionary(grouping: realm.objects(Person.self), by: { $0.lastName })The above line takes ~0.6 milliseconds to execute if done in the same session after calling app.login(credentials:), but if I try to execute the same line after terminating the app and reopening it without logging in, it takes ~75 milliseconds.Is this kind of performance normal?", "username": "JamaicanCowboy" }, { "code": "", "text": "Calling login() does perform a network call so I would expect there to be additional latency when going through that codepath. After logging in though, all realm reads and writes will be to local disk so the latency should be the same. You also do not need to login with every app launch, the user credentials are cached so you can simply call currentUser() to open a local realm for access without needing to relogin.", "username": "Ian_Ward" }, { "code": "", "text": "Hi Ian, thanks for the quick reply.The issue here is that if I don’t call login() every app launch, the performance of that single line of code mentioned in my original post is over 100 times slower.", "username": "JamaicanCowboy" } ]
Read performance ~125 times slower using cached user
2021-06-02T00:58:25.977Z
Read performance ~125 times slower using cached user
1,856
null
[ "replication", "performance", "monitoring" ]
[ { "code": "", "text": "1 primary + 2 secondary + 1 arbiter; the secondary node sometimes needs to go to the primary to read the oplog long ago. When it reads the data, the main node host cpu is 100%, and the database cannot provide services at all. I want to know under what circumstances will trigger this situation and how can I avoid it from happening.At present, the size of my oplog is 52g (storagesize). Is it too large?My version is 4.0.8", "username": "liu_honbo" }, { "code": "rs.printReplicationInfo()", "text": "Hi @liu_honbo and welcome in the MongoDB Community !The first question is why do you have a secondary nodes that is lagging so far behind that it needs to catch up a big part of the oplog?That isn’t healthy because it means that this secondary node isn’t available for an election in case something happens to your primary and you need to elect a new primary which means that your replica set isn’t really Highly Available.A 52GB Oplog isn’t very shocking. It all depends on how much write operations you are supporting. Ideally your oplog window should cover at least a few days so you can sleep on your 2 ears during a weekend for example .If you have the hardware to support a 52GB oplog, you probably have a big cluster. Can you share some numbers maybe like RAM, nb CPUs, type of drives, rs.printReplicationInfo(), database sizes and total size of your indexes, average nb of read/writes operations per secs and how much data is inserted each day?I guess this could help to start to understand what is happening. But the first problem is why your secondary node isn’t already in sync to begin with.Cheers\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "yes,my secondary nodes get logs a few months ago.When the problem occurs, the system‘cpu is exhausted and the user’cpu is very lowHow can I avoid this problem?physical cpu:4,cors:10,logic cpu :80 ,Inte® Xeon® CPU E7-4820 v3 @1.90GHZ\nMEM:252G ![dbsize|205x106]", "username": "liu_honbo" }, { "code": "", "text": "Was your secondary node down for a few months? Is this the reason why it’s lagging behind the primary. If it needs to catch up 2 months of write operations, it’s kinda normal that it is struggling a bit…", "username": "MaBeuLux88" }, { "code": "", "text": "", "username": "liu_honbo" }, { "code": "", "text": "getmore1871×474 117 KB\nthe timestamp 1612189292 is 2021-02-01T14:21:322", "username": "liu_honbo" }, { "code": "", "text": "dbsize689×739 23.9 KB", "username": "liu_honbo" }, { "code": "", "text": "Can I block the automatic reading of a large number of oplog logs when the secondary node fails?\nOr can I specify to synchronize the failed node with another secondary node?", "username": "liu_honbo" }, { "code": "", "text": "Hey,Looks like you have 252GB RAM for 11.5+10.5 = 22GB of data which is completely overkill so it should work without any problem in theory.Your oplog is “oversized”. You have 18376h of history in it which is more than 2 years! That’s VERY confortable. Usually a few days is more than enough to allow you to resync a server that had an issue for a few hours.There is a procedure to resize the oplog but to be honest, I wouldn’t bother. It’s just more confortable and you clearly have room so it shouldn’t be an issue.You mentioned that you have 3 servers (PSS). Are the 3 servers identical?If one of your server is completely out of sync, I would reset completely this server and restart it from a backup at this point. 
It’s probably easier because the sync would just have to sync the difference between the primary and the backup so hopefully that shouldn’t take more than a few minutes / seconds (if your backup is recent, and it should be).It’s weird that your oplog is that big. By default, it’s supposed to be 5% of free disk space… So I guess you have a very large disk or a specific value in your config file.the timestamp 1612189292 is 2021-02-01T14:21:322From this log, I guess your cluster was failing since February 1st at least and it’s trying to catch up all the write operations that happened since then.So the question is: what is the fastest way to recover our secondary? Let it replay 4 months worth of write operation or reset everything and restart from a backup that will just need to replay a few hours worth of write operations (present - backup time)?From what I see, it’s completely normal to have these operations logged in the logs. That’s because operations slower than 100ms are logged by default but it’s expected for this kind of operation with an oplog this large. It’s not an issue as this will stop once our 3 nodes are in sync.Also, it’s apparently syncing from your other secondary (see readPreference) so your primary shouldn’t be impacted at all and your client workload should be fine I guess.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "The rectification of the cluster resource configuration is completely consistent\nAll businesses run on the primary node\nThe priority of the primary node is 100, and the priority of the secondary node is 50 and 30 respectively.\nNo abnormality occurred in the cpu of all secondary nodes", "username": "liu_honbo" } ]
Oplog.rs consumes a lot of CPU
2021-06-01T12:51:52.554Z
Oplog.rs consumes a lot of CPU
4,111
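For anyone hitting a similar catch-up problem, the lag of each member and the remaining oplog window are easy to check from the shell before deciding between letting the secondary replay the oplog and re-syncing it from a backup. A minimal sketch in mongosh (against a 4.0 server the second helper is named rs.printSlaveReplicationInfo() instead):

// Oplog size, space used and the time range it covers
rs.printReplicationInfo();

// How far each secondary's last applied op is behind the primary
rs.printSecondaryReplicationInfo();

// Full member detail, including optimes, states and sync sources
rs.status();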
https://www.mongodb.com/…6_2_1024x721.png
[]
[ { "code": "{\n\t\"business_id\": {\"$oid\":\"606a770a62e058344e9c52a9\"},\n\t\"category_id\": {\"$oid\":\"5e182156bfa94d17ccefb4d9\"},\n\t\"sub_category_id\": {\"$oid\":\"5e182f96bfa94d17ccefb4e5\"},\n\t\"staff_id\": {\"$oid\":\"60b462c062e058344e9c5329\"}\n}\n\"business_id\": ObjectId(\"606a770a62e058344e9c52a9\") \n\"business_id\": {\"_id\":\"606a770a62e058344e9c52a9\"},\n", "text": "Hi All, hope everyone is doing well.I needed some guidance. In Realm I’m using collection.insertOne to insert a JSON record into the collection.Example:JSONIssue is that when I insert from Realm it shows like the second dataset below. But if I take the same JSON and insert via Compass or browser then it shows correctly i.e. the ObjectId’s are imported correctly.image1191×839 91.2 KBI understand JSON can’t store object information but I can’t figure out the best way to insert a record via Realm which has reference to ObjectId’s of different collection objects. I tried for exandetc etc but no go. Either the online Realm validator doesn’t accept or same issue as screenshot above.Any help would be appreciated. Thanks!", "username": "Sam" }, { "code": "BSON.ObjectId(\"...\")exports = async function(){\n const coll = context.services.get(\"mongodb-atlas\").db(\"test\").collection(\"coll\");\n await coll.deleteMany({});\n const result_insert_father = await coll.insertOne({\"name\": \"father\"});\n const father_id = result_insert_father.insertedId;\n console.log(\"ID of the father doc: \" + father_id);\n \n await coll.insertOne({\"name\": \"son\", \"parent\": father_id});\n const son = await coll.findOne({\"name\": \"son\"});\n console.log(EJSON.stringify(son));\n \n await coll.insertOne({\"name\": \"forged\", \"parent\": BSON.ObjectId(\"60b6b6779fa4e9249123a35e\")});\n const forged = await coll.findOne({\"name\": \"forged\"});\n console.log(EJSON.stringify(forged));\n return \"Done!\";\n};\n> ran on Wed Jun 02 2021 00:57:09 GMT+0200 (Central European Summer Time)\n> took 594.379835ms\n> logs: \nID of the father doc: 60b6bb46646de5c50f28b2c4\n{\"_id\":{\"$oid\":\"60b6bb46646de5c50f28b2ca\"},\"name\":\"son\",\"parent\":{\"$oid\":\"60b6bb46646de5c50f28b2c4\"}}\n{\"_id\":{\"$oid\":\"60b6bb47646de5c50f28b2d1\"},\"name\":\"forged\",\"parent\":{\"$oid\":\"60b6b6779fa4e9249123a35e\"}}\n> result: \n\"Done!\"\n> result (JavaScript): \nEJSON.parse('\"Done!\"')\n", "text": "Hi @Sam and welcome in the MongoDB Community !I think what you are looking for is BSON.ObjectId(\"...\").Doc: https://docs.mongodb.com/realm/functions/json-and-bson/#bson.objectidHere is an example in action:Output:In Compass:\nimage823×567 42.1 KB\nI hope this helps :-).\nCheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "BSON.ObjectId(\"60b6b6779fa4e9249123a35e\")", "text": "BSON.ObjectId(\"60b6b6779fa4e9249123a35e\")Awesome! Thanks so much. This was the magic line which worked for me (for ex):\ndataTemplate2.staff_id = BSON.ObjectId(json.ustandby_staff_id);Thanks again!!", "username": "Sam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Collection.insertOne ObjectId reference issue
2021-06-01T19:24:36.705Z
Collection.insertOne ObjectId reference issue
7,264
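A related shortcut when the whole payload arrives as extended JSON: EJSON.parse() turns {"$oid": ...} values into real ObjectIds in one go, so each field does not need an individual BSON.ObjectId() call. This is only a sketch for a Realm function; the payload variable name (jsonString) and the database/collection names are hypothetical:

// jsonString is assumed to hold extended JSON, e.g.
// '{"business_id": {"$oid": "606a770a62e058344e9c52a9"}, "name": "demo"}'
const doc = EJSON.parse(jsonString);   // $oid values become ObjectId instances
const coll = context.services.get("mongodb-atlas").db("test").collection("coll");
await coll.insertOne(doc);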
null
[]
[ { "code": "", "text": "Is there a way to install both version 6 and 10 at the same project react-native?\nits not to easy to migrate, i want to migrate in parts, and this way will help alot my migration.\nthx", "username": "Royal_Advice" }, { "code": "", "text": "Is there a way to install both version 6 and 10 at the same project react-native?No, unfortunately, it is not possible to have two different versions of the realm sdk running in the same app", "username": "Ian_Ward" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Install version 6 and version 10 in the same RN project
2021-06-01T15:30:08.603Z
Install version 6 and version 10 in the same RN project
1,942
null
[ "java" ]
[ { "code": "", "text": "Hello, I would like to check if anyone has used mongo document field mapping to pojo using @Field spring annotation? Also, did anyone use a specific convention of mapping instead of property to field, using more detailed explicit mapping to a sub document field to a property. say:\n\"{Person: {Address: City}} mapping city to pojo property as “person.address.city” instead of creating multiple embedded POJO for person, address.Please advice.", "username": "Gopal_P_Muthyala" }, { "code": "fname@Field(\"fname\")\nprivate String firstName;\ntargetType@Field(targetType = DECIMAL128)\nprivate BigDecimal amount;\nPersonAddressperson{ _id: <ObjectId>, personName: <string>, address: { number: <int>, street: <string>, city: <string> } }Person.javaAddress.javaAddressPerson", "text": "Hello @Gopal_P_Muthyala, welcome to the MongoDB community forum.I would like to check if anyone has used mongo document field mapping to pojo using @Field spring annotation?The @Field annotation is to define custom metadata for document fields. This annotation is applied at the field level. It lets describe the name and type of the field as it will be represented in the MongoDB BSON document. This allows the name and type to be different than the field name of the class as well as the property type.For example a MongoDB collection’s document has a field with name fname and the Pojo class mapped to it can define the property as:And, the mapping of the Java types to BSON types using the targetType attribute with values as defined in FieldType.If a field mapping needs no customization, then there is no need to use this annotation.Also, did anyone use a specific convention of mapping instead of property to field, using more detailed explicit mapping to a sub document field to a property. say:\n\"{Person: {Address: City}} mapping city to pojo property as “person.address.city” instead of creating multiple embedded POJO for person, address.If a Person class has a Address property, and if this is mapped to a MongoDB document in a person collection, then the document will be as:\n{ _id: <ObjectId>, personName: <string>, address: { number: <int>, street: <string>, city: <string> } }The Person.java and Address.java Pojo classes are defined for this mapping (sometimes, Address class can be a static inner class within the Person class).To simplify the coding in the Pojo classes, you can use the Project Lombok utility lbrary - this lets you annotate properties and not write their get/set, toString, hashcode and equals method code. There are quite a few annotations in this library including @Getter/Setter, @ToString, etc.", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you Prasad, Sure i will look at the Project Lombok. So to map city in a Person POJO, do i need to have Person, and Address pojo? Since my use case has a lot of pojo and complex nested mongo document. I just want one Pojo with all details as in PersonAddress.java having city as a property with @Field (“person.address.city”), would that work?", "username": "Gopal_P_Muthyala" }, { "code": "", "text": "I just want one Pojo with all details as in PersonAddress.java having city as a property with @Field (“person.address.city”), would that work?I don’t know. Did you try? What did you find?", "username": "Prasad_Saya" }, { "code": "", "text": "Hi. Whether it is working? Or any other way . Please share thoughts", "username": "halena_tiwari" }, { "code": "", "text": "Is there any other way. Please share thoughts", "username": "halena_tiwari" } ]
Java POJO mapping to mongo document with sub documents
2020-07-08T05:42:40.793Z
Java POJO mapping to mongo document with sub documents
9,938
https://www.mongodb.com/…b_2_1023x566.png
[]
[ { "code": "", "text": "Hello forum,I noticed something on MongoDB charts.Schermafbeelding 2021-05-31 1103391912×1058 164 KB\nOnly objects with comments are visible, in this case only one.\nI expect there to be other lines that flat-line at zero. Is this possible? And how?Thanks in advance,\nGreets Wilbert", "username": "Wilbert_Gotink" }, { "code": "", "text": "Hi @Wilbert_Gotink -Sorry, without knowing anything about your data, I don’t understand what result you are looking for. If you have a field encoded in the Series channel it will result in a line per series, but it can only plot a line if there are documents in the database that use that field.Tom", "username": "tomhollander" }, { "code": "", "text": "Hi Tom,We want to plot a zero line even if there are no documents in that collection. The field we put in the series channel has a lot of documents, see this example about the collection ‘posts’.\nimage2028×1096 211 KBWilbert", "username": "Wilbert_Gotink" }, { "code": "", "text": "I still don’t know what your data or chart definition look like, but if only one series is showing then it means no documents exist for any other series (for the specific fields/filters chosen on that chart).", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
0-line doesn't show on chart
2021-05-31T09:56:20.026Z
0-line doesn't show on chart
1,832
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to announce the release of 1.5.3 of the MongoDB Go Driver.This release contains several bug fixes. For more information please see the 1.5.3 release notes.You can obtain the driver source from GitHub under the v1.5.3 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team", "username": "Isabella_Siu" }, { "code": "", "text": "", "username": "system" } ]
MongoDB Go Driver 1.5.3 Released
2021-06-01T19:55:25.196Z
MongoDB Go Driver 1.5.3 Released
1,472
null
[ "crud" ]
[ { "code": "", "text": "As a Kafka subscriber, I am getting 2000+ records within 10 seconds and storing in one collection one by one (100+ millions records).\nFor deletion, I have scheduler that runs every 30 minutes. I need to delete records older than 2 days. Because mongo id is indexed by default. I am using it as a condition to delete by converting datetime to mongo object id(Node JS code)\ntime = new ObjectID(moment().subtract(2, ‘days’).valueOf()/1000);\ndb.deleteMany({_id:{$lte: time})Problem is, this delete query takes too much time to finish ( and increase CPU utilization). Is there any alternative way to delete old records? I heard about TTL but not sure if it works with large collection.Thanks in advance. ", "username": "avnish" }, { "code": "_id", "text": "Hi @avnish, welcome to the community. Glad to have you here.You heard right, about TTL indexes. They are great at removing documents that are past the expiration date without having to write code to explicitly delete the documents, and they work regardless of the collection size. However, _id field does not support TTL indexes. So, you would have to index a field whose value is either a date, or an array with date values.Check out the behavior section of the documentation for more details on the inner workings of this feature.Hope this helps. Let me know if you have any questions.Mahi", "username": "mahisatya" } ]
Insert and delete large amounts of data
2021-06-01T15:17:57.398Z
Insert and delete large amounts of data
4,325
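For reference, the TTL index suggested above is a one-line change, provided the documents carry a BSON date field; the collection and field names below (events, createdAt) are hypothetical. Expired documents are then removed by a background task that runs roughly once a minute, instead of by a large periodic deleteMany:

// Remove documents about 2 days (172800 seconds) after their createdAt date
db.events.createIndex({ createdAt: 1 }, { expireAfterSeconds: 172800 });

// Each inserted document only needs the date field populated
db.events.insertOne({ createdAt: new Date(), payload: { /* ... */ } });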
null
[ "swift" ]
[ { "code": "@ObservedRealmObject **var** userRealm: User", "text": "Hello everyone,Trying to bind a toggle Binding (swiftui) to a bool in user realm, but cannot figure it out.Steps:User model has a bool field, “isVisible”, how to I create my binding var to a toggle that let’s user change it’s visibility ?It’s seems easy, but cannot get it done.Thank you.", "username": "Radu_Patrascu" }, { "code": " let isVisible = Binding<Bool> (\n get: {\n self.userRealm.isVisible.value!\n },\n set: {value in\n guard let thawed = userRealm.thaw(), let realm = thawed.realm else {\n //os_log(.error, \"Error\")\n return\n }\n try! realm.write {\n thawed.isVisible.value! = value\n }\n }\n )\n", "text": "Ok, I finally figure it out, even if really clunky, it works:Created a costume binding with custom getter and setter.If there are any other simple ideas, there are still welcome.", "username": "Radu_Patrascu" }, { "code": "$userRealm.wrappedValue.isVisible.toggle()\n", "text": "@Radu_Patrascu, try this…", "username": "Andrew_Morgan" }, { "code": "", "text": "Hey Andrew,I can wrap this in an action, but the Toggle inside SwiftUI is by default toggling it, so I would expect just to do $userRealm.isVisible.wrappedValue, but I cannot.", "username": "Radu_Patrascu" }, { "code": "", "text": "Could you please shared the failing code snippet (or the whole view) and the error that you’re seeing?", "username": "Andrew_Morgan" }, { "code": "var body: some View {\n Toggle(isOn: $userRealm.metricUnits.wrappedValue,\n label: {\n \n Text(\"Units\")\n .font(.title2)\n Spacer()\n Text(userRealm.metricUnits.value! ? \"Oilfield\" : \"Metric\")\n \n }\n )\n}\nModel:\n**@objcMembers** **class** User: Object, ObjectKeyIdentifiable {\n\n**dynamic** **var** _id: String = \"\"\n\n**dynamic** **var** _partition: String = \"\"\n\n**dynamic** **var** isSubscribed: Bool = **false**\n\n**let** darkMode = RealmOptional<Bool>()\n\n**dynamic** **var** email: String = \"\"\n\n**let** memberOf = RealmSwift.List<Project>()\n\n**let** metricUnits = RealmOptional<Bool>()\n\n**dynamic** **var** name: String? = **nil**\n\n**dynamic** **var** selectedWellPartition: String? = **nil**\n\n**dynamic** **var** isProjectOpen: Bool = **false**\n\n**override** **static** **func** primaryKey() -> String? {\n\n**return** \"_id\"\n\n}\n", "text": "Screenshot 2021-06-01 at 15.54.062256×760 106 KBimport SwiftUI\nimport RealmSwiftstruct exemSwift: View {\n@ObservedRealmObject var userRealm: User}}", "username": "Radu_Patrascu" }, { "code": "metricUnitsRealmOptionalTogglethaw()$userRealm.wrappedValue.metricUnits.value! = value", "text": "I couldn’t find a perfect solution (the fact that metricUnits is a RealmOptional is complicating passing it as a binding to Toggle).Hopefully, you should at least be able to avoid the need to explicitly open a Realm transaction and using thaw() in your custom binding though. Have you tried $userRealm.wrappedValue.metricUnits.value! 
= value?", "username": "Andrew_Morgan" }, { "code": "", "text": "Tried right now, it gives :\n“Cannot modify managed objects outside of a write transaction.”I actually don’t need it to be Optional, Going to change it to Bool and come with feedback.", "username": "Radu_Patrascu" }, { "code": "Toggle(isOn: $userRealm.metricUnits, ...", "text": "If it’s made non-optional then I’d expect this to work: Toggle(isOn: $userRealm.metricUnits, ...", "username": "Andrew_Morgan" }, { "code": "", "text": "Yep, works perfectly Thank you!", "username": "Radu_Patrascu" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Live binding to a Bool variable in MongoDB Realm Object
2021-06-01T07:47:50.165Z
Live binding to a Bool variable in MongoDB Realm Object
3,245
null
[ "aggregation" ]
[ { "code": "db.collection.aggregate(\n [\n// {\n// \"$limit\": 200000.0\n// },\n {\n \"$match\": {\n \"$and\": \n [\n\t\t\t\t\t { \"TID\": /XP/ },\n { \"timestamp\": { \"$gte\": \"2021-05-26 00:00:00\" } },\n { \"timestamp\": { \"$lte\": \"2021-05-27 23:59:59\" } },\n { \"Col\": /^\\d.*$/ },\n { \"MT0\": /^\\d.*$/ },\n { \"Row\": /^\\d.*$/ },\n { \"TRM\": /^\\d.*$/ },\n { \"WRLFTR_0\": /^\\d.*$/ },\n { \"WRLFTR_1\": /^\\d.*$/ }\n ]\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"TID\": \"$TID\",\n \"Opt\": \"$Opt\",\n \"DSN1\": \"$DSN1\",\n \"DSN2\": \"$DSN2\",\n \"Col\": \"$Col\",\n \"Row\": \"$Row\",\n \"CSN\": \"$CSN\",\n\t\t\t\t\t\"PID\": {\n\t\t\t\t\t\t\"$switch\": {\n\t\t\t\t\t\t\t\"branches\": [\n\t\t\t\t\t\t\t\t{ \n\t\t\t\t\t\t\t\t\t\"case\": { \"$eq\": [ \"$Opt\",\"S5\" ] }, \"then\": \"6350\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"case\": { \"$eq\": [ \"$Opt\",\"FUNC\" ] }, \"then\": \"6400\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"case\": { \"$eq\": [ \"$Opt\",\"STR\" ] }, \"then\": \"6600\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t\"case\": { \"$eq\": [ \"$Opt\",\"FIN\" ] }, \"then\": \"6800\"\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{ \n\t\t\t\t\t\t\t\t\t\"case\": { \"$eq\": [ \"$Opt\", \"FTG\" ] }, \"then\": \"9000\"\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t],\n\t\t\t\t\t\t\t\"default\": \"$_id.Opt\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}, \n \"CID\": {\n \"$switch\": {\n \"branches\": [\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ {\"$toDecimal\": \"$Col\"}, 1.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 12.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 1.0\n },\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ {\"$toDecimal\": \"$Col\"}, 13.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 24.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 2.0\n },\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ {\"$toDecimal\": \"$Col\"}, 25.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 36.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 3.0\n },\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ {\"$toDecimal\": \"$Col\"}, 37.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 48.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 4.0\n },\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ {\"$toDecimal\": \"$Col\"}, 49.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 60.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 5.0\n },\t\n {\n \"case\": \t{ \"$and\": \n\t\t\t\t\t\t\t\t\t\t\t[ \t\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$gte\": [ {\"$toDecimal\": \"$Col\"}, 61.0 ] },\n\t\t\t\t\t\t\t\t\t\t\t\t{ \"$lte\": [ {\"$toDecimal\": \"$Col\"}, 72.0 ] }\n\t\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t\t},\n \"then\": 6.0\n },\t\t\t\t\t\t \n ],\n \"default\": 0.0\n }\n }, \n },\n \"details\": {\n \"$push\": { // partition over\n \"MT0\": \"$MT0\",\n \"WRLFTR_0\": \"$WRLFTR_0\",\n \"WRLFTR_1\": \"$WRLFTR_1\",\n\t\t\t\t\t\t\"TRM\": \"$TRM\",\n \"timestamp\": \"$timestamp\"\n }\n },\n }\n },\n {\n \"$sort\": {\n \"_id\": 1.0,\n \"timestamp\": 1.0 // order by timestamp\n }\n },\n {\n \"$unwind\": {\n \"path\": 
\"$details\",\n \"includeArrayIndex\": \"array_idx\" // row_number\n }\n },\n {\n \"$match\": { // select only order 30 - 180 in array\n \"$and\": [\n {\n \"array_idx\": {\n \"$gte\": 30.0\n }\n },\n {\n \"array_idx\": {\n \"$lte\": 180.0\n }\n }\n ]\n }\n },\n {\n \"$sort\": { // sort TRM for percentile calculation\n \"details.TRM\": 1.0\n }\n }, \n {\n \"$group\": { // group parameter back to array by partition \"_id\" \n \"_id\": \"$_id\",\n \"timestamp\": { \"$push\": {\"$toDate\":\"$details.timestamp\"} },\n \"MT0\": { \"$push\": {\"$toDouble\":\"$details.MT0\"} },\n \"TRM\": { \"$push\": {\"$toDouble\":\"$details.TRM\"} },\n \"WRLFTR_0\": { \"$push\": {\"$toDouble\":\"$details.WRLFTR_0\"} },\n \"WRLFTR_1\": { \"$push\": {\"$toDouble\":\"$details.WRLFTR_1\"} },\n }\n },\t\t\t\n { // reporting \"_id\" and calculating \n \"$project\": {\n \"_id\": 0,\n \"TID\": \"$_id.TID\",\n \"Col\": {\n \"$toDecimal\": \"$_id.Col\"\n },\n \"Row\": {\n \"$toDecimal\": \"$_id.Row\"\n },\n \"CSN\": {\n \"$substr\": [\n \"$_id.CSN\",\n 2.0,\n -6.0\n ]\n },\n \"Opt\": \"$_id.Opt\",\n \"PID\": \"$_id.PID\", \n \"DSN1\": \"$_id.DSN1\",\n \"DSN2\": \"$_id.DSN2\",\n \"CID\": \"$_id.CID\", \n \"MAX_TRM\": {\n \"$max\": \"$TRM\"\n },\n \"Q3_TRM\": { \n \"$arrayElemAt\": [ \"$TRM\", {\"$floor\": { \"$multiply\": [0.75,{\"$size\": \"$TRM\"}] }} ] \n },\n \"AVG_TRM\": {\n \"$avg\": \"$TRM\"\n },\n \"MAX_WRLFTR_0\": {\n \"$max\": \"$WRLFTR_0\"\n },\n \"MAX_WRLFTR_1\": {\n \"$max\": \"$WRLFTR_1\"\n },\n \"MAX_MT0\": {\n \"$max\": \"$MT0\"\n },\n \"MAX_timestamp\": {\n \"$max\": \"$timestamp\"\n },\n \"DAY_ID\":{\"$concat\":\n\t\t\t\t\t\t\t\t[\n\t\t\t\t\t\t\t\t\t{\"$substr\":[{\"$toString\":{\"$max\": \"$timestamp\"}}, 0, 4]}, \n\t\t\t\t\t\t\t\t\t{\"$substr\":[{\"$toString\":{\"$max\": \"$timestamp\"}}, 5, 2]},\n\t\t\t\t\t\t\t\t\t{\"$substr\":[{\"$toString\":{\"$max\": \"$timestamp\"}},8,2]} \n\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t},\n }\n },\n ],\n {\n \"allowDiskUse\": true\n }\n);\n", "text": "Hi ,I got this error after I run this code‘$push used too much memory and cannot spill to disk. Memory limit: 104857600 bytes’, ‘code’: 146, ‘codeName’: ‘ExceededMemoryLimit’the code I ran is quite expensive query in term of memory usage due to $group and $push however I got 64 GB of RAM in this server for MongoDB only. So question is how can I increase this limit because by default It’s seem $push is limited RAM to 100 MB.“allowDiskUse” is set to true but still have this errorAdditional info: this MongoDB version is “4.4.1”.", "username": "Preutti_Puawade" }, { "code": "db.adminCommand({setParameter: 1, internalQueryMaxPushBytes: 1048576000});\n", "text": "Maybe this one will work.Will try and update.", "username": "Preutti_Puawade" } ]
Need advice to increase memory to run $push command with a large data set
2021-06-01T06:21:28.666Z
Need advice to increase memory to run $push command with a large data set
3,500
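If raising the limit at runtime turns out to help, the change can be verified immediately, and the same parameter name can be set under the setParameter section of the mongod configuration file so it survives a restart. The byte value below is simply the one used in the thread, not a recommendation:

// Runtime change, as in the thread, followed by a check that it took effect
db.adminCommand({ setParameter: 1, internalQueryMaxPushBytes: 1048576000 });
db.adminCommand({ getParameter: 1, internalQueryMaxPushBytes: 1 });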
null
[ "crud" ]
[ { "code": "", "text": "I want to assign one field value (existing field) to another field (new one) inside embedded (means array if hashes). In short I wanna create new key-value pair, where key is static but value is another field’s value.", "username": "Dhara_Pathak" }, { "code": "", "text": "Welcome to the MongoDB Community @Dhara_Pathak!To help us provide relevant advice, please comment with:an example document before & after the change you want to makeyour version of MongoDB serverYou may find Formatting code and log snippets in posts tips handy for improving readability.Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X,Thanks for the advice.My MongoDB server version: 4.2.5Here is what I’m trying to do:", "username": "Dhara_Pathak" }, { "code": "db.mydata.insert({\n invoices: {\n services: {\n price: NumberDecimal(\"4.99\")\n }\n }\n})\npriceunit_pricedb.mydata.update(\n // Match all documents\n {},\n // MongoDB 4.2+ can use an aggregation pipeline for updates\n [{\n $set: {\n \"invoices.services.unit_price\": \"$invoices.services.price\"\n }\n }]\n)\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\ndb.mydata.find().pretty()\n{\n\t\"_id\" : ObjectId(\"60b623967440f0f009e0a6ff\"),\n\t\"invoices\" : {\n\t\t\"services\" : {\n\t\t\t\"price\" : NumberDecimal(\"4.99\"),\n\t\t\t\"unit_price\" : NumberDecimal(\"4.99\")\n\t\t}\n\t}\n}\n", "text": "Hi @Dhara_Pathak,Starting from MongoDB 4.2 you can perform Updates with an Aggregation Pipeline. An aggregation pipeline enables more expressive updates including calculated fields and references to other field values in the same document.You haven’t provided an example document, but based on your description I assume something like the following would be representative:Updating with an aggregation pipeline to copy price to a new unit_price field:Checking the resulting document:My MongoDB server version: 4.2.5There have been quite a few minor server releases since 4.2.5 was released in March, 2020. Minor releases do not introduce any backward-breaking compatibility issues or behaviour changes within the same server release series so I’d recommend updating to the latest 4.2.x release (currently 4.2.14) . The Release Notes for MongoDB 4.2 have more details on specific bug fixes and improvements in minor releases.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank You so much @Stennie_X !", "username": "Dhara_Pathak" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How can we assign one field value to another?
2021-02-23T03:28:24.052Z
How can we assign one field value to another?
63,458
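The example above covers a plain embedded document; when services is actually an array of sub-documents (the "array of hashes" in the original question), the same aggregation-pipeline update can copy the field inside every element with $map. This is a sketch only, reusing the hypothetical mydata collection from the answer and assuming invoices.services is an array:

db.mydata.updateMany(
  {},
  [{
    $set: {
      "invoices.services": {
        $map: {
          input: "$invoices.services",
          as: "svc",
          // keep each element as-is and add unit_price next to the existing price
          in: { $mergeObjects: ["$$svc", { unit_price: "$$svc.price" }] }
        }
      }
    }
  }]
);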