image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---|
null | [] | [
{
"code": "db.users.createIndex({ Id: 1 })\n",
"text": "I am new to the MongoDB database. Because the number of records is very large, it takes us a lot of time to sort them in each query. I want to have a field called Id, like in SQL server, that I can set myself. And when I retrieve the information from this table, the information will be returned to me sorted according to the Id field. With my research, I found out that fields cannot be auto-incremented in NoSQL databases. Now if we index on the Id field according to the following command:When reading the information in this table, the records will be sorted by Id or do we need to re-sort in the query?",
"username": "Hossein_Mahdavi"
},
{
"code": "db.users.createIndex({ Id: 1 })\nIdIdIddb.users.find().sort({ Id: -1 })\n-1sort()",
"text": "Hey @Hossein_Mahdavi,Welcome to the MongoDB Community!Now if we index on the Id field according to the following command:When reading the information in this table, the records will be sorted by Id or do we need to re-sort in the query?Based on the following documentation, if you index the Id field, the records will be sorted by Id when you read the documents from the collection, and there is no need to re-sort the query.However, if you want to sort the documents by Id in descending order, you can use the following query:The -1 in the sort() method tells MongoDB to sort the documents in descending order.Let us know if you have any other questions.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "I think this answer is a bit misleading.I am pretty sure that despite the existence of index {Id:1} that the documents are not necessarily return in sorted order of Id unless sort({Id:1}) is used. Calling db.users.find() will produce document in a random order. Calling db.users.find().sort({Id:1}) will present the document in ascending order of Id, a sort will not be perform because the index can be used to returned the documents in the desired order.By reading myself I am not that clear either but to resume.1 - db.users.find() returns documents in random order\n2 - db.users.find().sort({Id:1}) returns document sorted by Id but the server do not perform a sort\n3 - db.users.find().sort({Id:-1}) returns document sorted descending but the server do not perform a sort\n4 - db.users.find().sort({FieldWithourIndex:1}) returns document sorted by FieldWithoutIndex but the server performs a sort because there is no index.Conclusion:",
"username": "steevej"
},
{
"code": "db.users.find()db.users.find().sort({ Id: 1 })\"Id\"",
"text": "Hey @Hossein_Mahdavi,As @steevej rightly pointed out, it’s worth considering the following scenarios:Feel free to experiment with different queries to see how they behave in your specific use case. If you’re aiming for both specific ordering and fast retrieval, a combination of indexing and sorting is recommended.If you have further questions or need clarification, don’t hesitate to ask.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Thank you @steevej for your response. That is the perfect answer.",
"username": "Hossein_Mahdavi"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | How to Auto sort on one field in MongoDB? | 2023-08-08T12:05:13.531Z | How to Auto sort on one field in MongoDB? | 394 |
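A quick way to verify the behaviour discussed in this thread is to compare query plans in mongosh. The sketch below assumes the `users` collection and the `{ Id: 1 }` index from the posts above; if the winning plan shows an IXSCAN and no SORT stage, the index supplied the order and no in-memory sort was performed. `someOtherField` is just a placeholder for any field without an index.

```js
// Assumes the index from the thread already exists:
// db.users.createIndex({ Id: 1 })

// Without sort(): documents come back in whatever order the chosen plan yields
db.users.find().explain("executionStats");

// With sort() on the indexed field: the plan shows an IXSCAN but no SORT stage,
// i.e. the index itself provides the ordering (works in both directions)
db.users.find().sort({ Id: 1 }).explain("executionStats");
db.users.find().sort({ Id: -1 }).explain("executionStats");

// Sorting on a non-indexed field: the plan includes a SORT stage (in-memory sort)
db.users.find().sort({ someOtherField: 1 }).explain("executionStats");
```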
[
"atlas-device-sync"
] | [
{
"code": "",
"text": "I’m considering using Realm Sync for a new project, but I’m trying to understand if I’m misunderstanding the pricing.If I understand the pricing (and that’s a big “if”), Firestore seems a lot more economical. Unless I’m missing something, the free tier seems far better on Firestore as well as the pricing after that (seems like it’s half the cost of realm sync).Firebase Pricing\nhttps://cloud.google.com/firestore/pricingFirebase Prices for Requests\nReads $0.036 per 100,000 documents\nWrites $0.108 per 100,000 documents\nDelete $0.012 per 100,000 documentsData Storage = $0.108 / GB per monthLet’s say that averages out to around 0.072 per 100,000 requests (excluding deletes)Realm Sync Pricing\nhttps://docs.mongodb.com/realm/billing/ $0.2 per 100,000 requests (effectively double if I’m reading it correctly).\nPlus a sync cost of 0.00000008 / min.\nPlus data transfer = $0.12 / GBSeems like Realm double/triple charges for syncs (since it charges for the requests, sync time, and data transfer), and the cost per request being more than double than FirestoreI guess the biggest difference is with Firebase you’re going to be paying a monthly fee for storage but I think the requests will be the most costly aspect of most apps (I could be wrong). Beyond that, Firestore isn’t truly off-line first.Check out the examples for example mobile product cost:\nFirebase Example:\n\nScreen Shot 2021-08-16 at 10.48.32 AM720×780 74.6 KB\nRealm Example (go-to mobile application section on the Realm Pricing Page above as I’m restricted in posting images or additional links)The costs per request seem to be the big differentiator. The free tier seems to be more generous with Firebase as well. Am I missing something or is Realm significantly more costly when compared with Firestore?",
"username": "Fred_Thompson"
},
{
"code": "",
"text": "Long time Firebase and Realm developer here…I am not sure a comparison can be made between the two products as they serve different purposes.Realm is an offline first database and all of the app data is persisted locally and syncing in the background when necessary. On the other hand, Firebase is an online first database and data exists in the cloud* and sync’s continually.So while you can compare some numbers, the bottom line is that if you’re fetching Realm data that has already been sync’d it’s local and (essentially) $0 cost. Whereas if you’re fetching Firebase data it will pull from the server at a cost.I would suggest evaluating how the app is going to be used - mostly offline first or online first and once you have clarity on that, comparing pricing would be more applicable.*while firebase offers offline persistence, it’s designed for brief outages, like when a train disconnects as it goes through a tunnel.",
"username": "Jay"
},
{
"code": "",
"text": "Firestore pricing is based on reads/writes per collection and Realm has it per request. Doesn’t that make a big difference or am I missing something? From this point of view it seems that Firestore might be a lot more expensive if there is a need to read/write/delete multiple collections/documents.\nFor example - you want to execute query of 500 collections/documents.\nSo for Firestore this will count as 500 reads, but for Realm this will count as 1 request. Same goes for writes/deletes. Could you please confirm this?\nSo if this is correct, then it looks like Realm might be a lot cheaper when it comes to application that needs to query/modify/delete multiple collections/documents?",
"username": "Arturs_Mikuckis"
},
{
"code": "",
"text": "On the other hand, Realm Sync pricing is strangely identical to AWS App Sync.About connection vs atomic change:",
"username": "Benoit_Delville"
},
{
"code": "",
"text": "MongoDB Atlas Device Sync will batch the operations in as “few requests” as possible.Based on the total size of the events, they are batched as densely as possible up to 1MB",
"username": "Tomas_Arguinzones"
}
] | Realm Sync vs Firestore Pricing | 2021-08-16T14:51:10.050Z | Realm Sync vs Firestore Pricing | 10,111 |
|
null | [
"queries",
"indexes"
] | [
{
"code": "",
"text": "If I have a string property that has content similar to a paragraph (usually several sentences of content) and want to speed up search for that property when I do “contains” queries, will making that property an Index help speed that up? Or does indexing only help if you are searching an exact match for the string content?In other words, I have a property that has string content that i am looking to find partial matches with for example “contains” the phrase “project management”. Does indexing still help make queries like this faster or does it need to be searching for “exact” matches of the entire string for indexing to help?Is there any documentation that explains how the indexing works behind the scenes? Thanks!",
"username": "Shawn_Murphy"
},
{
"code": "",
"text": "Hey @Shawn_Murphy,Adding a string index will only optimize queries for equality matching in strings currently, though we may optimize other cases in the future. If you are looking for the technical details of the string index, you can check out the source code comments. Querying with the “contains” operator uses the Boyer-Moore algorithm.It sounds like what you are looking for is full text search. It is not trivial to implement, but it is a frequently requested feature, you can add your vote to indicate community interest here: Full text search support for Realm – MongoDB Feedback Engine",
"username": "James_Stone"
},
{
"code": "",
"text": "Thanks a lot @James_Stone ! This is very helpful info and just what I was looking for. Much appreciated.Best,\nShawn",
"username": "Shawn_Murphy"
},
{
"code": "String.where().filter()let allFoos: Results<Foo> = someRealm.objects(Foo.self)\nlet theOneFoo: Foo = allFoos.filter({ $0.someIndexedStringProperty == \"someuniquestring\" })\nsomeIndexedStringPropertyFooFoolet theFoo: Foo = someRealm.object(ofType: Foo.self, forPrimaryKey: value)\nFoo",
"text": "@James_Stone Apologies for the late reply. If a String property is indexed, does that speed up querying in all operations that test against that property, such as .where() and .filter() (assuming we use NSPredicate or Realm Queries in those functions instead of a closure-based approach?Suppose I have this, for instance:If someIndexedStringProperty is the primaryKey for Foo, Realm is blazingly fast at retrieving that Foo using:I’m looking to get similar query performance on another indexed String property that is unique across all Foos, but is not the primaryKey. So far, I haven’t been able to do that.",
"username": "Bryan_Jones"
},
{
"code": "",
"text": "Hi @Bryan_Jones, yes there should be a marked improvement in performance when querying using equality matching on an indexed property vs a non-indexed property. If you are not seeing this, can you open an issue in the appropriate SDK (looks like realm-swift) and share your schema with the indexed property and the query? We should be able to help you sort out what’s going on there.",
"username": "James_Stone"
},
{
"code": "==NSOutlineView/",
"text": "@James_Stone I do indeed see a difference in performance. It does seem slower than a query by primaryKey, but querying with == on an indexed vs. non-indexed string property definitely shows a difference.My use case is constructing an NSOutlineView on-the-fly as the user expands each level. The reason is that the OutlineView shows the entire filesystem from / on down, so constructing the entire tree of model objects (800,000+) ahead of time is slow and wasteful. (Realm also doesn’t have support for a live-object “tree” collection, so I have to build the datasource level-by-level manually.)Realm’s query performance using the “materialized paths” approach seems to be sufficient to create each level of the tree’s data on-the-fly in the time it takes to animate the level open.",
"username": "Bryan_Jones"
},
{
"code": "",
"text": "To follow up here for those interested query performance and how an index can help, I’m happy to say that we now have support for full text search. There’s a nice intro here.",
"username": "James_Stone"
}
] | How does Realm indexes work with string properties and partial match search | 2022-05-26T23:46:09.750Z | How does Realm indexes work with string properties and partial match search | 3,693 |
null | [
"atlas-cluster",
"cloud-manager"
] | [
{
"code": "",
"text": "I cannot even visit cloud.mongodb.com. When I tried to connect with mobile hotspot I can visit.",
"username": "Akshith_Pottigari"
},
{
"code": "",
"text": "Hi @Akshith_Pottigari,I was able to visit the cloud.mongodb.com page no issues on my end. I believe it may be an issue with your initial internet connection (I presume the one before you connected via mobile hotspot) so you may wish to contact your network provider / administrator regarding the issues connecting to the domain mentioned.Additionally, I have checked the cloud status page and there have been no recent issues noted.Regards,\nJason",
"username": "Jason_Tran"
}
] | Cannot visit to cloud.mongodb.com | 2023-08-08T10:57:29.712Z | Cannot visit to cloud.mongodb.com | 448 |
null | [] | [
{
"code": "",
"text": "Hi All!\nI’m reaching out today wondering if anyone could help with accessing Atlas.We’re a small startup and have been utilizing MongoDB for our entire existence. Our original developer has been gone for about 1.5 years and we’re now a team of three. We interface with our databases (prod/dev) through Compass or libraries like PyMongo and of course our internal API.Recently, we’ve been wanting to explore with Atlas’s Vector Searching but have realized no one on the current team has access to our Atlas organization (not even any C-level). We are fairly certain the email of our original developer has been closed out- Is there any way we transfer organization ownership?Any help would be appreciated!!",
"username": "rivitt"
},
{
"code": "",
"text": "Hi @rivitt - Welcome to the community.I would contact the Atlas in-app chat support regarding this topic. Unfortunately we won’t be able to help you out much here on the forums itself as this is specific to your organizations Atlas account security.Regards,\nJason",
"username": "Jason_Tran"
}
] | Small team, lost access to MongoDB Atlas Organization | 2023-08-08T15:11:22.318Z | Small team, lost access to MongoDB Atlas Organization | 252 |
null | [
"atlas-search",
"serverless"
] | [
{
"code": "",
"text": "Hello )I want to store embedding values and text into a collection And use ann search via atlas vector search api.Is it possible to use vector search function in mongodb atlas serverless?\nI will not use Atlas triggerThank you",
"username": "kIL_Yoon"
},
{
"code": "",
"text": "Hi thereWhen you say “Atlas Serverless” do you mean Serverless instances? If so, unfortunately, vector search can currently not be used on MongoDB Atlas Serverless instances.",
"username": "Anurag_Kadasne"
}
] | Is it possible to use vector search function in mongodb atlas serverless? | 2023-08-07T09:34:41.597Z | Is it possible to use vector search function in mongodb atlas serverless? | 647 |
null | [
"aggregation"
] | [
{
"code": "db.Test.insert({ \"_id\": 1, \"Name\": \"A\" });\ndb.Test.insert({ \"_id\": 2, \"Name\": \"B\" });\ndb.Test.insert({ \"_id\": 3, \"Name\": \"C\" });\ndb.Test.insert({ \"_id\": 4, \"Name\": \"C\" });\ndb.Test.insert({ \"_id\": 5, \"Name\": \"D\" });\ndb.Test.insert({ \"_id\": 6, \"Name\": \"E\" });\ndb.Test.insert({ \"_id\": 7, \"Name\": \"F\" });\ndb.Test.insert({ \"_id\": 8, \"Name\": \"G\" });\ndb.Test.insert({ \"_id\": 9, \"Name\": \"H\" });\ndb.Test.insert({ \"_id\": 10, \"Name\": \"I\" });\ndb.Test.aggregate( [\n {\n $bucketAuto: {\n groupBy: \"$_id\",\n buckets: 5,\n output: {\"Unique\": {\"$addToSet\": \"$_id\"}},\n }\n }\n] )\n{ \"_id\" : { \"min\" : 1, \"max\" : 3 }, \"Unique\" : [ 1, 2 ] }\n\n{ \"_id\" : { \"min\" : 3, \"max\" : 5 }, \"Unique\" : [ 3, 4 ] }\n\n{ \"_id\" : { \"min\" : 5, \"max\" : 7 }, \"Unique\" : [ 6, 5 ] }\n\n{ \"_id\" : { \"min\" : 7, \"max\" : 9 }, \"Unique\" : [ 7, 8 ] }\n\n{ \"_id\" : { \"min\" : 9, \"max\" : 10 }, \"Unique\" : [ 9, 10 ] }\n{ \"_id\" : { \"min\" : 1, \"max\" : 3 }, \"Unique\" : [ 1, 2 ] , \"Counter\": 1}\n\n{ \"_id\" : { \"min\" : 3, \"max\" : 5 }, \"Unique\" : [ 3, 4 ], \"Counter\": 2 }\n\n{ \"_id\" : { \"min\" : 5, \"max\" : 7 }, \"Unique\" : [ 6, 5 ],\"Counter\": 3 }\n\n{ \"_id\" : { \"min\" : 7, \"max\" : 9 }, \"Unique\" : [ 7, 8 ] , \"Counter\": 4}\n\n{ \"_id\" : { \"min\" : 9, \"max\" : 10 }, \"Unique\" : [ 9, 10 ] , \"Counter\": 5}\n",
"text": "Hello Team,Below is my data set,Aggregate QueryResultsHow can I also add a new field and enable auto incrementing counter.Basically every bucket should be assigned a counter with incrementing value. How can I achieve this mongo aggregate 4.4 ?Regards,\nRama",
"username": "Laks"
},
{
"code": "db.Test.aggregate([\n {\n $bucketAuto: {\n groupBy: '$_id',\n buckets: 5,\n output: {\n unique: { \n $addToSet: '$_id'\n },\n },\n }\n },\n {\n // first we group the buckets, so we could loop through each bucket\n // and calculate 'counter' value for it\n $group: {\n _id: null,\n buckets: {\n $push: {\n _id: '$_id',\n unique: '$unique'\n }\n }\n }\n },\n {\n $project: {\n result: {\n // loop through buckets\n $reduce: {\n input: '$buckets',\n initialValue: {\n i: 0,\n buckets: [],\n },\n in: {\n i: {\n $add: ['$$value.i', 1]\n },\n buckets: {\n $concatArrays: ['$$value.buckets', [{\n _id: '$$this._id',\n unique: '$$this.unique',\n counter: {\n $add: ['$$value.i', 1]\n },\n }]]\n }\n }\n }\n }\n }\n },\n // ungroup buckets\n {\n $unwind: '$result.buckets',\n },\n // restore initial structure\n {\n $project: {\n _id: '$result.buckets._id',\n unique: '$result.buckets.unique',\n counter: '$result.buckets.counter',\n }\n }\n]);\n[\n { _id: { min: 1, max: 3 }, unique: [ 1, 2 ], counter: 1 },\n { _id: { min: 3, max: 5 }, unique: [ 4, 3 ], counter: 2 },\n { _id: { min: 5, max: 7 }, unique: [ 6, 5 ], counter: 3 },\n { _id: { min: 7, max: 9 }, unique: [ 7, 8 ], counter: 4 },\n { _id: { min: 9, max: 10 }, unique: [ 10, 9 ], counter: 5 }\n]\n",
"text": "Hello, @Laks !In order to add additional field with auto incrementing value, you need to add couple more stages to your aggregation pipeline:Output:",
"username": "slava"
}
] | MongoDB $bucketAuto Aggregate | 2023-08-07T13:34:15.740Z | MongoDB $bucketAuto Aggregate | 342 |
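slava’s $reduce-based pipeline is the way to go on MongoDB 4.4, as asked. For readers on MongoDB 5.0 or newer, a shorter alternative sketch is to number the buckets with $setWindowFields; this uses the collection and field names from the thread and is only a suggestion, not a replacement for the accepted approach.

```js
// MongoDB 5.0+ only: assign an incrementing Counter to each bucket
db.Test.aggregate([
  {
    $bucketAuto: {
      groupBy: "$_id",
      buckets: 5,
      output: { Unique: { $addToSet: "$_id" } }
    }
  },
  {
    $setWindowFields: {
      sortBy: { "_id.min": 1 },                   // buckets ordered by their lower bound
      output: { Counter: { $documentNumber: {} } }
    }
  }
]);
```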
null | [
"data-modeling"
] | [
{
"code": "",
"text": "If I hypothetically have a User object that has nothing besides a List, does the number of elements in the list count towards the 16 MB BSON limit on the User record? So, if I have 1 million Todo in the List, is the size of the User BSON 1 million * some_constant or is it really small still?Thanks much,\n-Jon",
"username": "Jonathan_Czeck"
},
{
"code": "",
"text": "Hey Jon,The 16MB limit is for the whole document/object.So if you have a million todos in the list, the size will be 1 million * size of each todo object. It is stored together with the rest of the fields in the User object.Please let me know if you have any other questionsYaseen",
"username": "Yaseen_Kadir"
},
{
"code": "import Foundation\nimport RealmSwift\n\nclass User: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var todos: RealmSwift.List<Todo>\n}\n\nclass Todo: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted(originProperty: \"todos\") var users: LinkingObjects<User>\n}\n",
"text": "Even if Todo is a separate collection?To be more specific, I’m talking about something like this (RealmSwift code)",
"username": "Jonathan_Czeck"
},
{
"code": "{\nuser_id: 1234\ntodo: [task1, task2,task3, etc]\n}\n",
"text": "It is 16MB per Document. So if each todo item is a separate document they can each be 16MB. You can have x number of documents and EACH one can be up to 16mb in size.For example, if you have 1 million documents like below each of those 1 million would need to be 16MB or less.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Keep in mind however that the primary keys of the linked objects will be included in the parent document (that is how they are represented in your Atlas collections), so a hypothetical 1 million linked objects will still bloat the document size of the parent object.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Thank you Kiro Morkos, that’s what I was verifying. I wasn’t sure if there was something like an SQL join table under the hood, I don’t know much about MongoDB.",
"username": "Jonathan_Czeck"
},
{
"code": "todos",
"text": "Apologies Jon, I misinterpreted your question.You’re correct, the size of the full Todo will not count toward the size of the User document. The list of Todos on the user document will only contain a list of primary keys that reference the individual Todo documents. In this case, it will be a list of the ObjectIds only. These ObjectIds will count toward the size of the user document.You should be able to confirm this behavior by inspecting the user objects on the MongoDB collection directly and see that todos on documents contains only a list of ObjectIds.",
"username": "Yaseen_Kadir"
},
{
"code": "",
"text": "No problem, I could’ve been more clear. Thanks for responding so quickly ",
"username": "Jonathan_Czeck"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does a collection's size count towards the 16 MB BSON limit? | 2023-08-08T17:36:40.146Z | Does a collection’s size count towards the 16 MB BSON limit? | 436 |
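To see the point about linked-object primary keys in practice, you can measure the stored size of each parent document directly in mongosh. The sketch below assumes the synced collection is named `User` and the list field is `todos`, matching the Swift model above; adjust the names to your Atlas schema.

```js
// Reports the BSON size of each User document and how many Todo references it holds.
// The parent stores only the ObjectIds of the linked Todos, so its size grows with
// the list length (roughly 12 bytes per ObjectId plus per-element overhead),
// not with the size of the Todo documents themselves.
db.User.aggregate([
  {
    $project: {
      bytes: { $bsonSize: "$$ROOT" },
      todoCount: { $size: { $ifNull: ["$todos", []] } }
    }
  }
]);
```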
null | [
"crud"
] | [
{
"code": "db.accounts.updateMany(\n {\"members.items\": {$exists:1}}, \n {$set:{\"members.items.$[elem].key\": \"123\"}},\n {arrayFilters:[\n {\"elem.key\":\"789\"}\n ]}\n)\n",
"text": "Spinning my wheels trying to get this update to work:Gives the error members.items must exist in order to apply array updates.My account object has an array of member objects and members have an array of item objects",
"username": "Jeff_VanHorn"
},
{
"code": " {\n \"members\": [\n {\n items: [\n { key: \"789\" }\n ]\n },\n {\n otherKey: 1\n }\n ]\n }\ndb.accounts.updateMany(\n { \"members.items.key\": \"789\" },\n {\n $set: {\n \"members.$[m].items.$[elem].key\": \"123\"\n }\n },\n {\n arrayFilters: [\n { \"m.items\": { $exists: true } },\n { \"elem.key\": \"789\" }\n ]\n }\n)\n",
"text": "Hello @Jeff_VanHorn,It would be helpful if you show an example document,I assume the document you have,Your query would be,",
"username": "turivishal"
},
{
"code": "",
"text": "That nailed it, thanks.Makes sense now that I see it.",
"username": "Jeff_VanHorn"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update an array within an array | 2023-08-08T17:37:01.056Z | Update an array within an array | 372 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.4.24-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.23. The next stable release 4.4.24 will be a recommended upgrade for all 4.4 users.Fixed in this release:4.4 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Britt_Snyman"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.4.24-rc0 is released | 2023-08-08T17:36:25.609Z | MongoDB 4.4.24-rc0 is released | 578 |
null | [
"time-series"
] | [
{
"code": "",
"text": "Hello,Im working on a project which uses time series database. In our project we created many TS collections in ~november 2022. Last week we created some other TS collections and now seems like you can edit the data in the collection and now it show indexes.But in the older TS collections, no indexes are showing and the data can still be edited.Is there some update i missed? Should i create the indexes on the older TS by myself or they still exist?",
"username": "Felipe_Rojas1"
},
{
"code": "db.collection_name.getIndexes()",
"text": "Hey @Felipe_Rojas1,Welcome to the MongoDB Community!Could you please share the specific version of MongoDB you are using?Last week we created some other TS collections and now seems like you can edit the data in the collection and now it show indexes.Just to clarify, when you say “TS collection,” are you referring to a TimeSeries collection?Could you please share the command you used to create a TimeSeries collection? Also, which field is the index being created for? Would you mind sharing the output of db.collection_name.getIndexes().And when you mention “edit the data,” are you referring to updating the documents in the time series collection?Look forward to hearing from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Indexes on TSDB | 2023-08-03T19:07:53.478Z | Indexes on TSDB | 578 |
null | [] | [
{
"code": "search_index.json",
"text": "Hi,\nI’m currently doing the MongoDB Atlas Search - LESSON 2: CREATING A SEARCH INDEX WITH DYNAMIC MAPPING - lab.\nIn the second part of the lab I shoud “open the search_index.json file in the IDE by clicking the file name in the file explorer to the left”, but I’m not able to see this file in the file explorer which is empty.",
"username": "Cris_Mi"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | File explorer empty - lab | 2023-08-08T16:13:47.236Z | File explorer empty - lab | 464 |
null | [] | [
{
"code": "",
"text": "I had some activity yesterday trying to help others.Now I visited back and see reply notifications on 6 posts from 5 people who are either OP or other helpers.But when I visit them, only 1 of them has a reply.I restarted my browser and cleared the cache, still see no replies.It is hard to believe OPs decided to delete all those replies.Besides that, I got only 1 notification e-mail for that one post that has a real reply.Can you please check what is happenning?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi @Yilmaz_Durmaz,Can you share a few of the topic links from your notifications ?If replies appear to be missing, a few possibilities (outside of technical glitches) are:Posts have been deleted by the original author.Posts have been flagged by community (usually temporarily hidden pending moderator review). This may result in off-topic posts being moved to a new topic and spam posts being deleted.You should still have an option to see temporarily hidden posts, eg:Besides that, I got only 1 notification e-mail for that one post that has a real reply.Perhaps some notifications you noticed were related to Likes, Accepted Solutions, Quotes, or Following activity rather than Replies. If you visit notification activity in your account (or by clicking on your user avatar at the top right of the page), each notification will be prefixed with an icon for the activity type.The example below shows a favourite, a reply, and an accepted solution:Replies can generate email notifications as they indicate something to read or respond to. I believe most other notifications (eg Likes or Quotes of your posts) are informational and will not trigger emails.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This one for example. Notification has the “Reply” icon and there is no flagged response from the OP.Adding in grades for courses - Working with Data - MongoDB Developer Community ForumsAll notifications were showing “replied” status by OPs. Much later during the day, they got responses from OP and/or others. but it was “much” later, like 4-5 hours.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Othe posts’s notifications are now overwritten since they got replies. The above post got a like but no reply, so my notifications still show this:\nthey were all like this at the time I opened the topic.",
"username": "Yilmaz_Durmaz"
}
] | Is there a problem on Forum servers or database? | 2022-11-13T09:00:43.841Z | Is there a problem on Forum servers or database? | 2,577 |
null | [
"aggregation"
] | [
{
"code": "[\n {\n keyword: \"tesla\",\n metrics: {\n rating: 1000,\n ...\n }\n },\n ...\n]\nkeywordmetrics.rating[\n {\n keyword: \"tesla model x\",\n ancestry: [\n {\n keyword: \"tesla\"\n },\n ...\n ]\n },\n ...\n]\nancestry.keywordmetrics.ratingkeywords.ancestries[\n {\n $match:\n {\n \"ancestry.keyword\": \"tesla\",\n }\n },\n {\n $lookup:\n {\n from: \"keywords.metrics\",\n let: {\n keyword: \"$keyword\",\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $eq: [\"$keyword\", \"$$keyword\"],\n }\n }\n }\n ],\n as: \"metrics\"\n }\n },\n {\n $sort:\n {\n \"metrics.metrics.rating\": 1\n }\n }\n]\nkeywords.metricsexplain$match$lookup$sortexplain$sort$lookupkeywords",
"text": "I am working with a set of ~5M items. I have two collections:keywords.metricsindexed: keyword, metrics.ratingkeywords.ancestriesindexed: ancestry.keywordThe task is:For a given keyword, retrieve all descendants and sort them by metrics.rating.\nIn the most extreme case, a single keyword can have up to 100k descendants.The following aggregation pipeline executed on keywords.ancestries will do that:It works and is quite performant even for 5M keywords in keywords.metrics, however, explain shows that only the $match and $lookup stages are supported by indexes. The $sort stage operates on an indexed field, but the index is not used.\nSince this query will be the foundation for a keyword browser frontend where several sorting and filtering options can be changed by users frequently, I anticipate the query to run very often. For that purpose, a query that runs in ~2s according to explain and narrowly avoids bleeding into disk in the $sort stage (80MB) seems suboptimal to me.So I would really like to avoid having to sort up to 100k documents without index.Is there a clever way to do this?One possible solution for the query to be fully covered by indexes, is to unify both collections such that the metrics $lookup can be skipped. However, I’m hesitating to pursue that solution, because it feels wrong to restructure my data just to support one additional query. If I perform the restructure, I can for example no longer get a keyword’s descendants without having to ‘drag along’ the metrics field in database calculations. Adding all the data to a single keywords collection seems to make documents rather big and I always thought bigger documents meant degrading performance for all other queries.",
"username": "cabus"
},
{
"code": "$lookuplet-keyword$expr$lookup$sortancestrykeywords.ancestrieskeywords.ancestrieskeywords.metrics",
"text": "Hello, @cabus! It’s been a while since your last post It seems, that the description of your issue needs some refinement.You have few mistakes in your aggregation code. For example, in your $lookup stage, defined let-keyword results to an array and you compare it with string later on in $expr. This will lead to no joined documents by $lookup stage, so your $sort stage will have nothing to sort.Worth to notice, that ancestry field in keywords.ancestries is an array. Can one entry in keywords.ancestries collection relate to multiple entries in keywords.metrics collection?",
"username": "slava"
},
{
"code": "$lookup[]collectiondocument$matchtesla$lookupkeywords.metricskeyword: 'tesla model x'keywords.ancestrieskeywords.metrics$sort$lookuprating",
"text": "Hi,\nthanks for your response!The $lookup stage is working as intended on my side.Maybe the description of the two involved collections was a bit poorly worded:\nThe [ and ] denote the collection boundary. I wrote the collection as a JSON array. Every object in that array represents a document.The pipeline operates as follows:",
"username": "cabus"
},
{
"code": "$lookup[\n {\n _id: ObjectId(\"64d23e0a5e9e8e87c5502356\"),\n keyword: 'tesla model x',\n ancestry: [ { keyword: 'tesla' } ],\n metrics: []\n }\n]\n...",
"text": "The $lookup stage is working as intended on my side.It is not with the sample date set you provided. Using your $match and $lookup I get:However, if I change the document in metrics from “keyword”:“tesla” to “keywork”:“tesla model x” I do get some result. Could you please provide a richer sample data set that supports your use-case directly? This means also please removes the ... from the documents because we cannot cut-n-paste your documents directly.Since the ultimate and problematic goal is to have the metrics sorted using an index, why don’t you simply start your aggregation from metrics. This way the $sort is supported by your index.",
"username": "steevej"
}
] | Use index of joined collection from $lookup in $sort after the join | 2023-08-06T15:33:08.760Z | Use index of joined collection from $lookup in $sort after the join | 345 |
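For readers following steevej’s suggestion to start the aggregation from the metrics collection, here is a rough sketch using the collection and field names from the thread. Treat it as a starting point to compare against explain() output rather than a drop-in replacement; whether the inner match on keyword can use an index depends on the server version.

```js
db.getCollection("keywords.metrics").aggregate([
  // Leading $sort on an indexed field: no in-memory SORT stage needed
  { $sort: { "metrics.rating": 1 } },
  // Join the ancestry document for each keyword, keeping only descendants of "tesla"
  {
    $lookup: {
      from: "keywords.ancestries",
      let: { kw: "$keyword" },
      pipeline: [
        {
          $match: {
            $expr: { $eq: ["$keyword", "$$kw"] },
            "ancestry.keyword": "tesla"
          }
        }
      ],
      as: "ancestry"
    }
  },
  // Drop metrics documents that are not descendants of the requested keyword
  { $match: { ancestry: { $ne: [] } } }
]);
```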
null | [
"dot-net"
] | [
{
"code": "",
"text": "Is it possible to create a MongoDb trigger using .NET driver and C# code?",
"username": "Markus_Louw"
},
{
"code": "",
"text": "Hey @Markus_Louw,Is it possible to create a MongoDb trigger using .NET driver and C# code?From my understanding, Database Triggers offer the option to execute a function (server-side JavaScript code) or utilize AWS EventBridge. Therefore, as of now I don’t think it is possible to create a MongoDB trigger using the .NET framework or C# language.May I ask what specific use case you have in mind that requires you to use the .NET or C# for creating MongoDB triggers?Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | How to create trigger using .NET driver? | 2023-08-07T12:49:33.977Z | How to create trigger using .NET driver? | 604 |
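Since the trigger body itself runs as a server-side JavaScript function (as noted above), the .NET/C# application only performs the writes; the reaction lives in Atlas. Below is a minimal hedged sketch of such a Database Trigger function — the database, collection and linked-service names are placeholders, not taken from the thread.

```js
// Atlas Database Trigger function (configured in the App Services UI),
// fired for each change event on the watched collection.
exports = function (changeEvent) {
  // "mongodb-atlas" is the default linked data source name; adjust if yours differs.
  const audit = context.services
    .get("mongodb-atlas")
    .db("mydb")
    .collection("audit_log");

  return audit.insertOne({
    sourceId: changeEvent.documentKey._id,
    operation: changeEvent.operationType,
    receivedAt: new Date(),
    snapshot: changeEvent.fullDocument // present depending on operation type and trigger config
  });
};
```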
null | [
"app-services-cli",
"app-services-hosting"
] | [
{
"code": "",
"text": "Hi,\nI’m trying to push my realm app’s hosting files (a react app) via the realm cli.\nI put my files in a “files” inside the “hosting” folder, I created “metadata.json” as the doc says.\nThis is the command I’m using and the response I get:$ realm-cli push --remote “my-app-id” --include-hosting\nDetermining changes\npush failed: EOFWhat does that means? How can I fix this?",
"username": "Benoit_Werner"
},
{
"code": "realm-cli -v",
"text": "Hi Benoit,Which realm-cli version are you using?\nrealm-cli -vWould it be possible to provide the app id (hexadecimal version), this is safe to provide publicly but if you prefer feel free to dm me or raise a support ticket.Regards\nManny",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Hi Manny, my realm-cli version is 2.6.2\nMy app id hexadecimal version is 610932e76ef44e5b35860fd3\nI’m trying specifically to deploy the QA version.\nThanks.",
"username": "Benoit_Werner"
},
{
"code": "",
"text": "Alright, I figured it out.\nI added an empty array: , in the metadata.json file and it fixed the “push failed: EOF”",
"username": "Benoit_Werner"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Deploying realm app hosting files with cli | 2023-08-07T19:10:56.813Z | Deploying realm app hosting files with cli | 445 |
null | [
"kotlin"
] | [
{
"code": "",
"text": "To be honest I’m wondering if this is at all possible. I can add the dependency as\nimplementation(npm()), but I’m not sure how to integrate it. The alternative, adding it from kotlin, doesn’t progresses beyond updating the gradle file",
"username": "Richard_Thorne"
},
{
"code": "",
"text": "The Kotlin SDK doesn’t support the JS target yet, and it is still some way out as we need to enable wasm support first.Using the Realm JS SDK should in theory be possible, but you would need to write some sort of interface in Kotlin that uses expect/actual and then delegates to the JS code. I have not tried this myself, and depending on what you want to do, it might also involve a fair amount of code.Not sure if that answers your question?",
"username": "ChristanMelchior"
},
{
"code": "",
"text": "Good answer, although I had hoped for a different answer.",
"username": "Richard_Thorne"
}
] | Has anyone experience with Kotlin/JS and Realm? | 2023-08-08T08:36:59.403Z | Has anyone experience with Kotlin/JS and Realm? | 485 |
null | [
"atlas-cluster"
] | [
{
"code": "",
"text": "i keep getting this error:\nError: queryTxt ESERVFAIL cluster0.3jphw9e.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:251:17) {\nerrno: undefined,\ncode: ‘ESERVFAIL’,\nsyscall: ‘queryTxt’,\nhostname: ‘cluster0.3jphw9e.mongodb.net’\n}\nthe same code was working this morning and i did not perform any changes.\nplease help i tried everything i found on the web .",
"username": "Mohamed_Benhamou"
},
{
"code": "# dig +short any cluster0.3jphw9e.mongodb.net\n\"authSource=admin&replicaSet=atlas-neawe5-shard-0\"\n0 0 27017 ac-swgseua-shard-00-00.3jphw9e.mongodb.net.\n0 0 27017 ac-swgseua-shard-00-01.3jphw9e.mongodb.net.\n0 0 27017 ac-swgseua-shard-00-02.3jphw9e.mongodb.net.\n",
"text": "cluster0.3jphw9e.mongodb.netLooks good for me at this time:Try a different set of dns servers perhaps.",
"username": "chris"
},
{
"code": "",
"text": "i dont quiet understand you idea,i am using a m1 mac , i dont know how to change or try different set of dns servers\nthanks",
"username": "Mohamed_Benhamou"
},
{
"code": "",
"text": "The error is DNS related. Changing the DNS servers you are using may allow the application to connect.I’m not familiar with changing dns servers on a mac either, but a quick google should show you.",
"username": "chris"
},
{
"code": "",
"text": "i changed my dns servers on my mac (i used google dns 8.8.8.8 8.8.4.4) and also tried opendns but it didn’t work.\nwhen i tested the command you mentioned earlier it says :;; connection timed out; no servers could be reached",
"username": "Mohamed_Benhamou"
}
] | Error: queryTxt ESERVFAIL | 2023-08-06T18:05:30.774Z | Error: queryTxt ESERVFAIL | 752 |
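For readers hitting the same resolver error from Node.js, one workaround sometimes suggested, besides changing the system DNS, is to point Node’s resolver at public DNS servers before creating the client; the driver resolves the SRV/TXT records of a mongodb+srv URI through Node’s dns module, so this can bypass a misbehaving local resolver. This is only a sketch and not guaranteed to help — if the machine cannot reach any DNS server at all (as the timed-out dig above suggests), a firewall or ISP problem is more likely. The URI below is a placeholder.

```js
// Node.js sketch: force SRV/TXT resolution through public resolvers
const dns = require("node:dns");
dns.setServers(["8.8.8.8", "1.1.1.1"]);

const { MongoClient } = require("mongodb");

// Placeholder URI - use your own credentials and cluster host
const uri = "mongodb+srv://user:pass@cluster0.example.mongodb.net/?retryWrites=true&w=majority";

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  console.log("connected");
  await client.close();
}

main().catch(console.error);
```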
null | [
"flutter"
] | [
{
"code": "",
"text": "Hi,I am new to using realm and I would like to know how I could integrate keycloak with openid in mongo atlas, to add permissions on collections in mongo atlas using the roles that my users have in keycloak.",
"username": "Fabian_Eduardo_Diaz_Lizcano"
},
{
"code": "",
"text": "For clarity, a realm in Keycloak is unrelated to MongoDB Realm - they are two different things.Just mentioning it to ensure the question is about MongoDB Realm - the database and not a keycloak realm which is used to manage a set of users, credentials, roles, and groups.",
"username": "Jay"
},
{
"code": "",
"text": "Hi @Jay,Excuse me, I want uses custom jwt authentication with keycloak. At the moment I am working with realm and sync with mongo atlas.",
"username": "Fabian_Eduardo_Diaz_Lizcano"
},
{
"code": "",
"text": "Hi @Fabian_Eduardo_Diaz_Lizcano!\nI hope your project is going well.\nI suppose you can extract somehow the roles from Keycloak JWT in Flutter. Then you can import the roles as user.customData for your App Service users. These data are integrated in the token issued by Atlas once the user is authenticated. We have an example about using customData roles for setting users permissions rules.\nHere is the code that creates the roles. And here is an extension on User that returns the role from the customData of the current user.\nYou can find the App service configuration files in users_permissions/assets/atlas_app folder. You can check in the Readme.md how to configure the App Service.",
"username": "Desislava_St_Stefanova"
}
] | Connect keycloak with custom JWT authentication | 2023-08-07T03:34:06.600Z | Connect keycloak with custom JWT authentication | 908 |
null | [] | [
{
"code": "size 15K, *it's just a size to test*\nrotate 7\ncompress\ndelaycompress\ncreate 0640 mongod mongod\ncopytruncate\n",
"text": "Hello there,\nI’m setting up the log rotation of systemLog files. In my .conf file the section systemLog is the following:systemLog:\ndestination: file\nlogAppend : true\nlogRotate: renameThe configuration file of the logrotate is the following:I’m kindly ask what are the correct and/or useful parameters to use to have enough log file size and frequency of the rotation. Thank you in advance.",
"username": "Enrico_Bevilacqua1"
},
{
"code": "",
"text": "I don’t recall there’s a “log rotate file size threshold” from official doc.it’s no such exists, you may have to monitor the log file size externally (e.g. with a tool) and once the number is reached, send a siguser to mongod.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hello there,looking around the cyberspace I set up the following .conf file in a test environment to get some experience then to apply on production environment. I still I don’t understand why it’s needed to send a siguser to mongod as a postrotate even if, it’s seems to me, the rotation is working./var/log/mongodb/mongod.log {\nweekly\nrotate 4\ndateext\ndate %Y-%m-%d-%s\nmissingok\ncreate 0640 mongod mongod\ncopytruncate\nendscript\n}",
"username": "Enrico_Bevilacqua1"
}
] | How to setup log rotation for MongoDB properly | 2023-08-07T09:48:43.374Z | How to setup log rotation for MongoDB properly | 379 |
null | [
"node-js",
"react-js"
] | [
{
"code": "",
"text": "Hi\nCan i connect to monogodb directly from react without creating nodejs server\nFor example\nCan i use npm install mongodb\nIn react frontend folder then import mongodb client in react component and perform CRUD operations directly",
"username": "Ali_Aboubkr"
},
{
"code": "",
"text": "Hi @Ali_Aboubkr,Welcome to the MongoDB Community!Can i use npm install mongodb\nIn react frontend folder then import mongodb client in react component and perform CRUD operations directlyFrom what I understand, you cannot connect to MongoDB directly from “React” without creating a backend server. React is a front-end framework that is used to build user interfaces. It does not have the ability to connect to databases or perform CRUD operations on its own.To connect to MongoDB from React, you will need to create a server (say Node.js server) that will act as an intermediary between React and MongoDB.However, you can consider Next.js as a React framework that includes a built-in server that can be used to connect to MongoDB. This means that you don’t need to create a separate Node.js server for your application. You can refer to this tutorial - How to Integrate MongoDB Into Your Next.js App | MongoDB to learn more.Hope the above helps!Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Use mongodb directly from React | 2023-08-07T18:32:22.438Z | Use mongodb directly from React | 1,317 |
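To make the “intermediary server” idea concrete, here is a minimal hedged sketch of a Node.js/Express API layer that a React app could call instead of talking to MongoDB directly. The database and collection names, port and environment variable are placeholders, not something prescribed by the thread.

```js
// server.js - minimal API layer between React and MongoDB
const express = require("express");
const { MongoClient } = require("mongodb");

const app = express();
app.use(express.json());

// Keep the connection string in an environment variable, never in React code
const client = new MongoClient(process.env.MONGODB_URI);

app.get("/api/items", async (req, res) => {
  const items = await client.db("mydb").collection("items").find().limit(50).toArray();
  res.json(items);
});

app.post("/api/items", async (req, res) => {
  const result = await client.db("mydb").collection("items").insertOne(req.body);
  res.status(201).json({ insertedId: result.insertedId });
});

client.connect().then(() => {
  app.listen(3001, () => console.log("API listening on :3001"));
});
```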
null | [
"atlas-cluster"
] | [
{
"code": "AdvancedClusters.ListClusters.List",
"text": "Hello, there!I am wondering about the difference between the two types of clusters in Atlas. I noticed that both the Go Client (v0.31.0) and Terraform make a distinction between the two, with the Terraform provider’s resource page even having a suggestion for new users to use advanced cluster instead of cluster. The only thing that is mentioned in the page is that advanced clusters support multi-cloud clusters.But I haven’t been able to find any reference that could explain exactly how both of them interact with each other. I noticed that Atlas’ Go Client returns the same clusters from the AdvancedClusters.List and the Clusters.List methods, but in different structs.I tried creating a simple cluster using a single cloud provider, but I still got an advanced cluster from that creation.When I create a “regular” cluster, will it be an advanced cluster by default? How can I make a new cluster NOT an advanced cluster and vice-versa?\nWhat information can I query from Atlas’ API to validate if my cluster is advanced or not?Thanks in advance",
"username": "Gabriel_Almeida"
},
{
"code": "",
"text": "Hi @Gabriel_Almeida - Welcome to the community.I’ll try to get some clarification from the team regarding this question but just to confirm beforehand, are there any particular issues you’re running into regarding this? Or is the question more so purely for understanding the differences?I tried creating a simple cluster using a single cloud provider, but I still got an advanced cluster from that creation.Lastly, I assume you used the following http client for this but please correct me if I am wrong. If so, could you show the output that you used to identify that the “simple cluster” was an advanced cluster? This is just to help provide some context for myself to better understand the topic.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "Clusters.List",
"text": "Hello, Jason.Thank you so much for getting back to me. Yes, that was the client implementation I was using, version v0.31.0. What I wanted to understand is what’s the overlap between advanced clusters and regular clusters. I think it’s easier to visualize it as a Venn diagram: are all advanced clusters also regular clusters, but not all regular clusters advanced clusters?I’m asking that because I wanted to differentiate between the two whenever possible. I created one of each using terraform and, when querying for the regular clusters using the Clusters.List function from the client, I got a cluster that was created as an advanced cluster using terraform and vice-versa (albeit both would also show when I tried listing for the specific type they were created as).Another thing that could be useful is: what parameters to I need to provide to create a regular cluster without it being an advanced cluster?Thanks in advance!Gabriel",
"username": "Gabriel_Almeida"
},
{
"code": "\"mongodbatlas_advanced_cluster\"",
"text": "Hi @Gabriel_Almeida,Thanks for your patience and providing the information requested Another thing that could be useful is: what parameters to I need to provide to create a regular cluster without it being an advanced cluster?I believe the main part here is two differentiate between the “advanced” and (let’s just say for comparison purposes) “non-advanced” clusters being mentioned. To start, these are all simply just clusters. The API’s used for the creation of such clusters are versioned. The “non-advanced” clusters being mentioned was using the previous version of the API (during a time prior to when Atlas only had single cloud provider clusters). In order to make the API work for multi-cloud clusters, new endpoints were required without breaking the existing v1 endpoints which user’s were still using. The new endpoints were created that supported multi-cloud clusters which in turn ends up being associated with the terraform resource you mentioned \"mongodbatlas_advanced_cluster\".In short, the “advanced” and “non-advanced” clusters are simply just clusters but you could not do multi-cloud clusters with the “non-advanced” clusters endpoint(s). Additionally, any clusters going forward from now should use the “advanced” cluster resource as this will also be updated to have new features. You may find more useful information here on the Versioned Atlas Administration API Lifecycle documentation.I hope this helps with your concerns.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hello, Jason.I appreciate you taking the time to write such a thorough explanation. I’ll mark the topic as solved.Thank you so much once again!",
"username": "Gabriel_Almeida"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Understanding the difference between cluster and advanced cluster | 2023-08-04T08:06:32.213Z | Understanding the difference between cluster and advanced cluster | 598 |
null | [
"aggregation",
"time-series"
] | [
{
"code": "",
"text": "Hello everyone! I’m about to migrate the documents of the collections i have to timeseries collection i want to create. Within these documents (hundreds of millions) there is a certain number of duplicated documents that i want to remove and i’m trying to find out which operation i should take care of first between restoring data into the new timeseries collections and cleaning data. I’ve done some testing so far and what i learned is that the restore operation is really slow and it would be better doing it with less data but the aggregation pipeline used to remove duplicates is a lot faster when working on a timeseries collection with respect to a normal collection. Any suggestions?",
"username": "Umberto_Casaburi"
},
{
"code": "$group$out",
"text": "Hi @Umberto_Casaburi,Welcome to the MongoDB Community!Within these documents (hundreds of millions) there is a certain number of duplicated documents that i want to remove and i’m trying to find out which operation.\nAny suggestions?There are some approaches for removing duplicate documents from the MongoDB collections:In the current version of MongoDB, you can’t do $out operator to the time-series collection. However, this capability is planned for a future version.Here are some pointers for optimizing the de-duplication process:I’ve done some testing so far and what i learned is that the restore operation is really slow and it would be better doing it with less data but the aggregation pipeline used to remove duplicates is a lot faster when working on a timeseries collection with respect to a normal collection. Any suggestions?May I ask which specific version of MongoDB you have used for testing?Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Data migration to timeseries | 2023-07-28T12:43:49.962Z | Data migration to timeseries | 529 |
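To make the de-duplication step concrete, a common pattern is to group on the fields that define a duplicate, keep one _id per group, and delete the rest before restoring into the time series collection. The sketch below uses placeholder collection and field names (`source`, `timestamp`, `meta`) — adjust them to whatever defines a duplicate in your documents.

```js
// Sketch: find duplicate groups and delete all but the first document of each group.
// Run against the source (non-time-series) collection before migrating.
const cursor = db.source.aggregate([
  {
    $group: {
      _id: { timestamp: "$timestamp", meta: "$meta" },   // fields that define "duplicate"
      ids: { $push: "$_id" },
      count: { $sum: 1 }
    }
  },
  { $match: { count: { $gt: 1 } } }
], { allowDiskUse: true });

cursor.forEach(group => {
  const [, ...extras] = group.ids;                        // keep the first _id, drop the rest
  db.source.deleteMany({ _id: { $in: extras } });
});
```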
null | [
"java",
"atlas-cluster"
] | [
{
"code": "com.example.demo3\n Demo3Application.java\n CRUDEMongo.java\n |_test\n CrudeController.java\npackage com.example.demo3;\n\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\nimport org.springframework.context.annotation.ComponentScan;\n\nimport com.mongodb.ConnectionString;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.ServerApi;\nimport com.mongodb.ServerApiVersion;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\n\n@SpringBootApplication\n@ComponentScan(basePackages = {\"com.example.demo3\", \"com.example.demo3.test\"})\npublic class Demo3Application{\n\t@Autowired\n\tTest test;\n\tpublic static void main(String[] args) {\n\t\tSpringApplication.run(Demo3Application.class, args);\n String connectionString = \"mongodb+srv://torben:<password>@cluster0.orswsxw.mongodb.net/?retryWrites=true&w=majority\";\n\t\tServerApi serverApi = ServerApi.builder().version(ServerApiVersion.V1).build();\n MongoClientSettings settings = MongoClientSettings.builder().applyConnectionString(new ConnectionString(connectionString)).serverApi(serverApi).build();\n\t\tSystem.out.println(\"Hello\");\n\t\ttry(MongoClient client = MongoClients.create(settings)){\n\t\t}\n\t}\n\n}\n\npackage com.example.demo3;\n\nimport org.springframework.data.mongodb.repository.config.EnableMongoRepositories;\nimport org.springframework.data.repository.CrudRepository;\n@EnableMongoRepositories\npublic interface CRUDEMongo extends CrudRepository<Test, Test>{\n\n}\npackage com.example.demo3.test;\n\nimport org.springframework.stereotype.Controller;\nimport org.springframework.web.bind.annotation.RequestMapping;\nimport org.springframework.web.bind.annotation.RequestMethod;\nimport org.springframework.web.bind.annotation.ResponseBody;\n\n@RestController\n@RequestMapping(\"/\")\npublic class CrudeController {\n @RequestMapping(value=\"/te\", method= RequestMethod.GET)\n public String requestMethodName() {\n return \"Hello\";\n }\n}\n\nMongoClient with metadata {\"driver\": {\"name\": \"mongo-java-driver|sync\", \"version\": \"4.9.1\"}, \"os\": {\"type\": \"Linux\", \"name\": \"Linux\", \"architecture\": \"amd64\", \"version\": \"5.15.0-78-generic\"}, \"platform\": \"Java/Eclipse Adoptium/17.0.7+7\"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=majority, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=MongoCredential{mechanism=null, userName='torben', source='admin', password=<hidden>, mechanismProperties=<hidden>}, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@5cff6b74, com.mongodb.Jep395RecordCodecProvider@627ff1b8]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, clusterSettings={hosts=[127.0.0.1:27017], srvHost=cluster0.orswsxw.mongodb.net, srvServiceName=mongodb, mode=MULTIPLE, requiredClusterType=REPLICA_SET, requiredReplicaSetName='atlas-12upi3-shard-0', serverSelector='null', 
clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='30000 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=true, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=UNSPECIFIED, serverApi=ServerApi{version=V1, deprecationErrors=null, strict=null}, autoEncryptionSettings=null, contextProvider=null}\n",
"text": "Ive tried to create a REST Service with MongoAtlas but get a 404 not Found Exception.I have the following folder structureEverytime I try to connect with URL/te on Postman I get a 404 not found Exception. Spring doesn’t seem to found my RestController. I tried everthing. Change folders, ComponentScan but it doesn’t seem to find it.Tried to set up a REST Service with Spring but get a 404 not found Exception via Postman.Im definitly connect with the database. I got the following back:",
"username": "Torben_Jox"
},
{
"code": "@RestControllerCrudeControllerimport org.springframework.web.bind.annotation.RestController;\n@RequestMappingCrudeController@RestController\n@RequestMapping(\"/api\") \npublic class CrudeController {\n\n @RequestMapping(\"/te\")\n public String requestMethodName() {\n ...\n }\n\n}\n/api/te/te",
"text": "Hey @Torben_Jox,Welcome to the MongoDB Community!Everytime I try to connect with URL/te on Postman I get a 404 not found ExceptionThen call /api/te instead of just /te.However, looking at the code, it appears that it currently outputs “Hello world”. May I ask how the MongoDB URI is being used within your code?Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | REST Service doesn't work | 2023-07-29T13:19:25.753Z | REST Service doesn’t work | 457 |
null | [
"atlas-search"
] | [
{
"code": "'index': search_index,\n'knnBeta':\n {\n\t'vector': vector,\n\t'path': embedding_path, \n\t'k': k, \n\t'filter': \n\t{\n\t\t'compound':{\n\t\t\t'must':[{\n\t\t\t\t'text':{\n\t\t\t\t\t'path': item_id_path, \n\t\t\t\t\t'query': item_id\n\t\t\t\t}\n\t\t\t}]\n\t\t}\n\t}\n}\n",
"text": "Hi, im currently trying to run a $search pipeline on a collecion with vector embedding. this collection has multiple objects with an item id and im trying to make sure the search in running only on a specific item id\nim using the following pipeline:This pipeline returns an empty response, without the filter I’m getting k item that match the vector, but from a few different item ids, I’m not sure why the filter causes the pipeline to return nothing, the id is present in the collection",
"username": "Guy_Machat"
},
{
"code": "$searchfilterfilter",
"text": "Hi @Guy_Machat,To better assist you here, can you provide the following information:I have some ideas why it may be returning nothing but its difficult to say without the above information.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "item_id: 9f41e31c-882f-42ef-add4-18688e810e01\nembeddings: <some array>\ntext: foo\nitem_id: 6f539716-00f0-42ea-b4af-fdf1db09183e\nembeddings: <some array>\ntext: foo\nitem_id: 9f41e31c-882f-42ef-add4-18688e810e01\nembeddings: <some array 2>\ntext: foooo\n{\n \"mappings\": {\n \"fields\": {\n \"embeddings\": [\n {\n \"dimensions\": 384,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n }\n ]\n }\n }\n}\n",
"text": "Hi @Jason_Tran , thanks for reaching outlets assume the following the documents:doc #1:doc #2:doc #3:some notes: item_id is a string representation of a uuid v4, all embeddings are the same length, for this example they are 384, and they are all indexed in the same search index.for k=2 where the input im getting is embedding for “foo” I would get docs #1 and #2 as expected,\nhowever, I would like to get docs #1 and #3 as they are both with the same item_id, but adding the filter returns nothingthe mapping definition looks something like this:",
"username": "Guy_Machat"
},
{
"code": "$searchdimensions{\n \"mappings\": {\n \"fields\": {\n \"embeddings\": [\n {\n \"dimensions\": 4,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n }\n ],\n \"item_id\": {\n \"type\": \"string\"\n }\n }\n }\n}\ndb.vectors.find({},{_id:0})\n[\n {\n item_id: '9f41e31c-882f-42ef-add4-18688e810e01',\n embeddings: [ -0.01, -0.02, -0.03, -0.04 ],\n text: 'foo'\n },\n {\n item_id: '6f539716-00f0-42ea-b4af-fdf1db09183e',\n embeddings: [ -0.01, -0.02, -0.03, -0.04 ],\n text: 'foo'\n },\n {\n item_id: '9f41e31c-882f-42ef-add4-18688e810e01',\n embeddings: [ -0.011, -0.021, -0.031, -0.041 ],\n text: 'fooo'\n }\n]\nknnBeta$searchfilter\"item_id\"db.vectors.aggregate({\n '$search': {\n 'index': 'default',\n 'knnBeta': {\n 'vector': [-0.01,-0.02,-0.03,-0.04],\n 'path': 'embeddings',\n 'k': 2,\n 'filter': {\n 'text': {\n 'path': 'item_id',\n 'query': '9f41e31c-882f-42ef-add4-18688e810e01'\n }\n }\n }\n }\n})\n[\n {\n _id: ObjectId(\"64d18ff706683323f56ba731\"),\n item_id: '9f41e31c-882f-42ef-add4-18688e810e01',\n embeddings: [ -0.01, -0.02, -0.03, -0.04 ],\n text: 'foo'\n },\n {\n _id: ObjectId(\"64d18ff706683323f56ba733\"),\n item_id: '9f41e31c-882f-42ef-add4-18688e810e01',\n embeddings: [ -0.011, -0.021, -0.031, -0.041 ],\n text: 'fooo'\n }\n]\nfilter$search",
"text": "Thanks for providing those details @Guy_Machat,As a note for future posts, it would be easier for users (including myself) to have copy and paste-able documents (and any code snippets) in valid format with the values you’re experiencing the behaviour described to help with the troubleshooting. In saying so, I have tested with those documents but had to guess the array values although I believe you may be receiving nothing in return possibly due to the index definition.Can you try with the following? You may need to wait a few minutes after saving the changes to run the $search query (you might need to altert the dimensions value as i’ve changed this to match the documents in my test environment):For reference, in my test environment with the below sample documents:I was able to run the following knnBeta $search with a filter on \"item_id\" to return documents 1 and 3 as you have mentioned:If you’re still running into issues with the filter can you share the documents (redacting any sensitive information) as well as the $search stage and index definition? I assume the index definition will probably differ each time with testing which is why I am requesting for it again if further help is required.Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "{\n _id: ObjectId(\"64d1b4a8ee4ddb16520b78a0\"),\n embeddings: [ -0.011, -0.021, -0.031, -0.041 ],\n text: 'fooo',\n item_id: 'abcde-18688e810e01-defgh' /// <--- matches the `text` operator value within the `filter` portion in the previous post reply\n}\nitem_id\"18688e810e01\"filterphrasetextfilteritem_id",
"text": "Just wanted to also add, due to the way the search query is analyzed, the following document (a new document from the sample 3), will probably be returned with the same query above:Take note of the item_id value. The middle portion \"18688e810e01\" will match the filter used in my previous example. You may wish to consider maybe using phrase instead of text in the filter to help with these scenarios.Please also note that i’ve only done this on the 3 sample documents + the document noted here so I am not aware of any other cases where different values of item_id may possibly be returned.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Filter in knnBeta $search | 2023-08-06T12:54:14.662Z | Filter in knnBeta $search | 617 |
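A minimal sketch of the phrase-based filter Jason mentions above, reusing the hypothetical "vectors" collection and "default" index from the thread; depending on the analyzer, indexing item_id with the keyword analyzer may still be needed for strict UUID matching:

db.vectors.aggregate([
  {
    $search: {
      index: "default",
      knnBeta: {
        vector: [-0.01, -0.02, -0.03, -0.04],
        path: "embeddings",
        k: 2,
        filter: {
          phrase: {
            path: "item_id",
            // require the UUID's tokens adjacent and in order, not just one token of it
            query: "9f41e31c-882f-42ef-add4-18688e810e01"
          }
        }
      }
    }
  }
])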
null | [
"data-modeling",
"android",
"kotlin"
] | [
{
"code": "class Articles : RealmObject { // Main Class\n @PrimaryKey\n var _id: ObjectId = ObjectId.invoke()\n var title: String = \"\"\n var price: Int = 0\n}\n",
"text": "I’m creating an Ecommerce App, and i want it’s db to be collections of separate articles ,I want my db to be collections (Files) of : Clothes , Parfums , Accessories… Each collection has the same fields as Articles aboveI tried creating Multiple Dublicated Classes : class Clothes , class Parfums …etc But that doesn’t seems practicle",
"username": "soufiane_east"
},
{
"code": "class Article: RealmObject {\n @PrimaryKey\n var _id: ObjectId = ObjectId.invoke()\n var title: String = \"\"\n var price: Int = 0\n var type: String = \"\" //the object type; Clothing, Parfume, Accessory etc\n}\ntypeclass ClothingClass: Article {\n var clothing_type: String //pant, shirt etc\n}\n",
"text": "Collections are not files - they are groups of related objects. That being said, technically they could be separate (Realm) files but that’s probably not what you’re after, or needed.One option to consider simply adding a “type” property to your Article class - noting that “Articles” is plural and Realm objects are singular objects so better to name them accordingly.That allows you to look at all of the Article Objects as a group, or by filtering on the type property, view and work with them as a sub group (collection)Realm objects can also be subclassed if the subclasses need different functionality than the parent class. For example a Parfume object would not need a clothing type (pant, shirt etc) so here would be a Clothing Type subclass that has all of the attributes and properties of the parent class but also specific properties that only apply to ClothingKeeping in mind that none of this requires separate “files”",
"username": "Jay"
},
{
"code": "",
"text": "Thanks for ur reply , but in my case I strictly need a separate realm files for a unique fetching Articles algorithms , Is there a way ?! Using enum classes or something …",
"username": "soufiane_east"
},
{
"code": "",
"text": "Is there a waySure! Realm easily supports multiple local Realms - keeping in mind that relationships and queries cannot be cross-realm; they are limited to a single Realm file. However, if this is going to be a sync’d Realm, that changes the answer a bit, and since the thread is marked Android and ecommerce, is it correct this will be sync’d?Before providing any further advise or discussion, it would be important to include details about your use case because as mentioned, separating Realm into discreet files has a number of downsides and may not provide any advantages.Can you provide details about this statement? What kind of algorithm would require a unique Realm file?I strictly need a separate realm files for a unique fetching Articles algorithms",
"username": "Jay"
},
{
"code": "",
"text": "I decided to adopt Mongodb approach and reconstruct my db , thank u so much Jayson for u help ^^ , hope u all the best",
"username": "soufiane_east"
},
{
"code": "",
"text": "",
"username": "henna.s"
},
{
"code": "",
"text": "",
"username": "henna.s"
}
] | Multiple Realm Collections in kotlin | 2023-08-05T14:13:51.601Z | Multiple Realm Collections in kotlin | 568 |
null | [
"mongodb-shell",
"php"
] | [
{
"code": "",
"text": "Can Mongosh be installed on a MacOS High Sierra 10.13.6 ?I have been following W3Schools MongoDB Getting Started for the installation, where I have setup the MongoDB Atlas cloud database platform with a 512MB shared cluster and added my current IP address along with having the default cloud IP address .W3Schools is using mongosh version 1.3.1 for the tutorial.I have created a mongodb-learning repository.On my zsh command line, in my new W3Schools-MongoDB project folder, I have tried to use brew install mongosh , as well as brew install [email protected] , but each time when I check if mongosh is installed with mongosh --version, it says zsh: command not found: mongosh.I just want to get an understanding on how the Shell works with MongoDB Atlas cloud ?I have Visual Studio Code version 1.78.2 with OS: Darwin x64 17.7.0.I am a bit confused with the differences of how local and the cloud side works.\nFor the cloud setup that W3Schools recommended, does the database creation happen in an editor like my Visual Studio Code and the result shown on the server, whereas with local can you view result on your own localhost setup ?As well as mongosh, I also tried to install MongoDB for VSCode v 0.11.1 Extension in my project folder, connecting with a connection string, but that was taking a long time to install for some reason, so I abandoned that.Looking for help, so that I can get the best setup for my laptop and learn MongoDB from W3Schools tutorial.If anyone has any ideas on how I can make this work, then your ideas will be much appreciated thanks.",
"username": "Robert_Wilkinson"
},
{
"code": "mongosh/usr/local/Cellar/mongosh/1.8.2/bin/mongosh/usr/local/bin/mongosh/usr/local/bin$PATH",
"text": "it says zsh: command not found: mongosh.Brew puts mongosh in /usr/local/Cellar/mongosh/1.8.2/bin/mongosh and softlinks to /usr/local/bin/mongosh … if you don’t have /usr/local/bin in your zsh $PATH it won’t be found.",
"username": "Jack_Woehr"
},
{
"code": "brew install mongosh",
"text": "I have put this path in my .zshrc file.export PATH=“/usr/local/bin/mongosh/:$PATH”\n\nMongosh path in ZSHRC File1439×855 139 KB\nBut when I go back to my zsh command line, I am still getting zsh: command not found: mongoshWhen you use\nbrew install mongosh\nI assume that this installs the latest version of Mongosh (MongoDB Shell) .Is there a way to install a particular version of mongosh ?W3Schools uses version 1.3.1 for mongosh.\nI just wonder if I am better off installing that version with my MacOS High Sierra, as it may work better than the newer version on my laptop.",
"username": "Robert_Wilkinson"
},
{
"code": "export PATH=“/usr/local/bin:$PATH”",
"text": "I have put this path in my .zshrc file.export PATH=“/usr/local/bin/mongosh/:$PATH”This is incorrect. The PATH export points to a colon-separated list of directories, not to individual programs.\nexport PATH=“/usr/local/bin:$PATH” is what you want.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I have tried changing to\nexport PATH=“/usr/local/bin:$PATH”\nat the bottom of my .zshrc file , but when I type mongosh --version on my command line I am still getting zsh: command not found: mongosh.I do have an export path for MAMP used with my content Management System up the top of my .zshrc file, but that shouldn’t interfer with the path for mongosh would it ?\n\nMAMP Path - Not Mongosh1435×830 125 KB\nI have a MacOS High Sierra 10.13.6, can that be compatible with installing mongosh through brew ?",
"username": "Robert_Wilkinson"
},
{
"code": "mongoshmongoshmongosh/usr/local/bin/mongosh/usr/local/bin/mongosh --version",
"text": "If mongosh installed at all, it should be compatible.\nYour previous path statement is overridden or subsumed by any subsequent path statement.\nIf you have found the mongosh executable somewhere on your disk, try invoking mongosh via the fully qualified path, e.g., if you know it’s there as /usr/local/bin/mongosh, try /usr/local/bin/mongosh --version and see if it works.\nAlso, check if the executable bit is correctly set on the file if you find it and it still doesn’t work.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi JackThanks for your help so far.I have checked on my zsh command line for location of mongosh installation in 2 places.In my /usr/local/Cellar folder I could not find mongosh\nand also, I could not find mongosh in my /usr/local/bin folder as well.\n\nMongosh not in local bin Folder591×679 40.5 KB\nThis makes me think that maybe the installation did not work.I assume that brew install mongosh would give you the same version 1.8.2 as the download link MongoDB Shell Download | MongoDB. I had noticed in that link that the lowest supported version is MacOS 64 bit (10.14) for version 1.8.2Is there a way that I can install a lower version of mongosh, so that it can work with my MacOS 10.13.6 ?\nW3Schools uses mongosh version 1.3.1 and maybe if I use a slightly lower version then I may get the shell working OK.",
"username": "Robert_Wilkinson"
},
{
"code": "brew install package@<version>brew install [email protected]",
"text": "brew install package@<version> e.g., brew install [email protected]",
"username": "Jack_Woehr"
},
{
"code": "brew install mongoshbrew list mongosh/opt/homebrew/Cellar/mongosh/1.9.0/bin/mongosh\n/opt/homebrew/Cellar/mongosh/1.9.0/libexec/bin/mongosh\n/opt/homebrew/Cellar/mongosh/1.9.0/libexec/lib/ (9360 files)\n% ls -ld /opt/homebrew/bin/mongosh\nlrwxr-xr-x 1 me admin 35 May 18 10:04 /opt/homebrew/bin/mongosh -> ../Cellar/mongosh/1.9.0/bin/mongosh\n",
"text": "Please provide output of the commands:\nbrew install mongosh and brew list mongosh\nIn my case, it is:",
"username": "rafi"
},
{
"code": "/opt/homebrew/Cellar/mongosh/1.9.0/bin/mongosh\n",
"text": "Well, there you have it.Does the commandsucceed?",
"username": "Jack_Woehr"
},
{
"code": "Mongosh version 1.3.1brew installbrew listMongosh version 1.3.1MacOS High Sierra 10.13.6Brew list",
"text": "Hi rafiThanks for your replyAs I mentioned above in my question, I am following through W3Schools MongoDB Getting Started and they are using Mongosh version 1.3.1.So first off I have run the brew install and brew list commands for Mongosh version 1.3.1 on my MacOS High Sierra 10.13.6.\nThis is the output I have on the screenshot below.\n\nMongosh List & Older Version824×856 112 KB\nBrew list gives\nError: No such keg: /usr/local/Cellar/mongoshand there seems to be an issue with 10 outdated formulae installed even though it looks like it has updated homebrew from 4.0.17 to 4.0.18.\nIt also looks like it is having problems finding Mongosh 1.3.1 version.In my next reply, I have installed the latest version that shows how my laptop reacts to a normal install of Mongosh.",
"username": "Robert_Wilkinson"
},
{
"code": "brew install mongosh brew list mongoshGNU Complier Collectbrew install gccsqlite and libnghttp2Cellar directory brew install gccbrew install mongosh",
"text": "RafiThis is how my MacOs High Sierra 10.13.6 reacts to a normal brew install mongosh and brew list mongosh.\n\nInstall Current Mongosh (pg 1)1124×857 115 KB\nIt has a warning that I am using macOS 10.13 and also says to install GNU Complier Collect with brew install gcc.\nYou can also notice that sqlite and libnghttp2 have installed in Cellar directory.When I use brew install gcc this is what happens below.\n\nInstall Current Mongosh (pg 2)1194×859 149 KB\n\n\nInstall Current Mongosh (pg 3)1283×594 77.1 KB\nAgain, it has a warning that I am using macOS 10.13 and has an error about it at the end as well.\nSo it seems like it hasn’t installed completely or it hasn’t bundled the whole mongosh together in a mongosh folder like I believe it should do, when using brew install mongosh.I think moving forward , I just have to work out how I can get the Mongosh shell working with version 1.3.1 that W3Schools has, as that looks to be the version that may work OK on my MacOS High Sierra 10.13.6.",
"username": "Robert_Wilkinson"
},
{
"code": "",
"text": "Maybe good alternative in this case as you dont want to upgrade OS is to run mongosh as a docker container assuming you can start docker on your Macos.",
"username": "rafi"
},
{
"code": "brew install [email protected] have 10 outdated formulae installedwarning: No available formula with the name \"[email protected]\". Do you mean Mongosh ?MongoDB for VSCode ExtensionMongoDB Atlas cloud database platform with a 512MB shared cluster",
"text": "Hi rafiFirst of all I want to look and see if it is possible to install W3Schools version 1.3.1 of Mongosh on my MacOS High Sierra 10.13.6 .\nOn my top screenshot, when I used brew install [email protected],\nit mentioned You have 10 outdated formulae installed\nand had a warning: No available formula with the name \"[email protected]\". Do you mean Mongosh ?\nso, I just wonder if there is a way to get around this, to make it work ?The Mac operating system I have, has very important web learning stuff on it, so I feel that I need to do a backup and tread very carefully before I do an upgrade. With Technology updating, I most likely will have to upgrade at some stage, but I will have to be very careful about it.I have also seen that you can get a MongoDB for VSCode Extension.\nCan that be used when I have a MongoDB Atlas cloud database platform with a 512MB shared cluster ?\nDoes the extension have to work with the shell ?I will look into running mongosh as a docker container if all of what I have mentioned above does not work.",
"username": "Robert_Wilkinson"
},
{
"code": "mongosh1.3.1",
"text": "mongosh depends on Node.js and even if version 1.3.1 was using Node 14 instead of Node 16 I don’t believe that will run on OSX 10.13.Docker is definitely an option you have, assuming you can get a Docker version that runs on OSX 10.13.MongoDB for VS Code is also an alternative you have and it will work with any MongoDB, including the free tier cluster in Atlas. The extension does not need the shell to be installed to work, except for the functionality that lets you launch a shell. If you are running VS Code 1.78 as you mentioned above then you should be able to install the most recent version of the extension.",
"username": "Massimiliano_Marcon"
},
{
"code": "brew install [email protected] outdated formulae installedwarning: No available formula with the name \"[email protected]\". Do you mean Mongosh ?",
"text": "Hi MassimilianoI have a Node Version Manager setup on my MacOS 10.13.6.\nI have the alias default set to Node version 16.15.0 and I also have an older 8.11.3 Node version.I just wonder if having the correct Node version running on my Mac for Mongosh 1.3.1, will be the key to getting the shell running.\nIt is just a matter of finding out what Node version I can install that may make it work with Mongosh 1.3.1.From my previous reply, I mentioned that brew install [email protected] gave me 10 outdated formulae installed and a warning: No available formula with the name \"[email protected]\". Do you mean Mongosh ?\n, as can be seen in screenshot below.Would having it working with the correct Node version, fix this issue and make it work on my Mac ?\nMongosh List & Older Version824×856 112 KB\nIt would be good if I can get MongoDB shell working with Mongosh version 1.3.1 ,\nbut if I can’t, I will look into seeing how I can get the Docker or MongoDB for VSCode Extension options working.",
"username": "Robert_Wilkinson"
},
{
"code": "",
"text": "Hi @Massimiliano_MarconI have had a break finishing my Mongosh setup , as I became frustrated with it and so I went on to learn Django.Now I feel a bit more refreshed and just want to get an understanding on why I have not been able to get Mongosh working.Back in June I had installed Node Version 14.19.1 using nvm, as I found from screenshot shown below, that this the Node Version that runs with Mongosh 1.3.1 , I have been trying to get working.\n\nNeed Node 14 for Mongosh1440×900 176 KB\nI have a MacOS High Sierra 10.13.6.Below I have a screenshot showing myself using brew install v1.3.1 for Mongosh after I had installed Node version 14.19.1\n\nMongosh No Go 16th June 231311×481 81.3 KB\nAs you can see from the screenshot I am using brew install v1.3.1.\n: it initially did an auto- update of homebrew to 4.0.22\n: Update 3 taps and showed new formulae and casks\n: It said that I have 15 outdated formulae installed.\n: It shows website for where changelog 4.0.22 can be found\n: It has a warning saying no available formula with name v1.3.1 and it cannot find it.How do I get this to work, with the correct formulas and casks to be run with Mongosh 1.3.1 on my Mac OS 10.13.6, with Node 14.19.1 ?\nOr is this not possible to do with my Mac Operating System ?",
"username": "Robert_Wilkinson"
},
{
"code": "",
"text": "How do I get this to work, with the correct formulas and casks to be run with Mongosh 1.3.1 on my Mac OSThe current version of mongosh is 1.10.3 … I see your ostensible use case, but I think the world has moved on. You may not get much help around here for backlevel stuff. Maybe StackExchange?",
"username": "Jack_Woehr"
}
] | Installation of Mongosh for MongoDB Atlas on the cloud | 2023-05-17T03:59:18.722Z | Installation of Mongosh for MongoDB Atlas on the cloud | 1,461 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "{\n startDate: ISODate,\n endDate: ISODate\n}\n[\n {$match: { <query> } }\n {$unwind: \"$values\"}\n {$sort: {\"startDate\": 1}},\n]\n",
"text": "Hello all, i want to make a question regarding performance between two approaches.\nI have an array of nested objects inside collection.\nThe schema of each object inside the array is the following:Is it better to keep this array of objects sorted thus sorting the array in every write operation or is it better to sort the array with aggregation when retrieving the document with the array, like this:What would be the best in terms of performance?",
"username": "Vasilis_Mora"
},
{
"code": "",
"text": "What happens more often? Writing to the array or reading it? If it’s 99% reading, then keep them in order, if it’s 99% writing often then perhaps just sort as needed…\nA write it going to overwrite the document anyway if you push a new entry so perhaps not so much more of an overhead to push it sorted, you’ll find a number of solutions on the forums for inserting in the correct place into an array.You can always try it out, create a few million records and add a new item to them sorted and un-sorted to compare performance and IO overhead.",
"username": "John_Sewell"
}
] | Sorting inner array of objects in find vs keeping array sorted in write operations | 2023-08-07T13:26:25.555Z | Sorting inner array of objects in find vs keeping array sorted in write operations | 263 |
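For the write-side option discussed above, a minimal sketch of keeping the embedded array sorted on every insert using the $push modifiers, assuming a hypothetical "events" collection and a "values" array field of { startDate, endDate } objects:

db.events.updateOne(
  { _id: someDocumentId },   // placeholder for the target document's _id
  {
    $push: {
      values: {
        // append the new element and re-sort the whole array in the same write
        $each: [ { startDate: ISODate("2023-08-01T00:00:00Z"), endDate: ISODate("2023-08-02T00:00:00Z") } ],
        $sort: { startDate: 1 }
      }
    }
  }
)

With this in place, the read-side aggregation can drop its $sort stage, since $unwind emits the array elements in their stored order.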
null | [
"python"
] | [
{
"code": "result = await coll.insert_many(batch, ordered = False)\n if result.inserted_ids:\n global box\n box +=1\n print(f\"Box {box} \")\n print(f\"batch {ID} Completed\")\n return True\nasync def Gather():\n tasks = []\n for ID in range(1, 65):\n tasks.append(async_scraper(ID))\n await asyncio.gather(*tasks)\n\nloop = client.get_io_loop()\n loop.run_until_complete(Gather())\n\n",
"text": "Hi there, the async insert_many() always froze at last few writes, used motor writing into mongoDB, total 65 tasks, each task 500 ducuments.The first 60 tasks are very fast, but then the 61 took 5 minutes, and the program just froze there, some times need to waiting for 2 hours to complete or not complete at all, is that something normal?The scraper scraping each batch and feed to mongoDB, my internet may has issues, so anyone knows what happened?\nRight now it stopped at 61th task, and just froze no progress, total 65 tasks should be, no error throwing for scraping, everything is fine.",
"username": "JJ_J"
},
{
"code": "",
"text": "Could it be that your scraper tasks are blocking for a long time?If not, could you isolate the potential bug into a script that reproduces the problem?",
"username": "Shane"
}
] | Python async writing performace issues, stopped at last few writes | 2023-08-06T14:12:26.697Z | Python async writing performace issues, stopped at last few writes | 461 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 6.0.9-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 6.0.8. The next stable release 6.0.9 will be a recommended upgrade for all 6.0 users.Fixed in this release:6.0 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Maria_Prinus"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 6.0.9-rc1 is released | 2023-08-07T15:53:36.843Z | MongoDB 6.0.9-rc1 is released | 453 |
null | [
"java",
"kotlin"
] | [
{
"code": "class MongoClientService {\n\n val logger: Logger = LoggerFactory.getLogger(MongoClientService::class.java)\n\n val mongoClient: MongoClient\n\n init {\n val codec = fromRegistries(\n MongoClientSettings.getDefaultCodecRegistry(),\n fromProviders(PojoCodecProvider.builder().automatic(true).build())\n )\n val settings = MongoClientSettings.builder()\n .applyConnectionString(ConnectionString(AppConfig.APPLICATION_MONGODB_URI!!))\n .codecRegistry(codec)\n .build()\n mongoClient = create(settings)\n }\n}\ndata class User(\n @BsonId\n @BsonRepresentation(BsonType.OBJECT_ID)\n val id: ObjectId = ObjectId(),\n var email: String,\n var firstName: String,\n var lastName: String,\n var password: String,\n var archived: Boolean = false,\n val tenants: List<Tenant> = listOf(),\n var language: Language = Language.EN,\n val currentTenant: Tenant? = null,\n val avatar: Avatar? = null,\n val darkMode: Boolean = false,\n val mode: AppMode = AppMode.EMPLOYEE,\n val superAdmin: Boolean = false,\n val entity: EntityEmbedded = EntityEmbedded(),\n)\n",
"text": "Hello,\nI’m trying to migrate from KMongo to the Kotlin driver, but I encountered an error with the codec: ‘Codec for id must implement RepresentationConfigurable to support BsonRepresentation’.This is my MongoClient Service :and my modelDoes anyone have an idea ?",
"username": "Sebastien_Carre"
},
{
"code": "@BsonRepresentation(BsonType.OBJECT_ID)ObjectId_id",
"text": "Hmm interesting,I don’t think you need @BsonRepresentation(BsonType.OBJECT_ID) if you are using ObjectId as the the _id type.The PojoCodec also isnt needed for Kotlin Data classes - just make sure to add Maven Central: org.mongodb:bson-kotlin:4.10.2 to the path and it will automatically be picked up by the default registry (same goes for bson-kotlinx).I hope that helps,Ross",
"username": "Ross_Lawley"
}
] | Problem with kotlin driver | 2023-08-06T10:25:21.566Z | Problem with kotlin driver | 544 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "[\n {\n '$search': {\n 'index': 'default', \n 'text': {\n 'query': 'Dinosaur', \n 'path': 'title'\n }\n }\n }, {\n '$project': {\n 'title': 1, \n 'awards': '$awards.wins', \n 'score': {\n '$meta': 'searchScore'\n }, \n 'new_score': {\n '$multiply': [\n '$score', '$awards'\n ]\n }\n }\n }\n]\n{\n \"_id\": {\n \"$oid\": \"573a139bf29313caabcf33d0\"\n },\n \"title\": \"Dinosaur\",\n \"awards\": 4,\n \"score\": 5.398964881896973,\n \"new_score\": null\n}\n",
"text": "I’m trying to modify the score by multiplying with aggregation but it is returning a “null” value for new_scoreOutput of the aggregation",
"username": "tapiocaPENGUIN"
},
{
"code": "'$multiply': [\n '$score', '$awards'\n ]\n'new_score': {\n '$multiply': [\n { '$meta': 'searchScore' },\n '$awards'\n ]\n }\n",
"text": "Most likely this happens becomes you are using the field score which is computed in the same $project stage. A computed field is only available in the next stage. So rather thanI would try to replace $score with the expression you use to compute score. That is I would",
"username": "steevej"
},
{
"code": "'new_score': {\n '$multiply': [\n { '$meta': 'searchScore' },\n '$awards.wins'\n ]\n }\n",
"text": "I have just notice that awards is also computed. So try",
"username": "steevej"
},
{
"code": "",
"text": "This worked, thank you @steevej",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to Multiply on search Score | 2023-08-04T17:45:12.903Z | How to Multiply on search Score | 483 |
null | [
"aggregation",
"java"
] | [
{
"code": "Mongoclient.getDatabase(db).get collection(order request).aggregate(Arrays.asList(new Document(\"$lookup\",new Document( \"from\", \"order\").append(\"localfield\",\"Id\").append(\"foreignField\",\"Id\").append(\"as\", \"orders\")),\nnew Document(\"$lookup\",new Document( \"from\", \"Inventory\").append(\"localfield\",\"Id\").append(\"foreignField\",\"Id\").append(\"as\", \"inventory\")),\nnew Document(\"$project\", new Document (\"I'd\",0L).append(\"order date\", new Document(\"$arrayElementAt\", Arrays.asList(\"$orders.date\",0L)))\n\n",
"text": "I want to sort the array return by lookup in mongo then use projection on that data in java. Can someone suggest how to do that.Sample code:I want to have max order date return by lookup on orders",
"username": "Sameer_kr"
},
{
"code": "instock[\n {\n $lookup:\n {\n from: \"inventory\",\n localField: \"item\",\n foreignField: \"sku\",\n as: \"inventory_docs\",\n },\n },\n {\n $sort:\n {\n \"inventory_docs.instock\": -1,\n },\n },\n {\n $limit:\n 1,\n },\n]\nArrays.asList(new Document(\"$lookup\", \n new Document(\"from\", \"inventory\")\n .append(\"localField\", \"item\")\n .append(\"foreignField\", \"sku\")\n .append(\"as\", \"inventory_docs\")), \n new Document(\"$sort\", \n new Document(\"inventory_docs.instock\", -1L)), \n new Document(\"$limit\", 1L))\n",
"text": "Hi @Sameer_kr and welcome to MongoDB community forums!!Based on the above requirement and the sample data examples in the $lookup documentation, I tried to create the following aggregation pipeline to sort the data based on “instock” field in descending order and the use $limit to display the max instock document.Aggregation Pipeline:and the resultant Java code:If the above is not what you are looking for, could you help me with the sample document and the desired output from the aggregation pipeline.Also, MongoDB Atlas and Compass provides you the functionality to export the aggregation pipeline stages into the desired driver code. Please visit the documentation for more information.Regards\nAasawari",
"username": "Aasawari"
}
] | Sort on look up in mongo using java | 2023-08-05T11:10:57.203Z | Sort on look up in mongo using java | 446 |
null | [
"aggregation",
"java"
] | [
{
"code": "{\n \"_id\": {\n \"$oid\": \"64a67f32dbe7c36e2e6c15c8\"\n },\n \"name\": {\n \"user\": \"xyz\"\n },\n\t\"updated_at\": {\n\t\t\"$date\": \"2023-07-20T22:44:51.334Z\"\n\t}\t\t\n \"data\": {\n \"age\": \"55\",\n \"address\": [{\n \"$ref\": \"Addresses\",\n \"$id\": {\n \"$oid\": \"64a67f2fdbe7c36e2e6c15c6\"\n }\n }, {\n \"$ref\": \"Addresses\",\n \"$id\": {\n \"$oid\": \"64a67f2fdbe7c36e2e6c15c7\"\n }\n },\n ]\n }\n}\n{\n \"_id\": {\n \"$oid\": \"64a67f2fdbe7c36e2e6c15c6\"\n },\n \"name\": {\n \"type\": \"permenant\"\n },\n \"address\":\"permenant address 1\"\n}\n\n{\n \"_id\": {\n \"$oid\": \"64a67f2fdbe7c36e2e6c15c7\"\n },\n \"name\": {\n \"type\": \"secondary\"\n },\n \"address\":\"permenant address 2\"\n}\nprivate static void getData(MongoCollection<Document> products) {\n\t\tDate currentDate = new Date(System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(5));\n\t\tBson match = match(gte(\"updated_at\", currentDate));\n\t\tBson lookup = lookup(\"address\", \"address.$id\", \"_id\", \"address\");\n\t\tBson unwind = unwind(\"$address\");\n\t\tList<Document> results = products.aggregate(Arrays.asList(match, lookup, unwind)).into(new ArrayList<>());\n\t\tSystem.out.println(\"result given\" + results.size());\n\t\tresults.forEach(printDocuments());\n\t}\n",
"text": "Hi,I have basic operation where I’m doing aggregation with date data in MongoDB with Java.\nI’m not getting proper output in java but when I’m executing it in mongo shell it works fine.Addresses:Java code:",
"username": "Janak_Rana"
},
{
"code": "package com.company;\n\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.client.*;\nimport org.bson.Document;\nimport org.bson.codecs.BsonTypeClassMap;\nimport org.bson.codecs.DocumentCodec;\nimport org.bson.codecs.configuration.CodecRegistry;\nimport org.bson.conversions.Bson;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Date;\nimport java.util.List;\nimport java.util.concurrent.TimeUnit;\n\nimport static com.mongodb.client.model.Aggregates.*;\nimport static com.mongodb.client.model.Filters.gte;\n\n\npublic class QuickTour {\n\n public static void main(final String[] args) {\n\n String mongoURI = \"mongodb+srv://findThief:[email protected]/\";\n MongoClient mongoClient = MongoClients.create(mongoURI);\n CodecRegistry codecRegistry = MongoClientSettings.getDefaultCodecRegistry();\n\n MongoDatabase database = mongoClient.getDatabase(\"test\");\n MongoCollection<Document> collection = database.getCollection(\"users\");\n\n System.out.println(\"successful\");\n final DocumentCodec codec = new DocumentCodec(codecRegistry, new BsonTypeClassMap());\n Date currentDate = new Date(System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(5)); \n Bson match = match(gte(\"updated_at\", currentDate)); \n Bson lookup = lookup(\"Addresses\", \"data.address.$id\", \"_id\", \"address\"); \n Bson unwind = unwind(\"$address\"); \n List<Document> results = collection.aggregate(Arrays.asList(match,lookup,unwind)).into(new ArrayList<>()); //System.out.println(\"result given\" + results.size());\n for (Document document : results) {\n System.out.println(document.toJson(codec));\n }\n\n mongoClient.close();\n }\n}\n{\n \"_id\": {\n \"$oid\": \"64a67f32dbe7c36e2e6c15c8\"\n },\n \"name\": {\n \"user\": \"xyz\"\n },\n \"updated_at\": {\n \"$date\": 1690757091334\n },\n \"data\": {\n \"age\": \"55\",\n \"address\": [\n {\n \"$ref\": \"Addresses\",\n \"$id\": {\n \"$oid\": \"64a67f2fdbe7c36e2e6c15c6\"\n }\n },\n {\n \"$ref\": \"Addresses\",\n \"$id\": {\n \"$oid\": \"64a67f2fdbe7c36e2e6c15c7\"\n }\n }\n ]\n }\n}\naddress.$iddata.address.$idaddress",
"text": "Hi @Janak_Rana and welcome to MongoDB community forums!!Based on the sample documents provided, I tried to replicate the code in my local environment and I was successful to get the expected output.\nI tried the code below:The aggregation pipeline given by you has an issue with the lookup stage.\nAfter the $match operation, the output would look like:The next stage as $lookup, uses the localField as address.$id which should be data.address.$id to perform the lookup with _id of the Addresses collection.\nIn your code, after the match stage, the lookup was resulting into an empty address array which further could not unwind the empty array and was not providing the correct output as expected.Could you please confirm, if the above code works for you?P.S: The code above is provided using the latest 4.10 MongoDB Java driver version.Let us know if you have any further questions.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Sorry for late reply and I have tested your code but didn’t worked for me.\nBelow is the date data in my mongo db.\n“updated_at”: {\n“$date”: “2023-08-07T18:53:43.336Z”\n},",
"username": "Janak_Rana"
}
] | MongoDB with Java driver pipeline match date not working | 2023-07-20T03:36:24.906Z | MongoDB with Java driver pipeline match date not working | 585 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "I have to check whether the records exist by passing unique property values for 1 million records. Instead of checking each document, is there any approach to review multiple records in the single or batch call?",
"username": "Sudhesh_Gnanasekaran"
},
{
"code": "",
"text": "Hi,\nIt would be better if you shared a sample of your collections and describe the desired result\nBest,",
"username": "Imad_Bouteraa"
},
{
"code": "",
"text": "sample document{\nEmpno : “123”,\nName :“ABC”\n}\n{\nEmpno : “222”,\nName :“XYZ”\n}if i pass the Empno in input list as {“123”,“333”}The unmatched input value “333” should return.",
"username": "Sudhesh_Gnanasekaran"
},
{
"code": "{ \"_id\" : 123 }\n{ \"_id\" : 333 }\nlookup =\n{\n\t\"$lookup\" : {\n\t\t\"from\" : \"Sudhesh_Gnanasekaran\",\n\t\t\"localField\" : \"_id\",\n\t\t\"foreignField\" : \"empno\",\n\t\t\"as\" : \"found\"\n\t}\n}\nmatch = { \"$match\" : { \"found.0\" : { \"$exists\" : false } } }\n// running the pipeline\ndb.wanted.aggregate( [ lookup , match ] )\n// would produce\n{ \"_id\" : 333, \"found\" : [ ] }\n// you could then add $project stage to remove the empty array\n// you could even add a $out stage to store the result in another collection\n",
"text": "That is a tricky one and I hope somebody will come up with a simpler solution.Read Formatting code and log snippets in posts before posting code or documents next time. I could not just copy your documents because they were not formatted correctly.My solution involves an extra collection. This extra collection would contains the empno you want to query. In your case it would be:Then using the aggregation pipeline on that extra collection you $lookup into your main collection to find matches as the first stage. Then a $match stage will remove the documents for which an emp was found in the main collection.",
"username": "steevej"
},
{
"code": ".aggregate([{\n $group: {\n _id: null,\n notFound: {\n $accumulator: {\n init: function(){\n return[ \"123\", \"333\"];\n },\n accumulate: function(arr, empno){\n return arr.filter(x=>x!==empno)\n },\n accumulateArgs: [\"$Empno\"],\n merge: function(arr1, arr2){\n let part1=arr1.filter(x=>!arr2.includes(x)); \n let part2=arr2.filter(x=>!arr1.includes(x));\n return part1.concat(part2)\n },\n lang: \"js\"\n }}}}])",
"text": "And this is a map-reduce solution:tested on v4.4.0 (shell & server)Regards",
"username": "Imad_Bouteraa"
},
{
"code": "",
"text": "Any solution that involves $group might require to use the disk if the group does not fit in RAM and might hit the 16MB limit.",
"username": "steevej"
},
{
"code": "db.fruits.insertMany([\n { name: 'apple' },\n { name: 'plum' },\n { name: 'pear' }\n]);\n['pear', 'onion', 'apple', 'potato']db.fruits.aggregate([\n {\n $group: {\n // List of values to check\n _id: ['pear', 'onion', 'apple', 'potato'],\n }\n },\n {\n $unwind: '$_id'\n },\n {\n $lookup: {\n from: 'fruits',\n localField: '_id',\n foreignField: 'name',\n as: 'joined',\n }\n },\n {\n $project: {\n missing: {\n $cond: [\n { $eq: [{ $size: '$joined' }, 0 ] },\n '$_id',\n '$$REMOVE',\n ]\n }\n }\n },\n {\n $group: {\n _id: null,\n missing: {\n $addToSet: '$missing'\n }\n }\n }\n]);\n[ { _id: null, missing: [ 'potato', 'onion' ] } ]",
"text": "In case you have lots of documents in your database, but you need to perform a check for a relatively small list of values, you could use the solution below.Dataset example:List example of values that you may want to check for existence:\n['pear', 'onion', 'apple', 'potato']Aggregation:Output:\n[ { _id: null, missing: [ 'potato', 'onion' ] } ]",
"username": "slava"
}
] | How to check existence of records for a given input list? | 2021-03-23T17:28:59.441Z | How to check existence of records for a given input list? | 7,527 |
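A more compact variant of the existence check discussed above, for anyone on MongoDB 6.0 or newer where the $documents stage is available; it assumes a hypothetical "employees" collection with an "Empno" field and avoids the extra helper collection:

db.aggregate([
  // start the pipeline from the input list itself rather than a collection
  { $documents: [ { ids: [ "123", "333" ] } ] },
  // an array-valued localField matches documents whose Empno equals any element
  { $lookup: { from: "employees", localField: "ids", foreignField: "Empno", as: "found" } },
  // whatever is left after removing the found values is the missing list
  { $project: { missing: { $setDifference: [ "$ids", "$found.Empno" ] } } }
])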
null | [
"aggregation",
"queries",
"java"
] | [
{
"code": "[\n {\n \"_id\": 1,\n \"racks\": [\n {\n \"rackId\": 1,\n \"available\": true\n },\n {\n \"rackId\": 2,\n \"available\": true\n },\n {\n \"rackId\": 3,\n \"available\": true\n },\n {\n \"rackId\": 4,\n \"available\": false\n },\n {\n \"rackId\": 5,\n \"available\": false\n },\n {\n \"rackId\": 6,\n \"available\": false\n },\n \n ]\n },\n {\n \"_id\": 2,\n \"racks\": [\n {\n \"rackId\": 1,\n \"available\": true\n },\n {\n \"rackId\": 2,\n \"available\": true\n },\n {\n \"rackId\": 3,\n \"available\": true\n }\n \n ]\n },\n {\n \"_id\": 3,\n \"racks\": [\n {\n \"rackId\": 1,\n \"available\": true\n },\n {\n \"rackId\": 2,\n \"available\": true\n },\n {\n \"rackId\": 3,\n \"available\": true\n }\n \n ]\n }\n]\ndb.collection.find({\n \"_id\": 1,\n \"racks\": {\n \"$elemMatch\": {\n \"rackId\": {\n $all: [\n 1,\n 2,\n 3\n ]\n },\n \"available\": true\n }\n }\n})\n",
"text": "Hi ,Here is my data modelI want to find if a document matches the conditions provided.\na. Accept a list. Where list of “rackId” are given\nb. For each of rackId mentioned in list, its “available” value should be considered. In this case it should be true.Note i have to use $all as i might have many rackId’s to query so that i do not end up writing as many rack objects in the query.\nExample if i need to match 1 to 6 rackId with “available” true , i can pass [1, 2, 3, 4, 5, 6] using $all.My query so far. Which provides no documents found even though document “_id” 1 has rackId 1, rackId 2 and racked 3 with available field as “true”. Could you please help me in getting correct query considering $all operation ?\na. Mongo query\nb. Corresponding java spring query equivalent ?I did researched similar topic on this thread How to match $all and equal to conditions in $elemMatch operator? but not satisfied with the approach as it is not using $all",
"username": "Manjunath_k_s"
},
{
"code": "db.rackSets.insertMany([\n {\n _id: 'A',\n racks: [\n {\n rackId: 1,\n available: true\n },\n {\n rackId: 2,\n available: true\n },\n ]\n },\n {\n _id: 'B',\n racks: [\n {\n rackId: 1,\n available: true\n },\n {\n rackId: 2,\n available: true\n },\n {\n rackId: 3,\n available: true\n }\n\n ]\n },\n {\n _id: 'C',\n racks: [\n {\n rackId: 1,\n available: true\n },\n {\n rackId: 2,\n available: false\n },\n {\n rackId: 3,\n available: false\n },\n ]\n },\n\n {\n _id: 'D',\n racks: [\n {\n rackId: 1,\n available: false\n },\n {\n rackId: 2,\n available: true\n },\n {\n rackId: 3,\n available: true\n },\n {\n rackId: 4,\n available: true\n }\n ]\n },\n {\n _id: 'E',\n racks: [\n {\n rackId: 1,\n available: true\n },\n {\n rackId: 2,\n available: true\n },\n {\n rackId: 3,\n available: true\n },\n {\n rackId: 4,\n available: false\n }\n ]\n }\n]);\nracksrackId123racksavailabletruedb.rackSets.aggregate([\n {\n // first, we filter out the documents, that do not contain required rackIds,\n // so the machine's memory is not wasted processing them in the next stages\n $match: {\n 'racks.rackId': {\n // this is first condition - we need to mention required rack ids\n $all: [1,2,3] \n }\n },\n },\n {\n $addFields: {\n // next, we make sure that document with required rackId\n // also has property 'available' with boolean value 'true'.\n // nMatches - is a counter for how many objects \n // in the 'racks' array do meet our second condition \n nMatches: {\n $reduce: {\n input: '$racks',\n initialValue: 0,\n in: { \n $cond: {\n if: {\n $and: [\n { \n // do not forget to mention required rack ids here, in this condition \n $in: [ '$$this.rackId', [1,2,3]],\n },\n {\n $eq: ['$$this.available', true]\n }\n ]\n },\n then: {\n $add: ['$$value', 1]\n },\n else: {\n $add: ['$$value', 0]\n }\n }\n }\n }\n }\n }\n },\n {\n // in this stage we filter out documents, that contain our selected racks, \n // butwith 'available' field set to false\n $match: {\n nMatches: {\n // this is the sum of the rack ids we are looking for \n // [1,2,3] = 3 rack ids\n $eq: 3 \n }\n }\n },\n // remove temporary field\n {\n $unset: ['nMatches']\n }\n]);\n[\n {\n _id: 'B',\n racks: [\n { rackId: 1, available: true },\n { rackId: 2, available: true },\n { rackId: 3, available: true }\n ]\n },\n {\n _id: 'E',\n racks: [\n { rackId: 1, available: true },\n { rackId: 2, available: true },\n { rackId: 3, available: true },\n { rackId: 4, available: false }\n ]\n }\n]\nErack123availabletruerackId4rackdb.rackSets.aggregate([\n {\n $match: { /* unchanged */ },\n {\n $addFields: { /* unchanged */ }\n },\n {\n $addFields: {\n totalRacks: {\n $size: '$racks'\n }\n }\n },\n {\n $match: {\n nMatches: {\n $eq: 3\n },\n totalRacks: {\n $eq: 3\n }\n }\n },\n {\n $unset: ['nMatches', 'totalRacks']\n }\n]);\nE",
"text": "Hello, @Manjunath_k_s and welcome to the commjunity! What you’re trying to achieve is not possible with a simple find operation. Instead, consider using aggregation pipeline.Let me show you how it can be done by an example.First, we will create a simplified dataset:Now, let’s say, that we want to get all the documents, that meet the following conditions:To get the required result, we would use the following aggregation pipeline code:The output result:Notice, that document E also returned. It contains the rack objects with all the required ids (1,2,3) and available property set to true. But also It contains rack with rackId equals to 4. In case we want only the documents, that do not contain rack, that we are not seeking for - we can achieve it, just by modifying the end of the pipeline so it would look like this:The modification above will remove document E from the results.",
"username": "slava"
},
{
"code": "",
"text": "Hi Slava,Thank you. I appreciate your quick and idea to tackle the ask. I have some follow up questions.Thanks a ton in advance.",
"username": "Manjunath_k_s"
},
{
"code": "db.rackSets.find({\n 'racks.rackId': {\n $all: [1,2,3] \n },\n 'racks.available': {\n $ne: false\n }\n});\n[\n {\n _id: 'B1',\n racks: [\n {\n rackId: 1,\n available: true\n },\n {\n rackId: 2,\n available: true\n },\n {\n rackId: 3,\n available: true\n }\n ]\n },\n {\n _id: 'B2',\n racks: [\n {\n rackId: 1,\n available: true\n },\n {\n rackId: 2,\n available: true\n },\n {\n rackId: 3,\n available: true\n }\n ]\n },\n]\n",
"text": "Agree, the aggregation turned out to be a bit long, but it covers an edge case If you need to retrieve the document, that contain only 1 document with the rack ids you require and nothing more, then you can use simpler and more efficient solution However, this solution does not cover the case, where you have two documents with exact same rack ids list. Like these ones:If it is possible to have two documents with the same rack ids list in your system, you need to think about a logic on how to pick one singe document form the result .Regarding the atomicity. Any update on 1 single document in MongoDB is atomic. That means, if you need to update one single document at a time, you don’t need to worry about atomicity. If you need to update multiple documents at once - use transactions. If you have to update tons of documents at once - consider making some modifications to your data model.",
"username": "slava"
},
{
"code": "db.rackSets.find({\n 'racks.rackId': {\n $all: [1,2,3] \n },\n 'racks.available': {\n $ne: false\n }\n});\n",
"text": "Great this the kind of query i am looking for. Appreciate your suggestion. Your query is very close but not working in below cases.Does this consider matching the “available” field to all rackId’ s in given array (sort of AND condition between rackId and available field for a list of rackIds) ?Looks like not !If you take a look at below use case, input rackId list is [1, 2] for which available field is true. I know i have rack object rackId 3 which is set available false intentionally in the input. However the result is empty !\nIs this expected behavior ?\nimage1798×939 63.9 KB\n",
"username": "Manjunath_k_s"
},
{
"code": "db.rackSets.find({\n racks: {\n $all: [\n { rackId: 1, available: true },\n { rackId: 2, available: true }\n ]\n },\n});\nrack[\n {\n _id: 'A',\n racks: [\n {\n rackId: 1,\n label: 'green',\n available: true\n },\n {\n rackId: 2,\n label: 'red',\n available: true\n },\n ]\n }\n]\nlabeldb.rackSets.find({\n racks: {\n $all: [\n { rackId: 1, label: 'green', available: true },\n { rackId: 2, label: 'red', available: true }\n ]\n },\n});\n",
"text": "Try this one:Keep in mind, that this solution will work only if you provide the full list fields if the rack object you require. That means, if your real model has more properties in your rack objects, like this, for example:Then you will also have to mention that label field with its exact value in your query:So, if your model indeed has more fields or you’re planning to add them later - better to use that aggregation pipeline I showed you in the very first message .",
"username": "slava"
}
] | How to match additional field values with $all? | 2023-08-05T10:13:48.401Z | How to match additional field values with $all? | 410 |
null | [
"flutter"
] | [
{
"code": "",
"text": "I have a problem in my app with (flutter and realm) reconnecting to MongoDB Atlas.The problem is the following: When I lose the internet connection, the application tries to connect to MongoDB Atlas every 5 minutes (that’s correct), the problem I have is that when I recover the connection I also have to wait for those 5 minutes, how can I do so that when the internet connection recovers it connects automatically without waiting those 5 minutes?I have not found any information about it",
"username": "rivars"
},
{
"code": "",
"text": "Hi @rivars there is an exponential backoff associated with the sync-client attempting to reconnect which has a maximum of 5 minutes. This is because attempting to connect without a network is one of the more battery draining tasks an application can do. You can explicitly force a reconnect in user code by using the pause() and resume() APIs when you are back on the network -",
"username": "Ian_Ward"
},
{
"code": "",
"text": "thank you so much! it worked perfect!",
"username": "rivars"
},
{
"code": "",
"text": "I may have some kind of problem for doing this forcing synchronization?",
"username": "rivars"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do I reconnect to MongoDB Atlas without having to wait 5 minutes? | 2023-08-03T11:55:29.470Z | How do I reconnect to MongoDB Atlas without having to wait 5 minutes? | 546 |
null | [
"ops-manager"
] | [
{
"code": "",
"text": "I am having an issue deploying OpsManager to AKS. As a policy we do not allow privilege escalation containers in our clusters.Privilege container is not allowed: mongodb-enterprise-init-appdbIt was my understanding that ‘allowPrivilegeEscalation’ was set to false for all images?",
"username": "Sean_O_Reilly"
},
{
"code": "",
"text": "Hello @Sean_O_Reilly ,Welcome to The MongoDB Community Forums! MongoDB Ops Manager is part of Enterprise Advanced, which is a product requiring a subscription to use.I would advise you to bring this up with the Enterprise Advanced Support | MongoDB as typically these issues will require detailed knowledge into your deployment infrastructure. Alternatively, if you’re evaluating Ops Manager and would like more insight, please DM me and I should be able to connect you to the right team.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Thanks Tarun.We are currently evaluating Ops Manager using the kubernetes operator. I have the operator deployed successfully, but on trying to deploy Ops Manager, I am getting the privilege escalation error mentioned. We do not allow privilege escalation in any of our clusters.RegardsSean",
"username": "Sean_O_Reilly"
}
] | MongoDB ops manager privilege escalation | 2023-08-01T10:21:54.544Z | MongoDB ops manager privilege escalation | 610 |
null | [
"sharding",
"performance"
] | [
{
"code": "",
"text": "Hello,\ncan a single-core performance of primary config server be a bottleneck of chunk migration after adding an additional shard?Context:\nI have a 4-shard (3 nodes each, plus 3 config server) 4.4.1 cluster. Each node has about 3TB compressed (zstd) data, ~15TB uncompressed. I have recently added a new (5th) shard due to DB growth. After a few hours I calculated that balancing will take long weeks or even months. I also found that mongod process on primary config server uses about 1 full CPU core (on a 4-core) system and seems to be a bottleneck since I/O (including network) and CPU usage on other nodes is rather low. Is the balancer process single-threaded or it just doesn’t want to use the whole CPU? Can I speed it up by adding more CPU cores?\nIs there any other way to speed the balancing up?Without that I may have disk space issues on existing nodes since I won’t be able to release disk space fast enough.Thanks\nAndrzej",
"username": "Andrzej_Podgorski"
},
{
"code": "",
"text": "Bump, after almost 2 years and adding new shards I always have the same problem of balancer performance but it hits me even harder (the performance is even lower).\nIs there ANY way to speed up the balancer?",
"username": "Andrzej_Podgorski"
},
{
"code": "",
"text": "Hey Andrzej,The balancer is currently single-threaded in order to reduce the impact of chunk migrations on ongoing workloads. With that said, depending on your machine sizes, you can be bottlenecked on RAM, CPU or IOPS. We are actively working on speeding up the addShard process and will get back to you once the improvements are delivered.Thanks for your input!Garaudy",
"username": "Garaudy_Etienne"
},
{
"code": "",
"text": "Hi @Garaudy_Etienne ,\nI am also having production issues with this, after adding a new shard, the performance is hit very hard.Is their any way to handle this until the addShard optimizations are done in the next versions?Currently using mongodb 6.0.6.",
"username": "Oded_Raiches"
},
{
"code": "",
"text": "Try running the balancer in non peak window only?Migrating data of course consume resources amd may impact performance at busy time",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thank you @Kobe_W .\nUnfortunately, I see that the balancer window should be used when:particularly when your data set grows slowly and a migration can impact performanceIn my case the data grows very quickly (~5K IOPS).Also, seeing the following desclaimer:The balancer window must be sufficient to complete the migration of all data inserted during the day.And am not really sure what the length of the window should be in my case, or what should I do if I don’t find the right window.Is there any other option we can use? or can we make it less impactful in general? (without a window)",
"username": "Oded_Raiches"
},
{
"code": "",
"text": "Try searching mongo doc. There are some available params to tune it.Eg",
"username": "Kobe_W"
},
{
"code": "",
"text": "If your io is really high, also consider upgrading your hardware.",
"username": "Kobe_W"
},
{
"code": "userId:1userId:\"hashed\"userId:1state:1,city:1state:\"hashed\",city:1state:1,city:1",
"text": "There is an option of resharding twice, that would be much faster and less impactful on your system. You would need to be on MongoDB 5.0 or later and make sure you have enough spare disk storage capacity. Please read this article to understand the concept behind it.Let’s say your shard key is userId:1. You would reshard to the hashed version of your shard key, so userId:\"hashed\" then reshard again back to your original shard key of userId:1.Let’s say your shard key is state:1,city:1. You would reshard to the hashed version of your shard key, so state:\"hashed\",city:1 then reshard again back to your original shard key of state:1,city:1.Depending on how much data you have per shard, each resharding could take anywhere from a day to a week. This works much faster because resharding writes to all shards in parallel instead of migrating data one chunk at-a-time.The upside is that it’s much faster than chunk balancing/migration and has almost no impact on your workload (since you simply drop the old collection), meaning you can run it 24/7. But you must ensure that you have enough spare disk space for it. (Please read the resharding documentation)Since you are resharding to the hashed version of your shard key and then back, you DO NOT need to rewrite your application’s queries to use both the current shard key and the new shard key. You can simply reshard twice without any changes to your application.",
"username": "Garaudy_Etienne"
},
{
"code": "",
"text": "Hi @Garaudy_Etienne ,\nUnfortunately, I don’t meet the requirements of 1.2x free disk space to do this, and part of the reason for the shard expansion is due to low disk space, so the data can move out to other shards, so it will lower the load and disk usage.",
"username": "Oded_Raiches"
}
] | Balancing performance after adding a shard | 2020-10-23T19:27:42.061Z | Balancing performance after adding a shard | 3,638 |
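A short sketch of the balancer tuning Kobe_W alludes to above, run through mongos against the config database; the window times and chunk size are only example values and should be adapted to the cluster:

const cfg = db.getSiblingDB("config")

// only balance during an off-peak window (server local time)
cfg.settings.updateOne(
  { _id: "balancer" },
  { $set: { activeWindow: { start: "23:00", stop: "06:00" } } },
  { upsert: true }
)

// optionally raise the chunk size (in MB) so fewer, larger migrations are needed
cfg.settings.updateOne(
  { _id: "chunksize" },
  { $set: { value: 256 } },
  { upsert: true }
)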
[
"aggregation"
] | [
{
"code": "aggregation pipelinejsonjson[]$convert$toStringobjectarray$accumulator$reduce$concat$toStringJSON.stringify$cond$reduce{\n \"_id\": {\n \"$oid\": \"64c3934020a49e88d4b17f84\"\n },\n \"jsonArrayValues\": [\n {\n \"int\": 1,\n \"double\": 2.5,\n \"boolean\": true,\n \"string\": \"Testing\",\n \"objectId\": {\n \"$oid\": \"61b0fdcbdee485f7c0582db6\"\n },\n \"date\": {\n \"$date\": \"2022-04-08T00:00:00.000Z\"\n },\n \"arrayInt\": [\n 1,\n 2,\n 3\n ],\n \"nested\": {\n \"int\": 3,\n \"string\": \"Testing 2\"\n },\n \"arrayObj\": [\n {\n \"int\": 4,\n \"string\": \"Testing 3\"\n },\n {\n \"int\": 5,\n \"string\": \"Testing 4\"\n }\n ]\n },\n {\n \"int\": 6,\n \"double\": 3.5,\n \"boolean\": false,\n \"string\": \"Testing 5\",\n \"objectId\": {\n \"$oid\": \"62b0fdcbdee485f7c0582db6\"\n },\n \"date\": {\n \"$date\": \"2023-04-08T00:00:00.000Z\"\n },\n \"arrayInt\": [\n 4,\n 5,\n 6\n ],\n \"nested\": {\n \"int\": 7,\n \"string\": \"Testing 6\"\n },\n \"arrayObj\": [\n {\n \"int\": 8,\n \"string\": \"Testing 7\"\n },\n {\n \"int\": 9,\n \"string\": \"Testing 8\"\n }\n ]\n }\n ],\n \"testId\": \"bugfix.schema-aware-queries.cast-json-array-to-varchar.case1\"\n}\n$substr$project",
"text": "The problem: We have documents stored in our collection which have nested objects, nested arrays of objects etc, and what we need to be able to do is within an aggregation pipeline convert the values of the json and json[] fields into a sting . Ideally what we would like is for the $convert or the $toString operators to support input types of type object and array (of objects as well as primitives).We can’t use the $accumulator operator as we don’t have or want JavaScript turned on in our servers.What we have done so far is to use a combination of $reduce along with $concat and $toString to “manually” do a JSON.stringify and have gotten quite far. The problem we are running into with this approach is we end up with trailing commas in the string. I tried to use $cond to check if we are in the last item in the array but $reduce does not seem to give us an iteration index we can use to determine if we are in the last item or not. Our test data looks as follows:The real problem with this approach is the trailing commas within the objects themselves. We are building up the pipeline itself programmatically and it will support multiple types of documents so the complexity goes up quickly if we try to $substr all the instances where a trailing comma would be.Our $project step looks as follows:\n\nimage840×666 81.3 KB\nWe could have a mapping function server side that would transform the data after it comes out of the db, but this is library code which puts the onus on the consumer of the library to remember to call the map so not ideal.Any advice or assistance would be appreciated!",
"username": "Ryan_Kotzen"
},
{
"code": "$cond",
"text": "For anyone else experiencing a similar issue we have a work-around using $cond as follows to fix the trailing commas:\n\nimage1728×1718 209 KB\nThough natively being able to convert an object to string would simplify the process dramatically!",
"username": "Ryan_Kotzen"
}
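The $project/$cond screenshots above do not reproduce here, so the following is a minimal sketch of the comma-handling trick for a single flat sub-document. The collection and field names are only illustrative, and $toString still accepts scalars only (strings are not re-quoted), so this just shows how to avoid the trailing comma without an iteration index:

```js
db.testData.aggregate([
  {
    $set: {
      nestedJson: {
        $let: {
          vars: { kvs: { $objectToArray: "$nested" } },
          in: {
            $concat: [
              "{",
              {
                $reduce: {
                  input: "$$kvs",
                  initialValue: "",
                  in: {
                    $concat: [
                      "$$value",
                      // emit a separator only when something has already been written,
                      // which avoids trailing (and leading) commas entirely
                      { $cond: [{ $eq: ["$$value", ""] }, "", ","] },
                      "\"", "$$this.k", "\":",
                      { $toString: "$$this.v" } // scalar values only
                    ]
                  }
                }
              },
              "}"
            ]
          }
        }
      }
    }
  }
]);
```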
] | JSON.stringify within an aggregation pipeline | 2023-07-31T14:29:26.978Z | JSON.stringify within an aggregation pipeline | 532 |
|
null | [
"connector-for-bi"
] | [
{
"code": "",
"text": "I am installed bi connector ,I am able to connect local database in tableau and unable to connect external database with tableau and not able to configure bi connector also",
"username": "Abraham_Catchirayar"
},
{
"code": "external database",
"text": "Hello @Abraham_Catchirayar ,Welcome to The MongoDB Community Forums! unable to connect external database with tableauPlease confirm what do you mean by external database here? Do you mean MongoDB Atlas?\nAlso, I would recommend you to kindly check below resources for connecting MongoDB to Tableau.Learn how to connect Tableau with MongoDB and leverage the power of Atlas SQL for all your business analytics and reporting needs.not able to configure bi connector alsoWhat is the issue that you are facing with this?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hello @Tarun_Gaur ,Thanks for the reply , External database means , we connect a database in atllas to mongo db compass and internal database means locolhost in compass , we can connect that internal database , while running Bi connector , only show databse from local host , help on this issue\n\nScreenshot (161)1920×1080 239 KB\n",
"username": "Abraham_Catchirayar"
}
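The “only localhost databases” symptom usually means mongosqld is still pointed at the default local mongod rather than at the Atlas cluster. A hedged example of starting it against Atlas follows; the host name, replica set name, and credentials are placeholders, and the flag names are taken from the BI Connector documentation, so double-check them for your mongosqld version. Note that M10+ Atlas clusters can also enable a hosted BI Connector, which avoids running mongosqld yourself.

```sh
mongosqld \
  --mongo-uri "mongodb://cluster0-shard-00-00.example.mongodb.net:27017/?ssl=true&replicaSet=atlas-abc123-shard-0&authSource=admin" \
  --auth \
  --mongo-username myAtlasUser \
  --mongo-password 'myAtlasPassword' \
  --addr 127.0.0.1:3307
```

Tableau then connects to 127.0.0.1:3307 through the MySQL-wire-protocol driver, and the Atlas databases should appear instead of the local ones.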
] | MongoDB BI connector Issue in Tableau | 2023-08-02T13:31:36.037Z | MongoDB BI connector Issue in Tableau | 593 |
null | [
"aggregation",
"queries",
"java",
"crud"
] | [
{
"code": "",
"text": "hello. I am a developer trying to develop an application with MongoDB as the main repository. If you have any questions, please leave a question.The project I am currently developing has application instance group A and application instance group B. Both operate as multi-instance, group A saves documents in DB in parallel, and group B reads and uses documents stored by A.There is one limitation here. The point at which the document is stored in the DB is very important. When Group A’s instances save documents in parallel, they need to be able to recognize exactly when a document is saved to the DB when Group B retrieves a document.Therefore, I would like a feature where MongoDB, not the application, automatically initializes the document’s creation-time fields when the document is saved.To solve this, I initially tried to use ObjectId. From reading the official description, it’s because when you save the document, if you leave the _id field blank, it automatically initializes the field, and the ObjectId contains a timestamp within it. However, reading the official description, if the _id field is empty, it is initialized by the driver, not the DB, so this couldn’t be a solution.I found $currentDate as another solution. If you use $currentDate when upsert the document, this also seems to satisfy your requirements.I have a question for you here. Where does the logic to initialize the time run when a document is upsert using $currentDate? I’d like it to be initialized to the server’s native time, but I don’t think I can use this workaround if the document is initialized using the application’s system time.Thanks to everyone who replies. ps. sorry for my english skill.",
"username": "YounWan_Kim"
},
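On the specific question of where the time comes from: $currentDate is an update operator, so it is evaluated by mongod using the server's clock, not by the driver; the same holds for $$NOW inside a pipeline-style update (MongoDB 4.2+). A minimal sketch, with collection and field names chosen only for illustration:

```js
// server stamps the field on every upsert/update
db.events.updateOne(
  { _id: someId },                      // someId is a placeholder
  { $currentDate: { lastSavedAt: true } },
  { upsert: true }
);

// pipeline update: keep the first server-side timestamp as an immutable creation time
db.events.updateOne(
  { _id: someId },
  [ { $set: { createdAt: { $ifNull: ["$createdAt", "$$NOW"] } } } ],
  { upsert: true }
);
```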
{
"code": "",
"text": "if the _id field is empty, it is initialized by the driver, not the DB, so this couldn’t be a solutionthe time difference between driver and db server may be only 10ms, does it really matter for your case?There’s no global wall clock across computers. Even with NTP, there will be clock drift. So if you really need a “100% accurate timestamp”, don’t use wall clock. Instead use virtual time. (e.g. lamport clock).So i suggest you revisit your use case and/or refine your design.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Thank for replying I have one more question. If I use $currentDate when instances in group A take turns saving data, can I guarantee the relative chronological order between the saved data?Let’s give an example. Let’s say we have instances 1 and 2 in group A, and data 1 and 2 need to be processed by instance 1 and 2, respectively. Let’s assume that data 1 and data 2 are imported into group A in order.Timestamp1 must not be less than timestamp2 in this process because the relative point in time at which data is stored is important.Could $currentStamp solve this problem?",
"username": "YounWan_Kim"
},
{
"code": "",
"text": "i think you need to get a better understanding on “ordering” in a distributed system.because the relative point in time at which data is stored is important.A relative point in time can not be represented by a wall clock timestamp (in this case). Because if there’s no “happen before” relationship between two events, then we consider the two events as “concurrent”, meaning no one happens before the other.In your example case, there’s no happen before in data 1 and data 2, so how do you know which one goes first?No, you don’t.Check out happen-before/message-ordering/lamport-clock/… concepts on Internet.To give you an example, if the same client sends msg1 first, then sends msg2, then we say: msg1 happens before msg2.",
"username": "Kobe_W"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | I am curious about how $currentDate works | 2023-08-05T16:56:12.358Z | I am curious about how $currentDate works | 509 |
null | [
"mongodb-shell",
"atlas-cluster"
] | [
{
"code": "",
"text": "Hello. Greetings.\nI have two organizations on my account. One is for projects of the company I work for and the other I created as a playground.I’m currently without Fiber, so my mobile hotspot keeps changing the IP.\nI’ve decided to add a dedicated IP from my VPN Provider.\nNow I can see my dedicated IP if I use services such as whatsmyip and so on.If I go to my playground and allow this IP, I can connect with no problems whatsoever.\nBut when I ask my project owner to add this IP to my company’s project, I can’t connect.And this is the error I get:❯ mongosh “mongodb+srv://company.mongodb.net/?authSource=%24external&authMechanism=MONGODB-X509” --apiVersion 1 --tls --tlsCertificateKeyFile dist/db/mongoTlsCertificate.pem\nCurrent Mongosh Log ID: ************\nConnecting to: mongodb+srv://company.mongodb.net/?authSource=%24external&authMechanism=MONGODB-X509&tls=true&tlsCertificateKeyFile=dist%2Fdb%2FmongoTlsCertificate.pem&appName=mongosh+1.10.1\nMongoServerSelectionError: connection to ..**.78:27017 closed. It looks like this is a MongoDB Atlas cluster. Please ensure that your Network Access List allows connections from your IP.Any ideas on why this happens??",
"username": "Jonathan_Martins"
},
{
"code": "",
"text": "Hi @Jonathan_Martins - Welcome to the community.If I go to my playground and allow this IP, I can connect with no problems whatsoever.\nBut when I ask my project owner to add this IP to my company’s project, I can’t connect.The behaviour being described does appear a bit odd. I’m curious to know if you waited a few minutes before attempting to connect to the company’s cluster after thet IP was added? Atlas requires a few moments to propogate the applied changes to the project (most of the time this happens in seconds for adding IP entries to the Network Access List in my experience).I assume they have other clients able to connect to the same cluster(s) (in the company’s project) with no issue but please correct me if I am wrong here.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Can't connect from a dedicated iP | 2023-08-04T14:51:40.355Z | Can’t connect from a dedicated iP | 434 |
null | [
"licensing"
] | [
{
"code": "",
"text": "Hello,I have recently joined my company and we utilized the mongo db Atlas. we own some credits for mongo db like - MongoDB Atlas Pro Package - FlexCommit and MongoDB Atlas Rollover.We do receive the invoice from mongo for the credit consumed for every month.I wanted to understand from where we can check the usage for these credit.\nis there a report we can utilized and how to fetch that report that showes the details for these credit consumed.Thanks !!Deepak Chauhan",
"username": "Deepak_Chauhan1"
},
{
"code": "",
"text": "Hi @Deepak_Chauhan1,We do receive the invoice from mongo for the credit consumed for every month.I wanted to understand from where we can check the usage for these credit.\nis there a report we can utilized and how to fetch that report that showes the details for these credit consumed.You can head over to your billing overview section of your Atlas organization - I’ve linked the documentation for steps on how you can view the available credits.If you’re after more specific information regarding credit usage, I would recommend contacting the Atlas in-app chat support team as they have more insight into your Atlas account.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Important : Mongo DB Atlas | 2023-08-04T09:32:22.947Z | Important : Mongo DB Atlas | 459 |
null | [] | [
{
"code": " cannot define relationship for property \"Address.city\" which does not exist in schema for collection \"Person\"\n",
"text": "Hi,This seems to be a limitation (bug) on the Realm interace.\nConsider a collection “Person” containing an embedded objet “Address”.\n“Address” is made of several fields (strings, boolean,…) and one reference to an object in the “City” collection.In that case, when creating the Realm schema there is no issue to create a relationship between Address.City and the related City._id.That been said, if you want to update the schema to allow several addresses to be stored by placing them into an array, then it is not possible to create the relationship anymore.Note that, switching to Development Mode and declare the same structure in your client app seems to work but then edition of the schema become impossible via the Realm web console.Is there any workaround to this issue ?thanks by advance.\nregards\nbruno",
"username": "bruno_levx"
},
{
"code": "{\n \"Address.city\": {\n \"foreign_key\": \"_id\",\n \"ref\": \"#/relationship/mongodb-atlas/forum/City\",\n \"is_list\": true\n }\n}\n",
"text": "Hi @bruno_levx,I’ve think that I’ve recreated what you’re seeing through the Realm UI. This is my (failed) relationship:When you run in development mode, what shows up in the “Relationships” tab (or in the “Advanced Mode” doc?)",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Hi Andrew,If it’s created in dev mode, what is shown in the interface is exactly the same. That’s why I said we can’t modify the schema.\nWhen switching in advanced mode, the relationship is simply not shown at all.regards\nBruno",
"username": "bruno_levx"
},
{
"code": "",
"text": "Hi Bruno, I was hoping to see what the relationship definition looked like when it was inferred from the app code in development mode – is that visible? Also, could you please share your application object definitions?",
"username": "Andrew_Morgan"
},
{
"code": "public class Target : RealmObject\n {\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; }\n [MapTo(\"__partition\")]\n public string Partition { get; set; }\n [MapTo(\"data\")]\n public string data { get; set; }\n }\n\n public class Parent_Emb : EmbeddedObject\n {\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; }\n [MapTo(\"__partition\")]\n public string Partition { get; set; }\n [MapTo(\"createdOn\")]\n public DateTimeOffset CreatedOn { get; set; }\n [MapTo(\"target\")]\n public Target target { get; set; }\n }\n\n public class Parent :RealmObject\n {\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; }\n [MapTo(\"__partition\")]\n public string Partition { get; set; }\n [MapTo(\"embedded\")]\n public IList<Parent_Emb> embedded { get; }\n }\n...\n...\n Target target = new Target();\n target .Id = ObjectId.GenerateNewId();\n target .Partition = user.Id;\n target .data = \"test\";\n\n Parent p = new Parent();\n p.Id = ObjectId.GenerateNewId();\n p.Partition = user.Id;\n Parent_Emb pe = new Parent_Emb();\n pe.Id = ObjectId.GenerateNewId();\n pe.Partition = user.Id;\n pe.CreatedOn = DateTime.Now;\n p.embedded.Add(pe);\n\n realm.WriteAsync((realmt) =>\n {\n\n realmt.Add(target);\n realmt.Add(p);\n p.embedded[0].target = target;\n });\n{\n \"embedded.[].target\": {\n \"ref\": \"#/relationship/mongodb-atlas/Establishment/Target\",\n \"foreign_key\": \"_id\",\n \"is_list\": false\n }\n}\n{\n \"roles\": [],\n \"filters\": [],\n \"schema\": {\n \"title\": \"Parent\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"__partition\": {\n \"bsonType\": \"string\"\n },\n \"embedded\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"title\": \"Parent_Emb\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"createdOn\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"__partition\": {\n \"bsonType\": \"string\"\n },\n \"createdOn\": {\n \"bsonType\": \"date\"\n },\n \"target\": {\n \"bsonType\": \"objectId\"\n }\n }\n }\n }\n }\n }\n}\nerror validating rule relationships: cannot define relationship for property \"embedded.[].target\" which does not exist in schema for collection \"Parent\"\n",
"text": "Andrew,\nI remade the test.here the code used in C#here is the relation created automatically :if i switch in advanced mode, relation completly vanish:if I try to modify the schema ( I.e. just adding a “s” to “target” property name ) , I got this message.regards,\nbruno",
"username": "bruno_levx"
},
{
"code": "{\n \"Address.[].city\": {\n \"foreign_key\": \"_id\",\n \"ref\": \"#/relationship/mongodb-atlas/forum/City\",\n \"is_list\": false\n }\n}\nrules.jsonschema.jsonPerson{\n \"title\": \"Person\",\n \"properties\": {\n \"Address\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"city\": {\n \"bsonType\": \"string\"\n },\n \"Street\": {\n \"bsonType\": \"string\"\n }\n }\n }\n },\n \"Name\": {\n \"bsonType\": \"string\"\n },\n \"_id\": {\n \"bsonType\": \"objectId\"\n }\n }\n}",
"text": "Thanks.The UI allowed me to add this relationship:The relationship doesn’t show in the “Advanced Mode” view for me either, but if I export the app then I can see that it is in the rules.json file rather than schema.json and so I think all is OK.This is my Person schema:",
"username": "Andrew_Morgan"
},
{
"code": ".[].",
"text": "What happen if you try to change ‘Street’ to “Streets” in the UI.\nWill I allow you to make the change ?On my side it won’t. In other words, it works technically but it is impossible to do it in the UI only in dev mode.\nThis means, if a change is required, there is no other ways than turning dev mode on again then update the schema thru an app, what is a bit annoying if the app is already in production.I think the issue is only in the schema validation rules that refuses the .[]. part of the relationship.regards,\nBruno.",
"username": "bruno_levx"
},
{
"code": "StreetStreets",
"text": "It lets me change Street to Streets without a problem. I set up the relationship before enabling sync, but I was still able to edit it after turning on sync (and without needing to use development mode). I get a warning when editing the schema and it restarts sync, but it seems to accept the change.",
"username": "Andrew_Morgan"
},
{
"code": "error validating rule relationships: cannot define relationship for \nproperty \"embedded.[].target\" which does not exist in schema for collection \"Parent\"\n",
"text": "So this means I may miss something somewhere. On my side, it doesn’t even want to save the change.Edit: After stoping dev mode and turning off/on the sync it seems to accept the change now…\nI don’t exactly understand were i made a mistake but in the end, it seams to work.Thanks for your time !",
"username": "bruno_levx"
},
{
"code": "",
"text": "@Andrew_Morgan\nJust an additionnal question:\nIs there any plan to add the possibility to create such link via the [Add Relationship] wizzard ?for now, field declared in embedded object are not shown in the dropdown list.thanks in advance.\nregards",
"username": "bruno_levx"
},
{
"code": "",
"text": "Hi @bruno_levx, I agree that this would be a good enhancement for the UI – I’d suggest up-voting this request (I just did ) Allow deeper relationships under Rules – MongoDB Feedback Engine",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Hi @Andrew_Morgan , after adding the relationship using the “Address.[].city” format successfully, the change isn’t reflected in the GraphQL schema. Any ideas?Thanks in advance.",
"username": "Chetan_Bhuwania"
},
{
"code": "",
"text": "Having the same issue (not seeing the change reflected in GraphQL or the relationship visible in GraphiQL. Any advice or did you find a way to get this working?",
"username": "Mike_Tedeschi"
},
{
"code": "",
"text": "Hey @Mike_Tedeschi , I’m still stuck on this \nIf you come across a solution, please share here.Thanks.",
"username": "Chetan_Bhuwania"
},
{
"code": "",
"text": "The issue is still present …",
"username": "Olivier_Wouters"
},
{
"code": "",
"text": "This issue is still present 1 year later",
"username": "Joseph_Devlin"
},
{
"code": "",
"text": "Hi @Joseph_Devlin, there seem to be a few subtly different things going on in this thread. What is it you are trying to do?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "So the specific issue I am having is that I cannot create relationships with embedded items but after trying a few things its just a problem with the new UI. I am still able to manually create these relationships using the json view when editing the schema. It is now merely an inconvenience but is misleading at first because the default UI makes it appear this operation is impossible.",
"username": "Joseph_Devlin"
},
{
"code": "",
"text": "Hi @Joseph_Devlin! I can confirm that this is indeed misleading – I’ve created a ticket to track this work on our end!",
"username": "Valeria_Tiourina"
},
{
"code": "\"lineups.[].team\": {\n \"foreign_key\": \"_id\",\n \"ref\": \"#/relationship/mongodb-atlas/MyAppName/Teams\",\n \"is_list\": false\n }```\n\n**lineups** is an embedded array of objects, with **team** being one of the fields in the object.\n\nI have relationships working elsewhere in the schema, but clearly not on embedded array of objects. I have relationships with Team in other schemas too so it's not specifically that.\n\nRegards\nPaul",
"text": "Hi all,I can confirm this is happening to me too. I am also unable to add manually into the JSON View, when I do this, I keep getting:Relationships syntax error: Prop foreignKey must exist in lineups..team relationshipThis is my manually entered relationship:",
"username": "Paul_Pounder"
}
] | [issue] Unable to add a relationship in a embedded object stored in an array | 2021-03-30T09:19:00.878Z | [issue] Unable to add a relationship in a embedded object stored in an array | 6,121 |
null | [
"queries"
] | [
{
"code": "for (var i = 1; i <= 101; i++) {\n db.testCollection.insert( { x : i } )\n }\n\nvar documentSize = 16 * 1024 * 1024-32; // 16MB\nvar largeData = new Array(documentSize).join('y');\ndb.testCollection.insert({ data: largeData });\n\n> db.adminCommand({\"getParameter\":1,\"cursorTimeoutMillis\":1})\n{ \"cursorTimeoutMillis\" : NumberLong(10000), \"ok\" : 1 }\n\nvar myCursor = db.testCollection.find();\nvar count = 0;\n\nwhile (myCursor.hasNext()) {\n var document = myCursor.next();\n count++;\n if (count > 101) {\n sleep(60000); ------> At this point, I expected this cursor should be expired.\n printjson(count)\n var a = document;\n }\n else {\n printjson(count,document)\n }\n}\n\n",
"text": "I’m testing this parameter and I really want to know how cursorTimeoutMillis parameter works.I expected that if I set this parameter to 10 seconds,\nthe cursor would time out in 10 seconds when it is in idle state, but nothing happened. The following is my scenario.before testing, I prepared my test data like thisafter creating data, I made my cursor .however, this cursor never expired. it ends normally without a cursor timeout.\nI don’t understand. What’s the problem?",
"username": "e_cofff"
},
{
"code": "",
"text": "After sleep try myCursor.isExhausted().",
"username": "ram_Kumar3"
},
{
"code": "kimdubi_repl [direct: primary] test> while (myCursor.hasNext()) {\n... sleep(60000);\n... print(myCursor.isExhausted())\n... var document = myCursor.next();\n... count++;\n... printjson(count)\n... var a = document;\n... }\n\n\n\n\nfalse\n1\nfalse\n2\nfalse\n3\n\nkimdubi_repl [direct: primary] admin> db.aggregate([ { $currentOp: { idleCursors: true } }, { $match: { \"ns\": \"test.testCollection\" } }, { $sort: { \"cursor.createdDate\": 1 } }])\n[\n {\n type: 'idleCursor',\n host: '83992e27a784:27017',\n ns: 'test.testCollection',\n lsid: {\n id: new UUID(\"9944244f-3870-4a3e-8694-7a324ef6228d\"),\n uid: Binary(Buffer.from(\"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\", \"hex\"), 0)\n },\n planSummary: 'COLLSCAN',\n cursor: {\n cursorId: Long(\"3063305060822111253\"),\n createdDate: ISODate(\"2023-08-06T03:51:58.999Z\"),\n lastAccessDate: ISODate(\"2023-08-06T03:54:59.213Z\"),\n nDocsReturned: Long(\"4\"),\n nBatchesReturned: Long(\"4\"),\n noCursorTimeout: false,\n tailable: false,\n awaitData: false,\n originatingCommand: {\n find: 'testCollection',\n filter: {},\n lsid: { id: new UUID(\"9944244f-3870-4a3e-8694-7a324ef6228d\") },\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1691293898, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n '$db': 'test'\n }\n }\n }\n]\n",
"text": "Thank you for the response, but it’s not resolved yet.",
"username": "e_cofff"
}
] | cursorTimeoutMillis parameter does not work. It never expires | 2023-07-27T13:05:07.566Z | cursorTimeoutMillis parameter does not work. It never expires | 358 |
[
"data-modeling",
"replication",
"sharding"
] | [
{
"code": "",
"text": "According to this:“You can set the number of shards to deploy with the sharded cluster. Your cluster can have between 1 and 50 shards, inclusive.”But according to MongoDB employee:“I’m not aware of a specific limit on number of shards. For a similar discussion on limits, please see my response on Database and collection limitations - #2 by Stennie 108.”and:“MongoDB offers horizontal scale-out using sharding: While a single ‘Replica Set’ (aka a shard in a sharded cluster) cannot exceed 4TB of physical storage, you can use as many shards as you want in your MongoDB Atlas sharded cluster.”Please resolve the contradiction or my mis understanding?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Hi. @Big_Cat_Public_Safety_Act I also knew that there was no limit to the number of shards.\nHowever, looking at the 50 restrictions in the atlas manual, isn’t there no limit in the on-prem environment, and isn’t there a limit in the atlas environment?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "@Kim_HakseonThe last quote:“you can use as many shards as you want in your MongoDB Atlas sharded cluster.”So this is wrong?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "Or is it that the number of shards that can be automatically added at the same time as the initial sharded cluster setting(sh.shardCollection()) is 1-50, and then manually add more through sh.addShard()?It’s as if it’s below. (Explained with the repl command for feeling)rs.initiate({members:[<1>,<2>,<3>, …, <50>]})\nrs.add(<51>)\nrs.add(<52>)",
"username": "Kim_Hakseon"
},
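For clarity on the mechanics being asked about: the shard count is not fixed by sh.shardCollection() at all; shards are whole replica sets that are added to (or removed from) the cluster one at a time, after which the balancer redistributes chunks. A sketch with placeholder replica set and host names:

```js
// add another shard (itself a replica set) to a running sharded cluster
sh.addShard("shard51rs/shard51-a.example.net:27018,shard51-b.example.net:27018");

// verify the shard list and watch chunk distribution
sh.status();
```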
{
"code": "",
"text": "Would be great if a source of authority can confirm or deny the above.",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "“You can set the number of shards to deploy with the sharded cluster. Your cluster can have between 1 and 50 shards, inclusive.”This is the designed limit in Atlas, as the limits page points out:If any of these limits present a problem for your organization, contact Atlas support.There are limitations of the cloud vendor environments that need to be navigated. MongoDB Support will make you aware of those and what options are available.",
"username": "chris"
},
{
"code": "",
"text": "There is no limit on the number of shards for MongoDB. 50 shards is just the limit we put in the Atlas User Interface because we believe that the majority of users will not need to start with more than 50 shards.We feel that it’s prudent for MongoDB to speak to those who want to start out with more than 50 in order to help ensure that they have the correct setup in place.",
"username": "Garaudy_Etienne"
},
{
"code": "",
"text": "Oh, so the limit of 50 for Atlas only applies during the initial setup? Does that mean that after the initial setup, the user is then free to scale to 50+ shards?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "",
"text": "not the initial setup, Atlas just wants to be aware if someone wants to go past 50 shards. You’re free to scale past 50 shards, but you just need to contact Atlas and say “I want to add N more shards”. We’re contemplating raising that “contact us” limit since quite a few people have wondered if we have a 50-shard limit on Atlas.",
"username": "Garaudy_Etienne"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Maximum number of shards a collection can be horizontally distributed? | 2023-07-25T00:52:40.672Z | Maximum number of shards a collection can be horizontally distributed? | 828 |
|
[] | [
{
"code": "",
"text": "I am looking to get the exact information on RTO and RPO for mongo atlas, but couldn’t get it. I found the below link and its useful but not specific to the point.MongoDB Atlas is built with distributed fault tolerance and automated data recovery to support your mission-critical workloads and applications.Appreciate any help on this.",
"username": "Neeraj_Acharya"
},
{
"code": "",
"text": "Check this link which discusses on RTO & RPO",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I Have Got Some Of the Best Information About Rto In India from A Blog Sharing It With You.\nI Think They Provide The Info Are Correct & Informative For A New user To RTO.",
"username": "Sunil_Santosh"
}
] | Where can I find the information on RTO and RPO for Mongo Atlas | 2021-12-20T16:58:10.922Z | Where can I find the information on RTO and RPO for Mongo Atlas | 4,421 |
|
[
"compass"
] | [
{
"code": "",
"text": "I am using the MongoDB Compass desktop app on Windows 10 and everytime I try to search something with a query, it doesn’t pull up any results even though I can see the query is correct. Am I doing this wrong? This also doesn’t work on the web version of MongoDB.\nimage2287×863 72.2 KB\n",
"username": "vNziie_N_A"
},
{
"code": "",
"text": "\nimage2308×991 84.4 KB\n",
"username": "vNziie_N_A"
},
{
"code": "_id",
"text": "@vNziie_N_A what is the type of that _id field you are trying to find with your query?",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "It’s an integer as seen in the first image @Massimiliano_Marcon",
"username": "vNziie_N_A"
},
{
"code": "",
"text": "Can anyone help? I still need help with this issue.",
"username": "vNziie_N_A"
},
{
"code": "",
"text": "Hello @vNziie_N_A ,Welcome to The MongoDB Community Forums! To understand your use case better and test this at my end, can you please share a few more details, such as:Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Please post UN-cropped Compass window snapshots. Sometimes the context helps up what is wrong. Can you also enter edit mode for the document you do not find. This way we should see the exact data type of the field you are querying. One thing I do when I have issue is to add a temporary field like _debug:true to some documents, this way I can query {_debug:true} and find my difficult documents.As hinted by Massimiliano_Marcon, I also suspect a type mismatch.",
"username": "steevej"
}
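A quick way to confirm the suspected type mismatch from mongosh (or the Compass aggregation tab); the collection name is a placeholder since it is cropped out of the screenshots:

```js
// see what BSON type the stored _id values actually have
db.items.aggregate([
  { $limit: 5 },
  { $project: { idType: { $type: "$_id" } } }
]);

// the filter value must then use the same type
db.items.find({ _id: 17 });    // matches a numeric _id of 17
db.items.find({ _id: "17" });  // matches only the string "17"
```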
] | Search results not accurate | 2023-07-27T18:01:27.955Z | Search results not accurate | 490 |
|
null | [
"python"
] | [
{
"code": "connection = \"mongodb+srv://\"+ username + \":\" + password + \"@xxxx.xxxxx.mongodb.net/Configurations?retryWrites=true&w=majority\"\n\nclient = pymongo.MongoClient(connection)\nclient = pymongo.MongoClient(connection, ssl_ca_certs=certifi.where())\ncacert.pem",
"text": "Hello, I am working on a python project using MongoDB for several months now, I had no connect issue until today where I encounter a CERTIFICATE_VERIFY_FAILED problem that looks like this:pymongo.errors.ServerSelectionTimeoutError: xxxx-shard-00-02.xxxxxx.mongodb.net:00000: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failedI browse through the community forums and lucky found the problem:My original code:My new fixed code:So it had something to do with the cacert.pem.But I find this problem odd, why today? it has been working for months with no issues, and furthermore, which cacert.pem is the default “client = pymongo.MongoClient(connection)” using? Where is the problem originated from and how do I fix it at its core?I would very much appreciate it if someone explains this to me.Thank you.",
"username": "CYC"
},
{
"code": "client = pymongo.MongoClient(connection, tlsCAFile=certifi.where())\n",
"text": "By default pymongo relies on the operating system’s root certificates.But I find this problem odd, why today?It could be that Atlas itself updated its certificates or it could be that something on your OS changed. “certificate verify failed” often occurs because OpenSSL does not have access to the system’s root certificates or the certificates are out of date. For how to troubleshoot see TLS/SSL and PyMongo — PyMongo 4.3.3 documentationAlso please note that “ssl_ca_certs” is deprecated and you should use “tlsCAFile” instead:",
"username": "Shane"
},
{
"code": "",
"text": "thanks for sharing, currently still working in 2023!",
"username": "Daniel_Haycraft"
},
{
"code": "",
"text": "thank you it finally worked after adding the tlsCAFile",
"username": "nidhish_N_A"
}
] | ServerSelectionTimeoutError [SSL: CERTIFICATE_VERIFY_FAILED] Trying to understand the origin of the problem | 2021-07-14T23:55:36.494Z | ServerSelectionTimeoutError [SSL: CERTIFICATE_VERIFY_FAILED] Trying to understand the origin of the problem | 12,073 |
null | [
"aggregation",
"queries",
"data-modeling",
"sharding"
] | [
{
"code": "user.aggregate([\n {\n $match: {\n age: { $gt: 18 },\n city: { $in: [\"chicago\", \"paris\"] }\n }\n },\n {\n $sort: {\n last_logged_in: -1\n }\n },\n {\n $limit: 10000\n }\n])\nuser10,000100,000",
"text": "If the user collection is partitioned into 10 shards, will each shard return 10,000 documents, totalling to 100,000?",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "$limit$limit: 10000$match$sortlast_logged_in$limit$limit: 10000",
"text": "No, each shard will not return 10,000 documents, totaling 100,000. When you use $limit in an aggregation pipeline, it limits the total number of documents returned by the entire pipeline, not per shard. Therefore, the $limit: 10000 in your pipeline will retrieve a total of 10,000 documents from all shards combined, not 10,000 per shard.The $match operator in your pipeline will filter documents based on the given criteria across all shards. This operation is performed on each individual shard and only the matching documents are returned from each shard.The $sort operator will then sort these filtered documents based on the last_logged_in field.Finally, the $limit operator will limit the total number of documents passed by the pipeline. This limit applies to the entire pipeline and not on a per-shard basis. Therefore, if you have specified $limit: 10000, it will retrieve a total of 10,000 documents from all shards combined, not 10,000 per shard.So, the total number of documents returned by the pipeline will be a maximum of 10,000, not 100,000.",
"username": "Garaudy_Etienne"
},
{
"code": "",
"text": "But mongos will potential receive 100_000 documents. 10_000 per shards because it does not know which 10_000 to return before it merge the sort result of each shard.",
"username": "steevej"
},
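This split is visible in the explain output for the original pipeline; on a sharded cluster it contains a splitPipeline section whose shardsPart carries the $sort/$limit pushed down to each shard and whose mergerPart shows mongos applying the final $limit after the merge sort:

```js
db.user.explain().aggregate([
  { $match: { age: { $gt: 18 }, city: { $in: ["chicago", "paris"] } } },
  { $sort: { last_logged_in: -1 } },
  { $limit: 10000 }
]);
// look for "splitPipeline.shardsPart" (per-shard $sort + $limit, so each shard sends
// at most 10,000 documents to mongos) and "splitPipeline.mergerPart" (the final $limit).
```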
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $limit in a scatter and gather query to a sharded collection | 2023-07-29T05:31:01.815Z | $limit in a scatter and gather query to a sharded collection | 524 |
[
"android",
"kotlin"
] | [
{
"code": "",
"text": "I am trying to set up an Android app using Realm Sync. I already have all my schema etc. set up, so I wanted to just import my existing objects into a new Android app in Android Studio.I have followed these instructions and (for the first time ever!) they seem to have actually worked! (Yay! My experience with Android Studio and Gradle files in the past has been that very little works the way it’s supposed to.)However, when I paste my objects from the online App Services portal into a new .kt file, I get errors, Unresolved reference: RealmObject etc. It can’t seem to find io.realm or org.bson and I don’t know why.What am I doing wrong?\nimage1920×1200 128 KB\n",
"username": "polymath74"
},
{
"code": "",
"text": "Ok, so looking at this docs page it seems the online tool generates code for the older Android/Java SDK. Do I have to convert this manually? (I’m having a go now.) Is there a way to get the new Kotlin SDK syntax out instead?",
"username": "polymath74"
},
{
"code": "",
"text": "… and how do I translate a RealmDictionary? I can’t find any mention of it in the Kotlin SDK docs.",
"username": "polymath74"
},
{
"code": "",
"text": "So at this point, it looks like:So much for my experiment with the new Kotlin SDK. I’m just going to try using the old one.",
"username": "polymath74"
},
{
"code": "",
"text": "Hey everyone!First off, big cheers for sharing your experience, polymath74! Navigating the intricate world of Android development can indeed be a rollercoaster of emotions, and I’m thrilled to see you making progress despite the challenges.Now, diving into your current hiccup, the Unresolved reference: RealmObject issue seems to be a classic twist in this journey. It looks like the online tool is spitting out code for the older Java-based SDK, which is probably why Kotlin’s giving you the cold shoulder. Converting it manually could be an option, but let’s explore other avenues.Since the new Kotlin SDK is still catching up, I’d suggest going the ‘old school’ route for now. You’ve got the spirit for experimentation, so it’s not a retreat but rather a well-thought pivot.Believe me, sometimes going the “old school” route is a great decision. I have many years of experience in mobile development at one of the best German android app entwicklung agentur I have resorted to old technologies many times. And it has often worked out well for me.I’m interested to observe your reflections. You seem to me a very talented developer and I believe you have a great future. Do you have a social media presence, or maybe a blog? I would love to follow what you are doing.",
"username": "aleksandr.sharshakov.99"
},
{
"code": "",
"text": "Thanks Alex. Yes, I have now built my Android app using the old Java SDK.No social media sorry. I’m actually relatively new to mobile apps. I used to be a back end developer. (I’m talking long ago - more than 2 decades ago!)",
"username": "polymath74"
}
] | Unresolved reference: RealmObject | 2022-12-16T06:55:39.352Z | Unresolved reference: RealmObject | 2,954 |
|
null | [
"queries",
"indexes"
] | [
{
"code": " \t\t\"executionStages\" : {\n \t\t\t\"stage\" : \"FETCH\",\n \t\t\t\"nReturned\" : 0,\n \t\t\t\"executionTimeMillisEstimate\" : 3344,\n \t\t\t\"works\" : 604601,\n \t\t\t\"advanced\" : 0,\n \t\t\t\"needTime\" : 604599,\n \t\t\t\"needYield\" : 0,\n \t\t\t\"saveState\" : 23619,\n \t\t\t\"restoreState\" : 23619,\n \t\t\t\"isEOF\" : 1,\n \t\t\t\"docsExamined\" : 0,\n \t\t\t\"alreadyHasObj\" : 0,\n \t\t\t\"inputStage\" : {\n \t\t\t\t\"stage\" : \"IXSCAN\",\n \t\t\t\t\"nReturned\" : 0,\n \t\t\t\t\"executionTimeMillisEstimate\" : 3294,\n \t\t\t\t\"works\" : 604600,\n \t\t\t\t\"advanced\" : 0,\n \t\t\t\t\"needTime\" : 604599,\n \t\t\t\t\"needYield\" : 0,\n \t\t\t\t\"saveState\" : 23619,\n \t\t\t\t\"restoreState\" : 23619,\n \t\t\t\t\"isEOF\" : 1,\n \t\t\t\t\"keyPattern\" : {\n \t\t\t\t\t\"ivi_purchase.state\" : 1,\n \t\t\t\t\t\"ivi_purchase.expires_at\" : -1,\n \t\t\t\t\t\"state\" : 1\n \t\t\t\t},\n \t\t\t\t\"indexName\" : \"ivi_purchase.state_1_ivi_purchase.expires_at_-1_state_1\",\n \t\t\t\t\"isMultiKey\" : false,\n \t\t\t\t\"multiKeyPaths\" : {\n \t\t\t\t\t\"ivi_purchase.state\" : [ ],\n \t\t\t\t\t\"ivi_purchase.expires_at\" : [ ],\n \t\t\t\t\t\"state\" : [ ]\n \t\t\t\t},\n \t\t\t\t\"isUnique\" : false,\n \t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n \t\t\t\t\"indexVersion\" : 2,\n \t\t\t\t\"direction\" : \"forward\",\n \t\t\t\t\"indexBounds\" : {\n \t\t\t\t\t\"ivi_purchase.state\" : [\n \t\t\t\t\t\t\"[2.0, 2.0]\"\n \t\t\t\t\t],\n \t\t\t\t\t\"ivi_purchase.expires_at\" : [\n \t\t\t\t\t\t\"(new Date(1691150495789), true)\"\n \t\t\t\t\t],\n \t\t\t\t\t\"state\" : [\n \t\t\t\t\t\t\"[0.0, 0.0]\"\n \t\t\t\t\t]\n \t\t\t\t},\n \t\t\t\t\"keysExamined\" : 604600,\n \t\t\t\t\"seeks\" : 604600,\n \t\t\t\t\"dupsTested\" : 0,\n \t\t\t\t\"dupsDropped\" : 0\n \t\t\t}\n \t\t}\n \t},\n \t\"serverInfo\" : {\n \t\t\"host\" : \"...\",\n \t\t\"port\" : 27017,\n \t\t\"version\" : \"4.2.18\",\n \t\t\"gitVersion\" : \"f65ce5e25c0b26a00d091a4d24eec1a8b3a4c016\"\n \t},\n frontend:PRIMARY> db.getProfilingStatus()\n {\n \t\"was\" : 0,\n \t\"slowms\" : 100,\n \t\"sampleRate\" : 1,\n \t\"$clusterTime\" : {\n \t\t\"clusterTime\" : Timestamp(1691148595, 2055),\n \t\t\"signature\" : {\n \t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \t\t\t\"keyId\" : NumberLong(0)\n \t\t}\n \t},\n \t\"operationTime\" : Timestamp(1691148595, 2055)\n }\n",
"text": "Hi. I have a query that takes 1 minute while everything happens with IXSCAN. No documents are taken from the disk, yet the query is very slow. Could you help me to understand why and how I can improve this?Just in case somebody is interested in this:",
"username": "Peter_Volkov"
},
{
"code": " \t\t\t\t\"keysExamined\" : 604600,",
"text": " \t\t\t\t\"keysExamined\" : 604600,600k keys? it will definitely be slow. Possible to reduce this number?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Oh, I see. I’ve changed index from {“ivi_purchase.state” : 1, “ivi_purchase.expires_at” : -1, state: 1} to {“ivi_purchase.state” : 1, state: 1, “ivi_purchase.expires_at” : -1} (order of keys) and now the query is blazingly fast. Thank you for pointing to right direction!",
"username": "Peter_Volkov"
},
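The reordering works because of the usual equality-sort-range guideline: with both equality predicates ahead of the range bound on expires_at, the index scan no longer has to seek across every key inside the range. For reference (the collection name is a placeholder, since it is not shown in the explain output):

```js
// equality fields first, the range/sort field last
db.purchases.createIndex(
  { "ivi_purchase.state": 1, state: 1, "ivi_purchase.expires_at": -1 }
);
```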
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb IXSCAN is very slow | 2023-08-04T11:44:18.762Z | Mongodb IXSCAN is very slow | 527 |
null | [
"queries"
] | [
{
"code": "",
"text": "I have a purchases table.I want to get all documents from users that have a source of “twitter”.So the user’s table has source.Getting all users that have source: “twitter” and then sending that list of userIds to find() on the purchases table does not seem very efficient.The only other option I can think of is to copy the user’s source field to all the purchase documents, which does not feel very elegant, so I’m sure there’s a better way.Thanks for any help!",
"username": "Lakoh_A"
},
{
"code": "",
"text": "I’m sure there’s a better way.Better or not, depends on what you want. You either store the source together or in a separate collection.If the user source never changes, why not store it in the purchase info?",
"username": "Kobe_W"
},
{
"code": "",
"text": "It seems like a waste - extra storage, since it’s already stored in the user table.I think I may be able to do what I need with a lookup aggregation?",
"username": "Lakoh_A"
},
{
"code": "",
"text": "Aggregation can work. It’s just slower.",
"username": "Kobe_W"
}
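A minimal sketch of the $lookup approach mentioned above; the userId and source field names are assumptions about the schema, and the trade-off is a join on every query instead of duplicating the source field on each purchase:

```js
db.purchases.aggregate([
  { $lookup: { from: "users", localField: "userId", foreignField: "_id", as: "user" } },
  { $match: { "user.source": "twitter" } }
]);
```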
] | Most efficient way to select all documents about users from a specific source? | 2023-08-04T11:53:06.733Z | Most efficient way to select all documents about users from a specific source? | 281 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi,I have a follow up question regarding my last topic (here) that was brilliantly answered by @John_Sewell.\nHow can I forward the ‘sold’ and ‘bought’ group so that they are available in the final result?",
"username": "Florian_Baumann"
},
{
"code": "db.getCollection(\"token_balances\").aggregate([\n{\n $match:{\n $or:[\n {\"token_bought_address\":\"0xBB\"},\n {\"token_sold_address\":\"0xBB\"}\n ]\n }\n},\n{\n $facet:{\n sold:[\n {\n $match:{\n \"token_sold_address\":\"0xBB\"\n }\n },\n {\n $group:{\n _id:'$maker',\n total:{$sum:{$multiply:[-1, '$token_sold_amount']}}\n }\n }\n ],\n bought:[\n {\n $match:{\n \"token_bought_address\":\"0xBB\"\n }\n },\n {\n $group:{\n _id:'$taker',\n total:{$sum:'$token_bought_amount'}\n }\n }\n ],\n }\n},\n{\n $project:{\n allItem:{\n $setUnion:[\"$sold\",\"$bought\"]\n },\n soldData:\"$sold\",\n boughtData:\"$bought\"\n }\n},\n{\n $unwind:\"$allItem\"\n},\n{\n $group:{\n _id:'$allItem._id',\n total:{$sum:\"$allItem.total\"},\n soldData:{$first:'$soldData'},\n boughtData:{$first:'$boughtData'}\n }\n},\n])\n",
"text": "You’ll want to carry the down down when it’s all grouped up, something like this…I just threw this together but it should give you can idea:",
"username": "John_Sewell"
},
{
"code": "await mongoose.model('Trades').aggregate([\n\t\t{\n\t\t\t$match: {\n\t\t\t\t$or: [\n\t\t\t\t\t{ 'tokenOut.address': address },\n\t\t\t\t\t{ 'tokenIn.address': address }\n\t\t\t\t]\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t$facet: {\n\t\t\t\tsold: [\n\t\t\t\t\t{\n\t\t\t\t\t\t$match: {\n\t\t\t\t\t\t\t'tokenIn.address': address\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t$group: {\n\t\t\t\t\t\t\t_id: '$taker',\n\t\t\t\t\t\t\ttotal: { $sum: { $multiply: [-1, '$amountIn'] } }\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t\tbought: [\n\t\t\t\t\t{\n\t\t\t\t\t\t$match: {\n\t\t\t\t\t\t\t'tokenOut.address': address\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t$group: {\n\t\t\t\t\t\t\t_id: '$taker',\n\t\t\t\t\t\t\ttotal: { $sum: '$amountOut' }\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t],\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t$project: {\n\t\t\t\tallItem: {\n\t\t\t\t\t$setUnion: ['$sold', '$bought']\n\t\t\t\t},\n\t\t\t\tsold: '$sold',\n\t\t\t\tbought: '$bought'\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t$unwind: '$allItem'\n\t\t},\n\t\t{\n\t\t\t$replaceRoot: {\n\t\t\t\tnewRoot: '$allItem'\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t$group: {\n\t\t\t\t_id: '$_id',\n\t\t\t\ttotal: { $sum: '$total' },\n\t\t\t\tsold: { $first: '$sold' },\n \t\tbought: { $first: '$bought' }\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t$project: {\n\t\t\t\t_id: 0,\n\t\t\t\twallet: '$_id',\n\t\t\t\tamount: '$total',\n\t\t\t\tsold: '$sold',\n\t\t\t\tbought: '$bought'\n\t\t\t}\n\t\t}\n\t]);\n[\n {\n \"wallet\":\"0xb5c86bbda44ece35d2dc8824050a2b217c45a3a4\",\n \"amount\":207099169.0671463,\n \"sold\":null,\n \"bought\":null\n },\n {\n \"wallet\":\"0xcf53addc53cce46de839c9c05c05466a8d2249d9\",\n \"amount\":4811109993.772441,\n \"sold\":null,\n \"bought\":null\n },\n {\n \"wallet\":\"0xe9ed3ad8e68b3925a33cab867a29c73e8357cfc4\",\n \"amount\":198403983902.09695,\n \"sold\":null,\n \"bought\":null\n }\n]\n",
"text": "Thank you John, but unfortunately it’s not workingThe result looks like thisThie fields ‘sold’ and ‘bought’ are always empty",
"username": "Florian_Baumann"
},
{
"code": "",
"text": "The replace root is killing the data that’s not in the child element allItem, you need to not use that, if you note in the above code I left out this stage and referred to the elements under that element instead.",
"username": "John_Sewell"
},
{
"code": "",
"text": "When debugging this kind of thing I REALLY recommend using something like Studio3T in the script window and you can quickly comment out stages of the aggregation pipeline to see where data is and when it disappears.\nThis goes for debugging things like this as well as performance, where you can work out which stage is causing performance issues (as well as using .explain()!)",
"username": "John_Sewell"
},
{
"code": "sold: { $first: '$sold' },\nbought: { $first: '$bought' }\n",
"text": "Thanks, I will have a look into Studio3T.\nI change the code like you recommended, but now the query seems to hang forever and never returns.\nAs soon as I comment outIt works perfectly like before. Any idea?",
"username": "Florian_Baumann"
},
{
"code": "",
"text": "No, that’s a bit weird, can you paste the exact query your using, how much data is flowing down into that stage, i.e. if if you comment out from the group down, how much data is in there?You could try running an explain() before and after that change to see the difference in the execution plans to pick out anything of interest.",
"username": "John_Sewell"
},
{
"code": "{\n _id: new ObjectId(\"64c6cd10dfdcef2a02effe98\"),\n eventIndex: 147,\n hash: '0xa29142b106b96b68e480d903af13b4b6523b17e3fafb855a14bb8d99e13c3496',\n amountIn: 0.056138649184,\n amountOut: 56335975.66790488,\n blockNumber: 17261504,\n blockTime: 1684108799000,\n chain: { id: 1, name: 'ethereum' },\n contract: '0xa43fe16908251ee70ef74718545e4fe6c5ccec9f',\n maker: '0x3001f6f2187d875a1bc24b10fe9616ebcaf4fb45',\n project: 'uniswap',\n taker: '0x3001f6f2187d875a1bc24b10fe9616ebcaf4fb45',\n tokenIn: {\n address: '0xc02aaa39b223fe8d0a0e5c4f27ead9083c756cc2',\n symbol: 'WETH',\n decimals: 18\n },\n tokenOut: {\n address: '0x6982508145454ce325ddbe47a25d4ec3d2311933',\n symbol: 'PEPE',\n decimals: 18\n },\n version: 2\n}\n\n[\n {\n $match:{\n $or:[\n {\n 'tokenOut.address': '0x6982508145454ce325ddbe47a25d4ec3d2311933'\n },\n {\n 'tokenIn.address': '0x6982508145454ce325ddbe47a25d4ec3d2311933'\n }\n ]\n }\n },\n {\n $facet:{\n sold:[\n {\n $match:{\n 'tokenIn.address': '0x6982508145454ce325ddbe47a25d4ec3d2311933'\n }\n },\n {\n $group:{\n _id:'$taker',\n total:{\n $sum:{\n $multiply:[\n -1,\n '$amountIn'\n ]\n }\n }\n }\n }\n ],\n bought:[\n {\n $match:{\n 'tokenOut.address': '0x6982508145454ce325ddbe47a25d4ec3d2311933'\n }\n },\n {\n $group:{\n _id:'$taker',\n total:{\n $sum:'$amountOut'\n }\n }\n }\n ],\n \n }\n },\n {\n $project:{\n allItem:{\n $setUnion:[\n '$sold',\n '$bought'\n ]\n },\n sold:'$sold',\n bought:'$bought'\n }\n },\n {\n $unwind:'$allItem'\n },\n {\n $group:{\n _id:'$allItem._id',\n total:{\n $sum:'$allItem.total'\n },\n sold:{\n $first:'$sold'\n },\n bought:{\n $first:'$bought'\n }\n }\n },\n {\n $project:{\n _id:0,\n wallet:'$_id',\n amount:'$total',\n sold:'$sold',\n bought:'$bought'\n }\n }\n]\n{\n \"explainVersion\" : \"1\",\n \"stages\" : [\n {\n \"$cursor\" : {\n \"queryPlanner\" : {\n \"namespace\" : \"dex_trades.trades\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$or\" : [\n {\n \"tokenIn.address\" : {\n \"$eq\" : \"0x6982508145454ce325ddbe47a25d4ec3d2311933\"\n }\n },\n {\n \"tokenOut.address\" : {\n \"$eq\" : \"0x6982508145454ce325ddbe47a25d4ec3d2311933\"\n }\n }\n ]\n },\n \"queryHash\" : \"D2428EAE\",\n \"planCacheKey\" : \"D9A6C586\",\n \"maxIndexedOrSolutionsReached\" : false,\n \"maxIndexedAndSolutionsReached\" : false,\n \"maxScansToExplodeReached\" : false,\n \"winningPlan\" : {\n \"stage\" : \"SUBPLAN\",\n \"inputStage\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"transformBy\" : {\n \"amountIn\" : NumberInt(1),\n \"amountOut\" : NumberInt(1),\n \"taker\" : NumberInt(1),\n \"tokenIn.address\" : NumberInt(1),\n \"tokenOut.address\" : NumberInt(1),\n \"_id\" : NumberInt(0)\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"OR\",\n \"inputStages\" : [\n {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"tokenIn.address\" : NumberInt(1)\n },\n \"indexName\" : \"tokenIn.address_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"tokenIn.address\" : [\n\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : NumberInt(2),\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"tokenIn.address\" : [\n \"[\\\"0x6982508145454ce325ddbe47a25d4ec3d2311933\\\", \\\"0x6982508145454ce325ddbe47a25d4ec3d2311933\\\"]\"\n ]\n }\n },\n {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"tokenOut.address\" : NumberInt(1)\n },\n \"indexName\" : \"tokenOut.address_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"tokenOut.address\" : [\n\n ]\n },\n 
\"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : NumberInt(2),\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"tokenOut.address\" : [\n \"[\\\"0x6982508145454ce325ddbe47a25d4ec3d2311933\\\", \\\"0x6982508145454ce325ddbe47a25d4ec3d2311933\\\"]\"\n ]\n }\n }\n ]\n }\n }\n }\n },\n \"rejectedPlans\" : [\n\n ]\n }\n }\n },\n {\n \"$facet\" : {\n \"sold\" : [\n {\n \"$internalFacetTeeConsumer\" : {\n\n }\n },\n {\n \"$match\" : {\n \"tokenIn.address\" : {\n \"$eq\" : \"0x6982508145454ce325ddbe47a25d4ec3d2311933\"\n }\n }\n },\n {\n \"$group\" : {\n \"_id\" : \"$taker\",\n \"total\" : {\n \"$sum\" : {\n \"$multiply\" : [\n \"$amountIn\",\n {\n \"$const\" : NumberInt(-1)\n }\n ]\n }\n }\n }\n }\n ],\n \"bought\" : [\n {\n \"$internalFacetTeeConsumer\" : {\n\n }\n },\n {\n \"$match\" : {\n \"tokenOut.address\" : {\n \"$eq\" : \"0x6982508145454ce325ddbe47a25d4ec3d2311933\"\n }\n }\n },\n {\n \"$group\" : {\n \"_id\" : \"$taker\",\n \"total\" : {\n \"$sum\" : \"$amountOut\"\n }\n }\n }\n ]\n }\n },\n {\n \"$project\" : {\n \"_id\" : true,\n \"allItem\" : {\n \"$setUnion\" : [\n \"$sold\",\n \"$bought\"\n ]\n },\n \"sold\" : \"$sold\",\n \"bought\" : \"$bought\"\n }\n },\n {\n \"$unwind\" : {\n \"path\" : \"$allItem\"\n }\n },\n {\n \"$group\" : {\n \"_id\" : \"$allItem._id\",\n \"total\" : {\n \"$sum\" : \"$allItem.total\"\n }\n }\n },\n {\n \"$project\" : {\n \"wallet\" : \"$_id\",\n \"amount\" : \"$total\",\n \"sold\" : \"$sold\",\n \"bought\" : \"$bought\",\n \"_id\" : false\n }\n }\n ],\n \"serverInfo\" : {\n \"host\" : \"DeFiHub\",\n \"port\" : NumberInt(27017),\n \"version\" : \"6.0.8\",\n \"gitVersion\" : \"3d84c0dd4e5d99be0d69003652313e7eaf4cdd74\"\n },\n \"serverParameters\" : {\n \"internalQueryFacetBufferSizeBytes\" : NumberInt(104857600),\n \"internalQueryFacetMaxOutputDocSizeBytes\" : NumberInt(104857600),\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\" : NumberInt(104857600),\n \"internalDocumentSourceGroupMaxMemoryBytes\" : NumberInt(104857600),\n \"internalQueryMaxBlockingSortMemoryUsageBytes\" : NumberInt(104857600),\n \"internalQueryProhibitBlockingMergeOnMongoS\" : NumberInt(0),\n \"internalQueryMaxAddToSetBytes\" : NumberInt(104857600),\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : NumberInt(104857600)\n },\n \"command\" : {\n \"aggregate\" : \"trades\",\n \"pipeline\" : [\n {\n \"$match\" : {\n \"$or\" : [\n {\n \"tokenOut.address\" : \"0x6982508145454ce325ddbe47a25d4ec3d2311933\"\n },\n {\n \"tokenIn.address\" : \"0x6982508145454ce325ddbe47a25d4ec3d2311933\"\n }\n ]\n }\n },\n {\n \"$facet\" : {\n \"sold\" : [\n {\n \"$match\" : {\n \"tokenIn.address\" : \"0x6982508145454ce325ddbe47a25d4ec3d2311933\"\n }\n },\n {\n \"$group\" : {\n \"_id\" : \"$taker\",\n \"total\" : {\n \"$sum\" : {\n \"$multiply\" : [\n NumberInt(-1),\n \"$amountIn\"\n ]\n }\n }\n }\n }\n ],\n \"bought\" : [\n {\n \"$match\" : {\n \"tokenOut.address\" : \"0x6982508145454ce325ddbe47a25d4ec3d2311933\"\n }\n },\n {\n \"$group\" : {\n \"_id\" : \"$taker\",\n \"total\" : {\n \"$sum\" : \"$amountOut\"\n }\n }\n }\n ]\n }\n },\n {\n \"$project\" : {\n \"allItem\" : {\n \"$setUnion\" : [\n \"$sold\",\n \"$bought\"\n ]\n },\n \"sold\" : \"$sold\",\n \"bought\" : \"$bought\"\n }\n },\n {\n \"$unwind\" : \"$allItem\"\n },\n {\n \"$group\" : {\n \"_id\" : \"$allItem._id\",\n \"total\" : {\n \"$sum\" : \"$allItem.total\"\n }\n }\n },\n {\n \"$project\" : {\n \"_id\" : NumberInt(0),\n \"wallet\" : \"$_id\",\n \"amount\" : \"$total\",\n \"sold\" : 
\"$sold\",\n \"bought\" : \"$bought\"\n }\n }\n ],\n \"allowDiskUse\" : true,\n \"maxTimeMS\" : NumberLong(0),\n \"cursor\" : {\n\n },\n \"$db\" : \"dex_trades\"\n },\n \"ok\" : 1.0\n}\n",
"text": "The aggregation has to process about 250.000 data entries.\nI can only run explain() before the changes. Afterwards it never returns and even crashes mongod.\nFurthermore I have to activate the “allowDiskUse” option, otherwise the aggregation won’t run.data examplequeryexplain (before changes)\nStudio_3T_explain1661×707 87.7 KB\n",
"username": "Florian_Baumann"
},
{
"code": "",
"text": "Nobody has an idea what could be wrong?",
"username": "Florian_Baumann"
},
{
"code": "",
"text": "Do you have an anonymised sample dataset I can play with? That represents what the data looks like?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Sure. It’s not allowed to upload zip files therefore I used WeTransfer1 file sent via WeTransfer, the simplest way to send your files around the worldThe dataset contains about 10.000 items",
"username": "Florian_Baumann"
},
{
"code": "",
"text": "Sorry Florian, I’ve been tied up with work the last few days, I did have a play with the data yesterday but didnt get far.There is a fair amount of data flowing through, one thought I had was if you need to get this data at the same time as the breakdown? If it’s for a drilldown then perhaps call a different query for the drilldown to show details as opposed to getting this for all records on every call?",
"username": "John_Sewell"
},
{
"code": "[\n\t\t{\n\t\t\t$match: {\n\t\t\t\t'tokenOut.address': '0x6982508145454ce325ddbe47a25d4ec3d2311933'\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t$project: {\n\t\t\t\t_id: 0,\n\t\t\t\ttaker: 1,\n\t\t\t\tbought: '$amountIn'\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t$unionWith: {\n\t\t\t\tcoll: 'trades',\n\t\t\t\tpipeline: [\n\t\t\t\t\t{\n\t\t\t\t\t\t$match: {\n\t\t\t\t\t\t\t'tokenIn.address': '0x6982508145454ce325ddbe47a25d4ec3d2311933'\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t$project: {\n\t\t\t\t\t\t\t_id: 0,\n\t\t\t\t\t\t\ttaker: 1,\n\t\t\t\t\t\t\tsold: '$amountOut'\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t]\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t$group: {\n\t\t\t\t_id: '$taker',\n\t\t\t\tbought: { $sum: '$bought' },\n\t\t\t\tsold: { $sum: '$sold' }\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\t$sort: {\n\t\t\t\tbought: -1\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t$limit: 100\n\t\t},\n\t\t{\n\t\t\t$project: {\n\t\t\t\t_id: 0,\n\t\t\t\twallet: '$_id',\n\t\t\t\tbought: 1,\n\t\t\t\tsold: 1,\n\t\t\t\troi: {\n\t\t\t\t\t$round: [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t$multiply: [\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t$divide: [100, '$bought'],\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\t$subtract: ['$sold', '$bought']\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t]\n\t\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t\t\t\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t]\n",
"text": "It’s working now. I reworked the aggregation pipeline",
"username": "Florian_Baumann"
},
{
"code": "",
"text": "Excellent, glad you got it working!",
"username": "John_Sewell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Aggregation follow up | 2023-07-28T17:57:20.424Z | Aggregation follow up | 686 |
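A note on supporting indexes for the reworked pipeline above: the main pipeline and the $unionWith branch each start with their own equality $match, so each branch can be served by a plain single-field index. A minimal mongosh sketch, assuming the collection name `trades` and the field names used in the thread:

```js
// Supports the $match in the main pipeline.
db.trades.createIndex({ "tokenOut.address": 1 });
// Supports the $match inside the $unionWith sub-pipeline.
db.trades.createIndex({ "tokenIn.address": 1 });
```

The $group, $sort and $limit stages still have to process every matching trade, so the indexes mainly remove the collection scans rather than the grouping cost.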
null | [
"queries",
"dot-net"
] | [
{
"code": "public async Task<List<Bird>> GetBirdByName(string[] birdsName)\n {\n try\n {\n birdsName = birdsName.Select(x => x.ToLower()).ToArray();\n FilterDefinition<Bird> filter = Builders<Bird>.Filter.In(r => r.BirdName, birdsName);\n List<Bird> result = await _collection.Find(filter).ToListAsync();\n return result;\n }\n catch (Exception ex)\n {\n await _logger.ExceptionLogAsync(\"BirdRepository.GetBirdByName\", ex).ConfigureAwait(false);\n }\n return null;\n}\n",
"text": "I have a field say birds name and I want to search all the birds which contain words.I am not sure if this use the search index or not but I have created a search index on BirdName field and want to use it if it is not using it. Also how to check if my search index is being used or not?",
"username": "Akshay_Katoch"
},
{
"code": "",
"text": "I have found that we can use OR to search multiple inputs so if join the array with “OR” then will it cause any performance issue? Or I should not change the code",
"username": "Akshay_Katoch"
}
] | Search index as $in in c# | 2023-08-04T14:15:32.686Z | Search index as $in in c# | 387 |
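A hedged follow-up on the two questions above: a regular find()/Filter.In query is served by ordinary B-tree indexes and never by an Atlas Search index; Atlas Search indexes are only consulted by the $search / $searchMeta aggregation stages, and what a find() actually uses can be checked with explain(). To match any one of several names through the search index, the $search text operator accepts an array of query strings, which behaves like an OR across the terms. In mongosh that could look like the sketch below (the index and collection names are assumptions; recent C# driver versions expose the same stage through their search builders, but check the driver documentation for the exact API):

```js
db.birds.aggregate([
  {
    $search: {
      index: "default",                // assumed Atlas Search index name
      text: {
        query: ["eagle", "sparrow"],   // matches documents containing any of these terms
        path: "BirdName"
      }
    }
  },
  { $limit: 100 }
]);
```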
null | [
"queries",
"python",
"spark-connector"
] | [
{
"code": "",
"text": "Hi,\nWe have a use case to perform upsert operation for 20 millions records from Pyspark to Mongo Collection.\nHowever it takes more than an hour (1 hr 5 mins or sometimes even more) just for 100K records…\nApproaches that I tried:while writing to mongo, I tried with multiple batch sizes but no luck\ndf.write.format(‘com.mongodb.spark.sql.DefaultSource’).mode(‘append’) \n.option(‘uri’, connection_uri) \n.option(‘database’, database) \n.option(‘collection’, collection) \n.option(‘maxBatchSize’, 1000) \n.option(‘replaceDocument’, False) \n.option(‘shardkey’, shard_key) \n.option(‘multi’, True) \n.save()Splitting dataframe and calling the above df.write operation:\ndef upsert_operation_into_mongo(df3, connection_uri, mode, database, collection, shard_key):\ndf3.write.format(‘com.mongodb.spark.sql.DefaultSource’).mode(‘append’) \n.option(‘uri’, connection_uri) \n.option(‘database’, database) \n.option(‘collection’, collection) \n.option(‘replaceDocument’, False) \n.option(‘shardkey’, shard_key) \n.option(‘multi’, True) \n.save()\nreturn ‘Successfully Written Data to Mongo DB Collection’each_len = 3000\ncopy_df = df\ni = 0\nwhile i < df.count():\nj = copy_df.count()\nif each_len < j:\ni += each_len\ntemp_df = copy_df.limit(each_len)\ncopy_df = copy_df.subtract(temp_df)\nmsg = upsert_operation_into_mongo(temp_df, connection_uri, mode, database, collection, shard_key)\nprint(msg)\nelse:\ni += j\ntemp_df = copy_df.limit(j)\ncopy_df = copy_df.subtract(temp_df)\nmsg = upsert_operation_into_mongo(temp_df, connection_uri, mode, database, collection, shard_key)\nprint(msg)I understand Mongo-Spark Connector internally uses BulkWrite.\nIs there any efficient way by which we can increase the speed for Upsert operation ?On the other hand, overwrite operation hardly takes 2 mins for the same number of records.Thanks,\nSarvesh",
"username": "Sarvesh_Dubey"
},
{
"code": "",
"text": "Note: It is known that it takes time during BulkWrite operation only",
"username": "Sarvesh_Dubey"
},
{
"code": "",
"text": "Facing the same performance issue with UPSERT. Have around 150k records.\nTried adding index into collection, got improvement on low amounts like 1-2k, but for more, it is still not acceptable.— without index —\n10k : 47 sec - 212/s\n20k : 2.3 min - 144/s\n50k : 12.1 min - 68/sWhich makes me think it is not doing any batches during UPSERT\n@Sarvesh_Dubey were able to resolve this?— with index —\n10k : 13 sec\n20k : 22 sec\n50k : 34 sec\n140k : 40 sec",
"username": "Dmytro_Sokhach_XE050993933"
}
] | Upsert Operation using Mongo-Spark Connector takes very long | 2022-08-05T16:22:18.856Z | Upsert Operation using Mongo-Spark Connector takes very long | 3,840 |
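One detail worth spelling out from this thread: every upsert in the bulk write first runs a query using the filter built from the configured shard key / _id fields, so without an index on those fields each upsert degenerates into a collection scan, which is consistent with the timings reported above and with the large improvement once an index was added. A minimal mongosh sketch, with a hypothetical field name standing in for whatever is passed as the `shardkey` option:

```js
// Replace `customerId` with the actual field(s) used to match documents for the upsert.
db.target_collection.createIndex({ customerId: 1 });
```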
null | [
"node-js",
"compass"
] | [
{
"code": "",
"text": "I created database in mongodb cloud and then try to access through mongodb compass and its connected but unable to see databse.",
"username": "vishal_lambe"
},
{
"code": "MongoDB CloudMongoDB AtlasDatabase AccessSecurity",
"text": "Hello @vishal_lambe ,Welcome to The MongoDB Community Forums! I created database in mongodb cloudPlease correct me if I am wrong but I believe by MongoDB Cloud you mean MongoDB Atlas.try to access through mongodb compass and its connected but unable to see databse.Please make sure that the user you are using while connecting MongoDB Compass to your Atlas cluster has relevant role.You can check the users in your Atlas cluster by clicking on Database Access under Security which is available on the left side of the Atlas UI.For more information, please check below resources.I hope this helps with your issue, in case you face any more issues or have any queries, feel free to post a new thread. In case you are still not able to see your data in Compass, kindly share below details.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Issue Resolved.",
"username": "vishal_lambe"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to register data in mongodb and unable to connect to mongodb compass | 2023-08-04T07:30:16.263Z | Unable to register data in mongodb and unable to connect to mongodb compass | 388 |
null | [
"node-js"
] | [
{
"code": "${optionWord} ${Array.from(unsupportedOptions).join(', ')} ${isOrAre} not supported",
"text": "These errors pop-up in the terminal when connecting mongoDB to the backendthrow new error_1.MongoParseError(${optionWord} ${Array.from(unsupportedOptions).join(', ')} ${isOrAre} not supported);\n^\nMongoParseError: option usenewparser is not supported\nat parseOptions (C:\\Users\\sangw\\OneDrive\\Documents\\GitHub\\course-selling-website\\node_modules\\mongodb\\lib\\connection_string.js:272:15)\nat new MongoClient (C:\\Users\\sangw\\OneDrive\\Documents\\GitHub\\course-selling-website\\node_modules\\mongodb\\lib\\mongo_client.js:48:63)\nat NativeConnection.createClient (C:\\Users\\sangw\\OneDrive\\Documents\\GitHub\\course-selling-website\\node_modules\\mongoose\\lib\\drivers\\node-mongodb-native\\connection.js:288:14)\nat NativeConnection.openUri (C:\\Users\\sangw\\OneDrive\\Documents\\GitHub\\course-selling-website\\node_modules\\mongoose\\lib\\connection.js:738:34)\nat Mongoose.connect (C:\\Users\\sangw\\OneDrive\\Documents\\GitHub\\course-selling-website\\node_modules\\mongoose\\lib\\index.js:404:15)\nat file:///C:/Users/sangw/OneDrive/Documents/GitHub/course-selling-website/server/index.js:109:10\nat ModuleJob.run (node:internal/modules/esm/module_job:194:25) {\n[Symbol(errorLabels)]: Set(0) {}\n}",
"username": "Abhishek_Sangwan"
},
{
"code": "",
"text": "Hello @Abhishek_Sangwan ,Welcome to The MongoDB Community Forums! Can you share more information for me to better understand the error you are encountering. I suspect you may have an error in your connection string or are using an older version of Mongoose that does not support the options you are trying to set.Please provide:A snippet of code showing how you are creating the connection including your MongoDB connection string with any password or hostname details redactedVersions of Mongoose and MongoDB Node.js driver being usedRegards,\nTarun",
"username": "Tarun_Gaur"
}
] | Please help me in removing these errors: | 2023-08-01T07:37:04.598Z | Please help me in removing these errors: | 432 |
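For context on the error above: the Node.js driver 4.x validates connection options and rejects ones it does not recognise, and `usenewparser` is not a valid option name (the old flag was `useNewUrlParser`, which is no longer needed with Mongoose 6+ / driver 4+ anyway). The usual fix is simply to drop the legacy flags and pass only the URI. A hedged sketch with a placeholder connection string:

```js
const mongoose = require('mongoose');

// Driver 4.x throws MongoParseError for unknown options such as `useNewParser`,
// so connect with just the URI (plus any options you actually need).
mongoose
  .connect('mongodb+srv://user:<password>@cluster0.example.mongodb.net/courses')
  .then(() => console.log('connected'))
  .catch((err) => console.error('connection error', err));
```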
null | [
"python"
] | [
{
"code": "# Importing the required libraries\nimport pymongo\n# Connect to local MongoDB server\nclient = pymongo.MongoClient('mongodb://localhost:27017/')\n# Client\nclient\nclient.list_database_names()\n\n---------------------------------------------------------------------------\nServerSelectionTimeoutError Traceback (most recent call last)\n<ipython-input-6-62f658703d98> in <module>\n----> 1 client.list_database_names()\n\n~/.local/lib/python3.6/site-packages/pymongo/mongo_client.py in list_database_names(self, session, comment)\n 1784 .. versionadded:: 3.6\n 1785 \"\"\"\n-> 1786 return [doc[\"name\"] for doc in self.list_databases(session, nameOnly=True, comment=comment)]\n 1787 \n 1788 def drop_database(\n\n~/.local/lib/python3.6/site-packages/pymongo/mongo_client.py in list_databases(self, session, comment, **kwargs)\n 1757 cmd[\"comment\"] = comment\n 1758 admin = self._database_default_options(\"admin\")\n-> 1759 res = admin._retryable_read_command(cmd, session=session)\n 1760 # listDatabases doesn't return a cursor (yet). Fake one.\n 1761 cursor = {\n\n~/.local/lib/python3.6/site-packages/pymongo/database.py in _retryable_read_command(self, command, value, check, allowable_errors, read_preference, codec_options, session, **kwargs)\n 763 )\n 764 \n--> 765 return self.__client._retryable_read(_cmd, read_preference, session)\n 766 \n 767 def _list_collections(self, sock_info, session, read_preference, **kwargs):\n\n~/.local/lib/python3.6/site-packages/pymongo/mongo_client.py in _retryable_read(self, func, read_pref, session, address, retryable)\n 1362 while True:\n 1363 try:\n-> 1364 server = self._select_server(read_pref, session, address=address)\n 1365 with self._socket_from_server(read_pref, server, session) as (sock_info, read_pref):\n 1366 if retrying and not retryable:\n\n~/.local/lib/python3.6/site-packages/pymongo/mongo_client.py in _select_server(self, server_selector, session, address)\n 1194 raise AutoReconnect(\"server %s:%d no longer available\" % address)\n 1195 else:\n-> 1196 server = topology.select_server(server_selector)\n 1197 return server\n 1198 except PyMongoError as exc:\n\n~/.local/lib/python3.6/site-packages/pymongo/topology.py in select_server(self, selector, server_selection_timeout, address)\n 249 def select_server(self, selector, server_selection_timeout=None, address=None):\n 250 \"\"\"Like select_servers, but choose a random server if several match.\"\"\"\n--> 251 servers = self.select_servers(selector, server_selection_timeout, address)\n 252 if len(servers) == 1:\n 253 return servers[0]\n\n~/.local/lib/python3.6/site-packages/pymongo/topology.py in select_servers(self, selector, server_selection_timeout, address)\n 210 \n 211 with self._lock:\n--> 212 server_descriptions = self._select_servers_loop(selector, server_timeout, address)\n 213 \n 214 return [self.get_server_by_address(sd.address) for sd in server_descriptions]\n\n~/.local/lib/python3.6/site-packages/pymongo/topology.py in _select_servers_loop(self, selector, timeout, address)\n 227 raise ServerSelectionTimeoutError(\n 228 \"%s, Timeout: %ss, Topology Description: %r\"\n--> 229 % (self._error_message(selector), timeout, self.description)\n 230 )\n 231 \n\nServerSelectionTimeoutError: localhost:27017: [Errno 111] Connection refused, Timeout: 30s, Topology Description: <TopologyDescription id: 64cb7b71341a70310f04d71c, topology_type: Unknown, servers: [<ServerDescription ('localhost', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:27017: [Errno 111] Connection 
refused',)>]>\n",
"text": "i have been trying to connect and work using python. But when i try listing databases it throws an error.here is the codeafter this i get an error",
"username": "Dhananjay_Patil2"
},
{
"code": "",
"text": "Either your mongod configuration has errors or your request URI is not in compliance with the current configuration. Please consult https://www.mongodb.com/docs/manual/reference/configuration-options/ with regard to your configuration.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "thanks for the response.Can you specify which configuration setting to look for. I’m a beginner and dont have much experience",
"username": "Dhananjay_Patil2"
},
{
"code": "",
"text": "",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "the authentication is disabled and interfaces enabled are set to:\nnet:\nport:27017\nbindip:0.0.0.0",
"username": "Dhananjay_Patil2"
},
{
"code": "",
"text": "It sounds like perhaps the MongoDB server mongod is not actually running?",
"username": "Jack_Woehr"
}
] | Connection error during pymongo in jupyter to locla mongodb server | 2023-08-03T10:12:12.256Z | Connection error during pymongo in jupyter to locla mongodb server | 594 |
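A small addition to the troubleshooting above: “[Errno 111] Connection refused” means nothing accepted the TCP connection on localhost:27017, so the first thing to verify is that mongod is actually running and listening on that address and port. If mongosh can connect to the same address, the server is up; once connected, a trivial check is:

```js
// Returns { ok: 1 } when the server is reachable and responding.
db.runCommand({ ping: 1 });
```

If mongosh itself cannot connect, the fix is on the server side (start the service, or correct the bind address/port), not in the PyMongo code.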
null | [
"aggregation",
"indexes"
] | [
{
"code": "$match[{\n \"$match\": {\n \"__STATE__\": { \"$eq\": \"PUBLIC\" },\n \"countryCode\": {\n \"$exists\": true,\n \"$ne\": null\n }\n }\n },\n{\n \"$project\": {\n \"countryCode\": 1,\n \"createdAt\": 1,\n \"success\": 1\n }\n },\n//optional\n{\n \"$match\": {\n \"success\" : true\n }\n },\n//optional\n{\n \"$match\": {\n {\n \"$expr\": {\n \"$and\": [{\n \"$lte\": [\n \"$createdAt\",\n {\n \"$dateFromString\": {\n \"dateString\": \"#createdAtTo#\"\n }\n }\n ]\n }, {\n \"$gte\": [\n \"$createdAt\",\n {\n \"$dateFromString\": {\n \"dateString\": \"#createdAtFrom#\"\n }\n }\n ]\n }]\n }\n },\n]\n",
"text": "I’m trying to understand how the aggregation pipeline optimization works and what is the best index (or maybe more than one) to satisfy the following aggregation that have some optional stage with $match, based on user filter from UI, so it’s like to have 4 four different queries to satisfy.I’ve tried with some compound indexes and testing them with explain but I’m not getting good results also on basic case (only first $match), maybe I have to rewrite some lines?",
"username": "Francesco_Fiorentino"
},
{
"code": "{ \"__STATE__\" : 1 , \"countryCode\": 1 , \"success\": 1 , \"createdAt\": 1}\n{\n \"$project\": {\n \"_id\" : 0,\n \"countryCode\": 1,\n \"createdAt\": 1,\n \"success\": 1\n }\n }\n",
"text": "Hi @Francesco_Fiorentino,The way to optimize aggregation pipelines is to try and minimize the amount of stags while pushing as much filtering as possible to the first stage. So if your optional stages can be added to the first stage this will be really helpful.In general indexing should have the order of Equality Range Sort when it comes to compound index placements.In your case I suggest to do an index on:And changed the projectiion to:Read this: Performance Best Practices: Indexing | MongoDB BlogThanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "The courses M121 and M201 from https://university.mongodb.com are also very good resources in this regard.",
"username": "steevej"
},
{
"code": "$exprcreatedAtFromISODate(){__STATE__:1, countryCode:1, ... }\"$exists\": true,\"$ne\": null",
"text": "You can run explain and see how the pipeline gets transformed. Two things you can see from that:What we see is that you are using $expr for some reason and it’s NOT as efficient as regular match expressions. If the format of createdAtFrom date string is “normal” then it can just be passed to ISODate() constructor as right hand side of the comparison.Now, best index will always start with {__STATE__:1, countryCode:1, ... } since it seems like those are always filtered on, but the order of the other three fields depend on which filtering is more likely (and equality being ahead of range comparisons).Asya\nP.S. you don’t need \"$exists\": true, since that’s a strict subset of \"$ne\": null for countryCode in your query.",
"username": "Asya_Kamsky"
},
{
"code": "explain{__STATE__:1, countryCode:1, ... }{__STATE__:1, createdAt:-1}",
"text": "Thanks all for your suggests and sorry for my late response. I have already seen the suggested courses and tried to have more confidence with explain.\nThere are a couple of things not clear to me with this case:",
"username": "Francesco_Fiorentino"
},
{
"code": "explain(\"executionStats\")",
"text": "As always, please run explain(\"executionStats\") on the full aggregation and provide the output here - without seeing what the time is being spent on we would be guessing where the improvements could be best made.Asya\nP.S. if you are on 4.4 or later then full explain will show how much time is being spent in each stage of aggregation.",
"username": "Asya_Kamsky"
},
{
"code": "{\n\"stages\" : [ \n {\n \"$cursor\" : {\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"labid.outcomes\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"__STATE__\" : {\n \"$eq\" : \"PUBLIC\"\n }\n }, \n {\n \"countryCode\" : {\n \"$not\" : {\n \"$eq\" : null\n }\n }\n }\n ]\n },\n \"queryHash\" : \"850416C8\",\n \"planCacheKey\" : \"58935625\",\n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_SIMPLE\",\n \"transformBy\" : {\n \"success\" : true,\n \"createdAt\" : true,\n \"countryCode\" : true,\n \"_id\" : false\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"countryCode\" : {\n \"$not\" : {\n \"$eq\" : null\n }\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"__STATE__\" : 1,\n \"createdAt\" : -1\n },\n \"indexName\" : \"state_createdAt\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"__STATE__\" : [],\n \"createdAt\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"__STATE__\" : [ \n \"[\\\"PUBLIC\\\", \\\"PUBLIC\\\"]\"\n ],\n \"createdAt\" : [ \n \"[MaxKey, MinKey]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\" : [ \n {\n \"stage\" : \"PROJECTION_SIMPLE\",\n \"transformBy\" : {\n \"success\" : true,\n \"createdAt\" : true,\n \"countryCode\" : true,\n \"_id\" : false\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"countryCode\" : {\n \"$not\" : {\n \"$eq\" : null\n }\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"__STATE__\" : 1,\n \"success\" : 1\n },\n \"indexName\" : \"state_success\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"__STATE__\" : [],\n \"success\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"__STATE__\" : [ \n \"[\\\"PUBLIC\\\", \\\"PUBLIC\\\"]\"\n ],\n \"success\" : [ \n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n }, \n {\n \"stage\" : \"PROJECTION_SIMPLE\",\n \"transformBy\" : {\n \"success\" : true,\n \"createdAt\" : true,\n \"countryCode\" : true,\n \"_id\" : false\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"__STATE__\" : 1,\n \"countryCode\" : 1\n },\n \"indexName\" : \"state_countryCode\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"__STATE__\" : [],\n \"countryCode\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"__STATE__\" : [ \n \"[\\\"PUBLIC\\\", \\\"PUBLIC\\\"]\"\n ],\n \"countryCode\" : [ \n \"[MinKey, undefined)\", \n \"(null, MaxKey]\"\n ]\n }\n }\n }\n }\n ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 2188033,\n \"executionTimeMillis\" : 8133,\n \"totalKeysExamined\" : 2189018,\n \"totalDocsExamined\" : 2189018,\n \"executionStages\" : {\n \"stage\" : \"PROJECTION_SIMPLE\",\n \"nReturned\" : 2188033,\n \"executionTimeMillisEstimate\" : 1909,\n \"works\" : 2189019,\n \"advanced\" : 2188033,\n \"needTime\" : 985,\n \"needYield\" : 0,\n \"saveState\" : 2279,\n \"restoreState\" : 2279,\n \"isEOF\" : 1,\n \"transformBy\" : {\n \"success\" : true,\n \"createdAt\" : true,\n \"countryCode\" : true,\n \"_id\" : false\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"countryCode\" : {\n \"$not\" : 
{\n \"$eq\" : null\n }\n }\n },\n \"nReturned\" : 2188033,\n \"executionTimeMillisEstimate\" : 1437,\n \"works\" : 2189019,\n \"advanced\" : 2188033,\n \"needTime\" : 985,\n \"needYield\" : 0,\n \"saveState\" : 2279,\n \"restoreState\" : 2279,\n \"isEOF\" : 1,\n \"docsExamined\" : 2189018,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 2189018,\n \"executionTimeMillisEstimate\" : 413,\n \"works\" : 2189019,\n \"advanced\" : 2189018,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 2279,\n \"restoreState\" : 2279,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"__STATE__\" : 1,\n \"createdAt\" : -1\n },\n \"indexName\" : \"state_createdAt\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"__STATE__\" : [],\n \"createdAt\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"__STATE__\" : [ \n \"[\\\"PUBLIC\\\", \\\"PUBLIC\\\"]\"\n ],\n \"createdAt\" : [ \n \"[MaxKey, MinKey]\"\n ]\n },\n \"keysExamined\" : 2189018,\n \"seeks\" : 1,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n }\n }\n },\n \"nReturned\" : NumberLong(2188033),\n \"executionTimeMillisEstimate\" : NumberLong(7433)\n }, \n {\n \"$group\" : {\n \"_id\" : \"$countryCode\",\n \"count\" : {\n \"$sum\" : {\n \"$const\" : 1.0\n }\n }\n },\n \"nReturned\" : NumberLong(203),\n \"executionTimeMillisEstimate\" : NumberLong(8120)\n }\n],\n\"serverInfo\" : {\n \"host\" : \"atlas-iddbm2-shard-00-01.ake5m.gcp.mongodb.net\",\n \"port\" : 27017,\n \"version\" : \"4.4.9\",\n \"gitVersion\" : \"b4048e19814bfebac717cf5a880076aa69aba481\"\n},\n\"ok\" : 1.0,\n\"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1632748186, 1),\n \"signature\" : {\n \"hash\" : { \"$binary\" : \"er1V/UEDRoO+EvH/KZ5Nh2jjYWA=\", \"$type\" : \"00\" },\n \"keyId\" : NumberLong(6977486985841606664)\n }\n},\n\"operationTime\" : Timestamp(1632748186, 1)\n",
"text": "Following the entire explain output directly on production environment:}",
"username": "Francesco_Fiorentino"
},
{
"code": "2188033",
"text": "2188033This query is aggregating over two million documents - even using an efficient index processing that many documents is going to take time. However here the best index isn’t being used. It doesn’t look like you have an index on the two fields you are querying on (STATE and countryCode).Asya",
"username": "Asya_Kamsky"
},
{
"code": "getIndexes()rejectedPlans[\n {\n \"v\" : 2,\n \"key\" : {\n \"_id\" : 1\n },\n \"name\" : \"_id_\"\n },\n {\n \"v\" : 2,\n \"key\" : {\n \"__STATE__\" : 1,\n \"createdAt\" : -1\n },\n \"name\" : \"state_createdAt\",\n \"background\" : false\n },\n {\n \"v\" : 2,\n \"key\" : {\n \"__STATE__\" : 1,\n \"success\" : 1\n },\n \"name\" : \"state_success\",\n \"background\" : true\n },\n {\n \"v\" : 2,\n \"key\" : {\n \"tagUid\" : 1\n },\n \"name\" : \"tagUid\",\n \"background\" : true\n },\n {\n \"v\" : 2,\n \"key\" : {\n \"__STATE__\" : 1,\n \"countryCode\" : 1\n },\n \"name\" : \"state_countryCode\"\n }\n]\n",
"text": "However here the best index isn’t being used. It doesn’t look like you have an index on the two fields you are querying on (STATE and countryCode).That index should be there, below the output of getIndexes() but it seems to be mentioned also on rejectedPlans: I didn’t understand why it is not the winning one.This query is aggregating over two million documents - even using an efficient index processing that many documents is going to take time.This means, as I supposed previously, that on demand aggregation is not to be used in this case? What is the best approach to use? It is ok to set periodic aggregation with output on another collection or there is some approach more effective?",
"username": "Francesco_Fiorentino"
},
{
"code": " \"nReturned\" : 2188033,\n \"totalKeysExamined\" : 2189018,\n \"totalDocsExamined\" : 2189018,\n",
"text": "The reason a different index wouldn’t be used is it’s not going to be much more selective - the country code matches in all but one thousand documents, indexes are most helpful when they are selective (i.e. narrow down the number of documents that match).ok to set periodic aggregation with output on another collectionThat’s actually the best approach assuming you know most/many of the aggregations needed. Running aggregations periodically with output going into a “summary” collection is a long accepted way to reduce query time for most popular complex queries.Asya",
"username": "Asya_Kamsky"
},
{
"code": "$match__STATE__countryCode_idcountryCode$group",
"text": "One thing that occurred to me is that you could simplify your pipeline a little to only $match on __STATE__ and then group on countryCode and then filter out _id being null. Filtering out records where countryCode is null or missing may be taking a lot more time/effort before $group (with minimal reduction in total records processed) when it would be very fast after…Asya",
"username": "Asya_Kamsky"
},
{
"code": " { $count: \"Total_log\" }\n ] } }] ,{ allowDiskUse: true }).toArray();\nin this query i want to fetch data last 24 hour but it getting huge time to fetch and in my collection data approx 300 million so i use index also but it taking so much time\n",
"text": "db.alerts.aggregate([ {\n$addFields: {\ntimestamp: {\n$toDate: “$timestamp”\n}\n}\n},\n{\n$match: {\ntimestamp: {\n$gte: new Date(Date.now() - 24* 60 * 60 * 1000),\n$lte: new Date()\n},\n}\n},{\n$facet: {\nTotal_log: [",
"username": "Deepak_Tak"
},
{
"code": "",
"text": "The $addFields needs to stream in memory all documents to convert them .As the match happens after it cannot use an index (data is already in agg memory.You need to filter on timestamp or have the documents with date values and not timestamps.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks for confirming what he already knows since June:\nhttps://www.mongodb.com/community/forums/t/queries-with-large-volumes-of-returned-data/212570/6?u=steevej",
"username": "steevej"
}
] | Best index on aggregation with multiple match conditions | 2021-08-31T14:56:26.346Z | Best index on aggregation with multiple match conditions | 15,620 |
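Pulling the advice from this thread into one place: fold the optional filters into the first $match as plain predicates (not $expr with $dateFromString), and back it with a compound index whose leading fields are the equality filters. A sketch in mongosh, using the collection and field names from the explain output; the date values are placeholders:

```js
// Equality fields first; createdAt last because it is used as a range.
db.outcomes.createIndex({ __STATE__: 1, countryCode: 1, success: 1, createdAt: 1 });

db.outcomes.aggregate([
  {
    $match: {
      __STATE__: "PUBLIC",
      countryCode: { $ne: null },
      // Optional filters, added only when the user selects them:
      success: true,
      createdAt: {
        $gte: ISODate("2021-01-01T00:00:00Z"),
        $lte: ISODate("2021-09-30T23:59:59Z")
      }
    }
  },
  { $group: { _id: "$countryCode", count: { $sum: 1 } } }
]);
```

As noted later in the thread, with roughly 2.2 million matching documents even a well-indexed run takes seconds, so pre-computing the counts periodically into a summary collection (for example with $merge) remains the better option for interactive use.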
null | [
"queries",
"time-series"
] | [
{
"code": "",
"text": "I have a large timeseries data collection. Think 10Hz time series data sampled data running 24/7. It’s mostly jagged tabular data.\nThe primary index on this is time.\nIf you run a query to return a weeks worth of data, this is roughly 6K rows per column in each document.\nThe goal is to do some fairly compute intense calculations on this data.\nRunning the compute cycles in the database is what I’d expect for a query followed by a main CPU based execution engine. However, returning that kind of data to be processed by different computing resources is slower than I’d expect for an I/O operation. I suspect this is because the data is returned in a text format and not a binary format. I’m not sure how to change how data is returned. If this is possible, I’m not looking for the right things.What is the best way to deal with large volumes of data being returned from a MongoDB query?",
"username": "winterberry"
},
{
"code": "db.collection.stats()db.getCollectionInfos({name:<time-series collection name>})",
"text": "Hi @winterberry,Welcome to the MongoDB Community forums I have a large time series data collection. Think 10Hz time series data sampled data running 24/7. It’s mostly jagged tabular data.Can you be a little specific here with the dataset size? Like what is the collection size of your time-series collections?Please share the sample document.Also, share the output ofdb.collection.stats()anddb.getCollectionInfos({name:<time-series collection name>})If you run a query to return a week’s worth of data, this is roughly 6K rows per column in each document.Do you mean 6K documents? MongoDB doesn’t have the concept of rows & columns, can you clarify this?Also, what specific query you are executing to get the result?The goal is to do some fairly computationally intense calculations on this data.Can you clarify your approach to calculating the data? Will it be done at the database level using aggregation pipelines, at the application level, or through some other method?Running the compute cycles in the database is what I’d expect for a query followed by a main CPU-based execution engine. However, returning that kind of data to be processed by different computing resources is slower than I’d expect for an I/O operation.What do you mean by “CPU-based execution engine” here? Also, kindly help me understand what specific computing resources you are referring to when you mention “different computing resources”?I suspect this is because the data is returned in a text format and not a binary format.The query returns the cursor of the Result Set in a text format, specifically in JSON after which we can iterate over the result set.What is the best way to deal with large volumes of data being returned from a MongoDB query?Could kindly help me understand what you mean by “best” and what “deal” refers to? Also, I was curious if having a lot of data returned by a query would be a problem.Also, provide us with information about your MongoDB deployment. Specifically, please let us know the following:Furthermore, refer to the Best Practices for Time Series Collections to read how to improve performance and data usage for time series collections.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "5 posts were split to a new topic: MongoDB Queries - Fetch and optimize",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Queries with large volumes of returned data | 2023-02-08T23:41:34.441Z | Queries with large volumes of returned data | 2,369 |
null | [
"python",
"atlas",
"change-streams"
] | [
{
"code": "expireAfterSecondsimport pymongo\nclient = pymongo.MongoClient(mongo_db_connection_string)\ndb = client[database_name]\n\nresult = db.command(\n {\n 'setClusterParameter': {\n 'changeStreamOptions': {\n 'preAndPostImages': {\n 'expireAfterSeconds': expire_after_seconds\n }\n }\n }\n }\n)\nOperationFailure: setClusterParameter may only be run against the admin database. db = client['admin']\n\nresult = db.command(\n {\n 'setClusterParameter': {\n 'changeStreamOptions': {\n 'preAndPostImages': {\n 'expireAfterSeconds': expire_after_seconds\n }\n }\n }\n }\n)\nnot authorized on admin to execute commandimport urllib.parse\nfrom pymongo import MongoClient\n\nusername = urllib.parse.quote_plus(\"USERNAME\")\npassword = urllib.parse.quote_plus(\"PASSWORD\")\n\nmongo_uri = f'mongodb+srv://{username}:{password}@CONNECTION_STRING_DETAILS/admin?retryWrites=true&w=majority'\nclient = MongoClient(mongo_uri)\n\ndb = client[\"admin\"]\n... etc ...\nOperationFailure: Authentication failed., full error:ClusterAdminsetClusterParameter",
"text": "Hey ,I’m trying to add the expireAfterSeconds parameter to our MongoDB instance. The full command i’m trying to run is…But I get this error…OperationFailure: setClusterParameter may only be run against the admin database. Then when I try to run it against the Admin database like this…I get this error…\nnot authorized on admin to execute commandSo I can’t use the Database role I made (which has Atlas Admin privileges) to run this command, so I need another user/role. I tried authenticating with my own username and password (when I log into the Atlas Gui), and I have full access rights to the MongoDB project, but wasn’t able to run the command either…But got this error…\nOperationFailure: Authentication failed., full error:I seem to only be able to connect to my Project using a Database User, but I am unable to assign ClusterAdmin to any Database Users - so I can’t run any commands on the admin database. So I thought I would connect with my user account that I log into MongoAtlas, but that authentication doesn’t work.How can I connect to the Atlas database so that I can run setClusterParameter commands?This post is similar but was not resolved: MongoServerError: not authorized on admin to execute command - #3 by Hannes_CalitzMy question boils down to this:I’m using MongoDB Atlas.Any help or advice would be great.Cheers,\nPaul",
"username": "Paul_Chynoweth"
},
{
"code": "",
"text": "Many commands are unavailable, limited or use another mechanism to change configuration(atlas cli or gui) and this can vary from shared tier to dedicated tier.I don’t think this list is exhaustive:",
"username": "chris"
},
{
"code": "setClusterParametersetParametersetParametersetClusterParametersetClusterParameter:changeStreamOptions: preAndPostImages:",
"text": "Ok thanks @chris , appreciate the response.The command I want to run is setClusterParameter. In the documentation you shared it doesn’t explicitly say that command is not available on Atlas clusters, but the setParameter is not available. Is it possible that setParameter encompasses setClusterParameter? Seems likely but want to confirm.Is there any way to manually adjust the setClusterParameter:changeStreamOptions: preAndPostImages: for clusters hosted on Atlas? This is quite important for our use case and seems like a major feature limitation if it’s not possible.Cheers,\nPaul",
"username": "Paul_Chynoweth"
},
{
"code": "collMod",
"text": "This one might need an answer from the MongoDB team. What Altlas tier are you on?Are you specifically trying to change retention period for pre and post images or are you trying to enable pre and post images?The latter can be done via collMod",
"username": "chris"
},
{
"code": "",
"text": "Thanks @chris ,They can be quite hard to get a hold of But yes I’ve already enabled Pre & Post images via collmod, but would like to now set the retention period.I would happily try to run the setClusterParameter on the DB via mongosh but can’t authenticate with my email and password. I have checked the email and password & url encoded it as documentation mentions - but still no dice. So i’m wondering if it’s even possible to do this (log in using Atlas email and password via mongosh).Cheers,\nPaul",
"username": "Paul_Chynoweth"
}
] | Running commands with Pymongo on Admin database | 2023-08-02T07:44:29.790Z | Running commands with Pymongo on Admin database | 744 |
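For reference, here is what the two commands discussed above look like from mongosh on a deployment where you hold the necessary privileges. The collMod part is what the thread reports as working on Atlas, while setClusterParameter is restricted for Atlas database users, so the second command may only be usable on self-managed clusters (or wherever an equivalent setting is exposed). The collection name is a placeholder:

```js
// 1. Per-collection switch (MongoDB 6.0+): record pre- and post-images for change streams.
db.runCommand({
  collMod: "orders",
  changeStreamPreAndPostImages: { enabled: true }
});

// 2. Cluster-wide retention for those images; runs against the admin database
//    and requires cluster-parameter privileges.
db.getSiblingDB("admin").runCommand({
  setClusterParameter: {
    changeStreamOptions: { preAndPostImages: { expireAfterSeconds: 3600 } }
  }
});
```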
null | [
"queries"
] | [
{
"code": "[\n {\n _id: ObjectId(\"64ad9e7d9f6ecf83a7a693bb\"),\n ip: '192.168.1.1',\n date: '1689017660',\n ports: [\n {\n number: 21,\n protocol: 'tcp',\n state: 'open',\n banner: 'product: ProFTPD hostname: 192.168.1.1 ostype: Unix',\n name: 'ftp,\n servicefp: '',\n scripts: '',\n checks: [ { name: 'Anonymous Access', result: 'success' } ]\n },\n {\n number: 80,\n protocol: 'tcp',\n state: 'open',\n banner: 'product: nginx',\n name: 'http',\n servicefp: '',\n scripts: '',\n checks: [],\n },\n {\n number: 444,\n protocol: 'tcp',\n state: 'open',\n name: 'some new'\n }\n ]\n }\n]\n",
"text": "Hello!\nI have a collection with documents likeAnd I want to add object into array checks like:\n{ name: “some name”, description: “some description”, result: “some result”, …}\nNumber of keys of object is uknown. If element of array “checks” with name of new check already exists, then it should be replaced with new provided values. If element with such name doesn’t exists, then push it into array “checks”.I’ve tried with arrayFilters and other ways, but can’t figure out how to do that. Any help would be appreciate",
"username": "amenlust2"
},
{
"code": "'ftpchecksportsnumberportschecksdb.c.drop()\ndb.c.insertOne({\n \"ip\": \"192.168.1.1\",\n \"ports\": [\n {\n \"number\": 21,\n \"checks\": [\n {\n \"name\": \"ABC\",\n \"result\": \"success\"\n }\n ]\n },\n {\n \"number\": 80,\n \"checks\": []\n },\n {\n \"number\": 444\n }\n ]\n})\nconst number = 21\nconst name = \"DEF\"\nconst description = \"my new description\"\nconst result = \"my new result\"\nnumbernamechecksdb.c.updateOne({\"ip\": \"192.168.1.1\", \"ports.number\": number}, [{\n \"$set\": {\n \"ports\": {\n \"$map\": {\n input: \"$ports\",\n as: \"p\",\n in: {\n \"$cond\": {\n if: {\"$eq\": [\"$$p.number\", number]},\n then: {\n \"$mergeObjects\": [\"$$p\", {\n \"checks\": {\n \"$concatArrays\": [\n {\n \"$filter\": {\n \"input\": \"$$p.checks\",\n \"cond\": {\"$ne\": [\"$$this.name\", name]}\n }\n },\n [{\"name\": name, \"description\": description, \"result\": result}]\n ]\n }\n }]\n },\n else: \"$$p\"\n }\n }\n }\n }\n }\n }]\n)\ndb.c.createIndex({\"ip\": 1, \"ports.number\": 1})\nDEFchecks{\n _id: ObjectId(\"64cbf22a5f5937b78f2b78b4\"),\n ip: '192.168.1.1',\n ports: [\n {\n number: 21,\n checks: [\n { name: 'ABC', result: 'success' },\n {\n name: 'DEF',\n description: 'my new description',\n result: 'my new result'\n }\n ]\n },\n { number: 80, checks: [] },\n { number: 444 }\n ]\n}\ndescriptionresultnameconst description = \"my OTHER description\"\nconst result = \"my OTHER result\"\n{\n _id: ObjectId(\"64cbf22a5f5937b78f2b78b4\"),\n ip: '192.168.1.1',\n ports: [\n {\n number: 21,\n checks: [\n { name: 'ABC', result: 'success' },\n {\n name: 'DEF',\n description: 'my OTHER description',\n result: 'my OTHER result'\n }\n ]\n },\n { number: 80, checks: [] },\n { number: 444 }\n ]\n}\n",
"text": "Hi @amenlust2 and welcome in the MongoDB Community! I have to say that this one made me sweat a little! First of all, you have a typo in your document above, it’s missing a ' after the field ftp.Then, I’m guessing that you need to provide some extra information to identify which element of the checks array needs to be updated in the ports array so I took the supposition / decision that number was unique in your array of ports and I assumed that this is how you identify which element to update as you probably don’t want to update all the checks arrays.So I simplified a bit the document model for the sake of the example:Now that I have a sample document, let’s define some variables that I can use as query parameters.And here is the update query. This query loops through the ports array elements. When the provided number is found, it removes the existing document with the provided name from the checks array (if it exists) and then appends the new document at the end.Note that to support this query, you need the index:Here is the result after the first execution of this query. As DEF doesn’t exist in the checks, it’s added in the array:Now if I provide new values for description and result but with the same name and run the query again, this time the array entry is “updated”. (It’s actually removed and re-added in the query but you get the idea…).Result:Enjoy!\nMaxime ",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "wow, seems really great. More over, your assumptions are correct. In your example I can find some new cool features for my app, but I have to learn and try them first. Thank you so much",
"username": "amenlust2"
},
{
"code": "",
"text": "You are welcome!I forgot to mention it but the main concept I’m using here is the update with an aggregation pipeline. You can notice it because the second parameter is an array, not a document.The following page provides examples of updates with aggregation pipelines.This unlocks the power of the aggregation pipeline to perform the update operation and as you can see there is a bunch of possibilities with $mergeObject, $map, $cond, $filter, $concatArrays, …If my answer was , I’d appreciate if you can select it as the solution. Feel free to open another topic if you have more “challenges”.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Modifying entire document in nested array of documents | 2023-07-31T17:49:19.393Z | Modifying entire document in nested array of documents | 475 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Dear all,I have a Node.js application which will query the MongoDB every 5 seconds. When I use the default setting for the MongoDB config, I find that the mongodb log is extremely large. It will create entries for every Node.js query.So, is there a way to skip the logging? I want to log only error and fatal logs. I have checked the documentation but seems it doesn’t mention how to set this.Would anyone can provide samples of the config how to eliminate those information log but keep only fatal and error logs?Thanks a lot",
"username": "Paul_Lee2"
},
{
"code": "",
"text": "Dear all,I’m using Mongo DB version 5.0.6 for Windows.",
"username": "Paul_Lee2"
},
{
"code": "",
"text": "Hi @Paul_Lee2Logging verbosity can be adjusted as a whole or per component.",
"username": "chris"
},
{
"code": "{\"t\":{\"$date\":\"2023-08-04T09:11:05.409+08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn71675\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:63098\",\"client\":\"conn71675\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"4.5.0\"},\"os\":{\"type\":\"Windows_NT\",\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.19045\"},\"platform\":\"Node.js v14.17.6, LE (unified)|Node.js v14.17.6, LE (unified)\"}}}\n\n{\"t\":{\"$date\":\"2023-08-04T09:11:05.407+08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn71674\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"admin.$cmd\",\"command\":{\"ismaster\":true,\"helloOk\":true,\"client\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"4.5.0\"},\"os\":{\"type\":\"Windows_NT\",\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.19045\"},\"platform\":\"Node.js v14.17.6, LE (unified)|Node.js v14.17.6, LE (unified)\"},\"compression\":[\"none\"],\"loadBalanced\":false,\"$db\":\"admin\"},\"numYields\":0,\"reslen\":775,\"locks\":{},\"remote\":\"127.0.0.1:63097\",\"protocol\":\"op_query\",\"durationMillis\":0}}\n",
"text": "Dear Chris,Thanks for your reply. However, no matter I set the level, from 0 to 5, I still got the following log.My mongo db config setting:systemLog:\ndestination: file\nlogAppend: false\npath: C:\\Program Files\\MongoDB\\Server\\5.0\\log\\mongod.log\nquiet: true\nverbosity: 5My question is how can I remove those log with severity=“Information”, I only want to log severity with fatal or error. I check the document, there is no hints on how to set this.Thanks",
"username": "Paul_Lee2"
},
{
"code": "--quiet",
"text": "Setting to 0 will reduce to the lowest verbosity, but you will still get these INFO log lines. There is also the --quiet parameter that may reduce this further.If size of logs is your concern then logrotating with greater frequency could be appropriate.",
"username": "chris"
}
] | Reduce Mongo DB logging in version 5.0.6 | 2023-08-03T08:42:46.795Z | Reduce Mongo DB logging in version 5.0.6 | 629 |
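A short postscript on the thread above: as far as the standard configuration options go, mongod does not offer a switch that limits its log to error/fatal severities only, and the “Slow query” entries shown are INFO lines emitted for any operation over the slow-op threshold (and for essentially every operation once verbosity is raised), so increasing verbosity makes the log larger, not smaller. The practical levers are keeping verbosity at 0, raising the slow-op threshold, and rotating the log more often. Runtime equivalents of the config settings, as a sketch:

```js
// Applied at runtime from mongosh (no restart needed):
db.setLogLevel(0);                          // back to the default, least verbose level
db.setProfilingLevel(0, { slowms: 500 });   // only log operations slower than 500 ms
```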
[
"serverless",
"chennai-mug"
] | [
{
"code": "Lead Technical Architect at Zoominfo IndiaAssociate Engineer - Product Specialist at CorestackOpen Source Engineer at Local StackSoftware Engineer at Kissflow",
"text": "\n1920×1080 114 KB\n\nJoin us for an exciting in-person meetup at Chennai as we bring developers, communities and enthusiasts together to participate, collaborate, and share their knowledge on the latest trends in Cloud Native, Databases, and AI ecosystem at KissFlow’s Office. This meetup will feature exciting speaker sessions centered around MongoDB, LocalStack, AWS, Serverless, and Generative AI and open networking sessions to introduce various new toolings & technologies in the developer tooling ecosystem. Brought to you by LocalStack & MongoDB User Group Chennai. Date: August 5, 2023\n Time: 10:00 AM IST (Indian Standard Time)\n Venue: KissFlow Office, No: 5, Tower-B, 10th Floor, World Trade Center, 142, Rajiv Gandhi Salai, Perungudi, Chennai, Tamil Nadu 600096 (Google Maps)This meetup is free and open to all! We thank our community partners KonfHub, Collabnix, and r/developersIndia, for supporting the event!Event Type: In-Person\nLocation: No:5, Tower-B, 10th Floor, World Trade Center, 142, Rajiv Gandhi Salai, Perungudi, Chennai, Tamil Nadu 600096Lead Technical Architect at Zoominfo India–\nAssociate Engineer - Product Specialist at Corestack–Open Source Engineer at Local Stack–Software Engineer at Kissflow",
"username": "Rishi_Agrawal"
},
{
"code": "",
"text": "Yaaay \nExcited and Can’t wait to be a part of this.",
"username": "Ahamed_Basha_N"
},
{
"code": "",
"text": "Hi everyone,\nThose of you got the confirmation please do fill the form sent to you email today, by 5PM IST otherwise we cannot guarantee you the entry to premises.",
"username": "Rishi_Agrawal"
}
] | MongoDB x Localstack Chennai Meetup | 2023-07-12T06:28:07.913Z | MongoDB x Localstack Chennai Meetup | 2,559 |
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "Pretty much the title - can I expect atlas search indexes to be synced?",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "Hi @Alex_Bjorlig,In this post there is a link to this which then points to that which says that since June 14, 2023, Atlas Search indexes are saved with the Cloud Snapshots.So – at least – Cloud Backups are here to help as a workaround.Mongodump and mongorestore are not cloud specific, they are the same tool for MongoDB community or MongoDB entreprise. As Atlas Search is a specific cloud feature of MongoDB Atlas, I don’t see mongodump and mongorestore supporting Atlas Search index definition for now at least but I could be wrong.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks for the response Do you happen to know Alex Bevilacqua - our team is very excited about news on this topic ",
"username": "Alex_Bjorlig"
}
] | Does mongodump & mongorestore support Atlas search indexes? | 2023-08-03T08:58:25.973Z | Does mongodump & mongorestore support Atlas search indexes? | 783 |
null | [
"android",
"flutter"
] | [
{
"code": "",
"text": "Hello everyone,\nI am pretty close to going in production with a mobile application that uses Realm.While testing different scenarios however I found a problem, I did not really think of before.\nI created my own CI/CD that does the following:Now however arises the following problem:\nThe moment I promote Realm from Dev to Prod, the applications in the stores are not yet live (this can take up to a few days). And even if they are, I can not ensure that the users will even update the application on time. That means, that my Realm is always “ahead” of my application. So if I - for example - rename a field in Realm, the application of all users will break in the second, I promote my schema from Dev to Prod.\nAll the migration logic that I could put into my application to compensate this will thus have no effect, as they dont have the application yet.I am kind of lost now, how the procedure should look like. I thought of following scenario and wanted to ask for feedback first:I hope I made my problem clear and there are already some best practices.",
"username": "Thomas_Anderl"
},
{
"code": "",
"text": "Hello @Thomas_Anderl : Previously when I have encountered similar situation, we always mark urgent release for iOS and keep the Android App ready in pipeline and once both are ready we force update all the previous build user. (Didn’t impact us, we had 400K DAU)And not sure if you meant dev branch by “Don’t douch Dev for 5 days”, we normally create a release candidate once code is ready for production from Dev branch, which unblocks future release work.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Hey, thank you for the insight.I had a typo. I meant “touch”. As I am low scale, I only have 2 environments for now: Dev and Prod.So apparently your approach is similar to the one I suggested? You just didnt control it with the timeline (e.g. 5 days), but did it manually after?",
"username": "Thomas_Anderl"
},
{
"code": "",
"text": "Yes, you are correct.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Hi, just chiming in with this checklist we have for applications going into production with sync: https://www.mongodb.com/docs/atlas/app-services/sync/go-to-production/production-checklist/As for breaking schema changes, the best advice I can offer is to not do them. If at all possible it is best to just perform additive schema changes in production.We are trying to think about how to make this a better experience for users though, so I am curious if you can go into more detail about what kinds of changes you expect to make in production and why?",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Hey, only doing additive changes is already a good suggestion.I plan on adding additional features (e.g. profile verification). Most of them could be added by making the fields optional and nothing should break.However when a field of a future feature becomes mandatory, it will get tricky.So far I usually change datatypes often. As there is no enum support, for example, I sometimes switch between saving keys of enums as strings vs integers.\nAnother thing I might change in the future is references. As it is not possible to query links with sync directly, there is a chance that I need to change links to ObjectIds to be able to query them.\nTo load all messages for a chat, for example, I need to store the Chat in the Message as ObjectId to sync them. If I ever decide to add different queries for performance reasons, I will need to make breaking changes to the schems.",
"username": "Thomas_Anderl"
},
{
"code": "",
"text": "This is great. Thank you for your feedback. I have added it to the doc as we scope out the work on how to make these changes more palatable when using sync.For background, the reason these are difficult are that many mobile developers do not control when people update their apps, so if you change a field from string to int you will have some % of your mobile clients think a field is a string and some % think it is an integer and it can be incredible unclear what the server should send to older clients once the field has been updated.Out of curiosity, what would you expect in this situation?",
"username": "Tyler_Kaye"
},
{
"code": "/*\n* Version 1.2.5 is making an integer to an optional string and assumes new clients send it as string already.\n* This function is required and attached to version 1.2.5 to send data correctly to mobile phones with version 1.2.4.\n* If the type in the database is still in the type of 1.2.4, just return it as it is.\n* Otherwise this function is called. There should probably also be a smilar function with\n* reversed parameter/return-value, for old devices still uploading an integer and parsing it to an optional string. This function becomes useless, once all users migrated.\n*/\nfunction educatonLevelRead(string? typeInDatabase): int{\n\tif(typeInDatabase == null)\n\t\treturn 0;\n\t\n\tswitch(typeInDatabase){\n\t\tcase \"elementary school\":\n\t\t return 1;\n\t\tcase \"bachelors\":\n\t\t return 2;\n\t\tdefault: \n\t\t return 0;\n\t}\n}\n",
"text": "I can imagine, that different versions make extra effort when designing such systems.\nMy first, intuitive thought would be to have versioned schemas. As the application itself comes with a specifc version (e.g. 1.2.4) this could be passed when initializing the realm. This would of course require to also define that application version, when the schema is deployed.Realm could then sync with the corresponding schema version to send the data in the correct format. Once all users are on the current version (e.g. 1.2.5), all users receive the current schema, and the version 1.2.4 of the schema could be deleted.So lets assume in version 1.2.5 the datatype changes from int to an optional string:There must however be a strategy defined in code on how the data should be parsed to the correct type and what should happen if it cannot be parse (e.g. in the database is already saved a string, and it couldn’t parse it to integer). The most suffisticated version would be, that breaking changes require a parser, that basically is a funcion taking the old type and returning the new type. This could look like this:I would whatsoever recommend/encourage developers to force updating the application once a new version is available in the store, so there is a maximum of two parallel schema versions available (one being the newest version, and one being the version that users who didn’t upgrade yet, still have).This is just a rather spontanous idea I got. I assume you already considered this, giving my input whatsoever.",
"username": "Thomas_Anderl"
},
{
"code": "",
"text": "Appreciate your thoughts. It is a tough issue to solve and we are just trying to take in some opinions to ensure we ultimately build things the way people want to use them.Thanks,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "It sounds like you have identified an issue with your current deployment process. You can add versioning to your Realm schema and application. This way, you can ensure that the application will only work with a specific version of the Realm schema. When you make changes to your Realm schema, you can update the version number and include a migration strategy to handle the update. If it doesn’t work, look for alternatives. Some reliable experts, such as those from https://smartengines.com/ may help. The best approach depends on your specific needs and circumstances. However, implementing versioning, feature flags, and releasing updates together can help ensure that users do not experience issues.",
"username": "Minat_Criss"
},
{
"code": "",
"text": "Thanks! Your link is so helpful ",
"username": "aleksandr.sharshakov.99"
}
] | Recommended approach to promote mobile app to production | 2022-11-22T07:30:27.160Z | Recommended approach to promote mobile app to production | 3,048 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi Team,Can we add a secondary to a 3 node replica set(PSA). The secondary will be having priority: 0, votes: 0.The thing is that we want to add secondary , test it a few days and remove arbiter later once all set.Finally we will update priority and votes of newly added secondary.Is this config valid or does it cause any issue to the existing configuration.{“Thanks and Regards”, “Satya”}",
"username": "Satya_Oradba"
},
{
"code": "",
"text": "Yes you can.But you may want to check out how priority/vote settings can affect your majority writes. (This is important given you also have an arbiter).",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hi @Kobe_W ,Thanks for the update.What if it is a backing database such as appdb or oplog store.{“Thanks and Regards”, “Satya”}",
"username": "Satya_Oradba"
},
{
"code": "",
"text": "Hi ,Could someone please check the above and answer it.{“Thanks and Regards”, “Satya”}",
"username": "Satya_Oradba"
}
] | Can a replica set have even number of members | 2023-08-01T13:36:23.790Z | Can a replica set have even number of members | 533 |
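For reference, the operation discussed above is a single rs.add() call, and the later promotion is an rs.reconfig(); the hostname below is a placeholder. As noted in the replies, while the set runs as PSA plus a non-voting member it is worth reviewing how the member counts affect majority write concern before relying on it (this applies to Ops Manager backing databases such as the AppDB and oplog store as well).

```js
// Add the new secondary as non-voting and non-electable.
rs.add({ host: "newnode.example.net:27017", priority: 0, votes: 0 });

// Later, after it has caught up (and the arbiter has been removed), promote it.
var cfg = rs.conf();
cfg.members.forEach(function (m) {
  if (m.host === "newnode.example.net:27017") {
    m.priority = 1;
    m.votes = 1;
  }
});
rs.reconfig(cfg);
```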
null | [] | [
{
"code": "[\n {\n \"_id\": ObjectId(\"64bfd2552c1771905ea095a8\"),\n \"status\": \"In Progress\",\n \"sectionDetails\": [\n {\n sectionId: ObjectId(\"64bfcf3164df731eb1e40e87\"),\n status: \"In Progress\",\n slideQuestionAnswer: [\n {\n \"slideId\": ObjectId(\"64bfe10564c2850b70b613d2\"),\n \"answers\": [\n ObjectId(\"64c00ff20f4d05a55793759f\"),\n ObjectId(\"64c01003a55c370435f7a6d8\"),\n ObjectId(\"64c0100cf3267c399667d1b3\"),\n ]\n }\n ]\n }\n ]\n }\n]\ndb.collection.update({\n \"_id\": ObjectId(\"64bfd2552c1771905ea095a8\")\n},\n{\n $set: {\n \"sectionDetails.$[sd].slideQuestionAnswer.$[sqa].answers\": [\n ObjectId(\"64c00ff20f4d05a55793759f\")\n ]\n }\n},\n{\n \"arrayFilters\": [\n {\n \"sd.sectionId\": ObjectId(\"64bfcf3164df731eb1e40e87\")\n },\n {\n \"sqa.slideId\": ObjectId(\"64bfe10564c2850b70b613d2\")\n }\n ]\n})\n",
"text": "I have a collection course assignment as shown below:I am able to do a query to update the answers of slideQuestionAnswer by matching the slideId with the below queryBut I want to push newer objects to slideQuestionAnswer if it is blank along with the above query. How can I do it?",
"username": "Yash_Singhvi"
},
{
"code": "",
"text": "Check the answer provided in the following as it seems more or less the same issue.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for sharing the link. Really appreciated.",
"username": "Yash_Singhvi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to update the nested array object field and push an array with an object if array doesn't exist | 2023-07-25T18:14:16.602Z | How to update the nested array object field and push an array with an object if array doesn’t exist | 470 |
null | [
"queries",
"node-js",
"crud"
] | [
{
"code": "db.User.updateOne({\n _id: '1234567890', {\n 'basicInfo.checksum': {$ne: '67b95c9a412f422e309c4100ac0e8fd5'}\n}, {\n $set: {\n basicInfo: {\n name: 'Yamada Taro', \n age: 30,\n checksum: '67b95c9a412f422e309c4100ac0e8fd5'\n }\n }\n}, {\n upsert: true\n})\n",
"text": "If _id=‘1234567890’ already exists and the value of checksum has not changed, the following query will generate an E11000 error.\nI understand that this is because there is no document that matches the filter criteria, so I try to create a new document with _id=‘1234567890’.If the document with _id=‘1234567890’ does not exist, create a new one. If it exists and the checksum is different, we want to update it.\nIs there any way to modify the query so that this error does not occur?\nIf there is no other way, is it not good manners to ignore the E11000 error that occurs?Sorry if the text is wrong as I am using an automatic translation into English.",
"username": "Daisuke_Mizuno"
},
{
"code": "db.User.updateOne({\n _id: '1234567890',\n 'basicInfo.checksum': {$ne: '67b95c9a412f422e309c4100ac0e8fd5'}\n}, {\n $set: {\n basicInfo: {\n name: 'Yamada Taro', \n age: 30,\n checksum: '67b95c9a412f422e309c4100ac0e8fd5'\n }\n }\n}, {\n upsert: true\n})\n",
"text": "Sorry. There was an error in the example code. The correct code is as follows.",
"username": "Daisuke_Mizuno"
},
{
"code": "",
"text": "and the value of checksum has not changedwhat you mean by this?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hi @Kobe_W\nThanks for the reply.The checksum is a hashed value of the value of the basicInfo field (excluding the checksum) using a certain algorithm. the same value of the checksum means that the value of the basicInfo field has not changed.",
"username": "Daisuke_Mizuno"
},
{
"code": "",
"text": "Sorry. I will close this post as I have to deal with this in other ways.",
"username": "Daisuke_Mizuno"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | E11000 error when using upsert=true in updateOne | 2023-05-31T09:17:39.489Z | E11000 error when using upsert=true in updateOne | 865 |
null | [
"react-native",
"data-api"
] | [
{
"code": "",
"text": "I’m purchasing the server less cluster does realm allow its own hosting to store the file.",
"username": "Team_CNA"
},
{
"code": "",
"text": "Hi @Team_CNA and welcome in the MongoDB Community! If your images and documents are small enough, you can store them in MongoDB using a binary field or base64 for example. But if you have a lot, I’d recommend a cold storage solution instead like AWS S3.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
] | I'm developing the react-native app with using realm have a concern about the images and other documents data i need to store | 2023-08-02T21:26:35.140Z | I’m developing the react-native app with using realm have a concern about the images and other documents data i need to store | 509 |
[] | [
{
"code": "error issuing collMod command for becollective.: (InvalidNamespace) Invalid namespace specified 'becollective.'",
"text": "Hi,We have a desire to have a Trigger fire regardless of collection, and it’s not clear if this is supported or not.At present we seem to be getting this error when we specific Cluster and Database but not a Collection:\nerror issuing collMod command for becollective.: (InvalidNamespace) Invalid namespace specified 'becollective.'I have a feeling Triggers might only work when you specify a Collection? It’s not specifically mentioned whether or not Triggers work this way.(Edit) Given that ChangeStreams can listen to all collections as specified in the link below, I’m expecting that it should be possible, but it’s not clear whether Triggers use this .watch() format.MongoDB triggers, change streams, database triggers, real time",
"username": "Mark_Johnson"
},
{
"code": "$match",
"text": "The error you mentioned seems to indicate that you are trying to create a trigger without specifying a collection, which is not supported by MongoDB.Regarding ChangeStreams, they do allow you to listen to all changes in a MongoDB deployment by using the $match stage to specify an empty query. However, this is different from triggers. ChangeStreams are a way to subscribe to the change events happening in a MongoDB cluster, but they are not the same as triggers.Triggers are part of the MongoDB Realm platform and are used to respond to specific changes in a collection by executing serverless functions or other actions. They are designed to be collection-specific and do not work at the database or cluster level.If there have been updates or changes to MongoDB after my knowledge cutoff date, I recommend checking the MongoDB documentation or release notes for the latest information on triggers and their capabilities. MongoDB’s documentation is comprehensive and frequently updated, so you can find the most up-to-date information there.",
"username": "patoji_patoji"
},
{
"code": "",
"text": "Thanks, that’s exactly my question though; the documentation doesn’t specifically state that you CAN’T do database level triggers - and after all it lets me save the function whereas anything else throws errors, and the lack of an error when not selecting a collection led me to believe that this is therefore supported.I believe Triggers are state functions using ChangeStreams but I guess only on a per-collection basis.",
"username": "Mark_Johnson"
},
{
"code": "",
"text": "Hey Mark, you are correct Atlas App Services currently only supports Collection level triggers. However, I’m excited to share the team is in the process of adding support for Deployment and Database level triggers and it should be available later this quarter.",
"username": "Nathan_Frank"
},
{
"code": "",
"text": "Thanks for the reply Frank, hope to see this feature live soon! I guess for now we’ll rollback our existing per-collection Function method (we migrated aws-sdk to v3, but SQS SendMessage fails with node10, as per other thread).",
"username": "Mark_Johnson"
}
] | Database level triggers rather than Collection? | 2023-08-03T05:27:38.301Z | Database level triggers rather than Collection? | 672 |
|
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "",
"text": "Hi,\nI have a list of _id, I need to check whether these _id(s) are already there or not. And I need to get the _id(s) which are not there in the database. Is there any way I can achive this without taking the list from database and comparing it with my list in backend code.",
"username": "sandeep_s1"
},
{
"code": "",
"text": "Hello @sandeep_s1,Refer to this similar question,",
"username": "turivishal"
},
{
"code": "",
"text": "Thanks for the suggestion",
"username": "sandeep_s1"
},
{
"code": "var matchList = ['D', 'C']\ndb.getCollection(\"Test\").aggregate([\n{\n $project:{\n _id:1\n }\n},\n{\n $unionWith:{\n coll:'Dual',\n pipeline:[\n {\n $project:{\n _id:matchList\n },\n },\n {\n $unwind:'$_id'\n }\n ]\n }\n},\n{\n $group:{\n _id:'$_id',\n total:{$sum:1}\n }\n},\n{\n $match:{\n total:1\n }\n}\n])\n",
"text": "I had another approach…Mongo playground: a simple sandbox to test and share MongoDB queries onlineBasically we create a new collection called Dual that has one dummy record, all it does it provide a way to inject a record for each of the IDs we’re searching for, in the linked code you replace [‘D’, ‘C’] with your array of Ids to search for:What this does it get all the IDs from your main collection and then add IDs in from your list of IDs you want to search for and then group up, anything with a count of more than one is a match and we can filter out to return a simple list of IDs that do not exist on the collection.\nThis way you avoid any complex processing or lookups.I’ve not tested this on a large data set…but perhaps worth a try to compare to other approaches.",
"username": "John_Sewell"
},
{
"code": "",
"text": "While at it, for the benefit of all users, could you please provide closure on your other thread",
"username": "steevej"
},
{
"code": "",
"text": "@John_Sewell, thanks for the reply.",
"username": "sandeep_s1"
},
{
"code": "db.getCollection(\"Test\").aggregate([\n{\n $project:{\n _id:1\n }\n},\n{\n $addFields:{\n flag:1\n }\n},\n{\n $unionWith:{\n coll:'Dual',\n pipeline:[\n {\n $project:{\n _id:[5,6,'C', 'B']\n },\n },\n {\n $unwind:'$_id'\n },\n {\n $addFields:{\n flag:-1\n }\n }\n ]\n }\n},\n{\n $group:{\n _id:'$_id',\n total:{$sum:'$flag'}\n }\n},\n{\n $group:{\n _id:'$total',\n total:{$sum:1}\n }\n}\n])\ndb.getCollection(\"Test\").aggregate([\n{\n $project:{\n _id:1\n }\n},\n{\n $addFields:{\n flag:1\n }\n},\n{\n $unionWith:{\n coll:'Dual',\n pipeline:[\n {\n $project:{\n _id:[5,6,'C', 'B']\n },\n },\n {\n $unwind:'$_id'\n },\n {\n $addFields:{\n flag:-1\n }\n }\n ]\n }\n},\n{\n $group:{\n _id:'$_id',\n total:{$sum:'$flag'}\n }\n},\n{\n $match:{\n total:-1\n }\n}])\n",
"text": "I just ran a test with 10M records in a collection and checking if 4 IDs exist, took about 40s on my workstation.Each records was just a basic document with an ID and took up about 140MB of storage, I was trying to push a lot of data through the pipeline.I has another play and came up with this:So this categorises the IDs into one of 3 types:So you could have a filter as the last stage (as opposed ot the second group) to filter out what you want to return.(Note the extra $addFields stage as opposed to setting the value in the project as if you set the field value to 1 there, it regards this as a projection inclusion and does not actually set to 1, there must be a better way of doing this.)So finding items that are in your passed in list and not in the collection:Mongo playground: a simple sandbox to test and share MongoDB queries online",
"username": "John_Sewell"
},
{
"code": "/* first you insert your _id in a temporary collection, let's name it Dual like John_Sewell did */\nlookup =\n{\n\t\"$lookup\" : {\n\t\t\"from\" : \"grades\",\n\t\t\"localField\" : \"_id\",\n\t\t\"foreignField\" : \"_id\",\n\t\t\"as\" : \"_tmp.found\" ,\n \"pipeline\" : [\n { \"$limit : 1\" } , /* I still $limit:1 just to make sure the $lookup is stopped fast */\n { \"$project\" : { \"_id\" : 1 } }\n ]\n\t} ,\n}\nmatch = { \"$match\" : { \"_tmp.found.0\" : { \"$exists\" : false } } }\nunset = { \"$unset\" : \"_tmp\" }\nout = { \"$out\" : \"result\" }\ndb.Dual.aggregate( [ lookup , match , unset , out ] )\n",
"text": "Here is an adaptation of the solution from the thread shared by turivishal.I used the grades collection from the sample_training database from the sample dataset of Atlas.The above took around 2s on Atlas M0 with 1.7m documents in the grades (I duplicated the original grades collection documents a few time to get 1.7m) collection and 10000 _id presents and 10000 _id not presents.",
"username": "steevej"
},
{
"code": "",
"text": "That’s much better! Order of magnitude faster!",
"username": "John_Sewell"
},
{
"code": "",
"text": "The main reason is that $group is expensive. It has to process all incoming documents before outputting the first one. In some case it will need to hit the disk if all $group’s do not fit in memory. Yes, my $lookup is also kind of expensive, but it is using the index on _id.The $addField is also expensive since it is done for all of 10M docs.If you still have your 10M and 4 IDs, it would be nice to have numbers with the same dataset.",
"username": "steevej"
},
{
"code": "",
"text": "First thing I did after seeing your solution, ran in a fraction of a second! Im away from pc at moment but shall post the actual time tomorrow.\nAs you say, not grouping all that data and instead hitting an index is so much more performant especially as the index is already there!",
"username": "John_Sewell"
}
] | Checking whether a list of _id already exist in the collection or not | 2023-08-02T04:55:45.013Z | Checking whether a list of _id already exist in the collection or not | 1,313 |
null | [
"flutter"
] | [
{
"code": "final tasks = realm.query<Task>('TRUEPREDICATE SORT(timeIntervals[0].startDate ASC)');List<$TimeInterval>ObjectType.embeddedObject\"Error code: 3013 . Message: All but last property must be a link\".",
"text": "final tasks = realm.query<Task>('TRUEPREDICATE SORT(timeIntervals[0].startDate ASC)');In short I want tasks to be sorted by datetime starting from newest, but information about datetime is in first List<$TimeInterval> object, which in my case is ObjectType.embeddedObject.When I try to sort I receive \"Error code: 3013 . Message: All but last property must be a link\".\nFor now I have two options:\nUse sort function, but in long list it will iterate every object freezing my UI.\nOr second storing separate startDate in Task object and sort from there,Maybe I am not aware of some realm queries, that might solve this problem.",
"username": "Edgars_Belevics"
},
{
"code": "",
"text": "The question is a little vague to me and it’s not clear what the expected result is. (I am probably reading it wrong so feel free to straighten me out)In a list of sorted ascending timestamps the ‘newest’ will be the most recent; if that list is sorted from the newest ascending… it will only sort one element as there is only one most recent, and being the ‘last’ one in the list, will have nothing following it.Then, a List is always ordered so why is the latest always at index 0, is it being inserted?Alsobut in long list it will iterate every object freezing my UIMay not be the case. Realm objects are lazily loaded to sorting a very very (very) large list should have very little impact on the UI. However, if it does, an asynchronous or background task may be in order. I would test it first.",
"username": "Jay"
}
] | Sorting based on reference elements | 2023-08-03T08:52:25.764Z | Sorting based on reference elements | 573 |
null | [
"aggregation"
] | [
{
"code": "db.users.aggregate([\n {\n \"$match\": {\n \"sourceId\": \"643d2b71183ef6ad50889c0d\"\n }\n },\n {\n \"$lookup\": {\n \"from\": \"games\",\n \"let\": {\n \"gameIds\": \"$gameIds\"\n },\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$in\": [\n \"$_id\",\n \"$$gameIds\"\n ]\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 1,\n \"name\": 1,\n \"logo\": 1\n }\n }\n ],\n \"as\": \"gameIds\"\n }\n },\n {\n \"$addFields\": {\n \"lastActivity\": {\n \"$max\": {\n \"$map\": {\n \"input\": {\n \"$cond\": {\n \"if\": {\n \"$eq\": [\n [\n\n ],\n [\n\n ]\n ]\n },\n \"then\": \"$activities\",\n \"else\": {\n \"$filter\": {\n \"input\": \"$activities\",\n \"as\": \"activity\",\n \"cond\": {\n \"$in\": [\n \"$$activity.activityId\",\n [\n\n ]\n ]\n }\n }\n }\n }\n },\n \"as\": \"activity\",\n \"in\": \"$$activity.lastActivity\"\n }\n }\n },\n \"score\": {\n \"$ifNull\": [\n \"$score\",\n 0\n ]\n },\n \"badgesCount\": {\n \"$size\": {\n \"$cond\": {\n \"if\": {\n \"$ne\": [\n [\n\n ],\n [\n\n ]\n ]\n },\n \"then\": {\n \"$filter\": {\n \"input\": \"$badges\",\n \"as\": \"badge\",\n \"cond\": {\n \"$in\": [\n \"$$badge.activityId\",\n [\n\n ]\n ]\n }\n }\n },\n \"else\": \"$badges\"\n }\n }\n },\n \"gamesCount\": {\n \"$size\": \"$gameIds\"\n }\n }\n },\n {\n \"$sort\": {\n \"lastActivity\": -1\n }\n },\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 100\n },\n {\n \"$project\": {\n \"badgesCount\": 1,\n \"usernameFormatted\": 1,\n \"gameIds\": 1,\n \"score\": 1,\n \"gamesCount\": 1,\n \"lastActivity\": 1\n }\n }\n ])\n",
"text": "Generally, the user collection has 270K documents, and the specific source has 3.7K users. Request takes 33 seconds on the server. And when I run the same query from DataGrip for more than 10 seconds, Any suggestions?I have indexes for sourceId, asc and desc indexes for scores, and usernameFormatted as I have sorting by them.",
"username": "Ani_Davtyan"
},
{
"code": "$skip",
"text": "Hi @Ani_Davtyan and welcome to MongoDB community forums!!Based on the query posted, I see that you have been using sort, skip and limit in your aggregation pipeline. As mentioned in the MongoDB documentation make sure to include at least one field in your sort that contains unique values, before passing results to the $skip stage, the field used in the sort stage has unique values.In saying so, I would be able to help you in more depth, if you could help with a few information regarding the deployment.Request takes 33 seconds on the server.Is 33 seconds for the query execution looks desired time for the query to do the processing? If not, could you share the explain output for the query?\nAlso, the server mentioned above, is this the MongoDb server you are talking about ?Finally, since we do not have enough expertise on the DataGrip we might not be able to assist you completely with the IDE and would recommend using the JetBrains Community forums for details assistance.Please feel free to reach out in case of further queries.Regards\nAasawari",
"username": "Aasawari"
}
] | Aggregate $lookup and $sort takes so long time | 2023-07-27T14:31:30.058Z | Aggregate $lookup and $sort takes so long time | 338 |
null | [
"java",
"spring-data-odm"
] | [
{
"code": "spring.data.mongodb.uri=mongodb://localhost:8000\nspring.data.mongodb.username=torben\nspring.data.mongodb.host=localhost\nspring.data.mongodb.database=te\npackage com.example.demo3;\n\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\nimport org.springframework.context.annotation.ComponentScan;\n\n@SpringBootApplication\n@ComponentScan(basePackages = {\"com.example.demo3\", \"com.example.demo3.test\"})\npublic class Demo3Application {\n public static void main(String[] args) {\n\tSpringApplication.run(Demo3Application.class, args);\n }\nhttp://localhost:8080/ . ____ _ __ _ _\n /\\\\ / ___'_ __ _ _(_)_ __ __ _ \\ \\ \\ \\\n( ( )\\___ | '_ | '_| | '_ \\/ _` | \\ \\ \\ \\\n \\\\/ ___)| |_)| | | | | || (_| | ) ) ) )\n ' |____| .__|_| |_|_| |_\\__, | / / / /\n =========|_|==============|___/=/_/_/_/\n :: Spring Boot :: (v3.1.2)\n\n2023-08-02T01:00:40.649+02:00 INFO 68540 --- [ main] com.example.demo3.Demo3Application : Starting Demo3Application using Java 17.0.7 with PID 68540 (/home/t/Dokumente/demo3/target/classes started by t in /home/t/Dokumente/demo3)\n2023-08-02T01:00:40.652+02:00 INFO 68540 --- [ main] com.example.demo3.Demo3Application : No active profile set, falling back to 1 default profile: \"default\"\n2023-08-02T01:00:40.965+02:00 INFO 68540 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.\n2023-08-02T01:00:40.974+02:00 INFO 68540 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 6 ms. Found 0 MongoDB repository interfaces.\n2023-08-02T01:00:41.197+02:00 INFO 68540 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)\n2023-08-02T01:00:41.203+02:00 INFO 68540 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]\n2023-08-02T01:00:41.203+02:00 INFO 68540 --- [ main] o.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/10.1.11]\n2023-08-02T01:00:41.263+02:00 INFO 68540 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext\n2023-08-02T01:00:41.264+02:00 INFO 68540 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 579 ms\n2023-08-02T01:00:41.490+02:00 INFO 68540 --- [ main] org.mongodb.driver.client : MongoClient with metadata {\"driver\": {\"name\": \"mongo-java-driver|sync|spring-boot\", \"version\": \"4.9.1\"}, \"os\": {\"type\": \"Linux\", \"name\": \"Linux\", \"architecture\": \"amd64\", \"version\": \"5.15.0-78-generic\"}, \"platform\": \"Java/Eclipse Adoptium/17.0.7+7\"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, CollectionCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.client.model.mql.ExpressionCodecProvider@2839e3c8, com.mongodb.Jep395RecordCodecProvider@66bf40e5]}, loggerSettings=LoggerSettings{maxDocumentLength=1000}, 
clusterSettings={hosts=[localhost:8000], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='30000 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=JAVA_LEGACY, serverApi=null, autoEncryptionSettings=null, contextProvider=null}\n2023-08-02T01:00:41.502+02:00 INFO 68540 --- [-localhost:8000] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=localhost:8000, type=STANDALONE, state=CONNECTED, ok=true, minWireVersion=0, maxWireVersion=17, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=9443740}\n2023-08-02T01:00:41.611+02:00 INFO 68540 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''\n2023-08-02T01:00:41.618+02:00 INFO 68540 --- [ main] com.example.demo3.Demo3Application : Started Demo3Application in 1.172 seconds (process running for 1.365)\n2023-08-02T01:02:15.267+02:00 INFO 68540 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'\n2023-08-02T01:02:15.267+02:00 INFO 68540 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'\n2023-08-02T01:02:15.268+02:00 INFO 68540 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 1 ms\n2023-08-02T01:02:15.275+02:00 ERROR 68540 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed: org.springframework.http.InvalidMediaTypeException: Invalid mime type \"application/\": does not contain subtype after '/'] with root cause\n\norg.springframework.util.InvalidMimeTypeException: Invalid mime type \"application/\": does not contain subtype after '/'\n at org.springframework.util.MimeTypeUtils.parseMimeTypeInternal(MimeTypeUtils.java:232) ~[spring-core-6.0.11.jar:6.0.11]\n\n",
"text": "I have the following application.propertiesAnd I have the following codeBut when I make a Requet with Postman and type http://localhost:8080/I get the following ExceptionWhat did I do wrong? Spring itself says that there is a http server via Tomcat at 8080.",
"username": "Torben_Jox"
},
{
"code": "org.springframework.util.InvalidMimeTypeException: Invalid mime type \"application/\": does not contain subtype after '/'\n at org.springframework.util.MimeTypeUtils.parseMimeTypeInternal(MimeTypeUtils.java:232) ~[spring-core-6.0.11.jar:6.0.11]\n\nInvalid mime type \"application/\": does not contain subtype after '/'Content-Type: application/Content-Type: application/json",
"text": "Nothing to do with mongo.Invalid mime type \"application/\": does not contain subtype after '/'The request does not have a complete header. Probably content-type or accept header.Postman is sending a header like Content-Type: application/ where it need the subtype like: Content-Type: application/json",
"username": "chris"
}
] | MongoDB and Tomcat Server Problems | 2023-08-02T10:49:50.182Z | MongoDB and Tomcat Server Problems | 605 |
null | [] | [
{
"code": "",
"text": "Hi!I have just started “M312: Diagnostics and Debugging” and can’t the Vagrant image set up in Lecture: Installing Vagrant for M312 for my ARM64 M1 Mac. Does somebody have a Vagrant/provider/image combo that works?Thanks,Jochen",
"username": "Jochen_Schneider"
},
{
"code": "",
"text": "Having the same issue…",
"username": "Ade_Williams1"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | M312: Diagnostics and Debugging Vagrant box on M1 ARM64 Mac | 2023-07-26T16:13:40.650Z | M312: Diagnostics and Debugging Vagrant box on M1 ARM64 Mac | 607 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hi,I am trying to migrate mongodb data from On Premise server to Azure VM using replica set. The version of MongoDB is 2.6.12 and the data is about 6.5 TiB.For one of the server which had less amount of data, replication was successful but for this server, the replica set on secondary node is not acquiring secondary state and I am getting error as “Too many files open”.I have also set the limit for open files to 700000000, core file size to unlimited, file size to unlimited, max memory size to unlimited, max locked memory to unlimited, virtual memory to unlimited and file locks to unlimited but all in vain…After making all the changes to the replica set in secondary node, I always get an error of too many open files on reaching to a point at 1937 GB.Please suggest me of some ways to make this data get migrated from on premise to Azure VM.",
"username": "Jaya_Verma"
},
{
"code": "cat /proc/$(pgrep -x mongod)/limits",
"text": "The version of MongoDB is 2.6.12 and the data is about 6.5 TiB.This is ancient. Current versions are 4.4, 5.0, 6.0.\n4.4 is EoL February 2024\n5.0 is EoL October 2024So you should target an upgrade to 6.0I have also set the limit for open files to 700000000, core file size to unlimited, file size to unlimited, max memory size to unlimited, max locked memory to unlimited, virtual memory to unlimited and file locks to unlimited but all in vain…It does not appear they are getting set in the correct places. Check they are being set in the init/systemd script that is invoking mongod.Check the running limits of mongod: cat /proc/$(pgrep -x mongod)/limits to confirm if they are set or not.",
"username": "chris"
},
{
"code": "",
"text": "We can’t upgrade now. Upgradation is planned after the data migration and go live in cloud.",
"username": "Jaya_Verma"
},
{
"code": "",
"text": "This will be an epic upgrade to a current version!Lots of upgrade notes and many major changes to take care of along the way. And don’t forget the application drivers.\n2.6 → 3.0 → 3.2 → 3.4 → 3.6 → 4.0 → 4.2 → 4.4",
"username": "chris"
},
{
"code": "",
"text": "Can you please let me know of some way to replicate the data from my current on premise version to cloud…I know the upgrade will be epic😜",
"username": "Jaya_Verma"
},
{
"code": "",
"text": "You’re on the right course adding a replica. You need to see why the limits are not being set, my guess is they are not being set correctly in the systemd/upstart/init.But the output of the command in my previous post will confirm that.",
"username": "chris"
}
] | Migrating MongoDB data from on premise to Azure VM | 2023-08-03T03:31:10.821Z | Migrating MongoDB data from on premise to Azure VM | 544 |
[] | [
{
"code": "",
"text": "Hi,\nMy cluster have one host is down, the last message is : “we are deploying your changes: 0 of 1 servers complete (current actions: configuring mongodb, resyncing 1 server)” and after 2 days, the current message is “We are deploying your changes: 1 of 3 servers complete (current actions: waiting for 1 server to be healthy, capturing backup snapshot)”.\n\nI have to wait or can I do something? how to restart?, remove and add another?\nThanks,",
"username": "Son_Quach"
},
{
"code": "",
"text": "Did you make any chages to your cluster?Try opening a support ticket or use the in-app support if you don’t have a suuport subscription.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Host is down for a long time | 2023-08-03T03:56:41.668Z | Host is down for a long time | 351 |
|
null | [
"replication"
] | [
{
"code": "",
"text": "hi,\nI have a replica set version 3.6.63 and almost every evening around 21:00 the server goes down, when I open the log file I see this message:2023-07-30T21:06:48.847+0300 I COMMAND [conn14689698] command local.oplog.rs command: find { find: “oplog.rs”, filter: { ts: { $exists: true } }, sort: { $natural: 1 }, skip: 0, limit: 1, batchSize: 1, singleBatch: true, maxTimeMS: 3000, $readPreference: { mode: “secondaryPreferred” }, $db: “local” } planSummary: COLLSCAN exception: operation exceeded time limit code:ExceededTimeLimit numYields:0 reslen:246 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_query 3417msHas anyone encountered this phenomenon? Why does the system scan the file oplog.rs?The question was why was LOCAL’s collection even scanned?\nIs this an automatic operation?",
"username": "Amit_Faibish"
},
{
"code": "",
"text": "Has anyone encountered this phenomenon? Why does the system scan the file oplog.rs?This is likely just replication on a secondary pulling the oplog. This is a normal INFO log message and not related to your server stopping.The question was why was LOCAL’s collection even scanned?I think the question should be: Why did it stop. Without logs indicating a shutdown or fatal condition I would suggest that monod was killed by an OOM event killing processes. Check they system and kernel messages to verify this.If there is memory contention move other processes or mongo to another system to eliminate the contention, or add more memory to the system.",
"username": "chris"
}
] | Connection to mongodb is unavailable1 | 2023-08-03T10:40:05.920Z | Connection to mongodb is unavailable1 | 430 |
null | [
"server",
"storage"
] | [
{
"code": "mongod --dbpathmongod --dbpath ~/Downloads/restore-64caaf6f53157227c9123456/\n2023-08-02T18:43:53.776-0400 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2023-08-02T18:43:53.804-0400 I CONTROL [initandlisten] MongoDB starting : pid=92287 port=27017 dbpath=/Users/username/Downloads/restore-64caaf6f53157227c9123456/ 64-bit host=Adminstrators-MacBook-Pro.local\n2023-08-02T18:43:53.804-0400 I CONTROL [initandlisten] db version v4.0.3\n2023-08-02T18:43:53.804-0400 I CONTROL [initandlisten] git version: 7ea530946fa7880364d88c8d8b6026bbc9ffa48c\n2023-08-02T18:43:53.804-0400 I CONTROL [initandlisten] allocator: system\n2023-08-02T18:43:53.804-0400 I CONTROL [initandlisten] modules: none\n2023-08-02T18:43:53.804-0400 I CONTROL [initandlisten] build environment:\n2023-08-02T18:43:53.804-0400 I CONTROL [initandlisten] distarch: x86_64\n2023-08-02T18:43:53.804-0400 I CONTROL [initandlisten] target_arch: x86_64\n2023-08-02T18:43:53.805-0400 I CONTROL [initandlisten] options: { storage: { dbPath: \"/Users/username/Downloads/restore-64caaf6f53157227c9123456/\" } }\n2023-08-02T18:43:53.807-0400 I CONTROL [initandlisten] machdep.cpu.extfeatures unavailable\n2023-08-02T18:43:53.807-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7680M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),\n2023-08-02T18:43:53.911-0400 E STORAGE [initandlisten] WiredTiger error (-31802) [1691016233:911251][92287:0x206e0d600], connection: __log_open_verify, 1015: unsupported WiredTiger file version: this build only supports versions up to 3, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1691016233:911251][92287:0x206e0d600], connection: __log_open_verify, 1015: unsupported WiredTiger file version: this build only supports versions up to 3, and the file is version 5: WT_ERROR: non-specific WiredTiger error\n2023-08-02T18:43:53.946-0400 E STORAGE [initandlisten] WiredTiger error (-31802) [1691016233:946039][92287:0x206e0d600], connection: __log_open_verify, 1015: unsupported WiredTiger file version: this build only supports versions up to 3, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1691016233:946039][92287:0x206e0d600], connection: __log_open_verify, 1015: unsupported WiredTiger file version: this build only supports versions up to 3, and the file is version 5: WT_ERROR: non-specific WiredTiger error\n2023-08-02T18:43:53.966-0400 E STORAGE [initandlisten] WiredTiger error (-31802) [1691016233:966886][92287:0x206e0d600], connection: __log_open_verify, 1015: unsupported WiredTiger file version: this build only supports versions up to 3, and the file is version 5: WT_ERROR: non-specific WiredTiger error Raw: [1691016233:966886][92287:0x206e0d600], connection: __log_open_verify, 1015: unsupported WiredTiger file version: this build only supports versions up to 3, and the file is version 5: WT_ERROR: non-specific WiredTiger error\n2023-08-02T18:43:53.987-0400 W STORAGE [initandlisten] Failed to start up WiredTiger under any compatibility version.\n2023-08-02T18:43:53.987-0400 F STORAGE [initandlisten] Reason: -31802: WT_ERROR: non-specific WiredTiger error\n2023-08-02T18:43:53.987-0400 F - [initandlisten] Fatal Assertion 28595 at src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp 
645\n2023-08-02T18:43:53.987-0400 F - [initandlisten] \n\n***aborting after fassert() failure\nmongod --dbpath ~/Downloads/restore-64caaf6f53157227c9123456/\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.657-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.659-04:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.660-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":93877,\"port\":27017,\"dbPath\":\"/Users/username/Downloads/restore-64caaf6f53157227c9123456/\",\"architecture\":\"64-bit\",\"host\":\"Adminstrators-MacBook-Pro.local\"}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.660-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"4.4.21\",\"gitVersion\":\"07fb62484a27e3e464ecdd6c746de64e53e19e56\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.660-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.6.0\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.660-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"storage\":{\"dbPath\":\"/Users/username/Downloads/restore-64caaf6f53157227c9123456/\"}}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.661-04:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7680M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],\"}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.765-04:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":4671205, \"ctx\":\"initandlisten\",\"msg\":\"This version of MongoDB is too recent to start up on the existing data files. 
Try MongoDB 4.2 or earlier.\"}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.766-04:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23089, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":4671205,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":923}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.766-04:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23090, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.766-04:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"initandlisten\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Abort trap: 6).\\n\"}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31431, \"ctx\":\"initandlisten\",\"msg\":\"BACKTRACE: {bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"10662336C\",\"b\":\"10442B000\",\"o\":\"21F836C\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE\",\"s+\":\"10C\"},{\"a\":\"106624A68\",\"b\":\"10442B000\",\"o\":\"21F9A68\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"28\"},{\"a\":\"1066225AB\",\"b\":\"10442B000\",\"o\":\"21F75AB\",\"s\":\"_ZN5mongo12_GLOBAL__N_116abruptQuitActionEiP9__siginfoPv\",\"s+\":\"BB\"},{\"a\":\"7FF8008B6DFD\",\"b\":\"7FF8008B3000\",\"o\":\"3DFD\",\"s\":\"_sigtramp\",\"s+\":\"1D\"},{\"a\":\"4\"},{\"a\":\"7FF8007ECD24\",\"b\":\"7FF80076B000\",\"o\":\"81D24\",\"s\":\"abort\",\"s+\":\"7B\"},{\"a\":\"106605CE7\",\"b\":\"10442B000\",\"o\":\"21DACE7\",\"s\":\"_ZN5mongo25fassertFailedWithLocationEiPKcj\",\"s+\":\"197\"},{\"a\":\"1044AE3B6\",\"b\":\"10442B000\",\"o\":\"833B6\",\"s\":\"_ZN5mongo18WiredTigerKVEngine15_openWiredTigerERKNSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEES9_\",\"s+\":\"7D6\"},{\"a\":\"1044AB8A8\",\"b\":\"10442B000\",\"o\":\"808A8\",\"s\":\"_ZN5mongo18WiredTigerKVEngineC2ERKNSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEES9_PNS_11ClockSourceES9_mmbbbb\",\"s+\":\"15D8\"},{\"a\":\"1044B0188\",\"b\":\"10442B000\",\"o\":\"85188\",\"s\":\"_ZN5mongo18WiredTigerKVEngineC1ERKNSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEES9_PNS_11ClockSourceES9_mmbbbb\",\"s+\":\"38\"},{\"a\":\"104488327\",\"b\":\"10442B000\",\"o\":\"5D327\",\"s\":\"_ZNK5mongo12_GLOBAL__N_117WiredTigerFactory6createERKNS_19StorageGlobalParamsEPKNS_21StorageEngineLockFileE\",\"s+\":\"337\"},{\"a\":\"104F4741A\",\"b\":\"10442B000\",\"o\":\"B1C41A\",\"s\":\"_ZN5mongo23initializeStorageEngineEPNS_14ServiceContextENS_22StorageEngineInitFlagsE\",\"s+\":\"77A\"},{\"a\":\"104438283\",\"b\":\"10442B000\",\"o\":\"D283\",\"s\":\"_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi\",\"s+\":\"463\"},{\"a\":\"10442DDBA\",\"b\":\"10442B000\",\"o\":\"2DBA\",\"s\":\"_ZN5mongo12_GLOBAL__N_111mongoDbMainEiPPc\",\"s+\":\"161A\"},{\"a\":\"10442C799\",\"b\":\"10442B000\",\"o\":\"1799\",\"s\":\"main\",\"s+\":\"9\"},{\"a\":\"208A4252E\"},{\"a\":\"5\"}],\"processInfo\":{\"mongodbVersion\":\"4.4.21\",\"gitVersion\":\"07fb62484a27e3e464ecdd6c746de64e53e19e56\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Darwin\",\"release\":\"21.6.0\",\"version\":\"Darwin Kernel Version 21.6.0: Mon Aug 22 20:20:05 PDT 2022; root:xnu-8020.140.49~2/RELEASE_ARM64_T8101\",\"machine\":\"x86_64\"},\"somap\":[{\"path\":\"/usr/local/Cellar/[email 
protected]/4.4.21/bin/mongod\",\"machType\":2,\"b\":\"10442B000\",\"vmaddr\":\"100000000\",\"buildId\":\"D2F1B69B6822388890FAD35A3CF5B151\"},{\"path\":\"/usr/lib/system/libsystem_c.dylib\",\"machType\":6,\"b\":\"7FF80076B000\",\"vmaddr\":\"7FF8001EB000\",\"buildId\":\"E42E9D7A03B4340BB61EDCD45FD4ACC0\"},{\"path\":\"/usr/lib/system/libsystem_platform.dylib\",\"machType\":6,\"b\":\"7FF8008B3000\",\"vmaddr\":\"7FF800333000\",\"buildId\":\"A8A337746D4435E9AD2ABAD9E4D5192A\"}]}}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"10662336C\",\"b\":\"10442B000\",\"o\":\"21F836C\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE\",\"s+\":\"10C\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"106624A68\",\"b\":\"10442B000\",\"o\":\"21F9A68\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"28\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"1066225AB\",\"b\":\"10442B000\",\"o\":\"21F75AB\",\"s\":\"_ZN5mongo12_GLOBAL__N_116abruptQuitActionEiP9__siginfoPv\",\"s+\":\"BB\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF8008B6DFD\",\"b\":\"7FF8008B3000\",\"o\":\"3DFD\",\"s\":\"_sigtramp\",\"s+\":\"1D\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"4\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF8007ECD24\",\"b\":\"7FF80076B000\",\"o\":\"81D24\",\"s\":\"abort\",\"s+\":\"7B\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"106605CE7\",\"b\":\"10442B000\",\"o\":\"21DACE7\",\"s\":\"_ZN5mongo25fassertFailedWithLocationEiPKcj\",\"s+\":\"197\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"1044AE3B6\",\"b\":\"10442B000\",\"o\":\"833B6\",\"s\":\"_ZN5mongo18WiredTigerKVEngine15_openWiredTigerERKNSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEES9_\",\"s+\":\"7D6\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"1044AB8A8\",\"b\":\"10442B000\",\"o\":\"808A8\",\"s\":\"_ZN5mongo18WiredTigerKVEngineC2ERKNSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEES9_PNS_11ClockSourceES9_mmbbbb\",\"s+\":\"15D8\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"1044B0188\",\"b\":\"10442B000\",\"o\":\"85188\",\"s\":\"_ZN5mongo18WiredTigerKVEngineC1ERKNSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcEEEES9_PNS_11ClockSourceES9_mmbbbb\",\"s+\":\"38\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"104488327\",\"b\":\"10442B000\",\"o\":\"5D327\",\"s\":\"_ZNK5mongo12_GLOBAL__N_117WiredTigerFactory6createERKNS_19StorageGlobalParamsEPKNS_21StorageEngineLockFileE\",\"s+\":\"337\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"104F4741A\",\"b\":\"10442B000\",\"o\":\"B1C41A\",\"s\":\"_ZN5mongo23initializeStorageEngineEPNS_14ServiceContextENS_22StorageEngineInitFlagsE\",\"s+\":\"77A\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"104438283\",\"b\":\"10442B000\",\"o\":\"D283\",\"s\":\"_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi\",\"s+\":\"463\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"10442DDBA\",\"b\":\"10442B000\",\"o\":\"2DBA\",\"s\":\"_ZN5mongo12_GLOBAL__N_111mongoDbMainEiPPc\",\"s+\":\"161A\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"10442C799\",\"b\":\"10442B000\",\"o\":\"1799\",\"s\":\"main\",\"s+\":\"9\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"208A4252E\"}}}\n{\"t\":{\"$date\":\"2023-08-02T19:07:59.769-04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"initandlisten\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"5\"}}}\nAbort trap: 6\n",
"text": "Hello,I recently downloaded a backup of my production database. I was hoping to be able to connect to this downloaded (and extracted) backup data using mongod --dbpath. But I am not able to connect to these local files. I tried it using 2 different mongod versions, 4.0.3 and 4.4.21. Both gave me different errors. My production db shows its running on version 4.4.23 as per Atlas.Here are the details of errors on both versions.mongod version 4.0.3mongod version 4.4.21Any help here is highly appreciated. I need to fix some production documents and restoring from backup is not an option for me (will result in too much data loss).regards",
"username": "Vinit_Acharekar"
},
{
"code": "",
"text": "Hi @Vinit_Acharekar, welcome to the forums.Try 4.2., 4.0 is reporting the file version is newer than it supports and 4.4 is too new.Here is an old post that helps find the last version that sucessfully started the DB the backup was taken from.",
"username": "chris"
}
] | Not able to start mongodb using dbpath | 2023-08-02T23:11:11.740Z | Not able to start mongodb using dbpath | 632 |
null | [
"queries",
"cxx",
"c-driver"
] | [
{
"code": "_CONSTEXPR20 void _Container_base12::_Swap_proxy_and_iterators_unlocked(_Container_base12& _Right) noexcept {\n _Container_proxy* _Temp = _Myproxy;\n _Myproxy = _Right._Myproxy;\n _Right._Myproxy = _Temp;\n\n if (_Myproxy) {\n _Myproxy->_Mycont = this; //Error occurs here at this line\n }\n\n if (_Right._Myproxy) {\n _Right._Myproxy->_Mycont = &_Right;\n }\n}\nstd::tuple<std::string, std::string, std::string> learning::MongoDB::findDocument(const std::string& value)\n{\n std::string key;\n if (value.find('@') != std::string::npos) {\n // Contains '@' symbol, so it looks like an email\n key = \"email\";\n }\n else {\n // Doesn't contain '@', so it looks like a username\n key = \"username\";\n }\n // Add query filter argument in find\n\tauto find_one_filtered_result = loginInfoCollection.find_one(bsoncxx::builder::basic::make_document(bsoncxx::builder::basic::kvp(key, value)));\n \n if (!find_one_filtered_result) {\n return { \"\", \"\", \"\" };; // No data found\n }\n\n // Extract the first document from the cursor\n\tauto document = *find_one_filtered_result;\n\n // Extract the individual components of the retrieved data\n std::string retrievedUsername = std::string(document[\"username\"].get_string().value);\n std::string retrievedEmail = std::string(document[\"email\"].get_string().value);\n std::string retrievedPassword = std::string(document[\"pwd\"].get_string().value);\n return { retrievedUsername, retrievedEmail, retrievedPassword };\n}\n>\tbsoncxx.dll!std::_Container_base12::_Swap_proxy_and_iterators_unlocked(std::_Container_base12 & _Right) Line 1255\tC++\n \tbsoncxx.dll!std::_Container_base12::_Swap_proxy_and_iterators_locked(std::_Container_base12 & _Right) Line 1093\tC++\n \tbsoncxx.dll!std::_Container_base12::_Swap_proxy_and_iterators(std::_Container_base12 & _Right) Line 1276\tC++\n \tbsoncxx.dll!std::string::_Swap_proxy_and_iterators(std::string & _Right) Line 5036\tC++\n \tbsoncxx.dll!std::string::_Take_contents(std::string & _Right) Line 3160\tC++\n \tbsoncxx.dll!std::string::basic_string<char,std::char_traits<char>,std::allocator<char>>(std::string && _Right) Line 2893\tC++\n \tbsoncxx.dll!bsoncxx::v_noabi::builder::core::key_owned(std::string key) Line 274\tC++\n \t[Inline Frame] project.exe!bsoncxx::v_noabi::builder::basic::sub_document::append_(std::tuple<std::string &,std::string const &> &&) Line 76\tC++\n \t[Inline Frame] project.exe!bsoncxx::v_noabi::builder::basic::sub_document::append(std::tuple<std::string &,std::string const &> &&) Line 47\tC++\n \t[Inline Frame] project.exe!bsoncxx::v_noabi::builder::basic::make_document(std::tuple<std::string &,std::string const &> &&) Line 112\tC++\n \tproject.exe!learning::MongoDB::findDocument(const std::string & value) Line 59\tC++\n \tproject.exe!LoginPageState::update(sf::Time deltaTime) Line 164\tC++\n \tproject.exe!main() Line 6\tC++\n \t[External Code]\t\n\n",
"text": "Getting a runtime error in a function which i made to find a specific document, it gives the error in the xmemory file at line 1255 asException thrown: read access violation.\nthis->_Myproxy was 0xFFFFFFFFFFFFFFFF.The code in the xmemory where the error occurs looks like :This is the function use to find the specific document. And it looks like the filter variable initialization is giving errors. Also the code seems to work in Debug mode, but it doesnt in the Release mode.The call stackAm i doing it wrong?? Is there another way to find specific value in a collection??I even checked if the collection is valid or not.\nAny help would be appreciated.",
"username": "Abhay_More"
},
{
"code": "",
"text": "Hi Abhay, this seems to be the same case as https://jira.mongodb.org/browse/CXX-2707.\nPlease double check your build configuration for the libraries. All of the components (C driver, Boost, C++ Driver, Application) must agree on whether the Debug or Release CRT is in play, along with whether you are building against the Static or Dynamic version is being used. Weird string crashes are the canonical symptom of such misconfigurations.",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "How can i check that, i think the DLLs were generated using cmake.\nLast time the project failed in release mode, had the runtime error when connecting thru URI. So this time i made the configuration as Release instead of RelWithDebInfo(Release With Debug Info) when buidling and it atleast ran until i got this error(and again works good in debug mode but fails in release)",
"username": "Abhay_More"
},
{
"code": "dumpbin.exe /DEPENDENTS XYZ.dll",
"text": "It may be helpful to inspect the output of dumpbin.exe to check that your application, the C driver, Boost, and C++ driver, all use the same CRT - /DEPENDENTS | Microsoft Learndumpbin.exe /DEPENDENTS XYZ.dll",
"username": "Rishabh_Bisht"
},
{
"code": "Dump of file bsoncxx.dll\n\nFile Type: DLL\n\n Image has the following dependencies:\n\n bson-1.0.dll\n MSVCP140D.dll\n VCRUNTIME140D.dll\n VCRUNTIME140_1D.dll\n ucrtbased.dll\n KERNEL32.dll\n\n Summary\n\n 1000 .00cfg\n 1000 .data\n 2000 .idata\n 5000 .pdata\n 28000 .rdata\n 1000 .reloc\n 1000 .rsrc\n 33000 .text\n 1000 .tls\n",
"text": "Ok, so i did use the command to check content of Dumpbin for bsoncxx.dll, but i am unable to understand it\nWhat things should i be doing to figure out which DLL configuration is causing the error.",
"username": "Abhay_More"
},
{
"code": "",
"text": "I ran the same command on my end (your code works fine for me on release config in VS 2022) I see similar output as yours. Can you run the same command on your application as well?\nAlso cross check all the project configuration settings in release, specially Config Properties > C/C++ > Pre Processor > Pre processor definitions (it should have NODEBUG) .\nI could also give a try to use the libraries you compiled, if you could zip your mongo-cxx-folder and share it.",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "Also note the last message in Crash in windows when creating uri::uri where Ian was facing similar issue:building with visual studio seems to ignore -DCMAKE_BUILD_TYPE when building mongoc-driver and mongocxx-driver. you have to specify --config RelWithDebInfo as well when building them.",
"username": "Rishabh_Bisht"
},
{
"code": "Dump of file project.exe\n\nFile Type: EXECUTABLE IMAGE\n\n Image has the following dependencies:\n\n sfml-system-2.dll\n sfml-graphics-2.dll\n sfml-audio-2.dll\n sfml-window-2.dll\n tgui.dll\n bsoncxx.dll\n mongocxx.dll\n MSVCP140.dll\n VCRUNTIME140_1.dll\n VCRUNTIME140.dll\n api-ms-win-crt-runtime-l1-1-0.dll\n api-ms-win-crt-heap-l1-1-0.dll\n api-ms-win-crt-utility-l1-1-0.dll\n api-ms-win-crt-time-l1-1-0.dll\n api-ms-win-crt-math-l1-1-0.dll\n api-ms-win-crt-stdio-l1-1-0.dll\n api-ms-win-crt-locale-l1-1-0.dll\n KERNEL32.dll\n\n Summary\n\n 2000 .data\n 2000 .pdata\n D000 .rdata\n 1000 .reloc\n 1000 .rsrc\n 2D000 .text\n",
"text": "If its working in yours, am i doing something wrong from your tutorials for the installations?(i just changed the config RelWithDebInfo to Release, this time)\nalso yea so the command that i ran to get the above output is this:dumpbin /DEPENDENTS bsoncxx.dllI ran for my project.exe as well, it gavedumpbin /DEPENDENTS project.exeAs for the preprocessor defnitions, it does haveNDEBUG\n_CONSOLEThe libraries i compiled: libraries.7z - Google DriveAbout the commandscmake --build . --config RelWithDebInfo --target installThis is the build command for the mongo-c-driver\nI should also use the same command for the mongo-cxx-driver?",
"username": "Abhay_More"
},
{
"code": "",
"text": "Thanks for sharing the information!cmake --build . --config RelWithDebInfo --target installThis is the build command for the mongo-c-driver\nI should also use the same command for the mongo-cxx-driver?Ideally this shouldn’t be needed because by default mongocxx chooses release build (https://github.com/mongodb/mongo-cxx-driver/blob/master/CMakeLists.txt#L185). However VS builds by default in debug. I wonder if there’s a bug in the system which has caused the existing behaviour to change and VS override the build config. It may also explain why it works for you in debug.Could you please try to build only the C++ driver with above command, ie. specifically providing build config while building?",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "Hii… i did the command, and it worked…atleast no runtime errors so far…\nBut i did the :cmake --build . --config Release --target installI will try to build both in RelWithDebInfo build config.\nThank you",
"username": "Abhay_More"
},
{
"code": "",
"text": "Phew! That’s great to hear!",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongocxx specifc document find fails, Unhandled Exception by bsoncxx.dll | 2023-07-31T18:59:06.548Z | Mongocxx specifc document find fails, Unhandled Exception by bsoncxx.dll | 1,069 |
null | [
"flutter"
] | [
{
"code": "I/flutter ( 6870): [INFO] Realm: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false\nI/flutter ( 6870): [INFO] Realm: Connected to endpoint '52.64.157.195:443' (from '10.0.2.16:36396')\nI/flutter ( 6870): [INFO] Realm: Verifying server SSL certificate using 155 root certificates\nI/flutter ( 6870): [INFO] Realm: Connection[1]: Connected to app services with request id: \"63ecb35db314827f0e6bfb8b\"\nI/flutter ( 6870): [INFO] Realm: Connection[1]: Session[1]: Received: ERROR \"Invalid query (IDENT, QUERY): failed to parse query: query contains table not in schema: \"UserQuery\"\" (error_code=226, try_again=false, error_action=ApplicationBug)\n\nsyncErrorHandler: (syncError) {\n log().d('syncErrorHandler : ${syncError.category} $syncError');\n realm.close();\n Realm.deleteRealm(realm.config.path);\n },\n",
"text": "I’m having problems understanding the concept of schema updates in the context of Atlas Device Synced Realm. The documentation doesn’t make it clear on what is the approach to migrate breaking changes. The only 2 options available are stated as:Partner collection is more for production environment - and even then I’d rather not have every breaking change to have a partner collection… 100 breaking changes = 100x write per collection? Ideally, I’d like to have a schema versioned so that my app can handle the migration on a breaking change… anyway I digress.So I figured, option 2! Client reset each time there is a breaking change, so that the client will delete the local realm and sync up to the new schema… Nope this does not work at all. I’m currently sitting on this issue where I’ve reset the device sync service many times in App Services UI, but it is still giving me a non-descriptive error message below:I tried to delete the realm manually during the error handling but to no avail:Any assistance would be much appreciated thanks.I’m currently using the Flutter Realm 1.0.0.",
"username": "lHengl"
},
{
"code": "clientResetHandlersyncErrorHandleronManualResetFallbackclientResetError.resetRealm()onManualResetFallback",
"text": "Hi @lHengl!\nTo handle the breaking changes you have to use clientResetHandler instead of syncErrorHandler. We recommend using the “Recover or Discard Unsynced Changes Mode” strategy in such cases, since it will try to automatically recover the changes. If this is not possible then the automatic recovery fails and it tries to discard unsynced changes. In case discarding changes fails the execution will go into the onManualResetFallback, where you can prompt the users before resetting the realm file (clientResetError.resetRealm()). You can find a detailed example about onManualResetFallback implementation in “Manual Client Reset Fallback” documentation.\nFeel free to comment if anything is unclear from the documentation.",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "Thank you, I fixed my issue by uninstalling the app. I will try these strategy next I come across a breaking change and report back.",
"username": "lHengl"
},
{
"code": " RealmHandle openRealm(Configuration config) {\n final configHandle = _createConfig(config);\n final realmPtr = _realmLib.invokeGetPointer(() => _realmLib.realm_open(configHandle._pointer), \"Error opening realm at path ${config.path}\");\n return RealmHandle._(realmPtr);\n }\n late final Realm _realm;\n\n Future<Realm> openRealm(User user) async {\n log().d('openRealm : opening realm for ${user.profile.name}');\n final config = _flexibleConfig(user);\n try {\n _realm = Realm(config);\n } catch (e) {\n _realm.close(); // This gives late initialisation error\n Realm.deleteRealm(config.path); // This gives realm is already opened error\n rethrow;\n }\n }\n",
"text": "Hi @Desislava_St_Stefanova I’m coming across this issue again, however this time I really want to get to the bottom of it. Here are my finding so far:realm_core.dartBelow is the error:[log] RealmException: Error opening realm at path /data/data/fit.tick.fitapp.dev/files/mongodb-realm/fitapp-dev-kkccq/63f2c277511cef8ea5219c0f/default.realm. Error code: 18 . Message: The following changes cannot be made in additive-only schema mode:What I’ve tried so far to resolve this:However, I can’t seem to delete the realm for two reasons:Below is my code in an attempt to delete the realm:Any assistance will be appreciated. FYI, I’m following this thread for a solution: https://jira.mongodb.org/browse/DOCS-14211",
"username": "lHengl"
},
{
"code": "clientResetHandlerfinal config = Configuration.flexibleSync(currentUser, schema,\n clientResetHandler: RecoverOrDiscardUnsyncedChangesHandler(\n // All the following callbacks are optional\n onBeforeReset: (beforeResetRealm) {\n // Executed before the client reset begins.\n // Can be used to notify the user that a reset is going\n // to happen.\n },\n onAfterRecovery: (beforeResetRealm, afterResetRealm) {\n // Executed if and only if the automatic recovery has succeeded.\n },\n onAfterDiscard: (beforeResetRealm, afterResetRealm) {\n // Executed if the automatic recovery has failed\n // but the discard unsynced changes fallback has completed\n // successfully.\n },\n onManualResetFallback: (clientResetError) {\n // Automatic reset failed. Handle the reset manually here.\n // Refer to the \"Manual Client Reset Fallback\" documentation\n // for more information on what you can include here.\n },\n ));\nclientResetError.resetRealm()onManualResetFallbackclientResetHandler",
"text": "Hi @lHengl,\nGood to hear you are moving on with the Realm.\nDid you configure the clientResetHandler as follow?You don’t have to delete the realm. You can call clientResetError.resetRealm() inside onManualResetFallback and then to notify your users that they have to restart the app, for example.\nI will try to reproduce your issue. What is the schema change that you did? Is it only renaming a property?\nI will appreciate it if you can share some sample of your code using clientResetHandler.",
"username": "Desislava_St_Stefanova"
},
{
"code": "Property 'userSearches.reference' has been changed from '<references>' to '<ReferenceRealmEO>/// A wrapper singleton instance of a realm app for convenience of access to the realm app\nclass RealmApp {\n static Logger log([Set<String> tags = const {}]) => LogFactory.infrastructure.service<RealmApp>(tags);\n\n ///////////////////////////////////// STATIC\n\n static final RealmApp instance = RealmApp._internal();\n\n static Future<void> initialiseApp(AppConfiguration realmAppConfiguration) async {\n log().d('initializeApp : initialising RealmApp');\n instance._app = App(realmAppConfiguration);\n log().d('initializeApp : done');\n }\n\n ///////////////////////////////////// INSTANCE\n\n RealmApp._internal();\n\n /// Holds a single instance of the realm app which must be initialized before use\n late final App _app;\n\n /// Holds the current realm for used throughout the app\n late Realm _realm;\n Realm get realm => _realm;\n\n Future<Realm> openRealm(User user) async {\n log().d('openRealm : opening realm for ${user.profile.name}');\n final config = _flexibleConfig(user);\n try {\n _realm = Realm(config);\n } catch (e) {\n log().d('openRealm : $e'); // the error is thrown here and not in the clientResetHandler\n rethrow;\n }\n log().d('openRealm : opened realm for ${user.profile.name} at path ${_realm.config.path}');\n\n log().d('openRealm : updating sync subscription length : ${_realm.subscriptions.length}');\n\n // Add subscription to sync all objects in the realm\n _realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.add(_realm.all<TaskRealm>());\n mutableSubscriptions.add(_realm.all<ActualBodyCompRealm>());\n mutableSubscriptions.add(_realm.all<TargetBodyCompRealm>());\n mutableSubscriptions.add(_realm.all<UserPreferencesRealm>());\n mutableSubscriptions.add(_realm.all<UserSearchRealm>());\n });\n\n log().d('openRealm : updated sync subscription length : ${_realm.subscriptions.length}');\n\n log().d('openRealm : waiting for sync subscription');\n await _realm.subscriptions.waitForSynchronization();\n\n log().d('openRealm : done');\n return _realm;\n }\n\n Configuration _flexibleConfig(User user) => Configuration.flexibleSync(\n user,\n _flexibleSyncSchema,\n syncErrorHandler: (syncError) {\n log().d('syncErrorHandler : ${syncError.category} $syncError');\n switch (syncError.category) {\n case SyncErrorCategory.client:\n break;\n case SyncErrorCategory.connection:\n break;\n case SyncErrorCategory.resolve:\n break;\n case SyncErrorCategory.session:\n break;\n case SyncErrorCategory.system:\n break;\n case SyncErrorCategory.unknown:\n break;\n }\n },\n clientResetHandler: RecoverOrDiscardUnsyncedChangesHandler(\n onBeforeReset: (before) {\n log().d('clientResetHandler : onBeforeReset');\n },\n onAfterRecovery: (before, after) {\n log().d('clientResetHandler : onAfterRecovery');\n },\n onAfterDiscard: (before, after) {\n log().d('clientResetHandler : onAfterDiscard');\n },\n onManualResetFallback: (error) {\n log().d('clientResetHandler : onManualResetFallback');\n },\n ),\n );\n\n /// Logs in a user with the given credentials.\n Future<User> logIn({required Credentials credentials}) async {\n log().d('logIn : logging in with ${credentials.provider.name} credentials');\n final user = await _app.logIn(credentials);\n log().d('logIn : logged in as ${user.profile.name}');\n await openRealm(user);\n log().d('logIn : opened realm for ${user.profile.name}');\n return user;\n }\n\n /// Logs out the current user, if one exist\n Future<void> logOut() async => 
currentUser?.logOut();\n\n /// Gets the currently logged in [User]. If none exists, `null` is returned.\n User? get currentUser => _app.currentUser;\n\n /// Gets all currently logged in users.\n Iterable<User> get users => _app.users;\n\n /// Removes a [user] and their local data from the device. If the user is logged in, they will be logged out in the process.\n Future<void> removeUser({required User user}) async {\n return _app.removeUser(user);\n }\n\n /// Deletes a user and all its data from the device as well as the server.\n Future<void> deleteUser({required User user}) async {\n return _app.deleteUser(user);\n }\n\n /// Switches the [currentUser] to the one specified in [user].\n Future<Realm> switchUser({required User user}) async {\n _app.switchUser(user);\n return realm;\n }\n}\nI/flutter ( 5782): [RealmApp] : logIn : logging in with jwt credentials\nI/flutter ( 5782): [RealmApp] : logIn : logged in as YdkTtwVn9XM1UxkswW6UPvQYM7B3\nI/flutter ( 5782): [RealmApp] : openRealm : opening realm for YdkTtwVn9XM1UxkswW6UPvQYM7B3\nI/flutter ( 5782): [RealmApp] : openRealm : RealmException: Error opening realm at path /data/data/fit.tick.fitapp.dev/files/mongodb-realm/fitapp-dev-kkccq/63f2c277511cef8ea5219c0f/default.realm. Error code: 18 . Message: The following changes cannot be made in additive-only schema mode:\nI/flutter ( 5782): - Property 'userSearches.reference' has been changed from '<references>' to '<ReferenceRealmEO>'.\nI/flutter ( 5782): [INFO] Realm: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false\n[log] RealmException: Error opening realm at path /data/data/fit.tick.fitapp.dev/files/mongodb-realm/fitapp-dev-kkccq/63f2c277511cef8ea5219c0f/default.realm. Error code: 18 . Message: The following changes cannot be made in additive-only schema mode:\n- Property 'userSearches.reference' has been changed from '<references>' to '<ReferenceRealmEO>'.\n #0 _RealmCore.throwLastError.<anonymous closure> (package:realm/src/native/realm_core.dart:119:7)\n #1 using (package:ffi/src/arena.dart:124:31)\n #2 _RealmCore.throwLastError (package:realm/src/native/realm_core.dart:113:5)\n #3 _RealmLibraryEx.invokeGetPointer (package:realm/src/native/realm_core.dart:2784:17)\n #4 _RealmCore.openRealm (package:realm/src/native/realm_core.dart:599:32)\n #5 Realm._openRealm (package:realm/src/realm_class.dart:194:22)\n #6 new Realm._ (package:realm/src/realm_class.dart:149:98)\n #7 new Realm (package:realm/src/realm_class.dart:147:38)\n #8 RealmApp.openRealm (package:fitapp/infrastructure/mongodb/realm/app/realm_app.dart:36:16)\n #9 RealmApp.logIn (package:fitapp/infrastructure/mongodb/realm/app/realm_app.dart:104:11)\n <asynchronous suspension>\n #10 FirebaseRealmAuthService._realmLogIn (package:fitapp/infrastructure/hybrid/auth/firebase_realm_auth_service.dart:99:5)\n <asynchronous suspension>\n #11 FirebaseRealmAuthService._watchAuthStateChanges.<anonymous closure> (package:fitapp/infrastructure/hybrid/auth/firebase_realm_auth_service.dart:60:13)\n <asynchronous suspension>\nI/flutter ( 5782): [INFO] Realm: Connected to endpoint '52.64.157.195:443' (from '10.0.2.16:40040')\nI/flutter ( 5782): [INFO] Realm: Verifying server SSL certificate using 155 root certificates\nI/flutter ( 5782): [INFO] Realm: Connection[1]: Connected to app services with request id: \"64236ef23e632940cea87942\"\nD/EGL_emulation( 5782): app_time_stats: avg=36.71ms min=12.31ms max=123.14ms count=27\nD/EGL_emulation( 5782): app_time_stats: avg=16.68ms min=9.88ms max=21.42ms 
count=60\nI/flutter ( 5782): [INFO] Realm: Connection[1]: Session[1]: Received: ERROR \"Invalid query (IDENT, QUERY): failed to parse query: query contains table not in schema: \"userSearches\"\" (error_code=226, try_again=false, error_action=ApplicationBug)\nI/flutter ( 5782): [INFO] Realm: Connection[1]: Disconnected\n",
"text": "Hi @Desislava_St_Stefanova,The change is I made was changing the property to an embedded object type:Property 'userSearches.reference' has been changed from '<references>' to '<ReferenceRealmEO>As I’ve mentioned, the clientResetHandler is not invoked. The log does not print.I did remember when I did my first reset, the log did show up, but my reset code only printed the logs, so nothing was done about the reset. Now, subsequent reset does not seem to invoke the resetHandler.I have a singleton RealmApp class below (the error occurs in openRealm method):Here is the log for the above code which proves that the error is caught and not handled by the flexible sync configuration:",
"username": "lHengl"
},
{
"code": "clientResetErrorclientResetError",
"text": "Hi @lHengl ,We managed to reproduce the described scenario.The reason why clientResetError event doesn’t occur is because you have probably changed the schema on both sides, the client and the server. clientResetError is invoked when the schema on the server is different from the schema on the client app. So that the server is not involved here.We suppose that you already have a realm file with the old schema on the device, then you change the schema in the client app and open the same old file with the new app. In such cases the only option is to delete the local realm file as you actually do.The reason that your realm file was not deleted could be because you had opened a realm instance to the same file somewhere.It is easily reproducible if we open the realm with the old schema and then open the realm with the new schema. The first realm, which was successfully opened, should be closed before trying to delete the file. Even though the schemas are different both realms are sharing the same file.It could be the Realm Studio that holds the file if you have it opened.",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "@lHengl You may wish to launch a script to delete the realm file on the clients, or if necessary terminate sync, wait 10 minutes, and re-initiate sync, but I would advice contacting MongoDB Support directly, and have them look at what’s going on in a formal manner, because if you terminate sync, all unsynced data will be lost. But it will remove all local realm files from the apps/devices.The biggest issue with destructive changes, is all changes you need to make you want to plan for, and alert your users whether via an in-app push notification or the like that you will be shutting down the app on X day at X time with the time zone, and then you in a controlled manner, shut down the app, terminate sync, do you destructive changes and push your updates, and then reinitiate sync.A lot of companies do this as a part of their routine maintenance cycles and setup days/times with the least impact to their customers.",
"username": "Brock"
},
{
"code": "",
"text": "Also makes sure you have client reset logic in place before you terminate and re-enable sync.",
"username": "Brock"
},
{
"code": "",
"text": "Hi @Desislava_St_Stefanova, @Brock,I’ve found a workaround as described here -142667.This workaround should help to improve my experience in the mean time, and I look forward to the completion of the mentioned project Thanks for your help!",
"username": "lHengl"
},
{
"code": " try {\n Realm(configV2);\n } catch (e) {\n await user.logOut();\n Realm.deleteRealm(configV2.path);\n }\n",
"text": "@lHengl by the way if you can not delete the file even though all the realm instances are closed, be sure to logout the users before deleting the file.",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "Thank you, I fixed my issue by uninstalling the app.",
"username": "aciloc_N_A"
}
] | How to update breaking-change schema of a Synced Realm during development? | 2023-02-15T10:44:17.197Z | How to update breaking-change schema of a Synced Realm during development? | 2,699 |
null | [
"node-js",
"crud"
] | [
{
"code": "",
"text": "I’ve just upgraded my mongodb driver from an older version and I noticed that insertmany doesn’t return the inserted documents aymore.Using find right after an insert to return the created documents seems odd, is this the way to go now or is it possible to have insertmany return the created docs ?",
"username": "coffee_cup"
},
{
"code": "",
"text": "is this the way to go now or is it possible to have insertmany return the created docsat least no such feature from manual.But you are inserting the documents by calling API, so you are supposed to know what are being inserted. InsertMany can return a lot of info to you, including any failures.",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hi @coffee_cup, Looking at the node.js tag, I assume you are talking about node driver.\nPlease see the InsertManyResult | mongodb which should give you IDs of inserted documents, which is returned by insertMany (Collection | mongodb)",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "Hello,\nyea I noticed that, however the driver version I used before returned all inserted documents ( not only the ID ), but looks like thats the way to go now.Thanks anyways.",
"username": "coffee_cup"
}
] | Is it possible to make insertmany return the inserted documents? | 2023-08-02T23:55:06.146Z | Is it possible to make insertmany return the inserted documents? | 766 |
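The thread above notes that insertMany now only reports the generated _id values. As a minimal sketch of reading the full documents back from that result (the connection string, database name and collection name below are placeholder assumptions, not taken from the original poster's code):

```js
const { MongoClient } = require("mongodb");

async function insertAndFetch() {
  // Placeholder URI - replace with your own deployment's connection string
  const client = new MongoClient("mongodb://127.0.0.1:27017");
  await client.connect();
  try {
    const coll = client.db("test").collection("fruits");

    // insertMany returns acknowledgement info plus the generated _id values
    const result = await coll.insertMany([
      { name: "Apple", rating: 7 },
      { name: "Pear", rating: 6 },
    ]);
    console.log(result.insertedCount, result.insertedIds);

    // If the full documents are needed, fetch them by the returned _ids
    const ids = Object.values(result.insertedIds);
    const docs = await coll.find({ _id: { $in: ids } }).toArray();
    console.log(docs);
  } finally {
    await client.close();
  }
}

insertAndFetch().catch(console.error);
```

The extra find is an indexed lookup on _id, so the cost of reading the inserted documents back is usually small.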
null | [
"aggregation",
"queries"
] | [
{
"code": "skip()sort()count()find()",
"text": "in searching on this topic I found use cases combining the skip(), sort() and count() methods with the find(), as well as aggregation.What I would like to know is with a large dataset, which method is most efficient in retrieving the last record, especially when it needs to be done often.",
"username": "tolu_collins"
},
{
"code": "db.getCollection(\"Test\").explain().aggregate([\n{\n $sort:{\n _id:-1\n }\n},\n{\n $limit:1\n}\n])\n \"winningPlan\" : {\n \"stage\" : \"LIMIT\",\n \"limitAmount\" : 1.0,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"_id\" : 1.0\n },\n \"indexName\" : \"_id_\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"_id\" : [\n\n ]\n },\n \"isUnique\" : true,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2.0,\n \"direction\" : \"backward\",\n \"indexBounds\" : {\n \"_id\" : [\n \"[MaxKey, MinKey]\"\n ]\n }\n }\n }\n },\n",
"text": "What does your data look like? What’s the field that determines the “last” record? Last inserted or last by time field? What indexes do you have?If inserting via the driver, the driver can generate the _id field so if you have multiple processes on different machines inserting data then the _id may not guarantee insertion order over all data depending on OS Clocks.https://www.mongodb.com/docs/manual/core/document/#:~:text=In%20MongoDB%2C%20each%20document%20stored,ObjectId%20for%20the%20_id%20field.As a very basic, if you had one process inserting data then sorting by ID desc and pulling one record could result in making use of the auto-generated index on _id and be a very quick lookup:A Computer Science portal for geeks. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice/competitive programming/company interview Questions.",
"username": "John_Sewell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What is most efficient way to get the last record in a large MongoDB collection | 2023-08-03T07:28:46.439Z | What is most efficient way to get the last record in a large MongoDB collection | 359 |
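A short mongosh sketch of the index-backed approach discussed above. The collection name matches the explain example in the thread, but the createdAt field is only a placeholder for whatever field defines "last" in your data:

```js
// Last inserted document by the auto-generated _id (uses the default _id index)
db.Test.find().sort({ _id: -1 }).limit(1);

// Same idea when "last" means a timestamp field - create an index on it first
db.Test.createIndex({ createdAt: -1 });
db.Test.find().sort({ createdAt: -1 }).limit(1);

// Verify the plan: LIMIT over an IXSCAN means no in-memory sort was needed
db.Test.find().sort({ createdAt: -1 }).limit(1).explain("executionStats");
```

Either form reads a single index entry plus one document, so repeating it often stays cheap as the collection grows.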
null | [
"queries"
] | [
{
"code": "",
"text": "I have a collection for inserting the log for certain operation. We have a TTL for 30 days. Still the number is documents that are inserted into the collection is large. So we have an option to delete all logs. Currently we have some 49K documents (~42369KB) size. While deleting using deleteMany(), documents are not deleting from the collection.Is there any limit for deleteMany opertion?",
"username": "sandeep_s1"
},
{
"code": "",
"text": "Is there any limit for deleteMany opertion?I am not aware of such a limit. Most likely your query is wrong and documents that are not deleted do not match your query. Please share your query and some undeleted documents.Another pisibility is that you are writing new documents while the deleteMany is running and new documents are not considered for deleting. I do not know enough about the internals to confirm if this is a real posibility.Finally, it could be that you reach Atlas cluster limits that slows down the deleteMany to make you think that documents are not deleted. But, I think you would get a timeout error if that would be the case.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @steevej, we were using the Azure Cosmosdb for Mongo API, what really happened is that, during the bulk delete, it is well over the allotted RU’s(Request Unit). That is the reason we were not able to delete all records at a time. So based on the RU’s we are doing batch-wise delete.",
"username": "sandeep_s1"
}
] | deleteMany : delete large number of documents not working | 2023-06-29T06:54:08.864Z | deleteMany : delete large number of documents not working | 917 |
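Since the thread ends at "batch-wise deletes", here is one possible sketch of that pattern in mongosh. deleteMany itself has no limit option, so each batch is built from a page of _id values first; the logs collection name, date filter and batch size below are assumptions, not taken from the original deployment:

```js
const batchSize = 1000;
let deleted = 0;

while (true) {
  // Grab one batch of _id values that match the cleanup filter
  const ids = db.logs
    .find({ createdAt: { $lt: new Date("2023-06-01") } }, { _id: 1 })
    .limit(batchSize)
    .toArray()
    .map((doc) => doc._id);

  if (ids.length === 0) break;

  // Delete just that batch, keeping each request small
  const res = db.logs.deleteMany({ _id: { $in: ids } });
  deleted += res.deletedCount;
  print(`deleted so far: ${deleted}`);
}
```

Keeping each delete bounded is what avoids blowing past per-request throughput limits such as Cosmos DB RUs, at the cost of more round trips.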
null | [
"aggregation",
"mongodb-shell"
] | [
{
"code": "$accumulator$groupallowDiskUseByDefaulttruerunCommanddb.adminCommand(\n\t{\n\t\tsetParameter: 1,\n\t\tallowDiskUseByDefault: true\n\t}\n)\n...\ndb.runCommand({\n\t\"aggregate\":sourceCollection,\n\t\"pipeline\":pipeline,\n\tallowDiskUse: true,\n\tcursor:{},\n});\n$groupMongoServerError: PlanExecutor error during aggregation :: caused by :: Out of memory\nJavaScript execution interrupted\n$accumulator$set$group$groupallowDiskUse",
"text": "I stumbled upon two issues when migrating code from MapReduce to an aggregation pipeline using $accumulator in a $group stage on a MongoDB 6.0.8 server using mongosh 1.10.1.I explicitly set allowDiskUseByDefault to true and pass it in the runCommand for the aggregationFirst one was this OOM error when data fed into the group stage exceeded a certain size. Average size of docs going into the $group was 231KB and aggregation bombed when processing 620 such documents (~140MB), but was working up to 612 documents (138MB):I worked around this one by limiting the amount of documents processed in one run to stay well below that amount (chose max. 200 documents at a time). This being a time based aggregation, I cannot freely choose the exact amount of documents, as I’d always need to process whole hours or days respectively.The second error I stumbled over was this one:The documents processed each contain an array of objects and the $accumulator merges these arrays by adding docs to the target if it’s not present and adding up some properties and simply setting others when it is. If the number of sub-documents in these arrays exceeded a certain amount, I got the aforementioned “JavaScript execution interrupted”; in most cases it would be fine, but there were a couple of hours where we suffered a bot attack that led to an excessive amount of documents per each array.I worked around this problem by inserting an addition $set phase before the $group where I would sort these arrays by a specific key and then limit the number per array to 2,000 elements. I didn’t experiment to find out the exact limit when it would start to fail, but with the selected limit of 2,000 I consistently achieved successful aggregations.So my question is: Shouldn’t $group just spill to disk if allowDiskUse is allowed? Why is there still an OOM error and how else could this have been avoided? MapReduce just ran very long and consumed quite some memory, but I could depend upon it completing eventually - with the aggregation framework I have a somewhat bad feeling that there may be circumstances where an aggregation may bomb and we’d have to find out some workaround. Is there maybe some configuration setting I missed that would allocate more ressources to specific aggregations? This thing is running on a server with 128GB of RAM and I just assume that such a server is not that out of the ordinary, so it would be nice if larger aggregations could actually make use of the available memory?",
"username": "Markus_Wollny"
},
{
"code": "MongoServerError: PlanExecutor error during aggregation :: caused by :: Out of memory\n$accumulator$group59mongoshmongodpipelinepipeline",
"text": "Hi @Markus_Wollny,Thanks for providing the very detailed analysis. I did do some testing of my own very briefly using $accumulator and $group on ~10million documents with an average object size of 59 but wasn’t able to replicate any OOM errors. For what it’s worth, my test environment was also MongoDB 6.0.8 and I also used mongosh 1.10.1.To help with replicating this behaviour you’ve experienced, can you provide some further information:Please redact any personal or sensitive information before posting here.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "tar -zxvf impression_hourly_collection.tar.gz\nmongorestore --db=testdb --collection=bi_impression_hourly ./impression_hourly_collection/track/bi_impression_hourly.bson\n\ntime /usr/bin/mongo testdb --verbose -f ./aggregationDemo/aggregateImpressionsToDay.js\nmaxDocumentsPerRun = 1850maxDocumentsPerRun = 1800JavaScript execution interrupted",
"text": "Hello,I am sorry for the long delay, but unfortunately to to the deployment of workarounds for the original problem, I no longer have the original raw data that caused the problem and needed some time to compile a new test case to show the problem; I have provided a ZIP archive of test data and aggregation code that will reproduce the Out of Memory error; you can download the dump of the data and the aggregation code here: Sign in to your account - Link is valid for 30 days, download size is 561MB.The archive contains a bson dump of a collection of preprocessed hourly documents that need to be aggregated to days (same code can then be used to aggregare days to weeks and/or months). The source collection has 27.4k documents, average document size is 151.7KB.You’ll want to restore the dump to a test installation and then run the aggregation:This will crash (at least on my machine with 64GB RAM) with maxDocumentsPerRun = 1850, will run fine though with maxDocumentsPerRun = 1800.I haven’t tuned anything in the configuration of the mongod instance. I hope this helps to narrow down the issue. I wasn’t able to reproduce the JavaScript execution interrupted error with this data, but as I said before, sanitizing the number of nested documents in the array did the trick and this is a good enough solution for my purposes.There is no personal or sensitive information contained in the data whatsoever.Kind regardsMarkus",
"username": "Markus_Wollny"
},
{
"code": "tar -zxvf impression_hourly_collection.tar.gz\nmongorestore --db=testdb --collection=bi_impression_hourly ./impression_hourly_collection/track/bi_impression_hourly.bson\n\ntime /usr/bin/mongo testdb --verbose -f ./aggregationDemo/aggregateImpressionsToDay.js\nmaxDocumentsPerRun = 1850maxDocumentsPerRun = 1800time /usr/bin/mongo testdb --verbose -f ./aggregationDemo/aggregateImpressionsToDay.js\n/user/bin/mongomongoshConnecting to:\t\tmongodb://127.0.0.1:27017/testdb?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1\nUsing MongoDB:\t\t6.0.8\nUsing Mongosh:\t\t1.10.1\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n------\n The server generated these startup warnings when booting\n 2023-07-27T12:41:42.510+10:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n 2023-07-27T12:41:42.510+10:00: This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\n 2023-07-27T12:41:42.510+10:00: Soft rlimits for open file descriptors too low\n------\n\nLoading file: ./aggregationDemo/aggregateImpressionsToDay.js\nLimiting time to Sun Nov 27 2022 05:00:00 GMT+1000 (GMT+10:00)\nStill here, writing the lock...\nProcessing selection:\n{ '_id.t': { '$lt': ISODate(\"2022-11-26T19:00:00.000Z\") } }\nNumber of items found:\n(node:29056) [MONGODB DRIVER] Warning: cursor.count is deprecated and will be removed in the next major version, please use `collection.estimatedDocumentCount` or `collection.countDocuments` instead \n(Use `node --trace-warnings ...` to show where the warning was created)\n1849\nDone.\nmongosh testdb --verbose -f ./aggregationDemo/aggregateImpressionsToDay.js 1.01s user 0.23s system 0% cpu 7:06.42 total\n",
"text": "@Markus_Wollny - Firstly, thank you for the detail reproduction dataset and instructions.You’ll want to restore the dump to a test installation and then run the aggregation:This will crash (at least on my machine with 64GB RAM) with maxDocumentsPerRun = 1850, will run fine though with maxDocumentsPerRun = 1800.After running:I was not able to get a crash. This is my output (I did alter the /user/bin/mongo portion to instead use mongosh on my own environment).The environment i’m running this test on has around 16GB of RAM.I’ll see what else I can find out digging through the execution stats of the aggregation.Thanks for your patience.Jason",
"username": "Jason_Tran"
},
{
"code": "maxDocumentsPerRunLoading file: ./aggregationDemo/aggregateImpressionsToDay.js\nLimiting time to Sat Dec 10 2022 02:00:00 GMT+1000 (GMT+10:00)\nStill here, writing the lock...\nProcessing selection:\n{\n '_id.t': {\n '$lt': ISODate(\"2022-12-09T16:00:00.000Z\"),\n '$gte': ISODate(\"2022-11-26T19:00:00.000Z\")\n }\n}\nNumber of items found:\n(node:30490) [MONGODB DRIVER] Warning: cursor.count is deprecated and will be removed in the next major version, please use `collection.estimatedDocumentCount` or `collection.countDocuments` instead \n(Use `node --trace-warnings ...` to show where the warning was created)\n1855\nDone.\nmongosh testdb --verbose -f ./aggregationDemo/aggregateImpressionsToDay.js 1.00s user 0.16s system 0% cpu 4:58.61 total\n",
"text": "Ran it a second time with the same 1850 maxDocumentsPerRun value again to see if any changes occured:",
"username": "Jason_Tran"
},
{
"code": "~# time /usr/bin/mongo testdb --verbose -f ./aggregationDemo/aggregateImpressionsToDay.js\nCurrent Mongosh Log ID: 64c76de0abfc62b06937b75b\nConnecting to: mongodb://127.0.0.1:27017/testdb?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1\nUsing MongoDB: 6.0.8\nUsing Mongosh: 1.10.1\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n------\n The server generated these startup warnings when booting\n 2023-07-14T08:03:55.986+02:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\n 2023-07-14T08:03:58.367+02:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n------\n\nLoading file: ./aggregationDemo/aggregateImpressionsToDay.js\nLimiting time to Sat Nov 26 2022 20:00:00 GMT+0100 (Mitteleuropäische Normalzeit)\nStill here, writing the lock...\nProcessing selection:\n{ '_id.t': { '$lt': ISODate(\"2022-11-26T19:00:00.000Z\") } }\nNumber of items found:\n1849\nMongoServerError: PlanExecutor error during aggregation :: caused by :: Out of memory\n\nreal 14m21,631s\nuser 0m2,134s\nsys 0m0,256s\n/etc/mongod.confstorage:\n dbPath: /var/lib/mongodb\n directoryPerDB: true\n engine: wiredTiger\n wiredTiger:\n engineConfig:\n directoryForIndexes: true\n\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n # traceAllExceptions: true # this needs to be disabled in production\n\nnet:\n port: 27017\n bindIp: 0.0.0.0\n maxIncomingConnections: 20000\n\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\nreplication:\n oplogSizeMB: 2048\n replSetName: rs0\n",
"text": "Here’s what I got when running the demo, output differs a little from yours, so maybe there is some issue with a different config; regarding mongo vs. mongosh - I simply symlinked the mongosh to the old name, so I am really running mongosh, too.Here’s all of my /etc/mongod.conf:So I have enabled replication, though this is the sole member of its set. Could this be causing the issue? Maybe something to do with the oplog? I also noticed that I didn’t get the deprecation warning about using cursor.count, but I assume that this is irrelevant here.Thank you for looking into this!Kind regardsMarkus",
"username": "Markus_Wollny"
},
{
"code": "jason.tran@M-VNYW6V6WX4 2023-07-31_MongoDB_issue % time mongosh --verbose -f ./aggregationDemo/aggregateImpressionsToDay.js\nCurrent Mongosh Log ID:\t64cae7f649cced044592f79d\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1\nUsing MongoDB:\t\t6.0.8\nUsing Mongosh:\t\t1.10.1\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n------\n The server generated these startup warnings when booting\n 2023-08-03T09:32:47.201+10:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n 2023-08-03T09:32:47.201+10:00: This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\n 2023-08-03T09:32:47.201+10:00: Soft rlimits for open file descriptors too low\n------\n\nLoading file: ./aggregationDemo/aggregateImpressionsToDay.js\nLimiting time to Sat Apr 15 2023 01:00:00 GMT+1000 (GMT+10:00)\nStill here, writing the lock...\nProcessing selection:\n{\n '_id.t': {\n '$lt': ISODate(\"2023-04-14T15:00:00.000Z\"),\n '$gte': ISODate(\"2023-02-04T05:00:00.000Z\")\n }\n}\nNumber of items found:\n10003\nDone.\nmongosh --verbose -f ./aggregationDemo/aggregateImpressionsToDay.js 1.09s user 0.19s system 0% cpu 25:55.11 total\n",
"text": "Thanks @Markus_Wollny,I tried with a similar config file and even increased the number of max documents to 10,000 but wasn’t able to get the crash to occur.I will see if the team knows anything further about the crashes but it’s a bit difficult without being able to reproduce it on my system. In saying so, could you describe the environment details (RAM, OS, etc) in which you’re running the script / aggregation?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "free -m# LANG=C free -m\n total used free shared buff/cache available\nMem: 32101 12585 14017 64 5498 18996\nSwap: 974 107 867\nfs.file-max=786432vm.max_map_count=262144/etc/sysctl.d/30-mongodb.confmongod.service[Service]\nLimitFSIZE=infinity\nLimitCPU=infinity\nLimitAS=infinity\nLimitMEMLOCK=infinity\nLimitNOFILE=64000\nLimitNPROC=64000\nOOMScoreAdjust=-1000\n# cat /etc/apt/sources.list.d/mongodb-org-6.0.list\ndeb [ signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg] http://repo.mongodb.org/apt/debian bullseye/mongodb-org/6.0 main\n",
"text": "Hi,I’m running MongoDB/mongosh on Debian 11.7 (Bullseye), system has 32 GB RAM. This is running on a VMWare ESXi host, I have two CPU-cores assigned. Kernel is 5.10.0-23-amd64 #1 SMP Debian 5.10.179-1 (2023-05-12) x86_64 GNU/Linux. The machine has one partition of ext4 type and a total size of 75GB, 36GB of which are still available. free -m output:I have set fs.file-max=786432 and vm.max_map_count=262144 in /etc/sysctl.d/30-mongodb.conf. I have diabled transparent huge pages and adjusted the following in the mongod.service:I get mongo packages via apt:Kind regardsMarkus",
"username": "Markus_Wollny"
}
] | allowDiskUse has no effect in case of $accumulator | 2023-07-25T13:58:40.277Z | allowDiskUse has no effect in case of $accumulator | 612 |
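The array-capping workaround described in the thread (sort and then truncate the nested arrays in a $set stage before the $group) can be written with the native $sortArray and $slice operators available in the 6.0 server used here. The sketch below uses assumed field names (entries, count, siteId) rather than the real schema from the dump:

```js
db.bi_impression_hourly.aggregate(
  [
    {
      // Keep only the 2,000 "largest" nested entries per document so the
      // JavaScript $accumulator in the next stage sees a bounded input
      $set: {
        entries: {
          $slice: [
            { $sortArray: { input: "$entries", sortBy: { count: -1 } } },
            2000,
          ],
        },
      },
    },
    {
      $group: {
        _id: "$_id.siteId",
        // ... the existing $accumulator expression would go here ...
        docs: { $sum: 1 },
      },
    },
  ],
  { allowDiskUse: true }
);
```

Capping the arrays bounds the per-group state held by the JavaScript accumulator, which is the part that cannot spill to disk even with allowDiskUse enabled.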
[
"aggregation",
"atlas-search"
] | [
{
"code": "{\n scheduleDays: ['sat', 'sun']\n},\n{\n scheduleDays: ['sun']\n},\n{\n scheduleDays: ['sat']\n},\n{\n scheduleDays: ['mon', 'sun']\n} ...\n[\n {\n $match: {\n $or: [\n {\n \"scheduleDays\": {\n $eq: [\"sat\", \"sun\"],\n },\n },\n {\n \"scheduleDays\": {\n $eq: [\"sat\"],\n },\n },\n {\n \"scheduleDays\": {\n $eq: [\"sun\"],\n },\n },\n ],\n },\n },\n]\n[\n {\n $search: {\n // index: \"\",\n compound: {\n filter: [\n {\n equals: {\n path: \"isActive\",\n value: true,\n },\n },\n {\n geoWithin: {\n path: \"location\",\n circle: {\n center: {\n type: \"Point\",\n coordinates: [\n // 0,\n // 0\n ],\n },\n radius: 50000,\n },\n },\n },\n {\n compound: {\n should: [\n {\n text: {\n path: \"scheduleDays\",\n query: [\"sat\", \"sun\"],\n },\n },\n {\n text: {\n path: \"scheduleDays\",\n query: [\"sat\"],\n },\n },\n {\n text: {\n path: \"scheduleDays\",\n query: [\"sun\"],\n },\n },\n ],\n },\n },\n ],\n must: [\n {\n near: {\n origin: {\n type: \"Point\",\n coordinates: [\n // 0,\n // 0\n ],\n },\n pivot: 5000,\n path: \"location\",\n score: {\n boost: {\n value: 3,\n },\n },\n },\n },\n ],\n should: [],\n },\n count: {\n type: \"total\",\n },\n scoreDetails: true,\n highlight: {\n path: [\n // \"\"\n ],\n },\n },\n },\n {\n $limit: 20,\n },\n {\n $skip: 0,\n },\n {\n $addFields: {\n scoreDetails: {\n $meta: \"searchScoreDetails\",\n },\n score: {\n $meta: \"searchScore\",\n },\n highlights: {\n $meta: \"searchHighlights\",\n },\n total: \"$$SEARCH_META.count.total\",\n },\n },\n]\n",
"text": "I want to get the exact match value of a specific array.\ninput: { “filter.schedule” : [“sat”, “sun”]}example data :i hope trying toso, i tried convert atlas search compound query.\nimage623×626 28.3 KB\nIs it possible to change compound.filter to $eq syntax, not $in?thx.",
"username": "wrb"
},
{
"code": "$eqscheduleDaysqueryStringANDAND NOT{\n$search: {\nindex: 'default',\nqueryString: {\ndefaultPath: 'scheduleDays',\nquery: '(sun AND sat) AND NOT (mon OR tue OR wed OR thu OR fri)'\n}\n}\n}\nqueryAND NOT",
"text": "Hi @wrb and welcome to MongoDB community forums!!I see that response to your question has been pending for a while now. While I hope that the issue must have been resolved, this is how I would like to resolve the query.Currently, Atlas search indexes array elements as described in How Does Atlas Search Index Array Elements? documentation.While I would love to take this internally to the relevant team, I would like to know your specific use case (including if positioning matters) and the condition that could not be satisfied using the currently present operators .As mentioned the official MongoDB documentations for $equals, it could be used only to ObjectID, boolean, number and dates.input: { “filter.schedule” : [“sat”, “sun”]}In saying so, I assume since you are after an equivalent to $eq you want position to matter. However, in the case position doesn’t matter and since I note that the scheduleDays contains the days in a week, one workaround would be to use queryString AND and AND NOT boolean operators after deciding the days you are wanting (application side) to return in a format similar to below:The first portion of the query would be days you wish to return and the second portion (after the AND NOT would contain the remaining days) - This could possibly be calculated application side beforehand.Based off your sample documents, the above query would return the first document only.Please reach out if you have further questions.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | [atlas search] how to exact array match? | 2023-07-03T11:57:31.425Z | [atlas search] how to exact array match? | 735 |
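One possible way to combine the suggested queryString clause with the original compound filter, assuming the default index and that element position inside the array does not matter; this is a sketch, not a query verified against the poster's cluster:

```js
{
  $search: {
    index: "default",
    compound: {
      filter: [
        { equals: { path: "isActive", value: true } },
        {
          queryString: {
            defaultPath: "scheduleDays",
            // keep sat/sun and exclude every other weekday
            query: "(sat AND sun) AND NOT (mon OR tue OR wed OR thu OR fri)"
          }
        }
      ]
    }
  }
}
```

The include/exclude day lists would be built application-side for each request, as described in the answer above.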
|
null | [
"queries",
"node-js",
"mongoose-odm",
"mongodb-shell"
] | [
{
"code": " const findDocuments = function(db, callback) {\n // Get the documents collection\n const collection = db.collection('fruits');\n // Find some documents\n collection.find({}).toArray(function(err, fruits) {\n assert.equal(err, null);\n console.log(\"Found the following records\");\n console.log(fruits)\n callback(fruits);\n });\n };\n",
"text": "Hi Community,I am struck at a situation where I’m unable to go trough using mongo DB with node js.My configuration version;Now, I have done all the steps from;However, when working with following lines of code, I do following steps;“node app.js” doesn’t givve any results and keeps on as it is.Please help me here, I’m stuck here for 2 days straight without sleep, also I’m following a bootcamp where she made us create a .bash_profile which contained following lines but as of now I removed (didn’t work when present also);alias mongod=“/c/Program\\ files/MongoDB/6.0/bin/mongod.exe”\nalias mongo=“/c/Program\\ Files/MongoDB/6.0/bin/mongo.exe”Also, refer to the code below and let me know my issue or bunder.</>\nconst mongoose = require (‘mongoose’);mongoose.connect(‘mongodb://127.0.0.1:27017/fruitsDB’, {useNewUrlParser: true, useUnifiedTopology: true});const furitSchema = new mongoose.Schema({\nname: String,\nrating: Number,\nreview: String\n});const Fruit = mongoose.model(“Fruit”,furitSchema);const fruit = new Fruit({\nname:“Apple”,\nrating: 7,\nreview:“Pretty solid”\n});fruit.save();</>",
"username": "Tushar_Saraswat"
},
{
"code": "",
"text": "when working with following lines of code, I do following sI found the solution:\nRefer to this: https://www.udemy.com/course/the-complete-web-development-bootcamp/learn/lecture/12385780#questions/18017566",
"username": "Divyanshu_N_A"
}
] | Not able to run mongoDB with node.js | 2023-05-25T02:36:25.167Z | Not able to run mongoDB with node.js | 953 |
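For readers who hit the same silent hang, here is one hedged variant of the fruit example using async/await and explicit error handling, so a failed connection or save is reported instead of the script appearing to do nothing. The connection string and schema come from the question; serverSelectionTimeoutMS is just one optional knob, and exact behaviour can vary across Mongoose versions:

```js
const mongoose = require("mongoose");

async function main() {
  // Fails fast instead of hanging silently if the server is unreachable
  await mongoose.connect("mongodb://127.0.0.1:27017/fruitsDB", {
    serverSelectionTimeoutMS: 5000,
  });

  const Fruit = mongoose.model(
    "Fruit",
    new mongoose.Schema({ name: String, rating: Number, review: String })
  );

  const fruit = await Fruit.create({
    name: "Apple",
    rating: 7,
    review: "Pretty solid",
  });
  console.log("saved:", fruit._id);

  await mongoose.disconnect();
}

main().catch((err) => {
  console.error("connection or save failed:", err);
  process.exit(1);
});
```

Logging the rejection is usually enough to tell whether the problem is the server not running, a wrong connection string, or the save itself.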
[] | [
{
"code": "",
"text": "Hi!\nOur trigger appears to be unresponsive in the morning. Actions are not being preformed by the trigger.What can be wrong? I contacted the mongo project support - they did not answer a single request.",
"username": "Ilona_Hancharova"
},
{
"code": "",
"text": "Hi @Ilona_Hancharova - Welcome to the community.I contacted the mongo project support - they did not answer a single request.I assume you’ve raised a support case in this instsance but correct me if I’m wrong here. In this case, could you DM me the support case number?Note: The support case is not the same as the chat supportInstructions on how to open a chat with a support agent and raising a support case noted here. Raising a support case will require a Developer Support plan or higher.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Trigger appears to be unresponsive in the morning | 2023-08-02T13:35:11.815Z | Trigger appears to be unresponsive in the morning | 511 |
|
null | [] | [
{
"code": "bashmongo",
"text": "G’day, I’m Stennie . I joined the MongoDB engineering team in mid-2012 when we started the office in Sydney . For my first 7 years at MongoDB I was part of the Technical Services org where I was involved with support, training (internal & external), consulting, and knowledge management. In September 2019 I moved to the Developer Relations org to help scale our community interaction and self-service options.You may come across me commenting on the occasional MongoDB question on community channels including Stack Overflow, DBA Stack Exchange, Twitter, GitHub, or here on the new MongoDB Community site.When I find spare time outside of family and travel, I often contribute to open source projects I use such as mtools (Python scripts for MongoDB log analysis & launching test deployments), m (bash script for managing multiple local versions of MongoDB server), and Mongo Hacker (JavaScript extensions for the mongo shell). You may sense a subtle theme connecting these projects which happen to use different programming languages I look forward to seeing the MongoDB community continue to thrive.If you’ve found any of my contributions extra helpful, I’d be interested in feedback on your favourites (and why).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie. Nice to hear from you again.",
"username": "Steve_Hand"
},
{
"code": "m",
"text": "Hi Stennie,I’ve been an on and off student @ Mongo U. Thanks for all that you have contributed to my learning experience and thank you for sharing with us some of the cool tools that you use. I’m thinking that I may have to try them all. I’m especially looking forward to installing and running m. I’ve been hoping to find a local version manager for MongoDB for awhile now. Good thing that I clicked on this thread!Cheers:-)",
"username": "Juliette_Tworsey"
},
{
"code": "",
"text": "Nice to hear from you again. @Steve_Hand!FYI there is an Austin meetup coming up on 24th March 2020: https://www.meetup.com/en-AU/Austin-MongoDB-User-Group/.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "mmbashmm",
"text": "I’m especially looking forward to installing and running m. I’ve been hoping to find a local version manager for MongoDB for awhile now.Hi @Juliette_Tworsey,I find m handy because (a) it is just a single bash script and (b) I routinely use All The Versions of MongoDB :).m should work in most O/S environments officially supported by MongoDB (Linux/Unix/MacOS/Windows) with the caveat that you need a Linux environment on Windows (eg Ubuntu for Windows or Docker).Note: m is also only intended as a development tool. It currently doesn’t create or manage extras like MongoDB configuration files or db/log paths.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "mm",
"text": "m should work in most O/S environments officially supported by MongoDB (Linux/Unix/MacOS/Windows) with the caveat that you need a Linux environment on Windows (eg Ubuntu for Windows or Docker).Note: m is also only intended as a development tool. It currently doesn’t create or manage extras like MongoDB configuration files or db/log paths.Hi @Stennie_X,Thanks for the heads up! I appreciate it.Cheers and Happy Monday:-)",
"username": "Juliette_Tworsey"
},
{
"code": "",
"text": "Hi Stennie!Thank you so much for your help to me, past and present. I was able to log in to the “new experience” successfully with your help.This web interface, at first blush, does seem visually more interesting than the Google group. I’ll see what happens when I try posting code snippets.Thanks again for your help!Bob Cochran",
"username": "Robert_Cochran"
},
{
"code": "",
"text": "Thank you so much for your help to me, past and present. I was able to log in to the “new experience” successfully with your help.Welcome @Robert_Cochran, great to have you here! I’d especially appreciate any feedback you have on site accessibility (there’s a Site Feedback category for all feedback).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,I sure will!Bob",
"username": "Robert_Cochran"
},
{
"code": "",
"text": "I’m looking attending this once you have found a suitable reschedule date.",
"username": "Steve_Hand"
},
{
"code": "",
"text": "G’day @Steve_Hand!Although in-person user groups aren’t an option in most locations at the moment, virtual events are definitely possible if you might be interested in volunteering to present (and/or find presenters) for a session.Since virtual events can reach a much broader audience, we have set up a Global Virtual Community chapter on our new user group platform. You can contact the organisers of this (or any of our other) user groups via the MUG chapter page or start a new discussion in the User Groups forum category.We are also looking for community co-organisers for our user groups, so perhaps you may be interested in helping with the Austin MUG.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I have found you… You are this user User Stennie - Stack Overflow",
"username": "Ashish_Lal"
},
{
"code": "",
"text": " Welcome to the MongoDB Community @Ashish_Lal!That is indeed my profile on Stack Overflow, which is linked from my introduction in the first post in this topic .Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi, Stennie Thank you for your help with my questions in 2018:I appreciate your help. It was my first experience using MongoDB and related to it technologies. You were one of the first people, who helped me.For now I have to work more with other technologies, but I hope to be able to return to this soon.",
"username": "invzbl3"
},
{
"code": "",
"text": " Hi @invzbl3!I appreciate the feedback and I’m glad those discussions were helpful for getting you started with MongoDB (almost three years ago, now!).Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi there @Stennie_X , I love the new badges for first accepted solutions. Awesome! I am also super interested in trying to contribute to open source projects, and because of my extreme interest in MongoDB, I was hoping to contribute to something related to Mongo. DO you have any suggestions for someone who is new to contributing to open source? I am not unfamiliar with github, only working with others in github. I would very much like to change that and looking for any help that I can get.",
"username": "Jason_Nutt"
},
{
"code": "get-started-readmehelp-wantedgood-first-issuemongodb",
"text": "Hi @Jason_Nutt,Open source contribution normally begins from something you use (or would like to use). There’s a very broad scope of potential projects, but I assume you have specific interests in terms of tech stack and types of applications or tools to contribute to.What sort of tech stack and projects are you looking to work with?If you are looking for a collaborative project to help develop specific skills, I would consider if there are any established projects you are already using that you might be able to contribute to. Almost all of the projects I contribute to or maintain are ones I use, with the exception of occasional PRs that arise from community discussion.Starting with small changes like improvements to documentation, testing, or trying to reproduce some of the open issues are great ways to become familiar with a project and the maintainer(s). Projects that are more open to contributors will typically have a README and Contributing Guide that will help orient you. For example, see the get-started-readme from my colleague @wan or WildAid’s O-FISH project which uses MongoDB Realm.If you really aren’t fussed on what sort of project you want to contribute to, there are some tagging conventions on GitHub like help-wanted or good-first-issue. You could search relevant tagged issues with some additional keywords like mongodb and your preferred programming language. GitHub also has a short guide: Finding ways to contribute to open source on GitHub - GitHub Docs.You can also contribute to MongoDB documentation, drivers, Compass, Server, etc … but typically interest would start from a specific improvement or bug you are looking to address rather than looking through the open list of issues which others have reported.An important consideration to keep in mind with code contributions to any open source project is that once a maintainer accepts your pull request, they will end up dealing with any bugs or support requests that arise. Adding test coverage and documentation will help reassure maintainers that they are not merging a change which might lead to significantly more support work for them.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks @Stennie_X This is extremely helpful && motivating my friend! I’ll be seeing you after some reading and looking. Awesome stuff. Thanks for the direction.",
"username": "Jason_Nutt"
}
] | 🌱 G'day, I'm Stennie from MongoDB 🇦🇺 | 2020-01-30T03:23:19.632Z | :seedling: G’day, I’m Stennie from MongoDB :australia: | 9,320 |
null | [
"production",
"golang",
"transactions"
] | [
{
"code": "options.LogComponentAll",
"text": "The MongoDB Go Driver Team is pleased to release version 1.12.1 of the MongoDB Go Driver.This release fixes a bug in the Go Driver where connections are leaked if a user runs a transaction while connected to a load balancer. To resolve this issue, the Go Driver will now unpin connections when ending a session.This release fixes a logging design oversight in which enabling logging with options.LogComponentAll does not result in the publication of logs.This release fixes two runtime errors, which occur on unmarshaling an empty bson.RawValue with an invalid type, specifically the 0x00 (null) type, and on marshaling a nil pointer of ReadConcern.For more information please see the 1.12.1 release notes.You can obtain the driver source from GitHub under the v1.12.1 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,\nThe Go Driver Team",
"username": "Qingyang_Hu1"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Go Driver 1.12.1 Released | 2023-08-02T21:50:47.029Z | MongoDB Go Driver 1.12.1 Released | 574 |
null | [
"production",
"golang",
"transactions",
"change-streams"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to release version 1.11.9 of the MongoDB Go Driver.This release fixes a bug in the Go Driver where connections are leaked if a user runs a transaction while connected to a load balancer. This release will also include a new feature to allow setting batch size on a ChangeStream objected returned by a Watch method.For more information please see the 1.11.9 release notes.You can obtain the driver source from GitHub under the v1.11.9 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team",
"username": "Preston_Vasquez"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Go Driver 1.11.9 Released | 2023-08-02T20:55:04.271Z | MongoDB Go Driver 1.11.9 Released | 522 |
null | [
"flutter"
] | [
{
"code": "",
"text": "Hello all, sorry if the question was already asked (not able to find a response).Do it exist a way to keep alive a session even if the user kill the application ? Because if my understanding is correct, we have to login with the right provider when the user will open again the app. The only solution I see is to store credentials somewhere (Store Secure for example) and then perform the app.logIn() when the app is opened. But is it working with the Apple ID token for example ? I don’t want to force the user to connect each time. I have the feeling to miss something.Can someone help me to understand ?\nThanks in advance",
"username": "Remi_Mastriforti"
},
{
"code": "appvar user = app.currentUser;\nif (user == null) {\n // only need to do the login here\n}\n",
"text": "You can just ask the app instance like:The app will remember the currently logged in user across app restarts, so it will only be null if he never logged in.",
"username": "Kasper_Nielsen1"
},
{
"code": "return runApp(MultiProvider(providers: [\n ChangeNotifierProvider<AppServices>(create: (_) => AppServices(appId, baseUrl)),\n ChangeNotifierProxyProvider<AppServices, RealmServices?>(\n // RealmServices can only be initialized only if the user is logged in.\n create: (context) => null,\n update: (BuildContext context, AppServices appServices, RealmServices? realmServices) {\n // Here the currentUser is null on app instance killed and relaunch\n return appServices.app.currentUser != null ? RealmServices(appServices.app) : null;\n }),\n ], child: const App()))\n",
"text": "I tried, if the app is closed and reopened the currentUser is not null but If I kill the instance of the app, on relaunch the currentUser is null. I did my test with the project example:",
"username": "Remi_Mastriforti"
}
] | Flutter Realm - How to keep alive session after on app was killed | 2023-08-02T19:18:26.261Z | Flutter Realm - How to keep alive session after on app was killed | 590 |
null | [] | [
{
"code": "export const CaseStatus = {\n Closed: \"CLOSED\",\n New: \"NEW\",\n Open: \"OPEN\",\n};\nconst { CaseStatus } = require(\"enums.js\");\npush failed: error validating Function: enums: runtime error during function validation\n",
"text": "I have a number of enums I would like to define once, that are used in my trigger functions. Is there a way that a trigger function can load another file that contains just exported consts, not functions?E.g. if I had a file enums.js with the following contents:How do I do something like the following in a trigger function to use them?I keep getting errors like:Do I need to put them into a module instead that I load as a dependency my function can access?",
"username": "Ben_Giddins"
},
{
"code": "",
"text": "Anyone? Still looking for a solution to this.",
"username": "Ben_Giddins"
}
] | Is it possible for a trigger function to import/load/require consts from another file in the app? | 2023-04-03T18:31:58.370Z | Is it possible for a trigger function to import/load/require consts from another file in the app? | 987 |
null | [] | [
{
"code": "",
"text": "When you start a new project, a username and password are suggested. The username and password fields use a font with almost identical lowercase L and capital i.I imagine many people are like me and use a password manager on their phones, which means it’s very annoying to have to figure out which one is which on your computer every time you have to copy down info for a new project.My suggestion is to use a monospace font in those fields for disambiguation.",
"username": "Cutler_Sheridan"
},
{
"code": "",
"text": "Hello @Cutler_Sheridan ,I would recommend posting your feedback/idea at MongoBD Feedback Engine. In order to help prioritize, please include the following informationA brief description of what you are looking to doHow you think this will helpWhy this matters to youRegards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Thank you! I actually tried to find somewhere like that but wasn’t able to, appreciate the link",
"username": "Cutler_Sheridan"
}
] | FEEDBACK: Ambiguous font in Atlas project user creation fields | 2023-08-01T18:55:34.962Z | FEEDBACK: Ambiguous font in Atlas project user creation fields | 317 |
null | [
"aggregation"
] | [
{
"code": "$set$unset$push$sortstatusJournalstatusJournal \"statusJournal\" : [\n {\n \"status\" : \"PRODUCT_CREATED\",\n \"date\" : ISODate(\"2023-08-02T13:27:37.415+0000\"),\n \"description\" : \"test\",\n },\n {\n \"status\" : \"PRODUCT_OUT_FOR_DELIVERY\",\n \"date\" : ISODate(\"2023-08-30T22:00:00.000+0000\"),\n \"description\" : \"test2\",\n },\n {\n \"status\" : \"PRODUCT_DELIVERED\",\n \"date\" : ISODate(\"2023-08-03T22:00:00.000+0000\"),\n }\n ],\n$push const patchedProduct = await updateProduct(\n {productId: req.params.productId},\n {$set: patchProduct, $unset: patchProductUnsetQuery,\n $push: {statusJournal: {$each: [statusJournalEntry]}, $sort: {date: 1}}});\ndatestatusJournaldate",
"text": "I have an endpoint that updates products and among other things that use $set and $unset it also optionally uses $push whenever a product delivery status is updated. I have found the following documentation page that shows how to use $sort during update: https://www.mongodb.com/docs/manual/reference/operator/update/sort/#up._S_sortUnfortunately, I was not able to sort the statusJournal array of subdocuments inside my Product document.The statusJournal array is structured as follows:The code that performs the potential $push (among other things described above) looks like this:As you can see, I am attempting to (re)sort the array of subdocuments based on the date field value whenever a new status object is pushed, however it does not work, and a new pushed subdocument is simply placed at the end of the array. As you can see in the statusJournal example provided above, after pushing the PRODUCT_DELIVERED subdocument it appears below the PRODUCT_OUT_FOR_DELIVERY subdocument, even though the date is lesser.What am I doing wrong?Thanks",
"username": "Vladimir"
},
{
"code": "",
"text": "OR: Does it only sort the input elements and don’t touch the elements in the array that already exist?",
"username": "Vladimir"
},
{
"code": "$push: { statusJournal: { $each: [ statusJournalEntry ] , $sort: {date: 1} }\n",
"text": "According to the documentation you share you are using the syntax wrong by having the { and } and the wrong place. I think it should be",
"username": "steevej"
},
{
"code": "",
"text": "Thank you very much @steevej. Nice catch",
"username": "Vladimir"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to sort embedded array of subdocuments | 2023-08-02T16:03:40.712Z | Unable to sort embedded array of subdocuments | 304 |
[
"chicago-mug"
] | [
{
"code": "",
"text": "\nScreenshot 2023-08-01 at 3.25.33 PM1305×537 43.4 KB\nJoin us for a day filled with educational breakout sessions, customer stories, keynote address, 1:1 Ask the Experts consulting sessions, and more!At MongoDB.local Chicago, you’ll learn technologies, tools, and best practices that make it easy to build data-driven applications without distraction. Connect with our experts and customers to find new ways to build with MongoDB, hear what’s coming out, and meet developers shaking up their industries.Keynote Presentation\nLearn about the latest product announcements from a MongoDB expert. More details to come!Technical Sessions\nExperience educational technical sessions for all levels, delivered by MongoDB experts and customers.MongoDB Product Demos\nStop by the MongoDB booth for demos of the latest products and to get your questions answered.Networking\nMeet with other MongoDB enthusiasts from all over Chicagoland!Event Type: In-Person\nLocation: Morgan Manufacturing ( Map ) 401 N Morgan St Suite #100 Chicago, IL 60642Register Here: MongoDB.local Chicago | August 15, 2023 | MongoDB\nUse Codes MUG50 and Cassiano10 to get 60% off!",
"username": "Harshit"
},
{
"code": "",
"text": "Hey Community,We’d love for you to join us for a MongoDB Community Happy Hour the evening before .local Chicago. When you arrive, please look for people in MongoDB gear.Chicago Community Happy Hour @ Mon Aug 14, 2023 5:30pm - 6:30pm (CDT)The Aberdeen Tap, 440 N Aberdeen St 2nd Floor, Chicago, IL 60642, USA\nView map",
"username": "bein"
},
{
"code": "",
"text": "Looking forward to meeting you all in Chicago! If you’re planning to attend the Community Happy Hour please share here and we will add you to the calendar invitation as a reminder!",
"username": "Veronica_Cooley-Perry"
}
] | Chicago MUG: 2023 MongoDB .local Chicago | 2023-08-01T14:35:20.853Z | Chicago MUG: 2023 MongoDB .local Chicago | 1,290 |
|
null | [
"replication"
] | [
{
"code": "\t\t\t{\n\t\t\t\t\"durationMillis\" : 9186078,\n\t\t\t\t\"status\" : \"InitialSyncFailure: error cloning databases :: caused by :: HostUnreachable: Error cloning collection 'DB1.collection2' :: caused by :: network error while attempting to run command 'collStats' on host '10.10.0.52:27017' \",\n\t\t\t\t\"syncSource\" : \"10.10.0.52:27017\",\n\t\t\t\t\"rollBackId\" : 10,\n\t\t\t\t\"operationsRetried\" : 1,\n\t\t\t\t\"totalTimeUnreachableMillis\" : 0\n\t\t\t}\nrs0:STARTUP2> rs.conf()\n{\n\t\"_id\" : \"rs0\",\n\t\"version\" : 30,\n\t\"term\" : 124,\n\t\"protocolVersion\" : NumberLong(1),\n\t\"writeConcernMajorityJournalDefault\" : true,\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"host\" : \"10.10.0.52:27017\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 2,\n\t\t\t\"tags\" : {\n\t\t\t\t\"serviceName\" : \"db-support\",\n\t\t\t\t\"podName\" : \"db-support-rs0-2\"\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 3,\n\t\t\t\"host\" : \"10.10.0.177:27017\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 2,\n\t\t\t\"tags\" : {\n\t\t\t\t\"podName\" : \"db-support-rs0-1\",\n\t\t\t\t\"serviceName\" : \"db-support\"\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 5,\n\t\t\t\"host\" : \"10.11.10.74:27017\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 0,\n\t\t\t\"tags\" : {\n\t\t\t\t\"external\" : \"true\"\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 1\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 6,\n\t\t\t\"host\" : \"10.10.0.151:27017\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 0,\n\t\t\t\"tags\" : {\n\t\t\t\t\"podName\" : \"db-support-rs0-0\",\n\t\t\t\t\"serviceName\" : \"db-support\"\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 0\n\t\t}\n\t],\n\t\"settings\" : {\n\t\t\"chainingAllowed\" : true,\n\t\t\"heartbeatIntervalMillis\" : 2000,\n\t\t\"heartbeatTimeoutSecs\" : 100000,\n\t\t\"electionTimeoutMillis\" : 10000,\n\t\t\"catchUpTimeoutMillis\" : -1,\n\t\t\"catchUpTakeoverDelayMillis\" : 30000,\n\t\t\"getLastErrorModes\" : {\n\t\t\t\n\t\t},\n\t\t\"getLastErrorDefaults\" : {\n\t\t\t\"w\" : 1,\n\t\t\t\"wtimeout\" : 0\n\t\t},\n\t\t\"replicaSetId\" : ObjectId(\"635efd17a1573f6faf2f2161\")\n\t}\n}\n\n\t\t\t},\n\t\t\t\"DB1\" : {\n\t\t\t\t\"collections\" : 3,\n\t\t\t\t\"clonedCollections\" : 0,\n\t\t\t\t\"start\" : ISODate(\"2023-08-01T08:13:52.225Z\"),\n\t\t\t\t\"DB1.collection1\" : {\n\t\t\t\t\t\"documentsToCopy\" : 406344735,\n\t\t\t\t\t\"documentsCopied\" : 406344735,\n\t\t\t\t\t\"indexes\" : 4,\n\t\t\t\t\t\"fetchedBatches\" : 7321,\n\t\t\t\t\t\"bytesToCopy\" : 120041237379,\n\t\t\t\t\t\"approxBytesCopied\" : 119871696825,\n\t\t\t\t\t\"start\" : ISODate(\"2023-08-01T08:13:52.259Z\"),\n\t\t\t\t\t\"receivedBatches\" : 7321\n\t\t\t\t},\n\t\t\t\t\"DB1.collection2\" : {\n\t\t\t\t\t\"documentsToCopy\" : 0,\n\t\t\t\t\t\"documentsCopied\" : 0,\n\t\t\t\t\t\"indexes\" : 0,\n\t\t\t\t\t\"fetchedBatches\" : 0,\n\t\t\t\t\t\"bytesToCopy\" : 0,\n\t\t\t\t\t\"receivedBatches\" : 0\n\t\t\t\t},\n },\n \"s\": \"I\",\n \"c\": \"CONNPOOL\",\n \"id\": 22572,\n \"ctx\": \"ShardRegistry\",\n \"msg\": \"Dropping all pooled connections\",\n \"attr\": {\n \"hostAndPort\": \"10.11.10.75:27017\",\n 
\"error\": \"ShutdownInProgress: Pool for 10.11.10.75:27017 has expired.\"\n }\n}\n",
"text": "Hi,I have a mongo v4.4 and there is one replicaset doing InitialSync and it keeps failing and resets the data directory when it reaches a specific collection with the below error:and this is the rs.conf()the collection always fails at this specific pointand I can see some error on the sync source host as below ::please not I don’t have any networking issues",
"username": "Ahmed_Asim"
},
{
"code": "",
"text": "can anyone help please",
"username": "Ahmed_Asim"
},
{
"code": "ShutdownInProgress",
"text": "ShutdownInProgresslooks like the sync source host is being shutdown??\nit is dropping all connections (and may refuse to connect the the new replica member, thus causing the sync to fail)i got this: https://jira.mongodb.org/browse/SERVER-47554",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hey @Ahmed_Asim,there is one replicaset doing InitialSync and it keeps failingI suggest trying alternative methods for the initial sync. You can refer to theHope it helps!Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hi @Kobe_W , Thanks for your reply much appreciated \nthe source host is not restarting at all and for glibc, installed version is 2.17 so I believe it’s not affected.Hi @Kushagra_Kesav , Thanks for your reply much appreciated we are working on that now but is this way supported in mongo v4.4? because we noticed that in the documentation , what does it mean?\nimage_20230802_191617dcbf6a70-fe31-401a-9a6f-fb1b9a52338c-11424×214 15.7 KB\n",
"username": "Ahmed_Asim"
}
] | InitialSync Failure at specific collection | 2023-08-01T11:16:36.896Z | InitialSync Failure at specific collection | 556 |