image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---
null | [] | [
{
"code": "",
"text": "Hello All,\nMy name is John, and I’m from a little town called Adelaide in South Australia.\nI enjoy music, art, and coffee (which I drink too much of). I consider myself a front end developer\nand freelance under my business name Mojo Digital Solutions, Looking forward to getting an\nunderstanding of the Mongo Database.\ncheers",
"username": "Jon_N_A1"
},
{
"code": "",
"text": "Hey @Jon_N_A1,\nWelcome to MongoDB Community!\nGreat to know your interests, work, and especially that you are a fan!We have some great free university courses and docs to help you get started:Also, in case you are interested in meeting some of the MongoDB developers in the region, we have a Sydney, MongoDB User Group that organizes online as well in-person events. You can join the group to stay updated with their upcoming events.Let us know if you are looking for something specific and we will be able to help you with the right set of resources.Thanks\nHarshit",
"username": "Harshit"
}
] | Introduce myself | 2022-08-08T06:42:38.110Z | Introduce myself | 2,649 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "I am trying to work out a way of getting all unique pairs of a key-value array property across all documents in a collection.The array ‘Tags’ property looks like this on say two documents:\nDocument1:\n{\nTags: [\n{ k: “key1”, v: “value1” },\n{ k: “key2”, v: “value2” },\n…\n]\n}\nDocument2:\n{\nTags: [\n{ k: “key1”, v: “value3” }\n…\n]\n}With expected query output something like this:\n[\n{ k: “key1”, v: “value1” },\n{ k: “key2”, v: “value2” }\n{ k: “key1”, v: “value3” },\n…\n]\nor this:\n[\n{\nk: “key1”,\nvalues: [\n“value1”,\n“value3”,\n…\n]\n},\n{\nk: “key2”,\nvalues: [\n“value2”,\n…\n]\n}\n…\n},\n{\nk: “key2”,\nvalues: [\n“value2”,\n…\n]\n}\n]I have an index Tags_k_1_Tags_v_1, but I cannot get a query to return that uses it. I cannot use distinct() as more than one field is required. I have tried aggregation by unwinding the array, then grouping by k, and addToSet the values. No luck using the index. There are possibly millions of documents that will grow over time, so it is important an index is used.Any help appreciated.",
"username": "Richard_Hannah"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and update your sample documents so that we can cut-n-paste into our servers for experimentation.",
"username": "steevej"
},
{
"code": "",
"text": "Rather than delve into an example, I will try to articulate the question a bit as I imagine it is a common performance problem to solve.If you have a multikey index on a property (‘Tags’) that is array of key-value (‘k’ and ‘v’) pairs, is there an index that allows you to efficiently build a complete list of unique key-value pairs across the collection? I want to additionally filter by another property (‘Owner’) matching a value.I have tried indexes “Owner_1_Tags_1” and “Owner_1_Tags.k_1_Tags.v_1”.\nI tried using Distinct(“Tags”, “{ Owner: }”)\nI tried a number of aggregation pipeline variants involving $unwind or $group.I can get the list of key-value pairs for all documents where Owner matches a value, but I cannot get it to efficiently use an index, causing seconds to query across a growing millions of matching documents. How can I do this using an index?",
"username": "Richard_Hannah"
},
{
"code": "$unwindTags$groupk$addToSetv$project{ \n $project: { \n _id: 0, \n k: \"$_id\", \n v: \"$values\" \n } \n}\n\n$groupkv",
"text": "There are a few different ways you could approach this problem depending on the specific requirements of your application and the specific database you are using. Here’s one example of how you could use MongoDB’s aggregation framework to get the unique pairs of key-value properties across all documents in a collection:Note that you need to make sure that your index Tags_k_1_Tags_v_1 is on the correct collection and fields, otherwise it won’t be used.Also, if you’re just looking for unique pairs of key-value properties and don’t care about the exact format of the output, you could use the $group stage again to group the documents by both the k and v fields, which will automatically eliminate any duplicate pairs.",
"username": "Sumanta_Mukhopadhyay"
},
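For reference, a minimal sketch of the pair-grouping variant mentioned in the post above might look like the following. The collection name events is a placeholder, and Tags, k and v are the field names from the question; this is an illustration, not the poster's exact pipeline.

```js
// Group by the (k, v) pair itself so duplicate pairs collapse automatically
db.events.aggregate([
  { $unwind: "$Tags" },
  { $group: { _id: { k: "$Tags.k", v: "$Tags.v" } } },
  { $project: { _id: 0, k: "$_id.k", v: "$_id.v" } }
])
```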
{
"code": "[\n {\n $match: {\n Owner: \"Test\",\n },\n },\n {\n $unwind: {\n path: \"$Tags\",\n },\n },\n {\n $group: {\n _id: \"$Tags.k\",\n values: {\n $addToSet: \"$Tags.v\",\n },\n },\n },\n {\n $project: {\n _id: 0,\n k: \"$_id\",\n v: \"$values\",\n },\n },\n]\n",
"text": "Thanks for the swift reply!I have tried your suggestion below before and found that, although it works, it does not use an index.Explain() does not indicate index Owner_1_Tags.k_1_Tags.v_1 or Owner_1_Tags_1 is used. Is it the Owner filtering stage that is stopping the opportunity to use an index?",
"username": "Richard_Hannah"
},
{
"code": "",
"text": "Further to the reply above:I have just tried removing the Owner match stage and simplifying the indexes I am trying to use to be just Tags_1 and Tags.k_1_Tags.v_1, but again, neither index is used.",
"username": "Richard_Hannah"
},
{
"code": "project = { $project : { \"tags.k\" : 1 } }\nunwind = { $unwind : \"$tags\" }\nfilter = { $filter : { input : \"$tags\" , cond : { \"$eq\" : [ \"$$this.k\" , \"$$k\"] } } }\nlookup = { $lookup : {\n from : \"tags\" ,\n localField : \"tags.k\" ,\n foreignField : \"tags.k\" ,\n let : { k : \"$tags.k\" } ,\n pipeline : [\n { $project : { _id:0,\"v\" : filter } } ,\n { $unwind : \"$v\" } ,\n { $set : { v : \"$v.v\" } }\n ]\n as : \"_result\"\n} }\ncosmetic = { $project : { _id : 0 , k : \"$tags.k\" , v : \"$_result.v\" } }\npipeline = [ project , unwind , lookup , cosmetic ]\n",
"text": "Some stages, $group for sure and probably $unwind too, modifies the original indexed documents is such a way that an indexes cannot be used.I have an idea that might work. Might, because the lack of sample documents stop me from being able to test. I am lazy. There is no way I will create documents that the requester can make available to me by simply using the appropriate markup.",
"username": "steevej"
},
{
"code": "[\n{\n \"Tags\": [\n {\n \"k\": \"Type\",\n \"v\": \"Asset\"\n },\n\t{\n \"k\": \"Code\",\n \"v\": \"DEV\"\n },\n ],\n \"Owner\": \"Test\"\n},\n{\n \"Tags\": [\n {\n \"k\": \"Type\",\n \"v\": \"Metric\"\n },\n\t{\n \"k\": \"Code\",\n \"v\": \"POL\"\n },\n ],\n \"Owner\": \"Test\"\n},\n{\n \"Tags\": [\n {\n \"k\": \"Type\",\n \"v\": \"Asset\"\n },\n\t{\n \"k\": \"Asset\",\n \"v\": \"asset1.json\"\n },\n ],\n \"Owner\": \"Test\"\n}\n]\ndb.event.distinct(\"Tags\", { Owner: \"Test\" })[\n\t{\n \"k\": \"Type\",\n \"v\": \"Asset\"\n },\n\t{\n \"k\": \"Type\",\n \"v\": \"Metric\"\n },\n\t{\n \"k\": \"Code\",\n \"v\": \"DEV\"\n },\t\n\t{\n \"k\": \"Code\",\n \"v\": \"POL\"\n },\n\t{\n \"k\": \"Asset\",\n \"v\": \"asset1.json\"\n }\n]\n",
"text": "@steevej apologies for not providing example documents. Given these documents:I effectively want to run this:\ndb.event.distinct(\"Tags\", { Owner: \"Test\" })\nto get all unique k-v tag pairs across all documents where Owner = “Test”:I expect to be able to use a multi-key index on Tags, and it should be a covered query if the index includes Owner and Tags.I cannot get the lookup pipeline above to work for me in Compass. It complains localField and foreignField cannot be used with Pipeline. I am not sure how anything involving the unwind can work efficiently as it does not appear to be able to use an index.",
"username": "Richard_Hannah"
},
{
"code": "{Owner:1,Tags.k:1,Tags.v:1}\n[ { '$match': { Owner: 'Test' } },\n { '$sort': { 'Tags.k': 1, 'Tags.v': 1 } },\n { '$project': { 'Tags.k': 1, 'Tags.v': 1 } },\n { '$unwind': '$Tags' },\n { '$group': { _id: { k: '$Tags.k', v: '$Tags.v' } } }\n]\n{ _id: { k: 'Asset', v: 'asset1.json' } }\n{ _id: { k: 'Type', v: 'Asset' } }\n{ _id: { k: 'Code', v: 'POL' } }\n{ _id: { k: 'Type', v: 'Metric' } }\n{ _id: { k: 'Code', v: 'DEV' } }\nstage: 'IXSCAN',\nkeyPattern: { Owner: 1, 'Tags.k': 1, 'Tags.v': 1 },\nindexName: 'Owner_1_Tags.k_1_Tags.v_1',\nisMultiKey: true,\nmultiKeyPaths: { Owner: [], 'Tags.k': [ 'Tags' ], 'Tags.v': [ 'Tags' ] },\n/* some fields omitted */\nindexBounds: \n { Owner: [ '[\"Test\", \"Test\"]' ],\n 'Tags.k': [ '[MinKey, MaxKey]' ],\n 'Tags.v': [ '[MinKey, MaxKey]' ] }\n",
"text": "It complains localField and foreignField cannot be used with Pipeline.Try updating Compass. The localField/foreignField with pipeline version is recent.With your sample documents, the indexand the following pipeline:produce the result:the explain plan indicatesBut it looks like it is not covered since IXSCAN is under FETCH and totalDocsExamined is not 0.And it looks this is the best that can be done according to",
"username": "steevej"
},
{
"code": "Command aggregate failed: Sort exceeded memory limit of 104857600 bytes",
"text": "@steevej thanks for your efforts. I tried this out but unfortunately it is still taking seconds across 100,000s of documents. It also has thrown:\nCommand aggregate failed: Sort exceeded memory limit of 104857600 bytesIt is odd that there is not a way of extracting the data that is fully covered by an index. I guess the only way to get this to perform is to maintain a separate collection of unique tag key-value pairs with reference counts and keep it in sync with the documents having tags? This would be a pain.Any thoughts on alternatives would be appreciated.",
"username": "Richard_Hannah"
},
{
"code": "[\n{\n \"Tags\": [\n {\n \"k\": \"Type\",\n \"v\": \"Asset\"\n },\n\t{\n \"k\": \"Code\",\n \"v\": \"DEV\"\n },\n ], \n \"k\" : [ \"Type\" , \"Code\" ]\n \"Owner\": \"Test\"\n},\n{\n \"Tags\": [\n {\n \"k\": \"Type\",\n \"v\": \"Metric\"\n },\n\t{\n \"k\": \"Code\",\n \"v\": \"POL\"\n },\n ],\n \"k\" : [ \"Type\" , \"Code\" ] ,\n \"Owner\": \"Test\"\n},\n{\n \"Tags\": [\n {\n \"k\": \"Type\",\n \"v\": \"Asset\"\n },\n\t{\n \"k\": \"Asset\",\n \"v\": \"asset1.json\"\n },\n ],\n \"k\" : [ \"Type\" , \"Asset\" ] ,\n \"Owner\": \"Test\"\n}\n]\n",
"text": "ForCommand aggregate failed: Sort exceeded memory limit of 104857600 bytesI will simply remove the $sort. You still get an IXSCAN without it. The $match and $project seems sufficient to use the index.Note that performance issues are not strictly caused by the code logic. It also depends on the hardware you use. If the working set does not fit in RAM, you will be I/O bound and that is slow.Read Atlas Search to return all the applicable filters for a given search query without specifying - #2 by Erik_Hatcher, the idea is smart as you may be able to add an extra field to speed up things rather than a new collection. Your documents could look like:The index Owner:1,k:1 would be a much smaller working set.Another thing you could try is to use the index Owner:1,Tags:1 to see if the index size is smaller it may help.Schema wise, do you really need the attribute pattern for tags. The range of value of “k” seems to be quite small so having direct attributes “Type”, “Asset” and “Code” might be much better.",
"username": "steevej"
},
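As a rough illustration of the extra-field idea suggested above: the collection name event comes from the earlier sample, and the k array is the hypothetical field added next to Tags. This is only a sketch of how the smaller index could be created and queried.

```js
// Hypothetical compound index on Owner plus the extra "k" array field
db.event.createIndex({ Owner: 1, k: 1 })

// Distinct tag keys for one owner; the Owner filter can use this index,
// and the index itself is much smaller than one covering Tags.k and Tags.v
db.event.distinct("k", { Owner: "Test" })
```

Note this only yields the distinct keys, not the key-value pairs; the linked Atlas Search discussion is about using such an extra field to answer the "available filters" part of the problem cheaply.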
{
"code": "",
"text": "Just to let you know that I decided to abandon the approach of getting all unique key-value pairs altogether, managing to get the same information from dedicated properties and elsewhere. Many thanks for your help. I will mark the above as a solution, as ultimately it is as good as it could be.",
"username": "Richard_Hannah"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Querying for all unique key-value pairs in an array property across all documents | 2023-01-06T18:38:05.489Z | Querying for all unique key-value pairs in an array property across all documents | 4,342 |
null | [
"data-modeling",
"atlas-device-sync"
] | [
{
"code": "",
"text": "I have developer mode enabled for my partitioned sync.\nI added a few fields to my model in my Maui app and hoped these new fields would get synced back and appear in the collection in Atlas. The fields did not appear, and I tried various ways to force that to happen. I then tried terminated sync in Atlas and re-enabled it, but that did not help either. Eventually, something I did made the new fields appear.\nCan anyone tell me the correct way to make this happen? My app is in its early stages, so I know I will be expanding various models as the app develops.\n[Later]\nI now see there are 8 documents in the collection in Atlas. 4 with the old schema and 4 with the new schema. It appears that the app is correctly showing the 4 docs with the new schema. It would be good to know what is the best approach to changing a schema. If there were thousands of docs in the collection, it would not be great to duplicate all of them each time the schema is changed.",
"username": "John_Atkins"
},
{
"code": "",
"text": "Hi. Can you elaborate on what you did you change your schema? When you update your schema from dev mode or in the UI we will perform another initial sync to resync down all changes (and new fields in existing documents)",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "I just added new fields to the C# model in the MAUI app. It eventually worked, as I said, but I wondered if there was a proper procedure to do this to avoid clearing data on the device etc.I have to say, when sync works, it is very impressive. I had 2 devices side by side with the same person logged in on each. Changing data on one screen was updated on the other screen in about 1 second or less. It looked quite magical. This is making use of MVVM so I was not having to tap save.",
"username": "John_Atkins"
},
{
"code": "",
"text": "I have a similar problem now. I realised that one of my properties in the MAUI C# model had been set to required when it shouldn’t have been so I removed the attribute. Now, whatever I do I get an exception“The following changes cannot be made in additive-only schema mode:\\n- Property ‘User.name’ has been made optional.”on this linevar realm = await Realm.GetInstanceAsync(syncConfig);I terminated and restarted sync in App Services but that made no difference. I also uninstalled and reinstalled my mobile app (to clear data) but that made no difference.The collection for the model that had the required attribute removed has yet to be synced back to Atlas. By that I mean the collection is empty in the database at the server.I will try adding back the required attribute in the C# model to see if I can get it to work again. At present, when these kinds of problems happen, I don’t know how to fix them. In fact, I’ll try restarting Visual Studio because that has sometimes helped.",
"username": "John_Atkins"
},
{
"code": "",
"text": "I’ve managed to get it to work again but the question remains, what is the correct procedure to change a model? I’m in dev mode making all schema changes in the MAUI app by changing the model. Currently, after any change, I have a long process of trial and error to make my mobile app work again. Deleting data for the app on the device, terminating sync and restarting, and deleting collections. I only try these things as a last resort to try to make it work again.",
"username": "John_Atkins"
},
{
"code": "",
"text": "I am having the same issues, would you be so kind and explain the steps you take as I seem to be going around in circles when I make changes to the models in my MAUI app.Thank you",
"username": "Paul_Betteridge"
},
{
"code": "",
"text": "I have developer mode enabled for my partitioned sync.\nI added a few fields to my model in my Maui app and hoped these new fields would get synced back and appear in the collection in Atlas. The fields did not appear, and I tried various ways to force that to happen. I then tried terminated sync in Atlas and re-enabled it, but that did not help either. Eventually, something I did made the new fields appear.\nCan anyone tell me the correct way to make this happen? My app is in its early stages, so I know I will be expanding various models as the app develops.\n[Later]\nI now see there are 8 documents in the collection in Atlas. 4 with the old schema and 4 with the new schema. It appears that the app is correctly showing the 4 docs with the new schema. It would be good to know what is the best approach to changing a schema. If there were thousands of docs in the collection, it would not be great to duplicate all of them each time the schema is changed.When you modify your Realm model in the client-side of your app, the new fields will only be present in the objects that are created after the change. If you want the new fields to appear in the documents in the Atlas database, you need to perform a schema migration on the server-side to update the existing documents to the new schema.The best approach to changing a schema would be to write a server-side migration script to update the existing documents. This script should update the existing documents in a way that is safe, efficient and handles any potential data loss or corruption. The script can be run as a single, one-time operation on the server, and it should be able to update all of the documents in the collection to the new schema. This way, you will not have to duplicate all of the documents each time the schema is changed, even if there are thousands of documents in the collection.",
"username": "Sumanta_Mukhopadhyay"
},
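A hedged sketch of what such a one-time migration could look like in the shell. The collection name Task and the status field and its default value are made-up placeholders, since the thread does not name the new fields.

```js
// One-time migration: give every existing document a value for the new field
// so it conforms to the updated schema. Field and collection names are hypothetical.
db.Task.updateMany(
  { status: { $exists: false } },   // only documents still missing the new field
  { $set: { status: "Open" } }      // backfill with a default value
)
```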
{
"code": "",
"text": "Hi, thanks @Sumanta_Mukhopadhyay for the excellent explanation. Changing schemas in sync can be difficult due to the fact that sync is history-based and therefore making a destructive change to the schema (changing the type of a field) requires invalidating history.That being said, I believe the initial issue is about using developer mode. Currently, we only support making additive changes while in developer mode (adding tables and fields). It sounds like the initial issue what John was running into was that they were trying to change the optionality of a field (required to optional), which is currently a breaking change. Currently, the only procedure for a breaking change is to terminate sync, modify the schema, and re-enable sync (updating the schema directly in the UI with this change will terminate and re-enable sync on its own with a warning to the user) .We have two projects on the horizon that could be of interest.The first will be to allow for destructive changes to be made by clients while in dev mode. It is important to understand that this will still require sync to be terminated and re-enabled, but this will now be possible instead of getting the error above “field exists but is optional”.The second project will be to allow breaking changes to occur in production without terminating and re-enabling sync by adding the concept of schema versioning. We are excited about this but we are still in the early stages of planning this feature.Happy hacking,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Please let me know if can contribute to any of them",
"username": "Sumanta_Mukhopadhyay"
}
] | Adding fields to a model | 2022-08-18T11:58:05.857Z | Adding fields to a model | 3,503 |
[
"aggregation",
"queries",
"java",
"spring-data-odm"
] | [
{
"code": "{\n \"WeeklyUsersCount\": 39,\n \"_id\": 34,\n \"difference\" : \"17\",\n \"differencePercentage\" : \"X%\"\n },\n {\n \"WeeklyUsersCount\": 22,\n \"_id\": 33,\n \"difference\" : \"10\",\n \"differencePercentage\" : \"y%\"\n }\n",
"text": "Actually, I’m getting an output below\nWhat I need is I have to compare the Outputs and have to display the difference between the current and previous week. I’m adding an expecting output below.Can you refer me the Aggregation and Spring Boot Code(If Possible)?",
"username": "Abishan_Parameswaran"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"612e13c475ca254df57156e1\"\n },\n \"uid\": 860,\n \"country_iso2\": \"UZ\",\n \"country_iso3\": \"UZB\",\n \"country_code\": 860,\n \"country\": \"Uzbekistan\",\n \"combined_name\": \"Uzbekistan\",\n \"population\": 33469199,\n \"loc\": {\n \"type\": \"Point\",\n \"coordinates\": [\n 64.5853,\n 41.3775\n ]\n },\n \"date\": {\n \"$date\": \"2021-08-30T00:00:00.000Z\"\n },\n \"confirmed\": 155639,\n \"deaths\": 1075,\n \"recovered\": 0,\n \"confirmed_daily\": 795,\n \"deaths_daily\": 5,\n \"recovered_daily\": 0\n}\nconfirmed_dailydeaths_dailyrecovered_dailycumulative_value_today - cumulative_value_yesterday = daily_countdef calculate_daily_counts(client, collection, unique_daily_field):\n start = time.time()\n coll = client.get_database(DB).get_collection(collection)\n pipeline = [\n {\"$sort\": {unique_daily_field: 1, \"date\": 1}},\n {\"$group\": {\"_id\": \"$\" + unique_daily_field, \"docs\": {\"$push\": {\"dt\": \"$date\", \"c\": \"$confirmed\", \"d\": \"$deaths\", \"r\": \"$recovered\"}}}},\n {\n \"$set\": {\n \"docs\": {\n \"$map\": {\n \"input\": {\"$range\": [0, {\"$size\": \"$docs\"}]},\n \"as\": \"idx\",\n \"in\": {\n \"$let\": {\n \"vars\": {\"d0\": {\"$arrayElemAt\": [\"$docs\", {\"$max\": [0, {\"$subtract\": [\"$$idx\", 1]}]}]}, \"d1\": {\"$arrayElemAt\": [\"$docs\", \"$$idx\"]}},\n \"in\": {\"dt\": \"$$d1.dt\", \"dc\": {\"$subtract\": [\"$$d1.c\", \"$$d0.c\"]}, \"dd\": {\"$subtract\": [\"$$d1.d\", \"$$d0.d\"]},\n \"dr\": {\"$subtract\": [\"$$d1.r\", \"$$d0.r\"]}}\n }\n }\n }\n }\n }\n },\n {\"$unwind\": \"$docs\"},\n {\"$project\": {\"_id\": \"$$REMOVE\", unique_daily_field: \"$_id\", \"date\": \"$docs.dt\", \"confirmed_daily\": {\"$ifNull\": [\"$docs.dc\", \"$$REMOVE\"]},\n \"deaths_daily\": {\"$ifNull\": [\"$docs.dd\", \"$$REMOVE\"]}, \"recovered_daily\": {\"$ifNull\": [\"$docs.dr\", \"$$REMOVE\"]}}},\n {\"$merge\": {\"into\": collection, \"on\": [unique_daily_field, \"date\"], \"whenNotMatched\": \"fail\"}}\n ]\n coll.aggregate(pipeline, allowDiskUse=True)\n print('Calculated daily fields for', collection, 'in', round(time.time() - start, 2), 's')\n",
"text": "Hey @Abishan_Parameswaran,I think this looks a bit like something I have done for the Open Data COVID-19 project.In this project, I have documents like this:The raw data I retrieve from JHU only contains the cumulative values for confirmed, deaths and recovered. The 3 fields confirmed_daily, deaths_daily and recovered_daily are calculated by applying a simple math operation: cumulative_value_today - cumulative_value_yesterday = daily_count.I feel like you are trying to achieve the same thing here. Initially my documents don’t contain these 3 daily fields. But they are added by an aggregation pipeline that calculates the differences for each document and then merges these result within the relevant document.Here is the function I’m using in my Python code. Hopefully you can read it .I hope this helps.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
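For the weekly shape shown in the first post, a possible alternative sketch is to pull the previous week's count alongside each document and subtract. This assumes MongoDB 5.0+ for $setWindowFields, a placeholder collection name weeklyUsers, and that _id holds the week number as in the expected output.

```js
db.weeklyUsers.aggregate([
  { $setWindowFields: {
      sortBy: { _id: 1 },                                  // _id is the week number
      output: {
        previousCount: { $shift: { output: "$WeeklyUsersCount", by: -1, default: null } }
      }
  } },
  { $set: {
      difference: { $subtract: ["$WeeklyUsersCount", "$previousCount"] },
      differencePercentage: {
        $cond: [
          { $eq: ["$previousCount", null] },
          null,                                            // first week has nothing to compare to
          { $round: [ { $multiply: [
              { $divide: [ { $subtract: ["$WeeklyUsersCount", "$previousCount"] }, "$previousCount" ] },
              100 ] }, 1 ] }
        ]
      }
  } }
])
```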
{
"code": "",
"text": "bro you got the query code",
"username": "Sai1232"
}
] | Find Difference Between Two Output Documents | 2021-08-30T14:20:21.518Z | Find Difference Between Two Output Documents | 6,198 |
null | [
"aggregation",
"queries",
"node-js",
"mongoose-odm"
] | [
{
"code": " const pageSize = +req.query.pagesize;\n const currentPage = +req.query.currentpage;\n\n let recordCount;\n\n ServiceClass.find().count().then((count) =>{\n recordCount = count;\n ServiceClass.aggregate().skip(currentPage).limit(pageSize).exec().then((documents) => {\n res.status(200).json({\n message: msgGettingRecordsSuccess,\n serviceClasses: documents,\n count: recordCount,\n });\n })\n .catch((error) => {\n res.status(500).json({ message: msgGettingRecordsError });\n });\n }).catch((error) => {\n res.status(500).json({ message: \"Error obteniendo cantidad de registros\" });\n });\nServiceClass.aggregate([\n { $match: { $or: [{ code: { $regex: regex } }, { description: { $regex: regex } }] } },\n { $skip: currentPage },\n { $limit: pageSize }\n])\nServiceClass.aggregate([\n { $match: { $or: [{ code: { $regex: regex } }, { description: { $regex: regex } }] } },\n { $count: \"count\" }\n])\n",
"text": "Hello! First, I know similar questions have been asked a lot but most answers are downright painfully slow. I have a 10,000,000 documents in a collection and I’m having serious issues with speed when filtering.I’ve been using mongoose paginate v2 and speed is not absurdly painful but it is slow, taking around 27s to return the documents filtered with pagination. Recently, I learned that aggregate().skip().limit() is lightning fast with unfiltered data:This function returns any page within 8 ms, extremely fast and this is amazing but whenever I start filtering the issues start. First, If i just filter like this:The function takes around 14s to get the data, an improvement of almost 50% over the mongoose paginate plugin, however, I need the total amount of records without the limit so I can’t use $count inside that aggregate as I will just get pageSize, therefor, I need to run another query before:It kills the performance as both queries add up and in both queries I end up getting all documents, taking about 34s to complete.So the question is as the title says: How can I get the filtered results along with total count of documents before limit as fast as possible?both fields -code and description- have an index.",
"username": "Mauricio_Ramirez"
},
{
"code": "{\n _id: ObjectId(\"63db8c46002b226488ec38a3\"),\n sku: 'abc123',\n description: 'First line\\n' +\n 'Second line'\n}\n{\n _id: ObjectId(\"63db8c46002b226488ec38a4\"),\n sku: 'xyz789',\n description: 'Many spaces before line'\n}\n{\n _id: ObjectId(\"63db8c46002b226488ec38a5\"),\n sku: 'Abc789',\n description: 'Multiple\\n' +\n 'line description'\n}\n{\n _id: ObjectId(\"63db8c46002b226488ec38a6\"),\n sku: 'abc123',\n description: 'Many spaces before line'\n}\n\nskudescriptiondb.sample.aggregate([ { $match: { $or: [{ sku: { $regex: /789$/ } }, { description: { $regex: /^ABC/}}] } },{ $count: \"count\" }]).explain('executionStats')\nexplain(\"executionStats\")executionStats: {\n executionSuccess: true,\n nReturned: 285670,\n executionTimeMillis: 628,\n totalKeysExamined: 500001,\n totalDocsExamined: 0,\n ...\n}\ndb.sample.aggregate([ { $match: { $or: [{ sku: { $regex: /789$/ } }, { description: { $regex: /^ABC/i}}] } },{ $count: \"count\" }]).explain('executionStats')\nexecutionStats: {\n executionSuccess: true,\n nReturned: 285670,\n executionTimeMillis: 1081,\n totalKeysExamined: 1000000,\n totalDocsExamined: 0,\n ...\n} \ndb.sample.aggregate([ { $match: { $or: [{ sku: { $regex: /^ABC$/ } }, { description: { $regex: /^Multiple\\nline description/}}] } },{ $count: \"count\" }]).explain('executionStats')\n executionStats: {\n executionSuccess: true,\n nReturned: 125200,\n executionTimeMillis: 280,\n totalKeysExamined: 196700,\n totalDocsExamined: 0\nIXSCANCOLLSCAN{$or: [ {unanchored regex}, {case-insensitive regex} ] }skip/limitskip/limit",
"text": "Hey @Mauricio_Ramirez,Typically, slow queries are generally not using any of the appropriate indexes. It’s important to ensure that appropriate indexes are created on the query fields to improve query performance. The MongoDB explain() method can be used to see if an index is being used for a particular query and to understand the query performance.I also noticed you are using $or operator as well as $regex in your aggregation queries. This may cause performance issues when not designed carefully. Typically when using unanchored $regex, MongoDB scans the entire collection to find the matching documents/indexes, which can be slow for large collections. To check this, I used a sample collection of over 500,000 documents. All documents looked like this:Both sku and description have an index.Then I tried a query similar to the one you provided.The output of explain(\"executionStats\") looked like this:Similarly for this query(case-insensitive regex):we get:However, if we use an anchored case-sensitive regex query, the number of index entries scanned (totalKeysExamined) dropped radically:we get:Even though all queries use IXSCAN, as you can see, it’s scanning the whole index, which in some cases is not much faster than COLLSCAN. Furthermore, Unanchored regex is not the best for performance, and using $or is only making the performance worse as it scans 1 million index keys total, even though the collection only contains half that number. Meaning that it scans the whole index twice due to the use of {$or: [ {unanchored regex}, {case-insensitive regex} ] }.\nYou can read more about the behavior of $or and $regex from the documentation:\n$or behavior\n$regex and index use.Additionally, although the skip/limit is suitable for some applications, it suffers from 2 major problems:I’m also linking some additional resources that you can refer that should certainly help you:Hope this helps. Please let us know if any part of the answer feels confusing or not clear.Regards,\nSatyam",
"username": "Satyam"
},
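To illustrate the range-based alternative to skip/limit mentioned above, here is a minimal sketch reusing the sample sku collection from that answer. lastSeenId stands for the _id of the last document returned on the previous page.

```js
// Page 1: anchored, case-sensitive prefix match, sorted by _id
db.sample.find({ sku: /^abc/ }).sort({ _id: 1 }).limit(20)

// Next pages: resume after the last _id seen instead of skipping documents,
// so the server never has to walk over the skipped range
db.sample.find({ sku: /^abc/, _id: { $gt: lastSeenId } }).sort({ _id: 1 }).limit(20)
```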
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What is the fastest way to filter documents with pagination in collection with millions of records? | 2023-01-19T23:21:52.723Z | What is the fastest way to filter documents with pagination in collection with millions of records? | 4,864 |
[
"transactions",
"installation",
"storage"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-02-05T14:41:53.141+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:53.142+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.517+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.518+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.519+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.519+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.519+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.520+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":4008,\"port\":27017,\"dbPath\":\"C:/data/db/\",\"architecture\":\"64-bit\",\"host\":\"SuperFresh\"}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.520+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23398, \"ctx\":\"initandlisten\",\"msg\":\"Target operating system minimum version\",\"attr\":{\"targetMinOS\":\"Windows 7/Windows Server 2008 R2\"}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.520+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.4\",\"gitVersion\":\"44ff59461c1353638a71e710f385a566bcd2f547\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"windows\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.521+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Microsoft Windows 10\",\"version\":\"10.0 (build 19044)\"}}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.522+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.524+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22270, \"ctx\":\"initandlisten\",\"msg\":\"Storage engine to use detected by data files\",\"attr\":{\"dbpath\":\"C:/data/db/\",\"storageEngine\":\"wiredTiger\"}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.524+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, 
\"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=7614M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.914+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":390}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.914+01:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.922+01:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.923+01:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22140, \"ctx\":\"initandlisten\",\"msg\":\"This server is bound to localhost. Remote systems will be unable to connect to this server. Start the server with --bind_ip <address> to specify which IP addresses it should serve responses from, or with --bind_ip_all to bind to all interfaces. If this behavior is desired, start the server with --bind_ip 127.0.0.1 to disable this warning\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.926+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915702, \"ctx\":\"initandlisten\",\"msg\":\"Updated wire specification\",\"attr\":{\"oldSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true},\"newSpec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":17,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.926+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"6.0\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.928+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-02-05T14:41:54.930+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-02-05T14:41:55.158+01:00\"},\"s\":\"W\", \"c\":\"FTDC\", \"id\":23718, \"ctx\":\"initandlisten\",\"msg\":\"Failed to initialize Performance Counters for FTDC\",\"attr\":{\"error\":{\"code\":179,\"codeName\":\"WindowsPdhError\",\"errmsg\":\"PdhAddEnglishCounterW failed with 'Das angegebene Objekt wurde nicht auf dem Computer 
gefunden.'\"}}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:55.158+01:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"C:/data/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:55.163+01:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReplicationDisabled\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:55.163+01:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-02-05T14:41:55.168+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2023-02-05T14:41:55.168+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}}\n",
"text": "I tried t to set up Community MongoDB on my windwos pc. I followed the installation process and tried to set up the environment variables. I copied the pathes from my explorer, so the pathes should be correct.\n\nenv1299×727 182 KB\nI tried to run the commands in my command shell, like in the installation tutorial, but it’s not working.Nonetheless I can’t use any mongo commands on my command shell. Anytime I try to run “mongo” or “mongo --version” it says “command is either misspelled or couldn’t be found”.When I use the command “mongod” some strange code appears:MongoDB is running when I check my TaskmanagerHow can I solve this problem?",
"username": "Konsky_O"
},
{
"code": "mongodmongosh",
"text": "You seem to have totally misread the documentation.mongod is the database itself.\nmongosh is the interactive shell.",
"username": "Jack_Woehr"
},
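A quick way to confirm the setup, assuming mongod is already running on the default port as in the log above: open a second terminal, start the shell with mongosh, and run a couple of commands at the prompt.

```js
// Inside the mongosh prompt
db.runCommand({ ping: 1 })   // { ok: 1 } means the shell reached the server
db.version()                 // prints the server version, e.g. 6.0.4
```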
{
"code": "",
"text": "seems like it now its working thank you! If other beginners are struggeling: you have to set up mongoshell first and then you can use the command “mongosh” to use it with the windows command. This tutorial helped me set everything up: youtube.com/watch?v=V-d6VAYrjeQ",
"username": "Konsky_O"
}
] | Can't run mongo in command shell on windows | 2023-02-05T13:57:07.902Z | Can’t run mongo in command shell on windows | 2,836 |
null | [
"queries",
"node-js",
"atlas-functions",
"atlas"
] | [
{
"code": "exports = async function({ query, headers, body}, response) {\n\n // const {id} = await query;\n const id = \"63a18506eeb4aa7b878751b5\"\n const collection = await context.services.get(\"mongodb-atlas\").db(\"GatherDB\").collection(\"Project\")\n const project = await collection.findOne({_id:BSON.ObjectId(id)})\n \n return project.name\n};\n\n{\n \"error\": \"cannot compare to undefined\",\n \"error_code\": \"FunctionExecutionError\",\n \"link\": \"https://realm.mongodb.com/groups/637360845e9d607c6236bea6/apps/6373656a5122b133c4f719f1/logs?co_id=63ce65fe10282834d993ce0a\"\n}\n",
"text": "My simple Atlas function is as follows:When I call this function (impersonating a user) using the “run” button on Atlas Function editor I get the expect response, a project name for the given Id.When I call this function via Postman passing the user authentication in headers I get this error:I’m struggling to understand this inconsistent behaviour.",
"username": "Sam_Roberts"
},
{
"code": "{\n \"error\": \"cannot compare to undefined\",\n \"error_code\": \"FunctionExecutionError\",\n \"link\": \"https://realm.mongodb.com/groups/637360845e9d607c6236bea6/apps/6373656a5122b133c4f719f1/logs?co_id=63ce65fe10282834d993ce0a\"\n}\n",
"text": "Hi @Sam_Roberts - Welcome to the community Just a few questions i’m hoping you can provide some details for:When I call this function (impersonating a user) using the “run” button on Atlas Function editor I get the expect response, a project name for the given Id.When I call this function via Postman passing the user authentication in headers I get this error:Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "A simple case of not toggling “Fetch Custom User Data” on when using Application Auth. Apologies for wasting your time!!",
"username": "Sam_Roberts"
},
{
"code": "",
"text": "Thanks for updating the post with the resolution Sam ",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Atlas Function inconsistent behaviour between HTTPS call and debug run | 2023-01-23T11:01:56.800Z | MongoDB Atlas Function inconsistent behaviour between HTTPS call and debug run | 1,545 |
null | [
"aggregation",
"dot-net"
] | [
{
"code": "var searchName = \"test\";\nvar searchTermRegEx = $\"^{searchName}*\";\nvar firstNameFilter = Builders<User>.Filter.Regex(f => f.FirstName, searchTermRegEx);\n\nvar collation = new Collation(\"en\", strength: CollationStrength.Primary);\nvar options = new AggregateOptions()\n{\n\tCollation = collation\n};\nvar userList = await UserCollection\n\t.Aggregate(options)\n\t.Match(firstNameFilter)\n\t.ToListAsync();\nreturn userList;\n",
"text": "I have been struggling with finding a way to include case-insensitive collation into the Match stage of an Aggregate using the C# driver. I’ve verified that there is a case-insensitive index for the first name field in the code below by listing the indexes with UserCollection.Indexes.List().ToListAsync(). It contains the correct locale and strength.The below code does not work unless the search term matches the case of the first name. I’ve tried googling just about everything as well as using chat.openai and have not been able to find the right way to get this working.Does anyone have some suggestions of where to go with this?\nThanks!",
"username": "Jim_Owen"
},
{
"code": "$regexvar searchTermRegEx = new Regex($\"^{searchName}*\", RegexOptions.IgnoreCase);\nBuilders<User>.Filter.EqFirstNameSearchableFirstNameFirstName.ToLower()SearchableFirstName/^test/",
"text": "Hi, @Jim_Owen,Welcome to the MongoDB Community Forums. I understand that you’re having an issue with a query involving a regular expression and a case insensitive collation.Unfortunately this is a known limitation of MongoDB’s regular expression implementation:Case insensitive regular expression queries generally cannot use indexes effectively. The $regex implementation is not collation-aware and is unable to utilize case-insensitive indexes.See the section on index use for $regex for full details.If you want to perform a case insensitive regex, you can do something like this:However this query cannot use the index and will perform a collection scan, which could result in suboptimal performance for a large collection.If you are doing an exact match (rather than a prefix match), you can use Builders<User>.Filter.Eq, which would be able to leverage the case-insensitive collation. You would be matching the entire FirstName, not just the beginning of the name.If you need to perform a prefix match, you can store SearchableFirstName where you perform the FirstName.ToLower() operation in code. Then you can create a normal index on SearchableFirstName and /^test/ can leverage the index because the contents of the field and the search term are all in lower case.If you are using MongoDB Atlas, you can leverage Atlas Search to implement full text search capabilities. MongoDB .NET/C# Driver 2.19.0 now includes a fluent API for Atlas Search.Hopefully that provides you some ideas to implement your solution.Sincerely,\nJames",
"username": "James_Kovacs"
},
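As a rough sketch of the lower-cased searchable-field idea above, expressed in the shell: the collection users and the field searchableFirstName are hypothetical names, and the C# code would populate the field the same way whenever a document is written.

```js
// Backfill a lower-cased copy of the name (pipeline update, MongoDB 4.2+)
db.users.updateMany({}, [ { $set: { searchableFirstName: { $toLower: "$FirstName" } } } ])

// Plain (non-collated) index on the lower-cased field
db.users.createIndex({ searchableFirstName: 1 })

// A left-anchored, lower-cased prefix regex can now use that index
db.users.find({ searchableFirstName: /^test/ })
```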
{
"code": "",
"text": "Hi @James_KovacsThanks for the very prompt reply - it’s appreciated.I had begun taking a look at Atlas Search, but didn’t find much in the way of a fluent API for it. Can you give me a link to the doc and examples? I think I’d prefer to go that direction. Thanks!",
"username": "Jim_Owen"
},
{
"code": "Builders<T>.SearchIMongoCollection.Aggregate().Search(searchExpression)IMongoCollection.AsQuerayble().Search(searchExpression)$search$search$search",
"text": "Hi, Jim,The Fluent Atlas Search API was just added to the driver in 2.19.0, which was only released this past Friday. Our Docs Team is still in the process of writing the documentation for this feature. In the meantime, the best source of examples is our integration tests:master/tests/MongoDB.Driver.Tests/SearchThe Official C# .NET Driver for MongoDB. Contribute to mongodb/mongo-csharp-driver development by creating an account on GitHub.From a high-level, you can create a search expression using Builders<T>.Search, which can then be used with Fluent Aggregate (IMongoCollection.Aggregate().Search(searchExpression)) or LINQ (IMongoCollection.AsQuerayble().Search(searchExpression).The API closely mirrors the $search syntax, which can be found in our $search documentation. Hopefully Intellisense and the $search documentation is enough for you to make some progress while we work on documenting this new driver feature.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Thanks James! Very glad to hear it’s available now. I’ll get the NuGet package and give it a try.",
"username": "Jim_Owen"
},
{
"code": " var results2 = UserCollection\n .Aggregate()\n .Search(\n Builders<User>.Search.Wildcard(\n Builders<User>.SearchPath.Multi(x => x.FirstName, x => x.LastName), \"tes*\")\n )\n .ToListAsync();\nCommand aggregate failed: PlanExecutor error during aggregation :: caused by :: Remote error from mongot :: caused by :: Field FirstName is analyzed. \nUse a keyword analyzed field or set allowAnalyzedField to true.\n",
"text": "Hi James,Thanks for the assistance - I have case-insensitive search working when indexing on a single or multi column and that’s much more than I had before.What I’d like to accomplish is performing a SearchPath.Multi on two fields combined with a wildcard. Something like this:The error I’m receiving isBut I can’t find out where allowAnalyzedField should be set.\nI also have the feeling that my code is not quite right on the simple call with the wildcard and the multi-column search path.Any suggestions you can offer would be great.Thanks!",
"username": "Jim_Owen"
},
{
"code": "var searchName = \"test\";\nvar searchTermRegEx = $\"^{searchName}*\";\nvar firstNameFilter = Builders<User>.Filter.Regex(f => f.FirstName, searchTermRegEx);\n\nvar collation = new Collation(\"en\", strength: CollationStrength.Primary);\nvar options = new AggregateOptions()\n{\n\tCollation = collation\n};\nvar userList = await UserCollection\n\t.Aggregate(options)\n\t.Match(firstNameFilter)\n\t.ToListAsync();\nreturn userList;\n$expr$regexMatchvar searchName = \"test\";\nvar searchTermRegEx = $\"^{searchName}*\";\nvar firstNameFilter = Builders<User>.Filter.Expr(\n expr => new { expr = expr.FirstName, type = \"regex\", pattern = searchTermRegEx, options = \"i\" }\n);\n\nvar collation = new Collation(\"en\", strength: CollationStrength.Primary);\nvar options = new AggregateOptions()\n{\n\tCollation = collation\n};\nvar userList = await UserCollection\n\t.Aggregate(options)\n\t.Match(firstNameFilter)\n\t.ToListAsync();\nreturn userList;\n\n$expr$regexMatchoptionsi",
"text": "have been struggling with finding a way to include case-insensitive collation into the Match stage of an Aggregate using the C# driver. I’ve verified that there is a case-insensitive index for the first name field in the code below by listing the indexes with UserCollection.Indexes.List().ToListAsync(). It contains the correct locale and strength.The below code does not work unless the search term matches the case of the first name. I’ve tried googling just about everything as well as using chat.openai and have not been able to find the right way to get this working.Does anyone have some suggestions of where to go with this?\nThanks!To apply the case-insensitive collation to the Match stage of the Aggregate pipeline, you need to use the $expr operator along with the $regexMatch expression. Here is an example of how you could modify your code:The $expr operator evaluates a specified expression and returns its result. The $regexMatch expression performs a regular expression match, and the options field specifies the regular expression options, which include i for case-insensitive matching.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "public async Task<List<User>> ReadListAsync(string searchName, int pageSize, int offset, bool includeActiveUsers, bool includeDeletedUsers)\n{\n // pageSize\n pageSize = pageSize <= 0 ? int.MaxValue : pageSize; // Although Mongo Doc states 0 = unlimited, zero will throw an error\n\n // Create the name filter for first or last name match\n var searchTermRegEx = $\"{searchName}*\";\n // Create the wildcard search def\n var wildcardSearchDef = Builders<User>.Search.Wildcard( Builders<User>.SearchPath.Multi(x => x.FirstName, x => x.LastName), \n searchTermRegEx, \n allowAnalyzedField: true);\n // Create the Sort Stage\n var sortStage = Builders<User>.Sort.Ascending(f => f.LastName).Ascending(f => f.FirstName);\n\n var userList = await UserCollection\n .Aggregate()\n .Search(wildcardSearchDef)\n .Sort(sortStage)\n .Skip(offset)\n .Limit(pageSize)\n .ToListAsync();\n\n return userList;\n}\n",
"text": "Hi Sumanta,Thanks for the response although I was able to get the case-insensitive search working using the new Atlas Search features of the most recent release of the C# driver. Here’s the code that I’m using that works perfectly for the two different columns of the collection.The only thing I have yet to include is the additional filtering needed for the two boolean values passed in the method parameters.If you have any suggestions for that, I’d be happy to hear it.\nCheers!",
"username": "Jim_Owen"
}
] | C# Aggregate and Collation | 2023-02-01T16:34:58.975Z | C# Aggregate and Collation | 2,154 |
null | [
"atlas-search"
] | [
{
"code": "{\n \"_id\": {\n \"$oid\": \"63876f3ad75881cafe41a3e9\"\n },\n \"articleid\": \"b89bfa05-70b3-11ed-b775-2c59e5044e7b\",\n \"headline\": \"Innovative Lessons for Rest of the World\",\n \"subtitle\": \"\",\n \"fulltext\": \"While the world wants to indigenize high-tech, weuses simple, local technologies to solve most of the problems.\",\n \"pubdate\": \"2022-12-01\",\n \"article_type\": \"print\", \n \"date\": 2022-12-01T00:00:00.000+00:00\n }\n} \n [\n {\n \"$search\":{\n \"index\":\"fulltext\",\n \"compound\":{\n \"filter\":[\n {\n \"range\":{\n \"path\":\"date\",\n \"gte\":\"2023-01-30T00:00:00.000Z\",\n \"lte\":\"2023-02-05T00:00:00.000Z\"\n }\n }\n ],\n \"should\":[\n {\n \"text\":{\n \"query\":\"indigenize\",\n \"path\":[\n \"headline\",\n \"fulltext\",\n \"subtitle\"\n ]\n }\n },\n {\n \"text\":{\n \"query\":\"technologies\",\n \"path\":[\n \"headline\",\n \"fulltext\",\n \"subtitle\"\n ]\n }\n }\n ]\n }\n }\n }\n] \n",
"text": "I am trying to search the document as OR operator which is Should in Mongodb atlas search. but it is really really slow as it seems compared to the must operators. I am using must operator which is really fast and works well. but in case of single search or in SHould operator it will be as slow as 20X as compared to mustam I missing anything?here is my document sample where I am trying to search - indigenize or technologies,\nI also need to search keyword1 or keyword2 or keyword3…or in the future!Schema-",
"username": "Utsav_Upadhyay2"
},
{
"code": "[\n {\n \"$search\": {\n \"index\": \"fulltext\",\n \"compound\": {\n \"filter\": [\n {\n \"range\": {\n \"path\": \"date\",\n \"gte\": \"2023-01-30T00:00:00.000Z\",\n \"lte\": \"2023-02-05T00:00:00.000Z\"\n }\n }\n ],\n \"should\": [\n {\n \"text\": {\n \"query\": \"indigenize\",\n \"path\": [\n \"headline\",\n \"fulltext\",\n \"subtitle\"\n ],\n \"boost\": 2\n }\n },\n {\n \"text\": {\n \"query\": \"technologies\",\n \"path\": [\n \"headline\",\n \"fulltext\",\n \"subtitle\"\n ],\n \"boost\": 1\n }\n }\n ],\n \"minimumShouldMatch\": \"50%\"\n }\n }\n }\n]\n\n",
"text": "The performance of Atlas Search’s OR operator (or “should” clause) may slow down when searching for multiple keywords because the engine must retrieve and score documents that match any of the keywords. However, there are a few ways you can improve the performance of the OR operator:Here’s an example of how you can optimize your Atlas Search query with a custom scoring function:In this example, we’re using a custom scoring function to boost the scores of documents that contain the keyword “indigenize” by a factor of 2. This will help prioritize the most relevant results and improve the performance of the OR operator. Additionally, we’re using the “minimumShouldMatch” parameter to only return documents that match at least one of the keywords.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "thank you so much for this information, but may I know why a single search is slow too, does it works the same as should Operator? could you please share syntax with the above schema for a single search using date lte, gte and query ?",
"username": "Utsav_Upadhyay2"
}
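For what it is worth, a single-keyword variant of the query shown earlier in this thread could look like the sketch below. It keeps the same date range filter and simply uses one must clause; the collection name articles is a placeholder, and it assumes the date field is mapped as a date type in the fulltext index, so the range values are passed as dates rather than strings.

```js
db.articles.aggregate([
  {
    $search: {
      index: "fulltext",
      compound: {
        filter: [
          { range: { path: "date",
                     gte: ISODate("2023-01-30T00:00:00.000Z"),
                     lte: ISODate("2023-02-05T00:00:00.000Z") } }
        ],
        must: [
          { text: { query: "indigenize", path: ["headline", "fulltext", "subtitle"] } }
        ]
      }
    }
  }
])
```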
] | How OR operator works in Atlas search? | 2023-02-05T08:08:45.536Z | How OR operator works in Atlas search? | 1,014 |
null | [
"java",
"crud",
"spring-data-odm"
] | [
{
"code": " @Test\n public void testInsertAndUpdate() {\n ObjectId clientSideObjectId = new ObjectId();\n TestedObjectWithId object = new TestedObjectWithId(\"name\", clientSideObjectId);\n BulkOperations bulkOperations =\n mongoTemplate.bulkOps(UNORDERED, COLLECTION_NAME)\n .insert(object)\n .updateOne(\n query(where(ID_FIELD_NAME).is(clientSideObjectId)),\n new Update().currentDate(\"serverSideCurrentDate\")\n );\n bulkOperations.execute();\n }\n",
"text": "HiWhen I use bulk operations as in the example below (inserting a new document, and then updating it it with server-side date) - does the mongo-server creates a new document without the Date, and then updates it with Date, or the document will be created with Date from the beginning.I want to know if there could be happen that a client will fetch the document without the Date?",
"username": "Noam_Gershi"
},
{
"code": "UNORDERED",
"text": "UNORDEREDmeans that the operations are NOT ORDERED. It means that the operations are NOT guarantied to be executed in the order you specified. So potentially, updateOne is performed before the insert, so the query part will not matched any document because it is not inserted yet. The bulkWrite will then leave a document without the server side current date.A transaction would be a better choice to make sure a date is set before anyone retrieve the document.",
"username": "steevej"
},
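To illustrate the transaction suggestion above, a minimal shell sketch might look like this. It assumes a replica set, since multi-statement transactions require one; the database and collection names are placeholders, and the Java/Spring equivalent would drive the same pattern through a ClientSession.

```js
const session = db.getMongo().startSession();
const coll = session.getDatabase("test").getCollection("objects");

session.startTransaction();
try {
  const id = new ObjectId();                       // client-side id, as in the question
  coll.insertOne({ _id: id, name: "name" });
  coll.updateOne({ _id: id },
                 { $currentDate: { serverSideCurrentDate: true } });
  session.commitTransaction();                     // both writes become visible together
} catch (e) {
  session.abortTransaction();
  throw e;
}
```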
{
"code": "",
"text": "and if I change it to ORDERED?",
"username": "Noam_Gershi"
},
{
"code": "",
"text": "Since it is not atransactionsomeone can retrieve the document after the insertion and before you have the time to set the date. Unlikely but possible. Always plan for the worst case.",
"username": "steevej"
},
{
"code": "",
"text": "Thanx.For my case maybe it is sufficient, since we are going to fetch according to the Date field, so it will be fetched only after the last update. I will investigate transaction also, and compare the performance of the insert+update with transaction.Thanx again!",
"username": "Noam_Gershi"
},
{
"code": " @Test\n public void testInsertAndUpdate() {\n ObjectId clientSideObjectId = new ObjectId();\n TestedObjectWithId object = new TestedObjectWithId(\"name\", clientSideObjectId);\n BulkOperations bulkOperations =\n mongoTemplate.bulkOps(UNORDERED, COLLECTION_NAME)\n .insert(object)\n .updateOne(\n query(where(ID_FIELD_NAME).is(clientSideObjectId)),\n new Update().currentDate(\"serverSideCurrentDate\")\n );\n bulkOperations.execute();\n }\n",
"text": "HiWhen I use bulk operations as in the example below (inserting a new document, and then updating it it with server-side date) - does the mongo-server creates a new document without the Date, and then updates it with Date, or the document will be created with Date from the beginning.I want to know if there could be happen that a client will fetch the document without the Date?In MongoDB, bulk operations like insert and update are executed in order and atomically, meaning either all operations succeed or all operations fail.In the example you provided, the MongoDB server will create the document with the “name” field and the client-side generated ObjectId, and then update the document to include the “serverSideCurrentDate” field.There is no possibility for a client to fetch the document without the “serverSideCurrentDate” field, as the update operation is guaranteed to be atomic and successful.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "Read about bulk operation. Your understanding aboutoperations like insert and update are executed in order and atomicallyseems to different from what is written in the documentation. In particular:With an unordered list of operations, MongoDB can execute the operations in parallel, but this behaviour is not guaranteed. If an error occurs during the processing of one of the write operations, MongoDB will continue to process remaining write operations in the list.You wroteare executed in orderbut the documentation saysexecute the operations in parallel\nSo the order cannot be guarantee.You writeare executed … atomicallybut the documentation says:If an error occurs during the processing of one of the write operations, MongoDB will continue to process remaining write operations in the list.So the insert and its subsequent updateOne are not atomic to each other, one might fail but not the other.So like I wroteSo potentially, updateOne is performed before the insert, so the query part will not matched any document because it is not inserted yet. The bulkWrite will then leave a document without the server side current date.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for the feedback I may have understood it in a different way @steevej .I will keep you posted thanks alot.It feels really great and good when people like u are there to lift",
"username": "Sumanta_Mukhopadhyay"
}
] | MongoTemplate BulkOperations atomicity | 2023-02-04T16:49:48.821Z | MongoTemplate BulkOperations atomicity | 1,469 |
[
"node-js",
"mongoose-odm",
"connecting"
] | [
{
"code": "",
"text": "\nScreenshot_111200×276 11.7 KB\n\n// code\nimport mongoose from “mongoose”;\nimport dotenv from “dotenv”;dotenv.config();const url = process.env.MONGODB_CONNECTION_URL || “”;\nmongoose.set(“strictQuery”, true);const Connection = async () => {\ntry {\nawait mongoose.connect(url);\nconsole.log(“Juice Bar Database is Running”);\n} catch (error) {\nconsole.log(“Error while connecting with DB”);\nconsole.log(error);\n}\n};export default Connection;",
"username": "AZIZUL_HAQUE_TOUSIF"
},
{
"code": "connection refusedMONGODB_CONNECTION_URLmongo",
"text": "// code\nimport mongoose from “mongoose”;\nimport dotenv from “dotenv”;dotenv.config();const url = process.env.MONGODB_CONNECTION_URL || “”;\nmongoose.set(“strictQuery”, true);const Connection = async () => {\ntry {\nawait mongoose.connect(url);\nconsole.log(“Juice Bar Database is Running”);\n} catch (error) {\nconsole.log(“Error while connecting with DB”);\nconsole.log(error);\n}\n};export default Connection;The error connection refused typically occurs when the MongoDB server is not running or is not accessible at the URL specified in the connection string. Here are a few common causes:Try resolving these issues and see if the error persists. If the error continues, try updating the MongoDB driver version or checking the MongoDB logs for more information.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "connection refused;QUESTION juice-bar.qlanux0.mongodb.net. IN ANY **\n;ANSWER\njuice-bar.qlanux0.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-12a1mm-shard-0\"\njuice-bar.qlanux0.mongodb.net. 60 IN SRV 0 0 27017 ac-jidwr7g-shard-00-00.qlanux0.mongodb.net.\njuice-bar.qlanux0.mongodb.net. 60 IN SRV 0 0 27017 ac-jidwr7g-shard-00-01.qlanux0.mongodb.net.\njuice-bar.qlanux0.mongodb.net. 60 IN SRV 0 0 27017 ac-jidwr7g-shard-00-02.qlanux0.mongodb.net.\n;AUTHORITY\n;ADDITIONAL\n",
"text": "The error EREFUSED is notconnection refusedSo it is not aboutIncorrect Connection URLthe error would be a parse error if not formatted correctly or ENOTFOUND if formatted correctly but wrong.as theMongoDB Server Not Runningit is not aboutFirewall Blocking the ConnectionIt is also not aboutIf the MongoDB server and the client are on different networks, make sure that the network configuration allows connections between the two.It is clear that they are the URI specifies an Atlas cluster and the DNS information is correct as seen inWhat I suspect is that your Internet provider is using a deprecated DNS server that do not understand SRV records. Try setting your DNS resolver to use google’s 8.8.8.8",
"username": "steevej"
},
{
"code": "",
"text": "Thanks alot for the clarification @steevej",
"username": "Sumanta_Mukhopadhyay"
}
] | Please Help me to solve the following error | 2023-02-04T13:39:01.144Z | Please Help me to solve the following error | 1,245 |
|
null | [
"dot-net"
] | [
{
"code": "ObjectIdObjectId",
"text": "Hi,I’m working with MongoDB for the first time in a .NET application. I’m struggling with the model\nand “default” way of working which makes my application maintainable by multiple people.One struggle I’m encountering is how to work with the ObjectId key in code. Currently, I define “MongoEntities” which have an Id of type ObjectId. This seems to be optimal for querying. However,\nin the controller layer, this means that I have to map a string from the route to an ObjectId using the TryParse method in every action. A solution would be to add a Custom Model Binder on a type that inherits from ‘ObjectId’ but I’m not sure whether this is the best possible solution.Are there any ideas or solutions to do this?",
"username": "Diederik_Mathijs"
},
{
"code": "",
"text": "There are a few ways you can handle the ObjectId key in your .NET application when working with MongoDB:Ultimately, the solution you choose will depend on the specific requirements and constraints of your application. The custom model binder and custom ObjectId class options are good choices if you want to encapsulate the logic of converting a string to an ObjectId in a reusable way. The helper method option is a simple solution that works well if you have a small number of conversions to make.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "Thanks for taking time to respond, I guess I’ll continue with the custom model binder! ",
"username": "Diederik_Mathijs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Guideline on working with ObjectId's in .NET controllers? | 2023-02-05T10:11:30.302Z | Guideline on working with ObjectId’s in .NET controllers? | 559 |
null | [
"dot-net",
"xamarin"
] | [
{
"code": "if (_opts.contains != null) query = query.Where(note => note.content.Contains(_opts.contains, StringComparison.OrdinalIgnoreCase));\nif (_opts.contains != null) query = query.Where(note => note.content.Contains(_opts.contains));\nstring queryContains = \"blah\";\nif (_opts.contains != null) query = query.Where(note => note.content.Contains(queryContains, StringComparison.OrdinalIgnoreCase));\nif (_opts.contains != null) query = query.Where(note => note.content.Contains(\"blah\", StringComparison.OrdinalIgnoreCase));\n",
"text": "Hi, all–Just started using Realm- it’s amazing. I’m building an iOS app in Xamarin a having trouble making a .Where( x.contains(y)) query case-insensitive. The documentation states there’s an extension method which provides support for StringComparison.OrdinalIgnoreCase, but when I use it with a variable as the query source, it breaks. A string literal works. The error I get is here:“The left-hand side of the Call operator must be a direct access to a persisted property in Realm.”And this is the problematic line of code:Without the StringComparison, it works just fine:When I try dropping in another variable, i.e.:it fails as well.As I mentioned, string literals work, i.e.:Any ideas what could be going on?thanks!: j",
"username": "Jesse_Garrison"
},
{
"code": "",
"text": "Did you ever figure this out? I am having the same issue. Another thing I can do to make it work is replace the compared string variable with a string literal and it works.",
"username": "Sevren_Brewer"
},
{
"code": "",
"text": "@Sevren_Brewer can you file an issue at GitHub - realm/realm-dotnet: Realm is a mobile database: a replacement for SQLite & ORMs and ideally provide some code samples and the team will be happy to dive deeper.",
"username": "nirinchev"
}
] | Error with case insensitive search | 2020-04-24T18:27:33.189Z | Error with case insensitive search | 3,991 |
null | [
"database-tools",
"containers",
"backup",
"atlas"
] | [
{
"code": "",
"text": "How can I restore a single collection from a MongoDB online Snapshot?I have tried downloading the snapshot and copying it over the local Docker data folder of MongoDB. It does start but then I am no longer able to login using either the old root user password, or any of the users in the MongoDB Atlas database.I need to be able to restore just a single collection from one of the backups from a point in time. I know if I make an archive using mongodump, I can restore it locally with mongorestore. However, mongorestore doesn’t let me specify the downloaded gz file. It reports it is in an incorrect format.I have deleted the contents of a collection by accident, and just need to get that data back either using the online UI or a local docker copy.Thanks",
"username": "Stephen_Eddy1"
},
{
"code": "mongorestoremongorestore --host <hostname> --port <port> --username <username> --password <password> --db <database> --collection <collection> <path-to-snapshot>\n\nmongoexportmongoimportmongoexport --host <new-instance-hostname> --port <new-instance-port> --username <new-instance-username> --password <new-instance-password> --db <new-instance-database> --collection <collection> --out <collection>.json\n\nmongoimport --host <original-instance-hostname> --port <original-instance-port> --username <original-instance-username> --password <original-instance-password> --db <original-instance-database> --collection <collection> --drop --file <collection>.json\n\n",
"text": "To restore a single collection from an online MongoDB snapshot, you can use the following steps:This way, you can restore a single collection from a MongoDB online snapshot without affecting the rest of the data in your original instance.",
"username": "Sumanta_Mukhopadhyay"
}
] | How to restore a single collection from a MongoDB Atlas Snapshot | 2023-02-03T20:42:27.694Z | How to restore a single collection from a MongoDB Atlas Snapshot | 2,964 |
null | [
"swift"
] | [
{
"code": "@ObservedRealmObject@StateObjectEnvironmentObject@ObservedRealmObjectstruct ContentView: View {\n @ObservedRealmObject var someObject: SomeObject\n var body: some View {\n ...\n SubView(someObject: someObject)\n ...\n }\n}\n\nstruct SubView: View {\n @ObservedRealmObject var someObject: SomeObject\n var body: some View {\n ...\n }\n}\nSubView",
"text": "Hi there!I am a bit confused about passing @ObservedRealmObject to children views in SwiftUI. Unlike with @StateObject where I inject it as EnvironmentObject, I need to pass @ObservedRealmObject as argument. Would this example be the proper way to do this?While this seems to work, I think I am concerned as this looks like I have created two sources of truth. I also tried removing the property wrapper in SubView but I end up with stale data when modifying the object.What is the correct approach here? Thanks!",
"username": "Glyphe_Sektor"
},
{
"code": "@ObservedRealmObject@ObservedRealmObject@ObservedRealmObjectSubView@ObservedRealmObject@ObservedRealmObject",
"text": "When using @ObservedRealmObject, it is correct to pass it as an argument to child views, as shown in your example.Having multiple references to the same @ObservedRealmObject instance in different views does not create multiple sources of truth, as the object is still managed by the Realm database and its changes are reflected in all references to it. However, passing the @ObservedRealmObject instance as an argument ensures that the child views have the latest version of the object.In your example, removing the property wrapper in the SubView would result in stale data because @ObservedRealmObject is responsible for monitoring changes to the object and updating the view accordingly. Without it, the view would not receive updates when the object changes in the Realm database.So, in conclusion, the approach you’ve shown is the correct way to pass the @ObservedRealmObject instance to child views.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Passing ObservedRealmObject between views in SwiftUI | 2023-02-05T11:05:42.207Z | Passing ObservedRealmObject between views in SwiftUI | 977 |
null | [
"mongodb-shell",
"realm-web"
] | [
{
"code": "",
"text": "I open mongodb atlas\nI m able to see data service but\nwhen i moved to app service\nit is send me back to data serviceWhat can be the issue?",
"username": "Sanjay_Prasad"
},
{
"code": "",
"text": "did this start new or always has been like this? if new, there might be temporary work on the site. At least for me, App Services is working as of now.if old, your browser extensions might be interfering with the page, especially adblockers. they are not normally doing that, but if set to block things aggressively this might happen. I have multiple of them installed but the site is usually OK for me.open the Developer Tools of your browser (usually F12 key opens it) and switch to its “console” tab and check error messages there. this may show info related to extensions if you know their names. also try disabling them and see if it fixes the issue.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "I am also facing the same issue. I do see some HTTP status codes 401s in console.",
"username": "binoy_s"
},
{
"code": "",
"text": "401 error code is mainly used for authentication problems (unless a developer wants something else)Atlas UI invalidates all logins and requires fresh login daily or earlier, and you should not be able to use service pages for that reason and be redirected back to the login page.However, things can happen and the page might get stuck in the browser and still somehow show data service while refusing app services. (or it is just temporary issues caused by some fix/update applied at that moment).you can try two things on the page itself:",
"username": "Yilmaz_Durmaz"
}
] | I m not able to open app service on atlas | 2023-01-31T12:05:45.099Z | I m not able to open app service on atlas | 1,425 |
null | [
"queries",
"dot-net"
] | [
{
"code": "using MongoDB.Bson.Serialization;\nusing MongoDB.Bson.Serialization.Attributes;\nusing MongoDB.Bson.Serialization.Serializers;\nusing MongoDB.Driver;\n\nint idToFind = 5;\n\nvar filter = Builders<MyDocument>.Filter.Where(x => x.Id == idToFind);\nvar renderedFilter = filter.Render(BsonSerializer.LookupSerializer<MyDocument>(), BsonSerializer.SerializerRegistry);\nConsole.WriteLine(renderedFilter);\n\npublic class MyDocument\n{\n\tpublic MyId Id { get; set; }\n\tpublic string Name { get; set; }\n}\n\n[BsonSerializer(typeof(MyIdSerializer))]\npublic class MyId\n{\n\n\tpublic MyId(int id)\n\t{\n\t\tId = id;\n\t}\n\n\tpublic int Id { get; }\n\n\tpublic static bool operator ==(int id, MyId other) => id == other.Id;\n\tpublic static bool operator ==(MyId id, int other) => id.Id == other;\n\tpublic static bool operator !=(int id, MyId other) => !(id == other);\n\tpublic static bool operator !=(MyId id, int other) => !(id == other);\n}\n\npublic class MyIdSerializer : SerializerBase<MyId>\n{\n\tpublic override MyId Deserialize(BsonDeserializationContext context, BsonDeserializationArgs args)\n\t{\n\t\treturn new MyId(context.Reader.ReadInt32());\n\t}\n\n\tpublic override void Serialize(BsonSerializationContext context, BsonSerializationArgs args, MyId value)\n\t{\n\t\tcontext.Writer.WriteInt32(value.Id);\n\t}\n}\n\nUnhandled exception. System.InvalidCastException: Unable to cast object of type 'System.Int32' to type 'MyId'.\n at MongoDB.Bson.Serialization.Serializers.SerializerBase`1.MongoDB.Bson.Serialization.IBsonSerializer.Serialize(BsonSerializationContext context, BsonSerializationArgs args, Object value)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Serialize(IBsonSerializer serializer, BsonSerializationContext context, Object value)\n at MongoDB.Driver.Linq.Linq3Implementation.Misc.SerializationHelper.SerializeValue(IBsonSerializer serializer, Object value)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToFilterTranslators.ExpressionTranslators.ComparisonExpressionToFilterTranslator.Translate(TranslationContext context, BinaryExpression expression)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToFilterTranslators.ExpressionToFilterTranslator.TranslateUsingQueryOperators(TranslationContext context, Expression expression)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToFilterTranslators.ExpressionToFilterTranslator.Translate(TranslationContext context, Expression expression, Boolean exprOk)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToFilterTranslators.ExpressionToFilterTranslator.TranslateLambda(TranslationContext context, LambdaExpression lambdaExpression, IBsonSerializer parameterSerializer, Boolean asRoot)\n at MongoDB.Driver.Linq.Linq3Implementation.LinqProviderAdapterV3.TranslateExpressionToFilter[TDocument](Expression`1 expression, IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry)\n at MongoDB.Driver.ExpressionFilterDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry, LinqProvider linqProvider)\n at MongoDB.Driver.FilterDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry)\n at Program.<Main>$(String[] args) in C:\\src\\MongoTest\\MongoTest\\Program.cs:line 9\n",
"text": "I created a custom Id type that wraps an integer. I implemented the equality operators for this type. The problem is, when I want to filter in mongoDB with linq and the expression includes this custom operator, the expression translator throws an error.Code:Exception:",
"username": "zator"
},
{
"code": "\n \n var field = ExpressionToFilterFieldTranslator.Translate(context, leftExpression);\n var serializedComparand = SerializationHelper.SerializeValue(field.Serializer, comparand);\n return AstFilter.Compare(field, comparisonOperator, serializedComparand);\n \n ",
"text": "The issue is probably in ComparisonExpressionToFilterTranslator. It tries to use the serializer of the leftExpression for serializing the right operand:",
"username": "zator"
},
{
"code": "",
"text": "Hi, @zator,Welcome to the MongoDB Community Forums. Thank you for reporting this bug. Please file an issue in our CSHARP JIRA project with the provided repro and we will be happy to investigate further.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Ok, I opened an issue in Jira:\nhttps://jira.mongodb.org/browse/CSHARP-4517Thanks",
"username": "zator"
}
] | System.InvalidCastException when using custom operator in filter | 2023-02-03T14:08:11.227Z | System.InvalidCastException when using custom operator in filter | 1,082 |
null | [
"queries"
] | [
{
"code": "{\n \"_id\": ObjectId(\"5524d12d2702a21830bdb8e5\"),\n \"code\": \"Apple\",\n \"name\": \"iPhone\",\n \"parameters\": [\n {\n \"code\": \"xxx\",\n \"name\": \"Andrew\",\n \"value\": \"9\",\n \n },\n {\n \"code\": \"yyy\",\n \"name\": \"Joy\",\n \"value\": \"7\",\n \n },\n \n ]\n }\ndb.coll.update({\n \"parameters.name\": \"Andrew\"\n},\n{\n $push: {\n \"parameters\": {\n \"code\": \"$code\",\n \"name\": \"bar\",\n \"value\": \"10\",\n \n }\n }\n},\n{\n multi: true\n})\ncodeparameters.name == \"Andrew\"xxx",
"text": "I have the following Mongo collection,I am using the following query to push into the parameters array object,However, for the value of code, I want to use the value of the object that matched (i.e. the object with parameters.name == \"Andrew\", which here is xxx.Here’s a playground link to the problem Mongo playground Also, I am using a really old version (3.2) of MongoDb. It would be preferable if the solution worked with that. However, if that’s impossible, you can also suggest a solution with the minimum version that’s required.",
"username": "Manak_Bisht"
},
{
"code": "$// FIND THE DOCUMENTS\nvar docs = db.coll.find(\n { \"parameters.name\": \"Andrew\" }, \n { \"parameters.$\": 1 }\n);\n\n// PREPARE THE BULK WRITE OBJECTS\nvar bulkUpdate = [];\ndocs.forEach(function(doc) {\n bulkUpdate.push({\n \"updateOne\": {\n \"filter\": { \"_id\" : doc._id },\n \"update\": { \n \"$push\": { \n \"code\": doc.parameters[0].code,\n \"name\": \"bar\",\n \"value\": \"10\"\n } \n }\n }\n });\n});\n\n// BULK WRITE QUERY\ndb.coll.bulkWrite(bulkUpdate);\n$concatArrayscode$indexOfArrayname$arrayElemAtcodedb.coll.updateMany(\n { \"parameters.name\": \"Andrew\" },\n [{\n \"$set\": {\n \"parameters\": {\n \"$concatArrays\": [\n \"$parameters\",\n [\n {\n \"code\": {\n \"$arrayElemAt\": [\n \"$parameters.code\",\n { \"$indexOfArray\": [\"$parameters.name\", \"Andrew\"] }\n ]\n },\n \"name\": \"bar\",\n \"value\": \"10\"\n }\n ]\n ]\n }\n }\n }]\n)\nmongoshupdateOneupdateMany",
"text": "Hello @Manak_Bisht, Welcome to the MongoDB community forum,You have to do 2 queries in the 3.2 version, first, find the matching documents, and second, update query (you need to pass that property value from the above first query result).You can try something like below example, i have not tested,Second option,\nYou can use update with aggregation pipeline starting from v4.2, something like,PlaygroundNote: db.collection.update method is deprecated in mongosh. For alternative methods, you can use updateOne for single document updates and updateMany for multiple documents update, see Compatibility Changes with Legacy mongo Shell..",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to use an existing field value with $push in MongoDb? | 2023-02-04T08:51:53.609Z | How to use an existing field value with $push in MongoDb? | 967 |
null | [] | [
{
"code": "",
"text": "Hi, I’m struggling to understand how to properly insert a document reference into a List using splice, haven’t found any clear examples in the documentation.I’m able to achieve this when the document being inserted doesn’t exist yet (and needs to be created at this step), but when I try to add an existing document’s reference to the List, it fails.Is there a simple solution to this? Thanks ",
"username": "Cameron_Cruz"
},
{
"code": "",
"text": "Hi @Cameron_Cruz,I think what you are looking for is $push or $addToSet in an update:https://docs.mongodb.com/manual/reference/operator/update/push/Let me know if that helps.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "realm.write(() => {\n realmObj.someList.splice(index, 0, { _id: existingObjectId, })\n})\n",
"text": "Hi @Pavel_Duchovny, thanks for your suggestion.Looking at the following docs, it doesn’t look like .push allows me to specify the index to insert at?https://docs.mongodb.com/realm-sdks/js/latest/Realm.List.html#pushI’d like to do something like the following:where someList is an array of document references, but this fails when there’s already an object with that id in Realm.",
"username": "Cameron_Cruz"
},
{
"code": "const list = useObject(MyList)\nconst swap = (from, to) => \n\tlist.arr.splice(from, 1, list.arr.splice(to, 1, list.arr[from])[0]);\n//…\n",
"text": "Hey, I’m trying to use splice in order to swap order position in a Realm.List property but nothing happens.Here’s a code example of what I’m trying to do:It seems the code above fails silently and can’t make the swap. I noticed that the second splice does not return anything, so is there a bug in Realm or what is the efficient and correct way to do this kind of swap position?",
"username": "David_Had"
}
] | How to insert a document reference into a List using splice? | 2020-08-24T06:04:25.017Z | How to insert a document reference into a List using splice? | 2,157 |
null | [
"serverless"
] | [
{
"code": "",
"text": "I was poking around the menus but couldn’t find anything about this - is there any way to shut off the service automatically after a certain spend threshold is reached? I’m running a small but public app and want to protect myself against denial-of-wallet attacks. I know you can set alerts if it reaches over $X but in case I’m not close to a computer or it’s the middle of the night, I don’t want to be out thousands or more dollars ",
"username": "Timmy_Chen"
},
{
"code": "",
"text": "Hi @Timmy_Chen,is there any way to shut off the service automatically after a certain spend threshold is reached?There isn’t a feature to terminate serverless instances once a particular spend threshold is met.If you’d like this type of feature to possibly be implemented in future, I would advise creating a feedback post in the MongoDB Feedback Engine in which you can describe the use case and others can vote for.In saying so, I understand you’ve noted it’s a small app, would perhaps maybe a shared tier (M0 - M5) instance work better for your use case?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "find_one",
"text": "You can vote up this idea at serverless – MongoDB Feedback Engine. Copying my comment on that post here:I accidentally wasted ~$150 in ~30 minutes because I used find_one (without an index) in a function that was running many instances in a high-performance computing environment. As someone using this for academic research as a student, not for business, it would have been really nice if I could have set some kind of cap or throttle. Had things gone slightly differently, this could have been much worse. I happened to notice the icon in the corner with the default alert of 1 million RPU/s. This led to me digging a bit deeper into the RPU pricing model trying to resolve the difference in how units are presented in the documentation vs. the monitoring page (M vs. MM, both referring to million apparently). This led me to do some rough calculations which quickly led me to stop the HPC runs that were causing the high usage. Had I left it running overnight instead of catching it after ~30 minutes, the situation would have been worse.",
"username": "Sterling_Baird"
}
] | Limit Serverless Spend | 2022-11-26T17:44:17.463Z | Limit Serverless Spend | 1,985 |
null | [] | [
{
"code": "close_idle_time",
"text": "Hi, all.I find close_idle_time defaults to 30s in wiredtiger(config_def.c),why does mongo 4.4 change it to 100000(~28h) ? (related to https://jira.mongodb.org/browse/SERVER-41492, I know 4.0 also uses 100000.)If I create many collections and don’t access them for a long period of time, is it a good idea to sweep them from memory quickly and make room for other active collections ?And can’t we change it at runtime ?",
"username": "Lewis_Chan"
},
{
"code": "",
"text": "We actually changed the default for this recently:\nhttps://jira.mongodb.org/browse/SERVER-24949The context for the original setting of 28 hours is here:\nhttps://jira.mongodb.org/browse/SERVER-17907Note that idle collection pages don’t need to be removed from memory to “make room for other active collections”, as the eviction policy will do that automatically anyway. The real reason to close idle file handles is to remove the overhead of the bookkeeping and resources necessary to keep each file open. The removal of pages in memory associated with a file handle being closed is a necessary (and somewhat undesirable) side-effect.",
"username": "Eric_Milkie"
},
{
"code": "close_idle_time",
"text": "Thanks for replying.Eviction does work when there’s cache pressure. But if eviction cannot keep up with the rate of collection being created and inserted (We once saw performance problem of eviction efficiency in mongo 4.0 production, that is, cache_used is hard to decrease even if we increase number of eviction worker), reducing close_idle_time will help sweep inactive pages, right ?I suppose changing to 10min also applies to mongo 4.0 ? (For some reason we’re still using mongo 4.0.)",
"username": "Lewis_Chan"
},
{
"code": "",
"text": "I think you are talking about eviction of dirty cache pages. Clean cache pages are simple to evict, and idle file handles only have clean pages in the cache (unless your checkpoints are taking longer than the idle sweep time; in such a case, lowering the idle sweep time will do nothing to improve this).\nIf you are having cache pressure, that typically means the number of dirty cache pages has hit the dirty cache limit, and no amount of freeing clean cache pages will do anything to alleviate that.",
"username": "Eric_Milkie"
},
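Regarding the "can't we change it at runtime?" part of the original question: the WiredTiger file-manager settings can be adjusted on a running mongod through the wiredTigerEngineRuntimeConfig server parameter. A hedged mongosh sketch - the 600-second value is only an example, and the effect should be verified on your own deployment before relying on it:

```js
// Run against the admin database of the mongod you want to tune.
db.adminCommand({
  setParameter: 1,
  wiredTigerEngineRuntimeConfig: "file_manager=(close_idle_time=600)"
})

// The equivalent startup setting lives under
// storage.wiredTiger.engineConfig.configString in the config file.
```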
{
"code": "close_idle_time",
"text": "The default value for close_idle_time in many software systems is 100,000 seconds, or approximately 27.8 hours, for a few reasons:This default value is often considered a reasonable compromise between the need for efficient resource usage and the risk of breaking active connections. It can be adjusted based on the specific needs and requirements of the system.",
"username": "Aliven_jes"
}
] | Why close_idle_time defaults to 100000s? | 2021-05-15T01:23:21.975Z | Why close_idle_time defaults to 100000s? | 4,670 |
null | [
"swift",
"objective-c"
] | [
{
"code": "",
"text": "Hello,I’m a newbie in realm. I’m using objective-c realm.I’ve looked at mongo documentation for LinkingObjects in realmSwift. I’m trying to do the same in realm objective-c.is there an equivalent thing in objective-c or a workaround?my use case is:\nI’ve Category that has a list (RLMArray). I assume each time I update an item, it will be updated in Category instance.Thank!",
"username": "Hagar"
},
{
"code": "@property (readonly) RLMLinkingObjects *tasks;",
"text": "Hi @Hagar, welcome to the forums. We would be happy to help.Many components of Realm have their roots in ObjC so it’s well supported. When your going through the Getting Started Guide, most of the example code has a couple of tabs at the top of each code segment; Swift, Objective CSee the documentation Linking ObjectsIn general a linkingObject property is defined like this:@property (readonly) RLMLinkingObjects *tasks;If you have some specific code you need help with, post it and we’ll take a look.",
"username": "Jay"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to use LinkingObjects with objective-c Realm? | 2023-02-03T22:54:04.840Z | How to use LinkingObjects with objective-c Realm? | 828 |
null | [
"queries",
"java"
] | [
{
"code": "Document d = collection.find(eq(\"UUID\", id)).first();\n\n if (d == null) {\n System.out.println(\"document = null\");\n return;\n }\n System.out.println(\"document exists\");\nif (collection.countDocuments(query) < 1)\n System.out.println(\"Document exists\");\n",
"text": "Hey, im trying to check if a document already exits. But im having some problems.\nI tried different solutionsThe error that is saw the most is that I can’t cast for example: ‘Publisher’ to ‘long’ or ‘Publisher’ to ‘Document’Method 1Method 2Im using the reactive streams driver.\nThanks",
"username": "Jackolix"
},
{
"code": "collection.find(eq(\"UUID\", id)).first();doesn't work",
"text": "Hello @Jackolix ,Welcome to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not then can you confirm if documents exist in your collection and can you share an example document?Also,collection.find(eq(\"UUID\", id)).first();Can you try your query in shell and check if you are getting the expected results?(doesn’t work)By doesn't work do you mean that it never returns the expected value or are you seeing some other error?The error that is saw the most is that I can’t cast for example: ‘Publisher’ to ‘long’ or ‘Publisher’ to ‘Document’Can you please share the exact error that you are getting? If it’s a casting error, then please refer to this thread which I think is related.Additionally, I would recommend you to go through below thread and resources which includes some examples on querying database using reactive streams.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "public static boolean exists(String id) {\n Query query = new Query(Criteria.where(\"UUID\").is(id));\n ReactiveMongoTemplate template = new ReactiveMongoTemplate(Database.getConnection(), \"minecraft\");\n Mono<Boolean> exists = template.exists(query, \"players\");\n exists.subscribe(\n value -> {\n Console.send(Database.PREFIX + value);\n return value;\n },\n error -> Console.send(Database.ERROR + error)\n );\npublic static Perk getPerks(Player player) {\n Perk nullperk = new Perk(\"booster\", \"enterhaken\", \"rocket_jump\", true, 6, 7);\n if (!exists(player.getUniqueId().toString())) //here I need the variable\n setPerk(player, nullperk);\n if (Items.perks.containsKey(player))\n return Items.perks.get(player);\n return nullperk;\n",
"text": "Thanks for your reply.\nIm using now springframework with the ReactiveMongoTemplate and it works.\nThe answer is here clickAnd for the casting this was a helpful comment clickThe only question I have now how do I use the variable in another method when it becomes available?My exist methodAnd my method where I need the variable",
"username": "Jackolix"
},
{
"code": "interface Callback {\n void onSuccess(boolean value);\n }\n\npublic static void exists(String id, final Callback callback) {\n Query query = new Query(Criteria.where(\"UUID\").is(id));\n ReactiveMongoTemplate template = new ReactiveMongoTemplate(Database.getConnection(), \"minecraft\");\n Mono<Boolean> exists = template.exists(query, \"players\");\n exists.subscribe(\n value -> callback.onSuccess(value),\n error -> Console.send(Database.ERROR + error)\n );\n }\n exists(player.getUniqueId().toString(), new Callback() {\n @Override\n public void onSuccess(boolean value) {\n if (!value)\n setPerk(player, nullperk);\n }\n });\n",
"text": "My Solution.\nIm using now a Callback that is executed when the value is available. Now, everything works async.",
"username": "Jackolix"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Check if Document exists | 2023-01-19T10:11:28.314Z | Check if Document exists | 5,026 |
null | [
"aggregation",
"views"
] | [
{
"code": "....\n\n\"$redact\": {\n\t\"$cond\": {\n\t\t\"if\": {\n\t\t\t\"$lte\": [{\n\t\t\t\t\"$sum\": [\"$slots\", 1]\n\t\t\t}, 10]\n\t\t},\n\t\t\"then\": \"$KEEP\",\n \"else\": \"$PRUNE\",\n\t}\n}\n....\n\n\"$merge\": {\n \"into\": \"syncs_locks\",\n \"on\": \"_id\",\n \"whenMatched\": \"fail\",\n \"whenNotMatched\": \"insert\"\n}\n",
"text": "Hi!I have an aggregation pipeline (using mongocxx driver) that executes a $redact with a $cond, something like this:And the final stage includes a $merge, like this:The driver can catch “whenMatched” errors correctly with an exception.But if the pipeline is executed “correctly”, either with a merge performed, or “stopped” in $redact, then, it is not possible to know what happened. The driver always returns an iterator that is always empty.Is there any way to know if a document was inserted or modified? Or throw any exception if the cond was not met?Thanks",
"username": "alvarolb"
},
{
"code": "$merge",
"text": "Hi @alvarolb, thanks for the great question! Unfortunately the information you’re trying to surface isn’t available as the $merge stage doesn’t offer any type of result reporting.This is currently being tracked by SERVER-43194, and is in the backlog. Once this feature is prioritized, built and released downstream tools such as the C++ Driver would be able to surface these results after the aggregation pipeline completes.",
"username": "alexbevi"
},
{
"code": "$out$out$redact$cond",
"text": "The MongoDB aggregation framework provides an $out stage to write the results of an aggregation pipeline to a new collection. If the $out stage is included in the pipeline after the $redact stage, you can check the size of the new collection to determine if any documents were modified or inserted.Alternatively, you can create a separate pipeline to check the size of the target collection before and after the aggregation pipeline, and compare the results to determine if any changes were made.However, there is no direct method to throw an exception if the condition in the $cond stage was not met. You would have to add additional checks in your code to achieve this behavior.",
"username": "Sumanta_Mukhopadhyay"
}
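As a rough illustration of the count-before/count-after idea, here is a mongosh sketch (the thread itself uses mongocxx; the source collection and database names are assumptions, and this only detects newly inserted documents, not modifications):

```js
const target = db.getSiblingDB("mydb").syncs_locks;   // assumed database name
const before = target.countDocuments({});

db.getSiblingDB("mydb").source.aggregate([
  // ... the $redact / $cond stages from the pipeline above ...
  {
    $merge: {
      into: "syncs_locks",
      on: "_id",
      whenMatched: "fail",
      whenNotMatched: "insert"
    }
  }
]);

const after = target.countDocuments({});
print(`documents inserted by $merge: ${after - before}`);
```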
] | MongoCXX: Is it possible to know when an aggregation pipeline "stopped" or finished with a merge? | 2023-02-02T10:50:50.513Z | MongoCXX: Is it possible to know when an aggregation pipeline “stopped” or finished with a merge? | 1,122 |
null | [
"sharding"
] | [
{
"code": "",
"text": "Hi, I have 3 shards with having 2 replica sets(P+S+S) and one mongos and when i am going to perform sharding on one collection of size 5.6GB and after sharding we observed size and document count started increasing\nsuppose we have 1000 doc in shard 1 then after sharding when we connect to mongo shell count of doc showing 1500 and we found size is also consumed more than previous one\nDo we have any solution for that?",
"username": "Digvijay_Singh_Tomar"
},
{
"code": "",
"text": "The increased document count that you observe after sharding a collection is related to chunk balancing and orphaned dociments.As your sharded cluster starts to balance chunks, documents will be moved between shards, for example, from shard 0 to shard 1.A copy of documents from shard 0 will be moved over to shard 1, and the original documents on shard 0 will be marked for deletion, also known as orphaned documents.Depending on your cluster tier, and how busy it is, these orphaned documents will be removed in the future. Sometimes this can happen around 24 hours after the chunk migration.You can read more about orphaned documents here:Note that the orphaned document clean up is automated, so there’s no need to intervene.If orphaned documents are having a negative effect on your cluster, you can try pausing the balancer, or setting a balancing window to throttle how many orphaned documents are being produced.",
"username": "Eamon_Scullion"
},
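A few mongosh commands that go with the advice above - pausing the balancer, restricting it to a window, and counting documents in a way that excludes orphans. Treat this as a sketch with placeholder database/collection names, run through mongos:

```js
// Pause / resume chunk migrations entirely
sh.stopBalancer()
sh.startBalancer()

// Or only allow balancing in a window, e.g. 02:00-06:00 server time:
use config
db.settings.updateOne(
  { _id: "balancer" },
  { $set: { activeWindow: { start: "02:00", stop: "06:00" } } },
  { upsert: true }
)

// countDocuments() runs a filtered count and therefore excludes orphaned
// documents, unlike the metadata-based counts shown by stats():
db.getSiblingDB("mydb").mycoll.countDocuments({})
```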
{
"code": "sh.rebalance> use mydb\n> db.orders.stats()\n\n> sh.rebalance(\"mydb.orders\")\n\n> db.orders.getIndexes()\n\n> db.orders.dropIndex(\"indexName\")\n\ndb.serverStatus()",
"text": "Hi, I have 3 shards with having 2 replica sets(P+S+S) and one mongos and when i am going to perform sharding on one collection of size 5.6GB and after sharding we observed size and document count started increasing\nsuppose we have 1000 doc in shard 1 then after sharding when we connect to mongo shell count of doc showing 1500 and we found size is also consumed more than previous one\nDo we have any solution for that?Sharding in MongoDB is a process of distributing data across multiple servers, which can improve performance and scalability. However, it’s possible to experience a growth in size and document count after sharding, as well as increased memory usage, as the database must manage more metadata.To address this issue, there are several steps you can take:It’s important to keep in mind that sharding can be a complex process, and it may require some trial and error to find the best configuration for your specific use case. If you’re still having issues, you may want to consider reaching out to the MongoDB community or professional services for additional guidance.Here’s an example of how you can resolve the issue of increased size and document count after sharding in MongoDB:Suppose you have a collection called “orders” that you want to shard. The collection is 5.6GB in size and contains 1000 documents. After sharding, you observe that the count of documents has increased to 1500 and the size of the collection has also grown.To resolve this issue, you can follow these steps:This will redistribute the data evenly across the shards, reducing the metadata overhead and the size of the collection.If you have any unused or redundant indexes, you can remove them by running the following command:By following these steps, you can resolve the issue of increased size and document count after sharding in MongoDB and maintain the performance and scalability of your database.",
"username": "Sumanta_Mukhopadhyay"
}
] | Sharding increases count and size of collection, what to do? | 2023-01-24T10:09:18.536Z | Sharding increases count and size of collection, what to do? | 1,699 |
null | [
"aggregation",
"queries",
"dot-net",
"compass"
] | [
{
"code": "BsonDocument pipelineStage1 = new BsonDocument{\n {\n \"$match\", new BsonDocument{\n { \"companyId\", \"12345\" }\n }\n }\n};\n\nBsonDocument pipelineStage2 = new BsonDocument{\n {\n \"$group\", new BsonDocument {\n\t\t\t{ \"_id\",\n\t\t\t\tnew BsonDocument\n\t\t\t\t{\n\t\t\t\t\t{ \"osDescription\", \"$osDescription\" },\n\t\t\t\t\t{ \"osVersion\", \"$osVersion\" }\n\t\t\t\t} },\n\t\t\t{ \"total\",\n\t\t\t\tnew BsonDocument(\"$sum\", 1) }\n \n }\n};\n\nBsonDocument[] pipeline = new BsonDocument[] { \n pipelineStage1,\n pipelineStage2\n};\n\nList<BsonDocument> pResults = myCollection.Aggregate<BsonDocument>(pipeline).ToList();\n[\n {\n $match: { companyId: \"12345\" }\n },\n {\n $group: { _id: { osDescription: \"$osDescription\", osVersion: \"$osVersion\" }, total: { $sum: \"1\" } }\n }\n]\n",
"text": "Is there a way to write portable aggregation queries in dotnet core? I have some aggregation queries written in C# using Raw BsonDocument Stages, but I cant test them without debugging the whole app. I mean, Is there a way I can write my aggregation query in C# in a way that I can copy/paste it into MongoDb Shell or Compass or VSCode extension or somewhere and test it?For example, how can I test the below aggregation query without debugging my dotnet core app?I’d like to write portable native MongoDb aggregation queries in C# like this:Then I would be able to copy/paste it into Compass, or VSCode Extension and test itThanks",
"username": "Javier123"
},
{
"code": "BsonDocumentsmongoshToString()var query = myCollection.Aggregate<BsonDocument>(pipeline);\nConsole.WriteLine(query.ToString());\n",
"text": "Hi, @Javier123,Welcome to the MongoDB Community Forums.I would generally recommend writing your aggregations and LINQ using strongly-typed POCOs. (POCO == plain old C# object) The .NET/C# Driver will do the heavy lifting for you of serializing/deserializing your POCOs into BSON on the wire. This has the additional advantage of making your code and queries type-safe and more easily refactorable.Regardless of whether you use POCOs or BsonDocuments, there are a few ways to accomplish your goal…First is to install the MongoDB Analyzer, which is a NuGet package. Once installed, your IDE will display a tooltip for each query, which will contain the rendered MQL. This is the same MQL that the driver will send to the server. You can also use this MQL in mongosh.Another option is to call ToString() on the query. This will return the resulting MQL.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "My approach differs from the one already presented.I keep my queries in the native mongosh/JS format inside a file. At run time, outside mongosh, in Java in particular, I read the file and get back a org.bson.Document. See Java query from shell script - #2 by steevej for more details.I am pretty sure BsonDocument has a parse method to do the same.",
"username": "steevej"
},
{
"code": "*.mongodb",
"text": "Thanks, that is similar to what I was doing. So I have a set of *.mongodb files with native mongodb aggregation queries that I can easily test from MongoDb Compass or MongoDb Shell or MongoDb VSCODE extension, then I can use MongoDb Compass to convert them to C# code.But I like the LINQ approach, although it is not portable as I wish since I need to debug the whole dotnet core microservice in order to test the query",
"username": "Javier123"
}
] | Portable aggregation query in dotnet core | 2023-02-01T17:51:18.614Z | Portable aggregation query in dotnet core | 816 |
null | [
"containers"
] | [
{
"code": "2023-02-02T18:19:58.094+0000 E QUERY [thread1] Error: listDatabases failed:{\n \"operationTime\" : Timestamp(1675361994, 1),\n \"ok\" : 0,\n \"errmsg\" : \"there are no users authenticated\",\n \"code\" : 13,\n \"codeName\" : \"Unauthorized\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1675361994, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"3m4e6DdOpo7rTKlGzJDj8+MnlMw=\"),\n \"keyId\" : NumberLong(\"7188273050037518337\")\n }\n }\n} :\n",
"text": "Doing a docker exec into the container but can’t list the DBs. Tried everything but can’t get it work. Any help?\n</>",
"username": "Dev_Engine"
},
{
"code": "",
"text": "I am guessing your config file enables authentication, but you forgot to add, at least, an admin user. when auth is enabled you can only access from localhost (inside the container) until you create the first user.",
"username": "Yilmaz_Durmaz"
},
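For reference, a minimal sketch of what creating that first admin user looks like from inside the container, using the localhost exception (the user name, password handling and role here are only examples):

```js
// docker exec -it <container> mongosh   (or the legacy mongo shell)
use admin
db.createUser({
  user: "admin",
  pwd: passwordPrompt(),                      // or a literal password string
  roles: [ { role: "root", db: "admin" } ]
})

// After the first user exists, authenticate explicitly:
//   mongosh --authenticationDatabase admin -u admin -p
```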
{
"code": "",
"text": "Hi @Dev_Engine,\nAs mentioned by the error, you are not authorized to do this operation!\nSo you need to authenticate before with a user which have the correct privileges. For example:mongosh --authenticationDatabase admin -u admin -p ***And then you can list database!I hope it is useful.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "yes, I’m trying to access inside the container (after doing docker exec -it…) and I continue to see this error.",
"username": "Dev_Engine"
},
{
"code": "",
"text": "@Fabio_Ramohitaj , I’ve already tried this but same error.",
"username": "Dev_Engine"
},
{
"code": "",
"text": "alright then, second round: after the first user is created, you cannot just do whatever you want without proper credentials and assigned roles now, to understand how you ended up with this error, please share how you enabled auth, how you added users, which user is logged in to which database, and how you logged in.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Mongodb is running in a k8 container. So I figured that there is no way to do it within the container itself. I had to recreate the container or go to etcd to get the settings.Thanks for the responses, truly appreciate!",
"username": "Dev_Engine"
},
{
"code": "",
"text": "the equivalent command for k8 is “kubectl exec”. but the important part is that many things are still in your configuration files (pod,deployment,service etc). so also check them. keep in mind, if you are doing too many things inside the container/pod manually, then you are not following the repetable installation logic.by the way, did that recration solve your problem?",
"username": "Yilmaz_Durmaz"
}
] | "Unauthorized" error in a docker container | 2023-02-02T18:36:37.560Z | “Unauthorized” error in a docker container | 1,335 |
null | [
"aggregation",
"java"
] | [
{
"code": " def shopInfringementsNoPartner(String fromDate, String toDate, Long clientId) {\n Client client = Client.get(clientId)\n \n // GMongo query\n def match = ['date.d' : [$gte: fromDate, $lte: toDate],\n 'product.cl' : clientId,\n 'platform.id' : [$in: client.platforms.id],\n \"shop.inf.${clientId}\" : [$exists: true],\n \"shop.cl_pt.${clientId}\": [$exists: false],\n 'product.a' : true]\n\n def result = facts().aggregate(\n [$match: match],\n [$group: [_id : [shop_id: '$shop.id'], // I think this line is the groupBy\n count: [$sum: 1],\n n : [$first: '$shop.n'],\n dn : [$first: '$shop.dn']]]\n ).results()\n // Mongo DB Java Sync Driver API query\n final List<Bson> aggregationPipeline = asList(\n Aggregates.match(Filters.and(Filters.gte(\"date.d\", fromDate), Filters.lte(\"date.d\", toDate))),\n Aggregates.match(Filters.eq(\"product.cl\", clientId)),\n Aggregates.match(Filters.in(\"platform.id\", platformIds)),\n Aggregates.match(Filters.exists(\"product.cl\", true)),\n Aggregates.match(Filters.exists(\"product.cl\", false)),\n Aggregates.match(Filters.eq(\"product.cl\", true)),\n Aggregates.group(null,\n asList(\n // not sure what to put here\n Accumulators.sum(\"count\",1),\n Accumulators.first(\"n\",\"$shop.n\"),\n Accumulators.first(\"dn\",\"$shop.dn\")\n )\n )\n );\nAccumulators.groupBy()",
"text": "Greetings all,Apologies if this has already been answered somewhere but I’m struggling to find a solution to what should be a very simple GROUP BY requirement using the MongoDB Java Sync Driver.I am upgrading MongoDB in an aging tech stack from 3.4 to 5.0.13 in order to fully support a Grails 5 upgrade.Previously this application was using GMongo, which is a Groovy wrapper around the old Mongo Java Driver that allowed you to write native-like queries. It hasn’t been updated since 2016 and doesb’t support the new driver API, hence the need to upgrade.Anyway I have the following Group stage written in GMongo format which I’m trying to convert to MongoDB Sync Driver API format:You can see during the Group stage that there is a groupBy (I think) on “$shop.id”.I have the following code written in the new Java API:And someone suggested that to do a GROUP BY using this API, I would need to do Accumulators.groupBy()Problem with this suggestion is, there is no such .groupBy() method anywhere in the Java Sync driver API that I can see. I’m actually having a lot of trouble figuring out how to do this one simple thing.Can anyone explain how I would do a GROUP BY using the new Java API in order to replicate the original GMongo logic?",
"username": "Dale_Culpin"
},
{
"code": "",
"text": "Can anybody help me please?Is it just not possible to do a Group By using this API?",
"username": "Dale_Culpin"
},
{
"code": "asList(\n // not sure what to put here\n Accumulators.sum(\"count\",1),\n Accumulators.first(\"n\",\"$shop.n\"),\n Accumulators.first(\"dn\",\"$shop.dn\")\n )\nAggregates.group(nullAggregates.group(\"$shop.id\"_id : [shop_id: '$shop.id']",
"text": "I do not use the aggregation’s builders. I prefer to keep my queries in plain JSON format and then use Document.parse(). Mainly because I switch often between Java, node, Compass and mongosh.But you could try to write your aggregation in Compass and then use the Export feature to get the builders’ version.But with my little knowledge about builders and 0 knowledge of GMongo, the following looks okayButAggregates.group(nullmight need to be something likeAggregates.group(\"$shop.id\"in order to match_id : [shop_id: '$shop.id']With null you probably group everything together rather than by store.I also thing that calling Aggregates.match many times will create a lot of $match stage rather than a single one.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks Steeve, I’ll test out a couple of things you have suggested.As an aside Steeve, when you parse and execute the queries you keep in native JSON format, are you using Java?Can you give me an example of how you do this please?",
"username": "Dale_Culpin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Cannot figure out how to do a GROUP BY using the MongoDB Java Sync driver API | 2023-02-01T16:59:59.752Z | Cannot figure out how to do a GROUP BY using the MongoDB Java Sync driver API | 903 |
null | [] | [
{
"code": "",
"text": "Hi,Trying to find a way to be able to query data lakes across multiple projects, and so far couldn’t find a solution for this “issue”. Is it possible at all to add data lake sources to a virtual db that are deployed across multiple projects, but in the same organization?\nI assume due to the database access management this could be tricky, but figured I would ask regardless. Not exactly a show stopper, would be more of a QoL improvement on our end.",
"username": "Attila_Pinter"
},
{
"code": "",
"text": "Hey Attila,Thanks for asking this! Unfortunately right now you can’t query across projects using Atlas Data Federation, and it’s exactly for the reason you mentioned. The database access management model we have today does not really allow for this. That said, it’s something we’re actively looking into as a future enhancement.Would you be able to share more about your use case? It will help us as we prioritize working on this functionality.-Ben",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Deploying federated database using data lake sources from multiple projects | 2023-02-03T04:29:51.428Z | Deploying federated database using data lake sources from multiple projects | 680 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "// review.js\nimport mongoose from \"mongoose\";\n\nconst schema = new mongoose.Schema(\n {\n productId: mongoose.Schema.Types.ObjectId,\n email: String,\n name: String,\n review: String,\n rating: Number,\n gravatar: String\n },\n { timestamps: true, strict: true, strictQuery: true }\n);\n\nexport default mongoose.model(\"Reviews\", schema);\n// product.js\nconst schema = new mongoose.Schema(\n {\n name: String,\n test: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Reviews' }],\n },\n { timestamps: true, strict: true, strictQuery: true }\n);\n\nexport default mongoose.model(\"products\", schema);\nimport { Product } from \"../database/models/index\"\n\nconst product = await Product.findById(productId).populate('test');\n{\n test: [],\n _id: '63d5ca000a0f69dc24ac8f7c',\n name: 'Tydle Tie Down Straps',\n",
"text": "It appears I have also fallen victim to the all might populate issue.For me I just get an empty array (I do have data in the collection).Result:Any help would be great.",
"username": "Trent_Mackness"
},
{
"code": "",
"text": "I figured this out. I didn’t store the ID in the test array that referenced the review.",
"username": "Trent_Mackness"
},
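In other words, populate() can only resolve references that were actually written to the array. A hypothetical Mongoose sketch based on the schemas above (assuming the review model is imported as Reviews and a product document is already at hand):

```js
// Create the review, then store its _id on the product so populate() has
// something to resolve.
const review = await Reviews.create({
  productId: product._id,
  email: "user@example.com",
  name: "Alice",
  review: "Great product",
  rating: 5,
});

await Product.updateOne(
  { _id: product._id },
  { $push: { test: review._id } }
);

const populated = await Product.findById(product._id).populate("test");
// populated.test now holds full review documents instead of an empty array
```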
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Mongoose populate not working? | 2023-02-03T12:56:24.681Z | Mongoose populate not working? | 1,090 |
[
"replication",
"mongodb-shell"
] | [
{
"code": "",
"text": "\nmongosh error1353×798 59.6 KB\nplease help me resolve this",
"username": "RAJYAVARDHAN_SINGH_RATHORE"
},
{
"code": "",
"text": "Just chande mongosh by mongod",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "sorry, you are already inside the instance so you can use the command rs.initiate(), and then add the necessary members with rs.add()",
"username": "Leandro_Domingues"
},
{
"code": "[root@com ~]# mongod --logpath \"path_for_log\" --dbpath \"path_for_data\" --replSet \"name\" --bind_ip 0.0.0.0 --fork etc...\n",
"text": "Hi @RAJYAVARDHAN_SINGH_RATHORE ,\nyou’ re trying to start a replica set inside the mongoshell, with the mongosh command and is wrong!\nYou need to do something like that (for example in linux):I hope is useful!Regards",
"username": "Fabio_Ramohitaj"
}
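Putting the two replies together, a minimal sketch: start mongod with --replSet as shown above, then initiate the set from the shell as suggested earlier (the set name and host values here are placeholders):

```js
// Inside mongosh, connected to the freshly started mongod:
rs.initiate({
  _id: "name",                                // must match the --replSet value
  members: [ { _id: 0, host: "localhost:27017" } ]
})

// Add further members later if needed:
rs.add("localhost:27018")

// Check that the member reaches PRIMARY:
rs.status()
```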
] | I am not able to create replica set on localhost | 2023-02-03T12:24:08.622Z | I am not able to create replica set on localhost | 765 |
|
null | [
"aggregation"
] | [
{
"code": "{\n \"_id\":\"63ca455cb7dd228bcb3d85e5\"\n \"name\": \"Alex\",\n \"phoneNumber\": \"+235946546654\",\n \"level\": 5,\n \"workByCateg\": [\n {\n \"workId\": \"741qaz852wsx963edc\",\n \"customWork\": [\n {\n \"dName\": \"Status\",\n \"type\": \"options\",\n \"value\": \"Others\"\n },\n {\n \"dName\": \"Appointment\",\n \"type\": \"dateTime\",\n \"value\": 1324645625\n },\n {\n \"dName\": \"Notes\",\n \"type\": \"status\",\n \"value\": \"hi hello how are you..\"\n }\n ]\n },\n {\n \"workId\": \"123qaz456wsx789edc\",\n \"customWork\": [\n {\n \"dName\": \"Status\",\n \"type\": \"options\",\n \"value\": \"Work Done\"\n },\n {\n \"dName\": \"Appointment\",\n \"type\": \"dateTime\",\n \"value\": 1326546546\n },\n {\n \"dName\": \"Cus-Field\",\n \"type\": \"multi-options\",\n \"value\": [\"opt1\", \"opt2\"]\n }\n ]\n }\n ]\n }\n[\n {\n $match: <my_query>\n },\n {\n $unwind: \"$workByCateg\",\n },\n {\n $set: {\n workByCateg: {\n processId: \"$workByCateg.workId\",\n customWork: {\n $map: {\n input: \"$workByCateg.customWork\",\n as: \"theField\",\n in: {\n name: \"$$theField.dName\",\n value: \"$$theField.value\",\n },\n },\n },\n },\n },\n },\n {\n $addFields: {\n \"workByCateg.customWork\": {\n $ifNull: [\"$customWork\", \"$$REMOVE\"],\n },\n },\n },\n {\n $group: {\n _id: \"$_id\",\n name: { $last: \"$name\" },\n phoneNumber: { $last: \"$phoneNumber\" },\n level: { $last: \"$level\" },\n workByCateg: { $push: \"$workByCateg\" },\n },\n },\n ]\n",
"text": "I wanted to rename field inside a nested array of objects which looks like below:I want to retain this structure, but do some changes like the following, on the result:I don’t want to make changes to the database.The following is the aggregation that I use:Here I’m doing unwind first, to change field names, and then im grouping it again to club all “customWork” together under “workByCateg” field.\nI feel like the aggregation that i use is an overkill and thinking about performance issues. Any straight forward approach for this?Any help would be appreciated. Thank you for your time.",
"username": "Sooraj_S"
},
{
"code": "customWorkdb.workers.updateMany(\n {},\n [\n {\n $set: {\n \"workByCateg\": {\n $map: {\n input: \"$workByCateg\",\n as: \"work\",\n in: {\n workId: \"$$work.workId\",\n customWork: {\n $map: {\n input: \"$$work.customWork\",\n as: \"custom\",\n in: {\n name: \"$$custom.dName\",\n value: \"$$custom.value\"\n }\n }\n }\n }\n }\n }\n }\n }\n ]\n);\n",
"text": "customWorkHi @Sooraj_S,\nI believe we can take the following approach:It would be good to test performance on a larger collection, and even adopt some kind of filter and run updateMany in batches.See if it makes sense and let me know!",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "Hi, thanks for taking the time and replying.\nI’m sorry that i didn’t mention that I don’t want to update the documents. I just want to modify it for the response.This one updates the docs in the collection right? @Leandro_Domingues",
"username": "Sooraj_S"
},
{
"code": "",
"text": "Hi, thanks for taking the time and replying.\nI’m sorry that i didn’t mention that I don’t want to update the documents. I just want to modify it for the response.This one updates the docs in the collection right?Actually I believe I was the one who made a mistake, you really mentioned that you didn’t want to change the documents in the database, I’m sorry… Yes this script changes the documents in the collection.Let me see if I can think of something.",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "Hi @Leandro_Domingues ,\nThanks, this one really helped.Now, I’m really sorry for not testing out the aggregation method you mentioned.\nI used your suggestion inside aggregation, rather than in updateMany, and it worked.I request you to, if possible, edit your answer and update “updateMany” to “aggreate”, so that it helps others in need.\nI’m marking your answer as the solution.Thanks again. Cheers!",
"username": "Sooraj_S"
},
{
"code": "db.workers.aggregate(\n [\n {\n $set: {\n \"workByCateg\": {\n $map: {\n input: \"$workByCateg\",\n as: \"work\",\n in: {\n workId: \"$$work.workId\",\n customWork: {\n $map: {\n input: \"$$work.customWork\",\n as: \"custom\",\n in: {\n name: \"$$custom.dName\",\n value: \"$$custom.value\"\n }\n }\n }\n }\n }\n }\n }\n }\n ]\n);\n",
"text": "Sure!Here it goes, now without any changes to the database, using the aggregate to return the documents in the desired format.",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to rename a field that is nested inside an array of objects deep without compromising on performance? | 2023-02-02T09:06:50.234Z | How to rename a field that is nested inside an array of objects deep without compromising on performance? | 1,412 |
null | [
"aggregation"
] | [
{
"code": "{\n\t\"relatedJobs\": [\n\t\t{\n\t\t\t\"completedAt\": \"2022-09-02\",\n\t\t\t\"jobId\": \"muDYtPWUkFC4555anxHu\",\n\t\t\t\"weight\": \"2009\",\n\t\t\t\"jobType\": \"RECEPTION_JOB\",\n\t\t\t\"partnerId\": \"17\"\n\t\t}\n\t]\n}\n{\n \"firebaseId\": \"muDYtPWUkFC4555anxHu\",\n \"completedAt\": \"2022-09-02\",\n \"createdAt\": \"2022-09-02\",\n \"dueAt\": \"2022-09-02\",\n \"jobType\": \"RECEPTION_JOB\",\n \"partnerId\": \"17\",\n \"relatedBoxes\": [...],\n \"status\": \"DONE\"\n }\n",
"text": "I have recently started working with MongoDB and I have a task which is giving me a bit of a headache.\nI have a boxes collection with many attributes but this is the one I need right now:And I have a jobs collection again with many more attributes than these:These data were migrated over from a Firestore DB, that’s why the id is called firebaseId. What I’d like to achieve is to update the boxes.relatedJobs so their jobId contains the _id attribute of the corresponding job document and then drop the firebaseId altogether. $unsetting the firebaseId is not an issue, but how do I update that one attribute in my array?\nThanks",
"username": "K_Cs"
},
{
"code": "{ \n \"_id\": ObjectId('63da520f6cd602b05233cc4b'),\n\t \"relatedJobs\": [\n\t\t{\n\t\t\t\"completedAt\": \"2022-09-02\",\n\t\t\t\"jobId\": \"muDYtPWUkFC4555anxHu\",\n\t\t\t\"weight\": \"2009\",\n\t\t\t\"jobType\": \"RECEPTION_JOB\",\n\t\t\t\"partnerId\": \"17\"\n\t\t}\n\t]\n}\n_idboxesjobsrelatedJobs.jobIdjobsfirebasedIdboxesrelatedJobs",
"text": "Hi @K_Cs,Welcome to the MongoDB Community forums I have a boxes collection with many attributes but this is the one I need right now:I assume, your boxes collection is looking like this:their jobId contains the _id attribute of the corresponding job documentI noticed that the _id field is missing from the sample provided. Could you kindly provide additional examples or specify the field you would like to use for the replacement? Additionally, it would be helpful if you could provide the expected output based on the sample documents.If I may suggest, utilizing a driver and a language that you are comfortable with may make the process simpler unless it is mandatory to use MongoDB’s MQL. Could you please let me know if MongoDB’s MQL is a requirement in this scenario?Also, for clarity, is the connection between the job document in the boxes collection and the corresponding job in the jobs collection determined by the match between the relatedJobs.jobId field and the jobs collection’s firebasedId field?Lastly, I observed that the example in the boxes collection only includes one job, but I assume that the relatedJobs field may contain multiple entries. Please correct me if my assumption is incorrect.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "[{\n \"_id\": {\n \"$oid\": \"63da3c1feac7af351445caad\"\n },\n < many more fields >\n \"relatedJobs\": [\n {\n \"partnerId\": \"41\",\n \"completedAt\": \"2022-09-13\",\n \"jobType\": \"RECEPTION_JOB\",\n \"weight\": 1972,\n \"jobId\": \"tCDcpp4K1JFfsXju6LYQ\"\n },\n {\n \"jobId\": \"TDfrqods85bQYECOa3n3\",\n \"partnerId\": \"13\",\n \"jobType\": \"DELIVERY_JOB\",\n \"completedAt\": \"2023-01-18\",\n \"weight\": 23060\n }\n ]\n}]\n[{\n \"_id\": {\n \"$oid\": \"63da2b01eac7af351445c946\"\n },\n \"firebaseId\": \"TDfrqods85bQYECOa3n3\",\n \"completedAt\": \"2023-01-18\",\n \"createdAt\": \"2023-01-18\",\n \"dueAt\": \"2023-01-18\",\n \"partnerId\": \"13\",\n \"startedAt\": \"2023-01-18\",\n \"status\": \"DONE\",\n \"jobType\": \"DELIVERY_JOB\",\n < many more fields >\n},{\n \"_id\": {\n \"$oid\": \"63da2b01eac7af351445ca57\"\n },\n \"firebaseId\": \"tCDcpp4K1JFfsXju6LYQ\",\n \"completedAt\": \"2022-09-13\",\n \"createdAt\": \"2022-09-13\",\n \"dueAt\": \"2022-09-13\",\n \"partnerId\": \"41\",\n \"startedAt\": \"2022-09-13\",\n \"status\": \"DONE\",\n \"jobType\": \"RECEPTION_JOB\",\n < many more fields >\n}]\njobIdrelatedJobsfirebaseId_id[{\n \"_id\": {\n \"$oid\": \"63da3c1feac7af351445caad\"\n },\n < many more fields >\n \"relatedJobs\": [\n {\n \"partnerId\": \"41\",\n \"completedAt\": \"2022-09-13\",\n \"jobType\": \"RECEPTION_JOB\",\n \"weight\": 1972,\n \"jobId\": {\n\t\t \"$oid\": \"63da2b01eac7af351445ca57\"\n\t\t }\n },\n {\n \"jobId\": {\n\t\t \"$oid\": \"63da2b01eac7af351445c946\"\n\t\t },\n \"partnerId\": \"13\",\n \"jobType\": \"DELIVERY_JOB\",\n \"completedAt\": \"2023-01-18\",\n \"weight\": 23060\n }\n ]\n}]\nconst db = db.getSiblingDB(\"database_name\");\nconst boxes = db.boxes;\nconst jobs = db.jobs;\nboxes.updateMany( {}, [\n\t{ $set: {\n\t\tfillDate: { $dateFromString: { dateString: \"$fillDate\", format: \"%Y-%m-%d\" } },\n\t\texpirationDate: { $dateFromString: { dateString: \"$expirationDate\", format: \"%Y-%m-%d\" } }\n\t} }\n] );\n",
"text": "Hi @Kushagra_Kesav,Here is an exported box document which has multipe relatedJobs:And this is the 2 jobs related to it:Currently the jobId in a relatedJobs array element contains the firebaseId from the job and that’s my only connection to it. I want to replace that obsolete reference with the job’s _id which is the ObjectId while preserving everything else as it was. The updated box document should look similar to this:My current approach is to write JavaScript for it where I already converted some strings to date:Please let me know if there is any more info I should provide.\nThanks",
"username": "K_Cs"
},
{
"code": "var boxesWithJobs = boxes.aggregate( [\n\t{\n\t\t$lookup: {\n\t\t\tfrom: \"jobs\",\n\t\t\tlocalField: \"relatedJobs.jobId\",\n\t\t\tforeignField: \"firebaseId\",\n\t\t\tas: \"relatedJobsWithIds\"\n\t\t}\n\t}\n] ).toArray();\n\nboxesWithJobs.forEach(boxWithJobs => {\n\tboxWithJobs.relatedJobs.forEach(relatedJob => {\n\t\tvar job = boxWithJobs.relatedJobsWithIds.find(rj => rj.firebaseId === relatedJob.jobId);\n\t\trelatedJob.jobId = job._id;\n\t});\n\tdelete boxWithJobs.relatedJobsWithIds;\n});\n\nboxes.drop();\nboxes.insertMany(boxesWithJobs);\n",
"text": "I solved it by returning everything from the lookup and doing the changes in JS:Might not be the best or most beautiful solution but it gets the job done (pun intended).",
"username": "K_Cs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update array element from $lookup | 2023-02-01T11:44:29.180Z | Update array element from $lookup | 918 |
null | [
"node-js"
] | [
{
"code": " name: req.body.person_name,\n position: req.body.person_position,\n level: req.body.person_level,\n name: req.body.name,\n position: req.body.position,\n level: req.body.level,\n$set: {\n person_name: req.body.person_name,\n person_position: req.body.person_position,\n person_level: req.body.person_level,\n },\n$set: {\n name: req.body.name,\n position: req.body.position,\n level: req.body.level,\n },\n",
"text": "Hello, when I trying the How To Use MERN Stack: A Complete Guide | MongoDB I ran into having “null” for all my entries.I believe in the “mern/server/routes/record.js” section’s code all the variablesshould bealso forshould befor “recordRoutes.route(”/update/:id\")\" functionPlease correct me if I am wrong.",
"username": "Jonathan_Aghachi"
},
{
"code": "",
"text": "I’m just here to agree—I kept getting null values (other than the _ids) until I made the above change. Thanks for posting this.",
"username": "Jessica_Gallagher"
},
{
"code": "",
"text": "Nice tutorial…Thanks for posting. MERN stack training in Pune",
"username": "Pooja_Kapadia"
}
] | Mern Stack Tutorial ("null" enteries) | 2022-03-04T00:21:49.462Z | Mern Stack Tutorial (“null” enteries) | 3,138 |
null | [
"database-tools",
"backup"
] | [
{
"code": "mongodump --archive --gzip --db=someDistantDB | mongorestore --archive --gzip --nsFrom='someLocalDB.*' --nsTo='someLocalDB.*'\n",
"text": "Hi,In order to import db on my dev team computers i’m editing a command line to copy a portion of a production database. It mostly looks like this:In order to increase perfomance of the command, and save network data usage, it is important that data transfered from my distantDB is gziped on the distant server before the transfert on my local computer.I didn’t found any information on how the --gzip parameter work with distant database, so here is my questions:",
"username": "Johan_Maupetit"
},
{
"code": "",
"text": "From tests i did i have the answer to the first answer => The data are flat transfered and gziped locally",
"username": "Johan_Maupetit"
},
{
"code": "",
"text": "Hi Johan,As you need to restore on several different computers, I think it’s best that you run mongodump on one of these computers, or if possible on the MongoDB server and from there transfer the file to a common location on your network. You can then point mongorestore to that file on each computer that needs restoring.Do you believe it makes sense?",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "Hi @Leandro_Domingues thank you for your answer. This is actually what we are doing for now, and i was working on a lighter solution (that’s why i tested the pipe solution).Generating archive on server have some drawbacks:Either need to give an ssh access to users so they can generate their dump, or need to make a webService which generate it. (We got on the second solution)With the second solution we have some issues. Mongodump take some time on our database, for big exports it can easily take more than 2 minutes … this is hard to monitor/manage with a web server. For instance how to be sure their is no 2 process running at the same time. How to know the progression of the process. And how to interrupt the export process.Generated files need to be clean.With the pipe solutions i find it’s lighter ^^",
"username": "Johan_Maupetit"
},
{
"code": "ssh [email protected] sh << 'EOF' | mongorestore --archive --gzip --drop --nsFrom='remoteDb.*' --nsTo='localDb.*' --host=\"127.0.0.1:27017\"\n mongodump --gzip --ssl --uri='mongodb://127.0.0.1:27017/remoteDb' --archive 2>/dev/null\nEOF\n",
"text": "Hi @Leandro_Domingues i found a temporary workaround which needs users to have an ssh access =>Data are gziped when transferring over HTTP. However this look a bit hacky isn’t it ? ",
"username": "Johan_Maupetit"
},
{
"code": "",
"text": "I oppened a bug / feature request. You may want to follow this here https://jira.mongodb.org/browse/TOOLS-3240 ",
"username": "Johan_Maupetit"
}
] | Mongodump gzip distant database | 2023-02-02T10:04:10.481Z | Mongodump gzip distant database | 1,492 |
null | [] | [
{
"code": "{\"t\":{\"$date\":\"2021-02-25T02:12:58.430+04:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\n\n\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.430+04:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): FileRenameFailed: Access is denied\\nActual exception type: class mongo::error_details::ExceptionForImpl<37,class mongo::AssertionException>\\n\"}}\n\n\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE: {bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"7FF6B562AAE3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":349,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"},{\"a\":\"7FF6B562C97E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":256,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"12E\"},{\"a\":\"7FF6B56EFFC7\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"},{\"a\":\"7FF6B56EFFA9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"},{\"a\":\"7FFD4391DFF8\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"},{\"a\":\"7FFD34AB1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"},{\"a\":\"7FFD34AB232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"},{\"a\":\"7FFD34AB40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"},{\"a\":\"7FF6B58CD9B4\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"},{\"a\":\"7FFD46BC468F\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"},{\"a\":\"7FFD46B24BEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"},{\"a\":\"7FFD46B289E6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"},{\"a\":\"7FFD43469149\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"},{\"a\":\"7FFD35DA6210\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"},{\"a\":\"7FF6B5693A01\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/ec2715fda96eb965d8ebeac00b80092c/src/build/opt/mongo/base/error_codes.cpp\",\"line\":1877,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"421\"},{\"a\":\"7FF6B5637323\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":256,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"1B3\"},{\"a\":\"7FF6B457F66E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5DE\"},{\"a\":\"7FF6B457EF2C\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> 
>,0>\",\"s+\":\"2C\"},{\"a\":\"7FFD438D1FFA\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"},{\"a\":\"7FFD441281F4\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}]}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B562AAE3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":349,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B562C97E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":256,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"12E\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B56EFFC7\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B56EFFA9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD4391DFF8\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD34AB1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD34AB232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD34AB40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B58CD9B4\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD46BC468F\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD46B24BEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD46B289E6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD43469149\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD35DA6210\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B5693A01\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/ec2715fda96eb965d8ebeac00b80092c/src/build/opt/mongo/base/error_codes.cpp\",\"line\":1877,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"421\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B5637323\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":256,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"1B3\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.609+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B457F66E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5DE\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.610+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B457EF2C\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> >,0>\",\"s+\":\"2C\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.610+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD438D1FFA\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.610+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD441281F4\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.610+04:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23134, \"ctx\":\"ftdc\",\"msg\":\"Unhandled exception\",\"attr\":{\"exceptionString\":\"0xE0000001\",\"addressString\":\"0x00007FFD43469149\"}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.610+04:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23136, \"ctx\":\"ftdc\",\"msg\":\"*** stack trace for unhandled exception:\"}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.611+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE: 
{bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"7FFD43469149\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"},{\"a\":\"7FF6B562C169\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":97,\"s\":\"mongo::`anonymous namespace'::endProcessWithSignal\",\"s+\":\"19\"},{\"a\":\"7FF6B562C98D\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":257,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"13D\"},{\"a\":\"7FF6B56EFFC7\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"},{\"a\":\"7FF6B56EFFA9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"},{\"a\":\"7FFD4391DFF8\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"},{\"a\":\"7FFD34AB1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"},{\"a\":\"7FFD34AB232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"},{\"a\":\"7FFD34AB40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"},{\"a\":\"7FF6B58CD9B4\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"},{\"a\":\"7FFD46BC468F\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"},{\"a\":\"7FFD46B24BEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"},{\"a\":\"7FFD46B289E6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"},{\"a\":\"7FFD43469149\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"},{\"a\":\"7FFD35DA6210\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"},{\"a\":\"7FF6B5693A01\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/ec2715fda96eb965d8ebeac00b80092c/src/build/opt/mongo/base/error_codes.cpp\",\"line\":1877,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"421\"},{\"a\":\"7FF6B5637323\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":256,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"1B3\"},{\"a\":\"7FF6B457F66E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5DE\"},{\"a\":\"7FF6B457EF2C\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> >,0>\",\"s+\":\"2C\"},{\"a\":\"7FFD438D1FFA\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"},{\"a\":\"7FFD441281F4\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}]}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD43469149\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B562C169\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":97,\"s\":\"mongo::`anonymous namespace'::endProcessWithSignal\",\"s+\":\"19\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B562C98D\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":257,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"13D\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B56EFFC7\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B56EFFA9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD4391DFF8\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD34AB1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD34AB232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD34AB40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B58CD9B4\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"}}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD46BC468F\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"}}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD46B24BEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"}}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD46B289E6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"}}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD43469149\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"}}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD35DA6210\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"}}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B5693A01\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/ec2715fda96eb965d8ebeac00b80092c/src/build/opt/mongo/base/error_codes.cpp\",\"line\":1877,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"421\"}}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B5637323\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":256,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"1B3\"}}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B457F66E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5DE\"}}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF6B457EF2C\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> >,0>\",\"s+\":\"2C\"}}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD438D1FFA\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"}}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFD441281F4\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}}}\n\n{\"t\":{\"$date\":\"2021-02-25T02:12:58.612+04:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23132, \"ctx\":\"ftdc\",\"msg\":\"Writing minidump diagnostic file\",\"attr\":{\"dumpName\":\"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\4.4\\\\bin\\\\mongod.2021-02-24T22-12-58.mdmp\"}}\n{\"t\":{\"$date\":\"2021-02-25T02:12:59.369+04:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23137, \"ctx\":\"ftdc\",\"msg\":\"*** immediate exit due to unhandled exception\"}\n",
"text": "Hello Dears,\nThe “mongod” service stopped after the following error displayed in log file,\nNotes : Operating system : windows server 2019 , Mongo version 4.4Thanks a lot",
"username": "Abdelrahman_N_A"
},
{
"code": "{\"t\":{\"$date\":\"2021-03-14T23:03:28.152+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:28.298+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): FileRenameFailed: Access is denied\\nActual exception type: class mongo::error_details::ExceptionForImpl<37,class mongo::AssertionException>\\n\"}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.797+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE: {bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"7FF62DEFE2F3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":349,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"},{\"a\":\"7FF62DF0018E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":256,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"12E\"},{\"a\":\"7FF62DFC3E07\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"},{\"a\":\"7FF62DFC3DE9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"},{\"a\":\"7FF95276DE58\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"},{\"a\":\"7FF9439F1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"},{\"a\":\"7FF9439F232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"},{\"a\":\"7FF9439F40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"},{\"a\":\"7FF62E19DA54\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"},{\"a\":\"7FF9559D41BF\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"},{\"a\":\"7FF955934BEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"},{\"a\":\"7FF9559389E6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"},{\"a\":\"7FF951B396C9\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"},{\"a\":\"7FF943D76220\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"},{\"a\":\"7FF62DF67721\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/828db7bca19a7173123cb605d6b56a03/src/build/opt/mongo/base/error_codes.cpp\",\"line\":1885,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"421\"},{\"a\":\"7FF62DF0AB33\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":256,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"1B3\"},{\"a\":\"7FF62CE2B6AD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5BD\"},{\"a\":\"7FF62CE2AF8C\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> 
>,0>\",\"s+\":\"2C\"},{\"a\":\"7FF95272268A\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"},{\"a\":\"7FF952C07974\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}]}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.797+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DEFE2F3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":349,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.797+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DF0018E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":256,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"12E\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DFC3E07\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DFC3DE9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"}}}\n",
"text": "Hi,\nWe’ve same issue here, did you find the root cause ?Windows Server 2019 / MongoDb 4.4ThanksSteve",
"username": "STEVE_HELDEBAUME1"
},
{
"code": "storage.dbPathsystem.logPathmongod.exe",
"text": "Welcome to the community forum @Abdelrahman_N_A and @STEVE_HELDEBAUME1.I added some formatting to make your logs more readable (see Formatting code and log snippets in posts), but both of your deployments appear to have a similar problem with file permissions.If the MongoDB process is unable to write to an expected file or path, the process will shut down. The relevant log message in both cases includes:“msg”:“Writing fatal message”,“attr”:{“message”:“DBException::toString(): FileRenameFailed: Access is denied\\nActual exception type: class mongo::error_details::ExceptionForImpl<37,class mongo::AssertionException>\\n”}}I expect there may be more context on the operation attempted in the log lines immediately preceding the “Writing fatal error” message, but the solution should be to ensure the file & directory permissions for your MongoDB storage.dbPath and system.logPath match the user you are running the mongod.exe process as.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "{\"t\":{\"$date\":\"2021-03-14T09:36:44.125+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn637\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:49697\",\"connectionId\":637,\"connectionCount\":26}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:28.152+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:28.298+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): FileRenameFailed: Access is denied\\nActual exception type: class mongo::error_details::ExceptionForImpl<37,class mongo::AssertionException>\\n\"}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.797+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE: {bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"7FF62DEFE2F3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":349,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"},{\"a\":\"7FF62DF0018E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":256,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"12E\"},{\"a\":\"7FF62DFC3E07\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"},{\"a\":\"7FF62DFC3DE9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"},{\"a\":\"7FF95276DE58\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"},{\"a\":\"7FF9439F1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"},{\"a\":\"7FF9439F232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"},{\"a\":\"7FF9439F40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"},{\"a\":\"7FF62E19DA54\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"},{\"a\":\"7FF9559D41BF\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"},{\"a\":\"7FF955934BEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"},{\"a\":\"7FF9559389E6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"},{\"a\":\"7FF951B396C9\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"},{\"a\":\"7FF943D76220\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"},{\"a\":\"7FF62DF67721\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/828db7bca19a7173123cb605d6b56a03/src/build/opt/mongo/base/error_codes.cpp\",\"line\":1885,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"421\"},{\"a\":\"7FF62DF0AB33\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":256,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"1B3\"},{\"a\":\"7FF62CE2B6AD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5BD\"},{\"a\":\"7FF62CE2AF8C\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual 
Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> >,0>\",\"s+\":\"2C\"},{\"a\":\"7FF95272268A\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"},{\"a\":\"7FF952C07974\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}]}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.797+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DEFE2F3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":349,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.797+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DF0018E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":256,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"12E\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DFC3E07\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DFC3DE9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF95276DE58\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF9439F1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF9439F232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF9439F40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62E19DA54\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF9559D41BF\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", 
\"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF955934BEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF9559389E6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF951B396C9\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF943D76220\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DF67721\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/828db7bca19a7173123cb605d6b56a03/src/build/opt/mongo/base/error_codes.cpp\",\"line\":1885,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"421\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DF0AB33\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":256,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"1B3\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62CE2B6AD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5BD\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62CE2AF8C\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> >,0>\",\"s+\":\"2C\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF95272268A\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.798+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF952C07974\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.800+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23134, \"ctx\":\"ftdc\",\"msg\":\"Unhandled exception\",\"attr\":{\"exceptionString\":\"0xE0000001\",\"addressString\":\"0x00007FF951B396C9\"}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.800+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23136, \"ctx\":\"ftdc\",\"msg\":\"*** stack trace for unhandled exception:\"}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE: 
{bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"7FF951B396C9\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"},{\"a\":\"7FF62DEFF979\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":97,\"s\":\"mongo::`anonymous namespace'::endProcessWithSignal\",\"s+\":\"19\"},{\"a\":\"7FF62DF0019D\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":257,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"13D\"},{\"a\":\"7FF62DFC3E07\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"},{\"a\":\"7FF62DFC3DE9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"},{\"a\":\"7FF95276DE58\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"},{\"a\":\"7FF9439F1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"},{\"a\":\"7FF9439F232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"},{\"a\":\"7FF9439F40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"},{\"a\":\"7FF62E19DA54\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"},{\"a\":\"7FF9559D41BF\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"},{\"a\":\"7FF955934BEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"},{\"a\":\"7FF9559389E6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"},{\"a\":\"7FF951B396C9\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"},{\"a\":\"7FF943D76220\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"},{\"a\":\"7FF62DF67721\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/828db7bca19a7173123cb605d6b56a03/src/build/opt/mongo/base/error_codes.cpp\",\"line\":1885,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"421\"},{\"a\":\"7FF62DF0AB33\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":256,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"1B3\"},{\"a\":\"7FF62CE2B6AD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5BD\"},{\"a\":\"7FF62CE2AF8C\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> >,0>\",\"s+\":\"2C\"},{\"a\":\"7FF95272268A\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"},{\"a\":\"7FF952C07974\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}]}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF951B396C9\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DEFF979\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":97,\"s\":\"mongo::`anonymous namespace'::endProcessWithSignal\",\"s+\":\"19\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DF0019D\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":257,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"13D\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DFC3E07\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DFC3DE9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF95276DE58\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF9439F1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF9439F232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF9439F40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62E19DA54\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.802+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF9559D41BF\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.803+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF955934BEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.803+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"7FF9559389E6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.803+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF951B396C9\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.803+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF943D76220\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.803+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DF67721\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/828db7bca19a7173123cb605d6b56a03/src/build/opt/mongo/base/error_codes.cpp\",\"line\":1885,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"421\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.803+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62DF0AB33\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":256,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"1B3\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.803+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62CE2B6AD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5BD\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.803+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF62CE2AF8C\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> >,0>\",\"s+\":\"2C\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.803+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF95272268A\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.803+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF952C07974\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.804+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23131, \"ctx\":\"ftdc\",\"msg\":\"Failed to open minidump file\",\"attr\":{\"dumpName\":\"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\4.4\\\\bin\\\\mongod.2021-03-14T22-03-41.mdmp\",\"error\":\"Access is denied.\"}}\n{\"t\":{\"$date\":\"2021-03-14T23:03:41.804+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23137, \"ctx\":\"ftdc\",\"msg\":\"*** immediate exit due to unhandled exception\"}\n",
"text": "Hi Stennie,Thanks for your answer.Unfortunately there is no log before the error…\nPlease find complete traceThanks,Steve",
"username": "STEVE_HELDEBAUME1"
},
{
"code": "",
"text": "Hi All,Any solution for this error? I have the same error and mongo crash.\nSame version of mongo 4.4 and windows server 2019.Any help?Best Regards,",
"username": "Rui_Horta"
},
{
"code": "",
"text": "Have you checked memory utilization that time?",
"username": "ROHIT_KHURANA"
},
{
"code": "",
"text": "Hi,We have 32Gb in our server, the process has taken 12 GB.What is “id”:4757800\" is the same that @STEVE_HELDEBAUME1 and @Abdelrahman_N_A put?And what is ctx\":“ftdc” ?“FileRenameFailed” it is permissions? it is another process using file?\nWe mongodb as windows service.Any ideia?Best Regards",
"username": "Rui_Horta"
},
{
"code": "",
"text": "Good Morning Everyone,I’m encountering the same issue, are there any feeback, Service running MongodDB service has fulle persions to storage and log paths.MongoDB 4.4\nWindows Server 2019{“t”:{\"$date\":“2022-03-11T11:15:09.016+02:00”},“s”:“F”, “c”:“CONTROL”, “id”:4757800, “ctx”:“ftdc”,“msg”:“Writing fatal message”,“attr”:{“message”:“terminate() called. An exception is active; attempting to gather more information”}}\n{“t”:{\"$date\":“2022-03-11T11:15:09.016+02:00”},“s”:“F”, “c”:“CONTROL”, “id”:4757800, “ctx”:“ftdc”,“msg”:“Writing fatal message”,“attr”:{“message”:“DBException::toString(): FileRenameFailed: Access is denied\\nActual exception type: class mongo::error_details::ExceptionForImpl<37,class mongo::AssertionException>\\n”}}",
"username": "Willem_Silver"
},
{
"code": "",
"text": "Hello @Stennie_X ,\nI’m also facing the same issue can you please reply with the resolution",
"username": "Sri_Sai_Ram_Akam"
}
] | How to avoid this error "terminate() called. An exception is active; attempting to gather more information"? | 2021-02-25T06:44:05.412Z | How to avoid this error “terminate() called. An exception is active; attempting to gather more information”? | 9,056 |
null | [
"compass",
"mongodb-shell"
] | [
{
"code": "",
"text": "Is there a way to copy a specific document’s structure and insert it in the same collection but add new fields to the new document using MongoSH and not Compass?",
"username": "Tre_Rush"
},
{
"code": "",
"text": "What you could try is an aggregation that $match the desired document, then use $set and $unset stages to modify the document. You add a final $merge stage to insert back the modified document.Just run the aggregation with the $merge, once you verified that your $set and $unset do the correct modification. Make sure you $unset the _id and use whenMatched:failed otherwise you risk modifying the original document.",
"username": "steevej"
},
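To make the suggestion above concrete, here is a minimal mongosh sketch of that pipeline; the collection name, the matched _id, and the fields being set are placeholders, not values from the original question:

```js
// Sketch of the $match / $set / $unset / $merge pipeline described above.
// Collection name, _id value, and new fields are placeholders.
db.myCollection.aggregate([
  { $match: { _id: ObjectId("5f4e7a2b9d3c2a1b8c7d6e5f") } }, // the document to copy
  { $set: { newField1: "value1", newField2: "value2" } },     // add the new fields
  { $unset: "_id" },                // drop _id so the server generates a new one
  { $merge: {
      into: "myCollection",
      whenMatched: "fail",          // never overwrite an existing document
      whenNotMatched: "insert"      // insert the copy as a new document
  } }
])
```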
{
"code": "",
"text": "Thank you! I appreciate the insight, I’ll try it out.",
"username": "Tre_Rush"
},
{
"code": "mongosh_id_id_id_id// Get the document you want to copy\nvar docToCopy = db.collectionName.findOne({});\n\n// Create a new document with the same structure as the original\nvar newDoc = Object.assign({}, docToCopy);\n\n// Remove the _id field from the new document\ndelete newDoc._id;\n\n// Add new fields to the new document\nnewDoc.newField1 = \"value1\";\nnewDoc.newField2 = \"value2\";\n\n// Generate a new ObjectId for the _id field\nnewDoc._id = new ObjectId();\n\n// Insert the new document into the collection\ndb.collectionName.insertOne(newDoc);\nfindOne()newField1newField2ObjectId_id",
"text": "Hello @Tre_Rush ,As well as @steevej’s suggestion, the following may fulfill your requirements as well using mongosh in particular. It is possible to copy a specific document’s structure and insert it into the same collection but with new fields and an updated _id using MongoDB’s shell (mongo shell). The reason you need to update _id also is because no two documents can have same _id and if we try to insert the document with same _id we will get below error.MongoServerError: E11000 duplicate key error collection: Test.performer index: id dup key: { _id: *** }Here’s an example in JavaScript using the MongoDB shell:This will copy the structure of the document you retrieved with findOne() into a new document, add two new fields newField1 and newField2, generate a new ObjectId for the _id field, and insert the new document into the collection. If this is not what you require, could you please post more details about the use case?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Copy Specific Document in same collection | 2023-02-01T15:58:33.764Z | Copy Specific Document in same collection | 2,127 |
null | [
"indexes"
] | [
{
"code": "test{\n \"_id\": \"63d771990ba1354fef3577c2\",\n \"project\": \"63a2f3537844f79dd4fe7234\",\n \"email\": \"[email protected]\"\n}\n{\n \"_id\": ObjectId,\n \"project\": ObjectId,\n \"email\": String\n}\ntest{ \"email\": 1 }{ \"project\": 1, \"email\": 1 }project{ \n project: ObjectId('63a2f3537844f79dd4fe7234')\n}\n{ \"email\": 1 }projectemail{ \n project: ObjectId('63a2f3537844f79dd4fe7234'),\n email: '[email protected]'\n}\n{ \"email\": 1 }test{ \"project\": 1, \"email\": 1 }{ \"email\": 1 }{ \"project\": 1, \"email\": 1 }{ \"email\": 1 }{ \"project\": 1, \"email\": 1 }{ \n project: ObjectId('63a2f3537844f79dd4fe7234'),\n email: '[email protected]'\n}\n{ \"email\": 1 }{ \"project\": 1, \"email\": 1 }{ \"email\": 1 }{ \"project\": 1, \"email\": 1 }{ \"email\": 1 }{ \"project\": 1, \"email\": 1 }",
"text": "Suppose I have one collection called test which has a below doc. Just for simplicity, it has one doc.Below is the doc schemaBelow are the indexes that I have created in test collection.Now if I run the below query it uses 3rd index (compound index) and that is correct because we have project field in the query and that belongs to the 3rd index.Now if I run the below query why it uses the 2nd index ({ \"email\": 1 }) why not the 3rd compound index? why it is not using the 3rd compound index even though the query contains the compound index prefix project and also the email field?Now if I remove the 2nd index { \"email\": 1 } then we have only two indexes in test collection as below.Now if I create the same index { \"email\": 1 } again then we have the below indexes.Now if I run the same below query again then it will use the 2nd compound index { \"project\": 1, \"email\": 1 }.Why for the same above query if we have indexes as belowthen it will use { \"email\": 1 } index to find documentsand if we have the below indexesthen it will use { \"project\": 1, \"email\": 1 } index to find documents?",
"username": "Svarup_Desai"
},
{
"code": "{ \n project: ObjectId('63a2f3537844f79dd4fe7234'),\n email: '[email protected]'\n}\nexplain()allPlansExecutionemail:1 score: 2.5002,\n executionStages: {\n stage: 'FETCH',\n filter: { project: { '$eq': -1960449405 } },\n nReturned: 1,\n executionTimeMillisEstimate: 0,\n works: 2,\nproject:1, email:1 score: 2.5002,\n executionStages: {\n stage: 'FETCH',\n nReturned: 1,\n executionTimeMillisEstimate: 0,\n works: 2,\nsample_mflix.comments{ \nmovie_id: ObjectId('573a1395f29313caabce1855'), \nemail: \"[email protected]\" \n}\nmovie_id_1_email_1movie_id_1_email_1",
"text": "Hi @Svarup_Desai,Welcome to the MongoDB Community forums The query optimizer in MongoDB chooses the index with the best performance. When multiple indexes are available, the query optimizer will evaluate the relative cost of each index and choose the index that provides the most selective results with the least amount of I/O cost.Here you are trying to execute the query:From my experiments with the scenario you posted, both indexes perform equally well so technically it doesn’t matter which one was chosen.You can confirm it by using explain() results to see the output of allPlansExecution.For index email:1For index project:1, email:1So it chooses either index arbitrarily as the score for both indexes is the same.However, when you do the same query with a large collection of documents the query optimizer may change with the size and complexity of the data. With a larger collection, the optimizer may have more information about the data distribution and select a different index based on this information. This is because the optimizer’s goal is to choose the index that will provide the most efficient way to resolve the query and this can change based on the size and complexity of the data.Like I tried with one collection which is sample collection sample_mflix.comments having 41.1K documents and I created two indexes very similar to yours:\nindexes2000×560 43.6 KB\nAfter running two queries similar to the one you described above, the results were not the same. The first queryutilized the movie_id_1_email_1 index. However, even after reordering the indexes by deleting and recreating them, the same movie_id_1_email_1 index was utilized.I hope it helps!Let us know if you have any further questions!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
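A minimal mongosh sketch of how to see those per-plan scores yourself is below; it follows the shape of the example in this thread, but the field values are placeholders:

```js
// Re-run the query with "allPlansExecution" verbosity to see how every candidate
// index was scored, not just the winning plan. Field values are placeholders.
db.test.find({
  project: ObjectId("63a2f3537844f79dd4fe7234"),
  email: "[email protected]"
}).explain("allPlansExecution")
// Look at executionStats.allPlansExecution in the output: each entry lists the
// index it used, nReturned, and the amount of work done, which drives the plan score.
```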
{
"code": "",
"text": "Thank you, @Kushagra_Kesav for the explanation,Now I understand it, MongoDB does not stick with the same index for any single query. The query optimizer in MongoDB chooses the best available index dynamically based on the size and complexity of the data, so in my example, all was happening because of low data.",
"username": "Svarup_Desai"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Collection index creation order matters? | 2023-01-30T08:18:50.575Z | Collection index creation order matters? | 1,073 |
null | [] | [
{
"code": "",
"text": "Hi,\nApparently my app has stopped working, the error appears:\nconst serverSelectionError = new ServerSelectionError();\n^I cannot connect using MongoDB Compass and when trying to view my collections in the database on the website, the error appears:\n“An error occurred while querying your MongoDB deployment.\nPlease try again in a few minutes.”Can anyone help me?",
"username": "Leonardo_Guimaraes"
},
{
"code": "",
"text": "Hi @Leonardo_Guimaraes,Welcome to the MongoDB Community forums when trying to view my collections in the database on the website, the error appears:\n“An error occurred while querying your MongoDB deployment.\nPlease try again in a few minutes.”Are you referring to an error encountered when accessing a database deployed on MongoDB Atlas using the Data Explorer?You mentioned your app stopped working, can you clarify if it was working before and suddenly stopped working without any changes being made from the application end?I cannot connect using MongoDB CompassWere you able to connect before using MongoDB compass?Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | My app has stopped working | 2023-02-02T21:10:13.210Z | My app has stopped working | 512 |
null | [
"vscode"
] | [
{
"code": "",
"text": "I installed the MongoDB plugin for a Visual Studio Code.\nI run a script in the playground but there is no output either in the ‘Output’ tab or in the ‘Playground Result’ (undefined) tab windows.The print command did send the text to the Output tab but missing the database-related output.What needs to be configured so I can see the database-related output?",
"username": "Saptaji_Basuki"
},
{
"code": "testuse <db>",
"text": "Hi @Saptaji_Basuki, welcome to the community! Glad to have you here!What database is selected before you run the query? I believe the default database is set to test when new playground is launched. Highlight and run the use <db> statement along with the query to see if that resolves your issue.Thanks,\nMahi",
"username": "mahisatya"
},
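For anyone hitting the same symptom, a small playground sketch is below; the database and collection names are placeholders:

```js
// MongoDB playground in VS Code: select the database explicitly, then print results.
// Database and collection names are placeholders.
use('myDatabase');

// find() returns a cursor in the playground; convert it to an array so the
// documents (or a count) actually show up in the Playground Result panel.
const docs = db.getCollection('myCollection').find({}).toArray();
console.log(docs.length);
docs;
```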
{
"code": "",
"text": "@mahisatya\nThank you for your response.I don’t know if this is related. I disabled the ‘Code Runner’ VSCode extension and now I can see the output.Things are working fine now. Thank you.",
"username": "Saptaji_Basuki"
},
{
"code": "console.log('hello world')console.log(db.collectionName.find({}).length)console.log(db.collectionName.find({}).toArray().length)",
"text": "I can see output from console.log('hello world') in the Output tab,\nbut not with console.log(db.collectionName.find({}).length)UPDATE: I found a fix:\nconsole.log(db.collectionName.find({}).toArray().length)",
"username": "John_Grant1"
}
] | VSC Playground Result - No output | 2021-07-01T22:00:43.234Z | VSC Playground Result - No output | 6,016 |
null | [] | [
{
"code": "",
"text": "Disclaimers: I know this is a REALLY old version, but it’s a legacy system that I need to make work for a few more months. Also, I know literally nothing about MongoDB.I have a cluster of 3 nodes running on Windows as EC2 instances on AWS. The memory allocated to the VMs is 16GB. The data partition is configured with 1000 IOPs. The size of the database is roughly one terabyte. The largest collection is on the order of 400GB or so, with some other 40-100GB collections.What I’m seeing in logs is a lot of elections and heartbeat failures. Members are being flagged as down, or slow to respond. I’m also seeing a lot of connection drops.Windows shows there’s only 300MB of free memory, with somewhere around 5-8000 page faults a second on average. The commit charge for the MongoD.exe process varies, but is as high as 55GB.Additional Info: Two of the smaller collections have TTLs. At this point, the only write activity is the TTL processes.My analysis is that there’s so much swapping going on that it’s destabilizing the cluster, and that’s manifested as seeing hosts down, elections, etc. Is that a reasonable idea?What would be a reasonable way to address this? I’m thinking bumping memory allocation to 64GB. Would that be sufficient?",
"username": "George_Sexton"
},
{
"code": "",
"text": "Hi @George_Sexton welcome to the community!MongoDB 2.4.4 was released in June 2013, so almost 10 years ago! Unfortunately that means that we have limited options. Notably, the 2.4 series was using the MMAPv1 storage engine, which was removed in modern MongoDB versions, so their behaviour and performance characteristics are radically different. In fact, the Atlas Live Migration service only goes as far back as the 2.6 series, so migrating to Atlas is out of the question as well.Due to the age of the infrastructure, I can perhaps offer some pointers on what to look for, but may be unable to give you a more direct solution, unfortunately.Two of the smaller collections have TTLs. At this point, the only write activity is the TTL processes.Do you mean that there is no further data going into the database, only getting removed?My analysis is that there’s so much swapping going on that it’s destabilizing the cluster, and that’s manifested as seeing hosts down, elections, etc. Is that a reasonable idea?Barring other evidence from the logs when the event happens, I say this is a very reasonable analysis. It is possible that the server is busy swapping it doesn’t have time to do anything else. Although it’s curious to see this apparent resource crunch in a system where no data is being added. But then again, this is MMAPv1, of which I’m not entirely familiar with its performance behaviour, especially under (I assume) an equally old Windows I’m thinking bumping memory allocation to 64GB. Would that be sufficient?At this point I don’t think there’s any harm in trying this step, although whether it’s sufficient or not, it’s hard to say at this point. The system is on the verge of failing anyway, and adding more RAM usually helps when you suspect that the hardware does not have enough resources to do its work.If all else fails, I would suggest you to upgrade to at least MongoDB 2.6, then use Live Migration to migrate to Atlas. From there, you can upgrade to a modern MongoDB version, then you can decide if you want to dump the data back into an on-prem deployment, or just simply use Atlas from that point onward.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Kevin,I’m concerned that there would be compatibility problems with your suggested route. The OS is old, the code base is equally old. Driver compatibility with the DB is a concern, along with things like TLS versions, supported ciphers, etc.George",
"username": "George_Sexton"
},
{
"code": "",
"text": "Hello Kevin,\nI am working with George and we have a question regarding the upgrade. Which version would you recommend we upgrade to (from 2.4.4) so that :The idea is to cause as little ripple as possible.\nThanx !\nPascal",
"username": "Pascal_Audant"
},
{
"code": "",
"text": "Hi @Pascal_Audant , @George_Sextonwe won’t have compatibility issue when we try to restore the backup from 2.4.4Assuming you’re trying to move away from 2.4.4, I would suggest experimenting with at least MongoDB 2.6 at this point, as this is the oldest version that Atlas and most drivers can handle. Using at least 2.6 opens up many possibilities like Atlas Live Migration (which might come in handy later), and the oldest MongoDB version that most drivers support is 2.6.we won’t have to upgrade the driver on the app that’s connecting to the MongoDBMost drivers support 2.6 as the oldest version, up to a certain point. For example, the latest Node driver (5.0) supports MongoDB 3.6 as the oldest version, but Node driver 4.1 supports up to MongoDB 2.6. See https://www.mongodb.com/docs/drivers/node/current/compatibility/ for more details. Other drivers would also have a similar compatibility matrix.The idea is to cause as little ripple as possible.Without knowing the exact details of the app and infrastructure, I cannot say how risky any operation will be.I believe the least risky proposition is to upgrade the instance’s RAM to see if it solves the resource issue. However staying at 2.4.4 is just as risky, as you could see a repeat of these events again later since the root cause of the issue is still unknown.If modernizing the infrastructure is the ultimate goal, though I would experiment with 2.6. MongoDB typically only support upgrades between major versions and 2.6 is one major version up from 2.4.Another option is, if this is a vital data to your operation and you’re hesitant about the options we’re currently discussing, you might want to engange Enterprise Advanced Support to provide guidance and support to help you modernize the infrastructure.Best regards\nKevin",
"username": "kevinadi"
}
] | Advice for Scaling 2.4.4 on Windows | 2023-02-01T16:18:37.887Z | Advice for Scaling 2.4.4 on Windows | 508 |
[
"berlin-mug"
] | [
{
"code": "",
"text": "\n_MUG - Design Kit - Berlin (1)1920×1080 182 KB\nJoin us for a MongoDB Meetup featuring two main talks and several lightning talks on real-world use cases and best practices for using MongoDB in your technology stack.We look forward to seeing you there!Event Type: In-Person\nLocation: Adalbertstraße 8 Adalbertstraße 8 · Berlin To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Do you use Meetup.com? You can also RSVP on our Meetup.com Event",
"username": "Roman_Right"
},
{
"code": "",
"text": "Here are some photos from the event ",
"username": "Harshit"
}
] | Berlin MUG: Inaugural Meetup | 2022-12-15T21:20:31.254Z | Berlin MUG: Inaugural Meetup | 3,658 |
|
null | [
"nairobi-mug"
] | [
{
"code": "",
"text": "This event will be targeting university students to get excited about MongoDB and what it does. We will have a simple project presentation by our speaker, Shadrack, created using MongoDB.The following is the agenda;Event Type: In-Person\nLocation: Jomo Kenyatta University of Agriculture and Technology(JKUAT)",
"username": "delphine_nyaboke"
},
{
"code": "",
"text": "Here’s a photo from the event \nimage800×599 49.8 KB\n",
"username": "Harshit"
}
] | Nairobi MUG: JKUAT University Event | 2022-12-26T16:15:12.667Z | Nairobi MUG: JKUAT University Event | 3,965 |
[] | [
{
"code": "",
"text": "Hi,I would like to add a filter on my dashboard, but I would like to avoid typing the selected value because the filtered value is a string on which the user can easily make mistakes, especially since he does not know all of them possible values.Actually the filter shows some values and terminate by “Not all values are displayed”, inviting the user to manually type the data.\nI didn’t find how to assume this in the doc, except manually setting it Explicit spelling is definitively not a good idea for a string, isn’t it possible to show a multiselect combo ?\nOtherwise is there a workaround for that or an approach I should have ?Thanks ",
"username": "St_ef"
},
{
"code": "",
"text": "The string filter cards only show a maximum of 20 values. One other option you could try is to create a table chart that shows all the possible values, and use Interactive Filtering to use the table to filter your other charts.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Good idea this interactive filtering… but it seems a table has no influence on other charts.",
"username": "St_ef"
},
{
"code": "",
"text": "Make sure you set this option on the table.",
"username": "tomhollander"
}
] | Not all values are displayed | 2023-02-01T20:39:46.102Z | Not all values are displayed | 1,242 |
|
null | [] | [
{
"code": "",
"text": "Can I populate a collection using a custom field instead of using the default mongo generated object id,\ncan’t find any resources which answer this question",
"username": "Mohammed_Ateeq_Uddin"
},
{
"code": "",
"text": "Hi @Mohammed_Ateeq_Uddin ,\nYes, but It isn’ t a good idea because there are some important information in the ObjectIdI think the better way is to create another field and not override the ObjectId.I hope it is useful!Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "tlas atlas-d7b9wu-shard-0 [primary] test> c.insertOne( { _id : 0 } )\n{ acknowledged: true, insertedId: 0 }\nAtlas atlas-d7b9wu-shard-0 [primary] test> c.insertOne( { _id : new Date( \"2023-01-31\" ) } )\n{ acknowledged: true, insertedId: ISODate(\"2023-01-31T00:00:00.000Z\") }\nAtlas atlas-d7b9wu-shard-0 [primary] test> c.insertOne( { _id : 20230131 } )\n{ acknowledged: true, insertedId: 20230131 }\nAtlas atlas-d7b9wu-shard-0 [primary] test> c.insertOne( { _id : new UUID() } )\n{\n acknowledged: true,\n insertedId: UUID(\"ab2e3ede-d8b0-46ba-9047-5bfffdd475fb\")\n}\nAtlas atlas-d7b9wu-shard-0 [primary] test> c.insertOne( { _id : { year : 2023 , month : 1 , day : 31 } } )\n{ acknowledged: true, insertedId: { year: 2023, month: 1, day: 31 } }\nAtlas atlas-d7b9wu-shard-0 [primary] test> c.find()\n[\n { _id: 0 },\n { _id: ISODate(\"2023-01-31T00:00:00.000Z\") },\n { _id: 20230131 },\n { _id: UUID(\"ab2e3ede-d8b0-46ba-9047-5bfffdd475fb\") } ,\n { _id: { year: 2023, month: 1, day: 31 } }\n]\n",
"text": "You simply set _id to any value. The only restriction is that it has to be unique within a collection. Some use UUID rather than generated ObjectId.For example, in mongosh:As you see, you may even mix different types. It’s unusual but possible.Sometimes, it may be more efficient to use a different _id. For example, US ZIP codes are unique, so you could use the ZIP code it self as the _id, rather than having a normal ObjectId with its index and the another field zipcode with another index. More efficient because you spare 1 index.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Populating a collection using a custom field instead of _id | 2023-02-02T17:17:43.345Z | Populating a collection using a custom field instead of _id | 2,592 |
null | [
"aggregation",
"queries",
"replication",
"performance"
] | [
{
"code": "db.getCollection('matchmaking').find({matchmakingId:”<eventId>”, score:{$gte : <lowerRange>, $lte:<upperRange>}}).sort({“matchmakeTimestamp” : 1}).limit(6)matchmakingId_1_lastMatchmakeTimestamp_1_score_-1",
"text": "Hey all,We’ve been having trouble with a particular collection that we use for matchmaking. Players will enter an event and matchmake against 3 random opponents based on the score of their matchmaking record. These events start/end at 10am PST, so a large portion of the user base is active and trying to matchmake around this time.In this collection, we store various matchmaking records for different events that a user can participate in, so they will have one record that has their userId and the matchmakingId for that particular event. The record also has a score which is what we use to determine the strength of that particular user’s record. The collection currently has around 20 million records in it. And some events can have upwards of almost 1 million records associated with it.When we try to matchmake for a user, we grab 6 random records (we reduce down to 3 in the application code) for this collection within a particular score range. Originally, we tried to do this using the aggregation pipeline but we found that it did not seem performant enough (although we’re willing to give an another shot if that’s the best way). Our current implementation now just has a timestamp inserted into the record of when that particular matchmaking record was match made against and we sort against that timestamp so we cycle through the records. This makes our query look something like this:db.getCollection('matchmaking').find({matchmakingId:”<eventId>”, score:{$gte : <lowerRange>, $lte:<upperRange>}}).sort({“matchmakeTimestamp” : 1}).limit(6)We have an index that supports this query in equality, sort, range order matchmakingId_1_lastMatchmakeTimestamp_1_score_-1Within any given score range, there can be tens of thousands of records that fall in that range.Our DB infrastructure is shared with threes shards, each with a primary and two secondary machines in each replica set, and we’re running Mongo version 3.6.23 (we know, it’s a bit outdated)At the event start/end time (10am PST), our monitoring tools report that the load average is high and we see that our db.currentOps queue becomes large and filled up with matchmaking queries. We’ve tried to investigate why these matchmaking queries have been such a problem, and have tried to optimize our queries as much as possible, but it seems like there’s still some issue that occurs that causes things to get backed up and cause the whole system to become sluggish.Is there something that we’re missing, or is there some other way that we can perform these matchmaking queries more efficiently? Or are the queries fine and we need to adjust something else? Is MongoDB just not suited for these types of queries? We’ve been trying to work on this problem for the better part of a year and we’re at a loss for what to do.",
"username": "Andrew_Dos_Santos"
},
{
"code": "db.getCollection('matchmaking').find({ matchmakingId: “<matchmakingId>”, score: { $lte: 1516, $gte: 1011 } }).sort( { lastMatchmakeTimestamp: 1 }).limit(6).explain(\"executionStats\"){\n \"queryPlanner\" : {\n \"mongosPlannerVersion\" : 1,\n \"winningPlan\" : {\n \"stage\" : \"SHARD_MERGE_SORT\",\n \"shards\" : [ \n {\n \"shardName\" : \"rs3\",\n \"connectionString\" : \"rs3/shard3a:27018,shard3b:27018,shard3c:27018\",\n \"serverInfo\" : {\n \"host\" : \"shard3a.hostname\",\n \"port\" : 27018,\n \"version\" : \"3.6.23\",\n \"gitVersion\" : \"d352e6a4764659e0d0350ce77279de3c1f243e5c\"\n },\n \"plannerVersion\" : 1,\n \"namespace\" : \"userdb.matchmaking\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"matchmakingId\" : {\n \"$eq\" : \"<matchmakingId>\"\n }\n }, \n {\n \"score\" : {\n \"$lte\" : 1516.0\n }\n }, \n {\n \"score\" : {\n \"$gte\" : 1011.0\n }\n }\n ]\n },\n \"winningPlan\" : {\n \"stage\" : \"LIMIT\",\n \"limitAmount\" : 6,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"matchmakingId\" : 1.0,\n \"lastMatchmakeTimestamp\" : 1.0,\n \"score\" : -1.0\n },\n \"indexName\" : \"matchmakingId_1_lastMatchmakeTimestamp_1_score_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"matchmakingId\" : [],\n \"lastMatchmakeTimestamp\" : [],\n \"score\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"matchmakingId\" : [ \n \"[\\\"<matchmakingId>\\\", \\\"<matchmakingId>\\\"]\"\n ],\n \"lastMatchmakeTimestamp\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"score\" : [ \n \"[1516.0, 1011.0]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\" : [ \n {\n \"stage\" : \"SORT\",\n \"sortPattern\" : {\n \"lastMatchmakeTimestamp\" : 1.0\n },\n \"limitAmount\" : 6,\n \"inputStage\" : {\n \"stage\" : \"SORT_KEY_GENERATOR\",\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [ \n {\n \"score\" : {\n \"$lte\" : 1516.0\n }\n }, \n {\n \"score\" : {\n \"$gte\" : 1011.0\n }\n }\n ]\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"matchmakingId\" : 1.0,\n \"userId\" : 1.0\n },\n \"indexName\" : \"matchmakingId_1_userId_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"matchmakingId\" : [],\n \"userId\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"matchmakingId\" : [ \n \"[\\\"<matchmakingId>\\\", \\\"<matchmakingId>\\\"]\"\n ],\n \"userId\" : [ \n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n }\n }\n ]\n }, \n {\n \"shardName\" : \"rs1\",\n \"connectionString\" : \"rs1/shard1a:27018,shard1b:27018,shard1c:27018\",\n \"serverInfo\" : {\n \"host\" : \"shard1c.hostname\",\n \"port\" : 27018,\n \"version\" : \"3.6.5\",\n \"gitVersion\" : \"a20ecd3e3a174162052ff99913bc2ca9a839d618\"\n },\n \"plannerVersion\" : 1,\n \"namespace\" : \"userdb.matchmaking\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"matchmakingId\" : {\n \"$eq\" : \"<matchmakingId>\"\n }\n }, \n {\n \"score\" : {\n \"$lte\" : 1516.0\n }\n }, \n {\n \"score\" : {\n \"$gte\" : 1011.0\n }\n }\n ]\n },\n \"winningPlan\" : {\n \"stage\" : \"LIMIT\",\n \"limitAmount\" : 6,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"matchmakingId\" : 1.0,\n \"lastMatchmakeTimestamp\" : 
1.0,\n \"score\" : -1.0\n },\n \"indexName\" : \"matchmakingId_1_lastMatchmakeTimestamp_1_score_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"matchmakingId\" : [],\n \"lastMatchmakeTimestamp\" : [],\n \"score\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"matchmakingId\" : [ \n \"[\\\"<matchmakingId>\\\", \\\"<matchmakingId>\\\"]\"\n ],\n \"lastMatchmakeTimestamp\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"score\" : [ \n \"[1516.0, 1011.0]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\" : [ \n {\n \"stage\" : \"SORT\",\n \"sortPattern\" : {\n \"lastMatchmakeTimestamp\" : 1.0\n },\n \"limitAmount\" : 6,\n \"inputStage\" : {\n \"stage\" : \"SORT_KEY_GENERATOR\",\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [ \n {\n \"score\" : {\n \"$lte\" : 1516.0\n }\n }, \n {\n \"score\" : {\n \"$gte\" : 1011.0\n }\n }\n ]\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"matchmakingId\" : 1.0,\n \"userId\" : 1.0\n },\n \"indexName\" : \"matchmakingId_1_userId_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"matchmakingId\" : [],\n \"userId\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"matchmakingId\" : [ \n \"[\\\"<matchmakingId>\\\", \\\"<matchmakingId>\\\"]\"\n ],\n \"userId\" : [ \n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n }\n }\n ]\n }, \n {\n \"shardName\" : \"rs2\",\n \"connectionString\" : \"rs2/shard2a:27018,shard2b:27018,shard2c:27018\",\n \"serverInfo\" : {\n \"host\" : \"shard2c.hostname\",\n \"port\" : 27018,\n \"version\" : \"3.6.5\",\n \"gitVersion\" : \"a20ecd3e3a174162052ff99913bc2ca9a839d618\"\n },\n \"plannerVersion\" : 1,\n \"namespace\" : \"userdb.matchmaking\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [ \n {\n \"matchmakingId\" : {\n \"$eq\" : \"<matchmakingId>\"\n }\n }, \n {\n \"score\" : {\n \"$lte\" : 1516.0\n }\n }, \n {\n \"score\" : {\n \"$gte\" : 1011.0\n }\n }\n ]\n },\n \"winningPlan\" : {\n \"stage\" : \"LIMIT\",\n \"limitAmount\" : 6,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"matchmakingId\" : 1.0,\n \"lastMatchmakeTimestamp\" : 1.0,\n \"score\" : -1.0\n },\n \"indexName\" : \"matchmakingId_1_lastMatchmakeTimestamp_1_score_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"matchmakingId\" : [],\n \"lastMatchmakeTimestamp\" : [],\n \"score\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"matchmakingId\" : [ \n \"[\\\"<matchmakingId>\\\", \\\"<matchmakingId>\\\"]\"\n ],\n \"lastMatchmakeTimestamp\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"score\" : [ \n \"[1516.0, 1011.0]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\" : [ \n {\n \"stage\" : \"SORT\",\n \"sortPattern\" : {\n \"lastMatchmakeTimestamp\" : 1.0\n },\n \"limitAmount\" : 6,\n \"inputStage\" : {\n \"stage\" : \"SORT_KEY_GENERATOR\",\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [ \n {\n \"score\" : {\n \"$lte\" : 1516.0\n }\n }, \n {\n \"score\" : {\n \"$gte\" : 1011.0\n }\n }\n ]\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"matchmakingId\" : 1.0,\n \"userId\" : 1.0\n },\n \"indexName\" : \"matchmakingId_1_userId_1\",\n \"isMultiKey\" : 
false,\n \"multiKeyPaths\" : {\n \"matchmakingId\" : [],\n \"userId\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"matchmakingId\" : [ \n \"[\\\"<matchmakingId>\\\", \\\"<matchmakingId>\\\"]\"\n ],\n \"userId\" : [ \n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n }\n }\n ]\n }\n ]\n }\n },\n \"executionStats\" : {\n \"nReturned\" : 18,\n \"executionTimeMillis\" : 195,\n \"totalKeysExamined\" : 12208,\n \"totalDocsExamined\" : 18,\n \"executionStages\" : {\n \"stage\" : \"SHARD_MERGE_SORT\",\n \"nReturned\" : 18,\n \"executionTimeMillis\" : 195,\n \"totalKeysExamined\" : 12208,\n \"totalDocsExamined\" : 18,\n \"totalChildMillis\" : NumberLong(314),\n \"shards\" : [ \n {\n \"shardName\" : \"rs3\",\n \"executionSuccess\" : true,\n \"executionStages\" : {\n \"stage\" : \"LIMIT\",\n \"nReturned\" : 6,\n \"executionTimeMillisEstimate\" : 10,\n \"works\" : 3950,\n \"advanced\" : 6,\n \"needTime\" : 3943,\n \"needYield\" : 0,\n \"saveState\" : 61,\n \"restoreState\" : 61,\n \"isEOF\" : 1,\n \"invalidates\" : 0,\n \"limitAmount\" : 6,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"nReturned\" : 6,\n \"executionTimeMillisEstimate\" : 10,\n \"works\" : 3949,\n \"advanced\" : 6,\n \"needTime\" : 3943,\n \"needYield\" : 0,\n \"saveState\" : 61,\n \"restoreState\" : 61,\n \"isEOF\" : 0,\n \"invalidates\" : 0,\n \"docsExamined\" : 6,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 6,\n \"executionTimeMillisEstimate\" : 10,\n \"works\" : 3949,\n \"advanced\" : 6,\n \"needTime\" : 3943,\n \"needYield\" : 0,\n \"saveState\" : 61,\n \"restoreState\" : 61,\n \"isEOF\" : 0,\n \"invalidates\" : 0,\n \"keyPattern\" : {\n \"matchmakingId\" : 1.0,\n \"lastMatchmakeTimestamp\" : 1.0,\n \"score\" : -1.0\n },\n \"indexName\" : \"matchmakingId_1_lastMatchmakeTimestamp_1_score_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"matchmakingId\" : [],\n \"lastMatchmakeTimestamp\" : [],\n \"score\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"matchmakingId\" : [ \n \"[\\\"<matchmakingId>\\\", \\\"<matchmakingId>\\\"]\"\n ],\n \"lastMatchmakeTimestamp\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"score\" : [ \n \"[1516.0, 1011.0]\"\n ]\n },\n \"keysExamined\" : 3949,\n \"seeks\" : 3944,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0,\n \"seenInvalidated\" : 0\n }\n }\n }\n }, \n {\n \"shardName\" : \"rs1\",\n \"executionSuccess\" : true,\n \"executionStages\" : {\n \"stage\" : \"LIMIT\",\n \"nReturned\" : 6,\n \"executionTimeMillisEstimate\" : 20,\n \"works\" : 3786,\n \"advanced\" : 6,\n \"needTime\" : 3779,\n \"needYield\" : 0,\n \"saveState\" : 59,\n \"restoreState\" : 59,\n \"isEOF\" : 1,\n \"invalidates\" : 0,\n \"limitAmount\" : 6,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"nReturned\" : 6,\n \"executionTimeMillisEstimate\" : 20,\n \"works\" : 3785,\n \"advanced\" : 6,\n \"needTime\" : 3779,\n \"needYield\" : 0,\n \"saveState\" : 59,\n \"restoreState\" : 59,\n \"isEOF\" : 0,\n \"invalidates\" : 0,\n \"docsExamined\" : 6,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 6,\n \"executionTimeMillisEstimate\" : 20,\n \"works\" : 3785,\n \"advanced\" : 6,\n \"needTime\" : 3779,\n \"needYield\" : 0,\n \"saveState\" : 59,\n \"restoreState\" : 59,\n \"isEOF\" : 0,\n \"invalidates\" : 0,\n \"keyPattern\" : {\n 
\"matchmakingId\" : 1.0,\n \"lastMatchmakeTimestamp\" : 1.0,\n \"score\" : -1.0\n },\n \"indexName\" : \"matchmakingId_1_lastMatchmakeTimestamp_1_score_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"matchmakingId\" : [],\n \"lastMatchmakeTimestamp\" : [],\n \"score\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"matchmakingId\" : [ \n \"[\\\"<matchmakingId>\\\", \\\"<matchmakingId>\\\"]\"\n ],\n \"lastMatchmakeTimestamp\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"score\" : [ \n \"[1516.0, 1011.0]\"\n ]\n },\n \"keysExamined\" : 3785,\n \"seeks\" : 3780,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0,\n \"seenInvalidated\" : 0\n }\n }\n }\n }, \n {\n \"shardName\" : \"rs2\",\n \"executionSuccess\" : true,\n \"executionStages\" : {\n \"stage\" : \"LIMIT\",\n \"nReturned\" : 6,\n \"executionTimeMillisEstimate\" : 40,\n \"works\" : 4475,\n \"advanced\" : 6,\n \"needTime\" : 4468,\n \"needYield\" : 0,\n \"saveState\" : 69,\n \"restoreState\" : 69,\n \"isEOF\" : 1,\n \"invalidates\" : 0,\n \"limitAmount\" : 6,\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"nReturned\" : 6,\n \"executionTimeMillisEstimate\" : 40,\n \"works\" : 4474,\n \"advanced\" : 6,\n \"needTime\" : 4468,\n \"needYield\" : 0,\n \"saveState\" : 69,\n \"restoreState\" : 69,\n \"isEOF\" : 0,\n \"invalidates\" : 0,\n \"docsExamined\" : 6,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 6,\n \"executionTimeMillisEstimate\" : 40,\n \"works\" : 4474,\n \"advanced\" : 6,\n \"needTime\" : 4468,\n \"needYield\" : 0,\n \"saveState\" : 69,\n \"restoreState\" : 69,\n \"isEOF\" : 0,\n \"invalidates\" : 0,\n \"keyPattern\" : {\n \"matchmakingId\" : 1.0,\n \"lastMatchmakeTimestamp\" : 1.0,\n \"score\" : -1.0\n },\n \"indexName\" : \"matchmakingId_1_lastMatchmakeTimestamp_1_score_-1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"matchmakingId\" : [],\n \"lastMatchmakeTimestamp\" : [],\n \"score\" : []\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"matchmakingId\" : [ \n \"[\\\"<matchmakingId>\\\", \\\"<matchmakingId>\\\"]\"\n ],\n \"lastMatchmakeTimestamp\" : [ \n \"[MinKey, MaxKey]\"\n ],\n \"score\" : [ \n \"[1516.0, 1011.0]\"\n ]\n },\n \"keysExamined\" : 4474,\n \"seeks\" : 4469,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0,\n \"seenInvalidated\" : 0\n }\n }\n }\n }\n ]\n }\n },\n \"serverInfo\" : {\n \"host\" : “host.address”,\n \"port\" : 27017,\n \"version\" : \"3.6.23\",\n \"gitVersion\" : \"d352e6a4764659e0d0350ce77279de3c1f243e5c\"\n },\n \"ok\" : 1.0,\n \"operationTime\" : Timestamp(1675365138, 1417),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1675365138, 1461),\n \"signature\" : {\n \"hash\" : { \"$binary\" : \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\", \"$type\" : \"00\" },\n \"keyId\" : NumberLong(0)\n }\n }\n}\n",
"text": "For additional context, this is an example query with the output of explaining the query with executionStats\ndb.getCollection('matchmaking').find({ matchmakingId: “<matchmakingId>”, score: { $lte: 1516, $gte: 1011 } }).sort( { lastMatchmakeTimestamp: 1 }).limit(6).explain(\"executionStats\")",
"username": "Andrew_Dos_Santos"
}
] | Troubleshooting Problematic Collection and Query | 2023-02-02T19:55:49.881Z | Troubleshooting Problematic Collection and Query | 1,054 |
null | [
"containers"
] | [
{
"code": "",
"text": "I’m tasked with debugging an already running docker container running mongodb. Is there a way to know which user/passwd was used to create the dbs etc., I’ve access to their scripts but they’re using a number of environment variables but can’t the find the source of any. Any pointers would be great.",
"username": "Dev_Engine"
},
{
"code": "",
"text": "Ho @Dev_Engine ,\nThe first work around is keep in my mind, is to comment the security parameter in the configuration file, restart the instance, create a new admin user and decomment the security parameter. Then you can log with your new user admin and get the other users with the command db.getUsers(). Eventually you can modify their passwd.I hope it is useful!Regards",
"username": "Fabio_Ramohitaj"
},
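A rough mongosh sketch of the workaround described above, assuming you can edit the mongod configuration inside the container; the user names and passwords are placeholders:

```js
// After restarting mongod with the security/authorization section commented out,
// connect with mongosh and create a new administrative user (names are placeholders):
use admin
db.createUser({
  user: "recoveryAdmin",
  pwd: "changeMe",
  roles: [
    { role: "userAdminAnyDatabase", db: "admin" },
    { role: "readWriteAnyDatabase", db: "admin" }
  ]
})

// Re-enable authorization, restart mongod, log in as recoveryAdmin, then list the
// existing users and reset a password if needed:
use admin
db.getUsers()
db.changeUserPassword("someExistingUser", "newPassword")
```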
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Any way to know what user was used for the mongodb in a container? | 2023-02-02T18:40:00.634Z | Any way to know what user was used for the mongodb in a container? | 719 |
null | [
"data-modeling",
"swift",
"graphql"
] | [
{
"code": "Object(init: value)[String : AnyBSON]",
"text": "Every question I seem to find about converting GraphQL results to Swift Objects or using the remote access api in swift only has one answer right now which is to add the objects to a non synced realm but there’s no documentation whatsoever. So I’d love some help with this from anyone who has implemented this in their app. I’d be grateful because I’ve been trying continuously to figure it out.For context I’m using an atlas function to search a query and return the resulting documents as objects to the user in a list. When the user taps on the ListCell it takes them to a detail view so I need the full object. I can’t use projections or individual values from the results to construct my search view.I’ve tried to use the Object(init: value) and it always fails with the same error citing that ObjectId was not found.I’ve tried the BSON library on Github to try and decode the BSON values but that didn’t work either.At this point the only solution I haven’t tried is parsing through each Document and decoding everything using a switch statement and assign values individually to each field for every object. I know that will definitely work but That just seems messy and def not the most elegant solution.If anybody has a better solution or can tell me how to add add objects from a [String : AnyBSON] search result to a realm. Please help.Thank you for your time.",
"username": "Timothy_Tati"
},
{
"code": "",
"text": "add the objects to a non synced realmThat’s a bit vague. You would only need to add the objects to a non-synced realm if you wanted to persist the objects.If you don’t want to persist them, you can really take the data returned from the remote access query and add that to any kind of object, a pure Swift object or a Realm object.If you want to persist the data, then well, that’s how Realm works. Instantiate the object, populate the properties and write it.Can you clarify what the use case is?",
"username": "Jay"
},
{
"code": "User: Object {\n\n @Persisted(primarykey: true) var _id: ObjectId\n @Persisted var uid : String // _id.stringValue\n @Persisted var firstName: String\n @Persisted var lastName: String\n @Persisted var created : Date\n @Persisted var age : Int\n @Persisted var gender : Gender\n @Persisted var limit: Double\n \n @Persisted var following = MutableSet<User>() \n\n} \n@ObservedRealmObjectUser",
"text": "I can convert them into any realm objects? How is that possible?Okay let me share some schema and provide more context.this is the basic user object. the actual schema is much more complex with a lot of fields but this is the basic idea of itI have a search view where users can type a query and it return a fuzzy search results of top 20 possible documents to display a list view which navigates to display a profile view which is expecting an @ObservedRealmObject of type User. Now how can i go from 20 [String : AnyBSON] dictionaries to a profile view on tap. I dont need the objects to persist but i need to them to be converted into realm objects so my profile view can work with them.",
"username": "Timothy_Tati"
},
{
"code": "",
"text": "Can you clarify what you are using as they are quite different:I’m using an atlas functionorusing the remote access api in swiftIf you using atlas functions, are you hitting endpoints to retrieve the data? If so, Remote Access is going to be quite a bit more streamlined.",
"username": "Jay"
},
{
"code": "exports = async function(query){\n \n let users = context.services.get(\"mongodb-atlas\").db(\"test\").collection(\"User\");\n \n let pipeline = [\n {\n '$search': {\n 'index': 'default',\n 'text': {\n 'query': query,\n 'path': {\n 'wildcard': '*'\n },\n 'fuzzy': {\n \"maxEdits\": 2,\n \"maxExpansions\": 20,\n }\n }\n }\n \n },\n {\n $sort: {value: -1 },\n },\n {\n $limit : 20\n }\n];\n \n return users.aggregate(pipeline);\n};\n",
"text": "Hello,I’m currently using an atlas function to retrieve the data. This is the exact function I’m using.",
"username": "Timothy_Tati"
},
{
"code": "class User: Object {\n @Persisted var firstName: String\n @Persisted var lastName: String\n @Persisted var age : Int\n}\nlet firstName = AnyBSON(stringLiteral: \"some first name\")\nlet lastName = AnyBSON(stringLiteral: \"some last name\")\nlet age = AnyBSON(integerLiteral: 55)\n\nlet yourData = [\n \"firstName\": firstName,\n \"lastName\": lastName,\n \"age\": age\n]\nlet user = User()yourData.forEach { dict in\n switch dict.key {\n case \"firstName\":\n user.firstName = dict.value.stringValue ?? \"No first name\"\n case \"lastName\":\n user.lastName = dict.value.stringValue ?? \"No last name\"\n case \"age\":\n user.age = dict.value.asInt() ?? 0\n default:\n print(\"oops, key not found\")\n }\n}\n\nprint(user)\nUser {\n\tfirstName = some first name;\n\tlastName = some last name;\n\tage = 55;\n}\n//create a fake [String: AnyBSON] document\nlet myDocument = AnyBSON(dictionaryLiteral: (\"firstName\", firstName), (\"lastName\", lastName), (\"age\", age) )\n\n//and then process it\nswitch myDocument {\ncase .document(let d):\n print(d)\ndefault:\n break\n}\n[\"lastName\": Optional(RealmSwift.AnyBSON.string(\"some last name\")), \"firstName\": Optional(RealmSwift.AnyBSON.string(\"some first name\")), \"age\": Optional(RealmSwift.AnyBSON.int64(55))]",
"text": "Thanks for that.The server function isn’t so important as to how you’re calling it and handling the returned data within so perhaps seeing that code would help.Now how can i go from 20 [String : AnyBSON] dictionaries to a profile viewAgain, some code would help but let me give a verbose example - and this may be absolutely no help at all but just covering some options:Here’s a simple User modelThen some fake data; this is an example of 3 [String: AnyBSON] contained in a yourData array.Then, once yourData is received, process it to create a user objectlet user = User()and the outputAs mentioned, that’s verbose. You could leverage Codeable protocols to instantiate the objects directly from the returned data.–If you want to work with Realm Documents (and the returned data may be a document depending on your code)and the output[\"lastName\": Optional(RealmSwift.AnyBSON.string(\"some last name\")), \"firstName\": Optional(RealmSwift.AnyBSON.string(\"some first name\")), \"age\": Optional(RealmSwift.AnyBSON.int64(55))]",
"username": "Jay"
},
{
"code": "",
"text": "Thank you for your time, Jay. Your answer was enough for me to devise a proper solution for my problem.",
"username": "Timothy_Tati"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | GraphQL to Swift in a Non-Synced Realm | 2023-02-01T09:22:01.824Z | GraphQL to Swift in a Non-Synced Realm | 1,112 |
null | [
"dot-net"
] | [
{
"code": " FilterDefinition<BsonDocument>SimpleFilterDefinition<BsonDocument, int>.ToBsonDocument()",
"text": "I have service methods that I am using to dynamically return FilterDefinition<BsonDocument>. I am trying to write unit tests in C# to test that the correct FilterDefinitions are being returned. At runtime, the current unit test is returning a SimpleFilterDefinition<BsonDocument, int>. I thought to attempt to cast to that and if not null, check that the int value is as expected. But, I cannot cast to SimpleFilterDefinition due to its protection level. I did .ToBsonDocument(), but I am not seeing any values in it that would be helpful as far as checking that the field and values are as expected. What would be the suggested way to check that the field and value are as expected?",
"username": "Steven_Rothwell"
},
{
"code": "private String ConvertFilterToJson(FilterDefinition<BsonDocument> filter)\n{\n var serializerRegistry = MongoDB.Bson.Serialization.BsonSerializer.SerializerRegistry;\n var documentSerializer = serializerRegistry.GetSerializer<BsonDocument>();\n return filter.Render(documentSerializer, serializerRegistry).ToJson();\n}\nGuid id = Guid.Empty;\nvar expectedFilter = Builders<BsonDocument>.Filter.Eq(\"Id\", id);\nvar expectedJson = ConvertFilterToJson(expectedFilter);\n\nvar result = _mongoDbService.GetIdFilter(id);\n\nAssert.NotNull(result);\n\nvar resultJson = ConvertFilterToJson(result);\n\nAssert.Equal(expectedJson, resultJson);\n",
"text": "I was able to test this by creating the following private method:Then, in my unit test, I do:",
"username": "Steven_Rothwell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to Unit Test methods that return FilterDefinition<BsonDocument> | 2023-01-29T18:24:24.974Z | How to Unit Test methods that return FilterDefinition<BsonDocument> | 1,508 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 6.0.4 is out and is ready for production deployment. This release contains only fixes since 6.0.3, and is a recommended upgrade for all 6.0 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "",
"text": "does the queryable encryption stable for this version? or is it in beta or still public preview?\notherwise what ould be the anticipated date for its stability release?",
"username": "Hashmat_20626"
},
{
"code": "",
"text": "How do minor version upgrades work for Atlas? My cluster options just say 6.0. Do i have to enable auto upgrades to get minor version upgrades?",
"username": "kwM5l76i6b7pSZNg_fHM5P2x15BDHxlI1"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 6.0.4 is released | 2023-01-30T21:26:37.399Z | MongoDB 6.0.4 is released | 1,980 |
[
"golang"
] | [
{
"code": " \"analyzer\": \"lucene.english\",\n \"searchAnalyzer\": \"lucene.english\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"analyzer\": \"lucene.english\",\n \"type\": \"string\"\n }\n }\n }\n}\n{\n \"analyzer\": \"lucene.english\",\n \"searchAnalyzer\": \"lucene.english\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"label\": {\n \"analyzer\": \"lucene.simple\",\n \"tokenization\": \"nGram\",\n \"type\": \"autocomplete\"\n }\n }\n }\n}\n",
"text": "\nimage1818×1094 86.7 KB\nYour index is incompatible with the Visual Editor. Instead, cancel and use the JSON editor to view and refine your existing index.\n\nimage2498×1056 212 KB\n",
"username": "Syed_Umair"
},
{
"code": "analyzerautocompletelucene.standard",
"text": "Hi there! Thanks for sharing in the MongoDB community. You should be able to see the cause of an “Incompatible with the Visual Index Builder” error by hovering over the “Save” button. In this case, it’s because the Visual Index Builder does not support configuring the analyzer field for autocomplete field mappings. It’s built this way because using an analyzer other than the default lucene.standard for autocomplete field mappings is an advanced and complex use case that more novice users get confused by.Would you be able to share more about why you want to use the Visual Index Builder after creating the index via the Admin API?",
"username": "amyjian"
}
] | Atlas Search Admin API Creating Autocomplete index but no Field is shown on the Atlas UI and not able to access the Visual Editor | 2023-02-01T08:08:23.160Z | Atlas Search Admin API Creating Autocomplete index but no Field is shown on the Atlas UI and not able to access the Visual Editor | 959 |
|
null | [
"dot-net",
"crud",
"compass",
"mongodb-shell"
] | [
{
"code": "db.Books.updateMany({}, {$set: {'newField': true}})\ndb.createCollection('Books2')\nMongoServerError: db already exists with different case already have: [Test] trying to create [test]\n at Connection.onMessage (C:\\Users\\eisen\\AppData\\Local\\MongoDBCompass\\app-1.35.0\\resources\\app.asar.unpacked\\node_modules\\@mongosh\\node-runtime-worker-thread\\dist\\worker-runtime.js:1917:3099431)\n at MessageStream.<anonymous> (C:\\Users\\eisen\\AppData\\Local\\MongoDBCompass\\app-1.35.0\\resources\\app.asar.unpacked\\node_modules\\@mongosh\\node-runtime-worker-thread\\dist\\worker-runtime.js:1917:3096954)\n at MessageStream.emit (node:events:394:28)\n at c (C:\\Users\\eisen\\AppData\\Local\\MongoDBCompass\\app-1.35.0\\resources\\app.asar.unpacked\\node_modules\\@mongosh\\node-runtime-worker-thread\\dist\\worker-runtime.js:1917:3118818)\n at MessageStream._write (C:\\Users\\eisen\\AppData\\Local\\MongoDBCompass\\app-1.35.0\\resources\\app.asar.unpacked\\node_modules\\@mongosh\\node-runtime-worker-thread\\dist\\worker-runtime.js:1917:3117466)\n at writeOrBuffer (node:internal/streams/writable:389:12)\n at _write (node:internal/streams/writable:330:10)\n at MessageStream.Writable.write (node:internal/streams/writable:334:10)\n at Socket.ondata (node:internal/streams/readable:749:22)\n at Socket.emit (node:events:394:28)\n",
"text": "I do not understand what’s going on. I never had these issues before. I encountered this behaviour in both the latest 6.x version as well as on 4.2.X (which we use now as we want to use Azure Cosmos DB later on).I was buffled when I tried to use the following snippet to test our microframework we’re currently building (using the .NET Driver):This occurs in both MongoDB Compass as well as DataGrip, both connected to my test database “Test”. This has one collection called “Books” with 10 documents I generated using my test application.This command does not match any document (which makes no sense since I am using an empty filter), while when I execute the same thing using the .NET driver the field is inserted into all 10 documents.Now I needed to create a new collection. So I used:Which should obviously create a collection called “Books2” in my Test database. But all Mongo returns is the following exception:Why? I clearly am operating ON my databse and try to create a collection as the method name states. Why is it trying to create a new database when I run this command?Something is awfully off with mongosh. I don’t know why, but so far no command worked as expected while running these via the driver worked just fine.Can somebody explain me what’s going on? If this is it, we might have to consider seriously moving to a different database system.",
"username": "Manuel_Eisenschink"
},
{
"code": "db already exists with different case already have: [Test] trying to create [test]",
"text": "Most likely you have upper case vs lower case issue.The error strongly point to what is your error.db already exists with different case already have: [Test] trying to create [test]Your collection Books probably exists in the database Test with an the first letter being an uppercase T. You most likely run the command use test with a leading lowercase t, (or used a connection string that specify test with lowercase t. Since Books exists in the database Test, no documents exists in the database test. For some technical reasons, file names in Windows are not case sensitive, you cannot create the collection test.Books2 because the database Test already exists. Exactly as mentioned in the error message.",
"username": "steevej"
},
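A quick way to verify which database the shell is actually pointed at (a small sketch, assuming the data lives in the database named Test with an uppercase T):

```javascript
// mongosh
db.getName()                 // e.g. "test" -- the lowercase default from the URI
use Test                     // switch to the existing database, matching the exact case
db.getName()                 // now "Test"
show collections             // "Books" should appear
db.Books.countDocuments({})  // should report the expected 10 documents
```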
{
"code": "",
"text": "Thanks, that seems to be the issue. I never noticed that the shell said “test”. I just figured this should be sufficient. When explicitly switching to “Test” it works like a charm. I don’t get why Compass and DataGrip use the right database name but in wrong casing as default then…",
"username": "Manuel_Eisenschink"
},
{
"code": "",
"text": "I do not know about DataGrip but it seems Compass respect the case of the default database specified on the URI.If my URI ends with /Foobar the db variable of the shell is set to Foobar. If it ends with /foobar it db is equal to foobar.",
"username": "steevej"
},
{
"code": "",
"text": "I see no option to configure this in MongoDB Compass. Only database you can define is the authentication database and that’s something different. But at least I now know what to do…",
"username": "Manuel_Eisenschink"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongosh does not work as expected in several cases | 2023-02-02T12:39:08.973Z | Mongosh does not work as expected in several cases | 1,746 |
null | [] | [
{
"code": "",
"text": "If we change our service server date to a future date then we are not able to connect the mongo atlas db so can someone help in updating the DB date to the Future date.",
"username": "Mayank_Anand1"
},
{
"code": "",
"text": "Hello @Mayank_Anand1 ,Welcome to The MongoDB Community Forums! What is the error you are getting while connecting to your MongoDB Atlas cluster?It is important to keep in mind that time manipulation can have serious consequences for the consistency and accuracy of your data, so it’s important to be careful when making these types of changes. If you have specific requirements or questions about time-based operations in MongoDB, please share your use case and your requirements.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "man I’m getting issue , and want to update the cluster db to future date and for that need help from mongo team",
"username": "Mayank_Anand1"
},
{
"code": "",
"text": "I’m getting issueWhat is the error/issue you are facing while connecting to your Atlas cluster?\nCould you please share a screenshot/error message of the error received?",
"username": "Tarun_Gaur"
}
] | Need help in updating DB date to Future date | 2023-01-31T18:32:54.783Z | Need help in updating DB date to Future date | 467 |
null | [
"aggregation",
"queries",
"node-js"
] | [
{
"code": "[\n {\n \"_id\": “abc”,\n “key1”: “value”,\n “key”2: “some value,\n “req1”: 1,\n \"req2”: “222”\n },\n {\n \"_id\": “xyz”,\n “key1”: “value”,\n “key”2: “some value,\n “req1”: 2,\n “req2”: “333”\n },\n] \n[\n{ \"req1\": 1,\n \"req2\": 222\n},\n{ \"req1\": 2\n \"req2\": 333\n}]\n",
"text": "Hi Team,\nCould someone help to fetch particular keys instead of entire object.using await collection.find({_id: {’$in’: _ids }}).toArray(); to fetch entire object. Is there way to filter only 2 of the keys from that object.\nObject :instead of entire object, can we haveNote: used project to achieve that but not seen any performance.\ncould anyone help any other better approach?Thanks",
"username": "rajesh_kumar10"
},
{
"code": "",
"text": "used project to achieve that but not seen any performance.Projection is how you reshape the document being returned to the client - what exactly do you mean by not seen any performance? Do you mean it didn’t work correctly? It worked but was slow? Something else? Please provide the full syntax you tried, what happened and also what version this is.Asya",
"username": "Asya_Kamsky"
},
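For completeness, a sketch of a projection with the Node.js driver (the field names req1/req2 are taken from the question, with _id suppressed). Note that projection only trims what is sent back to the client; the server still reads every matched document, so toArray() time still grows with the number of matches:

```javascript
const docs = await collection
  .find({ _id: { $in: _ids } })
  .project({ _id: 0, req1: 1, req2: 1 }) // keep only the two wanted keys
  .toArray();
```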
{
"code": "",
"text": "sorry my understanding was wrong. Time taken to convert cursor.toArray() while finding multiple documents. using Nodejs mongo version: “3.5.9”",
"username": "rajesh_kumar10"
}
] | Fetch specific keys from array of objects | 2023-01-23T13:47:43.307Z | Fetch specific keys from array of objects | 1,020 |
null | [
"aggregation",
"queries",
"data-modeling"
] | [
{
"code": "{\n\t\"_id\" : ObjectId(\"...\"),\n\t\"date\" : ISODate(\"...\"),\n\t\"unique_id\" : \"field_a=value_a+field_b=value_b...+field_n=value_n\", // Unique index using this field together with date\n \"is_valid\": false,\n\t\"name\": \"My new Document\",\n\t\"extra_attributes\" : [\n {\n \"attribute\": \"\", // Used to return all set using the index - if there's a better way to do this please let me know!\n \"value\": \"\"\n },\n\t\t{\n\t\t\t\"attribute\" : \"field_a\",\n\t\t\t\"value\" : \"value_a\"\n\t\t},\n\t\t{\n\t\t\t\"attribute\" : \"field_b\",\n\t\t\t\"value\" : \"value_b\"\n\t\t},\n\t\t...\n\t],\n\t\"count_a\": NumberLong(\"0\"),\n\t\"count_b\": NumberLong(\"0\"),\n\t...\n\t\"count_n\": NumberLong(\"0\")\n}\nexecutionStatsexecutionStats{\n\t\"explainVersion\" : \"1\",\n\t\"queryPlanner\" : {\n\t\t\"namespace\" : \"my_db.my_collection\",\n\t\t\"indexFilterSet\" : false,\n\t\t\"parsedQuery\" : {\n\t\t\t\"$and\" : [\n\t\t\t\t{\n\t\t\t\t\t\"extra_attributes.attribute\" : {\n\t\t\t\t\t\t\"$eq\" : \"\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"extra_attributes.value\" : {\n\t\t\t\t\t\t\"$eq\" : \"\"\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\"$gte\" : ISODate(\"...\")\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\t\"is_valid\" : {\n\t\t\t\t\t\t\"$in\" : [\n\t\t\t\t\t\t\tfalse,\n\t\t\t\t\t\t\ttrue\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t\"optimizedPipeline\" : true,\n\t\t\"maxIndexedOrSolutionsReached\" : false,\n\t\t\"maxIndexedAndSolutionsReached\" : false,\n\t\t\"maxScansToExplodeReached\" : false,\n\t\t\"winningPlan\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"extra_attributes.attribute\" : {\n\t\t\t\t\t\"$eq\" : \"\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\"extra_attributes.attribute\" : 1,\n\t\t\t\t\t\"extra_attributes.value\" : 1,\n\t\t\t\t\t\"is_valid\" : 1,\n\t\t\t\t\t\"name\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"read_index_1\",\n\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\"extra_attributes.attribute\" : [ \"extra_attributes\" ],\n\t\t\t\t\t\"extra_attributes.value\" : [ \"extra_attributes\" ],\n\t\t\t\t\t\"is_valid\" : [ ],\n\t\t\t\t\t\"name\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\"extra_attributes.attribute\" : [ \"[\\\"\\\", \\\"\\\"]\" ],\n\t\t\t\t\t\"extra_attributes.value\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\"is_valid\" : [ \"[false, false]\", \"[true, true]\" ],\n\t\t\t\t\t\"name\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"rejectedPlans\" : [ ]\n\t},\n\t\"executionStats\" : {\n\t\t\"executionSuccess\" : true,\n\t\t\"nReturned\" : 1037256,\n\t\t\"executionTimeMillis\" : 12311,\n\t\t\"totalKeysExamined\" : 1037265,\n\t\t\"totalDocsExamined\" : 1037256,\n\t\t\"executionStages\" : {\n\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\"filter\" : {\n\t\t\t\t\"extra_attributes.attribute\" : {\n\t\t\t\t\t\"$eq\" : \"\"\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"nReturned\" : 1037256,\n\t\t\t\"executionTimeMillisEstimate\" : 7658,\n\t\t\t\"works\" : 1037266,\n\t\t\t\"advanced\" : 
1037256,\n\t\t\t\"needTime\" : 9,\n\t\t\t\"needYield\" : 0,\n\t\t\t\"saveState\" : 1220,\n\t\t\t\"restoreState\" : 1220,\n\t\t\t\"isEOF\" : 1,\n\t\t\t\"docsExamined\" : 1037256,\n\t\t\t\"alreadyHasObj\" : 0,\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\"nReturned\" : 1037256,\n\t\t\t\t\"executionTimeMillisEstimate\" : 1388,\n\t\t\t\t\"works\" : 1037266,\n\t\t\t\t\"advanced\" : 1037256,\n\t\t\t\t\"needTime\" : 9,\n\t\t\t\t\"needYield\" : 0,\n\t\t\t\t\"saveState\" : 1220,\n\t\t\t\t\"restoreState\" : 1220,\n\t\t\t\t\"isEOF\" : 1,\n\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\"date\" : 1,\n\t\t\t\t\t\"extra_attributes.attribute\" : 1,\n\t\t\t\t\t\"extra_attributes.value\" : 1,\n\t\t\t\t\t\"is_valid\" : 1,\n\t\t\t\t\t\"name\" : 1\n\t\t\t\t},\n\t\t\t\t\"indexName\" : \"read_index_1\",\n\t\t\t\t\"isMultiKey\" : true,\n\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\"date\" : [ ],\n\t\t\t\t\t\"extra_attributes.attribute\" : [ \"extra_attributes\" ],\n\t\t\t\t\t\"extra_attributes.value\" : [ \"extra_attributes\" ],\n\t\t\t\t\t\"is_valid\" : [ ],\n\t\t\t\t\t\"name\" : [ ]\n\t\t\t\t},\n\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\"extra_attributes.attribute\" : [ \"[\\\"\\\", \\\"\\\"]\" ],\n\t\t\t\t\t\"extra_attributes.value\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\"is_valid\" : [ \"[false, false]\", \"[true, true]\" ],\n\t\t\t\t\t\"name\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t},\n\t\t\t\t\"keysExamined\" : 1037265,\n\t\t\t\t\"seeks\" : 10,\n\t\t\t\t\"dupsTested\" : 1037256,\n\t\t\t\t\"dupsDropped\" : 0\n\t\t\t}\n\t\t}\n\t},\n\t\"command\" : {\n\t\t\"aggregate\" : \"my_collection\",\n\t\t\"pipeline\" : [\n\t\t\t{\n\t\t\t\t\"$match\" : {\n\t\t\t\t\t\"is_valid\" : {\n\t\t\t\t\t\t\"$in\" : [\n\t\t\t\t\t\t\ttrue,\n\t\t\t\t\t\t\tfalse\n\t\t\t\t\t\t]\n\t\t\t\t\t},\n\t\t\t\t\t\"date\" : {\n\t\t\t\t\t\t\"$gte\" : ISODate(\"...\"),\n\t\t\t\t\t\t\"$lt\" : ISODate(\"...\")\n\t\t\t\t\t},\n\t\t\t\t\t\"extra_attributes.attribute\" : \"\",\n\t\t\t\t\t\"extra_attributes.value\" : \"\"\n\t\t\t\t}\n\t\t\t}\n\t\t],\n\t\t\"cursor\" : {\n\t\t\t\n\t\t},\n\t\t\"$db\" : \"my_db\"\n\t},\n\t\"serverInfo\" : {\n\t\t\"port\" : 27017,\n\t\t\"version\" : \"5.0.6\"\n\t},\n\t\"serverParameters\" : {\n\t\t\"internalQueryFacetBufferSizeBytes\" : 104857600,\n\t\t\"internalQueryFacetMaxOutputDocSizeBytes\" : 104857600,\n\t\t\"internalLookupStageIntermediateDocumentMaxSizeBytes\" : 104857600,\n\t\t\"internalDocumentSourceGroupMaxMemoryBytes\" : 104857600,\n\t\t\"internalQueryMaxBlockingSortMemoryUsageBytes\" : 104857600,\n\t\t\"internalQueryProhibitBlockingMergeOnMongoS\" : 0,\n\t\t\"internalQueryMaxAddToSetBytes\" : 104857600,\n\t\t\"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\" : 104857600\n\t},\n\t\"ok\" : 1\n}\n",
"text": "Morning all,Been recently trying to troubleshoot a big slowdown in my aggregation pipeline stage, when FETCH is performed. A sneak peak into what documents look like before going into further details:In order to have a smaller working set I’ve locally replicated some of the data, approx 1M documents with average size per document of 1.2KB and with an overall collection size of 200GB.A couple of indexes are found in this collection:Local testing shows the following times for the executionStats only having:IXSCAN (1.4s) → FETCH (7s) → …Fetching is really bringing down my aggregation times, and the same goes when the $group is added. To return the amount of documents mentioned the times go way above 30s.Would it be just a matter of resources or am I doing something wrong at data model level?Here is the executionStats output:Thanks in advance!",
"username": "eddy_turbox"
},
{
"code": "\"indexBounds\" : {\n\t\t\t\t\t\"date\" : [ \"[new Date(...), new Date(...))\" ],\n\t\t\t\t\t\"extra_attributes.attribute\" : [ \"[\\\"\\\", \\\"\\\"]\" ],\n\t\t\t\t\t\"extra_attributes.value\" : [ \"[MinKey, MaxKey]\" ],\n\t\t\t\t\t\"is_valid\" : [ \"[false, false]\", \"[true, true]\" ],\n\t\t\t\t\t\"name\" : [ \"[MinKey, MaxKey]\" ]\n\t\t\t\t}\n",
"text": "Hi @eddy_turbox and welcome to the MongoDB community forum!!From the execution stats being shared above, it looks like the indexes are being used and seems to be optimal for the current criteria.\nHowever, for further understanding, could you share the aggregation pipeline that you are trying and the index created for the above sample documents.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "db.my_collection.aggregate([\n {\"$match\": {\"is_valid\": {\"$in\": [true, false]}, \"date\": {\"$gte\": {\"$date\": \"...\"}, \"$lt\": {\"$date\": \"...\"}}, \"extra_attributes.attribute\": \"\", \"extra_attributes.value\": \"\"}},\n {\"$group\": {\"_id\": \"$name\", \"count_a\": {\"$sum\": \"$count_a\"}, \"count_b\": {\"$sum\": \"$count_b\"}, \"others\": {\"$first\": \"$others\"}}},\n {\"$match\": {\"count_a\": {\"$gt\": 0}}},\n {\"$sort\": {\"name\": 1, \"count_a\": -1}},\n {\"$limit\": 200}\n])\n$group$sum date: 1 + name: 1 + unique_id: 1, is_valid: 1 [unique index used for inserts] | Size: 100MB\n date: 1 + extra_attributes.attribute: 1 + extra_attributes.value: 1 + is_valid: 1 + name: 1 [read index] | Size: 90MB\n",
"text": "Thanks for your reply @Aasawari!Regarding the rest of the steps in the aggregation:Regarding the $group step, there are around +40 counters in there, all with the same $sum clause.Indexes are the one specified in the OP:",
"username": "eddy_turbox"
},
{
"code": "db.collection.explain('executionStats').aggregate(...)",
"text": "Hi @eddy_turbox and apologies for the delayed response.Thank you for sharing the above aggregation pipeline. Could you also share the execution stats db.collection.explain('executionStats').aggregate(...) for the above query being used, and also the actual query?\nYou mentioned there are more than 40 counters. This will help us understand how much work the server needs to complete. The examples you provided are very useful for getting a general concept of the job, but a complete view of the whole document and aggregation query is required to analyse performance issues.Looking at the aggregation pipeline being shared, the first match stage is based on a boolean fields which would involve only two values and would eventually scan the complete collection. Is my assumption here correct?The other following stages after the second match of group and sort could also be expensive stages depending on document sizes and also depends on the hardware of the device. Regarding this, could you also share your deployment topology and hardware spec?Adding to the above, if the above aggregation query is frequently used in the application, my initial recommendation would be to use the MongoDB materialised views for better efficiency.Let us know if you have any further concerns.Best regards\nAasawari",
"username": "Aasawari"
}
] | Why is my FETCH in aggregation so slow? | 2023-01-11T15:27:08.090Z | Why is my FETCH in aggregation so slow? | 1,620 |
null | [
"aggregation"
] | [
{
"code": "organization collection:{\norgId:kjdk-khsk,\norgDetails:\"all details will go here\"\n}\noperation collection:{\noperationId:kieush-jhsih,\norgId:kjdk-khsk,\ncategoryId:111111\n}\ncategory collection:{\ncategoryId:111111,\ncategoryDetails:\"category details will go here\"\n}\n{\norgId:kjdk-khsk,\norgDetails:\"details will go here\",\noperations:[{\noperationId:jksjkskj,\ncategory:[{categoryId:\"\",categoryDetails:\"\"}]\n}]\n}\n",
"text": "I have three collection and every collection has relation between them.So when I get one collection data I expect it will give all related data.\nExample:so I want to join this three collection data like this :I want this in a single aggregation. Is this possible in MongoDB?",
"username": "Moyen_Islam"
},
{
"code": "kjdk-khskjksjkskj",
"text": "Please update your documents in such a way that we can cut-n-paste them directly into our setup. We cannot use documents with things like:kjdk-khsk\njksjkskjPlease use valid JSON values.",
"username": "steevej"
},
{
"code": "organization: {\n \"name\": \"Popular Diagnostic\",\n \"status\": \"active\",\n \"username\": \"popular\",\n \"orgId\": \"6b40d-1cfd-459f-b086\",\n \"tagline\": \"To serve is our ultimate goal\",\n \"email\": \"[email protected]\"\n }\n operation: {\n \"categoryId\": \"d1e01-8398-4b61-9363\",\n \"operationName\": \"Acute Appendicitis\\t\",\n \"price\": \"15000\",\n \"orgId\": \"6b40d-1cfd-459f-b086\",\n \"serial\": 2,\n \"uid\": \"ee223-17f3-404c-84a7\"\n }\ncategory : {\n \"orgId\": \"6b40d-1cfd-459f-b086\",\n \"categoryName\": \"Appendicectomy\",\n \"serial\": 2,\n \"uid\": \"d1e01-8398-4b61-9363\"\n }\n",
"text": "Here you go:",
"username": "Moyen_Islam"
},
{
"code": "lookup_operations = { \"$lookup\" : {\n \"from\" : \"operation\" ,\n \"localField\" : \"orgId\" ,\n \"foreignField\" : \"orgId\" ,\n \"as\" : \"operations\" ,\n \"pipeline\" : [ { \"$project\" : {\n \"categoryId\" : 1 ,\n \"operationName\" : 1 , /* In your original post you shared a redacted document\n with a field operationId but not such field exists in your\n real documents. The field operationName was the one\n that is the closest.\n */\n } } ]\n} }\n{\n \"name\": \"Popular Diagnostic\",\n \"status\": \"active\",\n \"username\": \"popular\",\n \"orgId\": \"6b40d-1cfd-459f-b086\",\n \"tagline\": \"To serve is our ultimate goal\",\n \"email\": \"[email protected]\" ,\n \"operations\" : [ \n {\n \"categoryId\": \"d1e01-8398-4b61-9363\",\n \"operationName\": \"Acute Appendicitis\\t\" \n }\n ]\n}\nlookup_categories = { \"$lookup\" : {\n \"from\" : \"category\" ,\n \"localField\" : \"operations.categoryId\" ,\n \"foreignField\" : \"uid\" ,\n \"as\" : \"categories\"\n /* You have no field named categoryDetails in your real category collection document,\n so I assume you want it all.\n */\n} }\n{\n \"name\": \"Popular Diagnostic\",\n \"status\": \"active\",\n \"username\": \"popular\",\n \"orgId\": \"6b40d-1cfd-459f-b086\",\n \"tagline\": \"To serve is our ultimate goal\",\n \"email\": \"[email protected]\" ,\n \"operations\" : [ \n {\n \"categoryId\": \"d1e01-8398-4b61-9363\",\n \"operationName\": \"Acute Appendicitis\\t\" \n }\n ] ,\n \"categories\" : [\n {\n \"orgId\": \"6b40d-1cfd-459f-b086\",\n \"categoryName\": \"Appendicectomy\",\n \"serial\": 2,\n \"uid\": \"d1e01-8398-4b61-9363\"\n }\n ]\n}\n",
"text": "The first step is to do the $lookup to get all the information.This lookup will forward the following document from your sample documents to the next stage.The next stage is a $lookup in category collection. Note that we do not need to $unwind.This 2nd lookup will produce document like:Personally, I stop here as I have all the information. I feel the application layer can deal with the cosmetic of putting the category information with the operation information.But you could do it on the aggregation pipeline with a $set stage that uses $map on operations to $mergeObjects $$this and the result of $reduce on categories to find the element with the corresponding uid.",
"username": "steevej"
},
{
"code": "{\n orgId:kjdk-khsk,\n orgDetails:\"details will go here\",\n operations:[{\n operationId:jksjkskj,\n category:[{\n categoryId:\"\",\n categoryDetails:\"\"\n }]\n }]\n}\norganization.orgIdcategory.orgIdcategory.uidoperation.categoryId// Requires official MongoShell 3.6+\ndb = db.getSiblingDB(\"mongo_forums\");\ndb.getCollection(\"organization\").aggregate([{\n \"$lookup\": { \n \"from\": \"category\",\n \"as\": \"categories\",\n \"let\": { \"req_orgId\": \"$orgId\" },\n \"pipeline\": [{\n \"$match\": { \"$expr\": { \"$eq\": [ \"$orgId\", \"$$req_orgId\" ] } }\n }, {\n \"$lookup\": {\n \"from\": \"operation\",\n \"as\": \"operations\",\n \"localField\": \"uid\",\n \"foreignField\": \"categoryId\"\n }\n }]\n }\n}],\n{\n \"allowDiskUse\": false\n});\norganizationorganization$match$lookup$lookuppipelinepipeline$matchorgIdcategoryorganization$lookupcategoryIdoperationcategorycategory{\n \"_id\" : ObjectId(\"63c7eb12325f5cc4223e812e\"),\n \"name\" : \"Popular Diagnostic\",\n \"status\" : \"active\",\n \"username\" : \"popular\",\n \"orgId\" : \"6b40d-1cfd-459f-b086\",\n \"tagline\" : \"To serve is our ultimate goal\",\n \"email\" : \"[email protected]\",\n \"categories\" : [\n {\n \"_id\" : ObjectId(\"63c7eb12325f5cc4223e8130\"),\n \"orgId\" : \"6b40d-1cfd-459f-b086\",\n \"categoryName\" : \"Appendicectomy\",\n \"serial\" : NumberInt(2),\n \"uid\" : \"d1e01-8398-4b61-9363\",\n \"operations\" : [\n {\n \"_id\" : ObjectId(\"63c7eb12325f5cc4223e812f\"),\n \"categoryId\" : \"d1e01-8398-4b61-9363\",\n \"operationName\" : \"Acute Appendicitis\\t\",\n \"price\" : \"15000\",\n \"orgId\" : \"6b40d-1cfd-459f-b086\",\n \"serial\" : NumberInt(2),\n \"uid\" : \"ee223-17f3-404c-84a7\"\n }\n ]\n }\n ]\n}\n",
"text": "@Moyen_Islam,In your original post, the example result you provided looked like this……implying that the relationships are Organization → Operation → Category.However, the example data you provided at @steevej’s request makes it appear that the relationships are actually Organization → Category → Operation. This is based on the following analysis.The solution offered below structures the documents in the result in this way in order to limit the complexity of the pipeline.",
"username": "Rick_Culpepper"
},
{
"code": "",
"text": "Nice and clean.This was your first post. I hope it will not be the last.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks, @steevej !I’m not one to post often – I’ll never be a top contributor by volume – but sometimes a question motivates me to answer based on my own experience. I’ve been working a lot with aggregation queries over the last few months and this question hit the sweet-spot of my recent experience.",
"username": "Rick_Culpepper"
},
{
"code": "",
"text": "Thanks a lot my dear @Rick_Culpepper\nI got this and it’s working for me as I wanted.I realy appreciate you.\nGod bless you.",
"username": "Moyen_Islam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $lookup aggregation | 2023-01-10T09:00:55.063Z | $lookup aggregation | 927 |
[
"aggregation"
] | [
{
"code": "",
"text": "$nin is not working in aggregation for _id where I used aggregation for latitute and longitude wise search\n\nWhatsApp Image 2023-01-29 at 01.38.281280×1147 165 KB\n",
"username": "Prakash_Jayaswal"
},
{
"code": "",
"text": "Hi @Prakash_Jayaswal,Welcome to the MongoDB Community forums The query you posted doesn’t look like a valid MongoDB aggregation query. Are you using a specific driver or product?In order to better understand your question, it would be helpful if you could provide the following information:Best,\nKushagra",
"username": "Kushagra_Kesav"
}
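One common cause, offered here only as an assumption since the query itself is not visible: _id values stored as ObjectId will never match plain strings, so the ids passed to $nin must be wrapped in ObjectId. A sketch with placeholder ids:

```javascript
const { ObjectId } = require("mongodb");

// Hypothetical ids to exclude -- they must be ObjectId instances, not strings.
const excludedIds = ["63da39ed6cd602b05233cc45", "63da39ed6cd602b05233cc4d"]
  .map((id) => new ObjectId(id));

const results = await collection.aggregate([
  // If $geoNear is used for the latitude/longitude search it must be the first
  // stage, and the same filter can be supplied through its `query` option.
  { $match: { _id: { $nin: excludedIds } } },
]).toArray();
```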
] | $nin is not working in aggregation | 2023-01-28T21:10:46.045Z | $nin is not working in aggregation | 727 |
|
null | [
"replication",
"sharding"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-01-30T12:47:32.140-05:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"initandlisten\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Invalid access at address: 0x9ae8c\\n\"}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.140-05:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"initandlisten\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 11 (Segmentation fault).\\n\"}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"initandlisten\",\"msg\":\"BACKTRACE\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"55D2950F40A5\",\"b\":\"55D2911E4000\",\"o\":\"3F100A5\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.361\",\"s+\":\"215\"},{\"a\":\"55D2950F6B29\",\"b\":\"55D2911E4000\",\"o\":\"3F12B29\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"},{\"a\":\"55D2950EF09C\",\"b\":\"55D2911E4000\",\"o\":\"3F0B09C\",\"s\":\"abruptQuitWithAddrSignal\",\"s+\":\"EC\"},{\"a\":\"7F8A6A9F8420\",\"b\":\"7F8A6A9E4000\",\"o\":\"14420\",\"s\":\"funlockfile\",\"s+\":\"60\"},{\"a\":\"7F8A6A9F3376\",\"b\":\"7F8A6A9E4000\",\"o\":\"F376\",\"s\":\"pthread_cond_wait\",\"s+\":\"216\"},{\"a\":\"55D29529B76C\",\"b\":\"55D2911E4000\",\"o\":\"40B776C\",\"s\":\"_ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE\",\"s+\":\"C\"},{\"a\":\"55D2950EA987\",\"b\":\"55D2911E4000\",\"o\":\"3F06987\",\"s\":\"_ZN5mongo15waitForShutdownEv\",\"s+\":\"107\"},{\"a\":\"55D29274CB91\",\"b\":\"55D2911E4000\",\"o\":\"1568B91\",\"s\":\"_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi.isra.1929\",\"s+\":\"13E1\"},{\"a\":\"55D29274E5AF\",\"b\":\"55D2911E4000\",\"o\":\"156A5AF\",\"s\":\"_ZN5mongo11mongod_mainEiPPc\",\"s+\":\"CDF\"},{\"a\":\"55D2925E2F2E\",\"b\":\"55D2911E4000\",\"o\":\"13FEF2E\",\"s\":\"main\",\"s+\":\"E\"},{\"a\":\"7F8A6A816083\",\"b\":\"7F8A6A7F2000\",\"o\":\"24083\",\"s\":\"__libc_start_main\",\"s+\":\"F3\"},{\"a\":\"55D2927489DE\",\"b\":\"55D2911E4000\",\"o\":\"15649DE\",\"s\":\"_start\",\"s+\":\"2E\"}],\"processInfo\":{\"mongodbVersion\":\"5.0.14\",\"gitVersion\":\"1b3b0073a0b436a8a502b612f24fb2bd572772e5\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Linux\",\"release\":\"5.4.0-137-generic\",\"version\":\"#154-Ubuntu SMP Thu Jan 5 17:03:22 UTC 2023\",\"machine\":\"x86_64\"},\"somap\":[{\"b\":\"55D2911E4000\",\"elfType\":3,\"buildId\":\"44AD2830EB7E90ABFF5F592CAAA6392F81AEC690\"},{\"b\":\"7F8A6A9E4000\",\"path\":\"/lib/x86_64-linux-gnu/libpthread.so.0\",\"elfType\":3,\"buildId\":\"7B4536F41CDAA5888408E82D0836E33DCF436466\"},{\"b\":\"7F8A6A7F2000\",\"path\":\"/lib/x86_64-linux-gnu/libc.so.6\",\"elfType\":3,\"buildId\":\"1878E6B475720C7C51969E69AB2D276FAE6D1DEE\"}]}}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"55D2950F40A5\",\"b\":\"55D2911E4000\",\"o\":\"3F100A5\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.361\",\"s+\":\"215\"}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"55D2950F6B29\",\"b\":\"55D2911E4000\",\"o\":\"3F12B29\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", 
\"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"55D2950EF09C\",\"b\":\"55D2911E4000\",\"o\":\"3F0B09C\",\"s\":\"abruptQuitWithAddrSignal\",\"s+\":\"EC\"}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F8A6A9F8420\",\"b\":\"7F8A6A9E4000\",\"o\":\"14420\",\"s\":\"funlockfile\",\"s+\":\"60\"}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F8A6A9F3376\",\"b\":\"7F8A6A9E4000\",\"o\":\"F376\",\"s\":\"pthread_cond_wait\",\"s+\":\"216\"}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"55D29529B76C\",\"b\":\"55D2911E4000\",\"o\":\"40B776C\",\"s\":\"_ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE\",\"s+\":\"C\"}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"55D2950EA987\",\"b\":\"55D2911E4000\",\"o\":\"3F06987\",\"s\":\"_ZN5mongo15waitForShutdownEv\",\"s+\":\"107\"}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"55D29274CB91\",\"b\":\"55D2911E4000\",\"o\":\"1568B91\",\"s\":\"_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi.isra.1929\",\"s+\":\"13E1\"}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"55D29274E5AF\",\"b\":\"55D2911E4000\",\"o\":\"156A5AF\",\"s\":\"_ZN5mongo11mongod_mainEiPPc\",\"s+\":\"CDF\"}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"55D2925E2F2E\",\"b\":\"55D2911E4000\",\"o\":\"13FEF2E\",\"s\":\"main\",\"s+\":\"E\"}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7F8A6A816083\",\"b\":\"7F8A6A7F2000\",\"o\":\"24083\",\"s\":\"__libc_start_main\",\"s+\":\"F3\"}}}\n{\"t\":{\"$date\":\"2023-01-30T12:47:32.226-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"55D2927489DE\",\"b\":\"55D2911E4000\",\"o\":\"15649DE\",\"s\":\"_start\",\"s+\":\"2E\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.317-05:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"initandlisten\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Invalid access at address: 0x3dfff\\n\"}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.317-05:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":6384300, \"ctx\":\"initandlisten\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 11 (Segmentation fault).\\n\"}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, 
\"ctx\":\"initandlisten\",\"msg\":\"BACKTRACE\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"56098E17A0A5\",\"b\":\"56098A26A000\",\"o\":\"3F100A5\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.361\",\"s+\":\"215\"},{\"a\":\"56098E17CB29\",\"b\":\"56098A26A000\",\"o\":\"3F12B29\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"},{\"a\":\"56098E17509C\",\"b\":\"56098A26A000\",\"o\":\"3F0B09C\",\"s\":\"abruptQuitWithAddrSignal\",\"s+\":\"EC\"},{\"a\":\"7FD7CF902420\",\"b\":\"7FD7CF8EE000\",\"o\":\"14420\",\"s\":\"funlockfile\",\"s+\":\"60\"},{\"a\":\"7FD7CF8FD376\",\"b\":\"7FD7CF8EE000\",\"o\":\"F376\",\"s\":\"pthread_cond_wait\",\"s+\":\"216\"},{\"a\":\"56098E32176C\",\"b\":\"56098A26A000\",\"o\":\"40B776C\",\"s\":\"_ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE\",\"s+\":\"C\"},{\"a\":\"56098E170987\",\"b\":\"56098A26A000\",\"o\":\"3F06987\",\"s\":\"_ZN5mongo15waitForShutdownEv\",\"s+\":\"107\"},{\"a\":\"56098B7D2B91\",\"b\":\"56098A26A000\",\"o\":\"1568B91\",\"s\":\"_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi.isra.1929\",\"s+\":\"13E1\"},{\"a\":\"56098B7D45AF\",\"b\":\"56098A26A000\",\"o\":\"156A5AF\",\"s\":\"_ZN5mongo11mongod_mainEiPPc\",\"s+\":\"CDF\"},{\"a\":\"56098B668F2E\",\"b\":\"56098A26A000\",\"o\":\"13FEF2E\",\"s\":\"main\",\"s+\":\"E\"},{\"a\":\"7FD7CF720083\",\"b\":\"7FD7CF6FC000\",\"o\":\"24083\",\"s\":\"__libc_start_main\",\"s+\":\"F3\"},{\"a\":\"56098B7CE9DE\",\"b\":\"56098A26A000\",\"o\":\"15649DE\",\"s\":\"_start\",\"s+\":\"2E\"}],\"processInfo\":{\"mongodbVersion\":\"5.0.14\",\"gitVersion\":\"1b3b0073a0b436a8a502b612f24fb2bd572772e5\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Linux\",\"release\":\"5.4.0-137-generic\",\"version\":\"#154-Ubuntu SMP Thu Jan 5 17:03:22 UTC 2023\",\"machine\":\"x86_64\"},\"somap\":[{\"b\":\"56098A26A000\",\"elfType\":3,\"buildId\":\"44AD2830EB7E90ABFF5F592CAAA6392F81AEC690\"},{\"b\":\"7FD7CF8EE000\",\"path\":\"/lib/x86_64-linux-gnu/libpthread.so.0\",\"elfType\":3,\"buildId\":\"7B4536F41CDAA5888408E82D0836E33DCF436466\"},{\"b\":\"7FD7CF6FC000\",\"path\":\"/lib/x86_64-linux-gnu/libc.so.6\",\"elfType\":3,\"buildId\":\"1878E6B475720C7C51969E69AB2D276FAE6D1DEE\"}]}}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56098E17A0A5\",\"b\":\"56098A26A000\",\"o\":\"3F100A5\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.361\",\"s+\":\"215\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56098E17CB29\",\"b\":\"56098A26A000\",\"o\":\"3F12B29\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56098E17509C\",\"b\":\"56098A26A000\",\"o\":\"3F0B09C\",\"s\":\"abruptQuitWithAddrSignal\",\"s+\":\"EC\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FD7CF902420\",\"b\":\"7FD7CF8EE000\",\"o\":\"14420\",\"s\":\"funlockfile\",\"s+\":\"60\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", 
\"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FD7CF8FD376\",\"b\":\"7FD7CF8EE000\",\"o\":\"F376\",\"s\":\"pthread_cond_wait\",\"s+\":\"216\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56098E32176C\",\"b\":\"56098A26A000\",\"o\":\"40B776C\",\"s\":\"_ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE\",\"s+\":\"C\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56098E170987\",\"b\":\"56098A26A000\",\"o\":\"3F06987\",\"s\":\"_ZN5mongo15waitForShutdownEv\",\"s+\":\"107\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56098B7D2B91\",\"b\":\"56098A26A000\",\"o\":\"1568B91\",\"s\":\"_ZN5mongo12_GLOBAL__N_114_initAndListenEPNS_14ServiceContextEi.isra.1929\",\"s+\":\"13E1\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56098B7D45AF\",\"b\":\"56098A26A000\",\"o\":\"156A5AF\",\"s\":\"_ZN5mongo11mongod_mainEiPPc\",\"s+\":\"CDF\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56098B668F2E\",\"b\":\"56098A26A000\",\"o\":\"13FEF2E\",\"s\":\"main\",\"s+\":\"E\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FD7CF720083\",\"b\":\"7FD7CF6FC000\",\"o\":\"24083\",\"s\":\"__libc_start_main\",\"s+\":\"F3\"}}}\n{\"t\":{\"$date\":\"2023-02-01T11:55:24.398-05:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"initandlisten\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"56098B7CE9DE\",\"b\":\"56098A26A000\",\"o\":\"15649DE\",\"s\":\"_start\",\"s+\":\"2E\"}}}\n",
"text": "Hi, we have experienced random crashes on 2 different servers. Below is the stack trace from mongod.log. Each server is a member of a 3-node replica set (and part of a sharded cluster). I was wondering if anyone has any clues on what is causing the seg fault. Each server has 128GB RAM with 16 threads (Intel Xeon E2288G CPU @ 3.70Ghz).At the time of crash, they seem to be pretty busy.",
"username": "AmitG"
},
{
"code": "",
"text": "Hi @AmitGSorry you’re experiencing this issue. I think this needs a more in-depth analysis. Do you mind opening a ticket in the SERVER project detailing the issue? Please post all relevant informations (logs, stacktraces, core dumps, etc.) that may help determining the root cause.Best regards\nKevin",
"username": "kevinadi"
}
] | MongoDB 5.0.14 segfault on Ubuntu 20.04 | 2023-02-01T19:12:19.043Z | MongoDB 5.0.14 segfault on Ubuntu 20.04 | 962 |
null | [
"python",
"transactions"
] | [
{
"code": "",
"text": "Hello Team,\nIn order to prepare for the MongoDB Dev Exam, I see the transactions in the materials and video, however I do not see them in the Exam Guide, please can you verify and confirm whether ‘transactions’ are required within Dev Exam?",
"username": "Ola_Zieminska1"
},
{
"code": "",
"text": "Hey @Ola_Zieminska1,Welcome to the MongoDB Community Forums! Yes, only the topics listed in the Study Guide have to be followed in order to prepare for the Certification Exam.Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MonogoDB Dev Exam - transactions | 2023-02-01T09:40:31.580Z | MonogoDB Dev Exam - transactions | 1,290 |
null | [
"aggregation",
"performance"
] | [
{
"code": "[\n {groupId: \"ABC\", riskModelComplianceType: \"1\", costEstimate: 200},\n {groupId: \"ABC\", riskModelComplianceType: \"1\", costEstimate: 200},\n {groupId: \"ABC\", riskModelComplianceType: \"2\", costEstimate: 100},\n {groupId: \"ABC\", riskModelComplianceType: \"3\": costEstimate: 400},\n]\n[\n {\n $match: {\n groupId: \"5cb5cba6d5815e780d12c13f\",\n },\n },\n {\n $group: {\n _id: \"$riskModelComplianceType\",\n costEstimate: {\n $sum: \"$costEstimate\",\n },\n },\n },\n]\n",
"text": "Our team is struggling to understand this very slow query, please help.The data looks like this:Our aggregation looks like this:The pipeline “works” but the performance is terrible. We only have 17K docs, but the above query takes around 4 seconds What is going on here? Do we need a certain index or why is it so slow?I have attached the explain output from Mongodb.\nexplain-from-mongo.json (58.3 KB)",
"username": "Alex_Bjorlig"
},
{
"code": "{\n \"groupId\": -1,\n \"factoryId\": -1,\n \"riskModelCategoryLevel\": -1,\n \"eddyLabelIds\": -1,\n \"status\": -1,\n \"responsible\": -1\n }\nFETCHIXSCAN$sort$group$group",
"text": "Hey @Alex_Bjorlig,Looking at your explain output, the ordering of the index being used by the query planner is:which indicates that you have more fields in your document than what the sample document you provided is showing. Would you be able to provide the full document structure along with any other information so that we can reproduce it better on our end as I have attempted to use 17K sample documents & the same pipeline mentioned in this post which resulted in a 8ms execution. It would also be great if you can share your db.collection.stats() output along with details of your hardware: RAM, CPU, Disk specs, etc for us to be better able to understand and help you.Also, from reading your explain output, there is a FETCH stage too in the output. It is usually recommended to avoid this as much as possible and filter as much as possible using IXSCAN. You can try adding a $sort stage before the $group so the documents in the input of the $group state are already sorted which could help improve the performance. Please see: $group optimization. Also see covered queryRegards,\nSatyam",
"username": "Satyam"
},
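As an illustration of the covered-query suggestion above, a sketch (index fields are taken from the pipeline in the question; whether the FETCH stage actually disappears should be confirmed with explain()):

```javascript
// Hypothetical compound index so the $match equality field and the grouped /
// summed fields can all be read from the index.
db.collection.createIndex({ groupId: 1, riskModelComplianceType: 1, costEstimate: 1 })

db.collection.explain("executionStats").aggregate([
  { $match: { groupId: "5cb5cba6d5815e780d12c13f" } },
  { $group: { _id: "$riskModelComplianceType", costEstimate: { $sum: "$costEstimate" } } }
])
```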
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help with very slow $group aggregation, on very small dataset | 2023-01-28T13:48:24.894Z | Help with very slow $group aggregation, on very small dataset | 1,309 |
[
"node-js",
"mongoose-odm"
] | [
{
"code": "const mongoose = require(\"mongoose\");\n\nconst options = {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n serverSelectionTimeoutMS: 5000,\n};\nconst uri =\n process.env.NODE_ENV === \"development\"\n ? \"mongodb://localhost:27017\"\n : process.env.MONGODB_URI;\n\nconst connectDB = async () => {\n try {\n const conn = await mongoose.connect(uri, options);\n console.log(`MongoDB Connected: ${conn.connection.host}`);\n } catch (error) {\n console.log(error);\n process.exit(1);\n }\n};\n\nmodule.exports = connectDB;\n",
"text": "I’m using Mongoose to attempt to connect to my mongodb cluster and I keep getting this error:image1298×528 21.7 KBI have tried searching online and most of the solutions point to the Network Access section of Atlas.\nI have allowed access to all ips, but i still get the same error.I’m using Node.js.\nHere is the code i’m using to connect if it’s any help:Node version: v16.16.0\nNodemon: ^2.0.20\nMongoDB: 6.0\nMongoose: ^6.0.12",
"username": "John_Fiewor"
},
{
"code": "2701727017",
"text": "Hello @John_Fiewor ,Welcome to The MongoDB Community Forums! Could you please try connecting to your Atlas cluster via Mongo Shell or Compass?If your URI is valid and you are able to establish a connection then, the next most common reason for such connection issues is either your IP address is not whitelisted (you have whitelisted your IP/ Allow access to all IPs) or port 27017 is blocked by either your IT department or your Internet Service Provider.Can you try below command in your terminal to make sure port 27017 is open for you?curl http://portquiz.net:27017Let me know if this works.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi @Tarun_Gaur ,Thank you for the warm welcome and for your answer.I was able to connect to my cluster via Compass.I also tried the command you suggested and it worked which should mean port 27017 is available.",
"username": "John_Fiewor"
},
{
"code": "",
"text": "As you are able to connect to your Atlas Cluster using Compass means your Atlas cluster is working as expected and there are no connection issues between your machine and Atlas.Can you reconfirm if your Connection string is correct and the user your are trying to connect with is having all the required access?I have allowed access to all ipsHave you added 0.0.0.0/0 in the IP access list of your Atlas cluster?\nIf not, please double check the whitelist includes the host IP your app is running on.There could be many other other cases such asAbove are some of the most common reasons for connection issues.Tarun",
"username": "Tarun_Gaur"
},
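For the connection-string checks above, a sketch with placeholder credentials and cluster host that URL-encodes the password (a frequent cause of failures when it contains special characters) and names the target database in the URI:

```javascript
const mongoose = require("mongoose");

// Placeholder values -- replace with your own user, password and cluster host.
const user = "myUser";
const password = encodeURIComponent("p@ssw0rd#with:special/chars");
const uri = `mongodb+srv://${user}:${password}@cluster0.example.mongodb.net/myDatabase?retryWrites=true&w=majority`;

mongoose
  .connect(uri, { serverSelectionTimeoutMS: 5000 })
  .then((m) => console.log(`MongoDB Connected: ${m.connection.host}`))
  .catch((err) => console.error(err));
```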
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Connection error while trying to connect to MongoDB Atlas | 2023-01-11T19:27:22.331Z | Connection error while trying to connect to MongoDB Atlas | 2,749 |
|
null | [
"replication",
"backup",
"upgrading"
] | [
{
"code": "",
"text": "Hi Team,Currently we have mongodb 3.6 with 3 servers(1 primary and 2 replicas), we would like to upgrade 3.6 to 4.4, Can we directly upgrade to 4.4 from 3.6 or we have to upgrade first to 4.0 → 4.2 → 4.4 ?Could you please guide on this?",
"username": "Srinivasa_Reddy1"
},
{
"code": "",
"text": "Hello @Srinivasa_Reddy1 ,Welcome to The MongoDB Community Forums! The recommended upgrade path is to do in-place upgrades through successive major releases of MongoDB.Here is the recommended upgrade path:\n3.6 → 4.0 → 4.2 → 4.4This ensures that any potential compatibility issues with the intermediate versions are resolved before moving to the latest version. It is also advisable to thoroughly test the upgrade process in a test environment before attempting it in a production environment.Please go through this thread which includes alternative approaches such as automation: Replace mongodb binaries all at once? - #3 by Stennie .You can check release notes to make sure you are following the correct procedure and avoid any unintentional mishaps.Regards,\nTarun",
"username": "Tarun_Gaur"
},
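At each hop the featureCompatibilityVersion also has to be raised before moving on to the next release; a sketch of the shell commands run against the primary:

```javascript
// Check the current feature compatibility version
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// Once all members run 4.0 binaries and the replica set is healthy:
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })

// Repeat with "4.2" after the 4.2 binaries, then "4.4" after the 4.4 binaries.
```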
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to upgrade Mongodb replica set 3.6 to 4.4 version | 2023-01-25T04:28:01.879Z | How to upgrade Mongodb replica set 3.6 to 4.4 version | 1,510 |
null | [
"queries",
"python"
] | [
{
"code": "\"createdTime\": ISODate(\"2023-01-20T11:20:50.268Z\")\n \"createdTime\" : { '$date': '2023-01-20T11:20:50.268Z' }\ndb.MyCollection.find().sort({createdTime:-1}).map(x => x.createdTime)\n[\n { '$date': '2023-01-20T11:20:50.268Z' },\n { '$date': '2023-01-20T11:20:26.587Z' },\n { '$date': '2023-01-20T11:20:04.108Z' },\n]\ndb.MyCollection.countDocuments({ createdTime: { $lt: ISODate(\"2023-02-01T00:00:00.000Z\")} })\n0\n// even with \"$date\" as a key\ndb.MyCollection.countDocuments({ \"createdTime.$date\": { $lt: ISODate(\"2023-02-01T00:00:00.000Z\")}})\n0\n",
"text": "Hi,I am having this especially weird issue, that is turning me crazy ! I am centralising data on my Atlas Cluster from bare-metal servers running community MongoDB databases locally. However I have an issue with my ISODates() attributes that are converted to “$date” dictionnaries, after which the sort() query still works but the “$lt” and “$gt” queries are failing.Attribute in local dbAttribute in the Atlas Cluster’db :In my Atlas Cluster’s collection I am still able to sort on the time attribute :But when I try to use the operators “$gt” and “$lt” then mongo doesn’t know what to do :Eventually I figured I could make it work with a Javascript function to convert the “$date” objects back to ISODate, but I want to be able to use the “$lt” and “$gt” in my python API, relying on pymongo.Any idea how I could solve what I assume is a date formatting issue ?Thanks a lot for your help, apologies if this is the wrong place to ask.",
"username": "Barthelemy_Leveque"
},
{
"code": "",
"text": "I am not fluent in python, but most of the python date queries I saw are using datetime.datetime rather than ISODate.",
"username": "steevej"
},
{
"code": "ISODate()datetime()from pymongo import MongoClient\nimport pprint\n\nclient = MongoClient(\"mongodb://localhost:27017/\")\ndb = client[\"test\"]\ncollection = db[\"time\"]\n\ndoc = collection.find_one()\n\npprint.pprint(doc)\n{'_id': ObjectId('63da39ed6cd602b05233cc45'),\n 'time': datetime.datetime(2023, 1, 20, 11, 20, 50, 268000)}\n$dateISODateif \"time\" in doc:\n created_time = doc[\"time\"]\n iso_date = created_time.isoformat()\n print(\"Time as ISO date:\", iso_date)\nTime as ISO date: 2023-01-20T11:20:50.268000\ndb.MyCollection.countDocuments({ createdTime: { $lt: ISODate(\"2023-02-01T00:00:00.000Z\")} })\nmongoshadmin 0.000GB\nconfig 0.000GB\nlocal 0.000GB\nmy_database 0.000GB\ntest 0.000GB\n> use test\nswitched to db test\n> db.time.countDocuments({ time: { $lt: ISODate(\"2023-02-01T00:00:00.000Z\")} })\n3\n> db.time.find()\n{ \"_id\" : ObjectId(\"63da39ed6cd602b05233cc45\"), \"time\" : ISODate(\"2023-01-20T11:20:50.268Z\") }\n{ \"_id\" : ObjectId(\"63da39ed6cd602b05233cc4d\"), \"time\" : ISODate(\"2023-01-20T11:20:50.268Z\") }\n{ \"_id\" : ObjectId(\"63da39ed6cd602b05233cc35\"), \"time\" : ISODate(\"2023-01-21T11:20:50.268Z\") }\n> db.time.countDocuments({ time: { $lt: ISODate(\"2023-02-01T00:00:00.000Z\")} })\n3\n> db.time.countDocuments({ time: { $gt: ISODate(\"2023-01-01T00:00:00.000Z\")} })\n3\n> db.time.countDocuments({ time: { $lt: ISODate(\"2023-01-01T00:00:00.000Z\")} })\n0\n$lt$gt$gtefrom pymongo import MongoClient\nfrom datetime import datetime\n\nclient = MongoClient(\"mongodb://localhost:27017/\")\ndb = client[\"test\"]\ncollection = db[\"time\"]\n\nnew_time = datetime(2023, 2, 1)\nprint(new_time)\n\nquery = {\"time\": {\"$lt\": new_time}}\nresult = collection.count_documents(query)\n\nprint(\"Number of documents matching:\", result)\n2023-02-01 00:00:00\nNumber of documents matching: 3\n",
"text": "Hi @Barthelemy_Leveque,Welcome to the MongoDB Community forums As @steevej mentioned Python does not recognize ISODate() which is a Javascript function. It uses datetime() instead. Please see the following example for more details:It will return you the output as follows:convert the “$date” objects back to ISODateAnd you can further add a code snippet to the above code to convert your $date format to ISODate format:which will return the following:Furthermore, I tried the following queryon my local MongoDB server version: 6.0.1 using mongoshAnd for me $lt, $gt, and $gte all worked.After that I tried the same query using pymongo:And it also returned an output very similar to the above:I hope it helps, and if not please share the version of the MongoDB server installed on your local machine and the code snippet you have written so that we can better understand the issue.Best,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Sorting on ISODate attributes works, but the "$lt" and "$gt" queries are failing following a transfer to Atlas | 2023-02-01T09:53:57.015Z | Sorting on ISODate attributes works, but the “$lt” and “$gt” queries are failing following a transfer to Atlas | 1,324 |
null | [
"aggregation",
"queries"
] | [
{
"code": "schema = {\n _id: ObjectId('some ID'),\n created: new Date('some date'),\n ...\n}\n",
"text": "Hello,I need a little help with the aggregation framework. I want to generate a report where I am meant to get the average day over a set of matched documents in the aggregation. I have just one date field in the record to compare across all checked documents. Below is a sample document",
"username": "Bolatan_Ibrahim"
},
{
"code": "",
"text": "Hi @Bolatan_Ibrahim,Welcome to the MongoDB Community forums I want to generate a report where I am meant to get the average day over a set of matched documents in the aggregation. I have just one date field in the record to compare across all checked documents.To clarify, you want to generate a report where you compare each document in the collection to the average date of all the other documents in the same collection. Is that correct?Can you please provide more information about what you are trying to achieve in your report, such as what specific date field you want to compare or what kind of result you are expecting? This will help us better understand your requirements and provide more accurate assistance.Best,\nKushagra",
"username": "Kushagra_Kesav"
}
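If the goal turns out to be the average of the created field itself, one possible sketch (the collection name and match criteria are placeholders) converts the dates to milliseconds, averages them, and converts the result back to a date:

```javascript
db.myCollection.aggregate([
  { $match: {} }, // placeholder criteria for the "matched documents"
  {
    $group: {
      _id: null,
      avgCreatedMillis: { $avg: { $toLong: "$created" } }
    }
  },
  { $project: { _id: 0, avgCreated: { $toDate: "$avgCreatedMillis" } } }
])
```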
] | Average date field in an aggregation | 2023-02-01T09:19:37.009Z | Average date field in an aggregation | 554 |
null | [
"replication",
"mongodb-shell"
] | [
{
"code": "",
"text": "Our current P-S-S MongoDB replica set running on Linux. Replication has stopped working among the 3 servers and we are now seeing an issue with port binding on start-up with the mongod service.\nMongo is running, on netstat i’m not seeing binding 27017 port listening, causing replica unable to find nodes, Tried to run with mongo with config location, restart, renaming lock and starting service. Nothing helped.We guessing it might be doing prep synchronize before joining to network not sure, this happened sudden over weekend and 2 nodes were unable to join replica because of that.Any suggestions, thoughts? would be much appreciated.",
"username": "pruthvi_reddy"
},
{
"code": "",
"text": "Hi pruthvi_reddy,It would be helpful if you could show any error messages in the logs or when starting up. Without any error messages or anything to go off of we can’t provide any substantial help.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Hi Sure:Here is various errors/times we received, up-on various trouble shooting what we observed is 27017 socket is not open and thats where we stuck.Error: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused.mongod.service: Failed with result ‘exit-code’Error: couldn’t connect to server 127.0.0.1:27017 src/mongo/shell/mongo.js:91\nexception: connect failed",
"username": "pruthvi_reddy"
},
{
"code": "ps -ax | grep mongo\n",
"text": "Thank you, can you also post your config file of the mongod you are trying to start.also if you do the following unix command does it show any mongod running?",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "HI Sure, here is the results for the server mongo:",
"username": "pruthvi_reddy"
},
{
"code": "",
"text": "Here is mongo config:\n\nimage910×671 14 KB\n",
"username": "pruthvi_reddy"
},
{
"code": "",
"text": "Replication has stopped workingwhat do you mean by “stopped”? did you detach members from the replica set? else how many of members have this problem?also, where are these members hosted? virtual machines on single host pc? or all on their designated bare-metal hosts?most importantly, when have this started? was it from the beginning as you are still configuration/developing phases, or it was working fine and started recently?Connection refusedthis error is due to either incorrect firewall settings, or simply because there is no server listening thus OS refuses the request.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "We didn’t detach server from replica set, looks like we had a reboot happen on the 2 servers (P and S).After the servers cam online 2 of them were sitting in refused state where on startup we have mongo.service that starts the mongo with config we have. Some reasons on network we cant see 27017 port is not showing up. so that caused replica to sync.Servers were hosted on AWS Linux.This was on a running server, these were running 2.5 years minimum and no issues, this is sudden from Jan 28th.Connection refused: This is because mongo service is running but replica/mongo can’t able to reach the network because on start-up 27017 port is not opened by OS. We tried to reboot and did so much for some reasons OS is not starting / opening the port.",
"username": "pruthvi_reddy"
},
{
"code": "",
"text": "can you try to check your AWS settings (and logs) of hosts for those members? maybe there was a system update (hence a reboot) that broke some settings. or maybe someone tried to update mongodb but did not follow clean steps.the “log file” set in the config should be showing what errors are encountered when mongod tries to start. if it gets an error, it will just exit to keep data safe meaning there won’t be a service listening on port 27017. this also makes it easy to search the log file because the error won’t be far away from the last line logged.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thank you and That make sense, but i keep seeing oplog command over and over. current log file size is actually 3GB due to that over and over logging {verbose:0}.Seems like 2 servers were out from saturday, from what we seeing on logs, our servers had an outage and we didn’t had monitor for mongo replica since other members can handle the downtime but in this case more than one was down and caused replica to cant caughtup and in a state where exceeded the log timelimit, so that must be why those servers were not able to open port because of trying to recover.Can we delete the date from one of the servers and add them into replica? does replica can handle sync server? we almost 1M objects.",
"username": "pruthvi_reddy"
},
{
"code": "\"logRotate: rename\"",
"text": "1M does not seem big but before tinkering with the data, let’s try cleaning log files first to check if it heals.if you think the current log file might be needed for older events, stop the server first, move it to a safe location, then restart the server. if not, just edit the config file so it reads \"logRotate: rename\" and try restarting the server.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Got you did same and still seeing same issue, service is started but mongo shutdown itself.",
"username": "pruthvi_reddy"
},
{
"code": "rs.status()rs.conf()",
"text": "Hi @pruthvi_reddySorry you’re facing this issue. In many cases, a 3-node replica set would be able to tolerate 1 node down, but having 2 nodes down would put the remaining node in a read-only node, so at least you know that your data is accessible. You just can’t add new data.Having said that, we’ve been getting the information in piecemeal fashion so far. Could you post the relevant logs from all 3 nodes? When a node shuts down, we need to see what’s been written in the log. Please provide all the information that you think can help.The output of rs.status() and rs.conf() from the remaining node would be helpful to the picture. Also please post your MongoDB version and your OS version.Best regards\nKevin",
"username": "kevinadi"
}
] | Mongo On-Prem Replica set - Binding issue | 2023-02-01T13:40:32.587Z | Mongo On-Prem Replica set - Binding issue | 1,123 |
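A small mongosh sketch for pulling together the requested diagnostics from the surviving member — the fields printed are the ones reported by recent server versions, so some may be absent on older releases:

```javascript
// Summarise member state as seen by the node you are connected to
rs.status().members.forEach(m =>
  print(`${m.name} ${m.stateStr} health=${m.health} ${m.lastHeartbeatMessage || ""}`)
);

// Confirm the configured hosts, votes and priorities
rs.conf().members.forEach(m =>
  print(`${m._id} ${m.host} votes=${m.votes} priority=${m.priority}`)
);
```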
null | [
"queries"
] | [
{
"code": "{\n id: \"docId\",\n colors: [\n {value: \"red\", default: true},\n {value: \"white\"},\n {value: \"blue\"}\n ]\n}\nmagicUpdate(\"docId\", \"white\", true) // only white should have default=true\nmagicUpdate(\"docId\", \"blue\", true) // only blue should have default=true\n",
"text": "I have documents with an array field that contains nested documents (objects). On the nested objects, there is a field called “default” in which only one array item can have default=true. If another nested document had default=true, I want to mark it with default=false or remove the “default” field completely AT THE SAME MOMENT I update the desired nested document with default=true.Example document:How can I update the document so that “white” is the only nested document with “default=true” and other nested documents have either no default property or “default=false” ?Pseudocode:",
"username": "John_Grant1"
},
{
"code": "{\n id: \"docId\",\n colorsDefault: \"red\",\n colors: [\n {value: \"red\"},\n {value: \"white\"},\n {value: \"blue\"}\n ]\n}\n{\n $set: {colorsDefault: \"blue\"}\n}\ndefault=true",
"text": "I think I’m going to change my data model to avoid the problem. Luckily I have the luxury to change it. A more convenient data model will look like:Now the update command is obvious. I just need to change a single field and no array elements.Still, I wonder if it is possible to remove the property default=true from one array element while atomically updating another element.",
"username": "John_Grant1"
},
{
"code": "{\n id: \"docId\",\n colorsDefault: \"red\",\n colors: [\n {value: \"red\"},\n {value: \"white\"},\n {value: \"blue\"}\n ]\n}\ndefault=truec.updateOne( { id : \"docId\" } ,\n { $unset : { \"colors.$[oldDefault].default\" : 1 } ,\n $set : { \"colors.$[newDefault].default\" : true }\n } ,\n { arrayFilters :\n [ { \"oldDefault.default\" : true } ,\n { \"newDefault.value\" : \"blue\"}\n ]\n } )\n",
"text": "I prefer the schemaFor completeness, toto remove the property default=true from one array element while atomically updating another elementyou use arrayFilters. In your case the following should work:",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to atomically update one array item's property=true and all other array items' property=false | 2023-01-31T16:03:16.832Z | How to atomically update one array item’s property=true and all other array items’ property=false | 536 |
null | [
"dot-net",
"unity"
] | [
{
"code": "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. ---> System.MissingMethodException: void System.Security.Cryptography.Rfc2898DeriveBytes..",
"text": "Hi, I’m making SCP: Secret Laboratory server (C#) and got this error, while trying to connect to Mongo Atlas\nMongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. ---> System.MissingMethodException: void System.Security.Cryptography.Rfc2898DeriveBytes..In fact, it works fine with connections without any auth\nTested on Mongo 2.17.1, 2.18 and 2.19Do you have any solutions to that?",
"username": "TeMbI4"
},
{
"code": "Rfc2898DeriveBytesRfc2898DeriveBytesnetstandard2.0netstandard2.1net472",
"text": "Welcome to the MongoDB Community Forums.Rfc2898DeriveBytes is used by SCRAM-SHA1 and SCRAM-SHA256 authenticators. It appears that your runtime does not support this method. We vendor a version of Rfc2898DeriveBytes for netstandard2.0 because it doesn’t include the required method, but netstandard2.1 and net472 should both include that method.Which .NET version do you compile SCP:SL plugins for? What is the .NET runtime environment version?Another option to work around the issue is to use a different authentication mechanism such as x509 auth.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": ".NET SDK:\n Version: 7.0.102\n Commit: 4bbdd14480\n\nRuntime Environment:\n OS Name: Windows\n OS Version: 10.0.20348\n OS Platform: Windows\n RID: win10-x64\n Base Path: C:\\Program Files\\dotnet\\sdk\\7.0.102\\\n\nHost:\n Version: 7.0.2\n Architecture: x64\n Commit: d037e070eb\n\n.NET SDKs installed:\n 7.0.102 [C:\\Program Files\\dotnet\\sdk]\n\n.NET runtimes installed:\n Microsoft.AspNetCore.App 7.0.2 [C:\\Program Files\\dotnet\\shared\\Microsoft.AspNetCore.App]\n Microsoft.NETCore.App 7.0.2 [C:\\Program Files\\dotnet\\shared\\Microsoft.NETCore.App]\n Microsoft.WindowsDesktop.App 7.0.2 [C:\\Program Files\\dotnet\\shared\\Microsoft.WindowsDesktop.App]\n\nOther architectures found:\n None\n\nEnvironment variables:\n Not set\n\nglobal.json file:\n Not found\n\nLearn more:\n https://aka.ms/dotnet/info\n\nDownload .NET:\n https://aka.ms/dotnet/download```\n\nRegards, Artem",
"text": "Hi, @James_Kovacs !\nI’m using .NET Framework 4.8 for plugins.Here is the information on runtime environment:",
"username": "TeMbI4"
},
{
"code": "Rfc2898DeriveBytesSystem.MissingMethodExceptionRfc2898DeriveByteslink.xmlRfc2898DeriveBytes",
"text": "Thanks for providing the runtime environment. You are compiling against .NET 7, which includes the Rfc2898DeriveBytes class. The Unity compiler will strip out what it thinks is unreachable code to reduce the size of the resulting binary and this can lead to a System.MissingMethodException at runtime. However the .NET/C# Driver explicitly uses Rfc2898DeriveBytes and thus it should not be stripped out of the final binary.I would suggest updating your link.xml file to prevent Rfc2898DeriveBytes from being stripped during the build process. See Managed code stripping in the Unity docs for more information.Sincerely,\nJames",
"username": "James_Kovacs"
}
] | Got error, when connecting to Mongo Atlas in unity game server | 2023-01-31T18:33:11.416Z | Got error, when connecting to Mongo Atlas in unity game server | 1,214 |
null | [
"java"
] | [
{
"code": "pom.xmlosgi feature repository <dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongo-java-driver</artifactId>\n <version>3.12.11</version>\n </dependency>\n <repository>mvn:org.apache.camel.karaf/apache-camel/3.18.4/xml/features</repository>\n\n <feature name=\"module1\" description=\"An OSGi module\" version=\"1.0.1-SNAPSHOT\">\n <feature>scr</feature>\n <feature prerequisite=\"true\">aries-blueprint</feature>\n <feature>camel-core</feature>\n <feature>camel-blueprint</feature>\n <feature>camel-cxf</feature>\n <feature>camel-xslt-saxon</feature>\n <feature>camel-jetty</feature>\n <feature>camel-rabbitmq</feature>\n <feature>camel-openapi-java</feature>\n <feature>camel-jackson</feature>\n <capability>osgi.service;objectClass=org.apache.aries.blueprint.NamespaceHandler;osgi.service.blueprint.namespace=http://camel.apache.org/schema/blueprint;effective:=active;\n </capability>\n\n <bundle dependency=\"true\">mvn:org.mongodb/mongo-java-driver/3.12.11</bundle>\n\n <bundle>mvn:my.own.project/module1/1.0.1-SNAPSHOT</bundle>\n </feature>\n <dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongodb-driver-sync</artifactId>\n <version>4.8.2</version>\n </dependency>\n <repository>mvn:org.apache.camel.karaf/apache-camel/3.18.4/xml/features</repository>\n\n <feature name=\"module1\" description=\"An OSGI module\" version=\"1.0.1-SNAPSHOT\">\n <feature>scr</feature>\n <feature prerequisite=\"true\">aries-blueprint</feature>\n <feature>camel-core</feature>\n ...\n <!-- <bundle dependency=\"true\">mvn:org.mongodb/mongo-java-driver/3.12.11</bundle>-->\n\n <bundle dependency=\"true\">mvn:org.mongodb/mongodb-driver-sync/4.8.1</bundle>\n\n <bundle>mvn:my.own.project/module1/1.0.1-SNAPSHOT</bundle>\n </feature>\nError executing command: Unable to resolve root: missing requirement [root] osgi.identity;\nosgi.identity=mesh; type=karaf.feature; version=\"[1.0.1.SNAPSHOT,1.0.1.SNAPSHOT]\";\nfilter:=\"(&(osgi.identity=mesh)(type=karaf.feature)(version>=1.0.1.SNAPSHOT)(version<=1.0.1.SNAPSHOT))\" \n[caused by: Unable to resolve mesh/1.0.1.SNAPSHOT: missing requirement [mesh/1.0.1.SNAPSHOT] osgi.identity; osgi.identity=mesh-vms-tso01; type=karaf.feature \n[caused by: Unable to resolve mesh-vms-tso01/1.0.1.SNAPSHOT: missing requirement [mesh-vms-tso01/1.0.1.SNAPSHOT] osgi.identity; osgi.identity=mesh-vms-tso01; type=osgi.bundle; version=\"[1.0.1.SNAPSHOT,1.0.1.SNAPSHOT]\"; resolution:=mandatory \n[caused by: Unable to resolve mesh-vms-tso01/1.0.1.SNAPSHOT: missing requirement [mesh-vms-tso01/1.0.1.SNAPSHOT] osgi.wiring.package; filter:=\"(&(osgi.wiring.package=com.btc.mesh.core.camel)(version>=1.0.0)(!(version>=2.0.0)))\" \n[caused by: Unable to resolve mesh-core/1.0.1.SNAPSHOT: \nmissing requirement [mesh-core/1.0.1.SNAPSHOT] osgi.wiring.package; filter:=\"(&(osgi.wiring.package=com.mongodb)(version>=4.8.0)(!(version>=5.0.0)))\"]]]]\nmongodb.github.iowww.mongodb.com",
"text": "Hi thereI am new here with a specific question about OSGi support of mongo db driver. I posted the same question on Stackobverflow (Cannot use MongoDB driver \"mongodb-driver-sync\" in OSGi Project: Unable to resolve - Stack Overflow) but I hope to get better support here.The way I see it, OSGi isn’t very popular anymore, at least judging by the posts and documentation on the web. Nevertheless, Apache Karaf and OSGi is exactly the right tool for our purposes.Our application uses Apache Camel and MongoDB.First we successfully used a quite old version of “mongo-java-driver” (3.12.11): By adding this dependency to the modules pom.xml and to the osgi feature repository we were able to start our application and connect to MongoDB:pom.xml (maven module1)feature.xml:But that driver is legacy, quite old and missing important features, so we would like to use a modern driver, namely mongodb-driver-sync (version 4.8.2).We replaced the previous driver with “mongodb-driver-sync”:pom.xml (maven module1)feature.xml:This fails when starting the feature in Karaf:I also tried version 4.1.0 because an older documentation mentioned that it is a valid OSGI bundle:The mongodb-driver-sync artifact is a valid OSGi bundle whose symbolic\nname is org.mongodb.driver-sync.\"MongoDB Java Driver documentationBut we also had no luck. Does the symbolic name help in any way?After version 4.3.x the documentation moved from mongodb.github.io to www.mongodb.com and the reference to OSGi has been removed. The MANIFEST file of all mentioned drivers looks quite similar including the bundle information. So, AFAIK , OSGi should work. Very confusing.So, we need your helpThanx",
"username": "Bert_Speckels"
},
{
"code": "",
"text": "Hi @Bert_SpeckelsWe’ve had occasional bug reports over the years about the driver not working properly in an OSGi environment, but they all have been related to missing or mis-configured dependencies in the Import-Package entry in the manifest. The most recent report is https://jira.mongodb.org/browse/JAVA-4836, which showed up in 4.8.0 and was fixed in 4.8.2. In all cases I can recall, the reporter of the issue was able to successfully use the driver in OSGi after the fix, so I have some confidence that the driver does work with OSGi, in particular 4.8.2.Nothing in the error message that you posted looks familiar, or at all similar to ones previously reported, so I am unsure how to proceed.",
"username": "Jeffrey_Yemin"
},
{
"code": "com.amazonaws.authbsonmondodb.driver.core<feature name=\"sample version=\"1.0.1-SNAPSHOT\">\n ...\n <bundle dependency=\"true\">mvn:org.mongodb/bson/4.8.2</bundle>\n <bundle dependency=\"true\">mvn:org.mongodb/mongodb-driver-core/4.8.2</bundle>\n <bundle dependency=\"true\">mvn:org.mongodb/mongodb-driver-sync/4.8.2</bundle>\n</feature>\n",
"text": "First of all @Jeffrey_Yemin: Thank your for your quick reply and for confirming that version 4.8.2 should run successfully with OSGi. I am very relieved.Actually I found a solution myself this afternoon but encountered another error concerning com.amazonaws.auth. Apparently that was the error you are describing. I had tested with version 4.8.1!I found the solution somewhere in a note and it works well with version 4.8.2: If integrating the drivers JAR file directly into a project, then you should also add the JARs of bson and mondodb.driver.core. Apparently this is also necessary when defining the dependency for OSGi:I’m wondering if this is really the right way. Maybe someone can confirm that for me.Of course it would be great if everyone could find the solution in the documentation. Maybe that can be integrated?!Overall OSGi doesn’t seem to be very common anymore, especially in documentation. IMHO, OSGi is still a wonderful technology that represents a great compromise between microservices and monoliths!",
"username": "Bert_Speckels"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo Driver (mongo-driver-sync) with OSGi support: Unable to resolve | 2023-02-01T09:30:47.806Z | Mongo Driver (mongo-driver-sync) with OSGi support: Unable to resolve | 956 |
null | [
"swift",
"app-services-user-auth"
] | [
{
"code": "do {\n let _ = try await mergeAnon?.linkUser(credentials: credentials)\n} catch {\n let nsError = error as NSError\n if nsError.domain == RLMAppErrorDomain && nsError.code == 2 {// how to catch \"a user already exists with the specified provider\"?\n let _ = try await app.login(credentials: credentials)\n } else {\n\t throw error\n }\n}\nError Domain=io.realm.app Code=2 \"a user already exists with the specified provider\" UserInfo=(Server Log URL=(...), NSLocalizedDescription=a user already exists with the specified\nprovider, HTTP Status Code=401}\nError Domain=io.realm.app Code=2 \"invalid session:\naccess token expired\" UserInfo=(Server Log URL=(...), NSLocalizedDescription=invalid\nsession: access token expired, HTTP Status Code=401} \n",
"text": "Hello everyone.\nWe have the app which uses anonymous user authentication and also provides means for users to authenticate later.\nWe have following logic: when anonymous user authenticates with Apple ID, we try to link anonymous user to that Apple ID. If user already exists with that Apple ID, then we attempt to log in with that Apple ID:The problem is that we don’t know how to match the exact “a user already exists with the specified provider” error. Different errors may be thrown from the linkUser(…) call, but they have the same domain and error code. Here is the description of 2 different error we catch here:\n1: Error we try to match (catch):2: Error we don’t need to match (catch):Both have userInfo with 3 key/value pairs, both share HTTP Status Code 401 (which is no help), but they have distinct localised descriptions.\nObviously there is some error subcode missing in userInfo, which we would like use to match our target error. And to match error based on localizedDescription would be a terrible idea.\nCould someone confirm this is a swift SDK issue so we could bump it somehow to become fix in future update?\nOr maybe we’re missing something, like specific Error type to catch?",
"username": "Anton_Yermilin"
},
{
"code": "\n \n XCTAssertNil(user);\n RLMValidateError(error, RLMAppErrorDomain, RLMAppErrorInvalidPassword,\n @\"invalid username/password\");\n [expectation fulfill];\n }];\n \n [self waitForExpectationsWithTimeout:2.0 handler:nil];\n }\n \n /// Registering a user with existing email should return corresponding error.\n - (void)testExistingEmailRegistration {\n XCTestExpectation *expectationA = [self expectationWithDescription:@\"registration should succeed\"];\n [[self.app emailPasswordAuth] registerUserWithEmail:NSStringFromSelector(_cmd)\n password:@\"password\"\n completion:^(NSError *error) {\n XCTAssertNil(error);\n [expectationA fulfill];\n }];\n [self waitForExpectationsWithTimeout:2.0 handler:nil];\n \n XCTestExpectation *expectationB = [self expectationWithDescription:@\"registration should fail\"];\n \n ",
"text": "@Anton_Yermilin We have a test case for this in our test harness here -I believe this should help you",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks for the try but it didn’t help. The test case covers user registration aspect but in our case it is user linking.\nI did some deeper research meanwhile but all I found is that error code 2 is actually RLMAppError.invalidSession. So the problem can be reduced to the following:\nDifferent errors which emerge while using linkUser(…) SDK method share common error code – RLMAppError.invalidSession.",
"username": "Anton_Yermilin"
}
] | How to catch the exact realm SDK authentication Error? | 2023-01-31T13:58:51.668Z | How to catch the exact realm SDK authentication Error? | 1,308 |
null | [] | [
{
"code": "",
"text": "Hi there.I’m trying to run a $text $search combining individual words and phrases, in an OR statement,\nsuch as “first phrase” OR “second phrase” OR word1 OR word2.\nThe following syntax won’t work: {$text: { $search: “\"first phrase\" \"second phrase\" word1 word2” } }According to the docs it’s not supposed to work because the $search operator given a string with a phrase and individual words searches only the phrase and ignores the rest, as explained here: https://docs.mongodb.com/manual/reference/operator/query/text/#phrasesAny idea how to achieve a “first phrase” OR “second phrase” OR word1 with mongodb?Amit============EDIT: the below link and notes refer to aggregation so it seems unrelated to my above question. sorry about that.However, I found this answer by @Doug_Tarr in the community forum from which I understand I may use an array instead of a string: /how-to-full-text-search-multiple-phrases/3832Unfortunately that did not work for me either.\nWhen a phrase is followed by a single word then the word is ignored (same behaviour as above)\n{$text: { $search: [ “\"first phrase\"”, “word1” } } // “word1” is ignoredand when a phrase is followed by another phrase it seems that MongoDB tries to search “first phrase” AND “second phrase”, rather than “phrase 1” OR “phrase 2”\n{$text: { $search: [ “\"first phrase\"”, “\"second phrase\"” } } // “first phrase” AND “second phrase” instead of OR.",
"username": "Amit"
},
{
"code": "db.collection.aggregate([\n\t{\n\t\t$search: {\n\t\t\t'compound': {\n\t\t\t\t'should': [\n\t\t\t\t\t{\n\t\t\t\t\t\t'phrase': {\n\t\t\t\t\t\t\t'path': '<field_path>',\n\t\t\t\t\t\t\t'query': '[<first_phrase>, <second_phrase>]'\n\t\t\t\t\t\t}\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t'text': {\n\t\t\t\t\t\t\t'path': '<field_path>',\n\t\t\t\t\t\t\t'query': '[<first_word>, <second_word>]'\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\t}\n])\n",
"text": "Hi @Amit,\ncan you give us more precisions about the method you want to use ? Your title and first link refer to the $text operator, but your keywords and second link refer to the $search aggregation pipeline.\nIn case you want to use the latter, try to play around with the compound operator:Strictly speaking, compound operator doesn’t act as an OR logical operator; it classifies the returned documents with respect to their score.",
"username": "Mounir_Farhat"
},
{
"code": "",
"text": "Thanks @Mounir_Farhat,\nI’m interested in querying a collection by an OR combination of two-word phrases and single words, applied to a single String field (which is text indexed). Since regex will be too slow and lacks stemming I was trying to use a $text operator. It seems that it’s not supported, so I’m hoping there’s a workaround.\nI’ll edit the original post because I see now that second link is unrelated like you said.\nAmit",
"username": "Amit"
},
{
"code": "",
"text": "Since regex will be too slow and lacks stemming I was trying to use a $text operatorIt seems like a good candidate for a $search aggregation pipeline. But as I understand it, your use case doesn’t allow this solution.\nIf you want any further help, I’d suggest to post here what your document schema looks like, with 2 or 3 documents representing the kind of data your are trying to retrieve (not your actual data, but a minimalist modified version with the fields we are interested in). You should also post the requests you tried so far, and the expected results.",
"username": "Mounir_Farhat"
},
{
"code": "",
"text": "Hi @Amit .\nI happen to have the same requirement you are describing in this post; been doing some testing today and came across the very same issues you describe. Just wondering how you ended up handling the issue, and if you could possibly share.\nThanks!",
"username": "Eduardo_Espejel"
},
{
"code": "",
"text": "Hi all,Is there any way to support search multiple words by Atlas Search?\nFor example: my keyword is “Wheat EU” so if “Wheat” AND “EU” both are present hence it will return the results as expected.\nI don’t need match with exactly “Wheat EU” because the user need search Wheat in the EU. This something like start with or contain operator that is not a OR logical operator.Because with OR logical as MongoDB default operator when searching with multiple words hence there are a lot of irrelevant results.Thank you so much!",
"username": "Southern_Tran"
},
{
"code": "",
"text": "{$text: { $search: [ ““first phrase””, ““second phrase”” } }you can use\n{$text: { $search: “\\“first phrase\\” \\“second phrase\\”” } }there is a space between “\\“first phrase\\” and \\“second phrase\\””",
"username": "Sergio_Alfredo_Flores_Alfonso"
}
] | Full text search: multiple phrases and words | 2020-12-20T20:03:06.784Z | Full text search: multiple phrases and words | 9,779 |
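For the question about requiring every word to match (e.g. "Wheat" AND "EU"), one option with Atlas Search is a compound query whose must clauses each carry one term — a sketch with assumed collection, index and field names:

```javascript
db.items.aggregate([
  { $search: {
      index: "default",
      compound: {
        must: [
          { text: { path: "description", query: "Wheat" } },  // every must clause has to match
          { text: { path: "description", query: "EU" } }
        ]
      }
  }},
  { $limit: 20 }
])
```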
[] | [
{
"code": "",
"text": "HiI have got several charts in a dashboard that I would like to filter globally. I’d like to add a filter on dates, to view only the datas of the last day. Actually dates are stored as strings, and I convert and filter it on each chart (hardcoded) ; il would like to deport this filter on the dashboard and print it as a parameter.If I understand this post :Those 2 functionnalities are only available on a single chart and not on the dashboard :Is that exact or am I missing something ?Thanks ",
"username": "St_ef"
},
{
"code": "",
"text": "Hi @St_ef -When you add a dashboard filter, the types of each field will be the original type, and it doesn’t take any chart-level type conversions into account. To do what you’re looking for, you should create a “Charts View” (from the Data Sources page) that converts the string field to a date, and then use the View instead of the raw collection for each chart.Note that while this will achieve your functional outcome, the performance may be poor if you have a lot of data, as Atlas will need to convert every string field to dates each time and it won’t be able to use any indexes. To improve the performance you should store your dates as the correct type and index the field.HTH\nTom",
"username": "tomhollander"
},
{
"code": "db.getCollection('mycollection').updateMany(\n {\n \"mdate\":{\n \"$type\":\"string\"\n }\n },\n [\n {\n \"$set\":{\n \"mdate\":{\n \"$dateFromString\":{\n \"dateString\":\"$mdate\"\n }\n }\n }\n }\n ]\n)\n",
"text": "you should store your dates as the correct type and index the field.Great @tomhollander, that’s it !The mdate attribute is now a date and filter works !\nMany thanks ",
"username": "St_ef"
},
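To cover the second half of that advice (indexing the converted field), a one-line example reusing the collection and field names from the snippet above:

```javascript
db.getCollection('mycollection').createIndex({ mdate: 1 })
```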
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Filtering on a whole dashboard | 2023-01-29T15:47:32.339Z | Filtering on a whole dashboard | 1,103 |
|
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "C:\\Users\\Moin\\Desktop\\SpieleDatein\\Discord\\DieScord\\node_modules\\mongoose\\lib\\index.js:582\n throw new _mongoose.Error.OverwriteModelError(name);\n ^\n\nOverwriteModelError: Cannot overwrite `ticketsetup` model once compiled.\n at Mongoose.model (C:\\Users\\Moin\\Desktop\\SpieleDatein\\Discord\\DieScord\\node_modules\\mongoose\\lib\\index.js:582:13)\n at Object.<anonymous> (C:\\Users\\Moin\\Desktop\\SpieleDatein\\Discord\\DieScord\\event\\schema\\ticketsetupDB.js:18:18)\n at Module._compile (node:internal/modules/cjs/loader:1159:14)\n at Module._extensions..js (node:internal/modules/cjs/loader:1213:10)\n at Module.load (node:internal/modules/cjs/loader:1037:32)\n at Module._load (node:internal/modules/cjs/loader:878:12)\n at Module.require (node:internal/modules/cjs/loader:1061:19)\n at require (node:internal/modules/cjs/helpers:103:18)\n at Object.<anonymous> (C:\\Users\\Moin\\Desktop\\SpieleDatein\\Discord\\DieScord\\event\\selectmenu\\select-ticket-option-menu.js:11:23)\n at Module._compile (node:internal/modules/cjs/loader:1159:14)\n\nNode.js v18.12.1\n[nodemon] app crashed - waiting for file changes before starting...\nmodule.exports = model(\"ticketsetup\", ticketSetupSchema);",
"text": "Eror:Was habe ich getan: ich habe herausgefunden das es ohne das also die se zeihle geht module.exports = model(\"ticketsetup\", ticketSetupSchema); aber ich brauche es denn das erstellt die tabelle in der datenbank. Ich verstehe nicht was ich nun falsch gemacht habe und wundere mich sehr ich bitte um hilfe.",
"username": "Jesper_Richert"
},
{
"code": "",
"text": "wenn code benötig wird bitte bescheid sagen",
"username": "Jesper_Richert"
},
{
"code": "",
"text": "Hallo, entschuldigung aber ich weiß es leider nicht, denn ich den gleichen error habe. Ich suche jetzt auf dem Internet ob ich was finde aber glaube da gibt es nichts. Falls ich etwas finde sage ich es bescheid.",
"username": "Gudlin_Business"
}
] | Hello, I have problems with this error | 2023-01-11T14:20:57.964Z | Hello, I have problems with this error | 828 |
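A common workaround for this mongoose OverwriteModelError (an assumption, not something confirmed in the exchange above) is to reuse the already-compiled model when the schema file is loaded more than once — a sketch with the schema fields omitted:

```javascript
const { Schema, model, models } = require("mongoose");

const ticketSetupSchema = new Schema({
  // ... schema fields omitted ...
});

// If this file is required from several places (e.g. multiple commands/events),
// reuse the model that mongoose already compiled instead of redefining it.
module.exports = models.ticketsetup || model("ticketsetup", ticketSetupSchema);
```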
null | [] | [
{
"code": "",
"text": "Is there a way to have my realm app (local version) run a local/in-development version of a function? Or do I always have to push my function in order to run it?Thanks!",
"username": "Malek_Cherif"
},
{
"code": "",
"text": "Hi Malek_CherifThat would be really useful! I’ve experimented with some of the plug-ins available for VS Code but I haven’t found anything that does what you say. What IDE do you use?I use a combination of editing code in the Realm UI for testing combined with using the sync to GitHub and back to VS Code. However, saving a draft can take 30 seconds and deploying to GitHub another 45 seconds which is not great when you are just trying to debug something iteratively.Having the ability to test Realm functions directly from VS Code would be great. Anyone know of anything?",
"username": "ConstantSphere"
},
{
"code": "",
"text": "This would be a very much welcomed feature. Maybe something along the lines of how firebase does it?\nIt’d be nice to run a local functions emulator with the ability to attach a debugger.I’ve really come to like Atlas but this just makes developing functions slow.",
"username": "Daniel_Weiss"
}
] | Realm Functions Local testing | 2022-05-01T14:10:47.386Z | Realm Functions Local testing | 1,979 |
null | [] | [
{
"code": "",
"text": "I just got access to MongoDB free certificate and credits with my verified student pack developer account.However, it’s not clear to me what steps I should take to make the most of these benefits and how to use them. These questions have been going on in my head: What courses are required to be completed before I can take the exam or on what can you use the credits you have in Atlas?Also, can students only attempt this exam once for free?I’m eager to start learning about MongoDB and looking forward to an answer.",
"username": "KironStylo_N_A"
},
{
"code": "",
"text": "Hello @KironStylo_N_A ,Welcome to the MongoDB for Academia community!To be eligible for free certification, students must complete and pass all courses associated with one of our two learning paths (Developer or Database Administrator). Students who complete both learning paths (Developer and DBA) will be eligible for two vouchers, one per exam. Additional instructions on how to access this benefit can be found on MongoDB Student Pack after logging in with GitHub.You are encouraged to start building your next project on our Free Cluster or create a paid cluster with this $50 Atlas credits GitHub student pack perk! For inspiration, check out the MongoDB Developer Center (MongoDB Developer Center) for the latest MongoDB tutorials, videos and code examples.",
"username": "Sarah"
},
{
"code": "",
"text": "Thanks for providing me with these helpful links.I have been reading some of the articles and discovering stuff on my Atlas account for a while. I would like to clear up this doubt related to the access code and Atlas.If I redeem my access code, it will be applied to an organization I own. if that’s true, does it mean I can use these credits anytime to make a purchase like upgrading my cluster?I wanted to know if reedeming my access code will make these credits for limited-time use only? Like you must use them before a period of time.Thanks again for your help.",
"username": "KironStylo_N_A"
},
{
"code": "",
"text": "Hi @KironStylo_N_A,When you are logged into the MongoDB for Students page, you will see a button labeled “request your access code”. You can press that and you will be given an Atlas code. You then take that code and apply it to your Atlas organization in the manner Sarah described above. You have 12 months after applying your credit code to use them all. If you generate a code and do not use it, it expires in 6 months.I hope this answers your questions, please let me know if you have any others!",
"username": "Aiyana_McConnell"
}
] | Inquiries about student pack benefits | 2023-01-31T18:04:43.294Z | Inquiries about student pack benefits | 1,648 |
null | [
"dot-net",
"production"
] | [
{
"code": "ObjectSerializerAllowedTypesFunc<Type, bool>AllowedTypesObjectSerializer.DefaultAllowedTypesvar objectSerializer = new ObjectSerializer(type => ObjectSerializer.DefaultAllowedTypes(type) || type.FullName.StartsWith(\"MyNamespace\"));\nBsonSerializer.RegisterSerializer(objectSerializer);\nObjectSerializervar connectionString = \"mongodb://localhost\";\nvar clientSettings = MongoClientSettings.FromConnectionString(connectionString);\nclientSettings.LinqProvider = LinqProvider.V2;\nvar client = new MongoClient(clientSettings);\n",
"text": "This is the general availability release for the 2.19.0 version of the driver.The main new features in 2.19.0 include:The ObjectSerializer has been changed to only allow deserialization of types that are considered safe.\nWhat types are considered safe is determined by a new configurable AllowedTypes function (of type Func<Type, bool>).\nThe default AllowedTypes function is ObjectSerializer.DefaultAllowedTypes which returns true for a number of well-known framework types that we have deemed safe.\nA typical example might be to allow all the default allowed types as well as your own types. This could be accomplished as follows:More information about the ObjectSerializer is available in our FAQ.Default LinqProvider has been changed to LINQ3.\nLinqProvider can be changed back to LINQ2 in the following way:If you encounter a bug in LINQ3 provider, please report it in CSHARP JIRA project.The full list of issues resolved in this release is available at CSHARP JIRA project.Documentation on the .NET driver can be found here.",
"username": "Boris_Dogadov"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | .NET Driver 2.19.0 Released | 2023-01-27T20:36:54.924Z | .NET Driver 2.19.0 Released | 3,946 |
[
"atlas-functions",
"connector-for-bi"
] | [
{
"code": "",
"text": "Hi, im a new user of MongoDB Atlas and i have some questions…I’m using an M10 cluster (2GB RAM, 20GB storage) and would like to do some data visualization tests with BI tools. To use this tool I need to upgrade my current cluster, correct? Is this the only way to use BI tools? Does MongoDB offer some kind of free trial?\nimage771×507 23.3 KB\n",
"username": "Eduardo_Ferreira"
},
{
"code": "",
"text": "Hello @Eduardo_Ferreira\nThanks for your inquiry. The BI Connector is not the only way to do data visualization on Atlas Data. We actually have available (in Preview) the Atlas SQL Interface. Atlas SQL allows users to connect, access, and visualize Atlas Data (clusters, S3 buckets, data lake storage etc.) from SQL-based BI tools. Currently, we have a generic Atlas SQL JDBC Driver and a Tableau custom connector ready for use. We are developing an Atlas SQL ODBC Driver and a Power BI custom connector - these will be available mid year. So if your BI tool is Tableau or Power BI (preview coming in a few months), I would highly suggest trying out these new custom connectors instead of the BI Connector. If your BI tool is something other than Tableau or Power BI, we can look to see if it supports a generic JDBC or ODBC driver for database connection.Atlas SQL can be used with free tier or shared clusters. There isn’t any cost to turn it on. There can be costs associated with query consumption, but it’s typically small and you may not see any costs if you are just trying it out. Send me an email if you’d like to have a call where I can walk you through the Atlas SQL set up, or just determine if Atlas SQL is right for you. I have included a few links so you can gain some more information on Atlas SQL:Atlas SQL\nAtlas SQL Tableau ConnectorBest,\nAlexi Antonino\[email protected]\nProduct Manager, SQL & BI Connector",
"username": "Alexi_Antonino"
},
{
"code": "",
"text": "Hi @Alexi_Antonino, thanks for the reply, I will test the Tableau Connector and get back to you in this topic or by email if I have more issues, thank you very much for your help.",
"username": "Eduardo_Ferreira"
},
{
"code": "",
"text": "Awesome! don’t hesitate to reach out if you get blocked.Also, I didn’t mention it in my first reply because you seemed to be specifically looking for BI Tool Connection, but MongoDB Charts might also support your needs. “MongoDB Charts is a tool to create visual representations of your MongoDB data. Data visualization is a key component to providing a clear understanding of your data, highlighting correlations between variables and making it easy to discern patterns and trends within your dataset. MongoDB Charts makes communicating your data a straightforward process by providing built-in tools to easily share and collaborate on visualizations.”Here is link to check out Charts: MongoDB Charts",
"username": "Alexi_Antonino"
}
] | BI Connector for Atlas Mongodb - (Help) | 2023-02-01T15:02:54.401Z | BI Connector for Atlas Mongodb - (Help) | 1,078 |
|
[
"flexible-sync"
] | [
{
"code": "",
"text": "Hello! I hope you can help me and thanks in advance! I’ve got Realm Flex sync set up for my mobile app. I’d like the app services to only let the devices download certain data from my db, data which contains a value equals to a value in the custom user data object (i.e. only data belonging to that user). However, I can’t get the Custom User Data setup to work properly, and/or can’t reference the custom user data properly in the realm sync permissions, even with the given examples. The sync permissions work fine when using a constant, so it’s really referencing the custom user data where I have issues… I’ve attached some screenshots highlighting my setup with realm flex sync and custom user data here:\nScreenshot 2023-01-23 at 12.23.472237×778 72.6 KB\n\n\nScreenshot 2023-01-23 at 12.22.311338×804 65.3 KB\n\n\nScreenshot 2023-01-23 at 12.22.501380×856 102 KB\n\n\nScreenshot 2023-01-23 at 12.24.372416×1226 182 KB\n",
"username": "Eric_Klaesson"
},
{
"code": "wasteOperator: \"%%user.custom_data.wasteOperator\"custom_data",
"text": "Hi @Eric_Klaesson,The correct syntax would be wasteOperator: \"%%user.custom_data.wasteOperator\" (note the nested custom_data field). See https://www.mongodb.com/docs/atlas/app-services/sync/configure/permissions/ for additional details.Hope this helps!",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Hi Kiro! Thanks for your response! I did try this already, but the result is the same. On the error (“BadValue”) it sounds like the custom user data is simply not there. In the %%user variable, who provides this? Is it from within the app trying to sync? When I checked if the device had the custom user data (io.realm.mongodb.User.getCustomData()), the returned Document was empty",
"username": "Eric_Klaesson"
},
{
"code": "",
"text": "The error is indicating that your expansion is evaluating to undefined so the resolved expression is malformed. Looking at your screenshots, it seems like the issue is that your linked “User ID Field” is of type ObjectID, while the ID on user objects is actually a string. Can you try updating that and see if that starts working?See https://www.mongodb.com/docs/atlas/app-services/users/enable-custom-user-data/#specify-the-user-id-field for more details.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Ah okay, that is one mistake corrected. I now use the field “username” for the User ID Field in the Custom User Data settings. I reinstalled the app on the device to start afresh. However, the error is still the exact same, unfortunately.I’m still not sure how Atlas App Services picks the correct custom data user object and uses if for the sync permissions. And maybe therein lies the problem? When the device tries to sync data, it has to somehow provide a username or some link to its custom data user object, no?",
"username": "Eric_Klaesson"
},
{
"code": "%%user.custom_data",
"text": "Hi @Eric_Klaesson, I think this section in the docs might help answer your question.For sync specifically, the %%user.custom_data expansion used in permissions will be populated with the custom user document for the user passed to the realm config.Let me know if it’s still unclear!",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Okay Thanks Kiro, I think I understand. In any case, I still have the same error as in the beginning of the thread. This custom user data object seems to be undefined… Any other ideas?\nScreenshot 2023-01-30 at 10.19.361452×728 67.7 KB\n\n\nScreenshot 2023-01-30 at 10.19.592072×1028 155 KB\n\n\nScreenshot 2023-01-30 at 10.20.441428×936 106 KB\n",
"username": "Eric_Klaesson"
},
{
"code": "usernameusername",
"text": "Hi @Eric_Klaesson,It doesn’t seem like the username field is storing a user ID. If you navigate to the “App Users” tab in your App Services app dashboard, you should see a table like this:\n\nimage3257×820 135 KB\nThe user ID is the second column in this table. If you want to do a super quick test, you can copy the ID value for the user you’re testing with, and plug that in to the username field for the document you want to use as your custom data. Once you have that working, you can define a new field in the user documents to store the ID and update your custom user data settings accordingly.",
"username": "Kiro_Morkos"
},
{
"code": "",
"text": "Thanks Kiro! Fantastic! That did the trick! This wasn’t very easy to solve, so I really appreciate your help in this back-and-forth!",
"username": "Eric_Work_Klaesson"
},
{
"code": "",
"text": "Glad to hear it’s working now!",
"username": "Kiro_Morkos"
}
] | How to use custom user data in custom realm flex sync permissions | 2023-01-23T04:36:12.807Z | How to use custom user data in custom realm flex sync permissions | 1,755 |
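For reference, a sketch of an App Services authentication trigger (create event) that writes the custom user data document keyed by the string user ID; the data source, database, collection and field names are assumptions and must match whatever is configured in the Custom User Data settings:

```javascript
exports = async function (authEvent) {
  const user = authEvent.user; // the newly created App Services user

  const collection = context.services
    .get("mongodb-atlas")      // linked data source name (assumed)
    .db("UserData")            // database holding the custom user data (assumed)
    .collection("Users");      // collection chosen in the Custom User Data settings (assumed)

  await collection.insertOne({
    username: user.id,                 // the user ID as a *string*, matching the "User ID Field"
    wasteOperator: "<operator value>"  // later readable via %%user.custom_data.wasteOperator
  });
};
```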
|
null | [
"kafka-connector"
] | [
{
"code": "{\n ...,\n \"heartbeat.topic.name\": \"my.heartbeats\",\n \"heartbeat.interval.ms\": \"600000\"\n}\n{\"schema\":{\"type\":\"bytes\",\"optional\":true},\"payload\":null}\n",
"text": "I added heartbeat options like below in my source connector’s config.But when I checked all messages from the heartbeat topic, I just got empty content with some schema from the messages like below.How can I get proper content from my source connector?",
"username": "Hyunsang_h"
},
{
"code": "",
"text": "I am also facing this issue. Can someone please help here? The documentation on this is also not very helpful. I have gone through the following: -@Hyunsang_h , were you able to figure this out by any chance? If yes, could you please share your findings. Thanks.",
"username": "Prabhatika_Vij"
}
] | Heartbeat message of Source Connector is empty | 2021-12-02T23:40:07.939Z | Heartbeat message of Source Connector is empty | 2,837 |
null | [
"queries",
"node-js",
"replication",
"connecting"
] | [
{
"code": "(node:9837) UnhandledPromiseRejectionWarning: MongoServerSelectionError: connect ECONNREFUSED 192.168.50.50:27019\n at Timeout._onTimeout\n(node:9837) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 52)\n(node:9837) UnhandledPromiseRejectionWarning: MongoServerSelectionError: not primary\n at Timeout._onTimeout\n(node:9837) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode)\nasync function runAdapter() {\n try {\n const {\n MongoClient\n } = require('mongodb');\n const client = new MongoClient(common.appConfigObj.mongoConfig.url, common.MongoClientOption);\n await client.connect();\n const db = await client.db(common.appConfigObj.mongoConfig.dbName);\n db.createCollection(common.appConfigObj.mongoConfig.cappedCollection, {\n capped: true,\n size: common.appConfigObj.mongoConfig.size ? common.appConfigObj.mongoConfig.size : 100000,\n max: common.appConfigObj.mongoConfig.max ? common.appConfigObj.mongoConfig.max : 5000,\n }, (err, data) => {\n if (err) {\n logger.info(`In HapiServer : hapiServerFun : mongo : error while creating the cappped collection : ${err} `);\n if (err.message == `Collection already exists. NS: ${common.appConfigObj.mongoConfig.dbName}.${common.appConfigObj.mongoConfig.cappedCollection}`)\n connectAdapter(db.collection(common.appConfigObj.mongoConfig.cappedCollection))\n else\n runAdapter();\n } else if (data) {\n logger.info(`In HapiServer : hapiServerFun : mongo : capped collection successfully created `);\n connectAdapter(db.collection(common.appConfigObj.mongoConfig.cappedCollection))\n }\n })\n const connectAdapter = (coll) => {\n socketIO.adapter(createAdapter(coll));\n coll.isCapped().then((data) => {\n logger.info(`In HapiServer : hapiServerFun : mongo : cappedCollection : data : ${data}`);\n }).catch((err) => {\n logger.error(`In HapiServer : hapiServerFun : mongo : cappedCollection : error : ${err}`);\n })\n logger.info(`In HapiServer : hapiServerFun : mongo : cappedCollection : `);\n logger.info(`In HapiServer : hapiServerFun : mongo : mongo connection done`);\n console.log(`In HapiServer : hapiServerFun : mongo : mongo connection done`)\n return;\n }\n } catch (err) {\n console.log(`In HapiServer : hapiServerFun : mongo connection Exception : : ${err}`)\n logger.info(`In HapiServer : hapiServerFun : mongo connection Exception : ${err}`);\n logger.error(`In HapiServer : hapiServerFun : mongo connection Exception : ${err}`);\n return runAdapter();\n }\n }\n runAdapter();\n",
"text": "Hello,I am getting the below error logs after the MongoDB election occur.192.168.50.50 It was primary node before MongoDB election is occurred, After MongoDB election the NodeJS application still trying to connect with old primary node(192.168.50.50) which was shut down due to some reason. And after sometime when 192.168.50.50 node is running up, It will become secondary MongoDB node and NodeJS application trying to connect with same IP (192.168.50.50 currently this is secondary node). Hence now application getting the below error.Note: IT IS RANDOM ISSUE AND GETTING ON RANDOM NODEJS SERVERWhy MongoDB node driver connect with old mongo instance?\nWhy it behave randomly?Application structure:Can anyone help to find out the issue. Please let me know if you want anything else.",
"username": "PRAVIN_DASARI"
},
{
"code": "common.appConfigObj.mongoConfig.urlcommon.MongoClientOption",
"text": "What does the common.appConfigObj.mongoConfig.url and common.MongoClientOption look like?Redacting the sensitive details.",
"username": "chris"
},
{
"code": "{\n appname: 'NodeProject',\n connectTimeoutMS: 20000,\n}\nasync function runAdapter() {\n try {\n const {\n MongoClient\n } = require('mongodb');\n const client = new MongoClient('mongodb://username:[email protected]:27019,192.168.50.51:27019,192.168.50.52:27020/dbName?replicaSet=mongo-cluster', {\n appname: 'NodeProject',\n connectTimeoutMS: 20000,\n });\n await client.connect();\n const db = await client.db('dbName');\n db.createCollection('socketClusterData', {\n capped: true,\n size: 100000,\n max: 5000,\n }, (err, data) => {\n if (err) {\n logger.info(`In HapiServer : hapiServerFun : mongo : error while creating the cappped collection : ${err} `);\n runAdapter();\n } else if (data) {\n logger.info(`In HapiServer : hapiServerFun : mongo : capped collection successfully created `);\n connectAdapter(db.collection('socketClusterData'))\n }\n })\n const connectAdapter = (coll) => {\n socketIO.adapter(createAdapter(coll));\n coll.isCapped().then((data) => {\n logger.info(`In HapiServer : hapiServerFun : mongo : cappedCollection : data : ${data}`);\n }).catch((err) => {\n logger.error(`In HapiServer : hapiServerFun : mongo : cappedCollection : error : ${err}`);\n })\n logger.info(`In HapiServer : hapiServerFun : mongo : cappedCollection : `);\n logger.info(`In HapiServer : hapiServerFun : mongo : mongo connection done`);\n console.log(`In HapiServer : hapiServerFun : mongo : mongo connection done`)\n return;\n }\n } catch (err) {\n console.log(`In HapiServer : hapiServerFun : mongo connection Exception : : ${err}`)\n logger.info(`In HapiServer : hapiServerFun : mongo connection Exception : ${err}`);\n logger.error(`In HapiServer : hapiServerFun : mongo connection Exception : ${err}`);\n return runAdapter();\n }\n}\nrunAdapter();\n",
"text": "common.appConfigObj.mongoConfig.url → It contains the mongodb url (mongodb://username:[email protected]:27019,192.168.50.51:27019,192.168.50.52:27020/dbName?replicaSet=mongo-cluster)common.MongoClientOption —> It contains the option object which is passed while connecting to mongo.Please see the below sample code.Variable details:socketIO: It is a socket server instance. (socket.io library of nodejs)\ncreateAdapter: It is a adapter function to make socket.io cluster (@socket.io/mongo-adapter library of nodejs)Let me know if you need anything else.",
"username": "PRAVIN_DASARI"
},
{
"code": "",
"text": "Is each mongo ip:port accessible by the kubenetes nodes and/or application containers?It almost seems like the app can only connect to 192.168.50.50:27019",
"username": "chris"
},
{
"code": "",
"text": "Is each mongo ip:port accessible by the kubenetes nodes and/or application containers?\nAns: YesIt almost seems like the app can only connect to 192.168.50.50:27019\nReply:\nNo, If we restart the node application then application connect to primary node only. Whenever this scenario is occur, we restarting the node application and then it working fine.",
"username": "PRAVIN_DASARI"
},
{
"code": "serverSelectionTimeoutMS",
"text": "That is great information @PRAVIN_DASARIDo you know how long it take for your new primary to be elected on your cluster? You could try raising your serverSelectionTimeoutMS for the driver, but with the default at 30s I’d be interested in why an election is taking so long.",
"username": "chris"
},
{
"code": "(node:15687) UnhandledPromiseRejectionWarning: MongoServerSelectionError: Server selection timed out after 60000 ms\n at Timeout._onTimeout (/usr/local/share/packages/node-v12.16.1-linux-x64/lib/node_modules/mongodb/lib/sdam/topology.js:293:38)\n at listOnTimeout (internal/timers.js:549:17)\n at processTimers (internal/timers.js:492:7)\n(node:15687) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 93)\n",
"text": "Do you know how long it take for your new primary to be elected on your cluster?\nAns: It take around 10s to elect new primary.I have raise the time for server selection timeout. (serverSelectionTimeoutMS: 60000) But I’m still facing the same issue on a random node server instance.Find the below error logWhy driver is unable to find the new primary mongo node? And why it behave randomly?",
"username": "PRAVIN_DASARI"
},
{
"code": "",
"text": "Hello @chrisDo you know any other solution?",
"username": "PRAVIN_DASARI"
},
{
"code": "",
"text": "As next step I would recommend updating MongoDB to the latest available 4.4 release 4.4.8 is not recommend for production use.Likewise updating the node driver to the latest available release.And seeing if the problem persists.",
"username": "chris"
},
{
"code": "",
"text": "The last 2-3 years have been in a cycle faster release/patch cycle. MongoDB has seen 4.x, 5x and now 6x, as nodejs driver also saw more than 10 of them from 3x family to 4.x.For this reason, things can go sideways anytime unexpectedly. the problem you are having might be a one-off issue on that specific driver version or specific database version (and maybe fixed in next patch version). It is easier to try a few different nodejs versions (both nodejs itself and mongodb driver) compared to replacing the database version. try up(down)grading to driver v4.8 and 4.10 for example.PS: is there any specific reason you do not use the 27017 port on your replica members?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hello @chrisI’ll update MongoDB and will check the same scenario. The latest node driver version required node v14.x but I have some internal dependencies which are not supported node v14.x. I’ll figure out how to handle internal dependency with node v14.x and get back to you.Hello @Yilmaz_Durmaz,Due to some security reasons, we didn’t use the default port.",
"username": "PRAVIN_DASARI"
},
{
"code": "",
"text": "was it always like this or started after some upgrades? it is possible you have a missing configuration from your replica set members or their host machines, such as firewall settings not identical to the primary.The latest node driver version required node v14.xthis one is weird! Nodejs v16 and v18 (current LTS) are in use for quite a long time and the latest driver should be working with v18. if your app does not use any v14-specific functions, try v18 instead.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "@Yilmaz_DurmazI am moving to the latest driver and it required node v14.x+. and I am updating it to the current LTS(node v18.x).Will let you know once done all the things.",
"username": "PRAVIN_DASARI"
}
] | Getting MongoServerSelectionError: ECONNREFUSED and Not primary | 2023-01-10T13:24:11.411Z | Getting MongoServerSelectionError: ECONNREFUSED and Not primary | 1,740 |
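A sketch of the connection part rewritten with async/await, a larger serverSelectionTimeoutMS and a simple retry loop, so election-time failures surface in one place instead of as unhandled rejections; the retry policy and option spelling are assumptions based on recent driver versions, not a confirmed fix:

```javascript
const { MongoClient } = require("mongodb");

async function connectWithRetry(uri) {
  const client = new MongoClient(uri, {
    appName: "NodeProject",            // recent drivers spell this option `appName`
    connectTimeoutMS: 20000,
    serverSelectionTimeoutMS: 60000,   // give a slow election time to finish
  });
  try {
    await client.connect();
    return client.db("dbName").collection("socketClusterData");
  } catch (err) {
    console.error(`Mongo connection failed (${err.message}), retrying in 5s`);
    await client.close().catch(() => {});
    await new Promise((resolve) => setTimeout(resolve, 5000));
    return connectWithRetry(uri);
  }
}
```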
null | [] | [
{
"code": "",
"text": "I am creating a 3-node replication set I’m enabling the security in the config file MongoDB services have not started.what is the solution?",
"username": "Hemanth_Perepi_1"
},
{
"code": "",
"text": "What error are you getting?\nAre the params added in the correct format and indentation?\nAlso check your mongod.log for errors",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Mongo server does not start as a Windows service. The error is: “error 1053: The service did not respond to the start or control request in a timely fashion”",
"username": "Hemanth_Perepi_1"
},
{
"code": "",
"text": "Are you starting service as administrator?\nCheck these linksUsers experience the error message 1053 which states ‘The service did not respond to the start or control request in a timely fashion’. This error message",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "And this one",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "can you check the 4.4 version on the windows server?\nthe actual issue is I’m created3 a node replication and everything working fine, but the config file in add security enable and save MongoDB service restart time service is not started. it shows “error 1503”",
"username": "Hemanth_Perepi_1"
}
] | I am create a 3 node replication set I'm enable the security in config file mongodb services not started.what the solution | 2023-02-01T06:36:55.528Z | I am create a 3 node replication set I’m enable the security in config file mongodb services not started.what the solution | 876 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "",
"text": "i want to make a $geoNear operation to the results of a $search operation to list the results in order from nearest to farthest. the order of the operations are important because i first get the items and then $lookup for there location fields in another collection which is required for the geoNear operation.",
"username": "Rahman_Colak"
},
{
"code": "$geoNear",
"text": "Hi @Rahman_Colak,Welcome to the community Could you please provide the following:Please note that $geoNear must be the first stage of your pipeline as noted in the documentation.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hello Jason,what i try to do is: i want to make a autocomplete search and with the results i want to list them by nearest to farthest. first i have to make a lookup with this results to get the needed coordinates so i cant use geoNear as first or make a near operation in an compound with search.The other way to reach this would be i make first the geoNear but how can i make after that a autocomplete search to the results to filter only the matchesmaybe the simplest way is to update the schema for the usecase.Regards,\nRahman",
"username": "Rahman_Colak"
},
{
"code": "$lookup$geoNear$search",
"text": "Hi @Rahman_Colak,Thanks for getting back to me with those details.first i have to make a lookup with this results to get the needed coordinates so i cant use geoNear as first or make a near operation in an compound with search.Do you have sample documents that you could provide here that you are using for the $lookup ?The other way to reach this would be i make first the geoNear but how can i make after that a autocomplete search to the results to filter only the matches$geoNear and $search cannot be used together in a single aggregation pipeline as they each are required to be the first stage of the pipeline.maybe the simplest way is to update the schema for the usecase.This could be the case but I’m not entirely certain of how the documents in your collections are structured. If you can provide the information I asked for in my previous post I may be able to provide more specific recommendations.You may find the following post helpful: Searching on Your Location with Atlas Search and Geospatial OperatorsRegards,\nJason",
"username": "Jason_Tran"
},
{
"code": "$geoNear and $search cannot be used together in a single aggregation pipeline \nas they each are required to be the first stage of the pipeline.\n",
"text": "Hey Jason,Thank you for your prompt reply. The post which you attached: i try the same way, but if you see my schema you will understand why i can not do it like that.so i have a box collection(with 2dsphere index) with document structureand i have the item collection(with atlas search index) with document structureevery item belong to a box. I want to search for items and order the results byAs you said:Maybe i need a way to query over the results “on the fly” but then i have no index i think…best Regards,Rahman",
"username": "Rahman_Colak"
},
{
"code": "$geoNear$search",
"text": "Hi @Rahman_Colak,Thank you for providing the box and item collection structures.With the current schema design you’ve prompted, you won’t be able to perform the $geoNear and $search autocomplete stages in a pipeline as I have suggested in my previous post.Perhaps you could further narrate the use case from the user’s point of view to see if there may be an alternative solution / approach. Additionally, please provide sample document(s) from each collection.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hello! Though of sharing my approach to similar issue.I really hope we get to use $geoNear and $Search in the same aggregate soon since workarounds are cumbersome and resource heavy.I needed the calculated distance(without maxDistance since my query should not limit the distance; just sort by it) and search score to be used in the sort stage by the end of my aggregate.I first tried to use lookup stage to create the text score using $search in my $lookup for the same collection (that has >100k items). This approach was so slow it could not work for production application.I ended up using 2 separate consecutive aggregate pipelines where the first pipeline calculated the text score and saved it to database using $merge with and ID that I can use in the next pipeline. (This collection used with merge is configured with TTL index so this collection never grows huge).Second pipeline uses $lookup to fetch the data saved on the first pipeline (Using my ID scheme that ensures I always get the correct set of text scores, basically i have utilized user ID:s to make the ID).Also this approach is a bit slower but works better at least on initial tests.Hope this helpes someone else struggling with similar issue. Anyone have any idea on a better way to solve this kind of situation?If there is some MongoDB team members reading this: Timeline available for supporting $search and $geonear in same pipeline? OR $geoWithin operator to add distance info to the pipeline like $geoNear does?Cheers!",
"username": "Sakarias_Jarvinen"
},
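A minimal sketch of the two-pass workaround described above. Collection and field names (items, boxes, boxId, searchScores) and the job id are assumptions for illustration, not taken from the thread:

```js
// Pass 1: compute Atlas Search scores and persist them (a TTL index on searchScores keeps it small).
db.items.aggregate([
  { $search: { autocomplete: { query: "coff", path: "name" } } },
  { $project: { score: { $meta: "searchScore" }, jobId: "job-123" } },
  { $merge: { into: "searchScores" } }
]);

// Pass 2: $geoNear must be first, so start from the collection that holds coordinates,
// then join the items and the scores saved in pass 1, and sort on distance/score.
db.boxes.aggregate([
  { $geoNear: { near: { type: "Point", coordinates: [13.4, 52.5] }, distanceField: "distance", spherical: true } },
  { $lookup: { from: "items", localField: "_id", foreignField: "boxId", as: "items" } },
  { $unwind: "$items" },
  { $lookup: { from: "searchScores", localField: "items._id", foreignField: "_id", as: "scoreDoc" } },
  { $match: { "scoreDoc.0": { $exists: true } } },   // keep only items matched by the search pass
  { $set: { score: { $first: "$scoreDoc.score" } } },
  { $sort: { distance: 1, score: -1 } }
]);
```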
{
"code": "$searchneargeoWithin",
"text": "If there is some MongoDB team members reading this: Timeline available for supporting $search and $geonear in same pipeline? OR $geoWithin operator to add distance info to the pipeline like $geoNear does?Would either of the following $search stage operators work for your use case?:",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hello Jason,Thanks for you reply.I have tried also this approach but we need the distance calculated and added to the pipeline to be used as one param in the $sort-stage later on our pipeline and if I understood correctly, with $near and $within this is not possible. (Only the $geoNear-stage has this functionality where the distance-field is calculated and outputted to be used on the pipeline.)These operators enable to sort within the $search stage and with maxdistance-param it is possible to limit the results within some distance. But this is not our use case.-Sakarias",
"username": "Sakarias_Jarvinen"
}
] | $geoNear(aggregation) AFTER a search(aggregation) mongo DB | 2022-02-10T15:31:58.099Z | $geoNear(aggregation) AFTER a search(aggregation) mongo DB | 3,854 |
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "Hi,I am trying to send push notification from mongodb function using firebase (fcm).I have the following code working on node.js, which i am trying to replicate in function:\nconst admin = require(‘firebase-admin’);\nlet serviceAccount = require(’./serviceAccountKey.json’);\nadmin.initializeApp({\ncredential: admin.credential.cert(serviceAccount)\n});Uploaded all the relevant external dependencies.Now, the problem is i don’t know how the following code will work:\nlet serviceAccount = require(’./serviceAccountKey.json’);this line is looking for external json file (unfortunately, not able to upload it as dependencies)Any help to make it work would be appreciated.Thanks",
"username": "Vishnu_Rana"
},
{
"code": "exports = function(deviceToken){\n \n // Construct message\n const message = {\n \"to\": deviceToken,\n \"notification\": {\n \"title\": \"Test Title\",\n \"body\": \"Test Message\"\n }\n };\n \n // Send push notification\n const gcm = context.services.get('gcm');\n const result = gcm.send(message);\n console.log(JSON.stringify(result));\n return result;\n};\n",
"text": "hi,If you set your Sender Id and API Key under Push Notifications -> Config you can use following function to send notifications:regards",
"username": "rouuuge"
},
{
"code": "",
"text": "I suppose this answer is considered depricated now. What is the new way to deal with that? I am dealing with the same problems, that Firebase requires either a File or changing a environment variable in the shell. Both things are not an option as far as I know.",
"username": "Thomas_Anderl"
},
{
"code": "",
"text": "Hi Thomas, have you found the answer?",
"username": "Marcin_K_N_A"
},
{
"code": " const firebase_credentials = context.values.get(\"firebase_credentials\");\n const doc = JSON.parse(firebase_credentials);\n admin.initializeApp({\n credential: admin.credential.cert(doc),\n });\n",
"text": "Yes, I solved the issue.I created a secret in the MongoDB values and referenced it. Into the secret went the credentials itself (the JSON).Then, you can read it from the functions like this (assuming you named the link ‘firebase_credentials’):",
"username": "Thomas_Anderl"
},
{
"code": "",
"text": "May you please share what firebase-admin version you use? I assume you use it as Function dependency?Many thanks,\nTomas",
"username": "Tomas_Mikenas"
},
{
"code": "",
"text": "Hey.I am using version 9.7.0",
"username": "Thomas_Anderl"
},
{
"code": "",
"text": "Hello developers,I’m new in development, can someone fix my problem? I am facing an issue in connecting MongoDB with the firebase account.Waiting for your reply!",
"username": "Brad_Smith"
}
] | Send Push Notification from mongodb function using firebase | 2021-03-23T16:31:13.140Z | Send Push Notification from mongodb function using firebase | 7,414 |
null | [
"app-services-cli"
] | [
{
"code": "{\n \"use_natural_pluralization\": false\n}\nrealm-cli push --project <Atlas ProjectID>\npush failed: cannot update graphql config field \"use_natural_pluralization\" \n",
"text": "Hello,I’m currently developing a CI/CD Pipeline for a Realm application and have come across a problem with the Realm CLI that I can’t seem to figure out.The CLI won’t let me push a new application if the GraphQL configuration (root/graphql/config.json) defines the natural pluralization as false.So, when the config.json looks like this:And I push the application with:I get the following error:I understand that this setting cannot be turned off once it is enabled. But science I am creating a new application, I don’t understand why can’t I disable the setting initially.I have tried multiple variations of the file to work around the problem:Because the setting can’t be turned off once it is enabled, all these methods won’t work for us.I noticed, that when creating a new App via the Realm Web UI, the setting seems to be enabled by default as well.So, is natural pluralization now the standard and there is no way around it? I would really like to avoid reworking our entire schema, so if there is a way to operate without it, I would be delighted.Regards.Sam",
"username": "Sam_Ha"
},
{
"code": "use_natural_pluralizationuse_natural_pluralization",
"text": "Hello,If anyone else comes across this, the use_natural_pluralization property cannot be changed from true to false on an existing app. It is also the case that the value is set to true by default when an app is created.The only way to set it to false would be to use the Admin API to create a new app where you can pass the use_natural_pluralization in the body of the request when creating a new app.Regards\nManny",
"username": "Mansoor_Omar"
}
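For reference, a rough sketch of what that Admin API call could look like. Treat the exact request body shape as an assumption and check the App Services Admin API docs before relying on it; <groupId> and <access_token> are placeholders obtained from your Atlas project and the admin login flow:

```bash
# Hypothetical sketch: create a new app with natural pluralization disabled.
curl -X POST "https://realm.mongodb.com/api/admin/v3.0/groups/<groupId>/apps" \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{ "name": "my-app", "use_natural_pluralization": false }'
```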
] | Can’t push Realm app when natural pluralization is false | 2021-12-12T13:35:43.234Z | Can’t push Realm app when natural pluralization is false | 3,728 |
null | [
"compass",
"schema-validation"
] | [
{
"code": "",
"text": "Hello!I received a lot of JSON files to store in a new collection and create schema validation rules. Is there a way or tool to read JSON files and suggest schema validation rules?The best way that I found was to insert all the files in a collection, run the Analyze Schema in compass, and create the rules.",
"username": "Rafael_Martins"
},
{
"code": "",
"text": "Hello @Rafael_Martins ,I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, can you please explain more about the purpose of your schema validation and what are you trying to achieve because inserting documents in the collection and then creating rules seems like a reverse workflow. The purpose of schema validation is to enforce that documents in a collection conform to a certain criteria, and the schema was typically created during the design phase of the app, instead of being created after the fact. Schema validation lets you create validation rules for your fields, such as allowed data types and value ranges.MongoDB uses a flexible schema model, which means that documents in a collection do not need to have the same fields or data types by default. Once you’ve established an application schema, you can use schema validation to ensure there are no unintended schema changes or improper data types.The best way that I found was to insert all the files in a collection, run the Analyze Schema in compass, and create the rules.If you just want to use the same schema analysis outside of Compass, you can try this mongodb-schema command line tool, it is open source (Apache 2.0 license) and usable as a Node.js library.Regards,\nTarun",
"username": "Tarun_Gaur"
},
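As a follow-up, once a schema has been analyzed (in Compass or with mongodb-schema), the resulting rules can be applied to an existing collection with a $jsonSchema validator. A minimal sketch with placeholder field names:

```js
db.runCommand({
  collMod: "mycollection",
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["name", "createdAt"],
      properties: {
        name: { bsonType: "string" },
        createdAt: { bsonType: "date" },
        score: { bsonType: ["int", "double"], minimum: 0 }
      }
    }
  },
  validationLevel: "moderate"   // existing non-conforming documents are not re-validated on update
});
```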
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What is the best way to define schema validation rules? | 2023-01-16T18:30:23.209Z | What is the best way to define schema validation rules? | 1,255 |
[
"indexes"
] | [
{
"code": "",
"text": "Unable to create Index on Mongo Version 4.4.18 on Ubuntu even for small collections\ndb.currentop() shows as below.Neither getting error Nor Index being created .\nimage1594×915 70.8 KB\nCould someone help on this???",
"username": "Geetha_M"
},
{
"code": "waitingForLatch",
"text": "Hello @Geetha_M ,Welcome to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet - were you able to find a solution?\nAs per the documentation, the waitingForLatch document is only available if the operation is waiting to acquire an internal locking primitive (a.k.a. a latch) or for an internal condition to be met.“waitingForLatch” : {\n“timestamp” : ISODate(“2023-01-27T07:01:57.013Z”),\n“captureName” : “IndexBuilds Coordinator::_mutex”\n},Here,This shows that the index creation command is being executed. Could you share below details for us to understand your use-case better?Regards,\nTarun",
"username": "Tarun_Gaur"
}
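For anyone hitting the same situation, this is roughly the documented pattern for filtering currentOp output down to in-progress index builds (run from mongosh against the admin database):

```js
db.adminCommand({
  currentOp: true,
  $or: [
    { op: "command", "command.createIndexes": { $exists: true } },
    { op: "none", msg: /^Index Build/ }
  ]
});
```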
] | Unable to create Index on Mongo Version 4.4.18 | 2023-01-27T07:22:56.663Z | Unable to create Index on Mongo Version 4.4.18 | 1,206 |
|
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.4.19-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.18. The next stable release 4.4.19 will be a recommended upgrade for all 4.4 users.\nFixed in this release:",
"username": "James_Hippler"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.4.19-rc0 is released | 2023-01-31T01:51:13.114Z | MongoDB 4.4.19-rc0 is released | 1,068 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 5.0.15-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.14. The next stable release 5.0.15 will be a recommended upgrade for all 5.0 users.\nFixed in this release:",
"username": "James_Hippler"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 5.0.15-rc1 is released | 2023-01-31T17:27:52.882Z | MongoDB 5.0.15-rc1 is released | 1,075 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.2.24-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.23. The next stable release 4.2.24 will be a recommended upgrade for all 4.2 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB 4.2.24-rc0 is released | 2023-01-31T22:36:30.329Z | MongoDB 4.2.24-rc0 is released | 1,051 |
null | [
"node-js",
"typescript"
] | [
{
"code": "mongodbmongodb-legacymongodb-legacyFilterStrictFilterBSONBSON.EJSONbsonObjectIdLong@aws-sdk/credential-providersCollection.insertCollection.updateCollection.removemongodb",
"text": "The MongoDB Node.js team is pleased to announce version 5.0.0 of the mongodb package!Node.js driver v5 emphazises the modernization of our API.Most notably, we have removed support for callbacks in favor of a Promise-only public API.\nTo ease the migration to a Promise-only approach when using the Node.js driver, callback support is available via the mongodb-legacy package. You can read more about this change in the Optional callback support migrated to mongodb-legacy section of the migration guide.Version 4.3.0 of the Node.js driver introduced strict type checking on Filter queries that used dot notation. This functionality was enabled by default and proved to be a barrier for users upgrading to later versions of the Node.js v4.x driver. In order to ease the migration to v5.0.0, type strictness on queries that use dot notation has been removed from the CRUD API. The type checking capabilities are still available in an experimental type called StrictFilter. You can read more about this change in the Dot Notation TypeScript Support Removed By Default section of the migration guide.This release also adopts all the changes in BSON v5.0.0 (see the release notes).\nThe driver now exports a BSON namespace that also has BSON.EJSON APIs available.\nWhen working in projects where both the driver and bson are used, we recommend importing BSON types (ObjectId, Long, etc.) and BSON APIs from the driver instead of from BSON directly to ensure consistency when serializing and deserializing instances of the BSON types.@aws-sdk/credential-providers has now been moved to an optional peer dependency.\nConsequently, in v5.0.0 or later versions of the driver, the AWS credential provider module must be installed manually to enable the use of the native AWS SDK for authentication.Collection.insert, Collection.update, and Collection.remove methods have been removed in favor of their non-deprecated counterparts. You can read more about this and other changes in our Driver v5 Migration GuideWe invite you to try the mongodb library and report any issues to the NODE project.",
"username": "Bailey_Pearson"
},
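For illustration, a small sketch of the promise-only style in v5 and of importing BSON types from the driver itself; the connection string, database, and collection names are placeholders:

```js
import { MongoClient, ObjectId } from "mongodb"; // BSON types are re-exported by the driver

const client = new MongoClient("mongodb://localhost:27017");
try {
  const coll = client.db("test").collection("docs");
  await coll.insertOne({ _id: new ObjectId(), status: "draft" });
  const doc = await coll.findOne({ status: "draft" }); // no callback overloads in v5
  console.log(doc);
} finally {
  await client.close();
}
```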
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Nodejs Driver 5.0.0 Released | 2023-01-31T21:21:18.781Z | MongoDB Nodejs Driver 5.0.0 Released | 2,381 |
[
"atlas-functions"
] | [
{
"code": "{\n \"name\": \"functions\",\n \"version\": \"1.0.0\",\n \"main\": \"index.js\",\n \"license\": \"private\",\n \"engines\": {\n \"node\": \">=10.17.0 <=10.18.2\"\n },\n \"scripts\": {\n \"test\": \"jest\"\n },\n \"dependencies\": {\n \"stripe\": \"10.17.0\",\n \"aws-sdk\": \"2.737.0\"\n },\n \"devDependencies\": {\n \"jest\": \"27\"\n }\n}\n",
"text": "AWS sdk does not seem to work with atlas functions. I tried to use version 2.737.0 as suggested by @Drew_DiPalma in this post: Link.I’m always getting this error when trying to deploy:\nThis is what’s in my package.json:I tried so many different versions. This is so very frustrating as I’ve been spending too much time on it. Currently, I’m halfway into implementing my calls using REST API with manual request signing but it’s just so unnecessary since there is a perfectly good sdk to use. With firebase, this is just working like a charm.Is there any way to use the sdk with atlas functions?",
"username": "Daniel_Weiss"
},
{
"code": "",
"text": "Hi @Daniel_Weiss – Apologies, but we were having an issue with saving dependencies within Functions for a few hours last night that this is likely related to. Are you able to re-try the AWS SDK today?You can find specific timing of this issue on our status page FYI.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Thank you very much for your quick reply. I tried it again and it’s working like a charm now ",
"username": "Daniel_Weiss"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | AWS sdk in Atlas functions not working | 2023-01-31T00:26:33.627Z | AWS sdk in Atlas functions not working | 1,062 |
|
null | [
"node-js",
"atlas-functions",
"app-services-cli"
] | [
{
"code": "axios1.2.0package.jsonrealm-cli push --include-package-jsonpush failed: failed to install dependencies: failed to install dependencies",
"text": "Nothing happens when I try to install dependencies. I am trying to install axios v1.2.0.When I use the Web UI:The popup closes and nothing happens. (I have also tried clearing all my cookies, logging out and then back in, …, and nothing happens.)When I use the realm-cli and a package.json fileI encounter this error: push failed: failed to install dependencies: failed to install dependenciesMy realm app logs show nothing related to function updates.Help!",
"username": "Alexander_Ye"
},
{
"code": "",
"text": "Same issue here; this used to work just fine until about 4 hours ago",
"username": "Dima"
},
{
"code": "",
"text": "Same issue, nothing changed from the last deployment, which was successful",
"username": "Oksana_Hrashchenkova"
},
{
"code": "",
"text": "Yep same here. This looks like a service issue on Realm’s side. Can we get an update on this and confirmation it is a problem with Realm deployments?",
"username": "Eric_Bauer"
},
{
"code": "",
"text": "Hi,Thank you for raising this issue with us. We have identified the problem and we are currently working on a fix to release it. Please follow our MongoDB Cloud status page regarding the problem with dependencies, where you can also subscribe to updates.",
"username": "Mar_Cabrera"
},
{
"code": "",
"text": "It is very concerning that there was such a significant delay between the start of the issue and Mongo becoming aware of it … (8+ hours)",
"username": "Dima"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Failed to Install Dependencies | 2023-01-31T01:27:50.420Z | Failed to Install Dependencies | 1,424 |
null | [
"security",
"configuration"
] | [
{
"code": "",
"text": "Went through this guide: Configure Federated Authentication from Azure AD — MongoDB Atlas multiple times but still cannot login with azure ad account.I am getting this error: { message: “Login Error” }",
"username": "John_Conduit"
},
{
"code": "",
"text": "Hi John, are you using the official app that is in the Azure AD Marketplace? If you are and you have also followed the Azure docs on configuring the app on the AD side, the next thing would be to look at a SAML trace of what is being sent. You can use a Chome extension or the browser developer tools to get this information.",
"username": "bencefalo"
}
] | Configure Federated Authentication from Azure AD | 2021-11-03T22:56:31.644Z | Configure Federated Authentication from Azure AD | 3,286 |
null | [
"dot-net",
"transactions"
] | [
{
"code": "",
"text": "I’ve found an issue with inserts to a collection with a unique index constraint within a transaction. This only fails when there are concurrent actions to insert the same unique index. My session is enlisting with System.Transaction.Current.The problem is when two threads with separate transactions both try to insert a document with the same unique index - the InsertOneAsync method completes and other actions (publishing to the bus) in the TransactionScope are executed.The MongoDB exception raised by the “losing” thread to insert is only raised at the point of calling the Session’s CommitTransation(), however this is too late for the other participants in the wider transaction - as all operations completed, therefore the transaction is deemed “Commitable” and they happily commit.I understand the the insert isn’t commited to the collection until the transaction is completed - however I need to be able to ensure that the publish isn’t allowed if the CommitTransaction() fails? I need to get the InsertOneAsync method to error at the point of execution.Many thanks.",
"username": "Alex_Stevens"
},
{
"code": "",
"text": "Sorry this took so long to reply but I was wondering what would happen here myself.\nTesting (not in C# granted) shows it failing on the insert not the commit as that’s the only time MongoDB can detect it - the insert does happen and the check is made.The only exception is if the collection does not already exist and it’s _id that isn’t unique. In that case it fails on commit with a collection already exists error.Are you still seeing this as it’s not expected or as far as we can tell possible.",
"username": "John_Page"
}
] | C# Transactions and Unique Indexes | 2021-01-11T17:39:22.929Z | C# Transactions and Unique Indexes | 2,892 |
null | [
"queries",
"node-js"
] | [
{
"code": "cursorIDgetMorecursorIDCannot run getMore on cursor <CURSOR_ID>, which was created in session <SESSION_ID>, in session <SESSION_ID>lsid",
"text": "Hello!We have an interesting challenge. Doing exports for our customers at scale in a microservice environment. Right now, when performing an export, we perform. find using a limit + skip query, and then iterate over the DB by limiting the backpressure. Each find chunk is spread over 10 different workers allowing us to perform live updates in our core infrastructure.A better way to solve this problem would have been to retrieve the cursorID and then perform getMore commands using cursorID across all our workers.The issue is when doing this, we are getting an error Cannot run getMore on cursor <CURSOR_ID>, which was created in session <SESSION_ID>, in session <SESSION_ID>.OK. So it is not possible doing cross session cursors. But what if I hack the MongoDB driver? It’s what I have been working on with success by crafting getMore commands having a lsid payload. Using this trick, I have been able to perform cross workers cursors.So my question is simple. Why not allow developers to pass their own sessionID parameter? Let’s say I specifically want to create a long-running job, right now, my only option will be using skip/limit. Most Databases provide a clean way to continue a cursor job programmatically. It is technically possible on MongoDB using a lsid + cursorid, why not offering this on official drivers?",
"username": "Baptiste_Jamin"
},
{
"code": "getMorelsid// OLD\ndb.foo.find({ status: \"draft\" }).skip(1000).limit(100);\ndb.foo.find({ status: \"draft\" }).skip(1100).limit(100);\ndb.foo.find({ status: \"draft\" }).skip(1200).limit(100);\n\n// NEW\ndb.foo.find({ status: \"draft\", created_at: { $gte: new Date(\"2021-01-01\"), $lt: new Date(\"2021-02-01\") } })\ndb.foo.find({ status: \"draft\", created_at: { $gte: new Date(\"2021-02-01\"), $lt: new Date(\"2021-03-01\") } })\ndb.foo.find({ status: \"draft\", created_at: { $gte: new Date(\"2021-03-01\"), $lt: new Date(\"2021-04-01\") } })\nlsidgetMore",
"text": "Hi @Baptiste_Jamin,OK. So it is not possible doing cross session cursors. But what if I hack the MongoDB driver? It’s what I have been working on with success by crafting getMore commands having a lsid payload. Using this trick, I have been able to perform cross workers cursors.Since MongoDB 3.6, compatible drivers would create implicit sessions for CRUD operations. getMore operations (per the documentation) must be called from within a session, however as the Driver abstracts the implicit session management away passing the lsid value isn’t publicly exposed.We have an interesting challenge. Doing exports for our customers at scale in a microservice environment. Right now, when performing an export, we perform. find using a limit + skip query, and then iterate over the DB by limiting the backpressure. Each find chunk is spread over 10 different workers allowing us to perform live updates in our core infrastructure.Note that these types of high offset queries (aka “paging via skip/limit”) are typically not performant (regardless of whether you use MongoDB or an RDBMS [1][2]). To better distribute this workload across workers an alternate strategy would be to segment your data into ranges.This could look something like:This would allow you to bypass using a single cursor for the entire operation and instead leverage multiple cursors, which could be distributed across the cluster (assuming a non-default read preference is used).Additionally, assuming an appropriate index exists it could be used to improve the overall performance of each “batch”. Obviously this isn’t a 1:1 replacement for skip/limit as the batch quantities may vary but using knowledge of your data ingestion and distribution an appropriate filter could be identified.So my question is simple. Why not allow developers to pass their own sessionID parameter? Let’s say I specifically want to create a long-running job, right now, my only option will be using skip/limit. Most Databases provide a clean way to continue a cursor job programmatically. It is technically possible on MongoDB using a lsid + cursorid, why not offering this on official drivers?There are internal scenarios (such as working with Serverless instances) that would require additional tooling aside form just passing an lsid to the getMore to ensure it would function properly.To ensure your microservice exports can scale appropriately a better strategy may be to look at range-based queries for distributing the workload.",
"username": "alexbevi"
},
{
"code": "",
"text": "We just made a NodeJS package allowing to perform cross-worker cursors: GitHub - crisp-oss/node-mongodb-native-cross-cursor: 📡 A MongoDB driver extension allowing to consume MongoDB cursors accross multiple instances.",
"username": "Baptiste_Jamin"
}
] | Is there any reason cross-session cursors are not allowed? | 2023-01-23T11:02:15.954Z | Is there any reason cross-session cursors are not allowed? | 759 |
null | [
"queries",
"data-modeling"
] | [
{
"code": "",
"text": "So I’m currently thinking about this, how do I know if it’s ideal for use in my application?I say this because my application operates as follows,I basically have several collections where each one has at least 220 million documents, I use it as a pipeline, it then processes all the other documents together in a single document, but I’m finding it slow to do searches within it.Second problem that I don’t understand how could I save historical data in mongodb what would be the best way?",
"username": "Luis_Justin"
},
{
"code": "",
"text": "Hey @Luis_Justin,Welcome to the MongoDB Community Forums! I basically have several collections where each one has at least 220 million documents, I use it as a pipeline, it then processes all the other documents together in a single document, but I’m finding it slow to do searches within it.If you’re finding it slow to perform querying within your collections in MongoDB, you can try the following to improve performance:It’s important to keep in mind that the actual improvements will depend on the specifics of your data, queries, and hardware. It would be better if you can share your sample documents and the queries that you’re using so as to be able to suggest you better and more definitive way to go. Also, can you please specify what is the pipeline that you’re referring to?You can read more about optimizing your queries in MongoDB here: Optimize your QueriesSecond problem that I don’t understand how could I save historical data in mongodb what would be the best way?Some of the ways you can use are:Please do note, that all the above-listed points are general in nature. The best approach will depend on your specific requirements, your queries, your documents, and the amount of data being queried.Hoping this helps. Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Doubts on how to know if mongodb would be the best choice for the application? | 2023-01-27T08:36:45.035Z | Doubts on how to know if mongodb would be the best choice for the application? | 823 |
null | [
"connector-for-bi"
] | [
{
"code": "",
"text": "I have started mongosqld with --sampleNamespaces mydb.mycollection and I see this message which seems to indicate that it’s recognized -I SCHEMA [sampler] mapped schema for 1 namespace: “mydb” (1): [“mycollection”]but when I try to create a system DSN with database “mydb”, I get an error:handshake error: ERROR 1043 (08S01): error using database mydb: ERROR 1049 (42000): Unknown database ‘mydb’.If I leave the database blank when creating the DSN, it creates the DSN but when I try to use it in Power BI, I see only “INFORMATION_SCHEMA” and “mysql” under the DSN and not “mydb”.Thanks in anticipation for any help.",
"username": "Abhijit_Sahay"
},
{
"code": "",
"text": "Hi Abhijit_Sahay … Did you find a solution to this issue? I am facing the same.",
"username": "Chirag_Hathiari"
},
{
"code": "",
"text": "Unfortunately I have not been able to get aroudnd it.",
"username": "Abhijit_Sahay"
},
{
"code": "",
"text": "My issue got resolved. I had to uninstall and reinstall\nVisual C++ Redistributable for Visual Studio 2015… It was earlier showing a version of 2015-2022. Let me know if that works.",
"username": "Chirag_Hathiari"
}
] | ODBC DSN cannot find database | 2022-09-05T02:39:24.239Z | ODBC DSN cannot find database | 2,308 |
[
"queries",
"database-tools"
] | [
{
"code": "",
"text": "i have imported a json but after importing data inside array is saved as string ,how to convert it into Array eg(userdetails:\"[{array with 20-30 fields}]\" and i am unable to use mongoimport it is thorwing error\n",
"username": "Aniket_Zapatkar"
},
{
"code": "",
"text": "It would be nice to have real data so that we can experiment.What do you think we can do with an image?You may always convert a well structure string to JSON using the parse method available in the language of your choice.",
"username": "steevej"
},
{
"code": "\"id\": \"18616\",\n\"patient_id\": \"146708\",\n\"sub_user_id\": \"0\",\n\"app_id\": \"0\",\n\"ziffy_code\": \"Ziffy_Test_202\",\n\"doctor_name\": \"self\",\n\"token\": \"AA947C11962378516064\",\n\"mobile_number\": \"9623366803\",\n\"status\": null,\n\"module\": \"Pathology\",\n\"dicompath\": null,\n\"pathology_data\": \"[{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"CLEAR\\\",\\\"Result_unit_reference_range\\\":\\\"Clear\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"APPEARANCE\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"BACTERIA\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"BILE PIGMENT\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"BILE SALT\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"CASTS\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"PALE YELLOW\\\",\\\"Result_unit_reference_range\\\":\\\"Pale 
Yellow\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"COLOUR\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"CRYSTALS\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"1-2\\\",\\\"Result_unit_reference_range\\\":\\\"0-4\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"EPITHELIAL CELLS\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"N\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"mg\\\\/l\\\",\\\"Observation_value\\\":\\\"10\\\",\\\"Result_unit_reference_range\\\":\\\"< 20\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"MICROALBUMIN\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"MUCUS\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"NITRITE\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\
\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"PARASITE\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"5\\\",\\\"Result_unit_reference_range\\\":\\\"5 - 8\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"PH\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"N\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"Cells\\\\/ul*\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"RED BLOOD CELLS\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"1.01\\\",\\\"Result_unit_reference_range\\\":\\\"1.003-1.030\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"SPECIFIC GRAVITY\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"N\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"mg\\\\/dl\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"URINARY BILIRUBIN\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"mg\\\\/dl\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"URINARY 
GLUCOSE\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"Cells\\\\/ul*\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"URINARY LEUCOCYTES (PUS CELLS)\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"mg\\\\/dl\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"URINARY PROTEIN\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"Cells\\\\/ul*\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"URINE BLOOD\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"mg\\\\/dl\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"URINE KETONE\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"mg\\\\/dl\\\",\\\"Observation_value\\\":\\\"< 
0.2\\\",\\\"Result_unit_reference_range\\\":\\\"<=0.2\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"UROBILINOGEN\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"mL\\\",\\\"Observation_value\\\":\\\"3\\\",\\\"Result_unit_reference_range\\\":\\\"-\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"VOLUME\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"},{\\\"observationDetails\\\":{\\\"Value_type\\\":\\\"NM\\\",\\\"Sequence_No\\\":\\\"1\\\",\\\"Result_units_of_measurement\\\":\\\"-\\\",\\\"Observation_value\\\":\\\"ABSENT\\\",\\\"Result_unit_reference_range\\\":\\\"Absent\\\",\\\"Effective_date_of_last_normal_observation\\\":\\\"\\\",\\\"Observation_identifier\\\":{\\\"Observation_Coding_System\\\":\\\"Ziffy_Test_202\\\",\\\"Observation_Text\\\":\\\"YEAST\\\",\\\"Observation_Test_id\\\":\\\"Ziffy_Test_202\\\"},\\\"Observation_result_status\\\":\\\"F\\\",\\\"Abnormal_flags\\\":\\\"H\\\"},\\\"token\\\":\\\"AA947C11962378516064\\\"}]\",\n\"content_data\": \"{\\\"patient_id\\\":\\\"146708\\\",\\\"reg_id\\\":\\\"0\\\",\\\"token\\\":\\\"AA947C11962378516064\\\",\\\"patient_mobile\\\":\\\"9623366803\\\",\\\"test_zippycode\\\":\\\"Ziffy_Test_202\\\",\\\"doctor_name\\\":\\\"self\\\",\\\"status\\\":\\\"Authorise\\\",\\\"Module\\\":\\\"Pathology\\\" ,\\\"DicomPath\\\":\\\"File Not Found\\\",\\\"sct\\\":\\\"11 Sep 2022 06:00\\\",\\\"rrt\\\":\\\"11 Sep 2022 14:12\\\",\\\"labcode\\\":\\\"1109071699/AA947\\\",\\\"sample_type\\\":\\\"URINE\\\",\\\"srt\\\":\\\"11 Sep 2022 12:56\\\",\\\"Testgrp\\\":\\\"COMPLETE URINE ANALYSIS\\\"}\",\n\"final_note\": \"\",\n\"report_pdf_location\": \"\",\n\"doctor_note\": \"\",\n\"view_status\": \"0\",\n\"created\": \"2022-09-14 06:31:58\",\n\"modified\": \"2022-09-14 01:01:58\"",
"text": "",
"username": "Aniket_Zapatkar"
},
{
"code": "",
"text": "any other way to solve this issue with python",
"username": "Aniket_Zapatkar"
},
{
"code": "$functionJSON.parse",
"text": "check this SO answer: mongodb - Mongo DB aggregation pipeline: convert string to document/object - Stack Overflowyou can run $functions inside aggregation pipelines, such as javascript’s JSON.parse:\n$function (aggregation) — MongoDB Manual",
"username": "Yilmaz_Durmaz"
},
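To make that suggestion concrete, a hypothetical one-off migration that parses the stringified field back into a real array and writes it back. The collection name ("reports") is an assumption; the field name follows the sample above; $function requires MongoDB 4.4+:

```js
db.reports.aggregate([
  { $set: {
      pathology_data: {
        $function: {
          body: function (s) { return s ? JSON.parse(s) : s; },  // leave null/empty values untouched
          args: ["$pathology_data"],
          lang: "js"
        }
      }
  } },
  { $merge: { into: "reports", on: "_id", whenMatched: "replace" } }  // write the parsed documents back
]);
```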
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Array imported as string | 2023-01-25T06:56:03.676Z | Array imported as string | 1,304 |
|
null | [] | [
{
"code": "",
"text": "Hi Everyone,I’m John - I’ve been an engineer, architect and consultant here at MongoDB for the last 8 years and I recently joined our Developer Advocacy team.One of my plans is to create some Open-source applications, built on the MongoDB platform (including Realm) that are owned and developed by a community. I’d like to get to a state where someone with no previous understanding of MongoDB can come along and with a few instructions pull the code from Github, push it to Realm and start running a private application with a backend to meet an actual need.For example, last year I saw lots of people wanting to run online quizzes for fun and education. Whilst there are “Quiz as a Service” applications out there they lack the flexibility in their free tiers. With your own Atlas cluster and Realm app + source that would not be an issue.I’m also aware that decentralised Twitter style social media is a popular idea, that’s another option I was looking at.My intent is to be very closely involved in the early design and development phases for this (even if it’s me alone) and to use that to create articles on schema design, code design, scaling and all that stuff. Ultimately I’d like something that a community can own, grow, fork and importantly just use with no coding if that’s what they want.I’m putting this here to ask for help - a community effort start in a community. Initially I’m looking for application ideas beyond the two I have given - or feedback on those. I think, for it to be worthwhile it should be a multi-user application, that a non developer might want to host and use. Suggestions I have so far.What else can you think of?",
"username": "John_Page"
},
{
"code": "",
"text": "What about, similar to a quiz… a form-builder. Similar in schema to a quiz app - but a bit more flexible in the types of components.",
"username": "Michael_Lynn"
},
{
"code": "",
"text": "Do you mean a Generic UI for data entry and retrieval in forms @Michael_Lynn ?",
"username": "John_Page"
},
{
"code": "",
"text": "Yes - think google forms - but MongoDB backed… complete with suggested charts for data presentation.",
"username": "Michael_Lynn"
},
{
"code": "",
"text": "Ah Google Forms - that’s different I was thinking Oracle Forms - just a generic business app for finding and entering data.",
"username": "John_Page"
},
{
"code": "",
"text": "John, I was just this week pondering that my own Wordpress-based site needs to come into the MongoDB fold, but I’m unsure if some type of “migrate your Wordpress site to Realm” effort would be anything short of a Goat Rodeo™. If there’s a smidgen of merit here, I’d be interested in pitching in - just haven’t had time to give it any thought to this point.",
"username": "Eric_Reid"
},
{
"code": "",
"text": "Hi @Eric_Reid ,\nThis effort is sort of Inspired by Wordpress and how it drove php and the development of the LAMP stack (In my opinion) but I wasn’t considering writing a replacement for it simply because CMS is a pretty crowded marketplace and also not one that I’m personally enthused by. Something smaller with less alternative options would be better.",
"username": "John_Page"
},
{
"code": "",
"text": "Had a bit of out-of-forum feedback. What do folks thinks about a completely generic, forms based entry/edit/find/search/link interface - essentially can be used for any kind of data entry/retrieval by any organisation or individual. Less specific but possibly a better starting point",
"username": "John_Page"
},
{
"code": "",
"text": "love it. (and a few more characters here because I need 20 to actually post this… so.)",
"username": "Michael_Lynn"
},
{
"code": "",
"text": "Hi @John_Page, I would love to contribute here.\nOne of the things that I enjoy the most is building full-stack apps using React, Node, and MongoDB. Please let me know if I can be of any help.Thanks and Regards.",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "Really like the whole Oracle Forms analogy. Basically, I think there is still a gap in the market for internal departmental-wide “rough and ready” data applications, where department staff need to manage data but don’t necessarily have the time or skills to turn to a programming language and full-stack environment to build such stuff from scratch. Follwing the demise of Oracle Forms, Oracle revisited this paradigm with APEX Oracle APEX but the challenge is probably knowing when to stop and enough features are there, otherwise it becomes a big montster that is hard for lay people to understand, thus defeating the purpose of it in the first place. Other relevant keywords: WYSIWYG, Low-Code, No-Code.I’d be up for contributing!( in a previous life, a very very long time ago, I was actually an engineer in Oracle’s Designer/2000 group - I was in the Form Generator team writing C code to generate pre-built Oracle Forms (Developer/2000) from a data model and some metadata, including generating client-side PL/SQL (those were the days! ).",
"username": "Paul_Done"
},
{
"code": "",
"text": "Hi John,\nsince a year I am carrying an idea with me, but never had time to kick it on. A nature protection project.\nBasically so simple as in counting populations of birds in nesting boxes. Build statistics, do prediction, merge with geo data to show the influence of traffic, weather, construction…\nThis would cover,There are much more details, this only as a teaser.regrads,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi @michael_hoeller , have you looked a the MongoDB WildAid app (MongoDB Cloud To The Rescue: Protecting Our Oceans With An Open-Source Application | MongoDB Blog) its is very close to that already.Is it not a bit of a bit of a niche use case though?, I’d like to think a few years from now we have 10,000 installations.",
"username": "John_Page"
},
{
"code": "",
"text": "Why not all of the above? That sounds cheeky, but hear me out…Something Drupal has the notion of is “distributions” which are essentially various starter kits with some configuration and add-ons pre-installed: Drupal for Commerce, Drupal for Social, Drupal for Publishing, Drupal for Government, etc.So what if we pulled your awesome idea up a level, we could potentially create a whole new category of thing, “MongoDB Starter Kits” or what have you… outline some guidelines/templates to follow, and then invite the Community (capital “C,” inclusive of MongoDB internal folk as well) to contribute these.Then the submissions could be inclusive of both pragmatic, solve X problem things (Quiz, Twitter clone), as well as more “framework-y” things (Form Builder), and even more niche interests like a nature protection project. All would have a place.Sorry, not meaning to add scope. Just putting forth the idea that while you’re figuring out which of these ideas to work on first (which you have to do either way), also maybe try and think about what infrastructure would need to be in place to have 20 or 30 of them. I’m super happy to help with brainstorming this, if you think it’s a good idea. ",
"username": "webchick"
},
{
"code": "",
"text": "That’s sort of the plan (MongoDB starter kits), to have many of them however I am very wary of creating a framework at the expense of the code that’s actually needed. I’m a great believer in writing understandable, modifiable, reusable code and classes rather than any sort of explicit framework. We can go the dedicated app but modifiable code route - or we can go the no-code (unless you want to) multi purpose app route but I honestly we have the framework in Realm Server we just need to build things with it that aren’t more layers.",
"username": "John_Page"
},
{
"code": "",
"text": "Sorry, I was not suggesting that we use a framework for what you’re proposing… BIG +1 on understandable, modifiable, reusable code! The closer to “bare bones” Realm, the easier it will be for folks to understand/reuse.It was more a general note that it would be awesome to start from Day 1 planning to make an entire repository of these things, and thinking about the governance; specifically, how to invite contribution from the wider community to add additional ones, and what that might look like.",
"username": "webchick"
},
{
"code": "",
"text": "For example here’s Redis’s version of what I’m talking about: https://launchpad.redis.com/ Is this about what you are thinking, as well?",
"username": "webchick"
},
{
"code": "",
"text": "I like the concept and sharing at this early stage.I also think the majority of you are aiming for the sky. Which is not bad perse , but Ifor developers to go in and out and contribute the initial scope should be smaller.Either just some npm package or a react component.Unless the core developers are willing to give few months coding until the first MVP, docs, website, GitHub issue templates, etc.\nOr mongodb pays for the Devs time Aim for the sky but don’t forget where your foot is going to land next .Lastly, don’t forget to make it fun. A side project shouldn’t be tedious and full of burocreacy ",
"username": "Gianfranco_Palumbo"
},
{
"code": "",
"text": "Hi Gianfrance,\nFor me this won’t be just some side project but part of my paid job - as I said If I have to write it alone then so be it. I was canvasing for ideas and anyone that might want to assist now or in the future.The purpose it to create the first (of hopefully a number) of apps that someone can use without doing any coding if they don’t want to - or they can modify to their needs. I’m not looking to create frameworks or components for their own sake.\nJohn.",
"username": "John_Page"
},
{
"code": "",
"text": "I also think the majority of you are aiming for the sky. Which is not bad perse , but Ifor developers to go in and out and contribute the initial scope should be smaller.Well, in my mind the first iteration could be as simple as 1) a designated place in GitHub to stick these “starter kits” and 2) a README that explains how to add more of them.But fair enough, back to the topic at hand. How I can help is in reviewing and improving documentation to get started with the code. I have the uncanny ability to stumble into problems, and am happy to help someone else not do the same. ",
"username": "webchick"
}
] | Starting a new FOSS project, call for assistance | 2021-10-07T10:37:01.251Z | Starting a new FOSS project, call for assistance | 9,922 |
null | [
"queries",
"node-js",
"mongoose-odm",
"serverless"
] | [
{
"code": "const data = await User.find(paginatedQuery, User.getFields(fieldsToGet[role]), {\n\t\t\tsort: sorter,\n\t\t\tlimit: +limit\n\t\t});\npaginatedQuery.collation({ locale: 'en' })",
"text": "Hi, I’m using mongodb with mongoose for a NodeJs api and I have a problem with the missing of collation in Serverless instances.\nIn the API I give the possibility to sort by any value and paginate the results, here is a snippet of code:where paginatedQuery is a query with some optional regex (on the frontend I have a search bar to filter users by name) and some other filters used to get the pagination done.The problem is that when I sort by a String field because I would use the .collation({ locale: 'en' }) which is not supported on Serverless. Is there another way I can sort the results alphabetically as I need? I can’t do that after I got the result because the pagination is based on the sorted values.",
"username": "Emanuele_Caruso"
},
{
"code": "",
"text": "Hi,\nYou are right that Serverless instances do not support collations. I am curious, why can’t you use default sorting order - is there a specific collation logic you are using?MongoDB Atlas Serverless PM team",
"username": "Vishal_Dhiman"
},
{
"code": "find({\"fname\": { \"$regex\" : \"sometext\", \"$options\": \"i\"}}).collation({locale: \"en_US\"})Error : {\"error\":\"{\\\"message\\\":\\\"'collation' is not a function\\\",\\\"name\\\":\\\"TypeError\\\"}\",\"error_code\":\"FunctionExecutionError\"}\n",
"text": "Seems to be the same issue with Dedicated instance as well?\nI am trying to apply regex over compound index using collation of locale en_US.Using\nfind({\"fname\": { \"$regex\" : \"sometext\", \"$options\": \"i\"}}).collation({locale: \"en_US\"})I am getting following error:Any advice or suggestion is much appreciated.\nThank you",
"username": "RamaKishore_K"
},
{
"code": "db.collection.find(query, projection, {collation: {locale: \"en_US\", strength: 2}})",
"text": "I have made this working in dedicated instance server-less function query using find options.db.collection.find(query, projection, {collation: {locale: \"en_US\", strength: 2}})Thank you.",
"username": "RamaKishore_K"
},
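For anyone hitting the same \"'collation' is not a function\" error: the workaround above simply moves the collation from the unsupported cursor helper into the options argument of find(). Below is a minimal sketch of the same idea from Mongoose on a Dedicated cluster; the model, field names, and locale are placeholders, not the original poster's schema.

// Hypothetical User model; the collation is passed as a query option instead of
// calling .collation() on the returned query/cursor.
const docs = await User.find(
  { fname: { $regex: searchText, $options: "i" } },   // filter
  { fname: 1, lname: 1 },                              // projection
  {
    collation: { locale: "en_US", strength: 2 },       // case-insensitive comparisons
    sort: { fname: 1 },
    limit: 20,
  }
);

Note that this only helps where collation is supported at all; on Serverless instances the limitation described earlier in the thread still applies.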
{
"code": "",
"text": "There are several reasons why it would be useful to use collation, but the main one is to show to the end user the data in the order he expects (eg. Sorting a table by an alphanumeric field)",
"username": "Emanuele_Caruso"
}
] | Collation problem on Serverless instance | 2022-09-15T08:55:56.273Z | Collation problem on Serverless instance | 2,916 |
null | [
"aggregation",
"queries",
"node-js",
"indexes"
] | [
{
"code": "\"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 773918.0,\n \"executionTimeMillis\" : 8440.0,\n \"totalKeysExamined\" : 773927.0,\n \"totalDocsExamined\" : 773927.0,\n}\n \"winningPlan\" : {\n \"stage\" : \"PROJECTION_DEFAULT\",\n \"transformBy\" : {\n \"actor.user.email\" : 1.0,\n \"actor.user.name\" : 1.0,\n \"actor.user.uuid\" : 1.0,\n \"time\" : 1.0,\n \"_id\" : 0.0\n },\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [\n {\n \"actor.user.email\" : {\n \"$not\" : {\n \"$eq\" : \"[email protected]\"\n }\n }\n },\n {\n \"customer.uuid\" : {\n \"$eq\" : \"039e5026-be55-4ec6-a0ac-5ec82be0313b\"\n }\n },\n {\n \"sequr_code\" : {\n \"$eq\" : \"SEQUR_ACCESS_GRANTED\"\n }\n }\n ]\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"location.uuid\" : 1.0,\n \"time\" : -1.0\n },\n \"indexName\" : \"location_uuid\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"location.uuid\" : [\n\n ],\n \"time\" : [\n\n ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2.0,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"location.uuid\" : [\n \"[\\\"9eb44f41-3788-4b5b-b8f1-5f12b3956f69\\\", \\\"9eb44f41-3788-4b5b-b8f1-5f12b3956f69\\\"]\"\n ],\n \"time\" : [\n \"[new Date(1666411200000), new Date(1666238400000)]\"\n ]\n }\n }\n }\n },\n",
"text": "Hi everyone,I am searching on more than 3 million records but fetching the data is very slow despite of using the indexes.The problem here is $ne and $regex. While using other method like $eq, $in, $gt, $lt it is working fine and fetching the data very fast because it is using the correct index but while using the same query with $ne and $regex, it is taking more than 1 minute which is not feasible for my case as user wants the data within 3-4 seconds.I have spent a lot of time in creating the proper index and by using the other methods it is also using the same index which I want but for $ne and $regex, it is behaving very different.I am not able to figure out what to do to overcome this.The problem with $ne and $regex is that it needs to scan a lot of data.One scenario as an example :\nuser searches for email : {$ne:“[email protected]”}, so in this case it must find the email not equal to “[email protected]” so a lot of emails are there to scan so it is scanning so many docs which is taking more than 1 min so can you please suggest what to do?Execution stats for such scenario :Thanks.",
"username": "Vishalanand_Prajapati"
},
{
"code": "",
"text": "I have spent a lot of time in creating the proper indexIt would be interesting to see the indexes that you have because while you get an IXSCAN, then index used has none of the fields specified in your query. It would be very useful to see the exact query that you run. Do you provide an index hint or something like this? Do you have a $sort stage on location.uuid and time or something that would force to use the index location_uuid?Note that nReturned is 773_918 which is quite a big number of documents which means the query is not really selective.I am not too sure aboutThe problem with $ne and $regex is that it needs to scan a lot of data.because nReturned is pretty close to totalKeysExamined.There is not much more to say without more information.Your problem might also be a lack of resources. What is the total size of your data? What is the RAM size? Disk type and size? Are the client and servers on the same machines? If not how are they connected?",
"username": "steevej"
},
{
"code": " [\n {\n $match: {\n \n $and: [\n\n {\"actor.user.name\":{$ne:\"Adam Prescott\"}}, \n { \"actor.user.email\": { $regex: \"[email protected]\", $options:'i' } },\n { \"card.card_number\": { $eq: \"12006461200\" } },\n { \"actor.user.access_groups.name\": { '$eq': \"Admin\" } },\n { \"location.uuid\": { $in: [\"9eb44f41-3788-4b5b-b8f1-5f12b3956f69\"] } },\n { 'customer.uuid': { $eq: \"039e5026-be55-4ec6-a0ac-5ec82be0313b\" } },\n {\"custom_attributes.key\":{$regex:\"licens\"}},\n\n\n ],\n sequr_code: \"SEQUR_ACCESS_GRANTED\",\n\n time: {\n $gte: new Date(\"2022-10-16T04:00:00.000Z\"),\n $lte: new Date(\"2022-10-22T04:00:00.000Z\"),\n },\n },\n },\n {\n $group: {\n _id: {\n user_uuid: \"$actor.user.uuid\",\n\n date_string: {\n $dateToString: {\n format: \"%Y-%m-%d\",\n date: {\n $toDate: \"$time\",\n },\n timezone: \"America/New_York\",\n },\n },\n },\n exit_time: {\n $max: \"$time\",\n },\n entry_time: {\n $min: \"$time\",\n },\n records: {\n $first: {\n // 'time': '$time',\n user_name: \"$actor.user.name\",\n email: \"$actor.user.email\"\n },\n },\n },\n },\n {\n $sort: {\n\n \"_id.date_string\": 1,\n // time:1,\n \"records.user_name\": 1,\n \"_id.user_uuid\": 1,\n },\n },\n { $limit: 10 },\n\n ]\n{\n \"_id\": {\n \"$oid\": \"6375c63bcaa9987618048048\"\n },\n \"scp_reply\": {\n \"nSCPNumber\": 164,\n \"type\": 7,\n \"ser_num\": 46,\n \"time\": 1654061180,\n \"source_type\": 9,\n \"source_number\": 0,\n \"tran_type\": 22,\n \"tran_code\": 13,\n \"format_number\": 3,\n \"cardholder_id\": \"1654061180\",\n \"floor_number\": 0,\n \"card_type_flags\": 0,\n \"elev_cab\": 0\n },\n \"cameras\": [],\n \"customer\": {\n \"uuid\": \"039e5026-be55-4ec6-a0ac-5ec82be0313b\",\n \"name\": \"Genea test support\"\n },\n \"location\": {\n \"uuid\": \"9eb44f41-3788-4b5b-b8f1-5f12b3956f69\",\n \"name\": \"GENEA test support India\",\n \"timezone\": \"Asia/Kolkata\"\n },\n \"time\": {\n \"$date\": {\n \"$numberLong\": \"1654061180000\"\n }\n },\n \"type\": \"ACCESS\",\n \"message\": \"Access Granted\",\n \"controller\": {\n \"uuid\": \"97aabb0c-7894-41df-a4ca-bfe3edf89c49\",\n \"name\": \"LP4502 450 (LPSERIES)\",\n \"mac\": \"00:0f:e5:0b:61:64\",\n \"model\": \"LP4502(LPSERIES)\",\n \"firmware_version\": \"1.29.6 (654)\",\n \"serial_number\": \"1004680\",\n \"connection\": {\n \"primary_host_connection\": {\n \"data_security_mode\": \"TLS Required\",\n \"encryption_status\": \"TLS Encrypted\",\n \"connection_type\": \"IP Client\"\n }\n }\n },\n \"actor\": {\n \"type\": \"USER\",\n \"user\": {\n \"uuid\": \"6fff0a80-8409-4008-9850-9ec8fb8431ee\",\n \"name\": \"Aditya Raval\",\n \"email\": \"[email protected]\",\n \"avatar_file_name\": \"e48d7bbe-cca1-43ef-b48e-8030ec07e74b_1615202698.jpeg\",\n \"department\": \"Sales\",\n \"employee_number\": null,\n \"cost_center\": \"USA\",\n \"role\": \"ADMIN\",\n \"access_groups\": [\n {\n \"uuid\": \"91584e8c-4019-48c6-957b-0dbf45df3af1\",\n \"name\": \"Admin\"\n },\n {\n \"uuid\": \"3e23bd56-fcf7-40db-88a6-21bdc18996f3\",\n \"name\": \"Employee\"\n },\n {\n \"uuid\": \"fb83eed4-a84c-48c2-82fd-d71ff74a6780\",\n \"name\": \"Elevator Test\"\n }\n ],\n \"elevator_access_groups\": [\n {\n \"uuid\": \"6c4c0fc1-5577-48c9-bd6a-e83627e00cf5\",\n \"name\": \"All Floors\"\n },\n {\n \"uuid\": \"4c2ce295-d936-451b-b9d2-fb3396f76c9b\",\n \"name\": \"First Floor\"\n },\n {\n \"uuid\": \"2054fbbc-e0d6-477f-a5ca-78126cc5a5b5\",\n \"name\": \"Low Rises\"\n }\n ],\n \"is_card_access_revoked\": false,\n \"card_access_revoked_source\": null\n }\n },\n \"style\": {\n \"icon\": \"fa 
fa-circle-o green\"\n },\n \"sequr_code\": \"SEQUR_ACCESS_GRANTED\",\n \"note\": null,\n \"ip_address\": null,\n \"card\": {\n \"uuid\": \"fd192ab6-65b2-4c19-a041-6b9bad5fec12\",\n \"card_number\": \"12006461200\",\n \"type\": \"KEYCARD\",\n \"pin\": null\n },\n \"door\": {\n \"uuid\": \"42ac456e-df9b-4146-afb7-619dd584a524\",\n \"name\": \"Door Plaza Parking entry 1\",\n \"is_elevator_door\": false,\n \"elevator_door_type\": null,\n \"is_door_force_masked\": true,\n \"is_door_held_masked\": true,\n \"is_door_force_seen\": false,\n \"is_temperature_screening\": false\n },\n \"area\": null,\n \"interface_panel\": {\n \"uuid\": \"7056bcd0-140f-4381-93f6-2dd090c2c349\",\n \"name\": \"Internal SIO - Panel 27\"\n },\n \"control_point\": [\n {\n \"uuid\": \"8aedd858-cca9-4887-9d43-4953b933fa77\",\n \"name\": \"Gen Plaza CP\"\n },\n {\n \"uuid\": \"6c4c0fc1-5577-48c9-bd6a-e83627e00cf5\",\n \"name\": \"Plaza controller\"\n }\n ],\n \"monitor_point\": [\n {\n \"uuid\": \"8aedd858-cca9-4887-9d43-4953b933fa77\",\n \"name\": \"Gen Plaza MP1\"\n },\n {\n \"uuid\": \"6c4c0fc1-5577-48c9-bd6a-e83627e00cf5\",\n \"name\": \"Plaza monitor point\"\n }\n ],\n \"schedule\": null,\n \"card_format\": {\n \"uuid\": \"0306fca9-3757-4632-95ed-0b7be3e93407\",\n \"customer_uuid\": \"2abbb9a1-d102-4608-b798-e1c6128528e7\",\n \"is_deleted\": false,\n \"name\": \"37 bits\",\n \"description\": \"\",\n \"category\": \"PHYSICAL\",\n },\n \"updated_at\": {\n \"$date\": {\n \"$numberLong\": \"1668662843275\"\n }\n },\n \"created_by_user_uuid\": null,\n \"updated_by_user_uuid\": null,\n \"is_200_bit_fascn_to_128_bit_version_conversation\": false,\n \"is_card_id_check_with_other_formats\": false\n },\n \"additional_info\": {\n \"Card Number\": \"12006461200\",\n \"Format Number\": 3,\n \"Format Name\": \"37 bits\",\n \"description\": \"Access Granted - Full Test, Door Used\"\n },\n \"custom_attributes\": [\n {\n \"key\": \"testcheckbox\",\n \"value_text\": [\n \"TB\"\n ]\n },\n {\n \"key\": \"sample_date_picker\",\n \"value_date\": \"2022-05-08T05:50:24.000Z\"\n },\n {\n \"key\": \"license_plate\",\n \"value_text\": \"license may 8518\"\n },\n {\n \"key\": \"testaditya\",\n \"value_text\": \"TB\"\n }\n ],\n \"tenant\": null,\n \"elevator\": null,\n \"floor\": null,\n \"show_on_ui\": true,\n \"is_camera_centric\": true,\n \"created_at\": {\n \"$date\": {\n \"$numberLong\": \"1668662843275\"\n }\n }\n}\n1. \n {\n \"v\" : 2.0,\n \"key\" : {\n \"location.uuid\" : 1.0,\n \"customer.uuid\" : 1.0,\n \"sequr_code\" : 1.0,\n \"time\" : 1.0,\n \"actor.user.name\" : 1.0,\n \"actor.user.email\" : 1.0,\n \"actor.user.department\" : 1.0,\n \"actor.user.employee_number\" : 1.0,\n \"actor.user.cost_center\" : 1.0,\n \"custom_attributes.key\" : 1.0,\n \"custom_attributes.value_text\" : 1.0,\n \"custom_attributes.value_date\" : 1.0,\n \"card.card_number\" : 1.0,\n \"card.pin\" : 1.0\n },\n \"name\" : \"custom_default\"\n },\n \n2. \n {\n \"v\" : 2.0,\n \"key\" : {\n \"location.uuid\" : 1.0,\n \"customer.uuid\" : 1.0,\n \"sequr_code\" : 1.0,\n \"time\" : 1.0,\n \"actor.user.name\" : 1.0,\n \"actor.user.email\" : 1.0,\n \"actor.user.department\" : 1.0,\n \"actor.user.employee_number\" : 1.0,\n \"actor.user.cost_center\" : 1.0,\n \"actor.user.access_groups.name\" : 1.0,\n \"card.card_number\" : 1.0,\n \"card.pin\" : 1.0\n },\n \"name\" : \"filterable_fields\"\n },\n\n3.\n {\n \"v\" : 2.0,\n \"key\" : {\n \"location.uuid\" : 1.0,\n \"time\" : -1.0\n },\n \"name\" : \"location_uuid\",\n \"background\" : true\n },\n",
"text": "This is the aggregation query I am using:I need to first match all the fields as per the user input and then actual searching starts.This is the sample doc with dummy data:Here is the most used indexes:Other indexes are also there but this is the index which is being used 90% of the time with the above query. I need to find only ten records no more than 10.Current data size is 3 million which is not that high I guess. If configuration is the problem then for selective fields index it should not work very fast but with this it is giving result in milliseconds but I am stuck at a situation like this:If user made search for username and he has mispelled the username i.e. mispelled name is not there in an entire Database then it is taking so much time and same with the regex.",
"username": "Vishalanand_Prajapati"
},
{
"code": "",
"text": "If configuration is the problem then for selective fields index it should not work very fast but with this it is giving result in millisecondsNot necessary. A selective index would require much less work since documents that are not selected do not need to be read from storage while a non-selective index would need to fetch much more documents before return the result.A $regex that is not anchored at the beginning is a lot slower that one. Rather than using $regex and the ignore case option for your email matches, I strongly recommend that you normalize your data by converting to all lower-case. You would then avoid needing to use regex.",
"username": "steevej"
},
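To make the lower-casing suggestion concrete, here is a minimal sketch. The collection and field names are illustrative (loosely based on the sample document above), and the exact index should be adapted to the real query shapes: keep a lowercased copy of the email, index it, and lower-case the user's input so an equality or ^-anchored regex can use the index.

// One-off backfill with a pipeline update (MongoDB 4.2+); adjust names as needed.
db.events.updateMany(
  { "actor.user.email": { $type: "string" } },
  [ { $set: { "actor.user.email_lc": { $toLower: "$actor.user.email" } } } ]
);

// Index supporting filters on customer + normalized email, newest first.
db.events.createIndex({ "customer.uuid": 1, "actor.user.email_lc": 1, time: -1 });

// At query time, normalize the input instead of using the "i" regex option.
const customerUuid = "039e5026-be55-4ec6-a0ac-5ec82be0313b";   // example value from the thread
const input = "[email protected]".toLowerCase();

// Exact match - straightforward index use.
db.events.find({ "customer.uuid": customerUuid, "actor.user.email_lc": input });

// Prefix ("starts with") match - an anchored regex keeps the index bounds tight.
db.events.find({ "customer.uuid": customerUuid, "actor.user.email_lc": { $regex: "^" + input } });

A non-anchored "contains" search or a $ne condition will still examine a large part of the index; the normalization mainly removes the extra cost of case-insensitive matching.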
{
"code": "",
"text": "Sorry for the late reply.\nBut the problem which I am facing is if user is giving random input let’s say email is [email protected] and user is searching for email contains bcd but he is giving input like this BCd so for this we need to apply case insensitive input and also we do not know whether the provided input word starting alphabet is same as our actual data, means user can provide anything like this “sfdgssg”, so on this scenario fetching the data is taking a lot of time.Also it is not specifically for email there are some fields which contains Capital and small letter together so for those fields it will be a problem.",
"username": "Vishalanand_Prajapati"
}
] | $ne and $regex does not use proper index | 2022-12-28T12:27:40.356Z | $ne and $regex does not use proper index | 2,486 |
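The thread ends here, but for the remaining requirement (a case-insensitive "contains" search on arbitrary input) a non-anchored $regex will always scan a large index range. On Atlas, a commonly suggested alternative is an Atlas Search index, whose default analyzer lower-cases text at index time. A rough, untested sketch along those lines; the index name, mapped path, and operator choice are assumptions, not something taken from the thread:

// Assumes an Atlas Search index named "default" exists on this collection with
// "actor.user.email" mapped as a string field.
db.events.aggregate([
  {
    $search: {
      index: "default",
      wildcard: {
        query: "*bcd*",                 // pass the user's input lower-cased
        path: "actor.user.email",
        allowAnalyzedField: true
      }
    }
  },
  { $match: { "customer.uuid": "039e5026-be55-4ec6-a0ac-5ec82be0313b" } },
  { $limit: 10 }
]);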