Columns: image_url (string, lengths 113–131), tags (list), discussion (list), title (string, lengths 8–254), created_at (string, length 24), fancy_title (string, lengths 8–396), views (int64, 73–422k).
null
[]
[ { "code": "", "text": "Hello there,\nIn September, I requested a code for the Developer certification that you could obtain using your student account. (MongoDB Certification Voucher Code - #2 by Lieke_Boon)Unfortunately I was too busy at the moment to actually use it and wasn’t able to use it. Now that I have some free time, I want to register for the upcoming exam, but it seems that I can’t find the code anymore. I think my email provider might have deleted the mail, as it was too old.Would it be possible to resend the email again with the code?I apologize for the inconveniences.", "username": "Bo_Robbrecht" }, { "code": "", "text": "Hi there and welcome to the MongoDB for Academia Community forum!I’m happy to resend your voucher code. If you’d like to DM me your Github username, I’ll verify your information and resend your code.", "username": "Aiyana_McConnell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to retrieve voucher code
2022-10-28T17:20:44.777Z
Unable to retrieve voucher code
1,907
null
[ "aggregation" ]
[ { "code": "{\n \"_id\":{\"$oid\":\"635a732932edf20bed23da5a\"},\n \"name\":\"Tomato with egg\",\n \"recipe\":\"Mix tomato with egg.\",\n \"image\":\"/images/tomatowithegg.png\",\n \"portions\":{\"$numberDouble\":\"2.0\"},\n \"ingredients\":\n [\n {\"product_id\":{\"$oid\":\"635a715b32edf20bed23da54\"},\n \"serving\":\"tbsp\",\n \"amount\":{\"$numberDouble\":\"2.0\"}},\n {\"product_id\":{\"$oid\":\"635a71fb32edf20bed23da55\"},\n \"serving\":\"piece\",\n \"amount\":\"1\"}\n ]\n}\n{\n \"_id\":{\"$oid\":\"635a715b32edf20bed23da54\"},\n \"name\":\"Tomato\",\n \"image\":\"/images/tomato.png\",\n \"nutrients\":\n {\n \"kcal\":\"300\",\n \"carbohydrates\":\"100\",\n \"fat\":\"50\",\n \"protein\":\"2\"\n },\n \"servings\":\n {\n \"g\":\"1\",\n \"piece\":\"200\",\n \"tbsp\":\"10\",\n \"tsp\":\"5\",\n \"ml\":\"1\"\n }\n}\n\n{\n \"_id\":{\"$oid\":\"635a71fb32edf20bed23da55\"},\n \"name\":\"Egg\",\n \"image\":\"/images/egg.png\",\n \"nutrients\":\n {\n \"kcal\":\"300\",\n \"carbohydrates\":\"100\",\n \"fat\":\"50\",\n \"protein\":\"2\"\n },\n \"servings\":\n {\n \"g\":\"1\",\n \"piece\":\"200\",\n \"tbsp\":\"10\",\n \"tsp\":\"5\",\n \"ml\":\"1\"\n }\n}\n[{\n $lookup: {\n from: 'products',\n localField: 'ingredients.product_id',\n foreignField: '_id',\n as: 'products'\n }\n}]\n", "text": "Hello,I am new to MongoDB and want to design a database for my small food planning application.I would like to join them with a lookup, so that ingredients array in meals collection becomes enriched with all the product information.This is my playground:Mongo playground: a simple sandbox to test and share MongoDB queries onlineMeals collectionProducts collectionThis is my current pipeline, but it results in two arrays - ingredients and products.Could you please help me with the Lookup?Also, would there be a way to automatically calculate nutrients for a meal, based on products inside?Thank you,\nLukas", "username": "Lukasz_Filipiuk" }, { "code": "as : 'ingridiants'", "text": "Hi @Lukasz_Filipiuk ,If you want to replace the ingredients array with the products lookup you should use as : 'ingridiants' in the lookup this will project the ingredients output rather then have it being added .In the next stage after the lookup you can use $group (on “$_id” ) with $sum of the elements for the nutrients sum (it will work on arrays too), however, I see that nutrients is an object with many attributes. Do you need the sum of all attribute or specific?I want to say that if this query of showing ingredients is a popular query you should avoid the lookup and ave a schema design where full ingredients are embedded in the meal document. I don’t expect a meal to have too much of those…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "ingredients:\n[\n {\n product_id: \"32786423876\",\n serving: \"piece\",\n quantity: 4\n }\n]\n{\n {\n _id: \"32786423876\"\n name: \"Tomato\"\n nutrients:\n {\n kcal: \"100\",\n carbohydrates: \"10\",\n fat: \"1\",\n protein: \"2\"\n }\n servings:\n {\n g: \"1\",\n piece: \"80\",\n ml: \"1\"\n }\n }\n}\n\n", "text": "Hi @Pavel_Duchovny thank you very much for your reply!Ingredients array has 3 fields - product_id, serving and quantity. I would love to lookup products so I get my detailed products information as well as serving and quantity, which come from ingredients array. 
It would be great to keep it separate, as user will be able to compose meals from the list of products, so one product may be reused.ingredients array in meals collectionproducts collectionWhen it comes to calculation, its going to be harder - I may do the calculation on the backend instead of the database query. I would like to check what servings the ingredients are (e.g. in the recipe we use 4 tomatoes). Then in servings object in products collection I have information that one piece of tomato weighs 80 grams. Based on that information I can calculate nutrients of 4 tomatoes (as nutrients object contains nutrients per 100 g). That way I could calculate total nutrients for the whole meal.", "username": "Lukasz_Filipiuk" }, { "code": "", "text": "@Lukasz_Filipiuk ,You can keep them separately and duplicate for embedding all together.Are the attributes updated frequently ?Merging data from one array with a lookup from another makes it pretty complex therefore if that is the main purpose of the application (“showing meal details and planning”), its smart to embed.Believe me it will save you hours of coding and headache Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_DuchovnyI will trust you on that and try embedding.The attributes will not be updated very frequently (at least for now).Purpose of the application is “showing meal detail and planning”. User will be able to create products, then from these products he will be able to compose meals. Then these meals will be used to plan meals and generate a shopping list. The idea is to have a very simple planning application.I went with the lookup approach since it’s more SQL-like, and that’s where I come from. Embedding is something new for me, but I want to try that, if that’s more efficient.Thanks,\nLukas", "username": "Lukasz_Filipiuk" }, { "code": "", "text": "@Lukasz_Filipiuk ,Its definitely more suitable for your use case.I suggest reading the following:Have you ever wondered, \"How do I model a MongoDB database schema for my application?\" This post answers all your questions!Get a summary of the six MongoDB Schema Design Anti-Patterns. Plus, learn how MongoDB Atlas can help you spot the anti-patterns in your databases.Also there is a great course free on our university site called MongoDB for sql pros:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Thanks", "username": "Pavel_Duchovny" }, { "code": "\n{\n _id: ObjectId(\"meal1\"),\n ingredients: \n [\n {\n product_id: ObjectId(\"AAA\"),\n serving: \"piece\"\n quantity: \"1\"\n },\n {\n product_id: ObjectId(\"BBB\"),\n serving: \"tbsp\"\n quantity: \"3\"\n },\n ]\n}\n{\n {\n _id: ObjectId(\"AAA\"),\n nutrients :\n {\n kcal: \"100\"\n carbohydrates: \"50\"\n fat: \"10\"\n protein: \"2\"\n },\n servings:\n {\n g: \"1\"\n tbsp: \"10\" \n piece: \"30\"\n ml: \"1\"\n },\n },\n {\n _id: ObjectId(\"BBB\"),\n nutrients :\n {\n kcal: \"50\"\n carbohydrates: \"40\"\n fat: \"5\"\n protein: \"5\"\n },\n servings:\n {\n g: \"1\"\n tbsp: \"15\" \n piece: \"300\"\n ml: \"1\"\n },\n },\n}\n", "text": "@Pavel_Duchovny thank you very much for these!I still wonder how to apply this into my scenario, where I believe we have a Many-To-Many case. In my app, product may be in many meals and a meal may be made of many products.I already did embed ingredients - in the beginning I wanted to have three collections - products, ingredients and meals. 
An ingredient is a product with a serving and quantity. Products to Ingredients would be one to many relationship and ingredients to meals would be one-to-one. That’s why I embedded ingredients inside meals. I still have a problem with products though.I would love to try embedding, but I still don’t understand how can I reuse the same product in many meals.This is my current example. I would love to end up with my products embedded inside meals, but one product can appear in many meals and meals have many products.mealproductsThank you,\nLukas", "username": "Lukasz_Filipiuk" }, { "code": "{\n _id: ObjectId(\"meal1\"),\n ingredients: \n [\n {\n product_id: ObjectId(\"AAA\"),\n serving: \"piece\"\n quantity: \"1\",\n nutrients :\n {\n kcal: \"100\"\n carbohydrates: \"50\"\n fat: \"10\"\n protein: \"2\"\n },\n servings:\n {\n g: \"1\"\n tbsp: \"10\" \n piece: \"30\"\n ml: \"1\"\n },\n },{\n product_id: ObjectId(\"BBB\"),\n serving: \"tbsp\"\n quantity: \"3\",\n nutrients :\n {\n kcal: \"50\"\n carbohydrates: \"40\"\n fat: \"5\"\n protein: \"5\"\n },\n servings:\n {\n g: \"1\"\n tbsp: \"15\" \n piece: \"300\"\n ml: \"1\"\n },\n },\n}\n\n ]\n}\n", "text": "Hi @Lukasz_Filipiuk ,So the idea is you can keep the products collection as is and show users what kind of products there is by querying this collection.However you can use an extended pattern reference to duplicate the needed fields when you add a product as ingredients to a specific meal , if 1000 meals have the same ingredients copy them 1000s times.Make sense?Using a reference to improve performanceNow making any logic on the suming can be done with the use of $sum, $add or $multiply fully through aggregation:Thanks", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny these suggestions are great! I will now spend time to learn and apply this in my project! Thank you!", "username": "Lukasz_Filipiuk" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Joining two collections and possibly calculating totals
2022-10-27T19:43:37.681Z
Joining two collections and possibly calculating totals
2,305
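For the lookup question in the thread above, here is a minimal mongosh sketch (untested against the original data) that folds each looked-up product back into its ingredient entry, so the ingredients array carries serving, amount and the product details together. Collection and field names (meals, products, ingredients.product_id) are taken from the thread.

```javascript
// Sketch: enrich each ingredient with its matching product ($lookup + $map/$mergeObjects).
db.meals.aggregate([
  {
    $lookup: {
      from: "products",
      localField: "ingredients.product_id",
      foreignField: "_id",
      as: "productDocs"               // temporary array of matched product documents
    }
  },
  {
    $set: {
      ingredients: {
        $map: {
          input: "$ingredients",
          as: "ing",
          in: {
            $mergeObjects: [
              "$$ing",                // keep product_id, serving, amount
              {                       // pull in the product document with the same _id
                $arrayElemAt: [
                  {
                    $filter: {
                      input: "$productDocs",
                      cond: { $eq: ["$$this._id", "$$ing.product_id"] }
                    }
                  },
                  0
                ]
              }
            ]
          }
        }
      }
    }
  },
  { $unset: "productDocs" }           // drop the temporary lookup output
])
```

If the embedded / extended-reference design recommended later in the thread is adopted instead, this read-time merge is not needed at all; that is the trade-off being discussed.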
null
[ "aggregation", "queries", "data-modeling", "indexes" ]
[ { "code": "orand ", "text": "Supposed that a collection has billions of documents. Can mongodb handle queries that have hundreds of filter conditions on this collection? Assuming that this collection is indexed properly?The document limit and max nesting depth is fine. They will not be exceeded. The reason for queries having so many conditions is because the user is free to combine or statements with and statements arbitrarily. So even if a document only contain 50 fields, the number of filter conditions can balloon easily to hundreds.", "username": "Big_Cat_Public_Safety_Act" }, { "code": "", "text": "@Big_Cat_Public_Safety_Act ,I don’t see a problem with that as long as the filter document itself does not cross 16mb itself ", "username": "Pavel_Duchovny" }, { "code": "", "text": "Would queries having hundreds of filter conditions exceed the 16mb limit?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "", "text": "Well that depands on the filter nature If you have a $in with 1m objects for example?So its a hard question to answer, test your application…", "username": "Pavel_Duchovny" } ]
Giving the user the ability to create queries with arbitrarily many filter conditions
2022-10-28T13:54:59.232Z
Giving the user the ability to create queries with arbitrarily many filter conditions
1,420
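As an illustration of the question above, a small mongosh sketch that builds a filter document with a few hundred combined and/or conditions; the collection and field names are made up. The one hard limit mentioned in the thread is that the filter document, like any BSON document, must stay under 16 MB.

```javascript
// Sketch: programmatically assembling a large AND-of-ORs filter (hypothetical fields).
const clauses = [];
for (let i = 0; i < 300; i++) {
  clauses.push({
    $or: [
      { category: "c" + i },
      { score: { $gt: i } }
    ]
  });
}
const filter = { $and: clauses };   // hundreds of conditions, still a single query document

// Whether this runs efficiently depends on the indexes available;
// as suggested in the thread, test with representative data.
db.bigCollection.find(filter).limit(10);
```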
null
[ "aggregation", "queries", "crud" ]
[ { "code": "{\n \"_id\" : ObjectId(\"63506c5db40233003a1252ab\"),\n \"filters\" : [\n {\n \"jaql\" : {\n \"dim\" : \"country.Country\"\n }\n },\n {\n \"jaql\" : {\n \"dim\" : \"[Commerce.Date (Calendar)]\"\n }\n },\n {\n \"jaql\" : {\n \"dim\" : \"[Commerce.Revenue]\"\n }\n },\n {\n \"jaql\" : {\n \"dim\" : \"[Country.Name]\"\n }\n }\n ]\n}\ndb.dashboards.updateMany({\"filters.jaql.dim\":\"country.Country\"},{\"$set\":{\"filters.$[elem].jaql.dim\":\"[country.Country]\"}},{arrayFilters:[{\"elem.jaql.dim\":\"country.Country\"}]})db.dashboards.updateMany({\"filters.jaql.dim\":{\"$regex\":/^[^[].*[^]]/}},[{\"$set\":{\"filters\":{\"$map\":{\"input\":\"$filters\",\"in\":{\"jaql.dim\":\"$$this.jaql.dim\"}}}}}])", "text": "In my collection, I have an array of embedded documents like so:I want to update the embedded document’s field “dim” : “country.Country” and add the missing square brackets to the value so I want to reuse the value and have “dim” : “[country.Country]” in the endWhen I don’t use aggregation and use arrayFilters I cannot reuse the value with $value and have to refer to the value explicitly, so the script is not universal:db.dashboards.updateMany({\"filters.jaql.dim\":\"country.Country\"},{\"$set\":{\"filters.$[elem].jaql.dim\":\"[country.Country]\"}},{arrayFilters:[{\"elem.jaql.dim\":\"country.Country\"}]})When I use aggregation I can use a pattern and reuse the value of the field, but cannot reach the embedded document: (simplified the script by just trying to reuse the value with $)db.dashboards.updateMany({\"filters.jaql.dim\":{\"$regex\":/^[^[].*[^]]/}},[{\"$set\":{\"filters\":{\"$map\":{\"input\":\"$filters\",\"in\":{\"jaql.dim\":\"$$this.jaql.dim\"}}}}}])WriteError({\n“index” : 0,\n“code” : 16412,\n“errmsg” : “Invalid $set :: caused by :: FieldPath field names may not contain ‘.’.”,What is the best way to achieve my goal?", "username": "Anton_Volov" }, { "code": "{ \"$map\" : {\n \"input\" : \"$filters\" ,\n \"in\" : { \"$mergeObjects\" : [\n \"$$this\" ,\n { \"jaql.dim\" : { \"$concat\" : [ \"[\" , \"$$this.jaql.dim\" , \"]\" ] } }\n ] }\n} }\n", "text": "I think that in your $map, the in expression has to use $mergeObjects. Something along the following untested lines:Please be safe!", "username": "steevej" }, { "code": "db.dashboards.updateMany({\"filters.jaql.dim\":{\"$regex\":/^[^[].*[^]]/}},\n [{\"$set\":\n {\"filters\":\n { \"$map\" : {\n \"input\" : \"$filters\" ,\n \"in\" : { \"$mergeObjects\" : [\n \"$$this\" ,\n { \"jaql.dim\" : { \"$concat\" : [ \"[\" , \"$$this.jaql.dim\" , \"]\" ] } }\n ]}\n }}\n }}])\ndb.dashboards.updateMany({\"filters.jaql.dim\":{\"$regex\":/^[^[].*[^]]/}},\n [{\"$set\":\n {\"filters\":\n { \"$map\" : {\n \"input\" : \"$filters\" ,\n \"in\" : { \"$mergeObjects\" : [\n \"$$this\" ,\n { \"jaql\" : \"$$this.jaql\" }\n ]}\n }}\n }}])\n", "text": "Thank you @steevej . 
Unfortunately it still doesn’t work for me: I still cannot access the embedded document:WriteError({\n“index” : 0,\n“code” : 16412,\n“errmsg” : “Invalid $set :: caused by :: FieldPath field names may not contain ‘.’.”,when I remove the ‘.’ in the field key and leave it in the value it works:but I still need to access the embedded document.", "username": "Anton_Volov" }, { "code": " \"filters\" : [\n {\n \"jaql\" : {\n \"datasource\" : {\n \"title\" : \"Sample Healthcare\",\n \"fullname\" : \"LocalHost/Sample Healthcare\",\n \"id\" : \"localhost_aSampleIAAaHealthcare\",\n \"address\" : \"localHost\",\n \"database\" : \"aSampleIAAaHealthcare\"\n },\n \"column\" : \"Gender\",\n \"dim\" : \"Patients.Gender\",\n \"datatype\" : \"text\",\n \"filter\" : {\n \"explicit\" : false,\n \"multiSelection\" : true,\n \"all\" : true\n },\n \"title\" : \"GENDER\",\n \"collapsed\" : true\n },\n \"isCascading\" : false\n }]", "text": "@steevej FYI the structure of the filters array is the following:", "username": "Anton_Volov" }, { "code": "set = [{\"$set\":\n {\"filters\":\n { \"$map\" : {\n \"input\" : \"$filters\" ,\n \"in\" : { \"$mergeObjects\" : [\n \"$$this\" ,\n { \"jaql\" : { \"$mergeObjects\" : [ \"$$this.jaql\" , { \"dim\" : { \"$concat\" : [ \"[\" , \"$$this.jaql.dim\" , \"]\" ] } } ] } }\n ]}\n }}\n }}]\n{ _id: ObjectId(\"6359ab29cbf1ad6771bd5290\"),\n filters: \n [ { jaql: \n { datasource: \n { title: 'Sample Healthcare',\n fullname: 'LocalHost/Sample Healthcare',\n id: 'localhost_aSampleIAAaHealthcare',\n address: 'localHost',\n database: 'aSampleIAAaHealthcare' },\n column: 'Gender',\n dim: '[Patients.Gender]',\n datatype: 'text',\n filter: { explicit: false, multiSelection: true, all: true },\n title: 'GENDER',\n collapsed: true },\n isCascading: false } ] }\n", "text": "You make me work hard. But it makes me learn. B-)The error message made me think that we may need a second level of $mergeObjects.With the following:I do NOT get the errorInvalid $set :: caused by :: FieldPath field names may not contain ‘.’.and your sample document from your last post is updated to:", "username": "steevej" }, { "code": "", "text": "@steevej perfect! You’re a genius! This script works marvelously and takes a few seconds to run instead of the previous JavaScript with ‘forEach’ which used to run for several minutes.The lesson learned: dot notation doesn’t allow to access embedded documents when using $set with aggregation pipelines. The workaround is to use $mergeObjects. It looks like a good feature request for MongoDB.", "username": "Anton_Volov" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is the best way to update a field in an embedded document in an array by reusing the field's value?
2022-10-19T22:56:15.007Z
What is the best way to update a field in an embedded document in an array by reusing the field’s value?
2,727
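The thread's takeaway is that dot notation cannot be used as a field name inside a pipeline-style $set, so nested fields have to be rebuilt with nested $mergeObjects. A stripped-down sketch of the same pattern with hypothetical collection and field names:

```javascript
// Sketch: rewrite items[].inner.value by reusing its current value (names are illustrative).
db.coll.updateMany(
  { "items.inner.value": { $exists: true } },
  [
    {
      $set: {
        items: {
          $map: {
            input: "$items",
            in: {
              $mergeObjects: [
                "$$this",                             // keep the element's other fields
                {
                  inner: {
                    $mergeObjects: [
                      "$$this.inner",                 // keep the sub-document's other fields
                      { value: { $concat: ["[", "$$this.inner.value", "]"] } }
                    ]
                  }
                }
              ]
            }
          }
        }
      }
    }
  ]
)
```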
null
[ "swift" ]
[ { "code": "", "text": "As a bit of background: I have been working through a number of issues with my app which already uses realm sync (realmSwift 10.16.0). The issues, which mainly revolved around having an expired login token whereby users were not automatically logged out (along with destructive changes to my schema), have lead me to terminating sync, then reinstating the sync. I believe I have addressed these problems through testing in a test environment (which included upgrading to realmSwift 10.26.0), however, when I push these changes to my production environment I am now seeing the following error. The error occurs upon users re-logging in to the app (due to the expired token).client file contained lastIntegratedServerVersion(8) greater than the downloadServerVersion(3) sent in the IDENT message which is not allowed (ProtocolErrorCode=209)I am happy to troubleshoot myself, however, I don’t really know where to start with the above error, as the only mention I have been able to find of ProtocolErrorCode=209 is in: Integrating changesets failed: error finding the previous history size for version 80: client is disconnected (ProtocolErrorCode=101) - #4 by Mansoor_Omar.I appreciate the suggestion in the above link to open a ticket, however, as I said, Im happy to debug this one myself with a few clues. Also, I should mention, I expect to be running in to a BadClientFileIdent error, which I believe I am already handling properly as per my test environment, as opposed to the ProtocolErrorCode=209.Thanks!", "username": "BenJ" }, { "code": "", "text": "Just wanted to +1 this question, I’m having a similar issue which I posted about here: ERROR: client file contained lastIntegratedServerVersion greater than the downloadServerVersion – (ProtocolErrorCode=209)", "username": "Annie_Sexton" }, { "code": "", "text": "+1, seeing it happening in my app with Flexible Sync. Any way to solve this error that shows up in Realm’s logs?", "username": "Sonisan" }, { "code": "", "text": "+1 on this. Using flexible sync in Android (Java, realm v 10.11.0). Also seeing Bad server version (IDENT, UPLOAD) a lot and it makes the app do a client reset. Just any general explanation about this error would be hugely appreciated", "username": "Eric_Klaesson" }, { "code": "", "text": "Hi! If you DM me a link to your app I can investigate what’s going on here.", "username": "Kiro_Morkos" }, { "code": "", "text": "Hi there, I’m wondering if there are any updates regarding this issue. I’m seeing it in the app I’m working on as well - also using Flex Sync", "username": "Alexander_Sanchez" }, { "code": "", "text": "Hi @Alexander_Sanchez,There is a known issue in an old version of realm-core that could lead to these errors, which was resolved in v11.16.0. If you share which SDK are you using, and which version, we can determine if your app is using a version of realm that has this bug.", "username": "Kiro_Morkos" } ]
Bad server version ProtocolErrorCode=209
2022-07-02T02:13:40.353Z
Bad server version ProtocolErrorCode=209
3,736
null
[ "aggregation", "queries" ]
[ { "code": "product_id\npart_no\nvendor_standard\n@Prod_id\n@Name\ndb.EN.aggregate([\n {\n $lookup: {\n from: 'csv_import',\n localField: 'ICECAT-interface.Product.@Prod_id',\n foreignField: 'part_no',\n as: 'part_number'\n }\n }]);\n[{\n \"_id\": \"1414\",\n \"ICECAT-interface\": {\n \"@xmlns:xsi\": \"http://www.w3.org/2001/XMLSchema-instance\",\n \"@xsi:noNamespaceSchemaLocation\": \"https://data.icecat.biz/xsd/ICECAT-interface_response.xsd\",\n \"Product\": {\n \"@Code\": \"1\",\n \"@HighPic\": \"https://images.icecat.biz/img/norm/high/1414-HP.jpg\",\n \"@HighPicHeight\": \"400\",\n \"@HighPicSize\": \"43288\",\n \"@HighPicWidth\": \"400\",\n \"@ID\": \"1414\",\n \"@LowPic\": \"https://images.icecat.biz/img/norm/low/1414-HP.jpg\",\n \"@LowPicHeight\": \"200\",\n \"@LowPicSize\": \"17390\",\n \"@LowPicWidth\": \"200\",\n \"@Name\": \"C6614NE\",\n \"@IntName\": \"C6614NE\",\n \"@LocalName\": \"\",\n \"@Pic500x500\": \"https://images.icecat.biz/img/gallery_mediums/img_1414_medium_1480667779_072_2323.jpg\",\n \"@Pic500x500Height\": \"500\",\n \"@Pic500x500Size\": \"101045\",\n \"@Pic500x500Width\": \"500\",\n \"@Prod_id\": \"C6614NE\",\n{\n \"_id\": \"ObjectId(\\\"6348339cc6e5c8ce0b7da5a4\\\")\",\n \"index\": 23679,\n \"product_id\": 4019734,\n \"part_no\": \"CP-HAR-EP-ADVANCED-REN-1Y\",\n \"vendor_standard\": \"Check Point\"\n},\n\ndb.EN.aggregate([\n {\n $lookup: {\n from: 'csv_import',\n let: {pn:'$ICECAT-interface.Product.@Prod_id'},\n pipeline: [{\n $match: {\n $expr: {\n $eq: [\"$$pn\",\"$part_no\"]\n }\n }\n }],\n as: 'part_number_info'\n }\n }]).pretty();\n", "text": "Hi,I’m new to MongoDB and I have 2 collections, one called “EN” and another one called “csv_import”. I just need to join these 2 collections using a common field and get the results. For results, I just need the Part number and product id. The 2 collections structure is as follow:csv_import:EN: under object “ICECAT-interface.Product”:(these are the main ones but there are other non-important fields, for the sake of this example I include only relevant ones.just as clarification, “@” is part of the field nameI’m using this to join the two collections:Unfortunately, I get an empty array in part_number when I’m expecting just the results that match. Also, how can I specify which fields I want to get back? 
I thought adding “as: part_number” would be enough but doesn’t seem to be the caseHere’s some collection sample (taken from “EN”)Sample collection data taken from “csv_import” collection:I’ve tried this:but yet again, I’m not getting ONLY the matched, I get everything with empty array in part_number_info", "username": "Matias_Montroull" }, { "code": "{ $match: { $expr: { $eq: [\"$$pn\", \"$part_no\"] } } }\n// Actual Value is Not Matching\n// { $match: { $expr: { $eq: [\"C6614NE\", \"CP-HAR-EP-ADVANCED-REN-1Y\"] } } }\npart_no@Prod_idEN{\n \"_id\": \"1414\",\n \"ICECAT-interface\": {\n \"Product\": {\n \"@Prod_id\": \"C6614NE\"\n }\n }\n}\ncsv_import{\n \"_id\": \"ObjectId(\\\"6348339cc6e5c8ce0b7da5a4\\\")\",\n \"index\": 23679,\n \"product_id\": 4019734,\n \"part_no\": \"C6614NE\", // updated this to same as `@Prod_id`\n \"vendor_standard\": \"Check Point\"\n}\ndb.EN.aggregate([\n {\n $lookup: {\n from: \"csv_import\",\n let: { pn: \"$ICECAT-interface.Product.@Prod_id\" },\n pipeline: [\n { $match: { $expr: { $eq: [\"$$pn\", \"$part_no\" ] } } }\n ],\n as: \"part_number_info\"\n }\n }\n])\n[\n {\n \"ICECAT-interface\": {\n \"Product\": {\n \"@Prod_id\": \"C6614NE\"\n }\n },\n \"_id\": \"1414\",\n \"part_number_info\": [\n {\n \"_id\": \"ObjectId(\\\"6348339cc6e5c8ce0b7da5a4\\\")\",\n \"index\": 23679,\n \"part_no\": \"C6614NE\",\n \"product_id\": 4.019734e+06,\n \"vendor_standard\": \"Check Point\"\n }\n ]\n }\n]\n", "text": "Hello @Matias_Montroull, Welcome to the MongoDB community developer forum,Your aggregation pipeline looks good, but there is no matching value available in your example documents which is why it results in an empty array,Let’s change the value of part_no same as @Prod_id,\nEN Collection:csv_import Collection:Your same Query:PlaygroundResult:", "username": "turivishal" }, { "code": "", "text": "Thanks Turivishal,I realize it’s not the same value, how can I get only the matches and not the ones where the array is empty? 
getting a result set of say 10.000 records that don’t match is not ideal, I just want the ones that match.Thanks!", "username": "Matias_Montroull" }, { "code": "$matchpart_number_info[]$lookupdb.EN.aggregate([\n {\n $lookup: {\n from: \"csv_import\",\n let: { pn: \"$ICECAT-interface.Product.@Prod_id\" },\n pipeline: [\n { $match: { $expr: { $eq: [\"$$pn\", \"$part_no\" ] } } }\n ],\n as: \"part_number_info\"\n }\n },\n { $match: { part_number_info: { $ne: [] } } }\n])\n", "text": "how can I get only the matches and not the ones where the array is empty?It requires to check the $match condition part_number_info not equal to [] empty array after $lookup stage,", "username": "turivishal" }, { "code": "\n{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$cursor\": {\n \"queryPlanner\": {\n \"namespace\": \"Icecat.EN\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {},\n \"queryHash\": \"8B3D4AB8\",\n \"planCacheKey\": \"D542626C\",\n \"maxIndexedOrSolutionsReached\": false,\n \"maxIndexedAndSolutionsReached\": false,\n \"maxScansToExplodeReached\": false,\n \"winningPlan\": {\n \"stage\": \"COLLSCAN\",\n \"direction\": \"forward\"\n },\n \"rejectedPlans\": []\n }\n }\n },\n {\n \"$lookup\": {\n \"from\": \"csv_import\",\n \"as\": \"part_number_info\",\n \"let\": {\n \"pn\": \"$ICECAT-interface.Product.@Prod_id\"\n },\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n \"$$pn\",\n \"$part_no\"\n ]\n }\n }\n }\n ]\n }\n },\n {\n \"$match\": {\n \"part_number_info\": {\n \"$not\": {\n \"$eq\": []\n }\n }\n }\n }\n ],\n \n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"command\": {\n \"aggregate\": \"EN\",\n \"pipeline\": [\n {\n \"$lookup\": {\n \"from\": \"csv_import\",\n \"let\": {\n \"pn\": \"$ICECAT-interface.Product.@Prod_id\"\n },\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$eq\": [\n \"$$pn\",\n \"$part_no\"\n ]\n }\n }\n }\n ],\n \"as\": \"part_number_info\"\n }\n },\n {\n \"$match\": {\n \"part_number_info\": {\n \"$ne\": []\n }\n }\n }\n ],\n \"cursor\": {},\n \"$db\": \"Icecat\"\n },\n \"ok\": 1\n}\n", "text": "Thanks! I’ve executed this and it takes forever now, maybe I need some additional index? Currently I have index on @Prod_id.Running this returns in a few seconds:\n{ $match: { part_number_info: { $eq: } } but when I use the $ne it becomes really slow, it takes a lot of CPU in the serverHere’s the output of the explain():", "username": "Matias_Montroull" }, { "code": "$match$lookupEN$lookup", "text": "The index will support in the first stage ($match) or in some stages also in the second stage ($match, $sort, $group),So that $match after $lookup will not use the index, I would not suggest this aggregation pipeline if you have huge data, obviously, it becomes slow,I would suggest you improve your schema/document structure or manage a boolean flag (has prod id?) 
in EN collection so you don’t need $lookup stage in your aggregation query.", "username": "turivishal" }, { "code": "db.csv_import.aggregate([\n {\n $lookup: {\n from: 'EN',\n let: {pn:'$part_no'},\n pipeline: [{\n $match: {\n $expr: {\n $eq: [\"$$pn\",\"$ICECAT-interface.Product.@Prod_id\"]\n }\n }\n }],\n as: 'part_number_info'\n }\n },{$match: {\"part_number_info.0\": {$exists: true}}}\n ])\n", "text": "I solved it by adding an index on part_no in csv_import (EN @Prod_id already had an index) and changing the lookup a bit to go from small to large collection. Here’s the final solution:", "username": "Matias_Montroull" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Join two collections by common field and return the matches
2022-10-25T14:08:02.825Z
Join two collections by common field and return the matches
3,276
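For reference, the indexes mentioned in the solution above can be created as follows in mongosh. Names match the thread; any uniqueness or partial-index options are left out because the thread does not specify them.

```javascript
// Equality lookups on part_no and @Prod_id benefit from plain single-field indexes.
db.csv_import.createIndex({ part_no: 1 });
db.EN.createIndex({ "ICECAT-interface.Product.@Prod_id": 1 });

// Quick check that the lookup is actually using the indexes:
db.csv_import.explain("executionStats").aggregate([ /* the $lookup pipeline from the thread */ ]);
```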
null
[]
[ { "code": "", "text": "HI. Just installed MongoDB on Ubuntu server and it warns about XFS file system. I have EXT4. Is it bad? Also my Kernel version is in the MongoDB recommended range. Thanks.", "username": "mj69" }, { "code": "", "text": "The Production Notes document has the following:With the WiredTiger storage engine, using XFS is strongly recommended for data bearing nodes to avoid performance issues that may occur when using EXT4 with WiredTiger.You can use EXT4, but you might see performance issues. Whether that’s a problem for you depends on your testing of the database under your expected production load.", "username": "Doug_Duncan" }, { "code": "", "text": "To elaborate a little on @Doug_Duncan 's post, under some circumstances, it is possible that WiredTiger can experience a stall under EXT4. This ticket SERVER-18314 was the original investigation into this phenomenon (there’s a technical explanation of why in the ticket, which I won’t reproduce here), which led to the XFS recommendation.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "What if the users of db are around 200 people and there are like a total of 3000 documents in the db collections.", "username": "mj69" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Community Edition warns about XFS?
2022-10-27T16:27:48.025Z
MongoDB Community Edition warns about XFS?
4,480
null
[ "flutter" ]
[ { "code": "RealmResults<Profile> profileQuery=_realm.query<Profile>(\"phoneNumber == \\$0\", [number]);\n _realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.add<Profile>(profileQuery);\n });\n profileQuery = _realm.query<Profile>(\"phoneNumber == \\$0\", [number]);\n", "text": "We’re trying to run a query on Realm’s flexible sync configuration for a Realm model that has embedded references to other models on Flutter SDK.\nThere are two challenges we are facing:", "username": "Deepankshu_Shewarey" }, { "code": "await _realm.subscriptions.waitForSynchronization();\nProfilemutableSubscriptions.add<Profile>(profileQuery).include(\"address\")", "text": "Hey, so for the first problem, you seem to be missing a call to:This will resolve when the requested data has been downloaded locally. If you don’t include it, the data will still download in the background, you just won’t be notified when the download is complete.For the second question, there are two problems. First one is that the objects Profile links to are not embedded - note how embedded relationships are declared in json schema. Instead, they’re linking to objects in other collections, which will not be downloaded at this point. We are currently working on a project that will allow you to specify which links you want to include (e.g. mutableSubscriptions.add<Profile>(profileQuery).include(\"address\")) but that’s not ready yet and we don’t have a definitive timeline for when it’ll ship.Now, the obvious fix would be to switch those relationships to be embedded ones, in which case you’ll receive all the objects along with the original query - the problem is that the flutter SDK doesn’t support embedded objects yet. We have a PR that adds support and we hope to ship it in the near future, but again, don’t have a definitive timeline for when that’ll happen.", "username": "nirinchev" }, { "code": "", "text": "Thanks a lot Nikola. That really helps.\nAwaiting for embedded objects support for flutter SDK.", "username": "Deepankshu_Shewarey" }, { "code": "", "text": "Hey, just following up in case you’ve missed the latest release announcement - embedded objects are now supported, so you can give it another shot.", "username": "nirinchev" }, { "code": "", "text": "Thank you so much for this update, helped in time. Really appreciate it.", "username": "Deepankshu_Shewarey" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm's flexible sync configuration issue for embedded Realm model references through other Flutter SDK models
2022-10-10T04:42:04.692Z
Realm&rsquo;s flexible sync configuration issue for embedded Realm model references through other Flutter SDK models
2,293
null
[]
[ { "code": "\"orders\": [\n {\n \"OrderId\": A,\n \"LineItems\": [\n {\n \"LineItemId\": \"1\"\n },\n {\n \"LineItemId\": \"2\"\n }\n ]\n },\n {\n \"OrderId\": B,\n \"LineItems\": [\n {\n \"LineItemId\": \"2\"\n },\n {\n \"LineItemId\": \"1\"\n }\n ]\n },\n {\n \"OrderId\": C,\n \"LineItems\": [\n {\n \"LineItemId\": \"3\"\n }\n ]\n }\n]\n\"OrderQueryInput\": {\n \"LineItems\": {\n \"LineItemId\": \"1\"\n }\n}\n\"orders\": [\n {\n \"OrderId\": A,\n \"LineItems\": [\n {\n \"LineItemId\": \"1\"\n },\n {\n \"LineItemId\": \"2\"\n }\n ]\n }\n]\n\"orders\": [\n {\n \"OrderId\": A,\n \"LineItems\": [\n {\n \"LineItemId\": \"1\"\n },\n {\n \"LineItemId\": \"2\"\n }\n ]\n },\n {\n \"OrderId\": B,\n \"LineItems\": [\n {\n \"LineItemId\": \"2\"\n },\n {\n \"LineItemId\": \"1\"\n }\n ]\n }\n]\n", "text": "I am trying to return documents that have a key-value pair in any of the objects within an array in the document.The query I came up with only looks at the 0th object in the array to check the key-value pair.\nI can’t think of or find online what I need to change in the query so that it will search all the objects in the array.I am using MongoDB Atlas.The documents I’m searching: (simplified)The query I’m using:This query only returns the following:My desired result is:I would appreciate any help or push in the right direction.", "username": "Zanek_shaw" }, { "code": "{\n \"OrderId\": B,\n \"LineItems\": [\n {\n \"LineItemId\": \"2\"\n },\n {\n \"LineItemId\": \"1\" ,\n \"Extra_Field_by_steevej\" : true\n }\n ]\n }\n{\n \"LineItems.LineItemId\": \"1\"\n}\n", "text": "Your queries works perfectly with the sample documents you provided.Most likely your sample have been redacted. That is the problem with redaction of sample documents, what you removed from your documents make your query works. The issue is that by doing LineItems:{…}, what you really means is object equality, rather than field equality on LineItemId.If you add just a simple field likethe order OrderId:B will not be selected.The solution is to use the dot notation to specify field equality. The update query will look like:", "username": "steevej" }, { "code": "", "text": "Firstly, I want to show gratitude and thank you for taking the time to help me.I have forgotten to write that I am using GraphQL to access this data.\nI will share screenshots of the GraphQL query and results to make things easier.(Screenshot RED 1)\nYou mentioned that my query should show the right results. (and I really wish it did, haha)\nBelow are screenshots of my query and results when asking for the two products by id. Both return different orders. (\"_id\" changing)(Screenshot BLUE 2)\nAlso below, the results of the query you suggested.\nGraphQL1920×3157 228 KB\nThank you again,\nZanek", "username": "Zanek_shaw" }, { "code": "", "text": "Sorry, I won’t be able to help you further. I know nothing about GraphQL. I avoid any abstraction layers. Plain and direct queryand aggregation works perfectly for me.", "username": "steevej" }, { "code": "", "text": "That’s okay,\nThank you for your time, I apologise for wasting it.What’s this tho?Plain and direct query and aggregationI am using next.js(React). Would I benefit from this instead of GQL?", "username": "Zanek_shaw" }, { "code": "", "text": "I have got it!!!Adding “_in” to the query after the id key.\nScreen Shot 2022-10-28 at 12.32.08 pm2568×1232 317 KB\n", "username": "Zanek_shaw" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
How to search for a key-value pair in ALL objects within an array in the document
2022-10-23T11:10:37.745Z
How to search for a key-value pair in ALL objects within an array in the document
4,418
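Outside of GraphQL, the dot-notation form suggested in this thread is how a plain MongoDB query expresses "any element of the array has this field value". A short mongosh sketch against the sample orders shown above:

```javascript
// Matches orders where ANY LineItems element has LineItemId "1" (orders A and B).
db.orders.find({ "LineItems.LineItemId": "1" })

// By contrast, this is whole-object equality and only matches an element that is
// exactly { LineItemId: "1" } with no extra fields:
db.orders.find({ LineItems: { LineItemId: "1" } })

// If several conditions must hold on the SAME array element, use $elemMatch:
db.orders.find({ LineItems: { $elemMatch: { LineItemId: "1" } } })
```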
null
[ "replication" ]
[ { "code": "", "text": "I’m currently working on a replica set MongoDB. The current replica set up is:I need to update one value in a collection within a replica set db. I’m just wondering the best way to do this? Do I have to connect directly to the primary node and update in there? Will that automatically update the secondary nodes? Should I switch off the secondary’s whilst updating the primary node? I’m using MongoDB version 4.0Any help would be greatly appreciated ", "username": "Flouncy_Mgoo" }, { "code": "", "text": "Hi @Flouncy_MgooI’m not sure I understand the question. What do you mean exactly by “update one value in a collection”? Could you give an example of what you mean?A MongoDB replica set provides you with high availability and an application connects to the replica set as a whole. With MongoDB, no matter the topology of your deployment (standalone, replica set, or a sharded cluster), there would be no difference in database commands from the application side. The only difference is the connection string URI you use.Please see Replication for more details into how a replica set works in MongoDB.Also MongoDB 4.0 series was out of support since April 2022 and will not receive further updates and fixes. You might want to move to a supported version (the 4.2 series is the oldest still-supported series) as soon as possible.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin, apologies I wasn’t very clear. I basically just need to replace one single NULL value to a string in a table. I’m just worried about the change not being reflected in all of the replica sets if I edit the NULL value using the wrong connection URI (ie the connection string containing the all members of the set rather than just the primary node)Let me know if this does not clear things upThanks", "username": "Flouncy_Mgoo" }, { "code": "", "text": "Hi @Flouncy_MgooI’m just worried about the change not being reflected in all of the replica sets if I edit the NULL value using the wrong connection URIModern official MongoDB drivers will by default try to discover the topology you’re connecting to, and once a replica set is setup properly, it will propagate any writes to all nodes in the set, since this is done server-side. In a MongoDB replica set, only the primary can accept writes, and the secondaries replicate those writes.I’m curious about the origin of your question though. Are you developing an app using MongoDB, or are you administering it? Did you inherit this setup from someone else, or did you set it up yourself?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update Record in Replica Set
2022-10-26T15:07:58.997Z
Update Record in Replica Set
1,838
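To make the answer above concrete, here is a hedged mongosh sketch of the one-off edit described in the thread (replacing a single null value with a string). Host names, database, collection, field and the _id value are placeholders; the point is that the shell is given the replica set as a whole, the write goes to the primary, and the secondaries replicate it automatically, so nothing needs to be switched off.

```javascript
// Connect using the replica set connection string (placeholders):
//   mongosh "mongodb://host1:27017,host2:27017,host3:27017/mydb?replicaSet=rs0"

// Replace a single null value with a string; the primary applies the write and
// the secondaries replicate it automatically.
db.myCollection.updateOne(
  { _id: ObjectId("634fd0f0a1b2c3d4e5f60718"), someField: null },  // hypothetical _id and field
  { $set: { someField: "new value" } }
)
```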
null
[ "queries" ]
[ { "code": "{\n \"_id\" : \"323211432059985529055037_20211020_10377\",\n \"roundId\" : \"0002315350\"\n}\n{\n \"_id\" : \"323200763200001781310030_20211020_10377\",\n \"roundId\" : \"0002315418\"\n}\n{ \"_id\" : \"8124774a-986e-4a5a-9f8e-599d2c169a0d\" }\n{ \"_id\" : \"9ea21fda-f102-49c0-b2f9-9bdd00c954c3\" }\n{ \"_id\" : \"8d0bd540-00a2-40d1-8c87-a767e954e070\" }\n{ \"_id\" : \"f1c3fa70-c05b-44e8-bad1-f4d48d9e4479\" }\n{ \"_id\" : \"8d003e6b-072e-460e-897f-f6abe677bfeb\" }\n", "text": "Hi,\nIn my production environment write operations on two collections in different databases taking more than 2 seconds. the average document size 76 bytes.sample documents for collection 1Collection1 stats:Documents : 501297\nSize : 38958213\nStorage size : 18665472\nIndexes : 1\nIndex size : 23564288sample documents for collection 2Collection2 stats:Documents : 25699830\nSize : 1310692010\nStorage size : 1266847744\nIndexes : 1\nIndex size : 1252200448we do not have any extra indexes on these two collections expect default index _id, but _id is not an objectID as shown in the sample documents… Daily we have 500k inserts on these two tables. Writes performance is decreasing along with documents count(if documents are getting increased insertion time also increasing like 2sec, 3 secs, 4 secs…).My Disk size is 512G, RAM 32G, 16 cores CPUBut there should be other optimization strategies, too did not comes to my mind I would like to hear about!Which optimization strategy sound most promising or is a mixture of several optimizations is needed here?", "username": "Mohan_M1" }, { "code": "", "text": "I’m having the same issue, running 250 concurrent findOneAndUpdate calls is taking upwards of 3 seconds.", "username": "Lopu_Designs" } ]
Writes are taking more than 2000ms
2021-10-20T13:20:20.777Z
Writes are taking more than 2000ms
1,873
null
[]
[ { "code": "", "text": "Hi all. I have a website on Magento 2. Help me make Magento 2 and Mongo db friends? ", "username": "Pest_Traps" }, { "code": "", "text": "Hi @Pest_TrapsBy Magento 2 do you mean the newly renamed Adobe Commerce suite?In many cases, it’s up to the application to connect to MongoDB, so usually this is a question best posted on the specific application’s support forum. MongoDB basically just sits there waiting for a connection and does not initiate any outgoing connection to an application, so unless there’s an error, the MongoDB server would have no idea what’s going on Having said that, I think there’s a relevant post in the Magento 2 forum: magento2 + mongodb - Magento Forums but this looks like an old question. You might want to check if the solution posted there works for you.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB and Magento connection
2022-10-25T19:42:32.802Z
MongoDB and Magento connection
1,765
null
[]
[ { "code": "", "text": "I am wondering if there is a reason to make the config.changelog collection uncapped ? Beacause we migrated to mongo atlas and found out that we have this one uncapped but it is meant to be capped by default.", "username": "Bohdan_Chystiakov" }, { "code": "", "text": "There is no reason to have it uncapped.", "username": "Garaudy_Etienne" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Uncapped config.changelog collection
2022-10-27T11:37:08.945Z
Uncapped config.changelog collection
921
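A small mongosh sketch for checking the situation described above. Whether convertToCapped may actually be run against config.changelog on a managed Atlas cluster is an assumption to verify with support first, and the size value below is purely illustrative.

```javascript
// Check whether config.changelog is currently capped:
const changelog = db.getSiblingDB("config").changelog;
changelog.isCapped();          // expected: true on a default sharded cluster
changelog.stats().capped;      // same information via collection stats

// If it is uncapped and conversion is permitted in your environment, convertToCapped
// can restore the capped behaviour (size in bytes shown here is illustrative only):
// db.getSiblingDB("config").runCommand({ convertToCapped: "changelog", size: 10 * 1024 * 1024 });
```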
null
[ "indexes", "transactions" ]
[ { "code": " func TestTransactionWithIndex(t *testing.T) {\n\tctx := context.Background()\n\tclientOpts := options.Client().ApplyURI(\"mongodb://localhost:27017/testDd\")\n\tclient, err := mongo.Connect(ctx, clientOpts)\n\tif err != nil {\n\t\tt.Fatalf(\"Error connecting to mongo: %v\", err)\n\t}\n\tdefer func() { _ = client.Disconnect(ctx) }()\n\n\twcMajority := writeconcern.New(writeconcern.WMajority(), writeconcern.WTimeout(10*time.Second))\n\twcMajorityCollectionOpts := options.Collection().SetWriteConcern(wcMajority)\n\ttestIndexColl := client.Database(\"testDd\").Collection(\"testIndex\", wcMajorityCollectionOpts)\n\terr = testIndexColl.Drop(ctx)\n\tif err != nil {\n\t\tt.Fatalf(\"Cleanup: Error dropping collection: %v\", err)\n\t}\n\n\tsession, err := client.StartSession()\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer session.EndSession(ctx)\n\n\t_, err = testIndexColl.Indexes().CreateOne(ctx, mongo.IndexModel{\n\t\tKeys: bson.D{{Key: \"last_name\", Value: 1}},\n\t\tOptions: options.Index().SetUnique(true)})\n\tif err != nil {\n\t\tt.Fatalf(\"Error creating unique index: %v\", err)\n\t}\n\n\t_, err = testIndexColl.InsertOne(ctx, bson.M{\"last_name\": \"Smith\"})\n\tif err != nil {\n\t\tt.Fatalf(\"Error inserting document: %v\", err)\n\t}\n\n\tcallback := func(sessCtx mongo.SessionContext) (interface{}, error) {\n\t\t// Important: You must pass sessCtx as the Context parameter to the operations for them to be executed in the\n\t\t// transaction.\n\t\t_, err := testIndexColl.UpdateOne(sessCtx,\n\t\t\tbson.M{\"last_name\": \"Smith\"},\n\t\t\tbson.M{\"$set\": bson.M{\n\t\t\t\t\"last_name\": \"Smith_Old\",\n\t\t\t}})\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Error updating Smith inside transaction: %v\", err)\n\t\t}\n\n\t\ttype person struct {\n\t\t\tId string `bson:\"_id\"`\n\t\t\tLastName string `bson:\"last_name\"`\n\t\t}\n\t\tresult := testIndexColl.FindOne(sessCtx, bson.M{\"last_name\": \"Smith_Old\"})\n\t\tvar p person\n\t\terr = result.Decode(&p)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Error decoding result: %v\", err)\n\t\t}\n\t\tfmt.Printf(\"Found Smith Old inside transaction after updating: %v\\n\", p)\n\n\t\t//try create a document within the same transaction with key Smith, since old Smith has been updated\n\t\t_, err = testIndexColl.InsertOne(ctx, bson.M{\"last_name\": \"Smith\"})\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Error inserting new Smith inside transaction: %v\", err)\n\t\t}\n\t\treturn nil, err\n\t}\n\n\tresult, err := session.WithTransaction(ctx, callback)\n\tfmt.Printf(\"result: %v\\n\", result)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n}", "text": "Hello, I would like some help understanding how unique indices are updated with regards to transactions.Starting with a collection with unique index on field “last_name”, and a document with “last_name” = “Smith”, lets say that I want to perform an atomic update using a transaction, such that:This however seems not possible to do? It seems that inside transaction itself while I can update old document to be “Smith_Old”, and a Find inside transaction is aware of this update, the Insert of “Smith” inside transaction fails due to duplicate key error.My guess is that this is due to unique index being only updated AFTER transaction commits - is this true? 
I have been looking at mongo docs but I could not find any details regarding this.Would someone be able to provide details /documentation links - confirm what I am trying to do here is impossible with mongo?\nThe actual use case here is mongo fle data encryption key rotation. I’d love to be able to rotate key and keep consumers happy with existance of static keyAltName while the underlying key changes.Code reproducing this (Go lang):", "username": "Elena123" }, { "code": "=== RUN TestTransactionWithIndex\nFound Smith Old inside transaction after updating: {61914ecc8f59c2f93901bf23 Smith_Old}\n eflat_test.go:79: Error inserting new Smith inside transaction: write exception: write errors: [E11000 duplicate key error collection: testDd.testIndex index: last_name_1 dup key: { last_name: \"Smith\" }]\n--- FAIL: TestTransactionWithIndex (2.53s)\n\nFAIL\n", "text": "And this is the output of the above code when run:Also, mongo version is 4.4 and go mongo driver version is 1.7.3. Thank you", "username": "Elena123" }, { "code": "", "text": "@Elena123 did you ever get an answer to this? I’ve hit the same issue recently. I want to keep the atomic nature of my two commits bit also don’t really want to drop the unique constraint that I have.", "username": "Tom_Avent" } ]
Unique index & transaction: E11000 duplicate key error inside transaction
2021-11-14T18:10:53.245Z
Unique index &amp; transaction: E11000 duplicate key error inside transaction
4,420
null
[]
[ { "code": "", "text": "I’m putting together a small document management application with Node.js and MongoDB. At this point I’m able to rather easily create documents in a collection I call documents (meaning electronic documents, not mongodb objects) which have a half dozen attributs. I’m also able to save files using GridFS, which creates fs.files and fs.chunks objects. What I’d really like to do is store the metadata directly on the gridfs fs.files objects (provided I still have access to all the search capabilities).I can’t find any way to do this though. It seems I’m going to have to have the 2 separate collections and relate the 2 by having the Object IDs of the fs.files objects in the documents objects as a value.Is there some way to have the metadata on the fs.files objects I’m just not seeing? Any links to anyone doing this would be helpful.", "username": "Mark_Klamerus" }, { "code": "fs.filesfs.chunksfs.filesopenUploadStream`options.metadata` is an optional object to store in the file document's `metadata` field.\n", "text": "Hi Mark,The GridFS API is a convention for storing large binary data in MongoDB, but the underlying fs.files and fs.chunks collections are normal collections you can query and manipulate directly.Custom metadata happens to be part of the GridFS API specification and the driver APIs have provision to include optional metadata which will be added to the fs.files document.For example, if you are using the Node.js driver and openUploadStream:For a more general use case description, see Metadata and Asset Management in the MongoDB Manual.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "can you show sample code how to retrieve metadata? i can upload metadata using UploadFromStream and filled GridFSUploadOptions but i cant find way how to download metadata, i can download file using DownloadToStreamByName", "username": "BlasterXXL" } ]
Storing metadata in GridFS
2020-04-22T19:00:39.855Z
Storing metadata in GridFS
3,234
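Building on the answer above, and on the unanswered follow-up about reading metadata back: since fs.files is an ordinary collection, the custom metadata stored at upload time can simply be queried. A minimal Node.js sketch; the bucket and collection names are the GridFS defaults, while the file name and metadata fields are illustrative.

```javascript
// Upload a file with custom metadata, then read that metadata back from fs.files.
const { MongoClient, GridFSBucket } = require("mongodb");
const fs = require("fs");

async function run() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const db = client.db("mydb");
  const bucket = new GridFSBucket(db);

  // Store custom metadata on the file document at upload time.
  await new Promise((resolve, reject) =>
    fs.createReadStream("./report.pdf")
      .pipe(bucket.openUploadStream("report.pdf", {
        metadata: { department: "finance", year: 2020 }   // illustrative metadata
      }))
      .on("finish", resolve)
      .on("error", reject)
  );

  // fs.files is a normal collection, so the metadata is directly queryable.
  const fileDoc = await db.collection("fs.files").findOne({ filename: "report.pdf" });
  console.log(fileDoc.metadata);   // { department: 'finance', year: 2020 }

  await client.close();
}
run().catch(console.error);
```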
null
[ "spring-data-odm" ]
[ { "code": "", "text": "Hello TeamWe have created a capped collection and spring DefaultMessageListenerContainer (registered with TailableCursorRequest) listening to the capped collection. Everything works good till yesterday. From today we are getting below errorsMongoError: tailable cursor requested on non capped collectionCollection statistics for the capped collection shows capped property is false. We don’t know why / when suddenly capped collection converted into a non capped collection.Can some one help us to understand why suddenly it changed to non capped collection ?Mongodb 4.0.6\nSpring 2.2.4.RELEASE", "username": "Prabaharan_Kathiresa" }, { "code": "", "text": "Hi @Prabaharan_Kathiresa something like that shouldn’t happen all by itself. Did someone accidentally drop the collection and then rebuild it?As for troubleshooting, I would look through the log files to see if there is something in there that would help you determine what might have happened.You can convert the back to a capped collection if you’re not on a sharded cluster.", "username": "Doug_Duncan" }, { "code": "", "text": "Below are the different exception log statements available[cTaskExecutor-1] rContainer$DecoratingLoggingErrorHandler : Unexpected error occurred while listening to MongoDBProj: {}\ntailable cursor requested on non capped collection’ on server :27130; nested exception is com.mongodb.MongoQueryException: Query failed with error code 2 and error message 'error processing query: ns=db.cappedCollectionTree: createdDate $gt new Date(1590194803434)org.springframework.data.mongodb.UncategorizedMongoDbException: Query failed with error code 2 and error message 'error processing query: ns=db.cappedCollectionTree: createdDate $gt new Date(1590194803434)Caused by: com.mongodb.MongoQueryException: Query failed with error code 2 and error message 'error processing query: ns=db.cappedCollectionTree: createdDate $gt new Date(1590194803434)", "username": "Prabaharan_Kathiresa" }, { "code": "CMD: dropdrop", "text": "Welcome to the community forum @Prabaharan_Kathiresa! Collection statistics for the capped collection shows capped property is false. We don’t know why / when suddenly capped collection converted into a non capped collection. A capped collection cannot be converted to an uncapped collection. As @Doug_Duncan suggested, the most likely cause would be a collection being dropped & recreated with the same name. If the collection being tailed was dropped, this should also correlate with any associated cursors being closed.Below are the different exception log statements availableThe log snippets you provided are application logs. The timestamps for errors in your application log may help you work out the relevant time period, but to investigate further you need to look into your MongoDB server log activity.Try searching your MongoDB logs for lines matching CMD: drop . Unless you have changed the log settings, a drop command should be logged at the default level in MongoDB 4.0.Can you also confirm what type of deployment you have (standalone, replica set, or sharded cluster)?Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you @Stennie_X and @Doug_Duncan.We are using replica set deployment.", "username": "Prabaharan_Kathiresa" } ]
Capped Collection changed to uncapped suddenly
2020-05-25T20:49:26.200Z
Capped Collection changed to uncapped suddenly
4,079
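A few mongosh commands that correspond to the troubleshooting steps suggested above: confirming whether the collection is still capped, searching the server log for a drop, and (on a replica set, where it is allowed) converting it back. The collection name is taken from the application error message in the thread; the size value is illustrative and should match whatever the collection was originally created with.

```javascript
// 1. Is the collection still capped?
db.cappedCollectionTree.isCapped();        // false would confirm the reported state
db.cappedCollectionTree.stats().capped;

// 2. Look for a drop/re-create in the server logs (run outside the shell):
//    grep "CMD: drop" /var/log/mongodb/mongod.log    <- log path is environment-specific

// 3. Convert back to capped on a replica set (not supported on sharded clusters);
//    the size in bytes shown here is illustrative:
db.runCommand({ convertToCapped: "cappedCollectionTree", size: 100 * 1024 * 1024 });
```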
null
[ "aggregation", "atlas-search" ]
[ { "code": "{\n _id: ObjectId(\"123\"),\n companyId: \"456-678\",\n description: \"this is a description\"\n}\n$search$matchcompanyId", "text": "Hi, we have collection with documents that looks like this:We want to let companies that uses our system to search on the description field so we created a full-text-search on that field.\nHow can we search only documents that have specific companyId rather than searching on all the collection?\n$search must to be the first step in the aggregation pipeline so we cant $match before that.Is this filter will use the existing companyId index?", "username": "Benny_Kachanovsky1" }, { "code": "$searchERROR\tMongoServerError: $_internalSearchMongotRemote is only valid as the first stage in a pipeline\n", "text": "but $search must to be the first in the pipeline, no? because im getting when doing otherwise", "username": "Benny_Kachanovsky1" }, { "code": "Atlas atlas-whhaeg-shard-0 [primary] test> db.companies.find()\n[\n {\n _id: ObjectId(\"63593dc870ca422a31030ded\"),\n companyId: '456-678',\n description: 'this is a description'\n },\n {\n _id: ObjectId(\"635945a370ca422a31030dee\"),\n companyId: '45232-678676',\n description: 'this is a description'\n }\n]\nAtlas atlas-whhaeg-shard-0 [primary] test> db.companies.aggregate([\n... {\n... \"$search\": {\n... index: 'companyId',\n... \"compound\": {\n... \"filter\": [{\n... \"text\": {\n... \"query\": [\"this\"],\n... \"path\": \"description\"\n... }\n... }],\n... \"must\": [{\n... \"text\": {\n... \"query\": \"456-678\",\n... \"path\": \"companyId\"\n... }\n... }]\n... }\n... }\n... }\n... ])\n[\n {\n _id: ObjectId(\"63593dc870ca422a31030ded\"),\n companyId: '456-678',\n description: 'this is a description'\n }\n]\n", "text": "Hello,You can use a compound query using a combination of “filter” and “must” to obtain the functionality of $match, this is explained in detail here.I did this example below with two records having same description and different companyId.Using the below $search specifying the “must” clause to match companyId with “456-678” and having a filter on description with part of the description, in this case the word “this”.As you can see the “must” clause forced the search to return only documents that matched the specified companyId.There are various other examples you can try using options of the “compound” section of $search in the documentation link below:I hope you find this helpful.Regards,\nElshafey", "username": "Mohamed_Elshafey" }, { "code": "compoundfiltercompanyId{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"companyId\": [\n {\n \"dynamic\": true,\n \"type\": \"document\"\n },\n {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n }\n ]\n }\n }\n}\ncompanyIdcompanyId[\n {\n '$search': {\n 'index': 'default', \n 'compound': {\n 'must': {\n 'text': {\n 'path': 'description', \n 'query': 'description'\n }\n }, \n 'filter': {\n 'text': {\n 'path': 'companyId', \n 'query': '123-456'\n }\n }\n }\n }\n }, {\n '$project': {\n 'companyId': 1, \n 'description': 1, \n 'score': {\n '$meta': 'searchScore'\n }\n }\n }\n]\nfilter", "text": "@Benny_Kachanovsky1 - I wanted to further expand on the previous reply. Using compound and filter are the right way to go about this, but let’s refine that a bit. First, besides setting up your Atlas Search index definition as you have already, configure companyId to use the keyword analyzer (so it becomes a single filterable string term in the index). 
Here’s my index definition I used:Once you have that index configuration saved, and your data added and indexed, this is a query that will filter by companyId and use the query term “description” (sorry, field name and query string are same in this example we are using) for relevancy ranking within all documents from the specified companyId:Be sure to use filter for companyId so it does not affect relevancy scoring and order.", "username": "Erik_Hatcher" }, { "code": "", "text": "@Mark_Solovskiy could you elaborate?", "username": "Erik_Hatcher" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Search index filtering
2022-10-26T08:39:33.984Z
Search index filtering
4,529
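A minimal mongosh sketch of the filtered search described in the thread above, for readers who want to adapt it. The collection name "companies", the index name "default", the sample companyId value, and the query string are illustrative assumptions taken from the replies; the compound/must/filter shape follows the accepted approach, with companyId indexed using the keyword analyzer so it behaves as a single exact term.

db.companies.aggregate([
  {
    $search: {
      index: "default",  // assumed index name
      compound: {
        must: [
          // relevance-scored full-text clause on the description field
          { text: { query: "description", path: "description" } }
        ],
        filter: [
          // non-scoring exact filter on the company identifier
          { text: { query: "456-678", path: "companyId" } }
        ]
      }
    }
  },
  { $limit: 10 }
])

Keeping companyId in the filter clause rather than must avoids skewing the relevance score, as noted in the answer above.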
null
[]
[ { "code": "_iduser_id", "text": "Hello, I saw this while reading the tutorial on Mongo Realm Web Tutorial:\n“If you are building a mobile application, define the relationship to point to _id instead of user_id .”\nDoes this mean that I can’t share schema between web and mobile client? Or is that something WIP?Thanks", "username": "Weihan_Li" }, { "code": "", "text": "I want to have a web version of my app and two mobile clients (iOS and Android).Can anyone clarify if we can share the same schema and data between web and mobile clients?If we can, what about the _id and user_id fields mentioned in the Weihan_Li’s post?", "username": "Alexei_Vinidiktov" }, { "code": "", "text": "So is it possible to build both a web client and mobile clients for the same synced Realm database?", "username": "Alexei_Vinidiktov" }, { "code": "", "text": "I’ve tried the tutorial apps, and it appears the web version and the mobile versions don’t store their data in the same way.I can see the web version’s data in the collections in my Atlas account, but I can’t find the data that came from the mobile clients. Where is it stored?", "username": "Alexei_Vinidiktov" }, { "code": "", "text": "Can anybody comment?", "username": "Alexei_Vinidiktov" }, { "code": "_iduser_id", "text": "“If you are building a mobile application, define the relationship to point to _id instead of user_id .”\nDoes this mean that I can’t share schema between web and mobile client? Or is that something WIP?I’ve tried the tutorial apps, and it appears the web version and the mobile versions don’t store their data in the same way.I can see the web version’s data in the collections in my Atlas account, but I can’t find the data that came from the mobile clients. Where is it stored?", "username": "Alexei_Vinidiktov" }, { "code": "", "text": "This is very late so I hope you ended up figuring out. I’m trying to do something similar. It seems like you create one Atlas db and access it differently depending on your SDK (mobile or web).", "username": "Josh_Chukwuezi" } ]
Sharing Realm schema between web and mobile client
2020-06-11T03:20:43.118Z
Sharing Realm schema between web and mobile client
2,067
null
[ "compass", "mongodb-shell" ]
[ { "code": "", "text": "I am tried to import csv data in monogdb compass but i want the objectId should be as stype of string only\nwhat all methods can I use?Please help", "username": "Sanjay_Prasad" }, { "code": "", "text": "@Sanjay_Prasad ,I am not sure how to do it in compass, but mongoimport utility can take headers with type annotation :", "username": "Pavel_Duchovny" } ]
How to import CSV data in MongoDB Compass with ObjectId as String
2022-10-27T12:31:30.800Z
How to import CSV data in MongoDB Compass with ObjectId as String
1,086
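A hedged sketch of the mongoimport approach mentioned in the reply above. The file, database, collection, and column names are made up for illustration, and it assumes the CSV has no header row (alternatively, annotate the header line in the file itself and use --headerline). With --columnsHaveTypes, each entry in --fields carries a type annotation, so an id-like column can be forced to string instead of being auto-detected:

mongoimport --db test --collection products --type csv \
  --file products.csv \
  --columnsHaveTypes \
  --fields "externalId.string(),name.string(),price.double()"

In recent Compass versions the CSV import dialog also lets you pick a type per column, which may be enough if the import has to stay in Compass.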
null
[ "aggregation", "queries", "crud" ]
[ { "code": "[\n {\n _id: ObjectId(\"6352b213f60e6c14eade7dc5\"),\n companyId: ObjectId(\"5ec1c4c7ea40dd3912ff206d\"),\n memberId: ObjectId(\"61e538180d543336dfa52a94\"),\n isActive: true,\n name: \"SISWA1\",\n profileImage: \"string\",\n email: \"[email protected]\",\n phone: \"088123123123\",\n mainClassId: null,\n classId: null,\n datas: [\n {\n id: ObjectId(\"6350ce45605a1c2f35e4c607\"),\n classId: ObjectId(\"62f19d68c149abd43526d1a3\"),\n lessonId: ObjectId(\"62f0af32cb716c443625c3d0\"),\n year: \"2002\",\n report: [\n {\n activityId: ObjectId(\"6350f1313f586971dfd1effd\"),\n meet: 3,\n isPresent: true,\n scores: [\n {\n key: \"TUGAS\",\n value: 50\n }\n ]\n },\n {\n activityId: ObjectId(\"6350f13aaa8f2d84071fc3cd\"),\n meet: 12,\n isPresent: true,\n scores: [\n {\n key: \"TUGAS\",\n value: 50\n }\n ]\n }\n ]\n }\n ]\n }\n]\n[\n {\n _id: ObjectId(\"6352b213f60e6c14eade7dc5\"),\n companyId: ObjectId(\"5ec1c4c7ea40dd3912ff206d\"),\n memberId: ObjectId(\"61e538180d543336dfa52a94\"),\n isActive: true,\n name: \"SISWA1\",\n profileImage: \"string\",\n email: \"[email protected]\",\n phone: \"088123123123\",\n mainClassId: null,\n classId: null,\n datas: [\n {\n id: ObjectId(\"6350ce45605a1c2f35e4c607\"),\n classId: ObjectId(\"62f19d68c149abd43526d1a3\"),\n lessonId: ObjectId(\"62f0af32cb716c443625c3d0\"),\n year: \"2002\",\n report: [\n {\n activityId: ObjectId(\"6350f1313f586971dfd1effd\"),\n meet: 3,\n isPresent: true,\n scores: [\n {\n key: \"TUGAS\",\n value: 50\n }\n ]\n }\n ]\n }\n ]\n }\n]\ndb.collection.update({\n \"_id\": ObjectId(\"6352b213f60e6c14eade7dc5\"),\n \"datas\": {\n \"$elemMatch\": {\n \"id\": ObjectId(\"6350ce45605a1c2f35e4c607\"),\n \"report.activityId\": ObjectId(\"6350f1313f586971dfd1effd\")\n }\n },\n \n},\n{\n \"$pull\": {\n \"datas.$[outer].report.$[inner].activityId\": ObjectId(\"6350f13aaa8f2d84071fc3cd\")\n }\n},\n{\n \"arrayFilters\": [\n {\n \"outer.id\": ObjectId(\"6350ce45605a1c2f35e4c607\")\n },\n {\n \"inner.activityId\": ObjectId(\"6350f13aaa8f2d84071fc3cd\")\n }\n ]\n},\n)\n", "text": "hi guy’s i try to delete some document in nested array with $pull operator\nthis is the sample documenti want to delete document that have activity=“6350f13aaa8f2d84071fc3cd”, so the document became like thisi’ve been try with this way, but doesnt work.\nit even pops up a messagefail to run update: write exception: write errors: [Cannot apply $pull to a non-array value]sample code : Mongo playground", "username": "Nuur_zakki_Zamani" }, { "code": "arrayFilters$pulldb.collection.update({\n \"_id\": ObjectId(\"6352b213f60e6c14eade7dc5\"),\n \"datas\": {\n \"$elemMatch\": {\n \"id\": ObjectId(\"6350ce45605a1c2f35e4c607\"),\n \"report.activityId\": ObjectId(\"6350f1313f586971dfd1effd\")\n }\n }\n},\n{\n $pull: {\n \"datas.$[outer].report\": {\n \"activityId\": ObjectId(\"6350f1313f586971dfd1effd\")\n }\n }\n},\n{\n arrayFilters: [\n { \"outer.id\": ObjectId(\"6350ce45605a1c2f35e4c607\") }\n ]\n})\n", "text": "Hello @Nuur_zakki_Zamani,The condition for inner in arrayFilters is not required, you can put that condition inside the $pull operation,Playground", "username": "turivishal" }, { "code": "db.collection.updateOne({ \"datas\": {\n \"$elemMatch\": {\n \"id\": ObjectId(\"6350ce45605a1c2f35e4c607\"),\n \"report.activityId\": ObjectId(\"6350f1313f586971dfd1effd\")\n }\n }},{$pull : { \"datas.$.report\" : { activityId: ObjectId(\"6350f13aaa8f2d84071fc3cd\")} }})\nactivityId: ObjectId(\"6350f13aaa8f2d84071fc3cd\"", "text": "Hi @Nuur_zakki_Zamani and @turivishal ,There is 
actually a simple way to write it:The positional $ works fine if there is only one occurrence of activityId: ObjectId(\"6350f13aaa8f2d84071fc3cd\").Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "yesss thanks Mr. @Pavel_Duchovny it works in my Python ", "username": "Nuur_zakki_Zamani" }, { "code": "", "text": "thanks bro @turivishal ", "username": "Nuur_zakki_Zamani" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to delete an object in a nested array
2022-10-24T02:56:59.248Z
How to delete an object in a nested array
4,565
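For readers adapting the accepted answer above, here is a compact mongosh sketch that removes one report entry and checks the result. The collection name "students" is an assumption; the ObjectIds are copied from the sample documents in the thread, and the positional $ form assumes only one element of datas matches, as noted in the replies.

const res = db.students.updateOne(
  {
    datas: {
      $elemMatch: {
        id: ObjectId("6350ce45605a1c2f35e4c607"),
        "report.activityId": ObjectId("6350f13aaa8f2d84071fc3cd")
      }
    }
  },
  {
    // pull the report entry for the activity we want to remove
    $pull: { "datas.$.report": { activityId: ObjectId("6350f13aaa8f2d84071fc3cd") } }
  }
);
print(res.modifiedCount);  // expect 1 when exactly one matching report entry was removed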
null
[]
[ { "code": "", "text": "How do we use override information in Mongo Query.\nFor example, we have 2 fields(Salary and Salary_override). If the user provide override information in Salary_override field, we need to consider this field as first priority. Suppose if this override field value is null or empty then we need to use salary field, not override field.\nWe should construct the mongo query dynamically. Please suggest how do we construct the Mongo query for this scenario.", "username": "Mansur_Ali" }, { "code": "[{\n $project: {\n Salary: {\n $ifNull: [\n '$Salary_overide',\n '$Salary'\n ]\n }\n }\n}]\ndb.test12.find()\n{ _id: ObjectId(\"630cb9db2786520682ce36e5\"),\n salary: 100,\n salary_overide: 'xxx' }\n{ _id: ObjectId(\"6310876b46087690603641b0\"), salary: 100 }\ndb.test12.aggregate([{\n $project: {\n salary: {\n $ifNull: [\n '$salary_overide',\n '$salary'\n ]\n }\n }\n}])\n{ _id: ObjectId(\"630cb9db2786520682ce36e5\"), salary: 'xxx' }\n{ _id: ObjectId(\"6310876b46087690603641b0\"), salary: 100 }\n", "text": "Hi @Mansur_Ali ,the following aggregation should work:Ty", "username": "Pavel_Duchovny" }, { "code": " schema.post('init', function() {\n if(this.Salary_override){\n this.Salary = this.Salary_override;\n }\n });\n", "text": "If mongoose is being used,\nthen it could be done by using init post hook.\nIt’s executed on each doc, loaded by a find query.", "username": "Sahil_Kashyap" } ]
How do we use override information in Mongo Query
2022-09-01T07:33:14.823Z
How do we use override information in Mongo Query
1,532
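As a small extension of the accepted answer above, the same $ifNull fallback can also be evaluated directly in a find() projection on MongoDB 4.4 or newer, which accepts aggregation expressions. The collection and field names below follow the sample data in the reply (including its salary_overide spelling); treat this as a sketch rather than the only way to do it.

// Aggregation form, as in the accepted answer
db.test12.aggregate([
  { $project: { salary: { $ifNull: [ "$salary_overide", "$salary" ] } } }
])

// Equivalent find() projection (MongoDB 4.4+)
db.test12.find({}, { salary: { $ifNull: [ "$salary_overide", "$salary" ] } })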
null
[ "aggregation", "queries", "indexes" ]
[ { "code": "db.records.explain(\"executionStats\").aggregate([\n {\n \"$match\":{\n \"$and\":[\n {\n \"status\":\"new\",\n \"messageDeleted\":{\n \"$exists\":false\n }\n },\n {\n \"header.receiverNode\":{\n \"$in\":[\n \"224572\"\n ]\n }\n }\n ]\n }\n },\n {\n \"$sort\":{\n \"_id\":1\n }\n },\n {\n \"$limit\":100\n }\n]\");\"{\n \"stages\":[\n {\n \"$cursor\":{\n \"query\":{\n \"$and\":[\n {\n \"status\":\"new\",\n \"messageDeleted\":{\n \"$exists\":false\n }\n },\n {\n \"header.receiverNode\":{\n \"$in\":[\n \"224572\"\n ]\n }\n }\n ]\n },\n \"sort\":{\n \"_id\":1\n },\n \"limit\":NumberLong(100),\n \"queryPlanner\":{\n \"plannerVersion\":1,\n \"namespace\":\"hdnapi.records\",\n \"indexFilterSet\":false,\n \"parsedQuery\":{\n \"$and\":[\n {\n \"header.receiverNode\":{\n \"$eq\":\"224572\"\n }\n },\n {\n \"status\":{\n \"$eq\":\"new\"\n }\n },\n {\n \"messageDeleted\":{\n \"$not\":{\n \"$exists\":true\n }\n }\n }\n ]\n },\n \"queryHash\":\"6D467E3F\",\n \"planCacheKey\":\"28A44C04\",\n \"winningPlan\":{\n \"stage\":\"FETCH\",\n \"filter\":{\n \"$and\":[\n {\n \"header.receiverNode\":{\n \"$eq\":\"224572\"\n }\n },\n {\n \"status\":{\n \"$eq\":\"new\"\n }\n },\n {\n \"messageDeleted\":{\n \"$not\":{\n \"$exists\":true\n }\n }\n }\n ]\n },\n \"inputStage\":{\n \"stage\":\"IXSCAN\",\n \"keyPattern\":{\n \"_id\":1\n },\n \"indexName\":\"_id_\",\n \"isMultiKey\":false,\n \"multiKeyPaths\":{\n \"_id\":[\n \n ]\n },\n \"isUnique\":true,\n \"isSparse\":false,\n \"isPartial\":false,\n \"indexVersion\":2,\n \"direction\":\"forward\",\n \"indexBounds\":{\n \"_id\":[\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n },\n \"rejectedPlans\":[\n \n ]\n },\n \"executionStats\":{\n \"executionSuccess\":true,\n \"nReturned\":100,\n \"executionTimeMillis\":23729,\n \"totalKeysExamined\":2530452,\n \"totalDocsExamined\":2530452,\n \"executionStages\":{\n \"stage\":\"FETCH\",\n \"filter\":{\n \"$and\":[\n {\n \"header.receiverNode\":{\n \"$eq\":\"224572\"\n }\n },\n {\n \"status\":{\n \"$eq\":\"new\"\n }\n },\n {\n \"messageDeleted\":{\n \"$not\":{\n \"$exists\":true\n }\n }\n }\n ]\n },\n \"nReturned\":100,\n \"executionTimeMillisEstimate\":6277,\n \"works\":2530452,\n \"advanced\":100,\n \"needTime\":2530352,\n \"needYield\":0,\n \"saveState\":19838,\n \"restoreState\":19838,\n \"isEOF\":0,\n \"docsExamined\":2530452,\n \"alreadyHasObj\":0,\n \"inputStage\":{\n \"stage\":\"IXSCAN\",\n \"nReturned\":2530452,\n \"executionTimeMillisEstimate\":481,\n \"works\":2530452,\n \"advanced\":2530452,\n \"needTime\":0,\n \"needYield\":0,\n \"saveState\":19838,\n \"restoreState\":19838,\n \"isEOF\":0,\n \"keyPattern\":{\n \"_id\":1\n },\n \"indexName\":\"_id_\",\n \"isMultiKey\":false,\n \"multiKeyPaths\":{\n \"_id\":[\n \n ]\n },\n \"isUnique\":true,\n \"isSparse\":false,\n \"isPartial\":false,\n \"indexVersion\":2,\n \"direction\":\"forward\",\n \"indexBounds\":{\n \"_id\":[\n \"[MinKey, MaxKey]\"\n ]\n },\n \"keysExamined\":2530452,\n \"seeks\":1,\n \"dupsTested\":0,\n \"dupsDropped\":0\n }\n }\n }\n }\n }\n ],\n \"serverInfo\":{\n \"host\":\"cse04-mongo\",\n \"port\":27017,\n \"version\":\"4.2.20\",\n \"gitVersion\":\"15c0712952c356cb711c13a42cb3bca8617d4ebc\"\n },\n \"ok\":1,\n \"$clusterTime\":{\n \"clusterTime\":Timestamp(1666614215,\n 1),\n \"signature\":{\n \"hash\":BinData(0,\n \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"\")\",\n \"keyId\":NumberLong(0)\n }\n },\n \"operationTime\":Timestamp(1666614215,\n 1)\n}\ndb.records.explain(\"executionStats\").aggregate([\n {\n \"$match\":{\n \"$and\":[\n {\n \"status\":\"new\",\n 
\"messageDeleted\":{\n \"$exists\":false\n }\n },\n {\n \"header.receiverNode\":{\n \"$in\":[\n \"224572\"\n ]\n }\n }\n ]\n }\n },\n {\n \"$sort\":{\n \"header.receiverNode\":1\n }\n },\n {\n \"$limit\":100\n }\n]\");\"{\n \"stages\":[\n {\n \"$cursor\":{\n \"query\":{\n \"$and\":[\n {\n \"status\":\"new\",\n \"messageDeleted\":{\n \"$exists\":false\n }\n },\n {\n \"header.receiverNode\":{\n \"$in\":[\n \"224572\"\n ]\n }\n }\n ]\n },\n \"sort\":{\n \"header.receiverNode\":1\n },\n \"limit\":NumberLong(100),\n \"queryPlanner\":{\n \"plannerVersion\":1,\n \"namespace\":\"hdnapi.records\",\n \"indexFilterSet\":false,\n \"parsedQuery\":{\n \"$and\":[\n {\n \"header.receiverNode\":{\n \"$eq\":\"224572\"\n }\n },\n {\n \"status\":{\n \"$eq\":\"new\"\n }\n },\n {\n \"messageDeleted\":{\n \"$not\":{\n \"$exists\":true\n }\n }\n }\n ]\n },\n \"queryHash\":\"CBB00445\",\n \"planCacheKey\":\"8B2613FA\",\n \"winningPlan\":{\n \"stage\":\"FETCH\",\n \"filter\":{\n \"$and\":[\n {\n \"status\":{\n \"$eq\":\"new\"\n }\n },\n {\n \"messageDeleted\":{\n \"$not\":{\n \"$exists\":true\n }\n }\n }\n ]\n },\n \"inputStage\":{\n \"stage\":\"IXSCAN\",\n \"keyPattern\":{\n \"header.receiverNode\":1\n },\n \"indexName\":\"header.receiverNode_1\",\n \"isMultiKey\":false,\n \"multiKeyPaths\":{\n \"header.receiverNode\":[\n \n ]\n },\n \"isUnique\":false,\n \"isSparse\":false,\n \"isPartial\":false,\n \"indexVersion\":2,\n \"direction\":\"forward\",\n \"indexBounds\":{\n \"header.receiverNode\":[\n \"[\\\"224572\\\", \\\"224572\\\"]\"\n ]\n }\n }\n },\n \"rejectedPlans\":[\n \n ]\n },\n \"executionStats\":{\n \"executionSuccess\":true,\n \"nReturned\":100,\n \"executionTimeMillis\":21,\n \"totalKeysExamined\":1316,\n \"totalDocsExamined\":1316,\n \"executionStages\":{\n \"stage\":\"FETCH\",\n \"filter\":{\n \"$and\":[\n {\n \"status\":{\n \"$eq\":\"new\"\n }\n },\n {\n \"messageDeleted\":{\n \"$not\":{\n \"$exists\":true\n }\n }\n }\n ]\n },\n \"nReturned\":100,\n \"executionTimeMillisEstimate\":9,\n \"works\":1316,\n \"advanced\":100,\n \"needTime\":1216,\n \"needYield\":0,\n \"saveState\":11,\n \"restoreState\":11,\n \"isEOF\":0,\n \"docsExamined\":1316,\n \"alreadyHasObj\":0,\n \"inputStage\":{\n \"stage\":\"IXSCAN\",\n \"nReturned\":1316,\n \"executionTimeMillisEstimate\":0,\n \"works\":1316,\n \"advanced\":1316,\n \"needTime\":0,\n \"needYield\":0,\n \"saveState\":11,\n \"restoreState\":11,\n \"isEOF\":0,\n \"keyPattern\":{\n \"header.receiverNode\":1\n },\n \"indexName\":\"header.receiverNode_1\",\n \"isMultiKey\":false,\n \"multiKeyPaths\":{\n \"header.receiverNode\":[\n \n ]\n },\n \"isUnique\":false,\n \"isSparse\":false,\n \"isPartial\":false,\n \"indexVersion\":2,\n \"direction\":\"forward\",\n \"indexBounds\":{\n \"header.receiverNode\":[\n \"[\\\"224572\\\", \\\"224572\\\"]\"\n ]\n },\n \"keysExamined\":1316,\n \"seeks\":1,\n \"dupsTested\":0,\n \"dupsDropped\":0\n }\n }\n }\n }\n }\n ],\n \"serverInfo\":{\n \"host\":\"cse04-mongo\",\n \"port\":27017,\n \"version\":\"4.2.20\",\n \"gitVersion\":\"15c0712952c356cb711c13a42cb3bca8617d4ebc\"\n },\n \"ok\":1,\n \"$clusterTime\":{\n \"clusterTime\":Timestamp(1666616755,\n 1),\n \"signature\":{\n \"hash\":BinData(0,\n \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"\")\",\n \"keyId\":NumberLong(0)\n }\n },\n \"operationTime\":Timestamp(1666616755,\n 1)\n}\n", "text": "Dear all,We are having issues with the aggregate pipeline doing optimization steps which harm the queries performance. 
It is choosing to do the $sort before the $match stage (my assumption is because the $sort has a better index than the $match stage) which is causing the query to sort the whole database instead of a filtered set.I have added examples with explain executionStats below to provide more detail. The first example uses an $sort on _id which is indexed making it the first stage in the aggregate. The second example uses an $sort on header.receiverNode (own created field) which is also indexed.\nThe first example using _id is doing $sort first while the second example is doing the $match first.Here we can see “totalKeysExamined” : 2530452 which is the same number as the amount of records in the database. And for total time “executionTimeMillis”:23729.Here we can see “totalKeysExamined”: 1316 which is the matched subset. And for total time “executionTimeMillis”:21.So the same query is behaving different with a sort on _id or “header.receiverNode”. The query with the sort on _id is doing the sort first which leads to teriible performance and the query with “header.receiverNode” is working just fine.What is causing the different behaviour between these queries? Why is the first query doing a sort before the match?", "username": "Ontwikkeling_Afdeling" }, { "code": "Atlas atlas-whhaeg-shard-0 [primary] test> db.records.insertOne({\"status\":\"new\",\"header\":{\"receiverNode\":\"224572\"}})\n{\n acknowledged: true,\n insertedId: ObjectId(\"6356a8a24481e6bdd5090f75\")\n}\nAtlas atlas-whhaeg-shard-0 [primary] test> db.records.insertOne({\"status\":\"new\",\"header\":{\"receiverNode\":\"224572\"}})\n{\n acknowledged: true,\n insertedId: ObjectId(\"6356a8a54481e6bdd5090f76\")\n}\nAtlas atlas-whhaeg-shard-0 [primary] test> db.records.createIndex({\"header.receiverNode\":1})\nheader.receiverNode_1\nAtlas atlas-whhaeg-shard-0 [primary] test> db.records.explain(\"executionStats\").aggregate([ { \"$match\": { \"$and\": [ { \"status\": \"new\", \"messageDeleted\": { \"$exists\": false } }, { \"header.receiverNode\": { \"$in\": [ \"224572\"] } }] } }, { \"$sort\": { \"_id\": 1 } }, { \"$limit\": 100 }])\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'test.records',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n { 'header.receiverNode': { '$eq': '224572' } },\n { status: { '$eq': 'new' } },\n { messageDeleted: { '$not': { '$exists': true } } }\n ]\n },\n optimizedPipeline: true,\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'LIMIT',\n limitAmount: 100,\n inputStage: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { 'header.receiverNode': { '$eq': '224572' } },\n { status: { '$eq': 'new' } },\n { messageDeleted: { '$not': { '$exists': true } } }\n ]\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { _id: 1 },\n indexName: '_id_',\n isMultiKey: false,\n multiKeyPaths: { _id: [] },\n isUnique: true,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { _id: [ '[MinKey, MaxKey]' ] }\n }\n }\n },\n rejectedPlans: [\n {\n stage: 'SORT',\n sortPattern: { _id: 1 },\n memLimit: 33554432,\n limitAmount: 100,\n type: 'simple',\n inputStage: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { messageDeleted: { '$not': { '$exists': true } } },\n { status: { '$eq': 'new' } }\n ]\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { 'header.receiverNode': 1 },\n indexName: 'header.receiverNode_1',\n isMultiKey: false,\n multiKeyPaths: { 'header.receiverNode': [] },\n 
isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { 'header.receiverNode': [ '[\"224572\", \"224572\"]' ] }\n }\n }\n }\n ]\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 2,\n executionTimeMillis: 1,\n totalKeysExamined: 2,\n totalDocsExamined: 2,\n executionStages: {\n stage: 'LIMIT',\n nReturned: 2,\n executionTimeMillisEstimate: 0,\n works: 4,\n advanced: 2,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n limitAmount: 100,\n inputStage: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { 'header.receiverNode': { '$eq': '224572' } },\n { status: { '$eq': 'new' } },\n { messageDeleted: { '$not': { '$exists': true } } }\n ]\n },\n nReturned: 2,\n executionTimeMillisEstimate: 0,\n works: 4,\n advanced: 2,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n docsExamined: 2,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 2,\n executionTimeMillisEstimate: 0,\n works: 3,\n advanced: 2,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n keyPattern: { _id: 1 },\n indexName: '_id_',\n isMultiKey: false,\n multiKeyPaths: { _id: [] },\n isUnique: true,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { _id: [ '[MinKey, MaxKey]' ] },\n keysExamined: 2,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n }\n },\n command: {\n aggregate: 'records',\n pipeline: [\n {\n '$match': {\n '$and': [\n { status: 'new', messageDeleted: { '$exists': false } },\n { 'header.receiverNode': { '$in': [ '224572' ] } }\n ]\n }\n },\n { '$sort': { _id: 1 } },\n { '$limit': 100 }\n ],\n cursor: {},\n '$db': 'test'\n },\n serverInfo: {\n host: 'ac-mqumzc7-shard-00-02.yzcbsla.mongodb.net',\n port: 27017,\n version: '5.0.13',\n gitVersion: 'cfb7690563a3144d3d1175b3a20c2ec81b662a8f'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 16793600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 33554432,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1666623668, i: 70 }),\n signature: {\n hash: Binary(Buffer.from(\"f655b6234ec133c083f5e3ea932602643aad6dd3\", \"hex\"), 0),\n keyId: Long(\"7108711770176356355\")\n }\n },\n operationTime: Timestamp({ t: 1666623668, i: 70 })\n}\n\nAtlas atlas-whhaeg-shard-0 [primary] test> db.records.createIndex({\"header.receiverNode\":1,\"_id\":1})\nheader.receiverNode_1__id_1\n\nAtlas atlas-whhaeg-shard-0 [primary] test> db.records.explain(\"executionStats\").aggregate([ { \"$match\": { \"$and\": [ { \"status\": \"new\", \"messageDeleted\": { \"$exists\": false } }, { \"header.receiverNode\": { \"$in\": [ \"224572\"] } }] } }, { \"$sort\": { \"_id\": 1 } }, { \"$limit\": 100 }])\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'test.records',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n { 'header.receiverNode': { '$eq': '224572' } },\n { status: { '$eq': 'new' } },\n { messageDeleted: { '$not': { '$exists': true } } }\n ]\n },\n optimizedPipeline: true,\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n 
stage: 'LIMIT',\n limitAmount: 100,\n inputStage: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { messageDeleted: { '$not': { '$exists': true } } },\n { status: { '$eq': 'new' } }\n ]\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { 'header.receiverNode': 1, _id: 1 },\n indexName: 'header.receiverNode_1__id_1',\n isMultiKey: false,\n multiKeyPaths: { 'header.receiverNode': [], _id: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n 'header.receiverNode': [ '[\"224572\", \"224572\"]' ],\n _id: [ '[MinKey, MaxKey]' ]\n }\n }\n }\n },\n rejectedPlans: [\n {\n stage: 'SORT',\n sortPattern: { _id: 1 },\n memLimit: 33554432,\n limitAmount: 100,\n type: 'simple',\n inputStage: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { messageDeleted: { '$not': { '$exists': true } } },\n { status: { '$eq': 'new' } }\n ]\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { 'header.receiverNode': 1 },\n indexName: 'header.receiverNode_1',\n isMultiKey: false,\n multiKeyPaths: { 'header.receiverNode': [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { 'header.receiverNode': [ '[\"224572\", \"224572\"]' ] }\n }\n }\n }\n ]\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 2,\n executionTimeMillis: 1,\n totalKeysExamined: 2,\n totalDocsExamined: 2,\n executionStages: {\n stage: 'LIMIT',\n nReturned: 2,\n executionTimeMillisEstimate: 1,\n works: 4,\n advanced: 2,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n limitAmount: 100,\n inputStage: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { messageDeleted: { '$not': { '$exists': true } } },\n { status: { '$eq': 'new' } }\n ]\n },\n nReturned: 2,\n executionTimeMillisEstimate: 1,\n works: 4,\n advanced: 2,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n docsExamined: 2,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 2,\n executionTimeMillisEstimate: 1,\n works: 3,\n advanced: 2,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n keyPattern: { 'header.receiverNode': 1, _id: 1 },\n indexName: 'header.receiverNode_1__id_1',\n isMultiKey: false,\n multiKeyPaths: { 'header.receiverNode': [], _id: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n 'header.receiverNode': [ '[\"224572\", \"224572\"]' ],\n _id: [ '[MinKey, MaxKey]' ]\n },\n keysExamined: 2,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n }\n },\n command: {\n aggregate: 'records',\n pipeline: [\n {\n '$match': {\n '$and': [\n { status: 'new', messageDeleted: { '$exists': false } },\n { 'header.receiverNode': { '$in': [ '224572' ] } }\n ]\n }\n },\n { '$sort': { _id: 1 } },\n { '$limit': 100 }\n ],\n cursor: {},\n '$db': 'test'\n },\n serverInfo: {\n host: 'ac-mqumzc7-shard-00-02.yzcbsla.mongodb.net',\n port: 27017,\n version: '5.0.13',\n gitVersion: 'cfb7690563a3144d3d1175b3a20c2ec81b662a8f'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 16793600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 33554432,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 
104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1666624083, i: 5 }),\n signature: {\n hash: Binary(Buffer.from(\"89789d6903a1435fc259eb87b0091aad24e7d129\", \"hex\"), 0),\n keyId: Long(\"7108711770176356355\")\n }\n },\n operationTime: Timestamp({ t: 1666624083, i: 5 })\n}\n", "text": "Hello,The query sorting on “header.receiverNode” is using the index on “header.receiverNode” for both filtering and sorting which is ideal.On the other hand when you filter with “header.receiverNode” and sort on “_id” ( which is not part of the query filters ) this means that the optimizer needs to use a separate index for sorting other than the one filtering the data in the $match stage.When the optimizer needs to use more than one index, it’s called index intersection, However, it doesn’t work when the sort operation requires an index completely separate from the query predicate. and that would explain why when you sort with “_id” the performance degrades as it’s not using the selective index on “header.receiverNode” which in your case matched only 1316 keys.I can see that this behaviour is consistent and reproducible as I did in my testing environment even with 2 records inserted:As you can see it still used the index for “_id” as it considers it better for the sort operation and put the plan using “header.receiverNode” index in the rejected plans.If your query requires sorting with a field that is not part of the selective index used, you can consider including it in the index, in the below example I created an index that uses both “header.receiverNode” and “_id” and I can see the optimizer is picking it up accordingly.I hope you find this helpful.", "username": "Mohamed_Elshafey" }, { "code": "db.records.explain('executionStats').aggregate([{\"$match\" : { \"$and\" : [ { \"status\" : \"new\", \"messageDeleted\" : { \"$exists\" : false } }, { \"header.receiverNode\" : { \"$in\" : [ \"224572\" ] } } ] } }, { \"$sort\" : { \"_id\" : 1 } }, { \"$limit\" : 100 } ], {hint: \"header.receiverNode_1\"});\n{\n \"stages\" : [\n {\n \"$cursor\" : {\n \"query\" : {\n \"$and\" : [\n {\n \"status\" : \"new\",\n \"messageDeleted\" : {\n \"$exists\" : false\n }\n },\n {\n \"header.receiverNode\" : {\n \"$in\" : [\n \"224572\"\n ]\n }\n }\n ]\n },\n \"queryPlanner\" : {\n \"plannerVersion\" : 1,\n \"namespace\" : \"hdnapi.records\",\n \"indexFilterSet\" : false,\n \"parsedQuery\" : {\n \"$and\" : [\n {\n \"header.receiverNode\" : {\n \"$eq\" : \"224572\"\n }\n },\n {\n \"status\" : {\n \"$eq\" : \"new\"\n }\n },\n {\n \"messageDeleted\" : {\n \"$not\" : {\n \"$exists\" : true\n }\n }\n }\n ]\n },\n \"queryHash\" : \"9E2A8714\",\n \"planCacheKey\" : \"FEBB4A4C\",\n \"winningPlan\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [\n {\n \"status\" : {\n \"$eq\" : \"new\"\n }\n },\n {\n \"messageDeleted\" : {\n \"$not\" : {\n \"$exists\" : true\n }\n }\n }\n ]\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"header.receiverNode\" : 1\n },\n \"indexName\" : \"header.receiverNode_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"header.receiverNode\" : [ ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"header.receiverNode\" : [\n \"[\\\"224572\\\", \\\"224572\\\"]\"\n ]\n }\n }\n },\n \"rejectedPlans\" : [ ]\n },\n \"executionStats\" : {\n \"executionSuccess\" : true,\n \"nReturned\" : 5778,\n \"executionTimeMillis\" : 130,\n 
\"totalKeysExamined\" : 12429,\n \"totalDocsExamined\" : 12429,\n \"executionStages\" : {\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"$and\" : [\n {\n \"status\" : {\n \"$eq\" : \"new\"\n }\n },\n {\n \"messageDeleted\" : {\n \"$not\" : {\n \"$exists\" : true\n }\n }\n }\n ]\n },\n \"nReturned\" : 5778,\n \"executionTimeMillisEstimate\" : 9,\n \"works\" : 12430,\n \"advanced\" : 5778,\n \"needTime\" : 6651,\n \"needYield\" : 0,\n \"saveState\" : 101,\n \"restoreState\" : 101,\n \"isEOF\" : 1,\n \"docsExamined\" : 12429,\n \"alreadyHasObj\" : 0,\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"nReturned\" : 12429,\n \"executionTimeMillisEstimate\" : 6,\n \"works\" : 12430,\n \"advanced\" : 12429,\n \"needTime\" : 0,\n \"needYield\" : 0,\n \"saveState\" : 101,\n \"restoreState\" : 101,\n \"isEOF\" : 1,\n \"keyPattern\" : {\n \"header.receiverNode\" : 1\n },\n \"indexName\" : \"header.receiverNode_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"header.receiverNode\" : [ ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"header.receiverNode\" : [\n \"[\\\"224572\\\", \\\"224572\\\"]\"\n ]\n },\n \"keysExamined\" : 12429,\n \"seeks\" : 1,\n \"dupsTested\" : 0,\n \"dupsDropped\" : 0\n }\n }\n }\n }\n },\n {\n \"$sort\" : {\n \"sortKey\" : {\n \"_id\" : 1\n },\n \"limit\" : NumberLong(100)\n }\n }\n ],\n \"serverInfo\" : {\n \"host\" : \"cse04-mongo\",\n \"port\" : 27017,\n \"version\" : \"4.2.20\",\n \"gitVersion\" : \"15c0712952c356cb711c13a42cb3bca8617d4ebc\"\n },\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1666688198, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1666688198, 1)\n}\n", "text": "Thanks for the answer and extensive explanation. It is now more clear what is causing the optimizer to have this behaviour and why it is choosing a different index at times.Is this behaviour from the optimizer expected where it chooses to sort first because there are two completely seperate indexes? Is there a way to enforce the aggregate to do the match stage first?\nDoes this mean the documentation on Aggregation Pipeline Optimization — MongoDB Manual is not always correct?Extending the index is a good idea but we want to provide multiple different sort options on different fields than _id. The query with _id is an example but there are similar ones causing the same issue.\nDoes this mean we need to create an index for all of them or a big index including them all?The query performance would be fine if it would always match first before the sort but because of the index intersection with completely seperate indexes the query becomes unworkable, sorting the entire database before matching seems not optimized ;). We cant remove the other indexes because they are used for other frequent used queries.I have tried to add a hint ($hint — MongoDB Manual) to the query to force the “header.receiverNode” index and this works because of the hint it does the $match stage first again.This query shows “totalKeysExamined” : 12429 and “executionTimeMillis” : 130 showing it has properly matched first before sorting.The problem is that our filter does not always contain the field “header.receiverNode” so hinting this index will only work in some cases. 
Is there a different way than using a $hint or creating multiple/massive indexes to support multiple $sort operations?\nI was hoping for a way/option to enforce the $match operation always go first before the $sort operation.", "username": "Ontwikkeling_Afdeling" }, { "code": "{\n explainVersion: '1',\n stages: [\n {\n '$cursor': {\n queryPlanner: {\n namespace: '633c1839ccc493400fe69b83_test.records',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n { 'header.receiverNode': { '$eq': '224572' } },\n { status: { '$eq': 'new' } },\n { messageDeleted: { '$not': { '$exists': true } } }\n ]\n },\n queryHash: '9E2A8714',\n planCacheKey: '08D61773',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { messageDeleted: { '$not': { '$exists': true } } },\n { status: { '$eq': 'new' } }\n ]\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { 'header.receiverNode': 1 },\n indexName: 'header.receiverNode_1',\n isMultiKey: false,\n multiKeyPaths: { 'header.receiverNode': [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { 'header.receiverNode': [ '[\"224572\", \"224572\"]' ] }\n }\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 1,\n executionTimeMillis: 0,\n totalKeysExamined: 1,\n totalDocsExamined: 1,\n executionStages: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { messageDeleted: { '$not': { '$exists': true } } },\n { status: { '$eq': 'new' } }\n ]\n },\n nReturned: 1,\n executionTimeMillisEstimate: 0,\n works: 2,\n advanced: 1,\n needTime: 0,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n docsExamined: 1,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 1,\n executionTimeMillisEstimate: 0,\n works: 2,\n advanced: 1,\n needTime: 0,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n keyPattern: { 'header.receiverNode': 1 },\n indexName: 'header.receiverNode_1',\n isMultiKey: false,\n multiKeyPaths: { 'header.receiverNode': [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { 'header.receiverNode': [ '[\"224572\", \"224572\"]' ] },\n keysExamined: 1,\n seeks: 1,\n dupsTested: 0,\n dupsDropped: 0\n }\n }\n }\n },\n nReturned: Long(\"1\"),\n executionTimeMillisEstimate: Long(\"0\")\n },\n {\n '$project': { non_existing_field: false, _id: true },\n nReturned: Long(\"1\"),\n executionTimeMillisEstimate: Long(\"0\")\n },\n {\n '$sort': { sortKey: { _id: 1 }, limit: Long(\"100\") },\n totalDataSizeSortedBytesEstimate: Long(\"188\"),\n usedDisk: false,\n nReturned: Long(\"1\"),\n executionTimeMillisEstimate: Long(\"0\")\n }\n ],\n serverInfo: {\n host: 'ac-mqumzc7-shard-00-02.yzcbsla.mongodb.net',\n port: 27017,\n version: '5.0.13',\n gitVersion: 'cfb7690563a3144d3d1175b3a20c2ec81b662a8f'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 16793600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 33554432,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n command: {\n aggregate: 'records',\n pipeline: [\n {\n '$match': {\n '$and': [\n { status: 
'new', messageDeleted: { '$exists': false } },\n { 'header.receiverNode': { '$in': [ '224572' ] } }\n ]\n }\n },\n { '$project': { non_existing_field: 0 } },\n { '$sort': { _id: 1 } },\n { '$limit': 100 }\n ],\n cursor: {},\n '$db': 'test'\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1666782156, i: 23 }),\n signature: {\n hash: Binary(Buffer.from(\"b91a1dcd407bd6ab812af23b3778ba556957b620\", \"hex\"), 0),\n keyId: Long(\"7108711770176356355\")\n }\n },\n operationTime: Timestamp({ t: 1666782156, i: 23 })\n}\n", "text": "Hello,Checking the plan when it used “_id” index for sort, it still does the match first before sort, but it doesn’t use an index for filtering, it uses the “_id” index to access the whole collection looking for the matching documents as it will need it again for sorting.The problem is that part of the aggregation optimisation is deciding if the aggregation would benefit from indexes or not, both $match and $sort are eligible to benefit from indexes, this is ok if they are using the same index, but because they are using different ones, the optimiser chooses one over the other, it’s possible that sorting using indexes takes priority ( this could need further testing ).If this is not the only filter or sort field you use, then I see how creating more compound indexes or even using hints would not be practical.Since $sort in aggregation cannot benefit from indexes if it’s preceded with a $project as mentioned here, I tried to introduce a dummy $project stage that excludes a non-existing field just to prevent $sort from using an index and it worked in my environment as per the below example:While this may not be ideal and will require you to tweak the aggregation itself, it will guarantee that you avoid this index intersection issue altogether no matter what sort field you use.", "username": "Mohamed_Elshafey" }, { "code": "", "text": "Thanks again for another detailed answer. The workaround with an non-existing project field is working and it forces the optimizer to use the index for the match stage instead of the sort index.We will be using this workaround and looking into restricting some sort/filter options so we can use covered indexes to make the queries even faster and remove the need for an empty project field.Ill mark your answer as the solution.Cheers", "username": "Ontwikkeling_Afdeling" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.2 (and 4.4) aggregate is doing $sort before $match resulting in a sort of the whole collection and poor performance
2022-10-24T13:13:56.880Z
MongoDB 4.2 (and 4.4) aggregate is doing $sort before $match resulting in a sort of the whole collection and poor performance
2,298
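To gather the two workarounds discussed in the thread above in one place, here is a hedged mongosh sketch; the index and field names are taken from the thread, and which option performs better should be confirmed with explain() on real data.

// Option 1: a compound index that can serve both the equality filter and the sort
db.records.createIndex({ "header.receiverNode": 1, _id: 1 })

// Option 2: a dummy $project excluding a non-existent field, which prevents the
// optimizer from choosing the _id index for the sort ahead of the selective match
db.records.aggregate([
  {
    $match: {
      status: "new",
      messageDeleted: { $exists: false },
      "header.receiverNode": { $in: [ "224572" ] }
    }
  },
  { $project: { non_existing_field: 0 } },
  { $sort: { _id: 1 } },
  { $limit: 100 }
])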
null
[ "change-streams" ]
[ { "code": "messages = [A, B, C]", "text": "Hi everyone,I have a question. Is there anyway to read change streams with multiple threads?Ex: we have 3 messages in change streams\nmessages = [A, B, C]\nand my client has 2 threads. I want these 2 threads can receive messages in parallel. Ex:\nthread 1: receive [A, C]\nthread 2: receive [B]Assume that I don’t care about the ordering of messages, i.e: timestamp. I just want to read as fast as possible.", "username": "Khanh_Nguyen" }, { "code": "", "text": "You can have multiple thread receiving different change stream message if the query used to set up the stream is different.If the query is the same then all receive the message.If you want to load balance you can easily do it internally or with any message queue framework.", "username": "steevej" }, { "code": "", "text": "Basically, I want multiple threads can receive different messages in parallel from a change streams (ex: change streams from a specific collection in MongoDB).\nIt seems that there’s no way to do that.My problem is that I have 10.000 updates/sec in MongoDB. In my client program, I want to receive these 10.000 messages via change streams (FULL_DOCUMENT, each document is around 100KB) with Java API of change streams. I want to speed this process up by using multiple threads (Assume that I don’t care about the ordering of messages).", "username": "Khanh_Nguyen" }, { "code": "", "text": "(ex: change streams from a specific collection in MongoDB).\nIt seems that there’s no way to do that.please revise the documentation because you can do the above", "username": "steevej" }, { "code": " MongoCollection<Document> collection = db.getCollection(\"my_collection\");\n ChangeStreamIterable<Document> iterable = collection.watch();\n\n iterable.forEach(new Consumer<ChangeStreamDocument<Document>>() {\n @Override\n public void accept(ChangeStreamDocument<Document> doc) {\n // do something with doc\n }\n });", "text": "Thank Steeve for your reply.Could you please give me the link to that documentation? I’ve got stuck for a few days for this problem and couldn’t find any helpful link.I have 10k changes/sec in MongoDB. In my client program, I want to receive these 10k changes (fullDocument, each doc is around 100KB, so it’s around 1GB data/sec) with Java API of change streams. I want to speed this process up by using multiple threads (assume that I don’t care about the ordering of messages). Below is a normal way to work with change streams. How to speed it up?", "username": "Khanh_Nguyen" }, { "code": "MongoCollection<Document> collection = db.getCollection(\"my_collection\");\nChangeStreamIterable<Document> iterable = collection.watch();\n", "text": "Documentation can be found with google search.https://www.google.com/search?q=mongodb+change+streamsYou write(ex: change streams from a specific collection in MongoDB).\nIt seems that there’s no way to do that.but you share the codewhich does exactlychange streams from a specific collection", "username": "steevej" }, { "code": "", "text": "Oh Steeve, maybe my explanation is pretty bad. What I want is “multiple threads can receive different messages in parallel from a change streams”. You can see the problem (10k changes/sec) that I described above.The key problem is I want “mutiple threads” and “parallel”.I mean I want to receive messages in parallel from change stream of something like: database, a single collection, multiple collections. 
So I wrote “ex: change streams from a specific collection”.Of course, I know how to receive message in single thread as same as the code that I shared. It’s easy.", "username": "Khanh_Nguyen" }, { "code": "", "text": "As I originally wrote:If the query is the same then all receive the message.If you want to load balance you can easily do it internally or with any message queue framework.", "username": "steevej" }, { "code": " ChangeStreamIterable<Document> iterable = collection.watch();\n iterable.forEach(new Consumer<ChangeStreamDocument<Document>>() {\n @Override\n public void accept(ChangeStreamDocument<Document> doc) {\n // do something with doc\n }\n });\nChangeStreamIterable<Document> iterable1 = collection.watch(query1); // query1 can be id in a specific range\nChangeStreamIterable<Document> iterable2 = collection.watch(query2); \n", "text": "If the query is the same then all receive the message.yeah, so there’s no way to parallel the following codeIf we want to parallelize, we have to as follows but it’s very trickyI don’t understand this ideaIf you want to load balance you can easily do it internally or with any message queue framework.The main problem is how to parallelize if the query is the same? How to use message queue here? My pain point is that if the query is the same, we won’t be able to properly parallelize it without tricky. The bottleneck is if it’s the same query, the speed of reading from change stream is pretty bad.", "username": "Khanh_Nguyen" }, { "code": "ChangeStreamThread thread_query_1 = new ChangeStreamThread( query1 ).start() ;\nChangeStreamThread thread_query_2 = new ChangeStreamThread( query2 ).start() ;\nclass ChangeStreamThread extends Thread\n{\n Document m_Query ;\n\n ChangeStreamThread( Document query )\n {\n m_Query = query ;\n }\n public void run()\n {\n // What ever code you have to process the events matching m_Query from the stream\n }\n}\n", "text": "If we want to parallelize, we have to as follows but it’s very trickyHow is that tricky? The queries are different, they will receive different change stream events.Like you wrote:The main problem is how to parallelize if the query is the sameAs forHow to use message queue here?See how to use message queues - Google Search.If reading becomes the bottleneck you will have to distribute the load among different machines rather than multiple threads.", "username": "steevej" }, { "code": "ChangeStreamIterable<Document> iterable = collection.watch();\nChangeStreamIterable<Document> iterable1 = collection.watch(query1); // query1 can be id in a specific range\nChangeStreamIterable<Document> iterable2 = collection.watch(query2); \nChangeStreamIterable<Document> iterable = collection.watch();\n", "text": "Thank Steevej for your response.How is that tricky? The queries are different, they will receive different change stream events.The problem is it’s just a collection. Normally, we use a change stream for this collection as follows:The tricky is I split the change stream to multiple change streams by queries as I wrote (note that I just have 1 collection). So which queries should be chose?Of course, I can use 2 threads here with this 2 queries. 
The trick is I have to think about some queries to split a change stream to multiple change streams.So I want to use the same query (no query) as follows.How can I parallelize/use multi threads here?See how to use message queues - Google Search.My question is HOW to use it in MY USE CASE.If reading becomes the bottleneck you will have to distribute the load among different machines rather than multiple threads.There’re 2 things here:My client programMongoDBWhich thing did you suggest to scale? client or mongo or both? Note that it’s just a single collection.\nMaybe you suggests to distribute this collection to multiple mongo nodes (machines) and then will use different clients (on different machines), right?I haven’t had experiences about mongo. Sorry if there’re some stupid questions.", "username": "Khanh_Nguyen" }, { "code": " MongoCollection<SomeModel> events = test.getCollection(\"some-model\", SomeModel.class);\n ChangeStreamIterable<SomeModel> watch = events.watch();\n ExecutorService executorService = Executors.newSingleThreadExecutor(); // adjust the thread pool as preferred\n executorService.execute(() -> {\n watch.forEach(csd -> {\n System.out.printnl(\"changeStreamDocument: \" + csd); // this logic is called in each thread\n });\n });\n", "text": "I’m also experimenting with consuming change streams from multiple threads, and I guess it could be possible to do like this:", "username": "Matteo_Moci" }, { "code": "watch()", "text": "I’d like to correct what I wrote in the previous post:to consume the change stream from >1 threads, probably it’s best to use 1 thread to dispatch to a queue and then consume that queue with multiple consumers.The main issue when attempting to directly consume the change stream from 2 threads is that, since the watch() query would be the same, both will receive the same changes.", "username": "Matteo_Moci" } ]
Read change streams with multiple threads
2021-08-17T18:11:57.779Z
Read change streams with multiple threads
12,044
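A hedged mongosh sketch of the partitioning idea discussed above: open several change streams on the same collection, each with its own $match pipeline, so the partitions are disjoint and can be consumed by separate workers. The collection name "orders" and the partition field "fullDocument.region" are assumptions for illustration; this shape mainly suits insert and update events, since delete events carry no fullDocument.

// Worker A's stream
const streamA = db.orders.watch(
  [ { $match: { "fullDocument.region": { $in: [ "EU", "APAC" ] } } } ],
  { fullDocument: "updateLookup" }
)

// Worker B's stream
const streamB = db.orders.watch(
  [ { $match: { "fullDocument.region": { $in: [ "US" ] } } } ],
  { fullDocument: "updateLookup" }
)

Because the two $match predicates do not overlap, an event is delivered to exactly one stream; the alternative mentioned in the thread is a single reader that fans events out to a message queue consumed by many workers.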
null
[ "aggregation" ]
[ { "code": "", "text": "Issue running mongod on Windows 10:sort stage of aggregation (allow disk use: true) fails with following error\n_tmp/extsort-time-sorter.2: errno:2 No such file or directory when dbPath is a subdirectory of C:Users.\nAfter moving the database files to C:\\data for example, the same aggregation succeeds.\nIs there a way to fix this without changing the dbPath directory.\nI would be thankful any help.", "username": "Anton_Rohrwick_1" }, { "code": "", "text": "So your mongod comes up on c:/Users/xyz but not able to create _tmp/files?\nNot clear what you meant by moving datafiles to c:/data\nYou will be pointing dbpath to c:/data or c:users/xyz\nAs long as mongod has write permissions it should work whichever path you choose", "username": "Ramachandra_Tummala" } ]
Windows 10 mongod doesn't find files in _tmp when data directory is a subdirectory of C:\Users
2022-10-26T08:32:04.457Z
Windows 10 mongod doesn't find files in _tmp when data directory is a subdirectory of C:\Users
1,360
null
[]
[ { "code": "", "text": "Hi Team,Wanted to know if we can create separate index on the Analytic nodes than the other node ?\nAlso in case of only 1 Analytical node and that going down due to hardware failure, does the new node does initial sync , do we have any sla for same.Thanks,\nNitin", "username": "Nitin_Mukheja_1" }, { "code": "", "text": "Hi Nitin,did you manage to create an index on a secondary node in MongoDB Atlas? If so, could you, please, share how that can be done.Thank you\nLuigi", "username": "Freeda_Tech" }, { "code": "", "text": "This is a great question - why hasn’t it been answered?", "username": "Soren_Christian_Aarup_Soren_Christian_Aaru" }, { "code": "", "text": "", "username": "system" }, { "code": "", "text": "", "username": "wan" }, { "code": "", "text": "As of writing, asymmetric indexes are currently not possible in Atlas as they aren’t natively supported in the database yet and would require alternative approaches that come with compromises.Asymmetric hardware was the first focus for MongoDB’s broader real-time analytics strategy, as released during Mongo World this year.", "username": "Matt_Gaudie" } ]
Index on Analytics Node in Atlas
2020-12-14T12:12:25.289Z
Index on Analytics Node in Atlas
3,399
https://www.mongodb.com/…7c45168bc367.png
[]
[ { "code": "", "text": "\nScreenshot_2022-10-27_09-59-28801×349 81.6 KB\n", "username": "Abu_Shama" }, { "code": "/proc/cpuinfo", "text": "Welcome to the MongoDB Community @Abu_Shama !Please see this prior discussion for details on the issue you are experiencing:The likely reason for encountering an illegal instruction is that your CPU does not meet the x86_64 microarchitecture requirements for MongoDB 5.0.Can you please:Related discussion for someone trying to run MongoDB 5.0 on VirtualBox: Could not start MongoDB .If you can provide the same details requested above, that will help determine the best course of action for your environment.Regards,\nStennie", "username": "Stennie_X" } ]
Can't run MongoDB in local environment on Linux OS
2022-10-27T04:07:56.639Z
Can't run MongoDB in local environment on Linux OS
1,544
null
[ "dot-net" ]
[ { "code": " var document1 = new BsonDocument\n {\n { \"Key1\", \"test1\" },\n { \"Key2\", \"test2\"},\n { \"datetime\", DateTime.Now },\n };\n\n try\n {\n _mongoMsgDetailCollection4O.InsertOne(document1);\n }\n catch (MongoWriteException e)\n {\n Console.WriteLine(e.Message);\n }\n var document1 = new BsonDocument\n {\n { \"Key1\", \"test1\" },\n { \"Key2\", \"test2\"},\n { \"datetime\", DateTime.Now },\n {\"_index\", \"SineInMsgCustom\"}\n };\n", "text": "Having trouble performing basic inserts of a document into a collection. Some details first:I’m trying to insert a document into a collection that does not contain a fixed ordinal address. There are documents in the collection that have unique “_id” fields, but the collection itself doesn’t have a fixed address.I try to do the following:The error I get is:Command Failed. MONG0018E 14.45.38 The _index field must exist in the input document.How and where do I define this index in the input document? All the CRUD examples I’ve found don’t show having to define an index. Even if I define the index parameter in the document, I still get the same error.I am expecting that the document be inserted and that the ObjectId of the document can be retrieved to be recorded elsewhere in my application. Thanks for any help you can offer.", "username": "RKS" }, { "code": "_index", "text": "MONG0018E 14.45.38 The _index field must exist in the input document.Hi @RKS ,Similar to your earlier question, this error message appears to be coming from the API you are using.MongoDB 2.6 was first released in March, 2014 and has been end of life since Oct, 2016. I’m not sure if 2.6.5 actually corresponds to a MongoDB server version or a version for the API you are using, but I would encourage you to upgrade to a more modern solution :).The error message implies your document must have an _index field, but since adding that does not resolve the problem I would look into the API or code you are using that is leading to this error message.I think you may be using IBM’s z/TPF MongoDB support. If so, I suggest looking into community or support options from IBM as this appears to be an emulation of some subset of the MongoDB API. We can’t provide relevant advice without knowing anything about the backend implementation, but the examples you have asked about so far suggest some novel constraints or behaviour compared to a genuine MongoDB deployment.Regards,\nStennie", "username": "Stennie_X" } ]
.Net Core MongoDB Driver InsertOne Index Error
2022-10-26T21:53:39.313Z
.Net Core MongoDB Driver InsertOne Index Error
1,222
null
[ "queries" ]
[ { "code": "db.XXXX.find({\"lastSaved\":{$gte: new Date(new Date() - 7 * 60 * 60 * 24 * 1000)}}).sort({\"lastSaved\" : -1})", "text": "I am getting an error when I try to use a gte with a date, in a find. What am I doing wrong?I am trying to find records in the last 7 days.db.XXXX.find({\"lastSaved\":{$gte: new Date(new Date() - 7 * 60 * 60 * 24 * 1000)}}).sort({\"lastSaved\" : -1})syntax error near unexpected token `{“lastSaved”:{$gte:’", "username": "Arun_Lakhotia" }, { "code": "", "text": "This query is not generating an error message on my system.Please provide a screenshot of where and how you issue this query that shows the error message you are getting.", "username": "steevej" }, { "code": "", "text": "I just realized I had exited the mongo shell and was in bash, without realizing it.The query works.", "username": "Arun_Lakhotia" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error when using gte with a date from mongo shell
2022-10-26T23:51:27.510Z
Error when using gte with a date from mongo shell
1,508
null
[ "ops-manager" ]
[ { "code": "", "text": "I need to export all configured alert settings in Opsmanger( admin>alerts>global alert settings ) to a file. Where can i find these settings file on Opsmanager linux server.", "username": "Murali_patibandla1" }, { "code": "mmsdbconfigconfig.alertConfigsacGLOBALet", "text": "Hi @Murali_patibandla1, and welcome to the MongoDB Community forums! It looks like those are stored in the backing database for Ops Manager. Check database name mmsdbconfig and collection config.alertConfigs. You would filter on field ac for value GLOBAL. You’ll have to do some mapping of the et field code to the text in OpsManager, but you should be able to figure them all out.", "username": "Doug_Duncan" }, { "code": "", "text": "Hi Doug- Thank you for the information . I am able to get the alerts from this location", "username": "Murali_patibandla1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Opsmanager alert settings file location
2022-10-25T22:12:24.222Z
Opsmanager alert settings file location
2,092
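Since the global alert settings live in the Ops Manager backing database, they can be exported to a file with any driver. A hedged pymongo sketch based on the names given in the thread (database mmsdbconfig, collection config.alertConfigs, filter ac = GLOBAL); the backing-database URI is a placeholder for your own deployment.

    from pymongo import MongoClient
    from bson.json_util import dumps

    client = MongoClient("mongodb://appdb-host:27017")  # placeholder: Ops Manager application (backing) DB
    configs = list(client["mmsdbconfig"]["config.alertConfigs"].find({"ac": "GLOBAL"}))

    with open("global_alert_configs.json", "w") as f:
        f.write(dumps(configs, indent=2))  # json_util keeps BSON types readable in the export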
null
[]
[ { "code": "'table:sizeStorer' cannot be used until all column groups are created: Invalid argument\n[1666250062:785521][31884:0x1031c8580], wt, WT_SESSION.salvage: [WT_VERB_DEFAULT][ERROR]: int __wt_schema_get_table_uri(WT_SESSION_IMPL *, const char *, _Bool, uint32_t, WT_TABLE **), 85: 'table:sizeStorer' cannot be used until all column groups are created: Invalid argument\nwt: session.salvage: table:sizeStorer: Invalid argument\n", "text": "After a Kubernetes node crash our production instance got offline and when we tried to get it up with --repair flag it fails in the middle with the error:Our small business currently have a full backup from a month ago and only few of the collections have changed in that month, but we still want to get some way to recover, regenerate or recreate the sizeStorer file.\nThere’s any way or source I can try in order to recover the file? I have time as production is already running from the backup, also I already downloaded al files locally and tried to use Wiretiger (wt tool) savage() and verify() functions with the following result:", "username": "Fernando_De_Lucchi" }, { "code": "", "text": "This look to be the function that throws the error: https://github.com/wiredtiger/wiredtiger/blob/b3e3b2580ce14e7609e7ba269d3c7432f81339b7/src/schema/schema_list.c#L68", "username": "Fernando_De_Lucchi" }, { "code": "", "text": "One thing you can try is to find the sizeStorer.wt file and delete it, then try restarting mongod in repair mode. If the file is missing, the repair function should regenerate a new one. Note that all your collections will report “0 documents” for estimated count and collection size, but this can be corrected by running validate on all collections.", "username": "Eric_Milkie" }, { "code": "{\"t\":{\"$date\":\"2022-10-22T07:56:31.418+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22316, \"ctx\":\"initandlisten\",\"msg\":\"Repairing size cache\"}\n{\"t\":{\"$date\":\"2022-10-22T07:56:31.418+00:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":22,\"message\":\"[1666425391:418780][179:0xfffef4c30480], WT_SESSION.verify: __wt_schema_get_table_uri, 85: 'table:sizeStorer' cannot be used until all column groups are created: Invalid argument\"}}\n{\"t\":{\"$date\":\"2022-10-22T07:56:31.419+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22328, \"ctx\":\"initandlisten\",\"msg\":\"Verify failed. Running a salvage operation.\",\"attr\":{\"uri\":\"table:sizeStorer\"}}\n{\"t\":{\"$date\":\"2022-10-22T07:56:31.419+00:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":22,\"message\":\"[1666425391:419873][179:0xfffef4c30480], WT_SESSION.salvage: __wt_schema_get_table_uri, 85: 'table:sizeStorer' cannot be used until all column groups are created: Invalid argument\"}}\n{\"t\":{\"$date\":\"2022-10-22T07:56:31.420+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22351, \"ctx\":\"initandlisten\",\"msg\":\"Salvage failed. 
The file will be moved out of the way and a new ident will be created.\",\"attr\":{\"uri\":\"table:sizeStorer\",\"error\":{\"code\":2,\"codeName\":\"BadValue\",\"errmsg\":\"Salvage failed: 22: Invalid argument\"}}}\n{\"t\":{\"$date\":\"2022-10-22T07:56:31.421+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22353, \"ctx\":\"initandlisten\",\"msg\":\"Rebuilding ident\",\"attr\":{\"ident\":\"sizeStorer\"}}\n{\"t\":{\"$date\":\"2022-10-22T07:56:31.421+00:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error\",\"attr\":{\"error\":-31803,\"message\":\"[1666425391:421919][179:0xfffef4c30480], file:WiredTiger.wt, WT_CURSOR.search: __schema_create_collapse, 106: metadata information for source configuration \\\"colgroup:sizeStorer\\\" not found: WT_NOTFOUND: item not found\"}}\n{\"t\":{\"$date\":\"2022-10-22T07:56:31.422+00:00\"},\"s\":\"E\", \"c\":\"STORAGE\", \"id\":22357, \"ctx\":\"initandlisten\",\"msg\":\"Rebuilding ident failed: failed to get metadata\",\"attr\":{\"uri\":\"table:sizeStorer\",\"error\":{\"code\":4,\"codeName\":\"NoSuchKey\",\"errmsg\":\"Unable to find metadata for table:sizeStorer\"}}}\n{\"t\":{\"$date\":\"2022-10-22T07:56:31.423+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23095, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28577,\"error\":\"NoSuchKey: Unable to find metadata for table:sizeStorer\",\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":569}}\nMfile:sizeStorer.wtEtable:sizeStorer'table:sizeStorer' cannot be used until all column groups are created: Invalid argument”", "text": "This is the result. It looks like Wiredtiger.wt is the real problem, but just with the key of \"table:sizeStorer”!Note: When I explore the hex in WiredTiger.wt I see a lot of Mfile:sizeStorer.wt names, but only one Etable:sizeStorer\nNote2: Indeed, I get an error reporting metadata corruption but when it tries to rebuild it hits the error marked in the post ('table:sizeStorer' cannot be used until all column groups are created: Invalid argument”).", "username": "Fernando_De_Lucchi" }, { "code": "Etable:sizeStorertable:_mdb_catalog{\"t\":{\"$date\":\"2022-10-24T19:45:19.523+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":3913}}\n{\"t\":{\"$date\":\"2022-10-24T19:45:19.524+00:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2022-10-24T19:45:19.524+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22316, \"ctx\":\"initandlisten\",\"msg\":\"Repairing size cache\"}\n{\"t\":{\"$date\":\"2022-10-24T19:45:19.527+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22327, \"ctx\":\"initandlisten\",\"msg\":\"Verify succeeded. Not salvaging.\",\"attr\":{\"uri\":\"table:sizeStorer\"}}\n{\"t\":{\"$date\":\"2022-10-24T19:45:19.528+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22246, \"ctx\":\"initandlisten\",\"msg\":\"Repairing catalog metadata\"}\n{\"t\":{\"$date\":\"2022-10-24T19:45:19.528+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22350, \"ctx\":\"initandlisten\",\"msg\":\"Data file is missing. 
Attempting to drop and re-create the collection.\",\"attr\":{\"uri\":\"table:_mdb_catalog\"}}\n{\"t\":{\"$date\":\"2022-10-24T19:45:19.530+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22352, \"ctx\":\"initandlisten\",\"msg\":\"Moving data file to backup\",\"attr\":{\"file\":\"./_mdb_catalog.wt\",\"backup\":\"./_mdb_catalog.wt.corrupt\"}}\n{\"t\":{\"$date\":\"2022-10-24T19:45:19.531+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23095, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":50926,\"error\":\"FileRenameFailed: Attempted to rename file to an existing file: ./_mdb_catalog.wt.corrupt\",\"file\":\"src/mongo/db/storage/storage_engine_impl.cpp\",\"line\":122}}\n{\"t\":{\"$date\":\"2022-10-24T19:45:19.531+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23096, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n", "text": "I’ve removed the E in Etable:sizeStorer and the WiredTiger is ok with it now (it inits the salvage process).\nI’m running into problems with table:_mdb_catalog, will update soon if I get to rebuild sizeStorer.This is my current log:", "username": "Fernando_De_Lucchi" }, { "code": "", "text": "Just for the record:\nI copied the WiredTiger.wt from the backup, the _mdb_catalog and sizeStorer.wt from the corrupted data, removed all the collections where the schema (index, validation, etc) changed during the affected timespan.\nThe DB repaired itself and rebuild the metadata.I lost just one specific collection that has 635K documents, but only 34 documents were created in affected timespan.All relevant collections get recovered.", "username": "Fernando_De_Lucchi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
WiredTiger Error: cannot be used until all column groups are created: Invalid argument
2022-10-20T07:25:53.365Z
WiredTiger Error: cannot be used until all column groups are created: Invalid argument
1,706
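After a repair that regenerates sizeStorer, the cleanup suggested in that thread is to run validate on every collection so counts and sizes are recomputed. A sketch of that loop in pymongo; the URI is a placeholder and the system databases are skipped.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder for the repaired standalone
    for db_name in client.list_database_names():
        if db_name in ("admin", "local", "config"):
            continue
        db = client[db_name]
        for coll_name in db.list_collection_names():
            res = db.command("validate", coll_name)
            print(db_name, coll_name, "valid:", res.get("valid"))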
null
[ "dot-net" ]
[ { "code": "builder.RegisterAssemblyTypes(dataAccess)\n .AsImplementedInterfaces()\n .AsSelf();\nAutofac.Core.Activators.Reflection.NoConstructorsFoundException\n Message=No accessible constructors were found for the type 'QuickJournal_ProcessedByFody'.\n Source=Autofac\n StackTrace:\n at Autofac.Core.Activators.Reflection.DefaultConstructorFinder.GetDefaultPublicConstructors (System.Type type) [0x0002e] in /_/src/Autofac/Core/Activators/Reflection/DefaultConstructorFinder.cs:56 \n at Autofac.Core.Activators.Reflection.DefaultConstructorFinder.FindConstructors (System.Type targetType) [0x00000] in /_/src/Autofac/Core/Activators/Reflection/DefaultConstructorFinder.cs:47 \n at Autofac.Core.Activators.Reflection.ReflectionActivator.ConfigurePipeline (Autofac.Core.IComponentRegistryServices componentRegistryServices, Autofac.Core.Resolving.Pipeline.IResolvePipelineBuilder pipelineBuilder) [0x0001c] in /_/src/Autofac/Core/Activators/Reflection/ReflectionActivator.cs:86 \n at Autofac.Core.Registration.ComponentRegistration.BuildResolvePipeline (Autofac.Core.IComponentRegistryServices registryServices, Autofac.Core.Resolving.Pipeline.IResolvePipelineBuilder pipelineBuilder) [0x00047] in /_/src/Autofac/Core/Registration/ComponentRegistration.cs:278 \n at Autofac.Core.Registration.ComponentRegistration.BuildResolvePipeline (Autofac.Core.IComponentRegistryServices registryServices) [0x0002a] in /_/src/Autofac/Core/Registration/ComponentRegistration.cs:249 \n at Autofac.Core.Registration.ComponentRegistryBuilder.Build () [0x00024] in /_/src/Autofac/Core/Registration/ComponentRegistryBuilder.cs:92 \n at Autofac.ContainerBuilder.Build (Autofac.Builder.ContainerBuildOptions options) [0x00038] in /_/src/Autofac/ContainerBuilder.cs:166 \n at QuickJournal.App..ctor () [0x00036] in C:\\Realm_Tests\\QuickJournal\\QuickJournal\\QuickJournal\\App.xaml.cs:28 \n at QuickJournal.Droid.MainActivity.OnCreate (Android.OS.Bundle savedInstanceState) [0x00019] in C:\\Realm_Tests\\QuickJournal\\QuickJournal\\QuickJournal.Android\\MainActivity.cs:19 \n at Android.App.Activity.n_OnCreate_Landroid_os_Bundle_ (System.IntPtr jnienv, System.IntPtr native__this, System.IntPtr native_savedInstanceState) [0x00010] in /Users/builder/azdo/_work/1/s/xamarin-android/src/Mono.Android/obj/Release/monoandroid10/android-30/mcw/Android.App.Activity.cs:2713 \n at (wrapper dynamic-method) Android.Runtime.DynamicMethodNameCounter.7(intptr,intptr,intptr)\n", "text": "OK, I was getting an error on realm-tutorial-dotnet and then noticed a real simple example all in Xamarin with no backend and thought let’s see if the same error happens. The project works fine, but when I add AutoFac and set it up to do DI boom. This is what I use in a potential distributable app and think Real is the E-Ticket for the app. Anyway I am registering the entire assembly:This causes an error. I guess that means I need to independently register the Fody component? I see nothing about anyone having problems with this specific thing. Any ideas?", "username": "Michael_Gamble" }, { "code": "", "text": "Is there no one who has run into this? Is no one using AutoFac with Realm?", "username": "Michael_Gamble" }, { "code": "", "text": "Hello @Michael_Gamble ,Thank you for raising this query.This does appear like an autofac issue at a first glance. 
It seems to register an attribute in the DI container for perhaps no reason.Could you please share some code that can help us investigate more details?I look forward to your response.Cheers ", "username": "henna.s" }, { "code": "builder.RegisterAssemblyTypes(dataAccess)\n .Where(x => !x.Name.Contains(\"RealmModuleInitializer\"))\n .Where(x => !x.Name.Contains(\"ProcessedByFody\"))\n .AsImplementedInterfaces().AsSelf();\n", "text": "I had forgoten entirely about this post. Just recently I got back into using Realm/Atlas for my small Xamarin app. And thought I would share here the solution. Instead of using each container, class and registering it he uses GetExecutingAssembly, I got this from an Udemy class I took and thought it pretty slick but frankly didn’t really understand the process. For some reason a coup;e of Realm Classes kept throwing errors. Adding them to the builder seperately seemed to do the trick. Thought I’d share incase anyone else runs into this!", "username": "Michael_Gamble" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm & AutoFac error "ProcessedByFody"
2021-12-21T18:03:51.808Z
Realm &amp; AutoFac error &ldquo;ProcessedByFody&rdquo;
3,156
null
[ "node-js" ]
[ { "code": "", "text": "We have an API call that happen to fetch loads of data, binary data included. This in turn causes the fetching process to take long. What we want is to filter and fetch few light columns. To make the app to load faster. How can that be achieved on a React Typescript, Node", "username": "Trailbrazer_Lounge" }, { "code": "", "text": "Hi @Trailbrazer_Lounge ,If you are using a js driver or node driver you need to use projection in your queries and specify only the set of fields you need.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you Pavel. Let me give it a try.", "username": "Trailbrazer_Lounge" } ]
Selecting specific data from mongo DB documents
2022-10-26T12:56:18.493Z
Selecting specific data from mongo DB documents
936
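A projection as suggested in that thread keeps the heavy binary payload out of the response entirely. The poster's stack is Node/TypeScript; this is a pymongo sketch of the same idea with hypothetical field and collection names, and the Node driver accepts an equivalent projection option on find().

    from pymongo import MongoClient

    coll = MongoClient("mongodb://localhost:27017")["app"]["documents"]  # placeholder names

    # Include-only projection: only the light fields come back, the binary field is never sent
    light_docs = coll.find({}, {"title": 1, "createdAt": 1, "thumbnailUrl": 1})
    for doc in light_docs:
        print(doc)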
null
[ "queries", "dot-net" ]
[ { "code": " public class ContactInputVariable\n {\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string Id { get; set; }\n public string ContactId { get; set; }\n public string InputText { get; set; }\n public string VariableName { get; set; }\n public string FlowId { get; set; }\n public string FlowNodeId { get; set; }\n public DateTime Created { get; set; }\n }\n", "text": "Hey MongoDB Team.I’m having an issue on production environment about the followingAn error occurred while deserializing the FlowNodeId property of class ContactInputVariable: Thread static buffer is already in use.\nThe environment usually had a lot of activity and heavy load\nOne of the potential causes could be hitting allowed max pool size, but was my opinion/understanding and nothing more (I could be wrong).Class ModelProject is using Official .NET driver for MongoDB v. 2.15.1Have no idea why is this happening.\nCould you give me some insight about what might be causing the issue.Thanks in advance.", "username": "Rudik_Manucharyan" }, { "code": "", "text": "Hi, just faced with the same issue, did you find any solution? @Rudik_Manucharyan", "username": "Mike_Savvin" }, { "code": "", "text": "Hi @Mike_Savvin , unfortunately no, we’re still facing the issue.\nWhat version of MongoDB driver are you using? Is it the same version that I’ve been using?", "username": "Rudik_Manucharyan" }, { "code": "", "text": "@Rudik_Manucharyan now I’m using the latest one version 2.17.1 and still facing it", "username": "Mike_Savvin" }, { "code": "", "text": "We were facing the same issue. In our case, it’s not exactly MongoDB driver’s fault. It was due to an issue with our RabbitMQ connection management that led to this issue. Instead of reusing the RabbitMQ connection, it was creating new connections and on top of it, the connection was not being closed. Looks like this exhausted all the available threads in the thread pool and no threads were remaining to be used by the mongodb driver to deserialize the data. Once the RabbitMQ connection reuse logic is fixed, this issue disappeared automatically. You could be facing something similar. Check if you are using up all your threads and blocking them anywhere in your application. Hope this helps.", "username": "Prashanth" } ]
Frequently facing with the exception (Thread static buffer is already in use)
2022-06-15T12:51:35.794Z
Frequently facing with the exception (Thread static buffer is already in use)
2,949
https://www.mongodb.com/…4_2_1024x118.png
[ "java", "atlas-cluster" ]
[ { "code": "package xxxxxxxxxx.dao.dbutil;\nimport java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.SQLException;\nimport java.util.logging.Level;\n\npublic class DBUtil {\n\n\tprivate static Connection conn;\n\tprivate DBUtil() {}\n\tprivate static final String URL = \"jdbc:mongodb://cluster0.ktad2.mongodb.net:27017/test?ssl=true&authSource=admin\";\n\n\tpublic static Connection openConnection() throws SQLException {\n\n\t\tjava.util.Properties p = new java.util.Properties();\n\t\tp.setProperty(\"user\", \"xxxxxxx\");\n\t\tp.setProperty(\"password\", \"xxxxxxxxxxxx\");\n\t\tp.setProperty(\"database\", \"test\");\n\t\tp.setProperty(\"loglevel\", Level.SEVERE.getName());\n\n\t\t//Registers JDBC Driver\n\t\ttry {\n\t\t\tClass.forName(\"com.mongodb.jdbc.MongoDriver\");\n\t\t} catch (ClassNotFoundException e) {\n\t\t\tSystem.out.println(\"ERROR: Unable to load SQLServer JDBC Driver\");\n\t\t\te.printStackTrace();\n\t\t}\n\t\tSystem.out.println(\"MongoDB JDBC Driver has been registered...\");\n\n\n\t\t//Connects to mongodb database\n\n\t\tSystem.out.println(\"Trying to get a connection to the database...\");\n\t\ttry {\n\t\t\tConnection conn = DriverManager.getConnection(URL, p);\n\n\t\t} catch (SQLException e) {\n\t\t\tSystem.out.println(\"ERROR: Unable to establish a connection with the database!\");\n\t\t\te.printStackTrace();\n\t\t}\n\t\treturn conn;\n\t}\n\nMongoDB JDBC Driver has been registered...\nTrying to get a connection to the database...\n\tat gr.aueb.cf.teachersapp.controller.SearchTeacherController.doGet(SearchTeacherController.java:39)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:655)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:764)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n\tat org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n\tat gr.aueb.cf.teachersapp.controller.GRFilter.doFilter(GRFilter.java:25)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n\tat org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)\n\tat org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)\n\tat org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)\n\tat org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)\n\tat org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)\n\tat org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)\n\tat org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)\n\tat org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360)\n\tat org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)\n\tat org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)\n\tat org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:890)\n\tat org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1743)\n\tat 
org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)\n\tat org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)\n\tat org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)\n\tat org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\n[2022-10-25 22:54:41.809] [SEVERE] [c-2] com.mongodb.jdbc.MongoConnection: Error in MongoConnection.testConnection(..) \njava.util.concurrent.TimeoutException\n\tat java.base/java.util.concurrent.FutureTask.get(FutureTask.java:204)\n\tat com.mongodb.jdbc.MongoConnection.testConnection(MongoConnection.java:480)\n\tat com.mongodb.jdbc.MongoDriver.connect(MongoDriver.java:169)\n\tat java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)\n\tat java.sql/java.sql.DriverManager.getConnection(DriverManager.java:189)\n\tat gr.aueb.cf.teachersapp.dao.dbutil.DBUtil.openConnection(DBUtil.java:35)\n\tat gr.aueb.cf.teachersapp.dao.TeacherDAOImpl.getTeachersByLastnane(TeacherDAOImpl.java:110)\n\tat gr.aueb.cf.teachersapp.service.TeacherServiceImpl.getTeacherByLastname(TeacherServiceImpl.java:86)\n\tat gr.aueb.cf.teachersapp.controller.SearchTeacherController.doGet(SearchTeacherController.java:39)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:655)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:764)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n\tat org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n\tat gr.aueb.cf.teachersapp.controller.GRFilter.doFilter(GRFilter.java:25)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n\tat org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:197)\n\tat org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)\n\tat org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:541)\n\tat org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)\n\tat org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)\n\tat org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)\n\tat org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)\n\tat org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360)\n\tat org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)\n\tat org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)\n\tat org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:890)\n\tat org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1743)\n\tat org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)\n\tat org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)\n\tat org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)\n\tat 
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\n \njava.sql.SQLTimeoutException: Timeout. Can't connect.\n\tat com.mongodb.jdbc.MongoDriver.connect(MongoDriver.java:171)\n\tat java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)\n\tat java.sql/java.sql.DriverManager.getConnection(DriverManager.java:189)\n\tat gr.aueb.cf.teachersapp.dao.dbutil.DBUtil.openConnection(DBUtil.java:35)\n\tat gr.aueb.cf.teachersapp.dao.TeacherDAOImpl.getTeachersByLastnane(TeacherDAOImpl.java:110)\n\tat gr.aueb.cf.teachersapp.service.TeacherServiceImpl.getTeacherByLastname(TeacherServiceImpl.java:86)\n\tat gr.aueb.cf.teachersapp.controller.SearchTeacherController.doGet(SearchTeacherController.java:39)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:655)\n\tat javax.servlet.http.HttpServlet.service(HttpServlet.java:764)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:227)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n\tat org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53)\n\tat org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:189)\n\tat org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:162)\n\tat gr.aueb.cf.teachersapp.controller.GRFilter.doFilter(GRFilter.java:25)\n[2022-10-25 22:54:41.809] [SEVERE] [c-2] [stmt-1] com.mongodb.jdbc.MongoSQLStatement: Error in MongoSQLStatement.executeQuery(..) \ncom.mongodb.MongoInterruptedException: Interrupted while waiting to connect\n\tat com.mongodb.internal.connection.BaseCluster.getDescription(BaseCluster.java:211)\n\tat com.mongodb.internal.connection.SingleServerCluster.getDescription(SingleServerCluster.java:41)\n\tat com.mongodb.client.internal.MongoClientDelegate.getConnectedClusterDescription(MongoClientDelegate.java:155)\n\tat com.mongodb.client.internal.MongoClientDelegate.createClientSession(MongoClientDelegate.java:105)\n\tat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.getClientSession(MongoClientDelegate.java:287)\n\tat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:191)\n\tat com.mongodb.client.internal.MongoDatabaseImpl.executeCommand(MongoDatabaseImpl.java:194)\n\tat com.mongodb.client.internal.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:163)\n\tat com.mongodb.client.internal.MongoDatabaseImpl.runCommand(MongoDatabaseImpl.java:158)\n\tat com.mongodb.jdbc.MongoSQLStatement.executeQuery(MongoSQLStatement.java:60)\n\tat com.mongodb.jdbc.MongoStatement.execute(MongoStatement.java:175)\n\tat com.mongodb.jdbc.MongoConnection$ConnValidation.call(MongoConnection.java:448)\nERROR: Unable to establish a connection with the database!\n\tat com.mongodb.jdbc.MongoConnection$ConnValidation.call(MongoConnection.java:1)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\nCaused by: java.lang.InterruptedException\n\tat java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1081)\n\tat 
java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1369)\n\tat java.base/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:278)\n", "text": "Hello everyone,I am a junior developer and just starting out so please bare with me.I am working on an app using Maven and even though I managed to have a database locally using JDBC, I can’t connect to my mongo database online.This is my codeI have tried everything I can think of and still getting a timeout error\nScreenshot_20221025_1057411048×121 18.8 KB\nas well asany help would be greatly appreciated!!", "username": "akko" }, { "code": "cluster0.ktad2.mongodb.net", "text": "Contrary to localhost the namecluster0.ktad2.mongodb.netis a cluster name. The connection method is different and you have to mongodb+srv:// as the connection method rather than simply mongodb://.See https://www.mongodb.com/docs/manual/reference/connection-string/ for more details on connection string formats.", "username": "steevej" } ]
Help! can't connect to mongodb using JDBC in maven project
2022-10-25T20:01:00.636Z
Help! can&rsquo;t connect to mongodb using JDBC in maven project
1,652
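The key point in that answer is the scheme: an Atlas cluster address like cluster0.ktad2.mongodb.net is resolved via SRV records, so the URI needs mongodb+srv:// rather than plain mongodb://. As a language-neutral illustration of the SRV form, here is a pymongo sketch (the thread itself uses the MongoDB JDBC driver); credentials are placeholders and the dnspython package is required for +srv lookups.

    from pymongo import MongoClient

    uri = "mongodb+srv://user:[email protected]/test?retryWrites=true&w=majority"
    client = MongoClient(uri)            # placeholder credentials
    print(client.admin.command("ping"))  # quick round-trip check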
null
[ "python", "rust" ]
[ { "code": "", "text": "Hi there,I’m Roman - a software developer from Berlin.Some of you know me as the author of Beanie - Python ODM. Working on it I had the pleasure to meet many nice people using, writing about, and developing MongoDB. And today I’m proud to announce that I’m joining this great community as the leader of MongoDB User Group Berlin!This is the start of a fun journey, I think. I plan to make events, where people would share their knowledge, participate in discussions and, most important, meet each other. The first one will be announced very soon A few words about me:Here is the MUG Berlin group page. Please, join!You can find me on TwitterThank you", "username": "Roman_Right" }, { "code": "", "text": "Hi @Roman_Right,G’day from Sydney and greetings from another LEGO fan Thank you for your continued contributions to open source and the MongoDB community Excellent to see you stepping up as a leader for the Berlin MongoDB User Group – I’m looking forward to hearing about some amazing events!Cheers,\nStennie", "username": "Stennie_X" } ]
Hallo from Berlin
2022-10-26T12:00:09.144Z
Hallo from Berlin
1,885
null
[ "queries", "replication", "python", "containers" ]
[ { "code": "FROM ubuntu\nRUN apt-get update\nRUN apt-get install curl pip micro netcat -y\nCOPY requirements.txt app/requirements.txt\n\nWORKDIR /app\n\nRUN pip install --upgrade pip\nRUN pip install -r requirements.txt\n\nCOPY . /app\nfastapi==0.85.1\nuvicorn==0.19.0\nmotor==3.1.0dev\nblack==22.10.0\npython-dotenv==0.21.0 \norjson==3.8.0\nrequests==2.28.1\nimport asyncio\nimport requests\nfrom motor.motor_asyncio import (\n AsyncIOMotorClient,\n AsyncIOMotorCollection,\n)\n\n \nfrom bson.codec_options import DEFAULT_CODEC_OPTIONS\n\ndef get_collection(\n \n) -> AsyncIOMotorCollection:\n # SET OPTION TO RETURN DATES AS TIMEZONE AWARE\n options = DEFAULT_CODEC_OPTIONS.with_options(tz_aware=True)\n\n\n motor_client = AsyncIOMotorClient(\n \"mongodb://usr:[email protected]:27020/?authSource=admin&replicaSet=Replica1\"\n )\n database = motor_client[\"DBNAME\"]\n # APPLY OPTIONS TO RETURN DATETIMES AS TIMEZONE AWARE\n return database.get_collection(collName, codec_options=options)\n\n\nasync def main():\n # client = ClientMotor().client\n coll = get_collection()\n response=requests.get('http://host.docker.internal')\n print(response.text)\n async for doc in coll.find({}).limit(10):\n print(doc)\n\n\nif __name__ == \"__main__\":\n try:\n asyncio.run(\n main()\n )\n except KeyboardInterrupt:\n pass\ndocker build -t test_motor . \ndocker run -it --add-host host.docker.internal:host-gateway test_motor bash\npython3 mongodb.py.\nnc -zvw10 host.docker.internal 27020\n", "text": "On Linux Ubuntu 22.04 I have created a docker container with a simple python script that has to connect to a MongoDB instance working on the host and listening on localhost.\nTo have a reproducible example consider what follows.\nThe Dockerfile is:The requirements file contains:While the python script (inside app/db directory) is mongodb.py and contains:On host, I have installed and started NGINX that forwards stream to MongoDB from PORT 27020 to default port 27017 (always localhost).\nThe previous code works fine if not within a docker container (specifying localhost instead of the host.docker.internal) and I can connect to MongoDB from Compose by specifying PORT 27020 in the connection string.\nIf I build the container withand thereafter run it withRunningFrom within /app/db, I correctly get a response from NGINX but I’m not able to connect to MongoDB.\nIf I execute from within the docker containerthe host/port is reachable.\nWhat’s the problem? I don’t want to run the container with host=net, the only working solution so far!", "username": "Sergio_Ferlito1" }, { "code": "motor_client = AsyncIOMotorClient(\n username=\"user\",\n password=\"secret\",\n connectTimeoutMS=5000,\n socketTimeoutMS=5000,\n authSource=\"admin\",\n host=\"host.docker.internal\",\n port=27017,\n directConnection=True\n)\n", "text": "Finally solved as following.\nI had to add directConnection=True in the connection string to MongoDB and I had to set bindIp: 0.0.0.0 in MongoDB configuration file (in /etc/mongod.conf). 
To be clearer I changed motor_client as follows:Actually I don’t know why I had to set directConnection=True but this, in my configuration, solved the problem.", "username": "Sergio_Ferlito1" }, { "code": "replicaSetdirectConnection=True", "text": "Hi @Sergio_Ferlito1,Actually I don’t know why I had to set directConnection=True but this, in my configuration, solved the problem.It appears you are connecting to a replica set per the connection string:replicaSet=Replica1If so, the expected behaviour as per the MongoDB Server Discovery and Monitoring (SDAM) specification is to discover the replica set configuration and connect using configured the hostnames and ports.If you remove the replicaSet parameter, the default behaviour in Motor 3.0+ is to automatically try to discover replica sets.The directConnection=True option is used to connect to a single member of a deployment. This connection mode does not support failover or replica set discovery/monitoring, but will work with a configuration where you are forwarding via hostnames and/or ports that do not match the replica set configuration.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to reach MongoDB listening on localhost (on host, Ubuntu 22.04) from within docker container
2022-10-23T06:53:05.528Z
Unable to reach MongoDB listening on localhost (on host, Ubuntu 22.04) from within docker container
4,319
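The two connection modes discussed in that thread look like this with Motor. The first form goes straight through the NGINX-forwarded port and skips replica set discovery; the second relies on discovery and only works when the member hostnames and ports in the replica set configuration are reachable from the container. Credentials and hostnames are placeholders taken from the thread.

    from motor.motor_asyncio import AsyncIOMotorClient

    # Single-member connection through the forwarded port (no discovery, no failover)
    direct_client = AsyncIOMotorClient(
        "mongodb://usr:[email protected]:27020/?authSource=admin&directConnection=true"
    )

    # Replica set mode: the driver discovers members from the set configuration
    rs_client = AsyncIOMotorClient(
        "mongodb://usr:pass@mongo-host:27017/?authSource=admin&replicaSet=Replica1"
    )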
null
[ "queries" ]
[ { "code": "{\n projectId:'abc',\n goals:[{name:'a',age:5},{name:'b',age:9}]\n}\n{\n _id:'id'\n name:'a',\n age:5\n},\n\n{\n _id:'id'\n name:'b',\n age:9\n}\n", "text": "I have a collection which has data likeIs it possible to make a query that will convert the array of objects goals\nto documents like", "username": "AbdulRahman_Riyaz" }, { "code": "db.collection.aggregate([{$unwind : { path : \"$goals\"}}},\n{ $replaceRoot: { newRoot: \"$goals\" } },\n{ $out : \"newCollection\"}\n", "text": "Hi @AbdulRahman_Riyaz ,Its possible to use unwind and regenerate an _id for every document:This code have not been tested but thats the idea.Ty\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks, Pavel for taking the time to answer.", "username": "AbdulRahman_Riyaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Convert an array of objects to individual documents and save in the db
2022-10-25T09:06:28.160Z
Convert an array of objects to individual documents and save in the db
2,299
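The pipeline sketched in that answer, written out in pymongo. Connection details and collection names are hypothetical; $replaceRoot promotes each goal object to a top-level document without an _id, so the documents written by $out receive freshly generated ObjectIds, which matches the "regenerate an _id for every document" intent.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # placeholder
    projects = client["mydb"]["projects"]              # hypothetical source collection

    projects.aggregate([
        {"$unwind": {"path": "$goals"}},
        {"$replaceRoot": {"newRoot": "$goals"}},  # each goal object becomes a top-level document
        {"$out": "goals"},                        # no _id on the way in, so new ObjectIds are assigned
    ])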
null
[]
[ { "code": "mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: failed (Result: exit-code) since Thu 2022-10-20 09:57:50 CDT; 1min 29s ago\n Docs: https://docs.mongodb.org/manual\n Process: 17654 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=2)\n Process: 17652 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 17650 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 17648 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\nsystemd[1]: Failed to start MongoDB Database Server.`\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongo\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\nbindIp: 127.0.0.1\n# Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.\n\n\nsecurity:\n authorization: \"enabled\"\n\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n\n#snmp:\n", "text": "could you help me with this error I just installed mongodb and I’m a beginnerthis is the error:this is my config file:", "username": "Angel_Plascencia" }, { "code": "ps -aef | grep [m]ongo\nss -tlnp\nls -al /var/lib/mongo/\nls -al /var/log/mongodb/\n", "text": "Please share the content of the log file and configuration file.Also share the output of:", "username": "steevej" }, { "code": "", "text": "i have the same problemand this result of all command @steevej\nps -aef | grep [m]ongo\nreturn emptyss -tlnp\nreturn my process node, mysql, and nginx in progress but nothing mongodls -al /var/lib/mongo/\nreturn empty directoryls -al /var/log/mongodb/drwxr-xr-x 2 mongodb mongodb 4096 Oct 14 21:05 .\ndrwxrwxr-x 10 root syslog 4096 Oct 23 00:00 …\n-rw------- 1 mongodb mongodb 7069523 Oct 26 13:45 mongod.log", "username": "core_irie_wilf" } ]
Failed start MongoDB Database Server
2022-10-20T15:26:14.859Z
Failed start MongoDB Database Server
2,980
null
[ "aggregation" ]
[ { "code": "", "text": "I’m using Web APIs using the Data API feature provided by MongoDB. I’m now trying to make the roles and filters work, but when using “Apply When” or filter by documents, I’m unable to get the Expansion “%%request” to work.What I’m trying to do, is add filters based on request headers or request actions, as follows:\n“%%request.action”: “aggregate”\nalso tried:\n“%%request.requestHeaders.customHeader”: “CustomValue”each of them was attempted separately, but none of them seemed to work.Same thing happened when trying to filter documents by having a document attribute equal a request header.This is crucial to my app, so can someone please help me with this?Thanks in advance.", "username": "Diaa_tamimi" }, { "code": "", "text": "Hi @Diaa_tamimi - do you mind sending me the structure of the request you send to the Data API (you can redact the sensitive data)And just confirming the changes were deployed once you defined this rule configuration.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Structure:\nURL: https://data.mongodb-api.com/app/--------------/endpoint/data/beta/action/aggregate\nHeaders: content-type, api-key, and Code (which is a custom header i’m using to pass a request value)In rules and filters, under “apply when”, i’m using the below statement:\n“%%request.action”: “aggregate”\nBody is as per the Data API documentation, and the request is working fine, i’m only having a problem with the rules “apply when” sectionit doesn’t seem to apply, but when i try something like “%%request.action”: {“$regex”:“aggregate”} it works. i don’t want to have to use $regex because MongoDB then ‘automatically’ changes it to pattern/option and when that happens it stops working as well.So then i tried using the expression:\n“%%request.requestHeaders.Code”: “Value”\nalso doesn’t work", "username": "Diaa_tamimi" }, { "code": "data/v1/aggregate", "text": "Hi @diaa_tamimiGot it, the data api is a little bit odd in the sense that the name for the action is actually data/v1/aggregate I will make sure to follow up with docs about this to explicitly call this outFor the headers, I’m not sure what’s going on but let me follow up on that in a little while", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Great. Thanks for your time and looking forward to your feedback", "username": "Diaa_tamimi" }, { "code": "", "text": "I’ve just checked, and data/v1/aggregate is working.\nNow I just need to be able to use the headers in my rules/filters and need to know how to call their value.Again, thanks for your time", "username": "Diaa_tamimi" }, { "code": "", "text": "Dear @Sumedha_Mehta1 ,Any updates? I really need to be able to call header values in Rules and Filters.Thanks", "username": "Diaa_tamimi" }, { "code": "\n{\n \"%%request.requestHeaders.Content-Type.0\": \"application/json\"\n}\n\n{\n \"application/json\": { \"$in\": \"%%request.requestHeaders.Content-Type\" }\n}\n", "text": "Hi - thanks for waiting. All of the headers actually consist of array values, so you would actually have to check if your header value is in the array. If the length of the array is 1, you should be able to do something like this:or using $in will work too", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Thanks for the information. I’ll test it as soon as possible and let you know", "username": "Diaa_tamimi" } ]
Having Troubles with Roles and Filters
2022-10-14T15:06:19.417Z
Having Troubles with Roles and Filters
2,510
null
[ "aggregation", "queries" ]
[ { "code": "db.collection.find({A: true}).count()\ndb.collection.find({B: true}).count()\ndb.collection.find({C: true}).count()\n db.collection.aggregate( {\"$facet\":{ \"countA\":[{\"$match\": {A: true}},{\"$count\":\"count\"}], \"countB\":[{\"$match\": {B: true}},{\"$count\":\"count\"}], \"countC\":[{\"$match\": {C: true}},{\"$count\":\"count\"}] }})\n", "text": "Hi, I am newbie to MongoDB and I am currently facing a trouble in query speed among multiple find().count() and aggregation({\"$facet\": {“countAmount”: [{\"$match\"}, {$count}]}}) and I would like to get some help.I discovered that with multiple query that will millions of recordsis way more faster than just merging the query into one with aggregationHence I would like to know what will be the best practice to do such query. Thank you in advance!!! ", "username": "Aaron_Lee1" }, { "code": "db.collection.createIndex({A:1},{ partialFilterExpression: { A : true }})\n...\ndb.collection.find({A: true},{A:1}).count()\ndb.collection.find({B: true},{B:1}).count()\ndb.collection.find({C: true},{C:1}).count()\n", "text": "Hi @Aaron_Lee1 ,It is possible that running the queries using a find will be faster then facet as it might parse and use indexing better.Do you have separate indexes on each of the fileds?What I would suggest is to have 3 partial expression index on every field name:, and in your queries filter and only project this field to take advantage of covered query scan :", "username": "Pavel_Duchovny" }, { "code": "", "text": "Nice to meet you @Pavel_Duchovny,Thank you for your suggestion of creating an index of the field. However, would you mind tell me more about facet might parse?I looked for using facet initially because I hope I can get the 3 count queries are using the same set of data in the secondary mongoDB (as the queries are merged into one and I can sure that there will only 1 access time rather than split into 3 access). There will still some new insert to the primary mongoDB during the queries. However the query of using facet will make me access timeout if there are plenty of data.What will be the best practice in handling this case?P.S. I am using java spring with mongotemplate to do the queryThank you once again your kind reply and suggestion! 
", "username": "Aaron_Lee1" }, { "code": "db.collection.count({A :true})\nexecutionStats: {\n executionSuccess: true,\n nReturned: 0,\n executionTimeMillis: 0,\n totalKeysExamined: 1001,\n totalDocsExamined: 0,\n executionStages: {\n stage: 'COUNT',\n nReturned: 0,\n executionTimeMillisEstimate: 0,\n works: 1001,\n advanced: 0,\n needTime: 1000,\n needYield: 0,\n saveState: 1,\n restoreState: 1,\n isEOF: 1,\n nCounted: 1000,\n nSkipped: 0,\n inputStage: {\n stage: 'COUNT_SCAN',\n nReturned: 1000,\n{A : 1}\n \"nReturned\": 1600,\n \"executionTimeMillis\": 4,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 1600,\n \"executionStages\": {\n \"stage\": \"PROJECTION_SIMPLE\",\n \"nReturned\": 1600,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 1602,\n \"advanced\": 1600,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 2,\n \"restoreState\": 2,\n \"isEOF\": 1,\n \"transformBy\": {\n \"A\": 1,\n \"B\": 1,\n \"C\": 1,\n \"_id\": 0\n },\n \"inputStage\": {\n \"stage\": \"COLLSCAN\",\n \"nReturned\": 1600,\n \"executionTimeMillisEstimate\": 0,\n \"works\": 1602,\n \"advanced\": 1600,\n \"needTime\": 1,\n \"needYield\": 0,\n \"saveState\": 2,\n \"restoreState\": 2,\n \"isEOF\": 1,\n \"direction\": \"forward\",\n \"docsExamined\": 1600\n }\n },\n \"allPlansExecution\": []\n }\n },\n \"nReturned\": 1600,\n \"executionTimeMillisEstimate\": 2\n },\n {\n \"$facet\": {\n \"countA\": [\n {\n \"$internalFacetTeeConsumer\": {},\n \"nReturned\": 1600,\n \"executionTimeMillisEstimate\": 2\n },\n {\n \"$match\": {\n \"A\": {\n \"$eq\": true\n }\n },\n \"nReturned\": 1000,\n \"executionTimeMillisEstimate\": 2\n },\n {\n \"$group\": {\n \"_id\": {\n \"$const\": null\n },\n \"count\": {\n \"$sum\": {\n \"$const\": 1\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"count\": 80\n },\n \"totalOutputDataSizeBytes\": 237,\n \"usedDisk\": false,\n \"spills\": 0,\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 2\n },\n {\n \"$project\": {\n \"count\": true,\n \"_id\": false\n },\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 2\n }\n ],\n \"countB\": [\n {\n \"$internalFacetTeeConsumer\": {},\n \"nReturned\": 1600,\n \"executionTimeMillisEstimate\": 0\n },\n {\n \"$match\": {\n \"B\": {\n \"$eq\": true\n }\n },\n \"nReturned\": 500,\n \"executionTimeMillisEstimate\": 0\n },\n {\n \"$group\": {\n \"_id\": {\n \"$const\": null\n },\n \"count\": {\n \"$sum\": {\n \"$const\": 1\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"count\": 80\n },\n \"totalOutputDataSizeBytes\": 237,\n \"usedDisk\": false,\n \"spills\": 0,\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0\n },\n {\n \"$project\": {\n \"count\": true,\n \"_id\": false\n },\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0\n }\n ],\n \"countC\": [\n {\n \"$internalFacetTeeConsumer\": {},\n \"nReturned\": 1600,\n \"executionTimeMillisEstimate\": 0\n },\n {\n \"$match\": {\n \"C\": {\n \"$eq\": true\n }\n },\n \"nReturned\": 100,\n \"executionTimeMillisEstimate\": 0\n },\n {\n \"$group\": {\n \"_id\": {\n \"$const\": null\n },\n \"count\": {\n \"$sum\": {\n \"$const\": 1\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"count\": 80\n },\n \"totalOutputDataSizeBytes\": 237,\n \"usedDisk\": false,\n \"spills\": 0,\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0\n },\n {\n \"$project\": {\n \"count\": true,\n \"_id\": false\n },\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 0\n }\n ]\n },\n \"nReturned\": 1,\n \"executionTimeMillisEstimate\": 2\n }\n ],\n \"serverInfo\": {\n \"host\": 
\"atlas-rg2tcy-shard-00-02.uvwhr.mongodb.net\",\n \"port\": 27017,\n \"version\": \"6.1.0\",\n \"gitVersion\": \"0ca11aca38c75d3c8fb5bac5bd103b950718a896\"\n \ndb.collection.aggregate([{\n $match: {\n A: true\n }\n}, {\n $count: 'countA'\n}, {\n $unionWith: {\n coll: 'collection',\n pipeline: [\n {\n $match: {\n B: true\n }\n },\n {\n $count: 'countB'\n }\n ]\n }\n}, {\n $unionWith: {\n coll: 'collection',\n pipeline: [\n {\n $match: {\n C: true\n }\n },\n {\n $count: 'countC'\n }\n ]\n }\n}])\n", "text": "@Aaron_Lee1 ,Look at the explain plans of each query:When running the single count query which I actually prefare to use the count operation:Will return just an index scan on {A : 1} with no need to scan documents in my case I had 1000 A : true in the the collection.When using facet look at the explain plan:It actually does a collection scan for the full collection. And need to read the documents which are much larger than the index.I think that if you need this one in one query you are way better using unionWith rather than facet:Ty\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_DuchovnyThank you for your detail explanation.I was just following the suggestion in the comments in the following topicHowever, I would like to know is it better to merge the multiple count queries into one with using unionWIth in my case, which is using replicaSet with a Primary and Secondary database. Such that to make sure the multiple count is using the same set of documents at the point. Or are there any methods that I can secure the multiple queries are getting the data in the same specific time point, since there will some overheads when I connect to the db with mongoTemplate? What will be the common or best practice for facing this kind of problems.Please let me know if there are any unclear.Thank you so much!\nAaron", "username": "Aaron_Lee1" }, { "code": "", "text": "I don’t think there is a defenitive answer.You need to test the alternatives. I suspect that the $facet is the least preferable way .Indexes should be fitting your memory set and therefore scanning indexes for counts should be potentially most efficient.Please test the methods and let us know what worked.Pavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Get it! Thank you so much for your help and advice.Have a nice day,\nAaron", "username": "Aaron_Lee1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query Speed among multiple find then count vs aggregation
2022-10-25T03:19:17.846Z
Query Speed among multiple find then count vs aggregation
3,507
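A sketch of the separate-count approach recommended in that thread, in pymongo for illustration (the poster uses Spring's MongoTemplate): one partial index per flag so each count can be answered from the index, then three count_documents calls. Field and collection names are placeholders.

    from pymongo import ASCENDING, MongoClient

    coll = MongoClient("mongodb://localhost:27017")["mydb"]["events"]  # placeholder names

    for field in ("A", "B", "C"):
        coll.create_index([(field, ASCENDING)], partialFilterExpression={field: True})

    counts = {field: coll.count_documents({field: True}) for field in ("A", "B", "C")}
    print(counts)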
null
[ "atlas-search" ]
[ { "code": "", "text": "As the title says, how would I achieve this? Say for example, I have this backend api which will eventually query the atlas freetier cluster when it’s deployed to production. This query uses $search to be able to execute full text search on “products” collection. My TDD environment is set up with mongodb in-memory server. This in-memory server doesn’t recognize $search operator.I’m currently mocking/stubbing the result from the query that I executed manually from MongodbCompass. The process is like, execute the query and check the result. If it’s the one I was expecting then, copy and paste the query to the backend code and use the result as stub.\nIt doesn’'t seem to be too bad for now but what if db configuration changes, then I’ll always have to replace the stub with latest result from actual db query. I don’t feel like stubbing the result is a good idea.\nUnit testing to compare the actual result with what I’m expecting while I already know the result? hmm it’s almost like doing nothing.The only way I came up with is just connect to dev db on atlas. I was wondering if there’s other way or right way to achieve what I want without using actual cloud db. I prefer in-memory server cause obviously it’s fast and light.", "username": "Polar_swimming" }, { "code": "mongodb-memory-server$search$search", "text": "Hi @Polar_swimming,My TDD environment is set up with mongodb in-memory server. This in-memory server doesn’t recognize $search operator.Could you clarify what you mean by in-memory server? I presume you’re utilsing something like mongodb-memory-server but please correct me if i am wrong here. If this is the case, then it is expected that the $search operator is not recognizable in that instance as it is only available on Atlas.The only way I came up with is just connect to dev db on atlas. I was wondering if there’s other way or right way to achieve what I want without using actual cloud db.Unfortunately since the $search operator is Atlas-specific, there is currently no method that I’m aware of that can replicate this functionality locally.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to unit test operators available only on atlas?
2022-10-19T01:22:57.263Z
How to unit test operators available only on atlas?
1,665
null
[ "queries", "indexes" ]
[ { "code": "db.col_name.createIndex({\"A\": 1, \"B\", 1})\ndb.col_name.find({A: {$in: [1, 2, 3]} B: 50})\n", "text": "Since three values are selected for the prefix index, would that invalid the compound index?If not, how will it solve this problem?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "", "text": "Hi @Big_Cat_Public_Safety_Act ,Please have a look at the explain results as suggested in Skipping fields when querying collections with compound indexes - #3 by StennieExplaining queries will show the information you’re looking for, including the effects of adding additional or different indexes. That will also help you get faster answers as you explore indexing.I also recommend watching Tips and Tricks for Query Performance: Let Us .explain() Them - YouTube from the MongoDB.live 2020 conference.For a more comprehensive (and free) introduction I recommend taking some of the free online courses at MongoDB University. M001: MongoDB Basics is the usual starting point.Regards,\nStennie", "username": "Stennie_X" } ]
Compound index with queries that filter for multiple values of a prefixed field
2022-10-26T02:48:58.567Z
Compound index with queries that filter for multiple values of a prefixed field
1,625
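Following the advice in that thread, the quickest way to see how {A: {$in: [...]}, B: ...} interacts with the {A: 1, B: 1} index is to ask the planner directly. A pymongo sketch with placeholder names; in the usual case the compound index remains usable, with one set of index bounds per $in value.

    from pymongo import MongoClient

    coll = MongoClient("mongodb://localhost:27017")["mydb"]["col_name"]  # placeholder connection
    coll.create_index([("A", 1), ("B", 1)])

    plan = coll.find({"A": {"$in": [1, 2, 3]}, "B": 50}).explain()
    print(plan["queryPlanner"]["winningPlan"])  # expect an IXSCAN over the {A: 1, B: 1} index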
https://www.mongodb.com/…4_2_460x1024.png
[ "compass", "android" ]
[ { "code": "", "text": "\nScreenshot_20221014-163034720×1600 60.9 KB\nIn my personal experience, as a developer at a small electronics/IoT company:While I was traveling and without a notebook at hand, I needed to verify some settings in the platform, but I realized that there is no MongoDB compass for Android or iPhoneWhen I tried to access the browser of the phone, I saw that everything was unnavigable, I could not read anything at allThe least one expects is to be able to add or delete users in the project, or a friendly interface to view alertsIs it in the plans for MongoDB Compass to build a mobile app or improve the website for mobile screens? Cheers", "username": "Hugo_Jerez" }, { "code": "", "text": "Hi @Hugo_Jerez - Welcome to the community.Thanks for providing the screenshot of what you are seeing on your mobile.Is it in the plans for MongoDB Compass to build a mobile app or improve the website for mobile screens? MongoDB Compass is a seperate product from what you have provided in your screenshot. The Atlas Data Explorer is what is being seen. As of the time of this message, I am currently unaware of plans to update the UI to work better with Mobile devices. However, you can provide this feedback and use case details via the MongoDB feedback engine where others can also vote for it and the product team can view it.Regards,\nJason", "username": "Jason_Tran" } ]
MongoDB Cloud looks very bad on mobile
2022-10-17T05:25:40.618Z
MongoDB Cloud looks very bad on mobile
1,779
null
[ "aggregation", "queries", "java" ]
[ { "code": "<[Document{{_id=Leonor Silveira, count=8}}, Document{{_id=Joaquim de Almeida, count=6}},\n Document{{_id=Isabel Ruth, count=6}}, Document{{_id=Maria de Medeiros, count=5}},\n Document{{_id=Catherine Deneuve, count=5}}, Document{{_id=John Malkovich, count=5}},\n Document{{_id=Ricardo Trèpa, count=5}}, Document{{_id=Joèo Cèsar Monteiro, count=5}},...\n<[Document{{_id=Leonor Silveira, count=8}}, Document{{_id=Isabel Ruth, count=6}},\n Document{{_id=Joaquim de Almeida, count=6}}, Document{{_id=Joèo Cèsar Monteiro,\n count=5}}, Document{{_id=Maria de Medeiros, count=5}}, Document{{_id=Ricardo Trèpa,\n count=5}}, Document{{_id=Catherine Deneuve, count=5}}, Document{{_id=John Malkovich,\n count=5}},...\n", "text": "Hi Everyone,the test UsingAggregationBuilders.aggregateSeveralStages() fails because the order of the results are not equal. The ordering of attribute count is of course correct but the order of _id (here cast-name) is different.expected:but was:it looks like the second attribute is ordered differently (natural vs random?)\nthe size of the collection is the same.any hints?greetingsedit:\nits the same issue as discussed in UsingAggregationBuilders.aggregateSeveralStages assertion failsthe solution/workaround is: change the type of “sortByCountResults” and “groupByResults” from ArrayList to HashSet and it will work.this topic can be closed. thanks", "username": "Sebastian_Hort" }, { "code": "", "text": "", "username": "kevinadi" }, { "code": "", "text": "edit:\nits the same issue as discussed in UsingAggregationBuilders.aggregateSeveralStages assertion failsthe solution/workaround is: change the type of “sortByCountResults” and “groupByResults” from ArrayList to HashSet and it will work.this topic can be closed. thanksClosed as requested by @Sebastian_Hort as this was solved in the linked topic Best regards\nKevin", "username": "kevinadi" } ]
sortByCount and $group and $count - delivers not the same order
2022-10-14T13:47:59.412Z
sortByCount and $group and $count - delivers not the same order
1,173
null
[ "transactions" ]
[ { "code": "db.collection(\"users\").insertOne(user_json)", "text": "I have an endpoint that register users to my database. Each user is associated with a user id in incremental order such as 1, 2, 3, 4, etc. If I insert a new user, the new user will have a user id of (current max user id + 1).Suppose I have three users at my database with user id of 1, 2, and 3 correspondingly. If two users make a request to register concurrently, both users will have an user id of 4 which causes a duplicate conflict. One way to solve it is to use transaction since transactions are isolated. More specifically, user A has to wait until user B finishes his/her write to the database, then user A will proceed his/her write.Transaction follows ACID rule. What about a normal insert method without using a transaction. For example, just db.collection(\"users\").insertOne(user_json). Will this also follow ACID rule as well?\nIn my example above which two users make concurrent requests to register, will the two requests be isolated?", "username": "Black_Panther" }, { "code": "", "text": "Welcome to the MongoDB Community @Black_Panther !Updates to a single document still have the same ACID properties: a write to a single document is either successful or the document is not changed. You will not have a partially updated document and each update is applied atomically.However, ACID transactions are a step up from an ACID update (in MongoDB as well as other databases). If a transaction is in progress and a write outside the transaction modifies a document, the transaction will abort with a write conflict and can be retried. See In-progress Transactions and Write Conflicts for more information.Incrementing IDs are generally a performance anti-pattern for a distributed database as they require a single point of coordination (which is why the default unique identifiers are pseudorandom ObjectIDs that can be independently generated).However, there are a few references for your use case:MongoDB Auto-IncrementGenerating Globally Unique Identifiers for Use with MongoDBYou’ll notice the above references rely on the ACID properties of updating a counter document with a unique index rather than using multi-document transactions.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Do writes to MongoDB without using transactions follow the rules of ACID?
2022-10-25T19:28:07.214Z
Do writes to MongoDB without using transactions follow the rules of ACID?
1,605
null
[ "storage" ]
[ { "code": "", "text": "Hi folks,\nWe have MongoDB 4 Shard Cluster, One particular shard getting crash after throwing the below error everyday couple of time.MongoDB version: 4.0.28\nShard Type: Hash\nApplication : Write Intensive2022-10-12T18:25:28.760+0530 I NETWORK [conn93431] received client metadata from 10.241.89.33:60354 conn93431: { driver: { name: “NetworkInterfaceTL”, version: “4.0.28” }, os: { type: “Linux”, name: “Red Hat Enterprise Linux release 8.5 (Ootpa)”, architecture: “x86_64”, version: “Kernel 4.18.0-348.20.1.el8_5.x86_64” } }2022-10-12T18:25:29.271+0530 E STORAGE [conn93431] WiredTiger error (-31802) [1665579329:271730][4154273:0x7efde3939700], WT_CONNECTION.open_session: __open_session, 2058: out of sessions, configured for 20030 (including internal sessions): WT_ERROR: non-specific WiredTiger error Raw: [1665579329:271730][4154273:0x7efde3939700], WT_CONNECTION.open_session: __open_session, 2058: out of sessions, configured for 20030 (including internal sessions): WT_ERROR: non-specific WiredTiger error2022-10-12T18:25:29.271+0530 F - [conn93431] Invariant failure: conn->open_session(conn, NULL, “isolation=snapshot”, &_session) resulted in status UnknownError: -31802: WT_ERROR: non-specific WiredTiger error at src/mongo/db/storage/wiredtiger/wiredtiger_session_cache.cpp 1112022-10-12T18:25:29.271+0530 F - [conn93431] \\n\\n***aborting after invariant() failure\\n\\n", "username": "Manosh_Malai" }, { "code": "", "text": "Hi @Manosh_MalaiI believe you also opened SERVER-70783 for this issue. I would reiterate the comment on that ticket: MongoDB 4.0 is out of support since April 2022 and any issue experienced by out of support version will not be fixed.Thus it is entirely possible that this is a manifestation of an issue that was fixed in newer versions. The oldest supported series is 4.2, so I would start by upgrading from 4.0 to 4.2 (4.2.23 is the latest in the 4.2 series) as the first step. Please upgrade the cluster as a whole, not just the affected nodes Please also take backups before executing the procedure.Best regards\nKevin", "username": "kevinadi" } ]
WT_ERROR: non-specific WiredTiger error
2022-10-25T07:43:10.962Z
WT_ERROR: non-specific WiredTiger error
1,936
null
[ "queries", "indexes" ]
[ { "code": "db.col_name.createIndex({A: 1, B: 1, C: 1})\ndb.col_name.find({A: \"some_value\", C: \"some_other_value\"})\n", "text": "Suppose that a compound index is created lik so:Would this query work:This query skips the use of the B field.", "username": "Big_Cat_Public_Safety_Act" }, { "code": "", "text": "Hello @Big_Cat_Public_Safety_Act,Yes, your query will use an index,\nYou need to read the exact example and explanation of your use case here:", "username": "turivishal" }, { "code": "explain()BAC{A: 1, B: 1, C: 1}{A: 1, C: 1, B:1}B", "text": "Hi @Big_Cat_Public_Safety_Act,The best way to confirm index usage and efficiency would be to explain() your queries. I recommend using MongoDB Compass’ visual Explain Plan feature to View Query Performance, but you can also run explain queries via the MongoDB shell or using your favourite MongoDB client/driver.The compound index you created is a candidate to support your query since the prefix of the index matches your query (per the reference from @turivishal), but this won’t be efficient since all values of B will need to be scanned in order to match both A and C in your query.An index on {A: 1, B: 1, C: 1} would be ideal for your query. An index on {A: 1, C: 1, B:1} would be better if B is skipped sometimes but is still commonly used.For more details on compound indexes please see: Create Indexes to Support Your QueriesThe ESR (Equality, Sort, Range) RuleCreate Queries that Ensure SelectivityRegards,\nStennie", "username": "Stennie_X" } ]
Skipping fields when querying collections with compound indexes
2022-10-25T15:46:22.466Z
Skipping fields when querying collections with compound indexes
1,752
null
[ "queries", "indexes" ]
[ { "code": "db.col_name.createIndex({\"A\": 1})\ndb.col_name.createIndex({\"B\": 1})\ndb.col_name.find({A: 10, B: 50})\n", "text": "Will this query make use of both indexes? Is it doubly as efficient as just one index?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "db.collection.createIndex({A:1, B:1})db.collection.createIndex({B:1, A:1})db.collection.explain().find(....)", "text": "Hi @Big_Cat_Public_Safety_ActMongoDB has an index intersection feature to do what you described. However in practice, it’s not as effective as, and not a replacement for proper compound index, and the query planner rarely chooses index intersection plans. For the exact query you posted, the best index would be db.collection.createIndex({A:1, B:1}) or db.collection.createIndex({B:1, A:1}). Which index to create would depend on what other queries you have so you can maximize index usage.If you’re curious about MongoDB query index use, you can use db.collection.explain().find(....) to show the explain output of the query.If you prefer to use a GUI, you can also use MongoDB Compass query plan view to see the explain output.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can a query use multiple indexes?
2022-10-25T19:39:16.017Z
Can a query use multiple indexes?
3,353
null
[ "atlas-cluster", "golang" ]
[ { "code": "", "text": "First I gonna apologize with my bad English. So I just use mongo atlas for first time, My app is use k8s and I connect to mongo with query string that Mongo atlas provide to me , After connect I got error like thispanic: server selection error: context deadline exceeded, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: URL:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp 192.168.248.2:27017: i/o timeout }, { Addr: URL:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp 192.168.248.3:27017: i/o timeout }, { Addr: URL:27017, Type: Unknown, Last error: connection() error occurred during connection handshake: dial tcp 192.168.248.4:27017: i/o timeout }, ] }", "username": "management_zabtech" }, { "code": "", "text": "and when I check with in access history It’s show like this\n", "username": "management_zabtech" }, { "code": "", "text": "If you are able to connect i think you can ignore those messages\nAtlas does not support SCRA-SHA-256\nCheck this thread", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I believe the failed SCRAM-SHA-256 authentication is unrelated to the error you’re seeing from your application.The error messageerror occurred during connection handshake: dial tcp 192.168.248.4:27017: i/o timeoutsuggests that your application failed to connect to the MongoDB cluster in Atlas due to a timeout.A few questions:", "username": "Matt_Dale" } ]
I got an error like this, need help
2022-10-22T06:41:52.116Z
I got an error like this, need help
4,854
null
[ "aggregation" ]
[ { "code": "", "text": "I’m a new MongoDB user (3 days now!) and I’ve got a dashboard with six cards that hold information. I want to be able to do an analysis of the data on a week by week basis to track increases and decreases. Is it possible to go this week to week comparison within MongoDB or not. I know there is the export function, so I can get it in CSV format and do it there, but would like to keep it in Mongo if possible.", "username": "Dustin_Rekunyk" }, { "code": "", "text": "Hi @Dustin_Rekunyk welcome to the community!Could you provide some example data and how the calculation and output should look like?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin:Thank you for the welcome. Going through the learning assets on here and trying to take in all the information as best as possible.Planning this out over the last 24 hours, the end state is a chart that shows statistics for the most recent week compared to the previous week.Example data: (total hits on a website)total hits to the website previous week - 75\ntotal hits to website this week - 70\ndecrease in web traffic - 7%Ideally I would like the output to look something like the attached chart, where last week is on top (green) and this week on the bottom (blue). The chart attached is a generic one, I was playing around with my dashboard last night and was trying different methods without success.\n", "username": "Dustin_Rekunyk" }, { "code": "", "text": "Sounds like MongoDB Charts is the tool for you. Charts can create charts like your example using MongoDB data. There are a couple of tutorials on the bottom of the linked page.However if you’re using an external tool, then you might need to use the aggregation pipeline to process the data before it’s ready for consumption.If you’re interested in following a more structured learning method, you might be interested in MongoDB University where there are many free courses on many different topics. For example, I think you might be interested in:Best regards\nKevin", "username": "kevinadi" } ]
Can I compare week over week stats in MongoDB
2022-10-24T19:28:18.833Z
Can I compare week over week stats in MongoDB
1,351
null
[ "swift", "flexible-sync" ]
[ { "code": "class House: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var owner_id: String\n @Persisted var avatarURL: String\n @Persisted var caretakers: List<User>\n}\n\nfinal class User: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n \n @Persisted(originProperty: \"caretakers\") var caretakerFor: LinkingObjects<House>\n}\nowner_idavatarURLcaretakers\"rules\": {\n \"House\": [\n {\n \"name\": \"all\", // This already feels incorrect, but I can't set the `applyWhen` based on document data since it is set at the start of the session.\n \"applyWhen\": {},\n \"read\": true,\n \"write\": true,\n \"fields\": {\n \"avatarURL\": {\n \"read\": {\n \"caretakers\": {\n \"%stringToOid\": \"%%user.id\"\n } // Assume there is an \"or\" here for the owner\n },\n \"write\": { \"owner_id\": \"%%user.id\" }\n }\n },\n \"additional_fields\": {\n \"read\": false,\n \"write\": false\n }\n }\n ]\n }\nrole field \"fields.avatarURL.read\" expects a value of type \"boolean\", but provided value was of type \"object\"HouseUser", "text": "I would like a recommendation on the best way to track field level permissions with Flexible Sync.I’m using Swift Realm SDK. My objects are as follows…Here, I want the owner (owner_id) to have read and write access to avatarURL, but anyone in the caretakers list to only have read access to the field.When setting my Flexible Sync role permissions, I tried something along the following…However, I receive an error: role field \"fields.avatarURL.read\" expects a value of type \"boolean\", but provided value was of type \"object\"So it would appear I cannot dynamically set the permissions of a field based on another field.This approach seems good in the context of my objects since when I open a House, I can see all the caretakers, and each User can easily see all the houses they are caretakers for (thanks to the object linking). However, this doesn’t seem to fit in with how we configure Flexible Sync permissions.Is there something I’m missing that would make this approach work? Or is there another strategy for tracking permissions is recommended?", "username": "Kevin_Everly" }, { "code": "\"additional_fields\": {\n \"read\": false,\n \"write\": false\n}\n{\n \"name\": \"all\", \n \"applyWhen\": {},\n \"read\": { \n $or: [\n { \"caretakers\": {\n \"%stringToOid\":\"%%user.id\"\n }},\n {\n \"owner_id\": \"%%user.id\" \n }\n ]\n },\n \"write\": { \"owner_id\":\"%%user.id\" },\n \"fields\": {\n \"avatarURL\": {\n \"read\": true,\n \"write\": true\n }\n },\n \"additional_fields\": {\n \"read\": false,\n \"write\": false\n } \n}\n{\n \"name\": \"all\", \n \"applyWhen\": {},\n \"read\": { \"caretakers\": { \"%stringToOid\":\"%%user.id\" } },\n \"write\": { \"owner_id\":\"%%user.id\" },\n \"fields\": {\n \"avatarURL\": {\n \"read\": true,\n \"write\": true\n }\n },\n \"additional_fields\": {\n \"read\": false,\n \"write\": false\n } \n}\n", "text": "Hey Kevin,Field-level permissions in flexible sync rules are currently limited to only booleans; hence, the error message you are seeing about how “fields.avatarURL.read” expects a boolean. 
The team is still evaluating implementing support for dynamic expressions in field-level permissions, and we appreciate the feedback given here.I noticed in the role “all” that you have specified, only the “avatarURL” field can ever get read from or written to, due to the “additional_fields”:I think as a workaround, you could define the role to instead be something like:Or better yet, since write access implies read:Let me know if that works for you,\nJonathan", "username": "Jonathan_Lee" }, { "code": "class House: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var owner_id: String\n @Persisted var avatarURL: String\n @Persisted var familyMembers: List<User>\n @Persisted var caretakers: List<User>\n @Persisted var lawnMowed: List<Date>\n}\n\nfinal class User: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n \n @Persisted(originProperty: \"familyMember\") var familyMemberFor: LinkingObjects<House>\n @Persisted(originProperty: \"caretakers\") var caretakerFor: LinkingObjects<House>\n}\nfamilyMemberslawnMowedavatarURLcaregiversavatarURLapplyWhenallPermissionsapplyWhen", "text": "Hey Jonathan, thanks for the response.Looks like that does work for my example. I tried to trim everything down for the sake of the post, and in reality, we have some fields that we want writable by caretakers. Let me expand on the example to give a more accurate picture.So instead of just caretakers, we also have “family members”, that can add/remove caretakers. Also, let’s say we have a list for recording every time the lawn is mowed.So in this case, familyMembers can read/write caretakers. Caretakers should be able to read/write to lawnMowed, but they should not be able to write avatarURL, or read/write caregivers itself.Since our caretakers can write one of the fields, they would need to be included in the high level “write” block, which with the rules you provided, that would allow write access to the avatarURL field.Ideally, I would like to make separate caretaker/familyMember/owner roles, but we can’t evaluate the document in the applyWhen. The other option would be going with the all role I showed above, but then we would need to dynamically evaluate the document for read/write at the field level.What would you recommend for handling the permissions in this case? Would I be better off having a separate Permissions collection, and use a function within the applyWhen to assign a role?", "username": "Kevin_Everly" }, { "code": "caretakers", "text": "Also, Not sure the solution you provided will work. caretakers is not a queryable field so I am not able to use it in the read/write.", "username": "Kevin_Everly" }, { "code": "caretakersowner_idcaretakersowner_idfamilyMembers{\n \"name\": \"caretaker\",\n \"applyWhen\": { \"%%user.custom_data.isCaretaker\": true },\n ...\n}\n{\n \"canReadAvatarURL\": true,\n \"canReadCaretakers\": true\n}\n{\n \"name\": \"canReadAvatarURLAndCaretakersRole\",\n \"applyWhen\": { $and: [ \n { \"%%user.custom_data.canReadAvatarURL\": true },\n { \"%%user.custom_data.canReadCaretakers\": true }\n ]},\n \"read\": true,\n \"write\": true,\n \"fields\": {\n \"avatarURL\": {\n \"read\": true,\n \"write\": false\n },\n \"caretakers\": {\n \"read\": true,\n \"write\": false\n }\n },\n \"additional_fields\": {\n \"read\": false,\n \"write\": false\n },\n}\n", "text": "Got it, yeah. 
caretakers and owner_id would indeed have to be queryable fields to make the solution I suggested work.If possible, I would suggest moving out the fields caretakers, owner_id, and familyMembers into custom user data and defining roles like:If that’s not possible, my other thought would be storing some notion of different permission accesses in the custom user data, (ie which fields each user is allowed to access):and then you could define a role for every permissions scenario like:Jonathan", "username": "Jonathan_Lee" }, { "code": "canReadAvatarURLHouse", "text": "Thanks for looking into it. However, I’m still not sure this addresses my issue. Correct me if I’m wrong, but what you are suggesting means that if a user canReadAvatarURL, then it is applied to all House documents. I need something that is a tiered privilege but on a per-document base, or at least a different way to structure our documents to allow for this.", "username": "Kevin_Everly" }, { "code": "readwriteHere, I want the owner (owner_id) to have read and write access to avatarURL, \nbut anyone in the caretakers list to only have read access to the field.\navatarURLowner_idcaretakershouse_idHouseAvatar{\n house_id: <_id-of-house>,\n owner_id: <owner_id>,\n caretakers: <caretakers>,\n avatarURL: <avatarURL>,\n}\n{\n \"name\": \"role\", \n \"applyWhen\": {},\n \"read\": { \"caretakers\": { \"%stringToOid\":\"%%user.id\" } },\n \"write\": { \"owner_id\":\"%%user.id\" },\n \"fields\": {\n \"avatarURL\": {\n \"read\": true,\n \"write\": true\n }\n },\n \"additional_fields\": {\n \"read\": false,\n \"write\": false\n } \n}\navatarURLhouse_idHouseAvatarcaretakersowner_idHouseAvatar", "text": "I see, unfortunately I think the current system is pretty limited in supporting different field-level permission schemes based on the values of fields in the document. While I wouldn’t normally suggest this, as it is not really scalable or particularly elegant, I believe the only way of really accomplishing what you are trying to do then is to define a separate collection for each unique permission that you want to be able to capture, and leveraging the top-level read and write expressions in the role to dynamically control data access based on the value of document fields. Taking the example you gave of:You could move out the avatarURL field and copy out the owner_id/caretakers/house_id fields into a separate collection HouseAvatar with a schema like:And a role for that collection might look like:Then, when you need to read/write the avatarURL associated with a particular house, you could lookup the house_id in HouseAvatar.Even then, you would likely also need to define additional roles to control who can write caretakers and owner_id in HouseAvatar or introduce more collections altogether - since it sounds like those restrictions may dynamically depend on document fields as well.", "username": "Jonathan_Lee" }, { "code": "", "text": "As a follow up to my previous post, if it becomes challenging to enforce your data access control requirements via the existing sync permissions system, it may be easier to implement that control within the application logic, and would suggest exploring that as a possibility as well.Best,\nJonathan", "username": "Jonathan_Lee" } ]
Flexible Sync field level permissions based on another field
2022-10-14T18:02:31.051Z
Flexible Sync field level permissions based on another field
2,423
null
[ "python", "production", "motor-driver" ]
[ { "code": "", "text": "We are pleased to announce the 3.1.1 release of Motor - a coroutine-based API for non-blocking access to MongoDB in Python. Motor 3.1.1 brings official support for Python 3.11.For more information, see the full changelog .See the Motor 3.1.1 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!", "username": "Steve_Silvester" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Motor 3.1.1 Released
2022-10-25T20:55:56.659Z
MongoDB Motor 3.1.1 Released
2,570
null
[ "queries", "java", "flexible-sync", "one-to-one-relationship" ]
[ { "code": "{\n \"_facId\": {\n \"ref\": \"#/relationship/mongodb-atlas/facilities/Types\",\n \"foreignKey\": \"_id\",\n \"isList\": false\n }\n}\n\n{\n \"title\": \"Facility\",\n \"bsonType\": \"object\",\n \"required\": [\n \"owner_id\",\n \"_facId\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"_facId\": {\n \"bsonType\": \"long\"\n },\n \"owner_id\": {\n \"bsonType\": \"long\"\n },\n \"facName\": {\n \"bsonType\": \"string\"\n },\n \"facDescrizione\": {\n \"bsonType\": \"string\"\n },\n \"type\": {\n \"bsonType\": \"long\"\n }\n }\n}\n{\n \"title\": \"Type\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"nome\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"long\"\n },\n \"tipo\": {\n \"bsonType\": \"string\"\n },\n \"descrizione\": {\n \"bsonType\": \"string\"\n },\n \"nome\": {\n \"bsonType\": \"string\"\n }\n }\n}\nfailed to get queryable fields by table: field \"_facId\" in table \"Facility\" is not an allowed queryable field: queryable field cannot be a cross-collection linkSubscription subscription = realm.getSubscriptions().find(realm.where(Facility.class)\n .equalTo(\"owner_id\", 0));\n", "text": "Hi there,\ni’m experiencing with flexiblke sync and i try to undestend how relationships works.\nConsider this two schema:Facility is the table in which are collected some facilities and they can be of different types, so i make a relationship with Facility and TypesFacility schema and relationship with TypeType SchemaAnd here is the first thing i didn’t undestand: from the manual a read thatWhen you declare a to-one relationship in your object model, it must be an optional property. If you try to make a to-one relationship required, Realm throws an exception at runtime.But if i take out _facId from required i got this error on UI inteface\nfailed to get queryable fields by table: field \"_facId\" in table \"Facility\" is not an allowed queryable field: queryable field cannot be a cross-collection linkSo i made that field as required.Client side (Java SDK) i use this subscritpion that works fine without relationshipsbut i got this error:E/REALM_JAVA: Session Error[wss://realm.mongodb.com/]: BAD_QUERY(realm::sync::ProtocolError:226): Invalid query (IDENT, QUERY): failed to parse query: query contains table not in schema: “Facility” Logs: xxxxxxxxxxxxxWhere do i mistake?\nThank you in advance", "username": "Gianluca_Suzzi" }, { "code": "", "text": "Hi, so you have run into 2 different issues.The first, is that as the first message you posted states, we do not support syncing on a query on a link field. So initially you had your schema defined properly and so we reject the query when we receive it from the server. I’m happy to say we are actually lifting this limitation in our next release (1 week), so you will be able to sync on a link field.The second issue you run into is that you are defining an “invalid” schema by making the properly required.In terms of moving forward, it might be best to wait a week. We will deploy the ability to sync on a link field on this coming WednessdayThanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Thank you,\nI’ll wait then ", "username": "Gianluca_Suzzi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Flexible-Sync and relationships
2022-10-25T18:02:31.549Z
Flexible-Sync and relationships
2,579
null
[ "queries", "golang" ]
[ { "code": "func main() {\n session, err := mgo.Dial(\"mongo\")\n \n if err != nil {\n panic(err)\n }\n \n defer session.Close()\n session.SetMode(mgo.Monotonic, true)\n ensureIndex(session)\n \n mux := goji.NewMux()\n mux.HandleFunc(pat.Get(\"/cars\"), allCars(session))\n mux.HandleFunc(pat.Post(\"/cars\"), addCar(session))\n mux.HandleFunc(pat.Get(\"/cars/:vin\"), carByVIN(session))\n mux.HandleFunc(pat.Delete(\"/cars/:vin\"), deleteCar(session))\n http.ListenAndServe(\":8080\", mux)\n }\n \n func allCars(s *mgo.Session) func(w http.ResponseWriter, r *http.Request) {\n return func(w http.ResponseWriter, r *http.Request) {\n session := s.Copy()\n defer session.Close()\n\n c := session.DB(\"carsupermarket\").C(\"cars\")\n\n var cars []vehicle\n err := c.Find(bson.M{}).All(&cars)\n if err != nil {\n errorWithJSON(w, \"Database error\", http.StatusInternalServerError)\n log.Println(\"Failed get all cars: \", err)\n return\n }\n\n respBody, err := json.MarshalIndent(cars, \"\", \" \")\n if err != nil {\n log.Fatal(err)\n }\n\n responseWithJSON(w, respBody, http.StatusOK)\n }\n}\n", "text": "This is an except of code I have which uses goji and the mgo driver, I want to change this to use the official driver for golang:In terms of passing connectivity / session information to my goji handler functions what would people recommend ?, a pointer to a collection ?", "username": "Chris_Adkin" }, { "code": "func main() {\n\tclient, err := mongo.Connect(\n\t\tcontext.Background(),\n\t\toptions.Client().ApplyURI(\"mongo\").SetReadPreference(readpref.Primary()))\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer client.Disconnect(context.Background())\n\n\tcars := client.Database(\"carsupermarket\").Collection(\"cars\")\n\tensureIndex(cars)\n\n\tmux := goji.NewMux()\n\tmux.HandleFunc(pat.Get(\"/cars\"), allCars(cars))\n\tmux.HandleFunc(pat.Post(\"/cars\"), addCar(cars))\n\tmux.HandleFunc(pat.Get(\"/cars/:vin\"), carByVIN(cars))\n\tmux.HandleFunc(pat.Delete(\"/cars/:vin\"), deleteCar(cars))\n\thttp.ListenAndServe(\":8080\", mux)\n}\n\nfunc allCars(cars *mongo.Collection) func(w http.ResponseWriter, r *http.Request) {\n\treturn func(w http.ResponseWriter, r *http.Request) {\n\t\tcursor, err := cars.Find(r.Context(), bson.M{})\n\t\tif err != nil {\n\t\t\terrorWithJSON(w, \"Database error\", http.StatusInternalServerError)\n\t\t\tlog.Println(\"Failed to find cars: \", err)\n\t\t\treturn\n\t\t}\n\t\tdefer cursor.Close(r.Context())\n\n\t\tvar cars []vehicle\n\t\terr = cursor.All(r.Context(), &cars)\n\t\tif err != nil {\n\t\t\terrorWithJSON(w, \"Database error\", http.StatusInternalServerError)\n\t\t\tlog.Println(\"Failed get all cars: \", err)\n\t\t\treturn\n\t\t}\n\n\t\trespBody, err := json.MarshalIndent(cars, \"\", \" \")\n\t\tif err != nil {\n\t\t\tlog.Fatal(err)\n\t\t}\n\n\t\tresponseWithJSON(w, respBody, http.StatusOK)\n\t}\n}\nmgoContextContexthttp.RequestMonotonicmgoClientDatabaseCollection", "text": "Hey @Chris_Adkin thanks for the question and sorry about the slow reply!Here’s an example of your provided code using the Go Driver:Important differences from mgo:", "username": "Matt_Dale" } ]
Advice when moving from the mgo to official mongodb driver for golang
2022-09-06T10:21:47.258Z
Advice when moving from the mgo to official mongodb driver for golang
2,353
null
[ "indexes" ]
[ { "code": "admin> use TestDB\nTestDB> db.createCollection('coll')\nTestDB> db.coll.createIndex({4:1,1:-1},{unique:true})\nTestDB> db.coll.getIndexex()\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n { v: 2, key: { '1': -1, '4': 1 }, name: '1_-1_4_1', unique: true }\n]\nTestDB> db.createCollection(\"coll2\")\nTestDB> db.coll2.createIndex({'s':1, 'a':-1},{unique:true})\n[\n { v: 2, key: { _id: 1 }, name: '_id_' },\n { v: 2, key: { s: 1, a: -1 }, name: 's_1_a_-1', unique: true }\n]\n", "text": "When I used numbers as keys, the compound index prefixes created were not as expected。To define a compound index:OutputI expected 4 to be the prefix of the index, but it turned out to be 1, why? It looks like the sorting happensAnd when I use strings as keywords, the results are completely inconsistent。Following:OutputWhat? It doesn’t seem to be sorting.", "username": "limian_huang" }, { "code": "Map", "text": "Welcome to the MongoDB Community @limian_huang!The problem you are observing is a side effect of how JavaScript treats objects with numeric keys (or keys that look like numbers). JavaScript interpreters will sort numeric key names ahead of alphanumeric key names, which leads to unexpected outcomes where order is important (for example, MongoDB index definitions). This behaviour is part of the JavaScript specification: ECMAScript 2015 Language Specification – ECMA-262 6th Edition.My strong recommendation would be to use alphanumeric keys and avoid numeric keys (or strings that look like numbers). You can also use an order-preserving data structure like a Map in JavaScript, but this will still be an easy path to bugs.Earlier discussion: Sorting multiple fields produces wrong order - #3 by Stennie.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is the mongo compound index keyword arrangement rule? Numbers first, strings later?
2022-10-21T01:42:08.484Z
What is the mongo compound index keyword arrangement rule? Numbers first, strings later?
1,389
null
[ "server" ]
[ { "code": "git", "text": "I’m no git wizard so it’s not easy for me to see how the different branches/tags are created/developed/merged/maintained/etc.Is there a document that describes how/when/why the branches are created/developed/maintained?For example, does main development occur on the master branch that is tagged/branched for the progressive releases? … with back-porting of important fixes, etc., for supported versions?", "username": "Cast_Away" }, { "code": "", "text": "What you describe in your example is basically what we do. You can check out https://www.mongodb.com/docs/manual/reference/versioning/ for details about our versioning system.Cheers!", "username": "Kelsey_Schubert" }, { "code": "v6.1", "text": "@Kelsey_Schubert Thanks for the reply.I can see how versioning is done and I’m wondering how development is done.For example, on GitHub, branch v6.1 is 137 commits ahead and 1161 commits behind master branch.\nScreenshot from 2022-10-22 08-53-151301×383 46.1 KB\nDo all the active branches cherrypick off the master branch? If a developer was to make a new feature, where is the start? … a new personal branch of master that will one day, hopefully, be merged back into master? If a developer creates a fix applicable to multiple branches/releases, is it first developed as a branch of master, then merged back into master and then cherrypicked into all the active branches?Cheers!", "username": "Cast_Away" }, { "code": "", "text": "Yes, we develop against that master branch (create branches from it and then merge into it). Typically we cherry-pick fixes back if we consider it safe and worthwhile to do so.", "username": "Kelsey_Schubert" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Server versioning and development/release/maintenance model
2022-10-08T18:03:14.828Z
Server versioning and development/release/maintenance model
2,281
null
[ "crud", "golang" ]
[ { "code": "type User struct {\n Id primitive.ObjectId `bson:\"_id\"`\n Name string `bson:\"name\"`\n}\nuser := &User{\nName: \"John\",\n}\nopts := options.FindOneAndReplace().SetUpsert(true).SetReturnDocument(options.After)\nresults, err := collection.FindOneAndReplace(context.Background(), bson.D{{ \"_id\", user.Id }}, user, opts)\ntype User struct {\n Id *primitive.ObjectId `bson:\"_id\"`\n Name string `bson:\"name\"`\n}\n_id: null*primitive.ObjectId", "text": "In the case of a new user with the following structand usingUpsert works but it creates a document with primitive.Object ID ObjectId(“000000000000”) instead of generating a ID.If we were to change the struct to use the pointer instead.It also doesn’t generate a new _id, instead it upserts with _id: null even though *primitive.ObjectId is nil. I was expecting it to generate a new object ID when it’s What’s the solution to this, other than generating the objectid manually at the application side?", "username": "Dave_Teu" }, { "code": "omitemptyUser.Id_idtype User struct {\n\tId primitive.ObjectID `bson:\"_id,omitempty\"`\n\tName string `bson:\"name\"`\n}\n", "text": "Try including the omitempty BSON directive on the User.Id field. That should prevent a “zero value” ObjectID from being written to the BSON document, prompting the Go Driver to automatically add a random _id to the document before sending it to the database.For example:", "username": "Matt_Dale" } ]
Go - How to properly upsert with UpdateOne/ReplaceOne
2022-09-07T09:18:32.459Z
Go - How to properly upsert with UpdateOne/ReplaceOne
2,841
null
[ "golang" ]
[ { "code": "", "text": "I’m trying to measure the number of bytes transferred by the MongoDB Go client. We’re seeing higher than expected data transfer between MongoDB and some of our services, and we’d like to understand which services are moving the most data. Measuring the size of the objects after decoding isn’t a viable option since we could theoretically decode less data into a struct than was passed back from the database to the client.", "username": "Clark_McCauley" }, { "code": "CommandMonitorCommandMonitorvar bytesMonitored uint64\n\nfunc main() {\n\tcmdMonitor := &event.CommandMonitor{\n\t\tStarted: func(_ context.Context, evt *event.CommandStartedEvent) {\n\t\t\tatomic.AddUint64(&bytesMonitored, uint64(len(evt.Command)))\n\t\t},\n\t\tSucceeded: func(_ context.Context, evt *event.CommandSucceededEvent) {\n\t\t\tatomic.AddUint64(&bytesMonitored, uint64(len(evt.Reply)))\n\t\t},\n\t}\n\n\tclient, err := mongo.Connect(\n\t\tcontext.Background(),\n\t\toptions.Client().ApplyURI(\"mongodb://myURI\").SetMonitor(cmdMonitor))\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer client.Disconnect(context.Background())\n\n\tgo func() {\n\t\tfor range time.Tick(5 * time.Second) {\n\t\t\tlog.Print(\"Current bytes monitored:\", atomic.LoadUint64(&bytesMonitored))\n\t\t}\n\t}()\n\n\t// Use the client to run operations.\n}\nCommandMonitorCommandMonitorCommandMonitor", "text": "@Clark_McCauley thanks for the question and sorry for the slow reply!You can use a CommandMonitor to get access to all of the request and response messages sent to the database.For example, to print the number of bytes recorded by the CommandMonitor every 5 seconds:A few important caveats:", "username": "Matt_Dale" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver - Capture data transfer metrics
2022-09-12T15:54:27.901Z
MongoDB Go Driver - Capture data transfer metrics
1,786
null
[ "aggregation", "swift" ]
[ { "code": "import Foundation\nimport SwiftDate\n\nstruct TimePeriodData {\n \n //MARK: - Time Period\n \n var timePeriod: TimePeriod = .week\n var date: Date = Date()\n var localDate: Date = Date()\n \n //MARK: - Work Day Times\n \n var startTimeArray: [Date] = []\n var endTimeArray: [Date] = []\n \n //MARK: - Work Day Hours\n \n var hoursSingleTimeArray: [Double] = []\n var hoursPenaltyArray: [Double] = []\n var hoursTotalArray: [Double] = []\n \n //MARK: - Work Day Income\n \n var singleTimeIncomeArray: [Double] = []\n var penaltyIncomeArray: [Double] = []\n var totalIncomeArray: [Double] = []\n}\nprivate func updateWorkDays() {\n // Get Time Period Dates\n let start = timePeriodStart()\n let end = timePeriodEnd()\n \n // Get Current User Work Days for Time Period\n let allUserWorkDays = realm.objects(WorkDay.self).where { workDay in\n workDay.date.contains(start ... end)\n }\n workDays = allUserWorkDays\n}\nprivate func getData(for inputDate: Date, from inputWorkDays: Results<WorkDay>) -> TimePeriodData {\n var data = TimePeriodData()\n for workDay in inputWorkDays {\n // Time Period\n data.timePeriod = self.timePeriod\n data.date = inputDate\n data.localDate = workDayListBrain.getLocalDate(from: inputDate)\n \n // Work Day Times\n data.startTimeArray.append(workDay.startTime)\n data.endTimeArray.append(workDay.endTime)\n \n // Work Day Hours\n data.hoursSingleTimeArray.append(workDay.hoursSingleTime)\n data.hoursPenaltyArray.append(workDay.hoursPenalty)\n data.hoursTotalArray.append(workDay.hoursTotal)\n \n // Work Day Income\n data.singleTimeIncomeArray.append(workDay.singleTimeIncome)\n data.penaltyIncomeArray.append(workDay.penaltyIncome)\n\n }\n return data\n}\n data.startTimeArray = inputWorkDays.compactMap { workDay in workDay.startTime } as [Date]\n data.endTimeArray = inputWorkDays.compactMap { workDay in workDay.endTime } as [Date]\n", "text": "I’m trying to get aggregate data for individual Realm records so that it can be graphed on a chart.I have a ‘WorkDay’ Realm Object which stores values like ‘startTime’, ‘endTime’, ‘singleTimeIncome’, ‘penaltyIncome’, ‘totalIncome’, etc.A charting example is that I want to collect the ‘totalIncome’ of all Work Days for an entire year and graph that against the ‘totalIncome’ for all Work Days of previous years.My current approach is to cycle all WorkDay Realm Objects for a given year with a ‘for loop’ and store the values from WorkDay in a custom data struct. I move the values from the Realm WorkDay Object to an array in the data struct. The reason I want to add them into an array as it will then allow me to calculate things like ‘averageTotalIncome’ for that year as well as getting the ‘totalIncome’ for the entire year by running ‘reduce(0, +)’ on the array.The custom data struct looks similar to this:I then collect all Work Days from the Realm, the function looks like this:Once I have the Work Days I then move the data into ‘TimePeriodData’ which will be used by SwiftUI to generate charts, etc. The code look something like this:When ‘getData’ returns the UI is updated. However what I am finding is that when I use ‘getData’ for an entire year I need to cycle 100 Work Day Objects and this takes around 2 Seconds. I know I will need to create ‘TimePeriodData’ for previous years as well so there is other data to graph against, assuming each of which will take 2 Seconds if they have 100 Work Days. This adds up quickly.I’ve read other posts where people cycle 50,000 Realm Objects and it takes 0.1 Seconds. 
Given I am only cycling 100 and it takes two seconds, am I doing something fundamentally wrong here? Is there a better way to approach this?What I have tried to date is using a map function instead of a ‘for loop’, this looks a bit like this:But the ‘compactMap’ made no notable gains in speed. I also tried doing it all on a BG Thread, this made things ‘feel’ speedier as the UI was never stalled while processing data but it still took 2 Seconds to update.Any help or advice you could offer would be very much appreciated. Thanks!", "username": "brad" }, { "code": "WorkDayworkDaysworkDays = allUserWorkDaysvar data = TimePeriodData()\nfor workDay in inputWorkDays {\n data.timePeriod = self.timePeriod\n data.date = inputDate\n data.localDate = workDayListBrain.getLocalDate(from: inputDate)\ndatadata.dateworkDay.hoursSingleTimelet sum: Double = realm.objects(WorkDay.self).sum(ofProperty: \"hoursSingleTime\")getDataprivate var getData(inputDate: date, startDate: Date, endDate: Date) - > {\n var data = TimePeriodData()\n\n data.timePeriod = //use startDate and endDate\n data.date = inputDate\n data.localDate = workDayListBrain.getLocalDate(from: inputDate)\n \n data.singleTimeTotal: Double = realm.objects(WorkDay.self)\n .where { $0.startDate >= startDate && $0.endDate <= endDate }.sum(ofProperty: \"hoursSingleTime\")\n \n data.penaltyIncomeTotal: Double = realm.objects(WorkDay.self)\n .where { $0.startDate >= startDate && $0.endDate <= endDate }.sum(ofProperty: \"penaltyIncome\")\n \n //... the rest of the totals\n return data\n}\n", "text": "The question needs a bit of clarity; we probably need see what the WorkDay model looks like as that’s what will determine the queries.Also, more complete and brief code will help - for example, we don’t know what workDays is or where it’s used as it doesn’t appear elsewhere in the questionworkDays = allUserWorkDaysPart of this loop is questionable;The data var is instantiated once before the loop, but then the loop assigns some of the same properties over and over. e.g. if 10/23/2022 is passed in as input date and there are 365 input work days, data.date will be assigned 10/23/2022 365 times. The only changing values are the startTime, endTimeI etc. Is that intentional?Lastly for those vars within the loop, workDay.hoursSingleTime for example. The loop iterates over all of the values to store them in an array (based on the array name) and that array appears to be used to get the totals for each property? (I could be way off on that guess)If that’s the case, let Realm do the heavy lifting and return a total instead of iterating, maybe something like this?let sum: Double = realm.objects(WorkDay.self).sum(ofProperty: \"hoursSingleTime\")so then the iteration in the call is removed and getData becomes thisIf my guess is not correct, can you add a few more data points (per above) for clarity and we’ll take a look?", "username": "Jay" }, { "code": " class WorkDay: Object, ObjectKeyIdentifiable {\n \n //MARK: - Realm Properties\n \n // Object Tracking - Used to Track Work Days in Database\n @objc dynamic var id = NSUUID().uuidString\n @objc dynamic var createdAt = Date()\n \n override static func primaryKey() -> String? 
{\n return \"id\"\n }\n \n // Realm Relationship\n var parentJob = LinkingObjects(fromType: Job.self, property: \"workDays\")\n \n // User Inputs\n @objc dynamic var date = Date()\n \n @objc dynamic var clockOn = Date()\n @objc dynamic var clockOff = Date()\n \n // Hours Worked\n @objc dynamic var hoursSingleTime: Double = 0\n @objc dynamic var hoursPenalty: Double = 0\n @objc dynamic var hoursTotal: Double = 0\n \n // Time Strings\n @objc dynamic var timesSingleTime: String = \"\"\n @objc dynamic var timesPenalty: String = \"\"\n @objc dynamic var timesTotal: String = \"\"\n}\n@Published var workDays: Results<WorkDay>?\n@Published var timePeriod: TimePeriod = .week\n@Published var date: Date = Date()\nenum TimePeriod: String {\n case week = \"week\"\n case month = \"month\"\n case quarter = \"quarter\"\n case year = \"year\"\n}\n@objc dynamic var driveAway: Date? = nil\n", "text": "Hey Jay,Thank you for taking the time to write back.The WorkDay Model looks a bit like this:When I reference ‘workDays = allUserWorkDays’, ‘workDays’ is a local variable as follows, alongside other local variables:And self.timePeriod is an enum as follows:When the VC appears it is assigned the Current Date via Date(). The user can then browse via Week, Month, Quarter and Year, each of which correspond to the TimePeriod. When the TimePeriod is set, .week is default, the system gets the start/end of the time period which is detailed above by timePeriodStart() and timePeriodEnd(), in this example the first day of the week and the last day of the week. Then the Realm returns all WorkDay Objects that appear within that TimePeriod.Thank you for pointing out that issue with the timePeriod, date and localDate at the start of the loop, this was a mistake on my part.My reasoning for putting all of the ‘workDay.hoursSingleTime’ into an array was so that I could derive a number of data points from it. One would be the total that you point out, another would be to get the average and the median. My thinking was that I’d need to sort the array to get the median.But what you are suggesting by getting the Realm to calculate the values instead which should be much faster. From what I understand Realm provides avg, count, max, min and sum in terms of calculating aggregates? So I’d be able to do pretty much everything I aim for except median via the Realm directly?The only other thing I would note is that some values in the WorkDay Object are optionals and some are calculated via computed properties.For example there is a property as follows:Sometimes the user would record this, other times they would not. If I used avg on this via the Realm would all values that are nil be ignored? Or would the better approach be to filter the Results? 
for ‘driveAway’ values that aren’t nil prior to getting the avg?In regards to the computed properties, they are based on values stored in the Realm but aren’t useable in the same way so I think for those values the only way to aggregate the data would be via running a for loop, do you agree?", "username": "brad" }, { "code": "//average driveAway taking nils into account\nlet results = realm.objects(WorkDay.self).where { $0.driveAway != nil }\nlet count = results.count\nlet countDouble = Double(count)\nlet total: Double = results.sum(ofProperty: \"driveAway\")\nprint(\"Average: \\(total / countDouble)\")\n//find median of hoursSingleTime\nlet allResults = realm.objects(WorkDay.self).sorted(byKeyPath: \"hoursSingleTime\")\nlet allCount = allResults.count\nlet countIsEvenNum = allCount.isMultiple(of: 2) //even or odd number\n\nif countIsEvenNum == true {\n print(\"even num\") //leaving this for your homework :-)\n //get the objects values above and below the 'middle' index, add together and divide by 2\n} else {\n let middleIndex = allCount / 2\n let medianWork = allResults[middleIndex]\n print(\"Median is: \\(medianWork.hoursSingleTime)\")\n}\n", "text": "The only other thing I would note is that some values in the WorkDay Object are optionalsNo big deal. Easily handled with a filter (all the code below is a bit verbose for readability)pretty much everything I aim for except median via the Realm directlyYou can do that too - no iteration necessary! Here’s the median of the hoursSingleTime", "username": "Jay" }, { "code": "var totalMileage: Double {\n var total: Double = 0\n for mileage in mileageLogged {\n total += mileage.unitsTravelled\n }\n return total\n}\nvar totalMileage: Double {\n return mileageLogged.sum(ofProperty: \"unitsTravelled\")\n}\nvar medianHours: Double {\n let sortedHoursTotal = hoursTotalArray.sorted { $0 < $1 }\n if sortedHoursTotal.count % 2 == 0 {\n return Double((sortedHoursTotal[(sortedHoursTotal.count / 2)] + sortedHoursTotal[(sortedHoursTotal.count / 2) - 1])) / 2\n } else {\n return Double(sortedHoursTotal[(sortedHoursTotal.count - 1) / 2])\n }\n}", "text": "Hey Jay,Thank you again for the detailed reply, very much appreciated.After your previous reply, nothing was noted flagging anything I was doing as wildly wrong and that should be causing such a delay in my loop and processing times. So I started digging deeper through my code to see if I could identify a problem.As I mentioned I had a bunch of computed properties in the WorkDay Realm Object, one of them is to return the total mileage, the code looks as follows:So I thought I’d looked for any ‘for loops’ in my computed properties and change them to use Realm Methods instead. When refactored to use the built in Realm Methods that you described the code looked as follows:I then setup some more accurate ways to benchmark the changes using DispatchTime.now() and comparing it when the function has finished.I changed 10-15 Computed Properties and ran the benchmarks. What I found is that the function performed noticeably slower when using the Realm Methods. Below are my times between using a for loop and the Realm Methods.Normal - For Loops\nDiff: nanoseconds(967731375) | Secs: 0.9677313750000001\nDiff: nanoseconds(871276166) | Secs: 0.871276166Computed Property Optimisations\nDiff: nanoseconds(1020237792) | Secs: 1.020237792\nDiff: nanoseconds(921382750) | Secs: 0.92138275Given this I reverted my code to the for loops and then it hit me, a computed property is computing. 
So I went down the rabbit hole of completely minimising the use of computed properties so things weren’t called twice and thus computations weren’t unnecessarily repeated. This turned out to be the crux of my problem. After a days of optimising this is where I landed:With Augment Function + Super + Optimise Income + BG Thread\nDiff: nanoseconds(160863416) | Secs: 0.160863416\nDiff: nanoseconds(134789875) | Secs: 0.134789875This was around 6x faster and made everything feel great again. Normally the computed properties are used to get info for 1x WorkDay Object at a time, this ensures calculations presented to the user are always up to date, but when running those computations for 100+ WorkDay Objects they are very much inefficient.Thank you very much for providing those code examples above, they make sense and are easy to read. I find the font used on the Realm Documentation quite difficult to read so it’s nice to see it here in a familiar code format.And yes, I appreciate you slotting in the homework about median. I actually use median in a different app and my code is as follows:", "username": "brad" }, { "code": ".where.filterlet dogList = person.dogList //lazy-loading niceness\nlet foundDogList = dogList.where { $0.name == \"Spot\" } //memory friendly\n\nlet dogArray = Array(dogList) //gobble up memory\nlet foundDogArray = dogArray.filter { $0.name == \"Spot\" } //gobble up more memory\n.sorted.sorted(byKeyPath:", "text": "Excellent! Sounds like you are well on your way.One thing to note:Realm is very memory friendly - as long as code uses Realm based calls, all of the objects are lazily loaded with very low memory impact. However, as soon as Realm objects are moved to Arrays or high-level Swift functions are used, that lazy-loading-ness goes out the window and all objects are loaded into memory.This can not only bog down an app, but can actually overwhelm the devices memory.It doesn’t sound like this will be an issue based on your dataset but just be aware to stick with Realm functions as much as possible; List vs Array for example, and .where vs .filter. Imagine a Person with a List of Dog objects:I mention it because of the Swift .sorted vs Realms .sorted(byKeyPath: in your code. Just a thought!", "username": "Jay" } ]
Getting Aggregate Statistics from Individual Realm Records
2022-10-23T06:54:42.826Z
Getting Aggregate Statistics from Individual Realm Records
2,271
https://www.mongodb.com/…_2_1024x576.jpeg
[ "dach-virtual-community" ]
[ { "code": "Senior Solution Architect @MongoDBSolution Architect, Scrum Master", "text": "\n2022-10 DACH MUG - Banner1920×1080 207 KB\nAtlas Search und die Atlas App Services bieten eine One Stop Data Platform für deine Applikationen. Die Plattform nimmt dir dabei elementare Anforderungen wie Security, Skalierung und Monitoring ab und du kannst dich als Entwickler auf das konzentrieren, was dich gut dastehen lässt. Features entwickeln für deine User. Mit Atlas App Services und der Lucene-basierten Atlas-Search kannst du alle Anforderungen erfüllen, die eine moderne Anwendung benötigt. Skalierbar, sicher und ohne den Aufwand, jede Komponente einzeln einzurichten.Behaupten kann jeder, im Talk gibt es das alles zum Anfassen mit CodeEure Fragen zum Thema sind sehr willkommen! Bringt sie mit. Das Format der User Group gibt uns die Möglichkeit auch mal links und rechts vom Pfad abzuweichen.Fast schon Tradition: Wer mag, kann bei einem Quiz im Anschluss gleich sein Wissen prüfen, den Gewinnern winken tolle MongoDB Swag-Preise.Event Type: Online\nLink: Video Conferencing URLMelde dich bei der virtuellen MongoDB DACH User Gruppe an, um über anstehende virtuelle Treffen und Diskussionen informiert zu werden.\nPhilipEschenbacher800×800 113 KB\nSenior Solution Architect @MongoDBSolution Architect, Scrum MasterAs an independent consultant I work with customers to build data architectures using modern technology stacks and NoSQL data stores. I support engineering teams to design, build and deploy performant and scalable MongoDB applications.\nI’m humbled and honoured to have received the MonogDB Innovation award 2021 and to be part of the founding members of the MongoDB Champions Program.", "username": "michael_hoeller" }, { "code": "", "text": "", "username": "Harshit" }, { "code": "", "text": "", "username": "Harshit" }, { "code": "", "text": "Gentle Reminder: The event starts in 15 mins. Please join here: Launch Meeting - Zoom", "username": "Harshit" } ]
DACH MUG: Scalable Applications with Atlas Search and App Services
2022-09-22T14:20:22.936Z
DACH MUG: Scalable Applications with Atlas Search and App Services
3,414
null
[ "node-js", "connecting" ]
[ { "code": "", "text": "Hey folks,Hi, I’m having problems with connecting to MongoDB Atlas from a Google Cloud Run instance. I’m intermittently (a few times per day) getting a MongoServerSelectionError when my server is initializing and trying to connect. This is my current setup:I’m not seeing any alerts in MongoDB atlas. Please note that this is an intermittent issue. Sometimes it connects sometimes it doesn’t. Because my app currently has very low traffic there is never more than one instance running so the 30-second timeout + time to restart of the container makes this problem critical.Here’s the stack trace:MongoServerSelectionError: connection timed out at Timeout._onTimeout (/usr/src/app/node_modules/mongodb/lib/core/sdam/topology.js:438:30) at listOnTimeout (internal/timers.js:549:17) at processTimers (internal/timers.js:492:7) {\nreason: TopologyDescription {\ntype: ‘ReplicaSetNoPrimary’,\nsetName: null,\nmaxSetVersion: null,\nmaxElectionId: null,\nservers: Map {\n‘xxx-00-00-0qvnm.gcp.mongodb.net:27017’ => [ServerDescription],\n‘xxx-00-01-0qvnm.gcp.mongodb.net:27017’ => [ServerDescription],\n‘xxx-00-02-0qvnm.gcp.mongodb.net:27017’ => [ServerDescription]\n},\nstale: false,\ncompatible: true,\ncompatibilityError: null,\nlogicalSessionTimeoutMinutes: null,\nheartbeatFrequencyMS: 10000,\nlocalThresholdMS: 15,\ncommonWireVersion: null\n}Can anybody assist with troubleshooting this?Regards,\nStefan", "username": "Stefan_Li" }, { "code": "", "text": "Sorry I’m on MongoDB version 4.2 not 4.4", "username": "Stefan_Li" }, { "code": "Mongoose", "text": "Has this issue been resolved for you? I’m having the same issue, of intermittent connectivity due to a timeout, but mine is happening from a DigitalOcean droplet. I’ve tried almost everything, except use Mongoose.For my setup:", "username": "C_Bess" }, { "code": "useUnifiedTopology: false", "text": "I resolved it by setting useUnifiedTopology: false, works great now.", "username": "C_Bess" }, { "code": "useUnifiedTopology: false", "text": "I had a similar issue, however setting the useUnifiedTopology: false did not help me in my case.\nIs there any other fixes to solve this error?", "username": "Kishor_Kumar2" }, { "code": "", "text": "Hi, I am new to MongoDB Atlas too. A few days ago, I had connection problem using Python. I have managed to find a solution for that.Just now, using MongoDB Atlas NodeJs tutorial, using the same Atlas connection string in my Python stuff. I had the below timeout error.It turned out, my Public IP address has changed because I reset my Windows 10 internet connection. I added the new Public IP to Atlast IP whitelist and it works.This is my case. I am new to this, I hope this might help someone…MongoServerSelectionError: connection to 13.55.193.163:27017 closed\nat Timeout._onTimeout (D:\\Codes\\nodejs\\mongodb\\node_modules\\mongodb\\lib\\sdam\\topology.js:312:38)\nat listOnTimeout (node:internal/timers:557:17)\nat processTimers (node:internal/timers:500:7) {\nreason: TopologyDescription {\ntype: ‘ReplicaSetNoPrimary’,\nservers: Map(3) {\n‘cluster0-shard-00-00 . 71o6u . mongodb . 
net : 27017’ => [ServerDescription],\n‘cluster0-shard-00-01.71o6u.mongodb.net 27017’ => [ServerDescription],\n‘cluster0-shard-00-02.71o6u.mongodb.net:27017’ => [ServerDescription]\n},\nstale: false,\ncompatible: true,\nheartbeatFrequencyMS: 10000,\nlocalThresholdMS: 15,\nsetName: ‘atlas-vx7zhd-shard-0’,\nlogicalSessionTimeoutMinutes: undefined\n}\n}", "username": "Be_Hai_Nguyen" }, { "code": "", "text": "This article helped me with this issue. Fixing MongoServerSelectionError while connecting MongoDB with node.js | by Barak Saidoff | Medium", "username": "Michael_Parker_Norton" }, { "code": "", "text": "@Stefan_Li Did you fix this? I have the same problem and cannot find a way to resolve it", "username": "Matthieu_Fesselier" }, { "code": "", "text": "Same - I also keep getting the error intermittently when using Cloud Run with Atlas.", "username": "Sumit_Kumar12" }, { "code": "options: {\n useUnifiedTopology: true,\n useNewUrlParser: true,\n useCreateIndex: true,\n useFindAndModify: false,\n }\nconnection timed out MongoServerSelectionError: connection timed out\n at Timeout._onTimeout (/app/node_modules/mongodb/lib/core/sdam/topology.js:438:30)\n at listOnTimeout (internal/timers.js:557:17)\n at processTimers (internal/timers.js:500:7)\n", "text": "Hi, i have same issue, with NodeJs and Mongoose, on GCP App Engine with peer conection and using private url:mongodb+srv://user:[email protected]/?retryWrites=true&w=majorityNode version: 14.17.4\nMongoose verison: 5.13.13The problem happens intermittently, it does not follow a pattern, it has already happened twice in a day, but it has also been about a month without happening. Here is the error received from the application:I tried allow conection from 0.0.0.0/0 but dont work", "username": "Wanber_Alexandre_da_Silva" } ]
Intermittent MongoServerSelectionError when connecting from NodeJS using Google Cloud Run + MongoDB Atlas
2020-08-12T09:34:18.401Z
Intermittent MongoServerSelectionError when connecting from NodeJS using Google Cloud Run + MongoDB Atlas
18,077
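Editor's note: the replies in the thread above circle around connection reuse and topology options. On serverless platforms such as Cloud Run, a common mitigation for intermittent server-selection timeouts is to create the MongoClient once per container instance and reuse it across requests instead of reconnecting on every invocation. A minimal Node.js sketch follows; the connection string, database and collection names are placeholders, not values taken from the thread.

```js
// Minimal sketch of client reuse for Cloud Run (Node.js driver 4.x+).
// ATLAS_URI is assumed to be supplied as an environment variable.
const { MongoClient } = require('mongodb');

let clientPromise; // cached for the lifetime of the container instance

function getClient() {
  if (!clientPromise) {
    const client = new MongoClient(process.env.ATLAS_URI, {
      serverSelectionTimeoutMS: 10000, // fail fast instead of hanging for 30s
      maxPoolSize: 10,
    });
    clientPromise = client.connect(); // resolves to the connected client
  }
  return clientPromise;
}

// Example Express-style request handler reusing the cached client.
async function handler(req, res) {
  const client = await getClient();
  const doc = await client.db('mydb').collection('items').findOne({});
  res.json(doc);
}

module.exports = { handler };
```

Because Cloud Run may freeze a container between requests, a fast `serverSelectionTimeoutMS` combined with driver retries tends to surface transient network issues more gracefully than the default 30-second wait.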
null
[ "ruby" ]
[ { "code": "puts dbdb[:collection name]#<Mongo::Database:0x000000010e726190>\n#<Mongo::Collection:0x0000000109c63b30>\n", "text": "Hi, trying to retrieve contents of a client database through Ruby’s mongo documentation, however when outputting my collections via puts db and db[:collection name], the console returns an object of the formorand not actual explicit values and its parameters. Does anyone know how to solve this?Thanks", "username": "Pin_Hoong_Lai" }, { "code": "", "text": "While this doesn’t answer your question, I noticed that the link you provided is to an older version (2.2) of the driver. Are you following along with that, are you using the most recent version’s (2.18) documentation?", "username": "Doug_Duncan" }, { "code": "putsfind()aggregate()inspectrequire 'mongo'\n\nclient = Mongo::Client.new([ '127.0.0.1:27017' ], :database => 'test')\ndb = client.database\nputs client.inspect\nputs db.inspect\n#<Mongo::Client:0x580 cluster=#<Cluster topology=Single[127.0.0.1:27017] servers=[#<Server address=127.0.0.1:27017 STANDALONE pool=#<ConnectionPool size=0 (0-20) used=0 avail=0 pending=0>>]>>\n#<Mongo::Database:0x600 name=test>\n", "text": "Hi @Pin_Hoong_Lai,Calling puts on an object will print a stringified version of that object, as per your output.You need to call appropriate methods like find() or aggregate() to retrieve documents:not actual explicit values and its parametersDo you perhaps mean you want to see a representation of the object and instance values? You can use inspect for that:Would result in output like:If those suggestions aren’t what you are looking for, please provide more details on your environment:Regards,\nStennie", "username": "Stennie_X" }, { "code": "client = Mongo::Client.new('mongodb+srv://user:[email protected]/test')\ncollection = client[:listingsAndReviews]\nputs collection.find({name: \"Ribeira Charming Duplex\"})\n_id\n\"10006546\"\nlisting_url\n\"https://www.airbnb.com/rooms/10006546\"\nname\n\"Ribeira Charming Duplex\"\nsummary\n\"Fantastic duplex apartment with three bedrooms, located in the histori…\"\nspace\n\"Privileged views of the Douro River and Ribeira square, our apartment …\"\ndescription\n\"Fantastic duplex apartment with three bedrooms, located in the histori…\"\n", "text": "Thanks for the reply @Stennie_X.As you mentioned, I am trying to retrieve documents within my database. 
However, upon calling the find function on my DB collection, I am still unable to see the relevant document output.\nBelow is my code snippet:The output which I was expecting to receive was this:However i keep getting nil values.\nThe Database has been made public and hence you would be able to access it as well through mongoDB compass through the URI in my code snippet.Thank you,\nPin Hoong", "username": "Pin_Hoong_Lai" }, { "code": "puts collection.find({name: \"Ribeira Charming Duplex\"})find()CollectionView# Print the first result\nputs collection.find({name: \"Ribeira Charming Duplex\"}).first\n\n# Print all results\ncollection.find({name: \"Ribeira Charming Duplex\"}).each do |document|\n puts document\nend\n", "text": "Hi @Pin_Hoong_Lai,Thank you for providing the example output to clarify that you are just aiming to print the result document.puts collection.find({name: \"Ribeira Charming Duplex\"})The find() method returns the results of the operation as an iterable CollectionView.To get your desired output you need to iterate the result, for example:For more examples please see the Ruby Driver Quick Start Tutorial.The Database has been made public and hence you would be able to access it as well through mongoDB compass through the URI in my code snippet.Please remove the open access to your cluster. If you want to provide help for reproducing an issue, a sample document and the specific versions of software used should be a great starting point.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[Ruby] Unable to view collection/ db content
2022-09-23T08:29:00.857Z
[Ruby] Unable to view collection/ db content
3,116
https://www.mongodb.com/…2a05b6b71f97.png
[ "app-services-user-auth", "react-native" ]
[ { "code": "", "text": "I started work on a React Native project about a week ago and originaly just set it up with email and password login, that works find but after trying to set up Google Sign In I’ve hit a bit of a roadblock.It seems to be able to work when I have OpenID Connect turned on, but I would like to be able to recieve the meta data when a user signs up. I’m able to log out my meta data so its getting passed making the request to Google, but below is the error I’m getting;\n779×186 8.19 KB\nAny suggestions are welcome as I’m out of ideas!", "username": "AustinMcGuinn" }, { "code": "", "text": "Following the thread, as I am looking into this as well.", "username": "Christian_Wagner" }, { "code": "", "text": "I wasn’t able to figure out whether you were using OpenID connect or not. If you are using OpenID, then metadata isn’t something Realm provides\n\nimage1366×730 82.4 KB\n", "username": "Sumedha_Mehta1" }, { "code": "", "text": "Same issue as OP. I could not get Google Sign In working until I turned on OpenID Conenct in the realm authentication provider portal. Once I turned on OpenID Connect everything worked fine, but I could not get it to work without this.I was fine with that setting, so I moved forward without much investigation. I no longer have the error messages or steps to replicate the problem. I am only posting this reply here in case others who are experiencing the same issue know to try turning on OpenID Connect through the Realm portal.", "username": "David_Harlow" }, { "code": "", "text": "I’ve also experienced the same issue. I cannot use Google provider unless I check that “OpenID Connect” option in the Authentication Providers tab. That way I’m not able to get user information like the email address, profile photo etc. I can just see the full name and nothing else.Not sure if anyone has a solution for this problem, it would be much appreciated. ", "username": "111757" }, { "code": "", "text": "I’m having the same issue. I really need the email address. I submitted a bug report on Github, but no solution there, either.", "username": "Noora_Chahine" }, { "code": "", "text": "Not sure if we have to enable some special permission inside a Google Cloud Platform project itself, in order to get some additional user info except the name…?", "username": "111757" }, { "code": "", "text": "@111757 I submitted a bug report to the developers and they gave me a workaround that allows us to get the email + other personal info from Google. You can read desistefanova’s solution here.It just requires switching to authenticating with JWT, which Google accepts.", "username": "Noora_Chahine" } ]
Error fetching info from OAuth2 provider
2021-07-26T17:20:56.483Z
Error fetching info from OAuth2 provider
5,868
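Editor's note: the workaround referenced in the last reply is to authenticate with the Google ID token through an App Services Custom JWT provider rather than the built-in Google provider. A hedged sketch of the client side in the Realm JS SDK is below; the app ID, how the Google ID token is obtained, and the metadata mapping are assumptions that depend on your provider configuration.

```js
// Sketch only: assumes you already obtained a Google ID token (a JWT) on the
// client, e.g. via the Google Sign-In SDK, and that a Custom JWT provider is
// configured in App Services to trust Google's keys and map email/picture fields.
import Realm from 'realm';

const app = new Realm.App({ id: 'your-app-id' }); // placeholder app ID

async function loginWithGoogleIdToken(googleIdToken) {
  const credentials = Realm.Credentials.jwt(googleIdToken);
  const user = await app.logIn(credentials);
  // Fields mapped in the JWT provider configuration should surface here.
  console.log(user.profile);
  return user;
}
```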
null
[ "python", "atlas-cluster" ]
[ { "code": "", "text": "Hi,I am having trouble connecting to my MongoDB database from my hosting server at Heliohost.org. The server is running python 3.10.8 and all required dependencies have been installed. Port 27017 have also been opened for the primary and two secondary domain IPs i.e.My script managed to connect to the database from my local machine without problem but it failed to connect from the hosting server. My connection string used is:mongodb+srv://user:[email protected]/mydatabase?retryWrites=true&w=majorityI tried to get help from the server admins and the following are some of the feedbacks i got from them:Sorry as i’m pasting here the comments verbatim because i’m not that good at re-explaining this myself… i’m not really familiar with everything about DNS records, so could someone explain what is all this about? Why am i not able to connect to the database from my hosting server? what can/should my hosting server do to allow me to connect to MongoDB host ?Thanks in advance…", "username": "ra_rahim" }, { "code": "", "text": "please check your database server’s network access settings. it is possible you have strict IP access allowing only your local machine’s IP.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Unless you have configured Atlas to allow all IP addresses(which you shouldn’t do). You will need to add IP addresses of the servers that will be connecting to your Atlas cluster.Pymongo only added dnspython as a dependency in 4.3.2 so prior to this version dnspython would have to be added to requirements.txt, this is one thing to check.Atlas Documentation describes how to connect.Server admins 3 and 4 got close. The mongodb+srv:// connection string is used to resolve both SRV records AND a txt record. You can read more at Connection String URI Format", "username": "chris" }, { "code": "", "text": "I can confirm that i am allowing all IP addresses to connect to my cluster.", "username": "ra_rahim" }, { "code": "", "text": "So far, access from all IP addresses are allowed and dnspython has been installed.Currently i have only requested for port 27017 to be opened… do i need to request for ports 27015 and 27016 to be opened too if i’m connecting to my cluster using the following connection string?mongodb+srv://user:[email protected]/mydatabase?retryWrites=true&w=majority", "username": "ra_rahim" }, { "code": "mongodb+srv", "text": "Sorry as i’m pasting here the comments verbatim because i’m not that good at re-explaining this myself… i’m not really familiar with everything about DNS records, so could someone explain what is all this about? Why am i not able to connect to the database from my hosting server? what can/should my hosting server do to allow me to connect to MongoDB host ?Welcome to the MongoDB Community @ra_rahim !The hostnames you have provided all appear to be resolvable. If this was a newly created cluster, it is possible the hostnames have not propagated to your local DNS servers yet.If possible I would try using different name servers, for example Google Public DNS.The script you listed is trying to connect to radtech.p4cyn.mongodb.net which doesn’t exist, or at least there is no DNS for it…The mongodb+srv connection string format uses SRV and TXT records to discover the cluster hostnames and connection settings. For more background, see MongoDB 3.6: Here to SRV you with easier replica set connections | MongoDB.You can’t connect to a website that doesn’t return an A record. 
Either connect to it directly by IP address, ie: 18.138.205.196:27017, or come up with a new URL that actually exists…This is correct for any of the hostnames in your replica set. You need to use the Atlas hostnames (and ideally, the SRV url) to connect to your cluster.It doesn’t resolve for me either against any DNS server I’ve triedGoogle Public DNS is generally a good starting point.You can use this tool DNS Checker - DNS Check Propagation Tool to check A records from 33 DNS servers all around the world. They all report that there is no A record for that domain…This suggestion incorrectly assumes that an SRV hostname will have an A record (it will not, as above).One of my colleagues wrote a small tool to try to help checking Atlas connections. I suggest trying to run this from your host server environment: GitHub - pkdone/mongo-connection-check: Tool to check the connectivity to a remote MongoDB deployment, providing advice if connectivity is not achieved.Currently i have only requested for port 27017 to be opened… do i need to request for ports 27015 and 27016 to be opened too if i’m connecting to my cluster using the following connection string?For an Atlas replica set, you will only need access to port 27017. Port 27016 is used for sharded clusters and port 27015 for the MongoDB Connector for BI. For a reference, please see Attempting to connect to an Atlas deployment from behind a firewall.Regards,\nStennie", "username": "Stennie_X" } ]
Having trouble connecting to Mongo DB from hosting server
2022-10-23T16:00:25.863Z
Having trouble connecting to Mongo DB from hosting server
4,162
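Editor's note: since the `mongodb+srv://` scheme relies on SRV and TXT lookups rather than a plain A record, a quick way to verify what the hosting server can actually resolve is to query those records directly from that server. The cluster hostname below is a placeholder; substitute your own.

```sh
# SRV record listing the actual replica set hostnames and ports
dig +short SRV _mongodb._tcp.cluster0.xxxxx.mongodb.net

# TXT record carrying default connection options (replicaSet, authSource)
dig +short TXT cluster0.xxxxx.mongodb.net

# The individual hosts returned by the SRV lookup do have A records
dig +short cluster0-shard-00-00.xxxxx.mongodb.net
```

If the SRV or TXT lookups fail only on the hosting server, the problem is the server's DNS resolver rather than the Atlas cluster itself.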
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "exports.getAllTours = async (req, res) => {\n const queryObj = { ...req.query }; \n\n let query = Tour.find(queryObj);\n\n if (req.query.sort) {\n const sortBy = req.query.sort.split(',').join(' ');\n query = query.sort(sortBy);\n } \n \n const tours = await query;\n\n res.status(200).json({\n status: 'success',\n requestedAt: req.requestTime,\n results: tours.length,\n data: {\n tours\n }\n });\n } catch (err) {\n res.status(404).json({\n status: 'fail',\n message: err\n });\n }\n };\n", "text": "For some reason, I can’t sort with multiple parameters in Mongoose. One parameter works just fine, but the ones after that are not working.My query looks like this : 127.0.0.1:3000/api/v1/tours?sort=ratingsAverage,price", "username": "Frederic_Ferreira" }, { "code": "sortBy\"ratingsAverage price\"", "text": "Hello @Frederic_Ferreira, Welcome to the MongoDB community developer forum,As per your query, it looks good, You need to provide more debugging details like,", "username": "turivishal" }, { "code": "", "text": "Hello @turivishal, thank you \nI took some screenshots of what you recommended, and as we can see the console print the two query parameters when logging sortBy (here I tried with price and duration). Also we can see on postman that, the results are correct for the first parameter (price) but not the second (duration), they are not sorted by duration.Here are the screenshots :", "username": "Frederic_Ferreira" }, { "code": "", "text": "\nCapture d’écran 2022-10-23 à 13.47.121920×881 80.4 KB\n", "username": "Frederic_Ferreira" }, { "code": "", "text": "\nCapture d’écran 2022-10-23 à 13.48.021532×1270 130 KB\n", "username": "Frederic_Ferreira" }, { "code": "", "text": "\nCapture d’écran 2022-10-23 à 13.48.271552×1350 288 KB\n", "username": "Frederic_Ferreira" }, { "code": "sort()127.0.0.1:3000/api/v1/tours?sort=ratingsAverage,price", "text": "Hi @Frederic_Ferreira welcome to the community!I think the issue is that the sort parameters was somehow not passed properly into mongoose. However, the code example you posted and the screenshot are showing different things: the code example looks like it’s one script, while the screenshot shows that the sorting processing was done in a function called sort().What I would do in this situation is to manually step through the code to make sure that all the parameters are processed correctly, and the mongoose query was constructed correctly.If you need further help, could you post a self-contained code that can reproduce this? E.g., you mentioned that this was called using the URI 127.0.0.1:3000/api/v1/tours?sort=ratingsAverage,price so I would also include the basic Express (?) code, so that we can work from a common source and not guessing on how your Express code looks like (and end up diverging since I could code it differently )Best regards\nKevin", "username": "kevinadi" } ]
Sort with Mongoose
2022-10-23T08:06:57.019Z
Sort with Mongoose
3,037
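Editor's note: two things in the original snippet are worth checking. First, it has a `catch` block without a matching `try`. Second, the raw `req.query` object still contains the `sort` key when it is passed to `Tour.find()`, and a secondary sort key only takes effect for documents that tie on the first key, so if every `ratingsAverage` (or `price`) value is distinct the second field will appear to be ignored. A reduced sketch, with the model and field names assumed from the thread:

```js
// Hedged sketch: assumes a Mongoose model named Tour with ratingsAverage/price/duration.
// ?sort=ratingsAverage,price  ->  "ratingsAverage price"
exports.getAllTours = async (req, res) => {
  try {
    const queryObj = { ...req.query };
    ['sort', 'page', 'limit', 'fields'].forEach((el) => delete queryObj[el]);

    let query = Tour.find(queryObj);

    if (req.query.sort) {
      const sortBy = req.query.sort.split(',').join(' ');
      // Equivalent object form: { ratingsAverage: 1, price: 1 }
      query = query.sort(sortBy);
    }

    const tours = await query;
    res.status(200).json({ status: 'success', results: tours.length, data: { tours } });
  } catch (err) {
    res.status(404).json({ status: 'fail', message: err });
  }
};
```

Running the equivalent query in mongosh, e.g. `db.tours.find().sort({ ratingsAverage: 1, price: 1 })`, helps confirm whether the issue is in the driver-side sort string or simply in how many ties exist in the data.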
null
[ "replication" ]
[ { "code": "", "text": "Hey,\nWe have a 4 nodes replicaset with one primary node and 3 secondary nodes, and one more arbiter node.\nLately, we ran into an issue when the arbiter node was down for few minutes.\nIt seems that during that time there was a problem reading/write to the replicaset. Is that make sense? or maybe I configured something wrongThanks in advance", "username": "shachar.giladi" }, { "code": "rs.conf()", "text": "Hi @shachar.giladi - Welcome to the community.We have a 4 nodes replicaset with one primary node and 3 secondary nodes, and one more arbiter node.Do you mean a 5 node replicaset? (P,S,S,S,A)Lately, we ran into an issue when the arbiter node was down for few minutes.\nIt seems that during that time there was a problem reading/write to the replicaset. Is that make sense? or maybe I configured something wrongThere’s not enough information here to definitively say what the issue is or could be resolved by. Can you provide more information including:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Mongodb replicaset availability
2022-10-24T07:40:51.365Z
Mongodb replicaset availability
1,292
null
[ "performance" ]
[ { "code": "", "text": "Hello,For a while now I have been developing and testing my REST API application using the MongoDB’s free cloud instance. Now, MongoDB clearly states that the free instance is meant for development and testing rather than to be used in production so I went in with the corrisponding mindset.Being not too far away from deploying the api to production I have started looking into optimizing it and there was an endpoint that was really bothering me with its 700 - 1000ms processing time required. Other endpoints also had respectively rather high fluctuating processing times anywhere between 100 to 400ms.Out of curiosity, I have switched to MongoDB hosted on the localhost and the processing time of all those endpoints dropped to 30-50ms making it blazing fast in comparison. This experiment has induced some questions in me.Thank you very much", "username": "RENOVATIO" }, { "code": "", "text": "Hello @RENOVATIO,there was an endpoint that was really bothering me with its 700 - 1000ms processing time required. Other endpoints also had respectively rather high fluctuating processing times anywhere between 100 to 400msOut of curiosity, I have switched to MongoDB hosted on the localhost and the processing time of all those endpoints dropped to 30-50ms making it blazing fast in comparison.Just in regards to those values you’ve provided, I am interested to know how they are being calculated? E.g. is this the total time including application processing, network latency and database execution time or just database execution time alone? I presume it’s just the database execution times but please correct me if I am wrong here.It would be difficult to say with the information we have here alone. However, please note there are some limitations associated with the M0, M2 and M5 tier clusters. More specifically to the performance, it is possible that either the following limits were exceeded:You could check with the Atlas chat support team regarding the above limits and if your cluster had exceeded them during the time the testing was performed.At this point in time I understand your concern based off the values provided. However in saying so, it’s difficult to confirm whether the higher processing times recorded with the workload used in the testing scenarios you’ve mentioned can be attributed solely to the M0 tier cluster.The easiest way to test this out would probably be to test on a M10 instance but I do understand that this may not be the most convenient as it would mean that payment information would be required during the testing duration of the M10 instance. In saying so, Atlas is a database as a service which includes various features and benefits. If you find that hosting locally suits your use-case then this is certainly fine as well In addition to the above questions regarding the processing times, please provide the following information:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hello @Jason_Tran , hope you are doing fine!Forgive me for not providing the necessary specifics. The processing times actually come from Postman, so they are somewhat “dirty” as it is not just pure database queries. Today I launched the paid M10 instance and I am facing what seems to be a similar problem as with the free instance, albeit somewhat milder.The following is the Postman report for creating a resource on AWS M10. 
The server is based in a city where I am located and where I am firing the requests from.That very same request done to MongoDB hosted locally:I just can’t seem to understand what is causing such an increase. By the way, I have found the following forum post regarding the similar issue:https://www.mongodb.com/community/forums/t/network-latency-server-atlas-not-happening-on-local-mongodb/7412The response there mentions the reopening of connections being the potential suspect, but I am not sure if this is my case, as I am using mongoose and performing connect() only once (during the start up of the app).", "username": "RENOVATIO" }, { "code": "", "text": "Hi @RENOVATIO - Thank for clarifying those details I’m not too familiar what “Transfer Start” correlates too within Postman. After a quick google, a postman forum posts suggests that it may be the equivalent to Chrome Dev Tool network metric called Waiting (TTFB) defined as:Time spent waiting for the initial response, also known as the Time To First Byte. This time captures the latency of a round trip to the server in addition to the time spent waiting for the server to deliver the response.If I understand the test correctly, it’s measuring everything including network round trip time. In many cases, this is unavoidable since Atlas is hosted in a cloud provider (AWS/Azure/GCP) and the amount of network latency would depend on how good your connections are to those providers. This is the reason why your local MongoDB deployment is so fast: it doesn’t need to contend with network latencies or any network issues.There is no way to remove this network latency, although you can usually minimize it by:Although some of these may be quite obvious, I’d like to confirm some of the following:I understand the performance time increases may not be solely due to network but the above information should help narrow this down.Regards,\nJason", "username": "Jason_Tran" } ]
Free Atlas instance performance vs the paid instance
2022-10-23T18:56:08.381Z
Free Atlas instance performance vs the paid instance
3,017
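Editor's note: one way to separate network round-trip time from server execution time, following the advice above, is to time a lightweight `ping` command against both deployments from the same application host. A small Node.js sketch; both URIs are placeholders.

```js
// Rough round-trip measurement; assumes Node.js driver 4.x and reachable URIs.
const { MongoClient } = require('mongodb');

async function measure(uri, label, runs = 20) {
  const client = new MongoClient(uri);
  await client.connect();
  const admin = client.db('admin');
  await admin.command({ ping: 1 }); // warm up the connection first

  const start = process.hrtime.bigint();
  for (let i = 0; i < runs; i++) {
    await admin.command({ ping: 1 });
  }
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ~${(elapsedMs / runs).toFixed(1)} ms per round trip`);
  await client.close();
}

// measure('mongodb://localhost:27017', 'local');
// measure('mongodb+srv://user:<password>@cluster0.xxxxx.mongodb.net', 'Atlas');
```

If the Atlas ping time alone accounts for most of the gap seen in Postman, the difference is network latency rather than query execution.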
null
[]
[ { "code": "", "text": "Hello, my name is diego or zkr1p and I like programming, I am Chilean, I am pleased to be able to introduce myself in this community, I hope I can help with my knowledge, have a great day", "username": "Diego_Peralt" }, { "code": "", "text": "Welcome to the mongodb community!!", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hello Diego and welcome to the community ", "username": "Jason_Tran" }, { "code": "", "text": "Welcome to the MongoDB community @Diego_Peralt!Can you share a bit more about the tech stack you are using or learning?Regards,\nStennie", "username": "Stennie_X" } ]
Hello, mongodb community, from Chile
2022-10-22T09:49:48.127Z
Hello, mongodb community, from Chile
2,079
null
[ "node-js", "atlas-device-sync", "atlas-functions", "app-services-user-auth" ]
[ { "code": " async function getValidAccessToken () {\n const app = await getRealmApp()\n // Guarantee that there's a logged in user with a valid access token\n if (!app.currentUser) {\n // If no user is logged in, log in an anonymous user. The logged in user will have a valid\n // access token.\n const token = 'xyz....'\n if (token) {\n const credentials = Realm.Credentials.jwt(token);\n await app.logIn(credentials);\n } \n } else {\n // An already logged in user's access token might be stale. To guarantee that the token is\n // valid, we refresh the user's custom data which also refreshes their access token.\n await app.currentUser.refreshCustomData()\n }\n return app.currentUser.accessToken\n}", "text": "I’m using realm-graphql in my electron app. Now I’m facing an error while I’m passing authentication token in the header of graphql request.\nThis is the error:Error: Network error: invalid session: failed to find refresh token\nat new ApolloError.Here’s my code:", "username": "Abhishek_Matta" }, { "code": "", "text": "hey … i do not think u are refreshing the access token when it expires … after refreshing it u will have to update the graphql client header as well", "username": "rares.lupascu" } ]
Graphql throwing error invalid session
2021-09-24T11:26:34.127Z
Graphql throwing error invalid session
3,140
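Editor's note: the suggestion in the reply, refresh the token and update the GraphQL client header, is usually implemented by resolving the access token per request instead of once at client construction, for example with an Apollo context link that calls the `getValidAccessToken` helper from the question. A hedged sketch; the endpoint URL and app ID are placeholders.

```js
// Sketch assuming Apollo Client 3 and the getValidAccessToken() helper shown above.
import { ApolloClient, HttpLink, InMemoryCache } from '@apollo/client';
import { setContext } from '@apollo/client/link/context';

const httpLink = new HttpLink({
  uri: 'https://realm.mongodb.com/api/client/v2.0/app/<app-id>/graphql', // placeholder
});

// Runs before every request, so a refreshed access token is always attached.
const authLink = setContext(async (_, { headers }) => {
  const token = await getValidAccessToken();
  return { headers: { ...headers, Authorization: `Bearer ${token}` } };
});

const client = new ApolloClient({
  link: authLink.concat(httpLink),
  cache: new InMemoryCache(),
});
```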
null
[ "atlas-online-archive" ]
[ { "code": "timestampdate-hour", "text": "Hello there!\nI’m plan to migrate a large amount of data to mongodb with ability to move some data to Atlas Online Archive.\nAverage size of the record is 1.6Kb.\nI saw a video with mongo db engineer explaining how to use AOA by using archiving rule based on timestamp A Day at the Data Lake: Atlas Online Archive with Bob Liles Atlas Engineer - YouTube\nSo the question is: when archiving rule is based on timestamp does data will be partitioned by this key in addition to 2 keys available for partitioning (resulting in 3 partition keys in total)?\nWill be queries effective in cost and speed as if I use for partitioning date-hour key (e.g. 2020-10-24T23) ?", "username": "dmitry_N_A1" }, { "code": "", "text": "Hi Dmitry,Yes that’s correct, if you are archiving using a date-based rule, you can choose up to 3 partitioning keys in total, including the date field you selected. For a custom archival rule, you can choose up to 2 partitioning keys . The queries will be effective if you are most frequently querying the archive using the same key. Otherwise, you might push the date field to the lowest in the order of partitioning keys.Please refer to this blog to know the list of do’s and don’ts for partitioning fields:Code, content, tutorials, programs and community to enable developers of all skill levels on the MongoDB Data Platform. Join or follow us here to learn more!-Prem", "username": "Prem_PK_Krishna" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas online archive partition based on timestamp
2022-10-24T16:17:35.430Z
Atlas online archive partition based on timestamp
2,419
null
[ "flexible-sync" ]
[ { "code": "", "text": "Hi there,\ni’ve tried flexible-sync with success but i wonder if is it possible also to manage offline realm. In my application i want the users to share a realm but they can work also offline and the sync when on line. What’s the best practice to do that? Am i supposed to mange two different realms, one with flexible-sync and the other only on local device?Thank’s in advance.", "username": "Gianluca_Suzzi" }, { "code": "", "text": "You can still use the synchronized realm file while offline. The whole idea of Realm is to allow you to work with your database and not worry about connectivity.", "username": "nirinchev" }, { "code": "", "text": "Thank you @nirinchev,\nso i use the same connection without worrying about the device is online or offline?\nAnyway thank’s, i’ll try ", "username": "Gianluca_Suzzi" }, { "code": "", "text": "Hi @Gianluca_Suzzi,so i use the same connection without worrying about the device is online or offline?Most of the time, yes, disconnections, especially in a mobile environment, will happen, but Realm will work regardless on the local data, and pick up where it left when the connection is back.The only case you’ll have to avoid is to change the subscriptions (i.e. the criteria you select data to keep on-device) while offline: as you can’t contact the backend, the attempt to re-sync following a change in subscriptions will fail.", "username": "Paolo_Manna" }, { "code": "", "text": "And to clarify there, you can definitely change the subscriptions while offline, but the effect of that change will only become apparent when the device becomes online. The change will be persisted though, so even if you restart the app, it will eventually go through.", "username": "nirinchev" }, { "code": "", "text": "Thank you for your support guys ", "username": "Gianluca_Suzzi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Flexible-sync and offline device best practice
2022-10-23T17:15:57.553Z
Flexible-sync and offline device best practice
3,114
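Editor's note: for reference, opening a Flexible Sync realm and registering subscriptions looks roughly like the sketch below (Realm JS; the app ID and Task schema are made up for illustration). The same opened realm keeps working on local data while offline and syncs when connectivity returns, which is the behaviour described in the replies above.

```js
// Hedged sketch for Realm JS with Flexible Sync.
const TaskSchema = {
  name: 'Task',
  primaryKey: '_id',
  properties: { _id: 'objectId', name: 'string', owner_id: 'string' },
};

const app = new Realm.App({ id: 'your-app-id' }); // placeholder
const user = await app.logIn(Realm.Credentials.anonymous());

const realm = await Realm.open({
  schema: [TaskSchema],
  sync: { user, flexible: true },
});

// Subscriptions define which documents are kept on the device.
await realm.subscriptions.update((mutableSubs) => {
  mutableSubs.add(realm.objects('Task').filtered('owner_id == $0', user.id));
});

// Reads and writes below work with or without a network connection.
realm.write(() => {
  realm.create('Task', {
    _id: new Realm.BSON.ObjectId(),
    name: 'works offline too',
    owner_id: user.id,
  });
});
```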
null
[ "queries" ]
[ { "code": "{a:1,b1,c1}\n{d:1,e:1,f:1}\n", "text": "I have a collection where the data is highly unstructured. There can be totally different fields in each document.Now how can i query efficiently?\nIn original scenario common fields can be identified and index can be created but there can be a lot of unknown fields where index can not be created. So, how do we optimise quering so, that it can run efficiently.", "username": "Kaushik_Das2" }, { "code": "", "text": "Hi @Kaushik_Das2 ,There are several solutions but the bet one dependant on the specific use case.There are wild card indexes which allow dynamically index all fields:There is a nice pattern called the attribute pattern where using a key value arrays allow dynamic attribute indexing:Learn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny Thanks for the solution. Just one question that dynamically adding index in all fields will slow down update and insertion. I should expect this behaviour right?", "username": "Kaushik_Das2" }, { "code": "", "text": "Yep @Kaushik_Das2it is possible.", "username": "Pavel_Duchovny" } ]
Query optimisation for unstructured data
2022-10-19T12:55:59.712Z
Query optimisation for unstructured data
1,312
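Editor's note: for concreteness, the two options mentioned in the replies look like this in mongosh. The collection and field names are illustrative only.

```js
// Option 1: wildcard index over all fields of highly dynamic documents
db.items.createIndex({ "$**": 1 })

// Option 2: attribute pattern - store dynamic fields as key/value pairs...
db.items.insertOne({
  name: "thing",
  attributes: [
    { k: "a", v: 1 },
    { k: "d", v: 1 }
  ]
})

// ...and index the pair once, which covers every attribute
db.items.createIndex({ "attributes.k": 1, "attributes.v": 1 })

// Query shape that can use the attribute-pattern index
db.items.find({ attributes: { $elemMatch: { k: "a", v: 1 } } })
```

As noted above, either approach trades some write overhead for predictable read performance, so it is worth benchmarking with a realistic insert/update workload.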
null
[ "java", "mongodb-shell", "atlas-cluster" ]
[ { "code": " ServerApi api = ServerApi.builder().version(ServerApiVersion.V1).build();\n String uri = \"mongodb+srv://retail_user:***************@cluster0.********.mongodb.net/Cluster0?retryWrites=true&w=majority\";\n\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(uri))\n .serverApi(api)\n .build();\n\n MongoClient mongoClient = MongoClients.create(settings);\nThe method serverApi(ServerApi) is undefined for the type MongoClientSettings.Builder", "text": "I can connect using the mongosh, but have seen various exceptions when trying to connect with a java client directly. The most recent one I can’t get past when using a direct copy paste from the docs is the following…The exception thrown isThe method serverApi(ServerApi) is undefined for the type MongoClientSettings.BuilderI copy pasted the code above, using the 4.7 driver, from https://www.mongodb.com/docs/drivers/java/sync/current/fundamentals/stable-api/#enable-the-stable-api-on-a-mongodb-client", "username": "Steve_Howard" }, { "code": "", "text": "I pasted the same code into my editor and it compiles fine with 4.7. You should double-check that you’re actually using the right driver version, as that method was added all the way back in the 4.3 release.Regards,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='retail_user', source='Cluster0', password=<hidden>, mechanismProperties=<hidden>}import com.mongodb.client.*;\nimport com.mongodb.ServerApi;\nimport com.mongodb.ServerApiVersion;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.MongoCredential;\nimport com.mongodb.ConnectionString;\nimport org.bson.*;\n\npublic class a{\n public static void main (String args[]) {\n try {\n\n ServerApi api = ServerApi.builder().version(ServerApiVersion.V1).build();\n //mongosh \"mongodb+srv://cluster0.********.mongodb.net/Cluster0\" --username retail-user -p ************\n String uri = \"mongodb+srv://retail_user:*************@cluster0.*********.mongodb.net/Cluster0\";\n uri = \"mongodb+srv://cluster0.*********.mongodb.net/Cluster0?retryWrites=true&w=majority\";\n\n MongoCredential credential = MongoCredential.createCredential(\n \"retail_user\", \"Cluster0\",\n \"***********\".toCharArray());\n\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(uri))\n .serverApi(api)\n .credential(credential)\n .build();\n\n MongoClient mongoClient = MongoClients.create(settings);\n\n MongoDatabase db = mongoClient.getDatabase(\"Cluster0\");\n MongoCollection<Document> collection = db.getCollection(\"inventory\");\n for (Document doc : collection.find()) {\n System.out.println(doc.toJson() + \"<br>\");\n }\n }\n catch(Exception e) {\n e.printStackTrace();\n }\n }\n}\n", "text": "Thanks, Jeff. You were correct, I had an old jar in the tomcat lib directory that had an old version that in fact did not have the noted method. I can compile, but am getting an authn error for the same thing that again, works with mongosh. 
This is a simple as it gets, and I have tried various versions of embedding the user/pwd in the URL, using a MongoCredential object, but all return the same thing.com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='retail_user', source='Cluster0', password=<hidden>, mechanismProperties=<hidden>}Let me know if this should be a new question.", "username": "Steve_Howard" }, { "code": " MongoCredential credential = MongoCredential.createCredential(\n \"retail_user\", \"admin\",\n \"***********\".toCharArray());\n\n", "text": "Users are typically in the “admin” database, so you would want this instead:Or you can use your original uri that contained the credential embedded in it. That will default to the “admin” database as the credential source.", "username": "Jeffrey_Yemin" }, { "code": ">>> from pymongo import MongoClient\n>>> CONNECTION_STRING = \"mongodb+srv://retail-user:*************@cluster0.********.mongodb.net/Cluster0\"\n>>> client = MongoClient(CONNECTION_STRING)\n>>> dbname = client['Cluster0']\n>>> collection_name = dbname[\"inventory\"]\n>>> item_details = collection_name.find_one()\n>>> for item in item_details:\n... print(item)\n...\n_id\nTS\nSKU\nQUANTITY\nTYPE\n>>>\n", "text": "Thanks again, Jeff, I appreciate the eyes on it. That’s interesting, as I would also expect it to work, but I keep getting an authn error with both the embedded URI and the credential object. I used the starter python example, and that works fine, so something is amuck in java, but I don’t know what. I am a partner of Mongo’s with a tech talk tomorrow and wanted to use MongoDB as a sink in a demo, but the java piece has taken too many cycles. I am sure it is me and something simple I am missing, but I will just use python with Flask for a web part of the demo. This works…", "username": "Steve_Howard" }, { "code": "", "text": "Happy to assist further but I understand that you have time constraints.Good luck in the demo!", "username": "Jeffrey_Yemin" }, { "code": "", "text": "It was me. An underscore rather than a dash in the username. Doh! Thanks again!", "username": "Steve_Howard" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't connect to Atlas from java
2022-10-23T23:50:22.740Z
Can't connect to Atlas from java
2,621
null
[ "performance", "containers" ]
[ { "code": "", "text": "I did an experiment by running a python app that is writing 2000 records into mongoDB.The details of my setup of the experiment as follows:Test 1: Local PC - Python App running on Local PC with mongoDB on Local PC (baseline)\nTest 2: Docker - Python App on Linux Container with mongoDB on Linux Container with persist volume\nTest 3: Docker - Python App on Linux Container with mongoDB on Linux Container without persist volumeI’ve generated the result in chart - on average writing data on local PC is about 30 secs. Where else on Docker, it takes about 80plus secs. Hence it seems like writing on Docker is almost 3 times slower than writing on local PC itself.Should I want to improve the write speed or performance of the mongoDB in docker container, what is the recommended practice? Or should I put the mongoDB as a external volume without docker?Thank you!", "username": "Joseph_Y" }, { "code": "", "text": "This is the graph", "username": "Joseph_Y" }, { "code": "", "text": "I am starting to find the same performance issues myself. But I am using an external volume and still I see some degradation when in a container. Be very interesting to figure this one out as containers are the way things are heading.", "username": "John_Torr" } ]
MongoDB Performance in Docker
2021-10-04T01:06:42.299Z
MongoDB Performance in Docker
5,879
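Editor's note: one variable worth isolating when benchmarking this is the volume type. On Docker Desktop (macOS/Windows) bind mounts go through a file-sharing layer that can be noticeably slower than a Docker-managed named volume, while on native Linux the gap is usually smaller. A minimal comparison setup; the image tag and names are just examples.

```sh
# Named volume (data stays inside Docker-managed storage)
docker volume create mongodata
docker run -d --name mongo-namedvol -p 27017:27017 -v mongodata:/data/db mongo:5.0

# Bind mount (data lives on the host filesystem; often slower on Docker Desktop)
docker run -d --name mongo-bindmount -p 27018:27017 -v "$PWD/mongo-data":/data/db mongo:5.0
```

Running the same 2000-record insert against both containers, and against the host install, helps attribute the slowdown to the storage path rather than to MongoDB itself.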
null
[ "transactions" ]
[ { "code": "exports = async function(arg){\n var ethers = require(\"ethers\");\n \n const provider = \nnew ethers.providers.JsonRpcProvider(\"https://goerli.infura.io/XXXXXXXXXXXXXXXXXXXX\");\n const wallet = new ethers.Wallet(\"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\", provider);\n \n const gasPrice = await provider.getGasPrice(); //THIS WORKS FINE\n console.log(gasPrice);\n \n const nonce = await wallet.getTransactionCount(); //THIS THROWS THE ERROR toLowerCase() is not a function\n console.log(nonce);\n};\n", "text": "Hi everyone, I’m trying to write a simple realm function where, using the ethers.js dependency, I get back the number of transactions in my ethereum wallet.But I get an error when I try to use the function: toLowerCase() is not a functionWhy? I leave you the code hoping to be able to solve …", "username": "Filippo_Murolo" }, { "code": "\n \n try {\n return BigNumber.from(result);\n } catch (error) {\n return logger.throwError(\"bad result from backend\", Logger.errors.SERVER_ERROR, {\n method: \"getBalance\",\n params, result, error\n });\n }\n }\n \n async getTransactionCount(addressOrName: string | Promise<string>, blockTag?: BlockTag | Promise<BlockTag>): Promise<number> {\n await this.getNetwork();\n const params = await resolveProperties({\n address: this._getAddress(addressOrName),\n blockTag: this._getBlockTag(blockTag)\n });\n \n const result = await this.perform(\"getTransactionCount\", params);\n try {\n return BigNumber.from(result).toNumber();\n } catch (error) {\n \n ", "text": "Hi @Filippo_Murolo ,How do you upload the Ethar.js module ? Do you use external dependencies?It looks as the source code of the Ethar JS is Type Script based:I am not sure how it will all blend in a App Services dependency…Have you searched the web if anyone did something like that?PS. I think your best chance is to see what said on Ethersjs \"toLowerCase is not a function\" typeerror on MongoDB Realm Webhook · Issue #2238 · ethers-io/ethers.js · GitHubThanks\nPavel", "username": "Pavel_Duchovny" } ]
Realm Function error ethers.js and web3
2022-10-22T11:22:01.110Z
Realm Function error ethers.js and web3
1,623
null
[ "aggregation" ]
[ { "code": "db.create_collection(\"test\")\ncoll = db[\"test\"]\ncoll.insert_many([{\"a\":5, \"b\":8}, {\"a\":3, \"b\":3, \"c\":4}, {\"a\":5, \"b\":1}, {\"a\":2, \"b\":7, \"c\":9}])\nresult = coll.aggregate([\n {\"$addFields\": {\"d\": {\"$cond\": [{\"$eq\": [\"$c\", None]}, \"$b\", \"$c\"]}}},\n {\"$group\": {\n \"_id\": None,\n \"total\": {\"$sum\": \"$d\"},\n \"count\": {\"$sum\": 1}\n }}\n ])\nnext(cursor)\n", "text": "My goal is to compute the sum of conditional fields. Let’s say I have a collection of documents containing always a and b fields, and sometimes c. I want to aggregate the values of c when it exists, or b when c does not exist.Here is a snippet:But this does not work, the result is 13 instead of 22. It seems that only c values have been summed, i.e. the $eq condition always returned false. How do you test that a field does not exist in an aggregation? I’ve read in the documentation (Query for Null or Missing Fields — MongoDB Manual) that testing the equality with None could mean that the field was affected to None or that the field did not exist. But the given examples only concerned search, not aggregation. Why is the behavior different?", "username": "Mark_Fallous" }, { "code": "MongoDB Enterprise M040:PRIMARY> db.test.insertMany([{\"a\":5, \"b\":8}, {\"a\":3, \"b\":3, \"c\":4}, {\"a\":5, \"b\":1}, {\"a\":2, \"b\":7, \"c\":9}])\n{\n\t\"acknowledged\" : true,\n\t\"insertedIds\" : [\n\t\tObjectId(\"6352b73d645b4616bd7626b2\"),\n\t\tObjectId(\"6352b73d645b4616bd7626b3\"),\n\t\tObjectId(\"6352b73d645b4616bd7626b4\"),\n\t\tObjectId(\"6352b73d645b4616bd7626b5\")\n\t]\n}\nMongoDB Enterprise M040:PRIMARY> \nMongoDB Enterprise M040:PRIMARY> db.test.aggregate([\n... {\"$addFields\": {\"d\": { $ifNull: [ \"$c\", \"$b\" ] }}},\n... {\"$group\": {\n... \"_id\": null,\n... \"total\": {\"$sum\": \"$d\"},\n... \"count\": {\"$sum\": 1}\n... }}\n... ])\n{ \"_id\" : null, \"total\" : 22, \"count\" : 4 }\n", "text": "Hello,There is a way to check for field existence by using $ifNull expression explained here\nIt evaluates input expressions for null values, assigns a value if it’s null, in this case I assigned the value of “b” to be the value if “c” is null.I tried your example in my testing environment and it produced the expected output of 22, please check the example below:I hope you find this helpful.", "username": "Mohamed_Elshafey" }, { "code": "", "text": "Hello,Yes, I know $ifNull but in my case, I need to check the existence of another field (let’s say c) in order to sum on a or b. That’s why I’d like to keep the $cond structure (or equivalent). The example has been simplified to make it small and simple.I think that $eq should behave the same in searches or aggregations.", "username": "Mark_Fallous" } ]
Testing field existence in aggregation
2022-10-21T14:02:19.915Z
Testing field existence in aggregation
3,143
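Editor's note: if the goal really is "does field c exist", independent of whether it is null, the aggregation `$type` operator is a reliable test because it returns the string "missing" for absent fields; comparing against null with `$eq` inside an aggregation expression does not behave like the query-level null match linked in the question. A sketch against the same four sample documents:

```js
db.test.aggregate([
  {
    $addFields: {
      d: {
        $cond: [
          { $eq: [{ $type: "$c" }, "missing"] }, // true only when c is absent
          "$b",
          "$c"
        ]
      }
    }
  },
  {
    $group: { _id: null, total: { $sum: "$d" }, count: { $sum: 1 } }
  }
])
// Expected on the four sample documents: { _id: null, total: 22, count: 4 }
```

Unlike `$ifNull`, this form also distinguishes a field that is explicitly set to null from one that is missing entirely, which matters when those two cases should be handled differently.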
null
[ "replication", "containers", "backup" ]
[ { "code": "", "text": "Hi!I’m running several managed MongoDB cloud instances.I now want to run some analyze scripts on the data. However, I cannot run them on the database. Instead, I want the scripts to run on files on my local computer. (Not that much data, should be feasible computing-wise.)Hence, I need to extract all the documents in all collections of my instances to strings and save them on file system of my local computer.What is the most efficient way of doing so?I tried to download backups and run them locally via Docker Compose, but things are not that easy due to replica set setups etc. etc.\nWas just wondering if there is an easier way.Thanks!", "username": "Christian_Schwaderer" }, { "code": "", "text": "Hi @Christian_Schwaderer and welcome to the MongoDB community forum!!It would be very helpful to understand the requirement, if you would share the following details:Hence, I need to extract all the documents in all collections of my instances to strings and save them on file system of my local computer.Can you help me with an example on how would you like your documents to look like ?If transferring data between MongoDB instances is the goal, there are different methods to do the same irrespective of whether you are on a cloud instance or not.For example, using standard tooling, you can use mongodump and mongorestore to dump and restore the data from a remote deployment into a local MongoDB deployment.If you wish to store your documents in different format, mongoexport and mongoimport is the way to go for it.I tried to download backups and run them locally via Docker Compose, but things are not that easy due to replica set setups etc. etc.The MongoDB deployment topology should not matter when transferring data is the main goal. You can dump data from one deployment type (e.g. a replica set) to another deployment type (e.g. a standalone or a sharded cluster) using those standard tools. If this is not the main goal, could you elaborate with more examples?Let us know if you have any further queries.Best regards\nAasawari", "username": "Aasawari" }, { "code": "{\n \"_id\": {\n \"$oid\": \"5ea0350ed62b9e002019c53d\"\n },\n \"foo\": \"mÖeow_123\"\n}\nbla_whatever_5ea0350ed62b9e002019c53d.json{\n \"_id\": {\n \"$oid\": \"5ea0350ed62b9e002019c53d\"\n },\n \"foo\": \"mÖeow_123\"\n}\n", "text": "Hi!Thanks for your reply!No, the goal is not transferring data from one instance to another one.Let me try to explain with an example.Say I have in any collection a document like this:Now we assume that having an Ö plus a some decimals with add up to the number 6 in any string in any document in any collection of any instance would be somehow problematic for my application.\nSo, my goal would now be to find out if I that’s the case somewhere. (Acting upon it would be different problem which we can leave aside for now.)One way of solving that would be to run a script against all data. However, I do not want that.So, my approach is: I want all the documents as JSON files on the file system. E.g. 
I would have a file called bla_whatever_5ea0350ed62b9e002019c53d.json with the content:on my file system.Then I could run a script on all those local files - checking for Ös and decimals which add up to 6.I do not intend to put the data back to Mongo afterwards.", "username": "Christian_Schwaderer" }, { "code": "", "text": "Hi @Christian_SchwadererSo, my approach is: I want all the documents as JSON files on the file system.If I understand correctly, you wanted to dump all data in your collection as JSON and then run some scripts on them. If yes, then I think mongoexport is the tool that does that (Aasawari mentioned this as well in her post). It can export whole collections into JSON or CSV format.However I’m curious about why you don’t want to execute those queries on the database itself. Is there a particular reason why it’s undesirable? I mean, searching for things is what databases do Best regards\nKevin", "username": "kevinadi" } ]
From MongoDB cloud instance to strings
2022-10-19T06:59:03.371Z
From MongoDB cloud instance to strings
1,909
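Editor's note: if dumping every collection of a database to JSON files on the local machine is the goal, mongoexport can be scripted over the collection list. A rough sketch is below; the URI, database name and output directory are placeholders, and collection names are assumed to contain no spaces.

```sh
#!/bin/sh
# Hedged sketch: export every collection of one database to a local JSON file.
URI='mongodb+srv://user:<password>@cluster0.xxxxx.mongodb.net'
DB='mydatabase'
OUT='./export'
mkdir -p "$OUT"

for coll in $(mongosh "$URI/$DB" --quiet --eval 'db.getCollectionNames().join("\n")'); do
  mongoexport --uri="$URI/$DB" --collection="$coll" \
    --jsonArray --out="$OUT/$coll.json"
done
```

The resulting files are Extended JSON, so a local script can scan them for problematic strings without touching the deployment again.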
null
[ "queries", "node-js", "data-modeling" ]
[ { "code": "", "text": "Greetings,We’re building a software tracking app that users can use on their landers to monitor traffic sources and performance for their funnels.We discussed the possibility of having a collection for each user of the app or making a 1 data collection that contains the data of all users. Our main concerns are the following:If we make a collection for each user, organizing the data will be better since this collection will have the statistics for each day of the month. The user can query the statistics of a range, specific day, or year, but that way, we need to create many collections if have a large number of users.If we make 1 collection that contains the user statistics, then we will have all the users’ data in 1 collection, and in order to query something specifically for the user, we have to do heavier queries each time, and the data will not be organized per user.Accordingly, what is the best model to have this type of functionality, especially since we need data filters according to a time range for a specific user and domain, and we need to store user information, tokens, domains, etc.I was thinking that having a collection for each user is the best option we have, so we can have a user info document, and a document for the page statistics on each day of the month.Pls, let us know what do think. and what recommendations or special ideas you have for us as starters.Thanks a lot!", "username": "Hazem_Alkhatib" }, { "code": "{ domain : 1, userName : 1, date : 1}\ndb.userData.find({domain : \"xxx\" , userName : \"yyy\" , date : { $gt : ISODate(...) , $lt : isDate(...)})\n{domain : 1, userName : 1}userData_202201\nuserData_202202\n", "text": "Hi @Hazem_Alkhatib ,The problem with the collection per user is that you will probably hit a known anti-pattern called “Too many collections”:The problem is by having so many collections the MongoDB server will use more resources to manage those multiple on drive files creating more indexes and more storage consumption.If you store the data in a single collection you can use indexes to better filter your queries and eventually reducing the effort of accessing a specific user data. For example indexing:Will allow you to perform time based queries efficiently if the query is :This query will use an index and will scan only the entries as if the data was in “its own collection”If that still won’t work for you in terms of data management you can always:Let me know if that answer the question.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
The Best Structure to Have in Your Application
2022-10-23T16:02:34.116Z
The Best Structure to Have in Your Application
1,157
null
[ "python" ]
[ { "code": "", "text": "Hello everyone,\nWhen I’m trying to connect to my atlas from AWS Lambda with PyMongoconnect(**credentials)I’m getting this error:“errorMessage”: \"The “dnspython” module must be installed to use mongodb+srv:// URIs.I tried to add to the requirements.txt file pymong[srv] and got the same error, and when I changed pymongo to [srv] I got “No module pymongo”.\nI also tried to add [srv,tls] and got the same error.\nWhen I’m trying to add to the app.py:os.system(‘pip install pymongo’)I’m getting“errorMessage”: “[Errno 30] Read-only file system: ‘/home/sbx_user1051’”]I have no idea what else I can do, does anyone have any idea?", "username": "Orel_Levi" }, { "code": "srv", "text": "Hi @Orel_Levi welcome to the community!If I understand correctly, you’re having issues using/installing Pymongo from AWS Lambda. Is this correct?For using external dependency in Lambda, please have a look at external dependencies in Lambda functions. In short, you’ll need to package the script and dependencies in a zip file that you’ll need to upload to Lambda.Also, I believe you’re using Atlas (due to the mention of the srv protocol in your post). Please note that due to the nature of Lambda, you’ll need special care in connecting Atlas (or any other database service, I believe). The documentation for managing connection to MongoDB in a Lambda function may be useful for you.Best regards\nKevin", "username": "kevinadi" } ]
Connect to MongoDB atlas with AWS Lambda error
2022-10-11T08:41:25.505Z
Connect to MongoDB atlas with AWS Lambda error
1,447
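Editor's note: because Lambda's filesystem is read-only at runtime, installing packages from the handler cannot work; the dependency has to be vendored into the deployment package (or a layer) at build time. A rough sketch of the packaging steps, assuming the handler lives in app.py:

```sh
# Build the deployment package locally (or in CI), then upload the zip to Lambda.
pip install --target ./package "pymongo[srv]"   # pulls in dnspython as well
cd package
zip -r ../function.zip .
cd ..
zip -g function.zip app.py                      # add your handler on top
```

With the srv extra vendored this way, the `mongodb+srv://` URI should resolve; as the linked Atlas/Lambda guidance notes, the client should also be created outside the handler so warm invocations reuse the connection.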
null
[ "aggregation" ]
[ { "code": "getnewlist ()\n{\nssh $ssh_user@$mongo_host_source -i $ssh_key_source sudo -u someuser mongo --quiet -u someuser mongodb://$mongo_ip_source/metadata -p $db_admin_password_source -eval \\'DBQuery.shellBatchSize\\=\"$shell_batch_size\"\\;db.Unit.aggregate\\(\\[\\{\\$group\\:\\{_id\\:\\\"\\$_\"$1\"\\\"\\}\\}\\],\\{allowDiskUse\\:true\\}\\)\\' | cut -d \":\" -f 2 | tr -d [:blank:][:punct:]\n}\ngetnumberofua ()\n{\nssh $ssh_user@$mongo_host_source -i $ssh_key_source sudo -u someuser mongo --quiet -u someuser mongodb://$mongo_ip_source/metadata -p $db_admin_password_source -eval \\'db.Unit.aggregate\\(\\[\\{\\$group\\:\\{_id\\:\\\"\\$_\"$1\"\\\"\\}\\},\\{\\$facet\\:\\{totalCount\\:\\[\\{\\$count\\:\\\"count\\\"\\}\\]\\}\\}\\],\\{allowDiskUse\\:true\\}\\)\\' | cut -d \":\" -f 3 | tr -d [:blank:][:punct:]\n}\n", "text": "Dear all,I’m trying to get a list of elements from MongoDB 4.2 using a pair of Bash functions and the mongo CLI command:With that first function I’m querying the metadata database to get a list of operations to work with. That list is saved to a file for later use. $1 equals “opi”, short for “operation identifier”.With that second function I’m getting a list of “archiving units” from the metadata database, they are a subdivision of the aforementioned “opi”. $1 in that context equals “id”, i.e archiving units. At first I wanted to get a complete liste of archiving units, similar to my list of OPIs. But then I realized I didn’t care much about a list and a simple total count would be sufficient.These two functions are working great on small databases but they don’t scale up well when millions of documents (~7) are involved.What am I doing wrong ?Best regards,Samuel", "username": "Samuel_VISCAPI1" }, { "code": "", "text": "Hello @Samuel_VISCAPI1 ,Welcome to The MongoDB Community Forums! These two functions are working great on small databases but they don’t scale up well when millions of documents (~7) are involved.If the queries are fast for small amount of data but struggles with a larger amount of data, please make sure that all your queries are backed by indexes. To check further on the slowness observed, could you share below details:Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "$group$matchestimatedDocumentCount()mongocuttr", "text": "Welcome to the MongoDB community @Samuel_VISCAPI1 !These two functions are working great on small databases but they don’t scale up well when millions of documents (~7) are involved.As @Tarun_Gaur mentioned, further details would be useful in order to provide relevant advice.However, I noticed your aggregation queries are using $group to count all documents so the number of documents processed will scale with the size of the collection.A few ideas to consider:Improve your aggregation query by limiting the number of documents to process with an initial $match stage (see Improve Performance with Indexes and Document Filters).Maintain a count in your application logic when documents are updated or deleted so you don’t have to calculate this dynamically.If you need a count of all documents in a collection and speed is more important than accuracy, use the estimatedDocumentCount().If you have other functions that return larger result sets, consider implementing using an officially supported MongoDB driver instead of piping to the mongo shell. 
You will likely have better performance and can do any extra transformations within a single implementation instead of piping through other tools like cut or tr.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to speed up find and aggregate operations?
2022-09-27T13:06:50.198Z
How to speed up find and aggregate operations?
2,143
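Editor's note: as a concrete illustration of the driver suggestion above, the two shell functions can be replaced by a short script using an official driver, which avoids spawning a remote mongo shell and post-processing with cut/tr. The sketch below uses Node.js; the connection string and the field name _opi are taken loosely from the thread, so treat them as placeholders.

```js
// Hedged sketch: unique OPI list and a fast estimated count without shell piping.
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://someuser:<password>@mongo-host/metadata');
  await client.connect();
  const unit = client.db('metadata').collection('Unit');

  const opis = await unit.distinct('_opi');              // unique operation identifiers
  const totalUnits = await unit.estimatedDocumentCount(); // metadata-based, very fast

  console.log(opis.join('\n'));
  console.log(`archiving units: ${totalUnits}`);
  await client.close();
}

main().catch(console.error);
```

For very large distinct result sets (which can hit the 16 MB response limit), an indexed `$group` aggregation with `allowDiskUse`, filtered by an initial `$match`, remains the fallback, as suggested in the replies.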
null
[ "vancouver-mug" ]
[ { "code": "", "text": "Are you new to MongoDB and wondering if there’s life after SQL (green run)? Or, are you a grizzled MongoDB DBA (double black diamond) looking to share your experiences? Either way, this is an ideal meet-up for you where we’re trying to attract both newcomers and experienced developers to share ideas, case studies and meet new people in the community.Come join us for a casual conversation over drinks, appies or dinner. Space is limited so please RSVP.Event Type: In-Person\nLocation: The Keg, Yaletown (1011 Mainland St, Vancouver, BC V6B 5P9)", "username": "Mark_Clancy" }, { "code": "", "text": "Have you had these before? How many people tend to show up. Ex Oracle consultant new to Mongo DB/Realm", "username": "Ken_Turnbull" }, { "code": "", "text": "Hey there, Ken! Nice to “meet” you!As you can probably imagine, the pandemic kinda knocked the wind out of things in the MUG space, so I think we are still learning how many people tend to show up in these times. But here’s a group photo and wrap up of our social event from earlier this summer Vancouver MUG: MongoDB User Group Re-Boot Social Gathering! - #5 by webchick which had 7 of us in attendance.If you can make it, we would love to have you!", "username": "webchick" }, { "code": "", "text": "I am used to the old Oracle events…a bit larger …I will try and make it if I can. I can remember back in the 80’s/90’s when consultants dealing with relational DB’s could meet in a closet with room to spare.\nWhile I have your attention, a question about Realm. Is there anyone local who is sufficiently knowledgeable to be able to help me update an ObservedRealmObject? I posted my question in the community but haven’t heard anything back…yet.\nIt appears that am trying to deal with a frozen object that is fed by an enum", "username": "Ken_Turnbull" }, { "code": "", "text": "Hi Ken -I think we worked together back in the 90s at BC Tel Mobility! Hope all is well.Yes, the group has been together for many years but we were dormant during covid and just getting back to regular meetings (likely quarterly). We’ve done technical sessions, case studies, debriefs from MongoDB World (annual conference), etc.If you’re new to MongoDB, I think you will find the gatherings fairly useful. We’re trying to take a less formal and more social approach to the meetings going forward. Everyone is quite friendly and it will likely be a small group (<20).I can’t answer your Realm question but there’s a chance someone at the event would know.Hope to see you there.Mark", "username": "Mark_Clancy" }, { "code": "", "text": "Hi @Ken_Turnbull,I moved your Realm Swift question from the more general Working with Data category to the Realm category for better visibility.I’m not sure if there will be someone at the Vancouver event with relevant knowledge of Realm + Swift, but there should certainly be someone online who can help.Sometimes questions can take a bit longer to be noticed by the right experts, so bumping with an update after a few quiet weekdays may help resurface the discussion. I’ll look for someone who can help with your question.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I think that I will try and attend. So put me in with a pencil.", "username": "Ken_Turnbull" } ]
Vancouver MongoDB User Group (VanMUG) - Conversations - From Green Runs to Double Black Diamonds
2022-10-11T04:36:37.990Z
Vancouver MongoDB User Group (VanMUG) - Conversations - From Green Runs to Double Black Diamonds
4,928
null
[ "aggregation", "queries", "node-js", "transactions" ]
[ { "code": " let settlement = await this.profileModel.aggregate([\n {\n $match: {\n bindedSuperAdmin: name,\n },\n },\n {\n $lookup: {\n from: 'tpes',\n localField: 'nameUser',\n foreignField: 'merchantName',\n as: 'tpesBySite',\n },\n },\n {\n $lookup: {\n from: 'settlements',\n localField: 'tpesBySite.terminalId',\n foreignField: 'terminalID',\n as: 'settlementsByUser',\n pipeline: [\n {\n $sort: {\n transactionDate: -1,\n },\n },\n ],\n },\n },\n\n { $unwind: '$tpesBySite' },\n\n { $unwind: '$settlementsByUser' },\n { $unwind: '$settlementsByUser.settledTransactions' },\n {\n $project: {\n bReconcileError: '$settlementsByUser.bReconcileError',\n batchNumber: '$settlementsByUser.batchNumber',\n btransfered: '$settlementsByUser.btransfered',\n countryCode: '$settlementsByUser.countryCode',\n currencyCode: '$settlementsByUser.currencyCode',\n merchantID: '$settlementsByUser.merchantID',\n nSettlementAmount: '$settlementsByUser.nSettlementAmount',\n onlineMessageMACerror: '$settlementsByUser.onlineMessageMACerror',\n reconciliationAdviceRRN:\n '$settlementsByUser.reconciliationAdviceRRN',\n reconciliationApprovalCode:\n '$settlementsByUser.reconciliationApprovalCode',\n settledTransactions: [\n {\n appName: '$settlementsByUser.settledTransactions.appName',\n cardInputMethod:\n '$settlementsByUser.settledTransactions.cardInputMethod',\n cardPAN_PCI:\n '$settlementsByUser.settledTransactions.cardPAN_PCI',\n networkName:\n '$settlementsByUser.settledTransactions.networkName',\n onlineApprovalCode:\n '$settlementsByUser.settledTransactions.onlineApprovalCode',\n onlineRetrievalReferenceNumber:\n '$settlementsByUser.settledTransactions.onlineRetrievalReferenceNumber',\n transactionAmount:\n '$settlementsByUser.settledTransactions.transactionAmount',\n transactionDate:\n '$settlementsByUser.settledTransactions.transactionDate',\n transactionTime:\n '$settlementsByUser.settledTransactions.transactionTime',\n transactionType:\n '$settlementsByUser.settledTransactions.transactionType',\n },\n ],\n\n settlementAmount: '$settlementsByUser.settlementAmount',\n settlementDate: '$settlementsByUser.settlementDate',\n settlementTime: '$settlementsByUser.settlementTime',\n terminalID: '$settlementsByUser.terminalID',\n traceNumber: '$settlementsByUser.traceNumber',\n uniqueID: '$settlementsByUser.uniqueID',\n },\n },\n ]);\n console.log('settlement from service ', settlement.length);\n\n return settlement;\n\nsettlementsByUser: {\n _id: new ObjectId(\"62ac9732c36810454f8f3822\"),\n bReconcileError: 'true',\n batchNumber: '1',\n btransfered: 'true',\n countryCode: '788',\n currencyCode: '788',\n merchantID: '458742236657711',\n nSettlementAmount: '159800',\n onlineMessageMACerror: 'false',\n reconciliationAdviceRRN: '000104246913',\n reconciliationApprovalCode: '',\n settledTransactions: [Array],\n settlementAmount: 'C000159800',\n settlementDate: '220617',\n settlementTime: '114110',\n terminalID: '05000002',\n traceNumber: '13',\n uniqueID: '363bc047-4cff-4013-aaad-e608a59bbd4c',\n __v: 0\n }\n\n", "text": "I made a lookup with two collections existing in my db, result of this lookup I want to join with another collection in my db, So I could implement this query with success but returned docs are more than data existing in my db for example I have 266 records, After the query I got 524 I don’t know where they come .\nHINT : when I use simple find() I got correct result but with this aggregation I got more records\nHere is my function shown below :the result :I need to unwind the settledTransaction to 
read what’s inside it but the unwind made the duplicating of dataI’m really confused, it’s my first time I face this kind of error anyone could help me please ?", "username": "skander_lassoued" }, { "code": "", "text": "Hi @skander_lassoued ,Unwinding arrays is the probable reason.When you unwind an array you basically multiply all root occurrences with each element of the unwinded array forming a sort of a Cartesian join…Ty\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for replying me, could you provide me solution how can I resolve this problm ?", "username": "skander_lassoued" }, { "code": "", "text": "Hi @skander_lassoued ,This is not a problem this is how unwind works…If you need a specefic help, first explain what is the sample documents you use? What is the purpose of the query? And the expected output?Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I will update my question to explain more about my problem I wish you help me", "username": "skander_lassoued" }, { "code": "", "text": "I updated my question take a look please ", "username": "skander_lassoued" }, { "code": "", "text": "To solve this I will need source document from all the involved collection corresponding to the result you are trying to achieve", "username": "Pavel_Duchovny" }, { "code": "$unwind$unwind", "text": "I need to unwind the settledTransaction to read what’s inside itI’m curious about this statement. When you join the data from another collection it ends up as an array because you may have more than one matching value in the collection you are looking up data from. When you unwind you’re now saying you want to have a separate record for each such array element.If you just want original documents with joined data inside them, don’t do any $unwind stages and that’s exactly what you will get. If you think you need to use $unwind you’ll have to explain why you think you need it…Asya", "username": "Asya_Kamsky" } ]
Why aggregation doesn't return exact data in my DB
2022-10-21T14:12:11.093Z
Why aggregation doesn't return exact data in my DB
2,558
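A minimal PyMongo sketch of the point made in that thread: if the goal is simply to see the joined data, keep the $lookup results as embedded arrays and drop the $unwind stages, so each source document is returned exactly once. The lookup collections and field names follow the thread; the base collection name "profiles" and the match value are assumptions.

    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["test"]

    pipeline = [
        {"$match": {"bindedSuperAdmin": "some-admin"}},   # hypothetical value
        {"$lookup": {
            "from": "tpes",
            "localField": "nameUser",
            "foreignField": "merchantName",
            "as": "tpesBySite",
        }},
        {"$lookup": {
            "from": "settlements",
            "localField": "tpesBySite.terminalId",
            "foreignField": "terminalID",
            "as": "settlementsByUser",
        }},
        # No $unwind stages: the source documents keep their original count,
        # with the joined data nested inside the two arrays.
    ]
    for doc in db["profiles"].aggregate(pipeline):
        print(doc["_id"], len(doc["settlementsByUser"]))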
https://www.mongodb.com/…_2_1024x402.jpeg
[ "connecting", "mongodb-shell" ]
[ { "code": "", "text": "Can you please help me to fix the IDE ErrorI’m trying a lab exercise connect to Atlas Cluster.Below is the screenshot when I hit the URL getting an error message. I have downloaded the MongoDB shell for windows and when I try I’m able to connect with MongoDB shell command prompt , but when I try on the browser IDE I get an error.\nCan you please advise how to connect to complete this exercise.\n\nIDE_Error1524×599 57.6 KB\nCan you please help me to fix the issues.", "username": "Santosh_Salvaji" }, { "code": "", "text": "WritingI get an errordoes not help us figure out what is the error. You have to share the error message you get. A screenshot of what you are doing when you get the error might give us some clue about the context.", "username": "steevej" }, { "code": "", "text": "Thanks for your reply,As part of lab exercise M001, I’m trying to connect IDE when I try both admin or other getting same error.\nattached the screenshots.\nURI : mongo “mongodb+srv://sandbox.mv23vqs.mongodb.net/admin” --username santosh\nor\nmongo “mongodb+srv://sandbox.mv23vqs.mongodb.net/sample_airbnb” --username santoshrun test got message\n1 total, 0 passed,0\nskipped;\n[FAIL] \" Successfully \"\nconnected to the Atlas\nCluster\"\n\nIDE_Error1966×664 56.4 KB\n", "username": "Santosh_Salvaji" }, { "code": "", "text": "You have to perform step 2 of the dialog you posted.", "username": "steevej" }, { "code": "", "text": "I have already performed step-2 on the browser IDE I get an error attached screenshot for reference.\n\nIDE_Error1524×599 57.6 KB\n", "username": "Santosh_Salvaji" }, { "code": "", "text": "Like all command line tools, you have to press the ENTER key to submit the command.", "username": "steevej" }, { "code": "", "text": "Thanks lot for support it worked. able to connect and complete my assessment .", "username": "Santosh_Salvaji" } ]
Getting connectivity issue though IDE, able to connect with mongosh
2022-10-16T15:31:16.341Z
Getting connectivity issue though IDE, able to connect with mongosh
2,063
null
[ "aggregation" ]
[ { "code": "db.aggregate(\n\t[\t\n\t\t{ $set: { \"order\": { $multiply: [ { $rand: {} }, 200000 ] } } },\n\t\t{ $set: { \"order\": { $floor: \"$order\" } } },\n\t\t{ $merge: \"data\"}\n\t]\n)\n", "text": "I have this aggregate which assigns a random number to the ‘order’ field in every document within the ‘data’ collection. (The point was to shuffle the order in which data is retrieved every once in a while.)I need to upgrade this to do things a bit differently:\n1: Filter by some of the document fields to only assign the random numbers to a portion of the collection, not the entire collection.\n2: Assign every generated random number to 10 documents, not 1. It doesn’t matter which batch gets what number, but each document within a batch should get the same number.Please help me to understand how to do it.\nThank you.", "username": "notapolita" }, { "code": "db.data.aggregate([{\n $match: {\n <ANY_TYPE_CONDITION>\n }\n}, {\n $setWindowFields: {\n partitionBy: null,\n sortBy: {\n _id: 1\n },\n output: {\n documentNumber: {\n $documentNumber: {}\n }\n }\n }\n}, {\n $group: {\n _id: {\n $floor: {\n $divide: [\n '$documentNumber',\n 10\n ]\n }\n },\n result: {\n $push: '$$ROOT'\n }\n }\n}, {\n $set: {\n order: {\n $floor: {\n $multiply: [\n {\n $rand: {}\n },\n 200000\n ]\n }\n }\n }\n}, {$sort : {order : 1}}],{\"allowDiskUse\" : true})\n", "text": "Hi @notapolita ,Its not a super stright forward idea for the mongoDB sever, but the aggregation framework is so rich that you can do the following:This aggregation will basically first use a match stage to filter on any filter expression that a $match can have, this will cover your first requirementThen the next stage will actually document number each document using 5.0+ setWindowFields and then will group by devision of 10 creating a document with 10 documents grouped under “results”. Now we add the random number to each 10 groups and sort by it.There is no need to do 2 $set as it actually does a full document pass twice try to use a minimal stages as possible.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
How to modify this aggregate to update part of a collection in batches?
2022-10-22T22:58:18.581Z
How to modify this aggregate to update part of a collection in batches?
1,179
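A PyMongo sketch of the same idea, extended with a write-back so the shared random order value is stored on every document of a ten-document batch. It assumes MongoDB 5.0+ (for $setWindowFields, $documentNumber and $rand) and a hypothetical clientId filter; adjust names to the real schema.

    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["test"]

    pipeline = [
        {"$match": {"clientId": "123"}},                      # only part of the collection
        {"$setWindowFields": {                                # number the matched documents
            "partitionBy": None,
            "sortBy": {"_id": 1},
            "output": {"docNum": {"$documentNumber": {}}},
        }},
        {"$group": {                                          # batches of ~10 documents
            "_id": {"$floor": {"$divide": ["$docNum", 10]}},
            "docs": {"$push": "$$ROOT"},
        }},
        {"$set": {"order": {                                  # one random number per batch
            "$floor": {"$multiply": [{"$rand": {}}, 200000]},
        }}},
        {"$unwind": "$docs"},                                 # back to one document per row
        {"$replaceRoot": {"newRoot": {
            "$mergeObjects": ["$docs", {"order": "$order"}],
        }}},
        {"$unset": "docNum"},
        {"$merge": {"into": "data", "on": "_id", "whenMatched": "merge"}},
    ]
    db["data"].aggregate(pipeline, allowDiskUse=True)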
null
[ "mongodb-shell" ]
[ { "code": "ShardingTest()ReferenceError: ShardingTest is not definedmongosh --nodb --norc", "text": "Hi,Following the MongoDB Definitive Guide on chapter 14, I’m trying to create a Shard Cluster for testing, using the example with ShardingTest() in mongoshell, but I always get ReferenceError: ShardingTest is not defined.My previous command was just mongosh --nodb --norc.I can’t find almost no information about this topic.Any help would be much appreciated.Thank you!", "username": "Luis_Santos1" }, { "code": "ShardingTest()mongomongomongoshmlaunchmongo", "text": "Welcome to the MongoDB Community @Luis_Santos1 !The ShardingTest() command is only available in the legacy mongo shell, which has been deprecated as of the MongoDB 6.0 server release. The mongo shell test commands were created for MongoDB server test cases, so you won’t find a lot of end user documentation. Since you are using the new MongoDB Shell (mongosh), test commands are not available.I recommend trying out mlaunch (part of the mtools Python script collection) as a more straightforward tool for standing up local test clusters. Alternatively you could download an older MongoDB server package (5.0 or earlier) for your O/S which still has the legacy mongo shell.I use mtools for all my own local testing.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you very much Stennie!I have installed mtools, but I can’t seem to get it going…I’m trying the commands in CMD/Powershell but it’s not working -The term ‘mlaunch’ is not recognized as the name of a cmdlet, function, script file, or operable program.How or where do I run these commands?", "username": "Luis_Santos1" }, { "code": "pip3Location:../../../bin/mlaunch", "text": "How or where do I run these commands?Hi @Luis_Santos1,I don’t have a Windows environment handy to test with, but you can list all files for a Python package installed via pip3 with:$ pip3 show -f mtoolsIn my output on macOS the Location: value shows where all the package files are installed and executable scripts show a relative path from this Location (eg ../../../bin/mlaunch).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
ReferenceError: ShardingTest is not defined
2022-10-21T17:51:12.070Z
ReferenceError: ShardingTest is not defined
2,857
null
[ "mongoose-odm" ]
[ { "code": " {\n \"commenterName\": \"James Mustic\",\n \"user\": \"63499ae5cbba271c29375a25\",\n \"note\": \"Called the payer for follow up\",\n \"_id\": \"634cb6d055cff6bd70a19c44\",\n \"updatedAt\": \"2022-10-17T01:58:40.509Z\",\n \"createdAt\": \"2022-10-17T01:58:40.509Z\"\n }\n", "text": "Once I save a new document I expect to get the _id as first element, how can I get sort this as per my requirement? (Using Mongoose ODM)\nPlease advise. Thank you!", "username": "Faslu_Rahman_K" }, { "code": "_id_idMap", "text": "Hi @Faslu_Rahman_K ,If this is a newly created JavaScript object, I expect the order of fields will be the same as creation order unless a field name is numeric (since JavaScript has special sorting logic for object property keys). When this object is written to the MongoDB server, the _id field will be moved first during the write operation (per The _id Field documentation) but that field order change will not be reflected in your application until you refetch the document.The order of fields in JavaScript objects is generally not guaranteed unless you are using an ordered field like an array or a data structure like a Map that preserves order. Can you provide more detail on why you require a specific field ordering?Also, please also share more details on your environment:Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can anyone advise me why I am not getting the _id as the first field after save or update?
2022-10-23T00:25:32.653Z
Can anyone advise me why I am not getting the _id as the first field after save or update?
1,294
null
[]
[ { "code": "", "text": "I developed an app with strapi backend as it was easy to use, handles authentication, easy api/endpoints so http requests straightfoward. Now deploying online has become a big issue. Was using heroku but asalesforce are abandoning the learning community next month. Looking at Render as an option but so far not getting through.Having used Mongodb on a couple of projects i thought maybe i could use Strapi with Mongodb or just scrap Strapi. My app needs a simple backend, user table and projects table. No relation necessary. But i want authentication and deploy to web, no cost.Should i use Mongodb with Strapi or without? What authentication/ api endpoints are available in mongodb?", "username": "Patrick_Kavanagh" }, { "code": "", "text": "ok so I ditched bug-ridden strapi and trying to get that on another platform other than heroku was a non-starter. Tried Render - waste of time, buggy, poor support. Ditched Render.\nNow, Im battling Mongodb. Followed the plain js demo on mongodb npm package and that works nicely.But Im using Vite 3.1.0 and Svelte 3.49.0 and ES6.\nWith a clean app.svelte just showing a header, as soon as I add\n\"import {MongoClient} from ‘mongodb’\nI get error\n“Module “util” has been externalized for browser compatibility. Cannot access “util.promisify” in client close.”\nHours chasing solutions online and NOTHING worked. So tried Mongoose (am I allowed to say Mongoose on here? Well Mongoose it is.\nSo I put import mongoose from ‘mongoose’ into my clean App.svelte and IT WORKS! Having spent days now chasing errors, the simplest thing that works is ELATION!\nSuccess builds success, right? I pushed ahead with\nmongoose.connect(‘mongodb://localhost:27017/test’)\nand\nI\ncame\ncrashing\ndown\nto\nEarth\nwith another error …\n[HMR][Svelte] Unrecoverable HMR error in : next update will trigger a full reload\nUncaught TypeError: mongoose.connect is not a function.\nSince when? Works fine in vanilla js.More hours chasing solutions which dont work.Very long story short, I was trying to avoid using a local express server (vanilla js) to interface with Vite/Svelte - which works, but neither mongodb npm package nor Mongoose allowed me to reach that goal.Anyone out there in a similar situation with a solution? Even if it means an earlier version of Vite/Svelte?", "username": "Patrick_Kavanagh" } ]
Strapi v4 / mongodb
2022-10-16T04:32:09.458Z
Strapi v4 / mongodb
3,716
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "{\n_id: ObjectId(“50b59cd75bed76f46522c35a”),\nstudent_id: 1,\nclass_id: 22,\nscores: [\n{ type: ‘exam’, score: 47.38775906993299 },\n{ type: ‘quiz’, score: 9.963742963372834 },\n{ type: ‘homework’, score: 22.17993073237026 },\n{ type: ‘homework’, score: 33.7647119689925 },\n{ type: ‘homework’, score: 18.29543263797219 }\n]\n}", "username": "Neco_Darian" }, { "code": "", "text": "Can you elaborate the requirements in more details?What would be your input, and what would be your expected output?", "username": "NeNaD" }, { "code": "", "text": "@NeNaD, Im trying to find the exam score associated with the student_id = 1 and class_id = 22.with the input bellow, i get the whole document:db.grades.find( {student_id:1, class_id: 22, “scores.type”: “exam”} );I hope there is a way to get this output:{ type: ‘exam’, score: 47.38775906993299 }", "username": "Neco_Darian" }, { "code": "$match$filter$first$filter$projectdb.collection.aggregate([\n {\n \"$match\": {\n student_id: 1,\n class_id: 22,\n \"scores.type\": \"exam\"\n }\n },\n {\n \"$set\": {\n \"data\": {\n \"$first\": {\n \"$filter\": {\n \"input\": \"$scores\",\n \"cond\": {\n \"$eq\": [\n \"$$this.type\",\n \"exam\"\n ]\n }\n }\n }\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"score\": \"$data.score\",\n \"type\": \"$data.type\"\n }\n }\n])\n", "text": "You can do it like this:Working example", "username": "NeNaD" }, { "code": "", "text": "I appreciate your help @NeNaD\nHave a nice weekend!", "username": "Neco_Darian" }, { "code": "db.grades.find( {student_id:1, class_id: 22, \"scores.type\": \"exam\"} ,{ \"scores.$\" : 1 })\n", "text": "If you are only interested in the first element of the array that matches your query you could simply usesomething like", "username": "steevej" }, { "code": "", "text": "This is fantastic! Thank you for your response and solution ", "username": "Neco_Darian" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Hi! How can I get the value of the exam score only, and not the whole document
2022-10-21T14:20:05.536Z
Hi! How can I get the value of the exam score only, and not the whole document
1,099
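For completeness, the positional-projection variant from the last replies looks like this in PyMongo; the "grades" collection comes from the thread, while the database name is an assumption.

    from pymongo import MongoClient

    grades = MongoClient("mongodb://localhost:27017")["test"]["grades"]

    # "scores.$" projects only the first array element matched by the query.
    doc = grades.find_one(
        {"student_id": 1, "class_id": 22, "scores.type": "exam"},
        {"_id": 0, "scores.$": 1},
    )
    exam = doc["scores"][0] if doc else None
    print(exam)   # e.g. {'type': 'exam', 'score': 47.38775906993299}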
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "model\n.find({ deleted: false, movie_id: id })\n.populate('movie_id')\n.populate('character_id');\nmodel.aggregate([\n {\n $match: {\n movie_id: id,\n deleted: false,\n },\n },\n {\n $lookup: {\n from: 'movies',\n localField: 'movie_id',\n foreignField: '_id',\n as: 'customer_id',\n },\n },\n {\n $unwind: {\n path: '$movie_id',\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $lookup: {\n from: 'characters',\n localField: 'character_id',\n foreignField: '_id',\n as: 'character_id',\n },\n },\n {\n $unwind: {\n path: '$character_id',\n preserveNullAndEmptyArrays: true,\n },\n },\n ]);\n", "text": "Hi guys, I’m trying to create an existing query built in mongoose but using just aggregate pipeline, my current query using populate looks similar to this:Using the pipeline I built this one:Am I right with this?", "username": "PedroFumero" }, { "code": "$addFields$arrayElemAt$unwind$arrayElemAtmodel.aggregate([\n {\n $match: {\n movie_id: id,\n deleted: false,\n }\n },\n {\n $lookup: {\n from: 'movies',\n localField: 'movie_id',\n foreignField: '_id',\n as: 'customer_id',\n },\n },\n {\n $lookup: {\n from: 'characters',\n localField: 'character_id',\n foreignField: '_id',\n as: 'character_id',\n },\n },\n {\n $addFields: {\n customer_id: {\n $arrayElemAt: [\"$customer_id\", 0]\n },\n character_id: {\n $arrayElemAt: [\"$character_id\", 0]\n }\n }\n }\n]);\n", "text": "Hello @PedroFumero , Welcome to the MongoDB community developer forum,Am I right with this?Yes almost correct, is there any error or are you asking to just check if we can do it other way as well?\nI would suggest you can use $addFields and $arrayElemAt operators to get the first element from an array, instead of $unwind stage", "username": "turivishal" }, { "code": "", "text": "Thanks a lot! I have a question, in terms of performance is it better to apply $arrayElemAt instead of unwind? I’m looking to improve performance of original query, and btw using pipeline it performs better, but is even better $arrayElemAt instead of $unwind?, thanks in advance.", "username": "PedroFumero" }, { "code": "$unwnid$addFields", "text": "I have a question, in terms of performance is it better to apply $arrayElemAt instead of unwind?Not sure but not much difference if it is limited to a single element, But yes I can say number of more stages impacts the performance, here we have improved the query by eliminating the extra $unwnid stage, and we have covered both the operations in the single stage $addFields.You can check explain() function to check the performance of the query.You can also read the really good explanation of Query Optimization.", "username": "turivishal" } ]
Create same query using $lookup
2022-10-22T13:54:49.017Z
Create same query using $lookup
1,167
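A PyMongo sketch of the suggested pipeline. The "movies" and "characters" collections come from the thread; the base collection name "credits", the output field names, and the sample ObjectId are illustrative assumptions since the thread only shows a Mongoose model.

    from bson import ObjectId
    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["test"]
    movie_id = ObjectId()                       # replace with the real movie _id

    pipeline = [
        {"$match": {"movie_id": movie_id, "deleted": False}},
        {"$lookup": {"from": "movies", "localField": "movie_id",
                     "foreignField": "_id", "as": "movie"}},
        {"$lookup": {"from": "characters", "localField": "character_id",
                     "foreignField": "_id", "as": "character"}},
        # $lookup always yields arrays; take the first element in one stage
        # instead of adding two extra $unwind stages.
        {"$addFields": {
            "movie": {"$arrayElemAt": ["$movie", 0]},
            "character": {"$arrayElemAt": ["$character", 0]},
        }},
    ]
    results = list(db["credits"].aggregate(pipeline))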
null
[ "connecting" ]
[ { "code": "", "text": "This is more likely to be a problem with Fixie SOCKS but I have not found any help about this topic elsewhere. Hence I’m hoping someone here might have an answer.My NodeJS application is hosted on Heroku and I am using Fixie SOCKS add-on on Heroku that provides a static IP address in order to connect to the Atlas cluster. The static IP address provided by Fixie SOCKS is whitelisted in the Atlas cluster network settings.My NodeJS application fails to connect to Atlas from Heroku with authentication failure.The exact same application connects just fine when NodeJS is running on my laptop (yes, I have temporarily whitelisted my outbound IP address).The only difference between these two methods is that app hosted on Heroku uses Fixie SOCKS and the instance running on my laptop directly opens a connection to Atlas.In both cases, I am passing following URI to mongoose client.mongodb+srv://dbuser:[email protected]/postel?retryWrites=true&w=majority", "username": "Vineet_Dixit" }, { "code": "", "text": "Is this still happening? Do you know what kind of error you’re getting?\nIs there a way to connect from the mongo shell from the Fixie SOCKS context as a separate test?\nI wonder if there’s something about the DNS available in the Heroku concept that’s making the SRV connection string not work? maybe try using the legacy connection string which is also available in the Atlas connect UI (I think we frame it as being for earlier driver versions)", "username": "Andrew_Davidson" }, { "code": "", "text": "Terribly sorry for a late response. I decided not to use Fixie SOCKS and since did not pursue this problem. I really appreciate you checking in.", "username": "Vineet_Dixit" }, { "code": "", "text": "Any update on this one? I’m in the same boat. Fixie Socks and Fixie did not solve the problem. I’m kind hesitant go all in on Heroku private spaces, that’s kind way beyond the budget I was hoping for.", "username": "George_N" }, { "code": "", "text": "Well folks, I am also getting a similar problem (with Fixie). However, it does run fine, and then crash. The logs always say the whitelist IP connection is the issue (though I added both outbound IPs). If anyone solves it (or if Atlas solves it), I would love to hear about it.", "username": "Simeon_Florea" }, { "code": "", "text": "Yeah, same here, fixie just keep yelling at me about could not connect to any server from atlas cluster, even though I already added outbound IPs to the whitelist…", "username": "Sean_Fang" }, { "code": "", "text": "For folks who are have this problem going forward: Fixie just released a library called fixie-wrench which does port forwarding via Fixie Socks. This allows you to connect to MongoDB Atlas as if it were running on a local port, regardless of driver support. All your requests are tunneled through Fixie Socks so they come from static IP addresses.For more information: SOCKS Documentation | Fixie", "username": "fixie" } ]
Unable to connect to Atlas from Heroku using Fixie SOCKS
2021-01-16T19:31:13.816Z
Unable to connect to Atlas from Heroku using Fixie SOCKS
4,566
null
[ "python", "change-streams" ]
[ { "code": "with collection.watch() as stream:\n while stream.alive\n print(change)\nwith collection.watch() as stream\n If stream.alive == True:\n stream.close()\n", "text": "Flask api has different end points for start and stop the collection…Start endpointHow to unwatch/stop change stream with different endpoint called - Stop end pointTried as below…but change stream is not closing…Please advise…", "username": "Krishnamoorthy_Kalidoss" }, { "code": "", "text": "any update for closing xhange stream from flask api", "username": "Krishnamoorthy_Kalidoss" }, { "code": " with db.collection.watch(pipeline) as stream:\n for insert_change in stream:\n print(insert_change)\n if <some condition>:\n break\n<some condition>", "text": "Hi @Krishnamoorthy_KalidossI think you can use Pymongo’s documentation to do this (change_stream – Watch changes on a collection, database, or cluster — PyMongo 4.3.3 documentation). With a little change:You might want to tailor the <some condition> above to suit your closing criteria.Also from https://www.mongodb.com/docs/manual/changeStreams/ :While the connection to the MongoDB deployment remains open, the cursor remains open until one of the following occurs:Note that you need to have certain requirements (e.g. WiredTiger, recent MongoDB versions, etc.) to be able to open a changestream.If this is not working for you, could you post:Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Ok thanks for reply… i do have few questions before i try…As i mentioned i have two different endpoints for start and stop changestream…When i start chanestream using start end point…it keeps on watching the collection…When i want to stop watching, i wil cal stopchangestream endpoint using the code u provided…(based on condition)…So we are using two different method for start and stop, what will happen the call for start which we started earlier(as it is keep on watching)… stopchangestram endpoint will stop watching which we called from start end point?", "username": "Krishnamoorthy_Kalidoss" }, { "code": "", "text": "HiI tried with break…still it is not working…\nBecause if there is any change, cs keep on watching using for loopfor change in stream:\n.\n.I used kind of flag…if flag is stop, break( with in for loop)…else continue…As for loop keeps on execute on change…flag doesnt play role here…My requirement is i have multiple collections in mongo db…i can start any collection to watch and stop any collection to stop watching at any time…Please advise…", "username": "Krishnamoorthy_Kalidoss" }, { "code": "db.collection.watch()", "text": "So we are using two different method for start and stop, what will happen the call for start which we started earlier(as it is keep on watching)… stopchangestram endpoint will stop watching which we called from start end point?I’m not sure I follow. 
Are you saying that you have two different endpoints you can trigger:Is this accurate?Can you provide some small example code for both processes?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "startChangeStream:with collection.watch() as stream:for change in stream:\nprint(change)stopChangeStrean:with collection.watch() as stream:for change in stream:\nbreaki tried with separate methods for start and stop…i have also tried with passing flag to start change stream.\nboth didnt help me.\ni have multiple collections… i will start watching any collection and stop watching any collection at any time …Please advise…", "username": "Krishnamoorthy_Kalidoss" }, { "code": "startChangeStreamstopChangeStreamstopChangeStreamwith collection.watch() as stream:\n for change in stream:\n break\nwith collection.watch() as stream:\n print(change)\n if <some external condition>:\n break\n<some external condition>", "text": "If startChangeStream and stopChangeStream are different scripts (or even different functions within the same script), then I don’t think this approach will work, since they are basically two different change streams, so one cannot affect the other.Also in the stopChangeStream script/function:wouldn’t this open a changestream and immediately close it?I believe what you need is to combine them into one function, something like what I posted earlier:The <some external condition> above could be anything you need, e.g. a flag that was set somewhere, some timer, etc.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Yeah, i tried calling stop from same method using flag …as below:with collection.watch() as stream:\nfor change in stream:\nif (status == ‘stop’)\nbreakBut once change stream started watching changes, it keep on watching. status variable doesn’t set the value as ’ stop’ externallyPlease advise", "username": "Krishnamoorthy_Kalidoss" }, { "code": "statusforstatus", "text": "status variable doesn’t set the value as ’ stop’ externallyI think this is the main cause of the issue. I believe you’ll need to ensure that this “stop” flag gets passed on to the function. However I think this is not really a MongoDB issue, rather, it’s more of a coding issue Having said that, I would check:If these does not solve the issue, you might want to also ask on specific coding-oriented sites such as StackOverflow to see if there are any issues with the Python code.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi,Instead of checking stop status within for loop do we have any better way to stop the change stream…Because, irrespective of changes whether we have changes or not in stream i want to stop the change stream for the collection mentioned…Please advise…", "username": "Krishnamoorthy_Kalidoss" }, { "code": "", "text": "Instead of checking stop status within for loop do we have any better way to stop the change stream…Not that I know of, since that is how the code works. 
Currently, the watch() method works like an infinite loop and it will keep processing each change event as it arrives.i want to stop the change stream for the collection mentioned…At this point, killing the script is the only thing I can think of.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin,Thanks for confirming my observation…Killing the script also will not help us…because change stream has to watch other collections if i stop watching one collection…so if i kill the script none of the collections will be watched…So i think as per your earlier mail, we dont have option to achieve this…", "username": "Krishnamoorthy_Kalidoss" }, { "code": "stop = False\ndef watch_test():\n with collection.watch() as stream:\n for change in stream:\n if stop == True:\n break\n print(change)\n\nw = threading.Thread(target=watch_test)\nw.start()\nstopTrue", "text": "Any work around for this? Or we dont have any other option to achieve this…I can think of two immediate ones off the top of my head:One: is to run a separate script per change stream, so you can just kill the one that you need. This is probably the most straightforward, but would require a bit of management.Two: is to run the change stream watchers in threads. Perhaps something like this:You can set the stop variable to True and it will be visible inside the threaded function due to the scope, so it will stop the thread’s execution. See threading for more information, and please note that I wrote that untested code in a couple of minutes so it’s not the best.But again this is Python coding, of which we’re not really experts in Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Ok, let me check the first option…And in second approach, when the var stop is false and started watching collections using for loop, it wont come out of for loop till the changes are in ( as it is infinite) and dont have the scope to assiign stop var to true when we are supposed to stop…", "username": "Krishnamoorthy_Kalidoss" }, { "code": "", "text": "Hi @Krishnamoorthy_KalidossAnd in second approach, when the var stop is false and started watching collections using for loop, it wont come out of for loop till the changes are in ( as it is infinite) and dont have the scope to assiign stop var to true when we are supposed to stop…Yes I wrote that in a couple of minutes so it won’t be perfect and definitely not production quality, as it was intended to serve as an illustration to potential solutions. I was hoping to put forth some ideas for you to try on and modify to suit your workflow.As we have agreed that this is a Python coding question and not a MongoDB question, I encourage you to ask for more direction and better code examples in programming sites such as StackOverflow.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "", "username": "kevinadi" } ]
How to unwatch/close change stream
2022-09-25T09:03:52.184Z
How to unwatch/close change stream
5,691
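One workable pattern that was not spelled out in that thread, sketched here with PyMongo 3.8+ (ChangeStream.try_next) against a replica set: run each watcher in its own thread and poll with try_next() so a stop flag is honoured even when no changes arrive. The collection name is an assumption; one Event/Thread pair per collection lets watchers be stopped independently.

    import threading
    from pymongo import MongoClient

    coll = MongoClient("mongodb://localhost:27017")["test"]["orders"]
    stop_event = threading.Event()

    def watch_collection():
        with coll.watch() as stream:
            # try_next() waits briefly and returns None when nothing happened,
            # so the loop re-checks the stop flag instead of blocking forever.
            while stream.alive and not stop_event.is_set():
                change = stream.try_next()
                if change is not None:
                    print(change)

    watcher = threading.Thread(target=watch_collection, daemon=True)
    watcher.start()

    # ... later, e.g. from a "stop" endpoint:
    stop_event.set()
    watcher.join()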
null
[ "ops-manager", "data-recovery" ]
[ { "code": "", "text": "I searched as much as possible but found lesser and lesser.Problem statement:some update actions has done on mogo db or an specific collection.\nwe have oplog which contains any update.\nwhat can I do without exiting from the environment through a somple process, preferable a single query update revert the effect of faulty update.so in breif how I can revert effect of some updates through mongo shell and have data avaiable again as long as it does not contradict later changes made on collections", "username": "Kamran" }, { "code": "", "text": "Welcome to the MongoDB Community @Kamran !If you are connecting directly to your MongoDB deployment there is no generic “undo” for database updates. This is true for all databases as far as I’m aware: once changes have been committed they are not expected to be reversible, so you would need to have appropriate backup policies and data access controls for your use case.The typical approach would be restoring from a backup or from another copy of data before modification (for example, using a Delayed Replica Set Member). The complexity of recovering will depend on the operation(s) you are trying to revert.If you are directly making changes that may have unexpected outcomes, I would always test first in a representative development or staging environment so you reduce the risk of having to recover from an update gone awry in your production environment.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Revert effects of faulty update through and only through mongo shell
2022-10-20T14:44:51.252Z
Revert effects of faulty update through and only through mongo shell
5,403
null
[ "aggregation", "queries", "python" ]
[ { "code": "{\n \"companyId\": {\"5ec1c4c7ea40dd3912ff206d\"},\n \"memberId\": {\"61e538180d543336dfa52a94\"},\n \"isActive\": true,\n \"name\": \"SISWA1\",\n \"profileImage\": \"string\",\n \"email\": \"[email protected]\",\n \"phone\": \"088123123123\",\n \"mainClassId\": null,\n \"classId\": null,\n \"datas\": [\n {\n \"id\": {\"6350ce45605a1c2f35e4c607\"},\n \"classId\": {\"62f19d68c149abd43526d1a3\"},\n \"lessonId\": {\"62f0af32cb716c443625c3d0\"},\n \"year\": \"2002\",\n \"report\": [\n {\n \"activityId\": {\"6350f1313f586971dfd1effd\"},\n \"meet\": 3,\n \"isPresent\": true,\n \"scores\": [\n {\n \"key\": \"TUGAS\",\n \"value\": 50\n }\n ]\n },\n {\n \"activityId\": {\"6350f13aaa8f2d84071fc3cd\"},\n \"meet\": 4,\n \"isPresent\": true,\n \"scores\": [\n {\n \"key\": \"TUGAS\",\n \"value\": 50\n }\n ]\n }\n ]\n }\n ]\n}\n student_activity_op = await USER_STUDENT.update_one(\n {\"datas.report.activityId\": ObjectId(\"6350f13aaa8f2d84071fc3cd\")},\n {\n \"$set\": {\n \"datas.$[outer].report.$[inner].meet\": 12\n }\n },\n {\n \"$arrayFilters\": [\n {\"outer.id\": ObjectId(\"6350ce45605a1c2f35e4c607\")},\n {\"inner.activityId\": ObjectId(\"6350f13aaa8f2d84071fc3cd\")}\n ]\n },\n )\n", "text": "hi guys i try update object in 3 rd level of nested array\nthis is the sample documentI’ve tried this way but there’s no change.\ni want to update field meet in array report with id=“6350f13aaa8f2d84071fc3cd”, from 2 became 12if anyone can provide a solution i will appreciate", "username": "Nuur_zakki_Zamani" }, { "code": "", "text": "Hi @Nuur_zakki_Zamani ,Are you certain that the values are actually objectids and not just strings?In the print it looks like a string.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny\nthat’s just an example sir, originally every field in the document is objectid. I just don’t know how to update a meet field in the report array. can you help me sir?", "username": "Nuur_zakki_Zamani" }, { "code": "{\n \"$arrayFilters\": [\n...\n{\n \"arrayFilters\": [\n {\"outer.id\": ObjectId(\"6350ce45605a1c2f35e4c607\")},\n {\"inner.activityId\": ObjectId(\"6350f13aaa8f2d84071fc3cd\")}\n ]\n }\n{ _id: ObjectId(\"6352b213f60e6c14eade7dc5\"),\n companyId: ObjectId(\"5ec1c4c7ea40dd3912ff206d\"),\n memberId: ObjectId(\"61e538180d543336dfa52a94\"),\n isActive: true,\n name: 'SISWA1',\n profileImage: 'string',\n email: '[email protected]',\n phone: '088123123123',\n mainClassId: null,\n classId: null,\n datas: \n [ { id: ObjectId(\"6350ce45605a1c2f35e4c607\"),\n classId: ObjectId(\"62f19d68c149abd43526d1a3\"),\n lessonId: ObjectId(\"62f0af32cb716c443625c3d0\"),\n year: '2002',\n report: \n [ { activityId: ObjectId(\"6350f1313f586971dfd1effd\"),\n meet: 3,\n isPresent: true,\n scores: [ { key: 'TUGAS', value: 50 } ] },\n { activityId: ObjectId(\"6350f13aaa8f2d84071fc3cd\"),\n meet: 12,\n isPresent: true,\n scores: [ { key: 'TUGAS', value: 50 } ] } ] } ] }\n", "text": "Hi @Nuur_zakki_Zamani ,Its a silly mistake, hope you going to laugh You have an unecessary dollar in your “$arrayFilters” expression:It should beWhen I changed that everything worked for me You probably got an error that you missed.The object changed for me as expected:Thank\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": " yeayyyyy alhamdulillah.\nfinaly after trying for 3 days, i can sleep well.\nthank you so much mr @Pavel_Duchovny. ", "username": "Nuur_zakki_Zamani" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
Update object in 3 rd level of nested array
2022-10-21T02:21:01.501Z
Update object in 3 rd level of nested array
2,264
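For the PyMongo/Motor drivers specifically, the filters go in the array_filters keyword argument of update_one rather than in a separate options document. A sketch using the ObjectIds from the thread; the collection name "user_students" and database name are assumptions.

    from bson import ObjectId
    from pymongo import MongoClient

    students = MongoClient("mongodb://localhost:27017")["test"]["user_students"]

    result = students.update_one(
        {"datas.report.activityId": ObjectId("6350f13aaa8f2d84071fc3cd")},
        {"$set": {"datas.$[outer].report.$[inner].meet": 12}},
        array_filters=[                         # keyword argument, no "$" prefix
            {"outer.id": ObjectId("6350ce45605a1c2f35e4c607")},
            {"inner.activityId": ObjectId("6350f13aaa8f2d84071fc3cd")},
        ],
    )
    print(result.modified_count)                # 1 when the nested element matched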
null
[ "indexes" ]
[ { "code": "", "text": "", "username": "Big_Cat_Public_Safety_Act" }, { "code": "", "text": "Hi @Big_Cat_Public_Safety_Act,General limits are documented in MongoDB Limits and Thresholds and there also are some specific considerations in the documentation for each index type. Some limits will vary depending on your MongoDB server version, so make sure you are reviewing the correct version documentation (or are on the latest release series of MongoDB).Are there any complex field types, such as arrays or arrays of objects, that cannot be indexed in a MongoDB collection?It depends what sort of index you are creating. For example: Compound Multikey Indexes can only include at most one indexed field whose value is an array.A more common issue is not whether an index can be created, but whether an index can efficiently support the type of query you would like to make. For example, Regular Expressions for pattern matching in strings cannot use indexes efficiently for case-insensitive matches or partial matches that are not anchored to the beginning of key values. However, there are alternatives to regular expressions that are better suited to advanced text search use cases.Data that is highly nested can also be difficult to query. I recommend reviewing the Building with Patterns series on schema design patterns as well as A Summary of Schema Design Anti-Patterns for some ideas on common approaches for effective data modeling and indexing.If so, what are all field types that cannot be indexed?All field types can be added to indexes, but as noted above the more important aspect is whether an index and schema design approach will be effective in supporting your common queries.With MongoDB’s flexible schema, an index also isn’t associated with a specific data type like a date or a string: documents in the same collection can use different types for a given field by default. If you want to have more rigid control over the allowed types and value ranges for your data, there is a Schema Validation feature that allows you to create validation rules for a collection.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Are there any complex field types, such as arrays or arrays of objects, that cannot be indexed?
2022-10-21T22:21:24.422Z
Are there any complex field types, such as arrays or arrays of objects, that cannot be indexed?
1,432
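As a small illustration of the compound multikey limit mentioned above, a hedged PyMongo sketch with hypothetical collection and field names: the index below is valid because only one of the two indexed fields ("tags") holds array values.

    from pymongo import MongoClient, ASCENDING

    listings = MongoClient("mongodb://localhost:27017")["test"]["listings"]

    # Compound multikey index: at most one indexed field may contain arrays.
    listings.create_index([("country", ASCENDING), ("tags", ASCENDING)])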
null
[ "indexes" ]
[ { "code": "{\n countries: [\"US\", \"CA\"]\n}\n\nand\n\n{\n countries: [\"CA\", \"US\"]\n}\n", "text": "For the purpose of indexes, do I need to sort the array before putting into the database?", "username": "Big_Cat_Public_Safety_Act" }, { "code": "", "text": "Absolutely not. But you could.", "username": "steevej" }, { "code": "$slice$slice$sort$push", "text": "Hi @Big_Cat_Public_Safety_Act,To be clear, sorting array values will not have any benefit for indexing.The only top of mind use cases I can think of for sorting array values would be if you always want to display the values in a certain order or use something like $slice to maintain a capped array size. For example, Use $slice and $sort with $push to save the highest three scores in an array.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Do I need to sort an array before adding to the database?
2022-10-21T22:39:19.661Z
Do I need to sort an array before adding to the database?
1,351
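A short PyMongo sketch of the $push + $sort + $slice pattern referenced above (keeping only the three highest scores); the "students" collection and the scalar scores array are assumptions.

    from pymongo import MongoClient

    students = MongoClient("mongodb://localhost:27017")["test"]["students"]

    students.update_one(
        {"_id": 1},
        {"$push": {"scores": {
            "$each": [72],      # score(s) being added
            "$sort": -1,        # keep the array sorted high-to-low
            "$slice": 3,        # retain only the top three
        }}},
    )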
null
[]
[ { "code": "", "text": "Hey everyone! I recently started my corporate career by joining Google as a Software Engineer.My friends and myself have decided to use MongoDB for a side project. What we liked about Mongo is its rich documentation (we are impressed!), availability across cloud platforms (Atlas as well as Railway.app seems to have an amazing free tier to start with) and rich community.Looking forward to interact with (and learn from) the community.", "username": "Yashraj_Kakkad" }, { "code": "", "text": "Congrats on the new job @Yashraj_Kakkad! We work hard to keep MongoDB documentation fresh and appreciate the feedback.", "username": "Zuhair_Ahmed" }, { "code": "", "text": "Hi, @Yashraj_Kakkad\nThere is many mongodb experts in here. I am one that.\nI think you are starter, I will help you activitly. Don’t hesitate to ask. I am always ready.\nI wish you to success.", "username": "Shuzhi_Sam" } ]
Hello hello MongoDB!
2022-10-15T17:21:04.375Z
Hello hello MongoDB!
2,193
null
[ "aggregation", "python" ]
[ { "code": "import pymongo\n\ndb_client = pymongo.MongoClient(\"mongodb://localhost:27017/\")\n\ncurrent_db = db_client[\"pyloungedb\"] \n\ncollection2 = current_db[\"Abonement bought\"]\n\nabonement_bought = [\n {'ID': 1000,'client_id': 2 , 'price': 10,'client': 'Petras Petraitis','start_date': '2022.10.14','end_date': '2022.10.14'},\n {'ID': 1000, 'client_id': 1, 'price': 30,'client': 'Jonas Jonaitis','start_date': '2021.09.14','end_date': '2022.10.14'},\n {'ID': 1002, 'client_id': 3, 'price': 300,'client': 'Monika Mokaite','start_date': '2020.10.14','end_date': '2021.10.14'}\n]\n\nins_result = collection2.insert_many(abonement_bought) \nprint(ins_result.inserted_ids)\n\nagg_result= collection2.aggregate( \n [{ \n \"$group\" :{\n \"_id\" : \"$ID\",\n \"Total\" : {\"$sum\" : 1},\n \"revenue\":{\"$sum\" : \"$price\"}}}\n \n \n ]) \nfor i in agg_result: \n print(i)\ndef mapReduce():\n mapf = \"function(){emit(this.$ID, this.$price)}\"\n reducef = \"function(k, v) { return Array.sum(v) }\" \n result = current_db.command(\n 'mapReduce', \n 'collection2', \n map=mapf, \n reduce=reducef, \n out={'inline': 1})\n\n print(result)\n\nmapReduce()\n", "text": "I need help with writing function with map-reduce. I have such collection and aggregation function with it:I have to rewrite this with map-reduce function. Note: var doesn’t work, so I have a code, which should work, just I get empty result after printing itOutput: {‘results’: , ‘ok’: 1.0}Using JupyterLab Python", "username": "Laura_7777" }, { "code": " mapf = \"function(){emit(this.$ID, this.$price)}\"mapreduce$variable", "text": "Welcome to the MongoDB Community @Laura_7777 !Please note that Map-Reduce has been deprecated as of MongoDB 5.0, and an Aggregation Pipeline is the recommended approach for performance and usability.I assume you have academic interest in converting your aggregation pipeline to a legacy equivalent. mapf = \"function(){emit(this.$ID, this.$price)}\"The map and reduce functions are implemented in JavaScript, so the aggregation syntax of $variable will not work.You need to reference field paths directly, similar to:mapf = “function(){emit(this.ID, this.price)}”The functions you are creating are essentially the same as Map-Reduce Examples in the MongoDB documentation if you would like a working reference to start from.Regards,\nStennie", "username": "Stennie_X" } ]
How to rewrite aggregation function with map-reduce in python
2022-10-21T20:03:31.103Z
How to rewrite aggregation function with map-reduce in python
1,214
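Applying the correction above, a sketch of the full call: reference the fields as this.ID / this.price (no "$"), and pass the actual collection name ("Abonement bought") to the command rather than the Python variable name, which is the likely reason for the empty result. Database and collection names come from the thread.

    from pymongo import MongoClient

    current_db = MongoClient("mongodb://localhost:27017")["pyloungedb"]

    # JavaScript functions for the deprecated mapReduce command: plain
    # this.<field> access, no aggregation-style "$" prefixes.
    mapf = "function() { emit(this.ID, this.price); }"
    reducef = "function(key, values) { return Array.sum(values); }"

    result = current_db.command(
        "mapReduce",
        "Abonement bought",        # the real collection name, not "collection2"
        map=mapf,
        reduce=reducef,
        out={"inline": 1},
    )
    print(result["results"])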
null
[ "aggregation" ]
[ { "code": "", "text": "We are generating a stacked bar charts with custom Fields, the values show fine, but the order of the bars keep switching.This chart is used to represent a funnel sequence of onboarding steps so we want to maintain the speicif order of each bar, but tried to use a sort after match in the top charts aggregation pipeline but get an error message.How is the correct method to specify the order of the bar charts in a stacked bar chart?", "username": "Rocky_Mehta" }, { "code": "", "text": "For a chart with string values on the category axis, the order is determined by your choice in the “Sort” dropdown:\nBy default, the bars are sorted in decreasing order of the total values. If you change the setting to Category, you can sort in alphabetical order (increasing or decreasing).There isn’t any way to short by anything other than value or category. If you are manually sorting in a pipeline, you may need to use labels like “1-Foo”, “2-Bar” or similar to force them to be in alphabetical order.Tom", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't maintain the order of the stacked bars to match the order they are in the aggregation section
2022-10-21T16:20:35.378Z
Can't maintain the order of the stacked bars to match the order they are in the aggregation section
1,791
null
[ "queries", "replication" ]
[ { "code": "", "text": "I have a large collection catalog, with a count of 200 million rows.\nI want to schedule and replicate into a new collection catalog_filter_replicated with a filter like clientId:“123”.\nEvery time when schedule script executes, new rows should be inserted and the old one should update the target one.The performance factor is important here. Please suggest the best way to replicate (insert / update) a large collection into a new one with Filters.\nThanks", "username": "Adeel_Nazir" }, { "code": "", "text": "Would that replica of the original collection be read-only? Depending of the use-cases involved with the replica there is different version.You could have a normal view\na materialized viewMongoDB on-demand materialized viewYou could also create a collection run an aggregation and use $out or $merge. You wrotenew rows should be insertedandold one should update the target onebut what about the deleted one? A $merge won’t take care of the deleted documents, but a drop collection and $out will recreate a fresh replica.Another way would be to use Change Stream to replicates the update of catalog for clientId:123.It all depends of the use-cases of the replica.", "username": "steevej" } ]
How to replicate (insert / update) a large collection into a new one with Filters
2022-10-20T13:51:20.687Z
How to replicate (insert / update) a large collection into a new one with Filters
1,169
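A minimal PyMongo sketch of the $merge-based, on-demand materialized view discussed above; the database name is an assumption, and an index on clientId on the source collection would help the $match. As noted in the reply, $merge will not remove documents that were deleted from the source.

    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["test"]

    pipeline = [
        {"$match": {"clientId": "123"}},
        {"$merge": {
            "into": "catalog_filter_replicated",
            "on": "_id",
            "whenMatched": "replace",     # refresh documents that already exist
            "whenNotMatched": "insert",   # add newly matching documents
        }},
    ]
    db["catalog"].aggregate(pipeline)     # run from the scheduled job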
null
[ "compass", "mongodb-shell", "database-tools" ]
[ { "code": "", "text": "I have a database called tutorials with a field called url which i accidentally overwrote for all 500 entries.I have an older backup luckily, but I cant figure out how to use that to fix my situation. I cannot just drop the new db and import the old one because there are other fields that have been updated since.I just want to import the url field from the old one.I am using a hosted mongodb server and connecting / managing it with compass.I exported the tutorials data from my old backup, with just the ID field and the URL field.I tried to import this old data with the “add data” button, but it throws an error saying the ID is a duplicate. But i dont want to insert these documents, i want to use them to update. I don’t see an option for this.I see people saying to use mongoimport, but that doesn’t seem to exist within the “mongosh” command line at the bottom of compass.How do?", "username": "Jim_Bridger" }, { "code": "", "text": "You are in the right direction.The command mongoimport is not part of mongosh, you start it from the windows cmd or PowerShell terminal.With Compass, use Add Data but into a temporary collection.Then use the aggregation framework to $merge the temporary collection into the original collection.", "username": "steevej" }, { "code": "", "text": "i dont seem to have it installed, was hoping to not have to install additional software, or have to figure out how to log in via command line (since it’s already working in compass).is there really no way to do it from within compass?", "username": "Jim_Bridger" }, { "code": "", "text": "Like I wroteWith Compass, use Add Data but into a temporary collection.You can usethe aggregation frameworkwithin Compass, or Compass’ integrated mongosh", "username": "steevej" } ]
Help i overwrote a ton of data
2022-10-21T13:09:18.337Z
Help i overwrote a ton of data
1,440
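A sketch of the temporary-collection + $merge step described above, assuming the old export (just _id and url) was loaded into a hypothetical collection named tutorials_backup via Compass's Add Data; the database name is also an assumption.

    from pymongo import MongoClient

    db = MongoClient("mongodb://localhost:27017")["test"]

    pipeline = [
        {"$project": {"url": 1}},             # carry only _id and url across
        {"$merge": {
            "into": "tutorials",
            "on": "_id",
            "whenMatched": "merge",           # restore url, keep the newer fields
            "whenNotMatched": "discard",      # don't resurrect stale documents
        }},
    ]
    db["tutorials_backup"].aggregate(pipeline)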