Columns: image_url (string or null), tags (list), discussion (list of posts), title (string), created_at (string), fancy_title (string), views (int64)
null
[ "queries", "crud" ]
[ { "code": "db.students.insertOne(\n { \"_id\" : 1,\n \"key\":\"first\",\n \"grades\" : [\n { type: \"quiz\", questions: [{name:\"1\",value:\"1\"}, {name:\"2\",value:\"2\"}],images:[\"1\",\"2\"] },\n { type: \"quiz\" },\n { type: \"hw\", questions: [ {name:\"5\",value:\"5\"}, {name:\"6\"} ] },\n { type: \"exam\", questions: [ {name:\"7\",value:\"7\"}, {name:\"8\",value:\"8\"},{name:\"1\",value:\"1\"} ],images:[\"1\",\"2\"] },\n ]\n }\n)\n{“name”:“1”}db.getCollection(\"students\").updateMany({\"grades.questions.name\":\"1\"},{$set:{\"grades.$[].questions.$[x].name\":\"10\"}},\n{ arrayFilters: [ {\"x.name\": \"1\" } ] }\n,{upsert:false})\n“grades.questions.name”:“1”", "text": "I’m new to mongo and have already searched through most of this forum to find the solution to my current problem, but I haven’t been able to, so I apologize in advance if my question appears repetitive or stupid.I have provided one real-world example based on the mongo db documents and examples:In my search, I am looking for all objects containing {“name”:“1”} and updating just the name to \"10\nHere is the query:But it complains that ‘grades.1.questions’ must exist in the document in order to apply array updates. This is why I included the first condition “grades.questions.name”:“1”.any help will be appreciated!", "username": "saeed_afroozi" }, { "code": "{ type: \"quiz\" },", "text": "{ type: \"quiz\" },your query works, but you have this incorrect data. it has no “questions.name” thus the problem happens.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "That’s true, but I did it intentionally as this is the current state of our database; there are some records that don’t even have the second array!\nDo you think it is possible to tailor the query to skip such situations without complaining?", "username": "saeed_afroozi" }, { "code": "$exists", "text": "there is $exists operator to check if a field is present. using it in matching query may help (I say “may” because you work on an array (questions) in an array (grades) in an array (document itself), dive carefully): $exists — MongoDB Manualthere are also other null check operators: Query for Null or Missing Fields — MongoDB Manualyou may need to change your schema if it proves difficult to write a query. for example, splitting each question into their own field and embedding grade type in them instead of embedding under questions under grades. if then, you would loop only on questions but add an extra type check. pros and cons.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "another thing to do might be (pre)processing all documents and insert an empty array if there is none, because if something does not exists you cannot tell if it is array or not, but you can tell “oh, this array is empty”. this kind of operations are required when the shape of documents, the schema, is important in queries.you can do this per query, depending on the type of query, or patch the whole database to reflect this. 
pros/cons: query time versus disk size.", "username": "Yilmaz_Durmaz" }, { "code": "db.getCollection(\"students\").updateMany({\"grades.questions.name\":\"1\"}\n,{$set:{\"grades.$[y].questions.$[x].name\":\"10\"}},\n{ arrayFilters: [ {\"y.questions\":{$exists:true}},{\"x.name\": \"1\" } ] }\n)\ndb.getCollection(\"students\").updateMany({\"grades\": { $exists: true, $ne: [],$elemMatch: { \"questions\": {$exists: true } \n}}},{$set:{\"grades.$[].questions.$[x].name\":\"10\"}},{ arrayFilters: [ {\"x.name\": \"1\" } ] })\n", "text": "Thank for your help!\nThis is how I checked the existence, and it works wellAs you can see, the existence check was placed on the array filter, but if I place it on the first part, it does not work!As I understand it, the first part selects the right record and then applies an update or any other operator based on _id or any other condition, but this example does not work. am I right?", "username": "saeed_afroozi" }, { "code": "", "text": "it is nice you have a working solution.for the second one, the first part of the query is about selecting a matching document, and I am guessing when you check for the existence, this document will match and be used because the condition is satisfied by other non-empty elements in the array. but when it comes to processing array elements in this document, you will hit the empty-field element.it is like requesting a bunch of numbers from a user, denying characters other than numbers, but your function does division and hits the divide-by-zero problem.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "You might be already aware of this page, but check it out if not:", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thank you so much for your quick response.\nI totally understand now what you said!", "username": "saeed_afroozi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update the property of an object in a nested array
2022-09-05T13:09:13.521Z
Update the property of an object in a nested array
6,153
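The accepted fix from the thread above, condensed into a self-contained mongosh sketch that can be pasted as-is (document shape follows the thread's example): the outer array filter is guarded with $exists so grade entries without a questions field are skipped.

```js
db.students.insertOne({
  _id: 1,
  grades: [
    { type: "quiz", questions: [{ name: "1", value: "1" }] },
    { type: "quiz" }, // no "questions" field - this entry previously caused the error
    { type: "exam", questions: [{ name: "1", value: "1" }] }
  ]
});

db.students.updateMany(
  { "grades.questions.name": "1" },
  { $set: { "grades.$[y].questions.$[x].name": "10" } },
  // y only matches grade entries that actually have a questions array
  { arrayFilters: [{ "y.questions": { $exists: true } }, { "x.name": "1" }] }
);
```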
null
[]
[ { "code": "", "text": "As no one is responding in the online chat I thought that i would open this up…I have database triggers that are firing off for each inserted document but I am looking to have it fire off once per entry. So for example if the initiating call generates 50 new documents I would just like it to update after all insertions. I have tried to use a bulk insert (https://www.mongodb.com/docs/manual/reference/method/Bulk.insert/) but I can see in the logs that it is still firing for each document.Does anyone have any advice as how to handle?", "username": "Timothy_Payne" }, { "code": "_id", "text": "Welcome to the MongoDB Community @Timothy_Payne !Atlas Database Triggers are based on change streams of document changes rather than command execution (for example, bulk insert). Processing per document also helps functions complete within Atlas Function execution limits including 120 seconds of runtime.However, you could invoke processing after a specific batch job finishes by calling a function via a Custom HTTPS Endpoint. I assume you would need some criteria to be able to identify the batch of documents most recently inserted, such as a creation date or batch ID in your documents (or perhaps the most recent _id before the bulk insert).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks @Stennie_X ! This might then be better hooked-up within the app as it would still have access to the inserted documents. It does seem strange that there isn’t a one action trigger after a bulk insert/update but we’re all working towards something. Thanks again", "username": "Timothy_Payne" } ]
Trigger is firing for each document rather than as a bulk action
2022-08-31T12:05:18.794Z
Trigger is firing for each document rather than as a bulk action
1,937
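A minimal mongosh sketch of the batch-identification idea Stennie suggests; the batchId field and orders collection are illustrative assumptions, not part of the thread's code.

```js
const batchId = new ObjectId(); // one id shared by the whole bulk insert

db.orders.insertMany([
  { item: "a", batchId: batchId },
  { item: "b", batchId: batchId }
]);

// A function invoked once after the batch finishes (e.g. via a custom HTTPS
// endpoint) can then process every document of that batch in a single pass:
db.orders.find({ batchId: batchId }).forEach(doc => {
  // ...one post-processing step for the whole batch
});
```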
null
[ "compass", "indexes" ]
[ { "code": "", "text": "So I have a DB with some stuff in, When I use MongoDB Compass and click on create index and choose a numeric field of the objects in the list and type ASC then after I press create index to add it mongodb will freeze and when I open a second version of mongodb it will show the index added with 4.1kb size (empty). This index will also go away on restart and mongodb isn’t consuming any resources in taskmanager.I read something about configuring a keyfile so I tried that but this just causes me to be completely locked out of connecting to mongodb with mongodb compass. I’m on windows btw.", "username": "Adrian_Belmans" }, { "code": "", "text": "Turns out I just had to wait a few hours", "username": "Adrian_Belmans" } ]
MongoDB Compass doesn't add index
2022-09-07T07:40:25.983Z
MongoDB Compass doesn’t add index
1,760
null
[ "aggregation" ]
[ { "code": "", "text": "Hi, In my document I have an array field containing objects. I would like to use “$addfield” to add a new field to the document by checking for the value contained in the object array in the document. For example:\n{ oid:1,\nck:[{tag:“A”,\ndtime:“2022-04-01”},\n{“tag”:“B”,\ndtime:“2022-04-03”}\n]\n}\nIn this document, I would like to use “$addfield” and add new field “pickdate” if the document contains “$ck.tag” as “A”.{\noid:1,\nck:[{tag:“A”,\ndtime:“2022-04-01”},\n{“tag”:“B”,\ndtime:“2022-04-03”}\n]\n“pickdate”:“2022-04-01”\n}", "username": "Yogalakshmi_J" }, { "code": "db.collection.aggregate([{\n $addFields: {\n pickdate: {\n $filter: {\n input: '$ck',\n as: 'tag',\n cond: {\n $eq: [\n '$$tag.tag',\n 'A'\n ]\n }\n }\n }\n }\n}, {\n $addFields: {\n pickdate: '$pickdate.dtime'\n }\n}])\n[{\n $addFields: {\n pickdate: {\n $filter: {\n input: '$ck',\n as: 'tag',\n cond: {\n $eq: [\n '$$tag.tag',\n 'A'\n ]\n }\n }\n }\n }\n}, {\n $addFields: {\n pickdate: {$first : '$pickdate.dtime' }\n }\n}]\n", "text": "Hi @Yogalakshmi_J ,You will need an aggregation like the following:If you need just the first value found do:Thanks\nPavel", "username": "Pavel_Duchovny" } ]
$addfields add a field based on condition on array value of document
2022-09-06T11:15:55.389Z
$addfields add a field based on condition on array value of document
6,468
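Pavel's second pipeline as one runnable mongosh snippet, seeded with the question's sample document; the collection name is an assumption, and note that $first as an array operator requires MongoDB 4.4+.

```js
db.events.insertOne({
  oid: 1,
  ck: [
    { tag: "A", dtime: "2022-04-01" },
    { tag: "B", dtime: "2022-04-03" }
  ]
});

db.events.aggregate([
  // keep only the ck elements whose tag is "A"
  { $addFields: { pickdate: { $filter: { input: "$ck", as: "item", cond: { $eq: ["$$item.tag", "A"] } } } } },
  // unwrap the first match's dtime: pickdate becomes "2022-04-01"
  { $addFields: { pickdate: { $first: "$pickdate.dtime" } } }
]);
```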
null
[]
[ { "code": "", "text": "Hello there!Last week Friday, I’ve filled in the form for getting my voucher code for the Developer certification, but I got nothing back yet. Is there a way I could get a update on the status of my application? Thanks", "username": "Bo_Robbrecht" }, { "code": "", "text": "Hi @Bo_RobbrechtWelcome to the forums! I sent you and email on Friday with your voucher. Could you kindly check your spam? I will also re-send it today to be sure.Thank you!Lieke", "username": "Lieke_Boon" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Certification Voucher Code
2022-09-07T06:51:01.609Z
MongoDB Certification Voucher Code
3,204
null
[]
[ { "code": "{\n membershipId: string,\n name: string,\n}\n", "text": "Hello,My company has the requirement to be able to export mongoDB data into sql server while mapping the tables.Im wondering does mongo provides any tools for this?Example we have a collection in the database and the document schema looks something like thisIs there any tools out there that can help import this document schema into a SQL database and create the correct columns for us with the correct types?\nDoes mongo provide any tools for this.how to export schema without relation from MongoDB and import to SQL server as a relational model\nhow to export data from MongoDB and import to SQL server based on imported schemaThanks,\nConor", "username": "Conor_O_Shea" }, { "code": "", "text": "Hi @Conor_O_Shea ,There are several ways to do so.There are other ways to script the data transformation as many RDBMS drivers have JSON to table mapping capabilities…Hope thats helpful", "username": "Pavel_Duchovny" } ]
MongoDB and import to SQL server as a relational model
2022-09-07T07:29:03.116Z
MongoDB and import to SQL server as a relational model
2,662
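For a flat schema like the one in the question, one simple path not spelled out above is a CSV hop: export named fields with mongoexport, then bulk-load the file into SQL Server. A sketch; database, collection, and file names are assumptions.

```sh
mongoexport --uri="mongodb://localhost:27017/mydb" \
  --collection=members \
  --type=csv \
  --fields=membershipId,name \
  --out=members.csv
# members.csv can then be loaded with SQL Server's BULK INSERT or bcp,
# after creating a table with matching column types.
```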
null
[ "aggregation", "queries" ]
[ { "code": "Timeline Country Sales\n2021-W01 A 10\n2021-W02 B 20\n2021-W03 C 30\n…\n2022-W33 Z 50\n{\n \"result\": [\n {\n \"start\": \"2021-W01\",\n \"end\": \"2021-W10\",\n \"totalSales\": 100\n },\n {\n \"start\": \"2021-W10\",\n \"end\": \"2021-W20\",\n \"totalSales\": 20\n },\n …\n {\n \"start\": \"2021-W40\",\n \"end\": \"2021-W45\",\n \"totalSales\": 1\n }\n ]\n}\ndb.collection.aggregate([\n {\"$match\": {\"$and\":[{\"Country\": \"A\"}, {\"Timeline\": {\"$in\": [‘2021-W01’, ‘2021-W11’, … ‘2021-W45’]}}]}},\n {\"$group\": {\"_id\": {Timeline: \"$Timeline\", totalSales: {\"$sum\": \"$Sales\"}}}},\n {\"$project\": {\"_id\": 0, result: \"$_id\"}}\n])\n[\n {\n \"result\": {\n \"Timeline\": \"2021-W01\",\n \"totalSales\": 10\n }\n },\n {\n \"result\": {\n \"Timeline\": \"2021-W02\",\n \"totalSales\": 20\n }\n },\n …\n {\n \"result\": {\n \"Timeline\": \"2021-W45\",\n \"totalSales\": 23\n }\n }\n]\n", "text": "Let us say for example I have a collection which contains car sales information for a manufacturer worldwide.Now I would like the aggregation to compute total sales for every 10 weeks between week 1 2021 (including) and week 45 2021 (excluding).Desired Output:For this so far, I have come up with this solution.But this is producing output like thisI am unable to get aggregated results for every 10 weeks as this is only doing it for every week.\nIf possible, I kindly request everyone to help me understand this. Thanks.", "username": "Aswin_Ramani" }, { "code": "Timeline Country Sales\n2021-W01 A 10\n2021-W02 B 20\n2021-W03 C 30\n…\n2022-W33 Z 50\n[{\n \"Timeline\": \"2021-W01\",\n \"Sales\": 11,\n \"Country\": \"A\"\n},{\n \"Timeline\": \"2021-W02\",\n \"Sales\": 11,\n \"Country\": \"B\"\n},{\n \"Timeline\": \"2021-W03\",\n \"Sales\": 30,\n \"Country\": \"C\"\n},{\n \"Timeline\": \"2021-W04\",\n \"Sales\": 40,\n \"Country\": \"D\"\n},{\n \"Timeline\": \"2021-W05\",\n \"Sales\": 11,\n \"Country\": \"C\"\n},{\n \"Timeline\": \"2021-W06\",\n \"Sales\": 11,\n \"Country\": \"B\"\n},{\n \"Timeline\": \"2021-W07\",\n \"Sales\": 11,\n \"Country\": \"D\"\n},{\n \"Timeline\": \"2021-W08\",\n \"Sales\": 11,\n \"Country\": \"E\"\n}]\nweek 1week 8db.collection.aggregate([\n {\n $addFields: {\n week: {\n $week: {\n $dateFromString: {\n dateString: \"$Timeline\",\n format: \"%G-W%V\"\n }\n }\n }\n }\n },\n {\n $bucket: {\n groupBy: \"$week\",\n boundaries: [\n 1,\n 4,\n 7\n ],\n default: \"Other\",\n output: {\n \"totalSales\": {\n $sum: \"$Sales\"\n },\n \"start\": {\n $min: \"$Timeline\"\n },\n \"end\": {\n $max: \"$Timeline\"\n },\n \n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0\n }\n }\n])\n[\n {\n \"end\": \"2021-W03\",\n \"start\": \"2021-W01\",\n \"totalSales\": 52\n },\n {\n \"end\": \"2021-W06\",\n \"start\": \"2021-W04\",\n \"totalSales\": 62\n },\n {\n \"end\": \"2021-W08\",\n \"start\": \"2021-W07\",\n \"totalSales\": 22\n }\n]\n$addFields$week$dateFromString $bucket$project", "text": "Hi @Aswin_Ramani,Welcome to the MongoDB Community forums As a sample dataset in MongoDB, I have considered the following:Now I would like the aggregation to compute total sales for every 10 weeks between week 1 2021 (including) and week 45 2021 (excluding).For the sample dataset, I have considered every 3 weeks between week 1 of 2021 and week 8 of 2021:This is the aggregation pipeline I used to produce the output mentioned in the problem statement:Here $bucket categorizes the documents into groups, called buckets, based on a specified expression and its boundaries and outputs a document per each 
bucket.So, it will give the following output, similar to the desired one:Some of the aggregation stages/operators used in the above for your reference:Please note that I have only tested on a few sample documents. Depending on your use case(s), you can adjust accordingly. Before using it in production, it is highly recommended to test it in a test environment to ensure it meets all your requirements and use cases.Please let us know if you have any further questions.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi @Kushagra_Kesav,Thanks for this explanation! Now I have an idea. I will try it out and let you know.", "username": "Aswin_Ramani" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregation for every 10 weeks between week 1 2021 (including) and week 45 2021 (excluding)
2022-08-26T09:55:12.804Z
Aggregation for every 10 weeks between week 1 2021 (including) and week 45 2021 (excluding)
941
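To scale Kushagra's sketch from the 3-week demo back to the original ask (10-week buckets from week 1 up to, but excluding, week 45), the $bucket boundaries can be generated instead of hard-coded; a mongosh sketch, with the collection name assumed.

```js
const boundaries = [];
for (let w = 1; w < 45; w += 10) boundaries.push(w); // 1, 11, 21, 31, 41
boundaries.push(45); // exclusive upper bound, per the question

db.sales.aggregate([
  // as in the answer above, this keys on week number only;
  // filter to a single year first if your data spans years
  { $addFields: { week: { $week: { $dateFromString: { dateString: "$Timeline", format: "%G-W%V" } } } } },
  { $bucket: {
      groupBy: "$week",
      boundaries: boundaries,
      default: "Other",
      output: { totalSales: { $sum: "$Sales" } }
  } }
]);
```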
null
[ "dot-net" ]
[ { "code": "", "text": "I am using .net6 withdependency injection and I have single client in my app .\nMy connection string is like that “mongodb://localhost:27017/?maxPoolSize=1000” .\nAll of the threads access to mongo for lifetime.\nWhen I use 1000 threads in my app there is no connection problem but when I use 4000 threads\nI have this error;\nSystem.TimeoutException: Timed out waiting in connecting queue after 121812ms.\n** at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.ConnectionCreator.CreateOpenedOrReuseAsync(CancellationToken cancellationToken)**But when I start my app to 4 time at same time also working fine all of them\nWhat is the problem how can I fix this problem\n(I tried to increase the poolsize .Is is not working)", "username": "Semih_Can" }, { "code": "", "text": "Hi @Semih_Can welcome to the community!The maximum pool size by default is 100, so is there a reason why you want to increase it to 40x its default size? Is 100 not enough for your application? How did you determine that it is not enough?With regard to setting your ideal pool size, you might want to have a look at Tuning Your Connection Pool SettingsBest regards\nKevin", "username": "kevinadi" }, { "code": "System.TimeoutException: Timed out waiting in connecting queue after 121812ms.\n** at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.ConnectionCreator.CreateOpenedOrReuseAsync(CancellationToken cancellationToken)**\n", "text": "Hi @kevinadi ,The app has multiple task and task count is about 4000 simultaneously.When using maxPoolSize is 100 or 1000, we get exception:So we have multiple insert, update and delete operation with 4000 simultaneous operation. How we handle this? The exception said the problem occur about connection pool.", "username": "iozcelik" }, { "code": "", "text": "Hi @iozcelik welcome to the community!You mentioned:When using maxPoolSize is 100 or 1000, we get exception:However earlier, @Semih_Can mentioned:When I use 1000 threads in my app there is no connection problem but when I use 4000 threads\nI have this error;Could you clarify which one is the correct scenario? As I understand them, those two statements appear to contradict each other.As mentioned previously, have you looked at the linked page Tuning Your Connection Pool Settings? Listed there are a series of possible scenario that you wanted to optimize for.Additionally:Best regards\nKevin", "username": "kevinadi" }, { "code": "Fisrt Scenario:\nWhen we set the communicating limit to 1000 device at same time ,\nour application use 1000 parallel task. 
Every task accesses to MongoDB at same time \n\nAccording to this scenario we run our application once ,\nif we set maxPoolSize:100 , the application not working so there has mongo connection error (Timed out waiting in connecting queue) .\nBut if we set maxPoolSize:1000, the application is working fine\n\nSecond Scenario:\nif we start like first scenario again but this time we run to 4 instance our application at same time\nso we provide 4000 task at same time and every thing is working fine again\nbut we dont want 4 instance we need to run once\n\nThird Scenario\nWhen we set the communicating limit to 4000 device at same time \n so our application use 4000 parallel task and access to MongoDB every task at same time \n\nAccording to this scenario we run our application once ,\nif we set maxPoolSize:100, not working, there has mongo connection error (Timed out waiting in connecting queue)\nif we set maxPoolSize:1000, not working, there has mongo connection error (Timed out waiting in connecting queue)\nif we set maxPoolSize:4000, not working, there has mongo connection error (Timed out waiting in connecting queue)\n\n\n", "text": "Hi @kevinadi ,I tought that we must tell you our aplication how it is workingWe need to read the 60K smart devices every hour.\nThis devices are smart electricity meters which have internet access and they can support communicaiton and data transfer\nWe can configure how many device we can communicating from our application at same timeAs a result the problem is that;\nWhy 1000 task and 4 instance working fine but 4000 task 1 instance don’t working", "username": "Semih_Can" }, { "code": "maxPoolSizemaxPoolSize", "text": "Hi @Semih_CanThanks for the detailed explanation of the scenario.However while the scenario is important, I think we also need more details about your deployment in order to understand more and possibly replicate the issue you’re seeing:In addition, if you post a short self-contained example code that can replicate this error, that would be extremely helpful in understanding it.I also would like to confirm this part:if we set maxPoolSize:100, not working, there has mongo connection error (Timed out waiting in connecting queue)\nif we set maxPoolSize:1000, not working, there has mongo connection error (Timed out waiting in connecting queue)\nif we set maxPoolSize:4000, not working, there has mongo connection error (Timed out waiting in connecting queue)So no matter what the maxPoolSize setting (100, 1000, 4000) when communicating with 4000 devices simultaneously (I’m assuming you mean you’re trying to insert 4000 documents at the same time), it never works, but when you try using 1000 simultaneous inserts, they can work? You mentioned 1000 & 4000, have you tried something in between e.g. 2000?In the meantime, in the absence of any information regarding your deployment, you might want to take a look at the minPoolSize setting in addition to setting maxPoolSize.Best regards\nKevin", "username": "kevinadi" } ]
I have 4K thread in my app and my mongo connection down
2022-09-04T13:38:42.895Z
I have 4K thread in my app and my mongo connection down
2,564
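For reference, the pool options discussed above are ordinary connection-string/driver settings. A Node.js driver sketch follows (the thread uses the .NET driver, so treat this as an assumed translation; the .NET driver exposes the same knobs via MongoClientSettings, e.g. MaxConnectionPoolSize).

```js
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://localhost:27017", {
  maxPoolSize: 1000,         // cap on concurrent connections per server
  minPoolSize: 100,          // keep warm connections for bursty workloads
  waitQueueTimeoutMS: 120000 // how long a task waits for a free connection
});
```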
null
[ "queries", "python" ]
[ { "code": "Price_post_api = {\n\n \"station_id\": 31200009,\n \"price_detail\": [\n {\n \"fuel_id\": 1,\n \"fuel_name\": \"Gazole\",\n \"fuel_cost\": 1.959,\n \"update_date\": {\n \"$date\": \"2022-05-30T10:05:22Z\"\n }\n },\n {\n \"fuel_id\": 2,\n \"fuel_name\": \"SP95\",\n \"fuel_cost\": 2.049,\n \"update_date\": {\n \"$date\": \"2022-05-30T10:05:23Z\"\n }\n },\n {\n \"fuel_id\": 5,\n \"fuel_name\": \"E10\",\n \"fuel_cost\": 2.009,\n \"update_date\": {\n \"$date\": \"2022-05-30T10:05:23Z\"\n }\n }\n ]\n },\n\n$push\"fuel_cost\"Mongodb_price_data ={\n \"station_id\": 31200009,\n \"price_detail\": [\n {\n \"fuel_id\": 1,\n \"fuel_name\": \"Gazole\",\n \"fuel_cost\": 1.959,\n \"update_date\": {\n \"$date\": \"2022-05-30T10:05:22Z\"\n }\n },\n {\n \"fuel_id\": 1,\n \"fuel_name\": \"Gazole\",\n \"fuel_cost\": 35.87,\n \"update_date\": {\n \"$date\": \"2022-05-31T10:09:22Z\"\n }\n },\n {\n \"fuel_id\": 2,\n \"fuel_name\": \"SP95\",\n \"fuel_cost\": 2.049,\n \"update_date\": {\n \"$date\": \"2022-05-30T10:05:23Z\"\n }\n },\n {\n \"fuel_id\": 2,\n \"fuel_name\": \"Gazole\",\n \"fuel_cost\": 1.59,\n \"update_date\": {\n \"$date\": \"2022-07-14T00:10:19Z\"\n }\n },\n {\n \"fuel_id\": 5,\n \"fuel_name\": \"E10\",\n \"fuel_cost\": 2.009,\n \"update_date\": {\n \"$date\": \"2022-05-30T10:05:23Z\"\n }\n }\n ]\n}\ndef update_new_price(station_id, fuel_id, fuel_name, cost):\n query = {'station_id': station_id, 'price_detail.fuel_id': fuel_id,\n 'price_detail.fuel_name': fuel_name, 'price_detail.fuel_cost': cost}\n\n result = db[CL_PRICE].find(query)\n\n if not list(result):\n db[CL_PRICE].update_one(\n {'station_id': station_id, 'price_detail.fuel_id': fuel_id,\n 'price_detail.fuel_name': fuel_name},\n {'$push': {'price_detail': {'$each': [\n {'fuel_id': fuel_id, 'fuel_name': fuel_name, 'fuel_cost': cost}]}}},upsert=True)\n print('new value added: ', {'station_id': station_id, 'fuel_id': fuel_id, 'fuel_name': fuel_name, 'fuel_cost': cost})\n else:\n print('Already exists: ', {'station_id': station_id, 'fuel_id': fuel_id, 'fuel_name': fuel_name, 'fuel_cost': cost})\npymongo.errors.WriteError: The field 'price_detail' must be an array but is of type object in document {no id}, full error: {'index': 0, 'code': 2, 'errmsg': \"The field 'price_detail' must be an array but is of type object in document {no id}\"}\n", "text": "I am currently working on a web scraping project. The goal is to retrieve the price of the different types of fuel available in the gaz stations (900+) each day . If the price changes, the script will be able to append the new price to my Mongodb database.The data collected looks like this:I’m having a hard time to figure out how to $push properly the data in Mongodb based on the \"fuel_cost\" field. 
Here is an example of the expected output in the db. So far, I have the following function: The function works great until I get an error message. Any idea why, and how can I fix it?", "username": "franck_ishemezwe" }, { "code": "\"price_detail\"/// Documents to be pushed / added\n{\n \"fuel_id\": 2,\n \"fuel_name\": \"Gazole\",\n \"fuel_cost\": 1.59,\n \"update_date\": {\n \"$date\": \"2022-07-14T00:10:19Z\"\n }\n},\n{\n \"fuel_id\": 1,\n \"fuel_name\": \"Gazole\",\n \"fuel_cost\": 35.87,\n \"update_date\": {\n \"$date\": \"2022-05-31T10:09:22Z\"\n }\n}\n\"price_detail\"DB> db.stations2.find()\n[\n { _id: ObjectId(\"630fee2ba0eae719ee140850\"), price_detail: { a: 1 } }\n]\nDB> db.stations2.updateOne({},{$push:{\"price_detail\":{$each:[{a:2}]}}})\n\nMongoServerError: The field 'price_detail' must be an array but is of type object in document {_id: ObjectId('630fee2ba0eae719ee140850')}\nmongosh\"price_detail\"", "text": "Hi @franck_ishemezwe - Welcome to the community.Not too sure if you've solved this yet, but I just wanted to confirm a few items:Note: The above examples are performed / shown in mongoshAdditionally, from an initial glance, this type of schema design and work flow may lead to the \"price_detail\" array growing indefinitely, which means that it will most likely encounter issues in future if the application runs long enough, due to the fact that it may hit the BSON document size limit. One alternative to this would be perhaps having the \"price_detail\" as a collection instead of an array. However, it may not reach this point depending on your environment or use case. Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran. Thanks for your answer.To reply to your questions:1 - Yes, that is correct. Here it's only a snippet, but in the production environment we have up to 6 documents to add to the \"price_detail\" array.2 - Yes, multiple documents, and they are returning an array.\n\nimage645×552 35.3 KB\n3 - I am using PyMongo4 - The latest one, MongoDB 5.0.9 Community5 - You raised a good point here. As I'm a novice to the MongoDB world, I designed the schema based on my \"understanding\". But basically, I want to build a webapp that gives the end user the ability to:The data provided is from an OpenData website in XML format and is refreshed every 10 minutes. Since I want to keep track of the fuel prices for each station_ID, I came up with this schema design. In a nutshell, I have 2 collections, one for the station information as below:\n\nimage605×605 25 KB\nand the second one for the fuel prices per station\nimage562×718 21.5 KB\nAt the end of the day, I'm not sure I have the right approach…", "username": "franck_ishemezwe" }, { "code": "upsert<update><query><update>\"price_detail\"$push$elemMatchmongoshDB>db.collection.find()\n/// Empty collection to start\nDB> db.collection.updateOne({\"price_detail.fuel_id\":1},{$push:{\"price_detail\":{\"$each\":[1,2,3]}}},{upsert:true})\nMongoServerError: The field 'price_detail' must be an array but is of type object in document {no id}\n/// Same error returned. 
Note the query value.\nDB> db.collection.updateOne({\"price_detail\":{$elemMatch:{\"fuel_id\":1}}},{$push:{\"price_detail\":{\"$each\":[1,2,3]}}},{upsert:true})\n/// Using $elemMatch instead\n{\n acknowledged: true,\n insertedId: ObjectId(\"6317d956c3d5b1b653dc09bf\"),\n matchedCount: 0,\n modifiedCount: 0,\n upsertedCount: 1\n}\nDB> db.collection.find()\n[\n {\n _id: ObjectId(\"6317d956c3d5b1b653dc09bf\"),\n price_detail: [ 1, 2, 3 ]\n }\n]\nupdate_one()$elemMatch<query>\"price_detail.fuel_name\"\"price_detail.fuel_id\"", "text": "Thanks for providing the details requested @franck_ishemezweUpon further inspection, it may be possible the error that’s being returned is due to the upsert behaviour as mentioned in the Upsert with Operator Expressions documetation:If no document matches the query criteria and the <update> parameter is a document with update operator expressions, then the operation creates a base document from the equality clauses in the <query> parameter and applies the expressions from the <update> parameter.I believe due to the query criteria, the \"price_detail\" field is of type object rather than an array when it is created as part of the “base document” which will then cause the error when the $push operator is applied.If it suits your use case, would it be possible for you to utilise $elemMatch at the query portion? The below example is performed via mongosh:You can alter your current update_one() method accordingly with $elemMatch although this would only be if it suits your use case as I understand your current <query> within the update operation is checking for if the \"price_detail.fuel_name\" and \"price_detail.fuel_id\" separately regardless if they exist in the same object within the array. It is highly recommended to test thoroughly on a test environment after making the required changes to verify it suits all your use case(s) and requirements.Regards,\nJason", "username": "Jason_Tran" } ]
Issue with $push operator
2022-07-31T12:58:07.359Z
Issue with $push operator
3,159
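Jason's fix, reduced to a single copy-pasteable statement (mongosh syntax; the PyMongo update_one call takes the same three arguments). Field values are illustrative.

```js
db.prices.updateOne(
  // $elemMatch (rather than dotted equality) keeps the upsert's generated
  // "base document" from creating price_detail as an object instead of an array
  {
    station_id: 31200009,
    price_detail: { $elemMatch: { fuel_id: 1, fuel_name: "Gazole" } }
  },
  { $push: { price_detail: { fuel_id: 1, fuel_name: "Gazole", fuel_cost: 1.96 } } },
  { upsert: true }
);
```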
https://www.mongodb.com/…e_2_1024x512.png
[ "replication" ]
[ { "code": "", "text": "We have MongoDB 4.2.11 with 3 Nodes in replica set - Primary, Secondary and Arbiter.\nWith a recent incidence in Production, we had Primary node down which caused the Secondary Node to become primary. However in about 5hours the oplog collection almost rose to 15GB causing no free disk space for the current Primary Node. Eventually MongoDB Primary crashed due to no disk space issue.Question is if there is any way to limit Oplog space in MongoDB 4.2.11? Or the only way forward is to upgrade MongoDB to 4.4 and above.Here is the link to documentation that clarifies no way to limit oplog in 4.2.Hope to hear some feedback or any old thread that addresses this. Thanks.", "username": "Mohzim_Shaikh" }, { "code": "majority commit point", "text": "Welcome to the MongoDB community @Mohzim_Shaikh !The procedure you linked to is for changing the size of the replication oplog.I suspect the issue you are experiencing is due to:use of an arbiter which cannot acknowledge writessecondary member down for an extended period of time which can cause the primary oplog to grow past the configured size limit:Starting in MongoDB 4.0, the oplog can grow past its configured size limit to avoid deleting the majority commit point .There are two suggestions to avoid this issue:Replace your arbiter with a secondary (see Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie for more background on why).Disable Read Concern Majority for your PSA deployment to prevent storage cache pressure when a data bearing node is down.Regards,\nStennie", "username": "Stennie_X" } ]
Any way to limit the oplog in MongoDB 4.2
2022-09-06T16:49:43.272Z
Any way to limit the oplog in MongoDB 4.2
1,800
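Two mongosh commands useful when diagnosing the situation above. Note they monitor and resize the configured oplog size; they do not cap growth past the majority commit point, which is the 4.4+ behavior change the thread mentions. The 16000 MB figure is illustrative.

```js
// configured size, used size, and the replication time window
rs.printReplicationInfo();

// resize the oplog (run on each data-bearing member; size is in MB, 3.6+)
db.adminCommand({ replSetResizeOplog: 1, size: 16000 });
```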
null
[ "indexes" ]
[ { "code": " { \"v\" : 2, \"key\" : { \"_id\" : 1 }, \"name\" : \"_id_\", \"ns\" : \"db1.metrics\" }", "text": "What are the differences between version 1 indexes and version 2 indexes?e.g. here’s an example of what i mean by a v:2 index: { \"v\" : 2, \"key\" : { \"_id\" : 1 }, \"name\" : \"_id_\", \"ns\" : \"db1.metrics\" }I assume some performance improvements have been made but it would help if someone could share any documentation explaining what the specific differences are.", "username": "Edward_Murphy" }, { "code": "", "text": "Hi @Edward_Murphy,The v2 index format is a MongoDB 3.4+ index improvement that adds support for collation and the Decimal128 BSON type (both new features in 3.4).If your deployment was previously upgraded from MongoDB 3.2 or earlier, you may still have some v1 indexes as these were not automatically upgraded (recreating indexes on large populated collections can have significant impact on a production environment). Any new indexes will be created as v2 (or latest index version for your server release).Existing v1 indexes can be rebuilt as v2 (or latest index version) by either dropping and recreating a specific index, or using the reIndex command to recreate all indexes for a collection.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is the difference between a v:1 index and a v:2 index?
2022-09-06T17:46:26.247Z
What is the difference between a v:1 index and a v:2 index?
2,093
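The upgrade path Stennie describes, as mongosh commands; the index name is an assumption.

```js
// each index document reports its version in the "v" field
db.metrics.getIndexes();

// rebuild a single v1 index as the current version
db.metrics.dropIndex("myfield_1");
db.metrics.createIndex({ myfield: 1 });

// or rebuild every index on the collection at once
db.metrics.reIndex();
```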
null
[ "aggregation", "queries" ]
[ { "code": "[\n {\n \"key\": 1\n pricing: [\n {\n \"date\":\"2022-09-09T16:00:00.000+00:00\",\n \"price\": 100\n },\n {\n \"date\":\"2022-09-10T16:00:00.000+00:00\",\n \"price\": 100\n },\n {\n \"date\":\"2022-09-11T16:00:00.000+00:00\",\n \"price\": 100\n },\n ]\n },\n]\nconst dates = [\n '2022-09-07T16:00:00.000+00:00',\n '2022-09-08T16:00:00.000+00:00',\n '2022-09-09T16:00:00.000+00:00',\n]\n\n{\n '$match': {\n \"pricing.date\" : {$in : dates}\n }\n},\n{\n '$project': {\n \"pricing.date\" : 1\n \"pricing.price\" : 1\n }\n}\n[\n {date: 2022-09-09T16:00:00.000+00:00 ,price: 100}\n]\n{\n \"$match\": {\n \"pricing.date\": {\n $elemMatch: {\n $in: dates,\n },\n }\n }\n}\n", "text": "Hello,I have a collection which contains an array of objects, each containing a date and a price. I would like to be able to run an aggregate query that passes in an array of dates, check if any of those dates are contained within the object in the array, and if so, return the corresponding price. For example,Sample data structure:I’m not sure how to search an object in an array with an array, and if it exists project the value of another key. I’ve been trying something like thisThe data I would like back would be something like thisI have also tried $elemMatch like thisI have a non-working Playground here: https://mongoplayground.net/p/hyk-oYEF6RZAny help would be very much appreciated!\nCheers,\nMatt", "username": "Matt_Heslington1" }, { "code": " db.collection.find({\n \"pricing.date\": {\n \"$in\": [\n '2022-09-07T16:00:00.000+00:00',\n '2022-09-08T16:00:00.000+00:00',\n '2022-09-09T16:00:00.000+00:00',\n]\n }\n},\n{\n \"pricing.$\": 1\n})\n", "text": "Hi @Matt_Heslington1 ,You don’t specifically need an aggregation to do that query. You can use positional projecting like the following query:Now you have to make sure that if you comparing strings as dates you either have them in strings in your documents and then they have to exactly match a string (all chars), or use date formats in both collection and query array.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n \"pricing.date\": {\n \"$in\": [\n '2022-09-07T16:00:00.000+00:00',\n '2022-09-08T16:00:00.000+00:00',\n '2022-09-09T16:00:00.000+00:00',\n]\n }\n},\n{\n \"pricing.$\": 1\n}\ndb.collection.find(\n {\n _id: 'H8Se4O_LyGYFmMatcjK6_'\n },\n {\n 'pricing.date': {\n $in: ['2022-09-07T16:00:00.000+00:00', '2022-09-08T16:00:00.000+00:00', '2022-09-09T16:00:00.000+00:00'],\n },\n },\n {\n 'pricing.$': 1,\n }\n )\n", "text": "Hello Pavel,Thank you for your help. That works, but have should have been more specific in my question - I need to pass in an ‘_id’, so for a specific record:But I believe you can only pass in two arguments to a ‘find’ query like this, so adding the third ‘_id’ argument breaks it. 
Just a quick question too: the way we're doing it at the moment, the date fields are being passed as strings, as you said - how do we pass them as dates? Thanks again!\nMatt", "username": "Matt_Heslington1" }, { "code": "db.collection.find(\n {\n _id: 'H8Se4O_LyGYFmMatcjK6_',\n 'pricing.date': {\n $in: [ISODate('2022-09-07T16:00:00.000+00:00'), ISODate('2022-09-08T16:00:00.000+00:00'), ISODate('2022-09-09T16:00:00.000+00:00')],\n },\n },\n {\n 'pricing.$': 1,\n }\n )\n", "text": "Hi @Matt_Heslington1, nope, you can add several fields in the first query object, so it will perform an \"and\" condition across the fields. You can also convert strings to dates via the specific driver; with the shell, just wrap them in ISODate() functions", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel, that's perfect. Thank you so much, you've been a great help. Much appreciated.\nHave a great day,\nMatt", "username": "Matt_Heslington1" }, { "code": "[\n {\n \"_id\": \"abc\",\n \"pricing\": [\n {\n \"date\": \"2022-09-09T16:00:00.000+00:00\",\n \"price\": 100\n },\n {\n \"date\": \"2022-09-10T16:00:00.000+00:00\",\n \"price\": 100\n }\n ]\n }\n]\n", "text": "Hi Pavel, apologies, I did a false test and thought it was working when it wasn't. It returns the first matched record, but only the first, not all of the matches. In the updated Playground, the query should match two dates, Sep 09 and Sep 10, but it's only returning data for Sep 09 - I'd love it to be able to return data like below, i.e. one more record. I can't understand why it's not working, as everything looks good according to the docs. Cheers,\nMatt", "username": "Matt_Heslington1" }, { "code": "db.collection.aggregate([\n {\n $match: {\n \"_id\": \"abc\",\n \"pricing.date\": {\n \"$in\": [\n \"2022-09-08T16:00:00.000+00:00\",\n \"2022-09-09T16:00:00.000+00:00\",\n \"2022-09-10T16:00:00.000+00:00\"\n ]\n }\n }\n },\n {\n \"$addFields\": {\n \"pricing\": {\n \"$filter\": {\n \"input\": \"$pricing\",\n \"as\": \"price\",\n \"cond\": {\n $in: [\n \"$$price.date\",\n [\n \"2022-09-08T16:00:00.000+00:00\",\n \"2022-09-09T16:00:00.000+00:00\",\n \"2022-09-10T16:00:00.000+00:00\"\n ]\n ]\n }\n }\n }\n }\n }\n])\n", "text": "Hi @Matt_Heslington1, in that case you probably need an aggregation with a $filter stage to only keep elements that match the criteria: Let me know if that works? Ty", "username": "Pavel_Duchovny" }, { "code": "", "text": "Yes, it works! Great, thank you, Pavel, you've been a star!", "username": "Matt_Heslington1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to search an array of objects and return key value pairs using an array as a reference?
2022-09-06T02:58:39.272Z
How to search an array of objects and return key value pairs using an array as a reference?
7,129
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 5.0.12 is out and is ready for production deployment. This release contains only fixes since 5.0.11, and is a recommended upgrade for all 5.0 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 5.0.12 is released
2022-09-06T22:58:47.794Z
MongoDB 5.0.12 is released
2,373
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to release version 1.10.2 of the MongoDB Go Driver.This release stops treating context errors as retryable network errors where possible. For more information please see the 1.10.2 release notes.You can obtain the driver source from GitHub under the v1.10.2 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,\nThe Go Driver Team", "username": "Qingyang_Hu1" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.10.2 Released
2022-09-06T21:41:15.375Z
MongoDB Go Driver 1.10.2 Released
1,843
null
[ "swift" ]
[ { "code": "TextField(\"creditHours\", value: $course.creditHours, formatter: NumberFormatter())\n// also occurs if you pass in your own formatter such as:\nlet formatter: NumberFormatter = {\n let f = NumberFormatter()\n f.numberStyle = .decimal\n return f\n}()\n @State var hours: String = \"\"\n\nTextField(\"hours\", text: $hours).onChange(of: hours) { val in\n if let dec = Double(val) {\n try! realm.write {\n var c = course.thaw()!\n c.creditHours = try! Decimal128(string: val)\n }\n }\n }\n", "text": "Per this thread, it appears there is a bug when using a binding from string to decimal with a text field, where the value does not get stored:\nhttps://developer.apple.com/forums/thread/687521The “working” suggestion on the bottom of that thread is to listen to the onChange event and manually persist it.However, the value still does not persist to the database.Given Decimal128 is a realm swift class, I’m not sure how to diagnose further.", "username": "Joseph_Bittman" }, { "code": "@State var hours: String = \"2\"\n\nTextField(\"creditHours\", text: $hours).onChange(of: hours) { val in\n if let d = Double(val) {\n try! realm.write {\n var c = course.thaw()!\n c.creditHours = d\n }\n }\n }\n", "text": "If I change the type to Double, then it works.Updating the value to 24 in the UI results in 24 saved to the backend.", "username": "Joseph_Bittman" }, { "code": "c.creditHours = try! Decimal128(string: val)someProperty = try ! Decimal128(string: \"9.99\")someProperty = try ! Decimal128(string: \"$9.99\")", "text": "c.creditHours = try! Decimal128(string: val)Did you add a breakpoint to that line and a) ensure the code execution actually gets to that line and then b ) evaluate val to see what it resolves to before attempting to init the Decimal128 with it? Does anything get stored in that property in the database?This code workssomeProperty = try ! Decimal128(string: \"9.99\")and even this code “works” but writes NaN to the propertysomeProperty = try ! Decimal128(string: \"$9.99\")", "username": "Jay" } ]
Decimal128 value not persisting
2022-09-02T18:59:38.298Z
Decimal128 value not persisting
2,074
null
[ "queries" ]
[ { "code": "[\n {\n \"_id\": \"123456\",\n \"Continent\": {\n \"Country\": [\n [\n \"US\",\n {\n \"State\": [\n [\n 100,\n {\n \"Product\": \"Corn\",\n \"SID\": 100\n }\n ],\n [\n 200,\n {\n \"Product\": \"Maze\",\n \"SID\": 200\n }\n ],\n [\n 100,\n {\n \"Product\": \"Corn-HB\",\n \"SID\": 100\n }\n ]\n ],\n \n }\n ]\n ]\n }\n }\n]\n[\n {\n \"_id\": \"123456\",\n \"Continent\": {\n \"Country\": [\n [\n \"US\",\n {\n \"State\": [\n [\n 100,\n {\n \"Product\": \"Corn\",\n \"SID\": 100\n }\n ],\n [\n 100,\n {\n \"Product\": \"Corn-HB\",\n \"SID\": 100\n }\n ]\n ],\n \n }\n ]\n ]\n }\n }\n]\n", "text": "Hello,I have MongoDB document like this. I need to get all data where “SID”: 100.\nThe output should have similar format as input.\nCurrently I am using MongoDB 4.0. How do I achieve this.InputExpected OutputThanks", "username": "Sym_Don" }, { "code": "Array of objectsArrayarray of objectsdb.collections.aggregate([\n {\n \"$unwind\": \"$Continent.Country\"\n },\n {\n \"$addFields\": {\n State: {\n \"$let\": {\n vars: { idx1: { \"$arrayElemAt\": [\"$Continent.Country\", 1] } },\n in: {\n \"$filter\": {\n input: \"$$idx1.State\",\n cond: {\n \"$eq\": [\"$$this.SID\", [100]]\n }\n }\n }\n }\n }\n }\n },\n {\n \"$set\": {\n \"Continent.Country\": {\n \"$map\": {\n input: {\n \"$range\": [0, { \"$size\": \"$Continent.Country\" }]\n },\n in: {\n \"$cond\": [{ \"$eq\": [\"$$this\", 1] },\n { State: \"$State\" },\n { \"$arrayElemAt\": [\"$Continent.Country\", \"$$this\"] }]\n }\n }\n }\n }\n },\n {\n \"$project\": { State: 0 }\n }\n])\n\"State\"1\"Continent.Country\"idx1\"Continent.Country\"[\n {\n _id: '123456',\n Continent: {\n Country: [\n 'US',\n {\n State: [\n [ 100, { Product: 'Corn', SID: 100 } ],\n [ 100, { Product: 'Corn-HB', SID: 100 } ]\n ]\n }\n ]\n }\n }\n]\n", "text": "Hello @Sym_Don,Welcome to the MongoDB Community forums I need to get all data where “SID”: 100.The input schema seems complex and it includes Array of objects within an Array of array of objects, however I tested the following aggregation pipeline approach:Here, the example code assumes that the object containing the \"State\" array is always in array index position 1 within the unwinded \"Continent.Country\" array.However, below is the explanation of the query.The $unwind operator deconstructs an array field from the input documents to output a document for each element.The $addFields operator adds new fields to the documentsHere idx1 is the variable that stores the element at index 1 of \"Continent.Country\", and then performed further operations on it.The $filter operator returns an array with only those elements that match the conditionThe $set operator replaces the value of a field with the specified value.The $arrayElemAt returns the element at the specified array index.The $map applies an expression to each item in an array and returns an array with the applied results.which result the following output:This is an un-tested code, so please test thoroughly in a test environment to verify it suits all your use case(s) and requirements.I will, however, recommend that you alter the schema to make it easier to use and scale. Otherwise, it won’t scale well.Q: Also, is it a one-off or are you planning to use it very frequently?Thanks,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This is one-off collection which have this schema. 
Thanks for the solution.", "username": "Sym_Don" }, { "code": "db.collection.aggregate([\n {\n \"$unwind\": \"$Continent.Country\"\n },\n {\n \"$addFields\": {\n State: {\n \"$let\": {\n vars: {\n idx1: {\n \"$arrayElemAt\": [\n \"$Continent.Country\",\n 1\n ]\n }\n },\n in: {\n \"$filter\": {\n input: \"$$idx1.State\",\n cond: {\n \"$eq\": [\n \"$$this.SID\",\n [\n 100\n ]\n ]\n }\n }\n }\n }\n }\n }\n },\n {\n \"$set\": {\n \"Continent.Country\": [\n {\n \"$map\": {\n input: {\n \"$range\": [\n 0,\n {\n \"$size\": \"$Continent.Country\"\n }\n ]\n },\n in: {\n \"$cond\": [\n {\n \"$eq\": [\n \"$$this\",\n 1\n ]\n },\n {\n State: \"$State\"\n },\n {\n \"$arrayElemAt\": [\n \"$Continent.Country\",\n \"$$this\"\n ]\n }\n ]\n }\n }\n }\n ]\n }\n },\n {\n \"$project\": {\n State: 0\n }\n }\n])\n", "text": "I made a small addition - a bracket '[' at \"Continent.Country\": [, and also a closing bracket - to exactly match my output.", "username": "Sym_Don" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Inner Array Query 4.0
2022-08-19T19:29:26.780Z
MongoDB Inner Array Query 4.0
1,202
https://www.mongodb.com/…7_2_1024x185.png
[]
[ { "code": "{\n \"roles\": [\n \"superadmin\"\n ],\n \"teams\": [\n \"619f91d32b2b45517a26b39b\"\n ],\n \"name\": \"Mr Who\",\n \"email\": \"[email protected]\",\n \"id\": \"627247134471c9b2a92b3a9f\",\n \"customData\": {\n \"roles\": [\n \"superadmin\"\n ],\n \"teams\": [\n \"619f91d32b2b45517a26b39b\"\n ]\n }\n}\n", "text": "ive worked with email authentication provider, works perfect. but what really upset me is that i could not transfer my existing user to another project if i needed to. therefore im creating my own custom authentication. but after trying it out, i figure inconsistancy with custom login:linked users collection\n\nScreenshot 2022-09-03 at 11.18.03 AM2296×416 22.6 KB\nsample result:returned currentUser.id\n6311bc10a7aa86620ce6ec7f\nScreenshot 2022-09-03 at 11.21.31 AM2320×252 31.1 KB\ncan see here, im returning string of users.id 627247134471c9b2a92b3a9f\nbut im getting 6311bc10a7aa86620ce6ec7f returned.", "username": "James_Tan1" }, { "code": "", "text": "@James_Tan1 Today, the custom authentication and custom user data are a bit separate. In order to achieve what you are looking for you would need to use an Auth Trigger which could then map the document id to the currentUser’s id. From there you would then refresh the custom user data on the client side Realm SDK and you should see your custom user data.We realize this is not ideal and we have a design to add new functionality to custom authentication and custom user data which will enable you to do this mapping at custom authentication time all in one step. We should get started on this work soon.", "username": "Ian_Ward" }, { "code": "", "text": "hihighly appreciate your response, glad that im getting some attention to this matter. im also concern that custom authentication does not return the same uid like email provider. seems like my only work around is to query users collection separately. but without knowing id means ive to query by email or my customer username.thank you.", "username": "James_Tan1" }, { "code": "", "text": "and also it will be great if we could have a way to transfer user to another project if possible.", "username": "James_Tan1" }, { "code": "", "text": "What’s the use case for migrating users between apps? Generally, this is where we would recommend integrating with a fully-fledged IDP and 3rd party JWT provider like Auth0 or similar as they are more fully featured and would support moving authentication status around different App Services apps.", "username": "Ian_Ward" }, { "code": "", "text": "like firebase? hehe. alright noted", "username": "James_Tan1" } ]
customData and uid from custom authentication
2022-09-03T03:17:20.313Z
customData and uid from custom authentication
1,650
null
[ "atlas", "serverless" ]
[ { "code": "", "text": "I’m just switching from a simple docker based node instance with the default sizing to MongoDB Atlas serverless. I have migrated all my data (export then import using Compass), and I have recreated all the indexes as they were on the source database.However, I am seeing massively slower performance accessing the data from my dev website running locally. I am expecting it to be slower, but not like this. I have a page that runs 4 different aggregate functions across a few different collections. Any fields that need to be are indexed.On my local Docker instance of MongoDB, the maximum time I am seeing for the page to render is about 500ms. On MongoDB Atlas the same page is taking 45-120 seconds to load!! Looking at my logs from sveltekit, I can see the DB queries genuinley take the time stated above. The Performance Advisor is not suggesting any new indexes or changes.It shouldn’t be this awful should it?", "username": "Stephen_Eddy" }, { "code": "", "text": "The really long delays appear to be some sort of timeout on the connection. If I keep clicking links then they load withing a few seconds, so still slower than locally, but not minutes. If I walk away, then come back after 10 mins and click a link, it then can take a minute or more, but the entire time my log is reporting MongoDB is connected and it is waiting for the DB response.I am using Mongoose version 6.5.2 as my connection library. I’ve enabled connection debugging and events, and the client doesn’t think it disconnects. It is like the serverless instance is going to sleep for 30 seconds after so many minutes of inactivity. I have other processes inserting data so it isn’t because nothing is happening with the DB.Correction it is 5 minutes. After that time the first aggregation query takes ages.", "username": "Stephen_Eddy" }, { "code": "", "text": "Hi Stephen,\nI work in the MongoDB Atlas Serverless PM team. Based on the what you are describing it seems like there is something on the client side (where mongoose is running) which is stopping the connections. Perhaps check your firewall settings. One way to check this would be to use mongosh on the client machine and test the connections and compare with mongosh running from local. Let us know what you find out.", "username": "Vishal_Dhiman" } ]
Incredibly slow serverless performance
2022-09-03T19:21:20.970Z
Incredibly slow serverless performance
3,391
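A quick way to act on Vishal's suggestion - comparing connection latency from the client machine versus locally with mongosh (a sketch):

```js
// run in mongosh on each machine and compare the round-trip numbers
const start = Date.now();
db.runCommand({ ping: 1 });
print(`ping round trip: ${Date.now() - start} ms`);
```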
null
[ "installation" ]
[ { "code": "", "text": "I am getting conflicts on mongo-client installation. I have already installed mongodb community version. Command for mongo-client installation and erros are mentioned below.mtechcse@mtechcse-Precision-Tower-5810:~$ sudo apt install mongodb-clients\n[sudo] password for mtechcse:\nReading package lists… Done\nBuilding dependency tree\nReading state information… Done\nYou might want to run ‘apt --fix-broken install’ to correct these.\nThe following packages have unmet dependencies:\nmongodb-org : Conflicts: mongodb-clients but 1:3.6.3-0ubuntu1.4 is to be installed\nmongodb-org-database : Conflicts: mongodb-clients but 1:3.6.3-0ubuntu1.4 is to be installed\nmongodb-org-database-tools-extra : Conflicts: mongodb-clients but 1:3.6.3-0ubuntu1.4 is to be installed\nmongodb-org-mongos : Conflicts: mongodb-clients but 1:3.6.3-0ubuntu1.4 is to be installed\nmongodb-org-server : Conflicts: mongodb-clients but 1:3.6.3-0ubuntu1.4 is to be installed\nmongodb-org-shell : Conflicts: mongodb-clients but 1:3.6.3-0ubuntu1.4 is to be installed\nmongodb-org-tools : Depends: mongodb-database-tools but it is not going to be installed\nConflicts: mongodb-clients but 1:3.6.3-0ubuntu1.4 is to be installed\nE: Unmet dependencies. Try ‘apt --fix-broken install’ with no packages (or specify a solution).", "username": "Prof_Monika_Shah" }, { "code": "", "text": "Not able to solve this conflict even by --fix-broken optionmtechcse@mtechcse-Precision-Tower-5810:~$ sudo apt --fix-broken install\n[sudo] password for mtechcse:\nReading package lists… Done\nBuilding dependency tree\nReading state information… Done\nCorrecting dependencies… Done\nThe following packages were automatically installed and are no longer required:\nlibboost-program-options1.65.1 libgoogle-perftools4 libpcrecpp0v5\nlibsnappy1v5 libtcmalloc-minimal4 libyaml-cpp0.5v5\nlinux-hwe-5.4-headers-5.4.0-105 linux-hwe-5.4-headers-5.4.0-107\nlinux-hwe-5.4-headers-5.4.0-110 linux-hwe-5.4-headers-5.4.0-113 mongo-tools\nUse ‘sudo apt autoremove’ to remove them.\nThe following additional packages will be installed:\nmongodb-database-tools\nThe following NEW packages will be installed:\nmongodb-database-tools\n0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.\n2 not fully installed or removed.\nNeed to get 0 B/47.7 MB of archives.\nAfter this operation, 0 B of additional disk space will be used.\nDo you want to continue? [Y/n] y\n(Reading database … 307077 files and directories currently installed.)\nPreparing to unpack …/mongodb-database-tools_100.6.0_amd64.deb …\nUnpacking mongodb-database-tools (100.6.0) …\ndpkg: error processing archive /var/cache/apt/archives/mongodb-database-tools_100.6.0_amd64.deb (–unpack):\ntrying to overwrite ‘/usr/bin/bsondump’, which is also in package mongo-tools 3.6.3-0ubuntu1\ndpkg-deb: error: paste subprocess was killed by signal (Broken pipe)\nErrors were encountered while processing:\n/var/cache/apt/archives/mongodb-database-tools_100.6.0_amd64.deb\nE: Sub-process /usr/bin/dpkg returned an error code (1)", "username": "Prof_Monika_Shah" }, { "code": "mongo --version", "text": "Hi @Prof_Monika_Shah, can you provide the results of mongo --version. I’m wondering if you installed an old version of MongoDB.Here are the steps to install MongoDB 6.0 on Ubuntu 20.04.", "username": "Doug_Duncan" } ]
Conflicts on Mongodb-client installation
2022-09-06T12:26:56.036Z
Conflicts on Mongodb-client installation
4,836
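For the record, the conflict in the thread above comes from Ubuntu's own mongodb-clients/mongo-tools packages clashing with MongoDB's mongodb-org packages. The usual remedy is to remove the distro packages first - a sketch; review what apt plans to remove before confirming:

```sh
sudo apt remove mongodb-clients mongo-tools
sudo apt autoremove
sudo apt --fix-broken install
sudo apt install mongodb-org
```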
null
[ "aggregation", "node-js" ]
[ { "code": "$sum{\n$Group: {_id: my_awesome_id},\nfieldwithseconds: {$sum: \"$secondsfromDB\"},\ngrandtotalofseconds: {$sum: \"$fieldwithseconds\"}\n}\ngrandtotalofseconds: {$sum: “fieldwithseconds”}\n", "text": "How do I get the $sum of certain fields in a group?Why is grand total not working?I have also tried:Cheers,\nDaniel", "username": "Daniel_Stege_Lindsjo" }, { "code": "", "text": "Did you sort this out already?I see what the issue is. I don’t think you can access the field of a document that is being generated. You can only access the ones of the document being processed.Also, I don’t see how field with seconds isnt already what you want?", "username": "Mah_Neh" }, { "code": "", "text": "Greetings \nDoing a $sum of sums is needed in my case.\nBut I need to sum up my sums to a grand total sum.\nThe base value is seconds from MongoDB but I need to sum up those seconds across the dataset.D", "username": "Daniel_Stege_Lindsjo" }, { "code": "$group$sum$groupdb.foo.drop();\ndb.foo.insertMany([\n { a: 1, seconds: 1 },\n { a: 1, seconds: 2 },\n { a: 2, seconds: 3 },\n { a: 2, seconds: 4 },\n])\n\ndb.foo.aggregate([\n { $group: {\n _id: \"$a\",\n subTotal: { $sum: \"$seconds\" }\n }}\n])\n// output\n[\n {\n \"_id\": 1,\n \"subTotal\": 3\n },\n {\n \"_id\": 2,\n \"subTotal\": 7\n }\n]\n$groupdb.foo.aggregate([\n { $group: {\n _id: \"$a\",\n subTotal: { $sum: \"$seconds\" }\n }},\n { $group: {\n _id: null,\n results: { $push: \"$$ROOT\" },\n grandTotal: { $sum: \"$subTotal\" }\n }}\n])\n[\n {\n \"_id\": null,\n \"results\": [\n {\n \"_id\": 1,\n \"subTotal\": 3\n },\n {\n \"_id\": 2,\n \"subTotal\": 7\n }\n ],\n \"grandTotal\": 10\n }\n]\n", "text": "@Daniel_Stege_Lindsjo to do a sum of sums you’d need to $group again to perform a $sum of the calculated field from the previous $group.For example:If you add another $group to the above you can product a grand total from the sum of the subtotals:", "username": "alexbevi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$sum fields in $group across dataset in MongoDB
2022-09-06T04:56:48.886Z
$sum fields in $group across dataset in MongoDB
2,520
null
[ "dot-net", "compass", "containers", "security", "configuration" ]
[ { "code": "mongo{\"t\":{\"$date\":\"2022-07-22T20:08:42.422+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.19.0.1:56814\",\"uuid\":\"43a121aa-a251-4a5b-b02e-550e251ec477\",\"connectionId\":1,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2022-07-22T20:08:42.422+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.19.0.1:56812\",\"uuid\":\"d3ca9390-5353-4ae3-8ab0-e4bd77a37b79\",\"connectionId\":2,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2022-07-22T20:08:42.487+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn2\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"172.19.0.1:56812\",\"uuid\":\"d3ca9390-5353-4ae3-8ab0-e4bd77a37b79\",\"connectionId\":2,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2022-07-22T20:08:42.487+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"172.19.0.1:56814\",\"uuid\":\"43a121aa-a251-4a5b-b02e-550e251ec477\",\"connectionId\":1,\"connectionCount\":0}}\n{\"t\":{\"$date\":\"2022-07-22T20:08:43.018+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.19.0.1:56816\",\"uuid\":\"6cb73ff5-f2ce-4af2-b3ba-7767b66926a3\",\"connectionId\":3,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2022-07-22T20:08:43.024+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn3\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"172.19.0.1:56816\",\"uuid\":\"6cb73ff5-f2ce-4af2-b3ba-7767b66926a3\",\"connectionId\":3,\"connectionCount\":0}}\nnet:\n tls:\n mode: requireTLS\n certificateKeyFile: /etc/mongo/cert/pub and priv keys.pem\n certificateKeyFilePassword: 1\n disabledProtocols: TLS1_0,TLS1_1\n\nversion: \"3.1\"\n\nservices:\n my-mongo:\n image: mongo:latest\n command: \"--config /etc/mongo/conf/mongod.yaml\"\n restart: always\n container_name: mongo\n hostname: mongo_host\n ports:\n - \"27017:27017\"\n - \"8080:80\"\n environment:\n MONGO_INITDB_ROOT_USERNAME: oleksiiroot\n MONGO_INITDB_ROOT_PASSWORD: password\n volumes:\n - \"./volumes/mongo/config/:/etc/mongo/conf/\"\n - \"./volumes/mongo/cert/:/etc/mongo/cert/\"\n - \"./volumes/mongo/data/:/data/db/\"\nvar urlBuilder = new MongoUrlBuilder();\nurlBuilder.ApplicationName = \"my-app-name\";\nurlBuilder.DirectConnection = true;\nurlBuilder.Scheme = ConnectionStringScheme.MongoDB;\nurlBuilder.Server = new MongoServerAddress(\"192.168.2.11\", 27017);\nurlBuilder.Username = \"oleksiiroot\";\nurlBuilder.Password = \"password\";\nurlBuilder.UseTls = true;\nurlBuilder.TlsDisableCertificateRevocationCheck = true; \n", "text": "I’m working on a test environment with mongo image in Docker Desktop. I need to configure TLS with a self-signed certificate.I generated a PEM certificate and updated mongo configuration file (see below). Service started successfully, but clients cannot connect to the service (test .Net client, Mongo Compass)..Net client gets the following error:System.TimeoutException: 'A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. 
Client view of cluster state is { ClusterId : \"1\", DirectConnection : \"True\", Type : \"Standalone\", State : \"Disconnected\", Servers : [{ ServerId: \"{ ClusterId : 1, EndPoint : \"192.168.2.11:27017\" }\", EndPoint: \"192.168.2.11:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\nThe Mongo image log contains many similar lines. See a short extract below:", "username": "Oleksii" }, { "code": "AllowInsecureTls\nTlsDisableCertificateRevocationCheck\ntrue", "text": "I finally got it working. I had to set AllowInsecureTls to true and remove the line of code that sets TlsDisableCertificateRevocationCheck to true in the .NET app. MongoDB Compass connected after ticking the \"tlsInsecure\" checkbox.", "username": "Oleksii" }, { "code": "", "text": "Hello Oleksii,\nI am working on this as well, and I am having trouble connecting to the MongoDB which is running in Docker in an AWS EC2 instance.\nWould you mind sharing your knowledge on this subject?\nI have a TLS certificate and a CA root file for getting into MongoDB. I was told that I need a public key to access MongoDB from MongoDB Compass in my local environment. Also, how do I connect from my .NET app?\nCan you describe what applicationName is?\nI would really appreciate your help on this.", "username": "Kris_Kammadanam" } ]
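A way to keep certificate validation on, instead of resorting to tlsInsecure/AllowInsecureTls, is to hand the client the CA that signed the self-signed server certificate, and to make sure the certificate's SAN covers the address the client dials. A sketch for mongosh; the CA file path is hypothetical:

```sh
# Trust the issuing CA explicitly rather than disabling validation
mongosh "mongodb://192.168.2.11:27017/" \
  --tls \
  --tlsCAFile /path/to/rootCA.pem \
  -u oleksiiroot -p
```

Drivers generally have an equivalent (a tlsCAFile connection-string option or the operating system's trust store); check your driver's TLS documentation, since the .NET driver in particular leans on the OS certificate store.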
Cannot connect to MongoDB in docker after enabling TLS
2022-07-22T20:17:09.996Z
Cannot connect to MongoDB in docker after enabling TLS
6,649
null
[ "aggregation", "python", "sharding" ]
[ { "code": "", "text": "Hi, I have tried to insert over 700 billions of rows to a sharded cluster using distribute computingMy data schema contains just one line of hashed string (ex: {_id: e3nlvksnlk12fdnsnkd!}),\nand I want to give index to it so I can check existence of hashed string quickly.\nI configured around 50 shards(10 mongos server)I made pymongo client in spark job and used insert_many to cluster.\nAt first, performance was up to 200k insert count aggregated over mongos server\nbut as document accumulated, write performance is down below 50k…Is there any way to increase bulk insert performance?\nI consider several ways to do it… create index after indexless insert, ordered bulk insert(from ordered index data)… etc. but I don’t seem these are effective way", "username": "JaeHo_Park" }, { "code": "mongoimport/mongorestore", "text": "Hi @JaeHo_Park and welcome to the community!!In order to understand the issue observed and provide with detailed assistance, could you please help me a few details based on the above specifications:Is this heavy insert workload the normal day-to-day workload you’re expecting for the cluster? Or is this a one-off job, and querying/aggregation will be the typical workload you envision in the future?Regards\nAasawari", "username": "Aasawari" } ]
Performance issue on a job inserting over 700 billion rows to a sharded cluster
2022-08-23T04:02:06.805Z
Performance issue on a job inserting over 700 billion rows to a sharded cluster
2,022
null
[ "aggregation", "node-js", "atlas-cluster" ]
[ { "code": "const pipeline = [\n {\n '$match': {\n 'accommodates': {\n '$gt': 4\n }, \n 'price': {\n '$lt': 500\n }, \n 'amenities': 'Hair dryer'\n }\n }, {\n '$sort': {\n 'price': 1\n }\n }, {\n '$project': {\n 'name': 1, \n 'amenities': 1, \n 'price': 1, \n 'image': 1, \n 'descriptions': 1\n }\n }, {\n '$limit': 20\n }\n ]\n\n const agg = await collection.aggregate(pipeline).toArray();\n\n console.log(agg)\n", "text": "Hi there,I am new to MongoDB and I wanted to make sure I have executed my code correctly to display an aggregation pipeline for testing purposes.I am trying to console.log the below which does not return anything back to me. Is this an issue with the testing sample data?const { MongoClient, ServerApiVersion } = require(‘mongodb’);\nconst uri = “mongodb+srv://michaelhardie:[email protected]/?retryWrites=true&w=majority”;\nconst client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true, serverApi: ServerApiVersion.v1 });\nclient.connect(async err => {\nconst collection = client.db(“sample_airbnb”).collection(“listingsAndReviews”);\n// perform actions on the collection objectclient.close();\n});Thanks in advance,\nMichael", "username": "Michael_Hardie" }, { "code": "", "text": "When no document comes out from a pipeline, any of the following could be wrong.I would proceed in the following way.Remove the $match stage and see if I have any document. If you have no document, then is is problem 1. or 2. Check both names. Note that the name can be correct but you might be on the wrong cluster.Add back each of $match clauses one by one verify if you have documents or not.When you have no documents, you know that the culprit is the last clause added.3a. Check if the field name is correct? Names are case sensitives.\n3b. Check if the value checked has the same type as the fiekd values", "username": "steevej" }, { "code": "serverApi: ServerApiVersion.v1.toArray()", "text": "I have checked you query directly on Atlas, and your query seems fine giving 730 total (no limit) documents in return.I haven’t tried other parts in a Node.js environment but here I suspect two things: serverApi: ServerApiVersion.v1 and .toArray(). They might not be working to your expectation.open a node REPL in your project folder (so you can access installed libraries in it). try without setting serverapiversion and then try to print the result of aggregation directly.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "I have now tested your whole code.The problem is in one of your username, password, or cluster address. wait about 30 seconds and check your log to see “Uncaught MongoError: Topology is closed, please connect”I have tried this on my cluster, the only differences are those 3, and your code works fine.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Thanks you for getting back to me so quickly.I am glad the code is executing. Although on my end I am still unable to log out anything successfully (no error messages either)", "username": "Michael_Hardie" }, { "code": "", "text": "there is a possibility your driver is not installed correctly. but first, try connecting to your cluster with mongo shell or Compass to see if your URI is fine.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "URI is fine. 
 I was able to connect via mongo shell.", "username": "Michael_Hardie" }, { "code": "", "text": "michaelhardie@<comp_details> mongoDB % node --version\nv16.15.1\nmichaelhardie@<comp_details> mongoDB % npm --version\n8.11.0\nmichaelhardie@<comp_details> mongoDB % node index.js\nmichaelhardie@<comp_details> mongoDB %", "username": "Michael_Hardie" }, { "code": "npm init -y\nnpm install mongodb\nnode", "text": "Now then, create a new folder and run the commands above in it; if this gives you a result, we may say the installation is broken in the other project.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This is what has been returned:\nconst { MongoClient, ServerApiVersion } = require('mongodb');\nundefined\nconst uri = \"mongodb+srv://michaelhardie:[email protected]/?retryWrites=true&w=majority\";\nundefined\nconst client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true, serverApi: ServerApiVersion.v1 });\nundefined\nclient.connect(async err => {\n... // perform actions on the collection object\n...\n... const collection = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\");\n...\n... const pipeline = [\n... {\n... '$match': {\n... 'accommodates': {\n... '$gt': 4\n... },\n... 'price': {\n... '$lt': 500\n... },\n... 'amenities': 'Hair dryer'\n... }\n... }, {\n... '$sort': {\n... 'price': 1\n... }\n... }, {\n... '$project': {\n... 'name': 1,\n... 'amenities': 1,\n... 'price': 1,\n... 'image': 1,\n... 'descriptions': 1\n... }\n... }, {\n... '$limit': 20\n... }\n... ]\n...\n... const agg = await collection.aggregate(pipeline).toArray();\n...\n... console.log(agg);\n...\n... client.close();\n... });", "username": "Michael_Hardie" }, { "code": "", "text": "I have Node.js v18.0.0 and npm v8.6.0; mongodb v4.9.1 gets installed, and your code runs perfectly, returning a result in a few seconds. Maybe the Node.js installation has a problem, or your firewall happened to disallow node from using the network. Or maybe you forgot to import the sample database into that cluster, which results in a zero response because no documents return. This type of forgetfulness can happen anytime, so go check that too. Otherwise, my ideas are depleted.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "I managed to get an \"Uncaught ReferenceError: client is not defined\" error message. Unsure what this is referring to. But regardless, thank you for assisting.", "username": "Michael_Hardie" }, { "code": "https://github.com/Schniz/fnm\neval \"$(fnm env)\"", "text": "A new idea to test whether your installs have a problem: set up a fresh Node.js through fnm (link above) and retry.", "username": "Yilmaz_Durmaz" } ]
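One restructuring that tends to surface the real failure (instead of the process exiting silently) is to await the connection and the query explicitly and print any error. A sketch using the same uri and pipeline as in the posts above:

```javascript
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient(uri); // uri exactly as in the post above
  try {
    await client.connect();
    const coll = client.db('sample_airbnb').collection('listingsAndReviews');
    const docs = await coll.aggregate(pipeline).toArray(); // pipeline as above
    console.log(`matched ${docs.length} documents`);
    console.log(docs);
  } catch (err) {
    // A bad username/password/cluster address surfaces here instead of
    // the process quietly exiting before the callback ever runs.
    console.error('connect or aggregate failed:', err);
  } finally {
    await client.close();
  }
}

main();
```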
Console.logging out my aggregation pipeline for test
2022-09-05T12:47:26.626Z
Console.logging out my aggregation pipeline for test
4,328
https://www.mongodb.com/…6_2_1024x575.png
[ "data-modeling", "graphql", "delhi-mug" ]
[ { "code": "Community Triage Engineer, MongoDBDevelopment Competitor & TCO22 Finalist, TopcoderCurriculum Services Engineer, MongoDBDevelopment Copilot & MVP at Topcoder", "text": "\nTC@MongoDB2876×1616 416 KB\nMongoDB is excited to host Topcoder for their 8th regional event in the region: TCO22 Southern Asia Regional Event.The event will include an introduction to Topcoder Competitions, a session on using GraphQL with MongoDB by Sharathkumar Anbu, a session by @Aasawari and @Kushagra_Kesav on MongoDB Schema Design, a small team-based competition , pizzas, and swag!Session 1: How can you use Graph QL to retrieve the data you want - nothing more, nothing less\nDescription: In this workshop we will discuss the basics of Graph QL and how Graph QL can be used to retrieve the exact data the user wants from Mongo DB, nothing more nothing less. Graph QL revolutionized the way how the clients retrieve data from servers via API. Nowadays Graph QL is becoming a defacto standard for data retrieval in APIs. Let’s learn how Graph QL and Mongo DB gonna transform your APIs!Session 2: Innovate and Build Applications faster with MongoDB\nDescription: Modeling your application’s schema - is the first thing that comes to your mind when you start planning an application for your Hackathon. Things to Is your app read or write heavy? What data is frequently accessed together? How will your data set grow and scale?In this session, we will discuss the basics of MongoDB and the basics of data modeling using real-world examples. Learn how you can design your application’s schema better and faster with MongoDB.If you are a beginner or have some experience with MongoDB already, there is something for all of you!*Gates open at 4:30 PMEvent Type: In-Person\n Location: MongoDB Office, Gurugram.\n Floor 14, Building, 10C, DLF Cyber City, DLF Phase 2, Sector 24, Gurugram, Haryana 122001To RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.\nAasawari2055×2052 374 KB\nCommunity Triage Engineer, MongoDB–Development Competitor & TCO22 Finalist, Topcoder–\n\nKushagra1616×1751 442 KB\nCurriculum Services Engineer, MongoDB–\n\nimage1920×2560 423 KB\nDevelopment Copilot & MVP at TopcoderJoin the Delhi-NCR group to stay updated with upcoming meetups and discussions in Delhi-NCR.", "username": "Harshit" }, { "code": "Government-Issued ID Card", "text": "Hey Everyone,We are excited to see you all tomorrow at MongoDB Office. 
Here are a few things to note: please carry a Government-Issued ID Card for entry. Also, excited to see so many of you RSVPed; if you have any change of plans, please un-check the Going button at the top to allow other interested members to sign up. Please reply on this thread in case you have any questions. Looking forward to seeing most of you tomorrow!", "username": "Harshit" }, { "code": "", "text": "\nIMG202208051915371920×1440 145 KB\n\nTeam members:\nAbhishek Rana\nDivyansh Kumar\nPoojan Vachharajani", "username": "DIVYANSH_KUMAR" }, { "code": "", "text": "\n165970750355888221097658985786951920×2560 402 KB\n", "username": "Kanishk_Khurana" }, { "code": "", "text": "\n165970728729381495198806120996071920×2560 231 KB\nTeam member", "username": "011_Dhruv_Dewan" }, { "code": "", "text": "Daksh Makhija\nRupin Vijan\nAviral Juyal\nKanishk Khurana\nJasmeet Singh", "username": "Kanishk_Khurana" }, { "code": "", "text": "\n165970753681079921239351241712591920×1440 111 KB\n", "username": "_avtej_ingh" }, { "code": "", "text": "\nIMG_20220805_1923341920×4262 258 KB\nAnshita Arya\nHarshdeep Dhanjal\nManjari\nBrahmjot Singh\nNavtej Singh", "username": "35_Manjari_N_A" }, { "code": "", "text": "\n16597076752401180233972508517291920×2560 299 KB\n\nTeam member", "username": "011_Dhruv_Dewan" }, { "code": "", "text": "\n165970778121880400203807886475941920×2560 207 KB\n@Rashi_arora\n@Avneet_Kaur\nPravneet singh\nSwarnim\nSahibpreet\nKarmanpreet", "username": "Rashi_arora" }, { "code": "", "text": "Navtej\nManjari\nHarshdeep dhanjal\nAnshita\n\n165970815165934752888180985040451920×2560 187 KB\n", "username": "_avtej_ingh" }, { "code": "", "text": "Team 001\nAditi\nChirag\nSimarpreet\nAvi\nSmridhi\n\nIMG202208051932121920×2560 239 KB\n\n\nIMG202208051932161920×2560 230 KB\n", "username": "33_ANSHDEEP_Singh" }, { "code": "", "text": "\n20220805_1934321920×1392 178 KB\n\nJugraj Singh\nAnshdeep Singh\nAvi Kapoor\nMahak Kaur\nGagan deep kaur", "username": "Jugraj_Singh" }, { "code": "", "text": "Abhishek Rana here… The event is very good and the Topcoder staff are very impressive and talented ", "username": "Abhishek_Rana" }, { "code": "", "text": "Thanks, everyone, for attending! In case you couldn't attend, here's the event summary:\nIt was great to see the Topcoder and MongoDB communities, as well as their teams, coming together to put on an event with some exciting workshop sessions, informational sessions, as well as the inspiring story of S Deepak Kumar's grit and passion, who, coming from a very small village in India, taught himself programming and won a global hackathon.\nIMG_15591920×1440 289 KB\nThe event started with Topcoder's Community Manager, Jessie, and me kicking off the event and welcoming everyone. Later, Jessie shared and awarded Topcoder's regional award winners. It was followed by a session by @Aasawari and @Kushagra_Kesav. It began with an initial emphasis on the factors to consider while modeling your application, and later they used Bookmyshow's example to show how designing an application's schema is faster and easier with a code-first approach to data modeling with MongoDB. We also had a mini competition towards the end, where attendees were asked to design a MongoDB database schema for an e-commerce website. It was great to see the community interacting and learning in between the sessions and competitions.
We are working on the results and will share the winners ASAP!\nWe also had a quick workshop-styled session by Topcoder MVP @Sharathkumar_Anbu, who spoke about the changing trend of data retrieval from RESTful APIs to GraphQL, and backed that with a demo of how the GraphQL query language gives developers a single API endpoint to access exactly the data they need. The code repo from the demo is available here.\nAlso, for further reading on GraphQL, do check out @SourabhBagrecha's blog, which can help you create your own Expense Manager app using the MongoDB Atlas GraphQL API.\nWhatsApp Image 2022-08-05 at 3.57.21 PM (1)1600×1200 223 KB\nThe event ended with pizzas, conversations, and a great event photo! (MUG Delhi-NCR is a clear winner when it comes to group photos.)\nIt was great seeing some of the regular and first-time MUG event attendees in @005_Atharva_Rustagi, @Harsimran_Singh, @Krishan_Singh, @Dhruv_Dewan, @33_ANSHDEEP_Singh, @_avtej_ingh, @35_Manjari_N_A, @Rashi_arora, @Avneet_Kaur, @DIVYANSH_KUMAR, @Samriddhi_Singh, @Samriddhi_Chandel, @Daksh_Makhija,\n@Rupin_Vijan and @Aviral_N_A come to the MongoDB office and attend the event. Always great to see you all! (I am definitely missing a few of you, I know.)\nBig thanks to the Topcoder and MongoDB teams, especially Shashank, Kalindi, Daniela, @Aasawari, @Kushagra_Kesav and @Sonali_Mamgain, who made the event possible. Not to forget, big thanks to the workplace team and support staff who helped put together a great hybrid event.\nHope you all had a great time, and we hope to see you soon!\n–\nMongoDB Community Team", "username": "Harshit" }, { "code": "", "text": "Hello all,\nThanks for coming to the event, and thanks to Topcoder for collaborating with us. We have our results for the schema challenge we ran at the event. We thank everyone who competed and submitted their entries. In the final analysis, we found two submissions that were fairly close to what we expected. Congratulations to @_avtej_ingh's and @Jugraj_Singh's team for having the closest solution to what we expected.\nWe really liked how @33_ANSHDEEP_Singh and @DIVYANSH_KUMAR approached the problem as well.\nAccording to the discussion during the session, data modeling with MongoDB offers greater flexibility and use-case specificity. With this flexibility, you can select the data model that best suits your application and its performance requirements. In short: "data accessed together should be stored together".\nGenerally, designing a schema or modeling your data depends on your app and its features, and this becomes much easier when we are modeling for a non-relational database. We have prepared a solution for you to reference for a generic e-commerce app that has a product listing, a product category page, a cart, and an order portal.\nimage5560×4949 1.38 MB\nIf you are interested in schema design and data modeling, check out our free MongoDB University course on Data Modelling.\nFeel free to reach out to us and share if you need any help or have doubts while modeling your data.\nBest Regards,", "username": "Kushagra_Kesav" } ]
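For readers who want to see what "data accessed together should be stored together" looks like in practice, here is a hedged sketch of one possible order document for the e-commerce challenge; the collection name and fields are illustrative, not the official solution shared above:

```javascript
// Line items are embedded because an order page always reads them
// together with the order itself; products live in their own collection.
db.orders.insertOne({
  userId: ObjectId(),            // placeholder id
  status: "PLACED",
  placedAt: new Date(),
  items: [
    { productId: ObjectId(), name: "USB-C cable", qty: 2, unitPrice: NumberDecimal("499.00") }
  ],
  total: NumberDecimal("998.00")
})
```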
Delhi-NCR MUG: Topcoder Meetup, MongoDB Schema Design & GraphQL with MongoDB
2022-07-27T21:26:53.502Z
Delhi-NCR MUG: Topcoder Meetup, MongoDB Schema Design &amp; GraphQL with MongoDB
8,760
null
[ "transactions" ]
[ { "code": "", "text": "User A queries a document. User B also query the same document. User A writes to document and commit. Now User B is left with a document JSON that has already been changed by user A. What happens if user B tries to write and save. Does User A transaction will be lost in that case ?", "username": "waseem_shahzad" }, { "code": "", "text": "Hi @waseem_shahzad welcome to the community!I believe you’re talking about multi-document transaction, but correct me if I’m wrong.So as I understand it, the scenario is as follows:If this is correct, then B would see a write conflict error (see In-progress Transactions and Write Conflicts) which I think describes this exact scenario.Additionally:Further to your question:Does User A transaction will be lost in that caseOnce a transaction is committed and the client received an acknowledgment, the writes won’t be lost.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Consistent Reads
2022-09-05T17:01:25.031Z
Consistent Reads
1,206
null
[ "replication", "server", "containers" ]
[ { "code": "cacheSizeGB", "text": "We are just starting out and would like to run a MongoDB replica set and a Docker swarm.\nI wonder if I could fit them all on a 3-node cluster.But, from the production notes:If your MongoDB instance is hosted on a system that also runs other software, such as a webserver, you should choose the first swap strategy. Do not disable swap in this case.Is it absolute? why? (Why should I not disable swap in this case?)Says we have 4 GB nodes and limit cacheSizeGB to 0.5 GB (50% of (2GB - 1 GB)), isn’t it safe to use another 2GB to run containers without swap?", "username": "3Ji" }, { "code": "mongodmongod", "text": "Hi @3Ji - Welcome to the communityIs it absolute? why?I do not believe it is absolute. For example, if you are confident in your systems workload & memory usage and know for certain that swapping isn’t required, then swapping could possibly be disabled in this case.One of the reasons that swapping is enabled is to help prevent the linux OOM killer from terminating the mongod instance. Perhaps you may wish to disable it if the OS the mongod instances are running on do not have a OOM kill function.I’m not too familiar with Docker but I hope these details help.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why it is discouraged to disable swap on a non-dedicated cluster
2022-08-15T16:56:41.996Z
Why it is discouraged to disable swap on a non-dedicated cluster
1,862
https://www.mongodb.com/…7_2_1024x341.png
[ "queries" ]
[ { "code": "", "text": "Hi!My current aim is to find all securities that have Source = Manual on a security field (e.g. Instrument_Name, Instrument_InstrumentType) this means a particular field has been manually updated by user. The problem comes from the fact that there are lots of Sources ( e.g. GSMApi, OpenFigi) and lots of security fields on each of the sources. I am trying to narrow this down, but cannot seem to find a proper query to do so. Also I’m not exactly good with queries on MongoDB and anything I’ve tried so far is either 0 results fetched or an error. I believe these are not arrays, but rather nested documents. I have included current data structure in a screenshot. Also below fetches single result by using full path -db.getCollection(‘security-data’).find({“SourceFields.RemotePlus.Instrument_Issue_Issuer_Name.Source”: ‘Manual’})However RemotePlus.Instrument_Issue_Issuer_Name is not a constant, the manual update can happen anywhere, on any source and any of its field (even multiple) this is where I’m stuck.below is current data structure -\nimage1685×562 43.3 KB\nThanks!", "username": "Ed_Motor" }, { "code": "RemotePlus.Instrument_Issue_Issuer_Name", "text": "Hi @Ed_Motor - Welcome to the community However RemotePlus.Instrument_Issue_Issuer_Name is not a constant, the manual update can happen anywhere, on any source and any of its field (even multiple) this is where I’m stuck.Can you provide the following information:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran,\nthanks for reply!Sorry for late response.Please let me know if that is enough info.Regards,\nEd", "username": "Ed_Motor" }, { "code": "", "text": "Hi @Ed_Motor - Thanks for getting back to me with that information.Specific to the above, would you be able to provide 4-5 sample documents and the expected / desired output?I can load this into my test environment and attempt to see if what you are expecting is possible. I have some ideas at the moment based off the description but sample documents and expected output would help verify these.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran!I’m using Robo 3T to navigate myself around. What would be the best way to provide sample documents? As for expected/desired output, as long as it’s at least top level I can go from there, meaning we just find all documents where criteria is met, records doesn’t have to be expanded, etc -\nimage2378×333 30.1 KB\nRegards,\nEd", "username": "Ed_Motor" }, { "code": "", "text": "Hi @Ed_Motor,I’m using Robo 3T to navigate myself around. What would be the best way to provide sample documents?I’m not too familiar with Robo3T but perhaps you could try export some documents using MongoDB Compass by following the procedure noted here and send a few sample document(s) once exported as JSON here.Please redact any personal or sensitive data before posting the sample document(s) here.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Ok, having some problems connecting with MongoDB Compass | MongoDB I remember it used to work just recently…An error occurred while loading navigation: Unsupported OP_QUERY command: connectionStatus. The client driver may require an upgrade. For more details see https://dochub.mongodb.org/core/legacy-opcode-removalI’ve extracted one document using below approach with Robo 3T.\nimage2367×289 27.3 KB\nUnfortunately I’m not allowed to add any attachments as a new user to verify if that is good enough. 
", "username": "Ed_Motor" }, { "code": "{\n\"name\":\"Jason\",\n\"userId\": 123456,\n\"Location\": \"Unknown\"\n}\n", "text": "I’ve extracted one document using below approach with Robo 3T.That should be a good start. Are you able to paste it as text after copying the JSON (based off your screenshot)? I am not sure what the output would be from that from Robo3T as I have not used it before unfortunately.Example:", "username": "Jason_Tran" }, { "code": "{\n \"_id\" : NUUID(\"c26442ad-a8df-4095-8086-405994222c53\"),\n \"SecurityAggregateVersion\" : NumberLong(4),\n \"MarketDataAggregateVersion\" : NumberLong(1),\n \"Fields\" : {\n \"Issuer_Fundamentals_CountryOfRisk_Iso3166Alpha2\" : {\n \"_t\" : \"StringSecurityDataField\",\n \"LastUpdate\" : \"16620353903630469\",\n \"EffectiveDateTime\" : \"2022-09-01T12:29:45.8112946\",\n \"AssetType\" : -1,\n \"Source\" : \"ABCD\",\n \"BillingCategory\" : \"Test Billing\",\n \"BillingInfo\" : [ \n [ \n \"master_information.instrument_master.trex_asset_type\", \n \"20\"\n ], \n [ \n \"equity.equity_details.ADR_structure\", \n \"## ERROR: Field not returned from BICE ##\"\n ], \n [ \n \"master_information.organization_master.org_country_code\", \n \"AT\"\n ], \n [ \n \"debt.payment_schedule.pool_factor\", \n null\n ], \n [ \n \"debt.cmo_details.tranche_number\", \n null\n ], \n [ \n \"debt.fixed_income.debt_type\", \n \"## ERROR: Field not returned from BICE ##\"\n ]\n ],\n \"Value\" : \"AT678\"\n },\n \"Issuer_Fundamentals_Name\" : {\n \"_t\" : \"StringSecurityDataField\",\n \"LastUpdate\" : \"16620353903630469\",\n \"EffectiveDateTime\" : \"2022-09-01T12:29:45.8112946\",\n \"AssetType\" : -1,\n \"Source\" : \"ABCD\",\n \"BillingCategory\" : \"Test Billing\",\n \"BillingInfo\" : [ \n [ \n \"master_information.instrument_master.trex_asset_type\", \n \"20\"\n ], \n [ \n \"equity.equity_details.ADR_structure\", \n \"## ERROR: Field not returned from BICE ##\"\n ], \n [ \n \"master_information.organization_master.org_country_code\", \n \"AT\"\n ], \n [ \n \"debt.payment_schedule.pool_factor\", \n null\n ], \n [ \n \"debt.cmo_details.tranche_number\", \n null\n ], \n [ \n \"debt.fixed_income.debt_type\", \n \"## ERROR: Field not returned from BICE ##\"\n ]\n ],\n \"Value\" : \"Centrobank AG\"\n }\n },\n \"SourceFields\" : {\n \"ABCD\" : {\n \"Instrument_Issue_Issuer_IdcIssuer\" : {\n \"_t\" : \"StringSecurityDataField\",\n \"LastUpdate\" : \"16620353903726245\",\n \"EffectiveDateTime\" : \"2022-09-01T12:29:45.8112946\",\n \"AssetType\" : -1,\n \"Source\" : \"ABCD\",\n \"BillingCategory\" : \"Test Billing\",\n \"BillingInfo\" : [ \n [ \n \"master_information.instrument_master.trex_asset_type\", \n \"20\"\n ], \n [ \n \"equity.equity_details.ADR_structure\", \n \"## ERROR: Field not returned from BICE ##\"\n ], \n [ \n \"master_information.organization_master.org_country_code\", \n \"AT\"\n ], \n [ \n \"debt.payment_schedule.pool_factor\", \n null\n ], \n [ \n \"debt.cmo_details.tranche_number\", \n null\n ], \n [ \n \"debt.fixed_income.debt_type\", \n \"## ERROR: Field not returned from BICE ##\"\n ]\n ],\n \"Value\" : \"CENHAN\"\n },\n \"Instrument_Issue_IssueCurrency_Iso4217Alpha3\" : {\n \"_t\" : \"StringSecurityDataField\",\n \"LastUpdate\" : \"16620353903726245\",\n \"EffectiveDateTime\" : \"2022-09-01T12:29:45.8112946\",\n \"AssetType\" : -1,\n \"Source\" : \"ABCD\",\n \"BillingCategory\" : \"Test Billing\",\n \"BillingInfo\" : [ \n [ \n \"master_information.instrument_master.trex_asset_type\", \n \"20\"\n ], \n [ \n 
\"equity.equity_details.ADR_structure\", \n \"## ERROR: Field not returned from BICE ##\"\n ], \n [ \n \"master_information.organization_master.org_country_code\", \n \"AT\"\n ], \n [ \n \"debt.payment_schedule.pool_factor\", \n null\n ], \n [ \n \"debt.cmo_details.tranche_number\", \n null\n ], \n [ \n \"debt.fixed_income.debt_type\", \n \"## ERROR: Field not returned from BICE ##\"\n ]\n ],\n \"Value\" : \"EUR\"\n },\n \"Instrument_Debt_CouponSchedule\" : {\n \"_t\" : \"ArraySecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"BillingCategory\" : \"Manual Entry\",\n \"OverrideUntil\" : \"2021-01-14T14:18:43.13\",\n \"Value\" : [ \n {\n \"_t\" : \"RecordSecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T14:18:43.13\",\n \"Value\" : {\n \"Price\" : {\n \"_t\" : \"DecimalSecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T00:00:00\",\n \"Value\" : \"77777.0\"\n },\n \"IsPercentage\" : {\n \"_t\" : \"BooleanSecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T00:00:00\",\n \"Value\" : false\n },\n \"Date\" : {\n \"_t\" : \"DateSecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T00:00:00\",\n \"Value\" : \"2020-01-01\"\n }\n }\n }\n ]\n }\n },\n \"OpenFigi\" : {\n \"Instrument_Naming_ShortName\" : {\n \"_t\" : \"StringSecurityDataField\",\n \"LastUpdate\" : \"16620353905711796\",\n \"EffectiveDateTime\" : \"2022-09-01T12:29:46.2499029\",\n \"AssetType\" : -1,\n \"Source\" : \"OpenFigi\",\n \"BillingCategory\" : \"OpenFigi\",\n \"BillingInfo\" : [],\n \"Value\" : \"TEST 0 PERP\"\n },\n \"Instrument_Classification_BloombergSecurityType\" : {\n \"_t\" : \"StringSecurityDataField\",\n \"LastUpdate\" : \"16620353905711796\",\n \"EffectiveDateTime\" : \"2022-09-01T12:29:46.2499029\",\n \"AssetType\" : -1,\n \"Source\" : \"OpenFigi\",\n \"BillingCategory\" : \"OpenFigi\",\n \"BillingInfo\" : [],\n \"Value\" : \"EURO-ZONE\"\n },\n \"Instrument_PriceDivisor\" : {\n \"_t\" : \"DecimalSecurityDataField\",\n \"LastUpdate\" : \"16620369913791479\",\n \"EffectiveDateTime\" : \"2022-09-01T10:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"BillingCategory\" : \"Test Billing\",\n \"BillingInfo\" : [ \n [ \n \"master_information.instrument_master.trex_asset_type\", \n \"20\"\n ], \n [ \n \"equity.equity_details.ADR_structure\", \n \"## ERROR: Field not returned from BICE ##\"\n ], \n [ \n \"master_information.organization_master.org_country_code\", \n \"AT\"\n ], \n [ \n \"debt.payment_schedule.pool_factor\", \n null\n ], \n [ \n \"debt.cmo_details.tranche_number\", \n null\n ], \n [ \n \"debt.fixed_income.debt_type\", \n \"## ERROR: Field not returned from BICE ##\"\n ]\n ],\n \"OverrideUntil\" : \"2021-09-30T14:18:43.13\",\n \"Value\" : \"101.0\"\n },\n \"Instrument_Debt_CouponSchedule\" : {\n \"_t\" : \"ArraySecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : 
\"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"BillingCategory\" : \"Manual Entry\",\n \"OverrideUntil\" : \"2021-01-14T14:18:43.13\",\n \"Value\" : [ \n {\n \"_t\" : \"RecordSecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T14:18:43.13\",\n \"Value\" : {\n \"Price\" : {\n \"_t\" : \"DecimalSecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T00:00:00\",\n \"Value\" : \"77777.0\"\n },\n \"IsPercentage\" : {\n \"_t\" : \"BooleanSecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T00:00:00\",\n \"Value\" : false\n },\n \"Date\" : {\n \"_t\" : \"DateSecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T00:00:00\",\n \"Value\" : \"2020-01-01\"\n }\n }\n }\n ]\n }\n },\n \"RemotePlus\" : {\n \"Instrument_MarketData_DailyVolume\" : {\n \"_t\" : \"DecimalSecurityDataField\",\n \"LastUpdate\" : \"16620353906311889\",\n \"EffectiveDateTime\" : \"2022-09-01T00:00:00\",\n \"AssetType\" : -1,\n \"Source\" : \"RemotePlus\",\n \"BillingCategory\" : \"Test Billing2\",\n \"BillingInfo\" : [ \n [ \n \"trexASSET\", \n \"20\"\n ], \n [ \n \"TYP1\", \n null\n ], \n [ \n \"ETDTYPE\", \n null\n ], \n [ \n \"ATYPE\", \n \"30\"\n ], \n [ \n \"COQ\", \n \"AT\"\n ]\n ],\n \"Value\" : \"294000.0\"\n },\n \"Instrument_MarketData_LastPrice\" : {\n \"_t\" : \"DecimalSecurityDataField\",\n \"LastUpdate\" : \"16620353906311889\",\n \"EffectiveDateTime\" : \"2022-09-01T00:00:00\",\n \"AssetType\" : -1,\n \"Source\" : \"RemotePlus\",\n \"BillingCategory\" : \"Test Billing2\",\n \"BillingInfo\" : [ \n [ \n \"trexASSET\", \n \"20\"\n ], \n [ \n \"TYP1\", \n null\n ], \n [ \n \"ETDTYPE\", \n null\n ], \n [ \n \"ATYPE\", \n \"30\"\n ], \n [ \n \"COQ\", \n \"AT\"\n ]\n ],\n \"Value\" : \"0.309\"\n },\n \"Instrument_PriceDivisor\" : {\n \"_t\" : \"DecimalSecurityDataField\",\n \"LastUpdate\" : \"16620369913791479\",\n \"EffectiveDateTime\" : \"2022-09-01T10:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"BillingCategory\" : \"Test Billing\",\n \"BillingInfo\" : [ \n [ \n \"master_information.instrument_master.trex_asset_type\", \n \"20\"\n ], \n [ \n \"equity.equity_details.ADR_structure\", \n \"## ERROR: Field not returned from BICE ##\"\n ], \n [ \n \"master_information.organization_master.org_country_code\", \n \"AT\"\n ], \n [ \n \"debt.payment_schedule.pool_factor\", \n null\n ], \n [ \n \"debt.cmo_details.tranche_number\", \n null\n ], \n [ \n \"debt.fixed_income.debt_type\", \n \"## ERROR: Field not returned from BICE ##\"\n ]\n ],\n \"OverrideUntil\" : \"2021-09-30T14:18:43.13\",\n \"Value\" : \"101.0\"\n },\n \"Instrument_Debt_CouponSchedule\" : {\n \"_t\" : \"ArraySecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"BillingCategory\" : \"Manual Entry\",\n \"OverrideUntil\" : \"2021-01-14T14:18:43.13\",\n \"Value\" : [ \n {\n \"_t\" : \"RecordSecurityDataField\",\n \"LastUpdate\" 
: \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T14:18:43.13\",\n \"Value\" : {\n \"PrBICE\" : {\n \"_t\" : \"DecimalSecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T00:00:00\",\n \"Value\" : \"77777.0\"\n },\n \"IsPercentage\" : {\n \"_t\" : \"BooleanSecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T00:00:00\",\n \"Value\" : false\n },\n \"Date\" : {\n \"_t\" : \"DateSecurityDataField\",\n \"LastUpdate\" : \"16620388323222898\",\n \"EffectiveDateTime\" : \"2021-01-12T15:18:43.13\",\n \"AssetType\" : 0,\n \"Source\" : \"Manual\",\n \"OverrideUntil\" : \"2021-01-14T00:00:00\",\n \"Value\" : \"2020-01-01\"\n }\n }\n }\n ]\n }\n }\n }\n}\n", "text": "Ok, here is an example doc, please give it a try.", "username": "Ed_Motor" }, { "code": "\"SourceFields.<variable1>.<variable2>.Soruce\"[\n {\n _id: ObjectId(\"63116f45413bc94d7e966242\"),\n groupedFilteredArray: [\n {\n Instrument_Debt_CouponSchedule: {\n _t: 'ArraySecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n BillingCategory: 'Manual Entry',\n OverrideUntil: '2021-01-14T14:18:43.13',\n Value: [\n {\n _t: 'RecordSecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T14:18:43.13',\n Value: {\n Price: {\n _t: 'DecimalSecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T00:00:00',\n Value: '77777.0'\n },\n IsPercentage: {\n _t: 'BooleanSecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T00:00:00',\n Value: false\n },\n Date: {\n _t: 'DateSecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T00:00:00',\n Value: '2020-01-01'\n }\n }\n }\n ]\n }\n },\n {\n Instrument_PriceDivisor: {\n _t: 'DecimalSecurityDataField',\n LastUpdate: '16620369913791479',\n EffectiveDateTime: '2022-09-01T10:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n BillingCategory: 'Test Billing',\n BillingInfo: [\n [\n 'master_information.instrument_master.trex_asset_type',\n '20'\n ],\n [\n 'equity.equity_details.ADR_structure',\n '## ERROR: Field not returned from BICE ##'\n ],\n [\n 'master_information.organization_master.org_country_code',\n 'AT'\n ],\n [ 'debt.payment_schedule.pool_factor', null ],\n [ 'debt.cmo_details.tranche_number', null ],\n [\n 'debt.fixed_income.debt_type',\n '## ERROR: Field not returned from BICE ##'\n ]\n ],\n OverrideUntil: '2021-09-30T14:18:43.13',\n Value: '101.0'\n },\n Instrument_Debt_CouponSchedule: {\n _t: 'ArraySecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n BillingCategory: 'Manual Entry',\n OverrideUntil: '2021-01-14T14:18:43.13',\n Value: [\n {\n _t: 'RecordSecurityDataField',\n LastUpdate: 
'16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T14:18:43.13',\n Value: {\n Price: {\n _t: 'DecimalSecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T00:00:00',\n Value: '77777.0'\n },\n IsPercentage: {\n _t: 'BooleanSecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T00:00:00',\n Value: false\n },\n Date: {\n _t: 'DateSecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T00:00:00',\n Value: '2020-01-01'\n }\n }\n }\n ]\n }\n },\n {\n Instrument_PriceDivisor: {\n _t: 'DecimalSecurityDataField',\n LastUpdate: '16620369913791479',\n EffectiveDateTime: '2022-09-01T10:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n BillingCategory: 'Test Billing',\n BillingInfo: [\n [\n 'master_information.instrument_master.trex_asset_type',\n '20'\n ],\n [\n 'equity.equity_details.ADR_structure',\n '## ERROR: Field not returned from BICE ##'\n ],\n [\n 'master_information.organization_master.org_country_code',\n 'AT'\n ],\n [ 'debt.payment_schedule.pool_factor', null ],\n [ 'debt.cmo_details.tranche_number', null ],\n [\n 'debt.fixed_income.debt_type',\n '## ERROR: Field not returned from BICE ##'\n ]\n ],\n OverrideUntil: '2021-09-30T14:18:43.13',\n Value: '101.0'\n },\n Instrument_Debt_CouponSchedule: {\n _t: 'ArraySecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n BillingCategory: 'Manual Entry',\n OverrideUntil: '2021-01-14T14:18:43.13',\n Value: [\n {\n _t: 'RecordSecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T14:18:43.13',\n Value: {\n PrBICE: {\n _t: 'DecimalSecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T00:00:00',\n Value: '77777.0'\n },\n IsPercentage: {\n _t: 'BooleanSecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T00:00:00',\n Value: false\n },\n Date: {\n _t: 'DateSecurityDataField',\n LastUpdate: '16620388323222898',\n EffectiveDateTime: '2021-01-12T15:18:43.13',\n AssetType: 0,\n Source: 'Manual',\n OverrideUntil: '2021-01-14T00:00:00',\n Value: '2020-01-01'\n }\n }\n }\n ]\n }\n }\n ]\n }\n]\ngroupFilteredArray\"SourceFields.<variable1>.<variable2>.Soruce\"\"SourceFields.ABCD\"\"SourceFields.OpenFigi\"\"SourceFields.RemotePlus\"“SourceFields.< *source* >.< *field_name* >.Source”: ‘Manual’\"Source\"\"Manual\"db.collection.aggregate([\n{\n \"$addFields\": {\n \"array\": {\n \"$objectToArray\": \"$SourceFields\"\n }\n }\n},\n{\n \"$addFields\": {\n \"array2\": {\n \"$map\": {\n \"input\": \"$array\",\n \"in\": {\n \"$objectToArray\": \"$$this.v\"\n }\n }\n }\n }\n},\n{ \"$project\": { \"array2\": 1 } },\n{ \"$unwind\": \"$array2\" },\n{\n \"$project\": {\n \"filteredArray\": {\n \"$filter\": {\n \"input\": \"$array2\",\n \"cond\": {\n \"$eq\": [\n \"$$this.v.Source\",\n \"Manual\"\n ]\n }\n }\n }\n 
}\n},\n{\"$project\":{\"reconverted\":{\"$arrayToObject\":\"$filteredArray\"}}},\n{\n \"$group\": {\n \"_id\": \"$_id\",\n \"groupedFilteredArray\": {\n \"$push\": \"$reconverted\"\n }\n }\n}\n])\n$addFields$objectToArray$arrayToObject$map$project$unwind$filter$group$push$project", "text": "Thanks @Ed_Motor,I’m not too sure what you’re expecting as an output but I have done some testing to obtain the below output. From the sample document you provided, I can see there are 5 instances where \"SourceFields.<variable1>.<variable2>.Soruce\" have a value of “Manual”. Is that correct? If so, here is the output from my testing:The groupFilteredArray field contains 3 documents inside. However, it has 5 instances where \"SourceFields.<variable1>.<variable2>.Soruce\" have a value of “Manual”. From what I had counted from the original sample document, these were the paths where each instance existed:Note: The above output does not include the Original Source name / details (i.e. OpenFigi, ABCD, RemotePlus)I have assumed that there is 2 layers of “variables” for each of the input documents.Would the example output based off my testing include enough information to demonstrate all records (or “securities”), where “SourceFields.< *source* >.< *field_name* >.Source”: ‘Manual’ for your use case?I will also add, the way this is done is extremely process heavy. If possible, it may be best to retrieve the original documents and then filter them out from the application side. This operation i’ve performed may suit your use case if performed on a minimal set of document(s) (depending on your infrastructure / environment as well) in which you may not encounter resource pressure or long operation times. If you plan to run this as a “once in a while” situation, then it may work too assuming you are okay with the resource overhead.Have you considered re-designing the schema as I noticed there is more \"Source\" fields that have \"Manual\" as the value deeper in the same document (4th, 5th layers, etc.) and especially if you plan to run this filtering very frequently.Example aggregation for the above output:Some of the aggregation stages / operators used in the above for you reference:Very important to note I have only tested this a couple of times and on the single sample document provided. Perhaps other community members may be able to chime in with more suitable alternatives to the above example. I have used $project eventually in my testing due to the resulting document after each stage becoming too large to work with visually. You can adjust accordingly if you feel this works for your use case(s). It is highly recommend to test on a test environment to verify it suits all your use case(s) and requirements before trying in production. However, I would recommend going over the following documentation regarding schema design:Regards,\nJason", "username": "Jason_Tran" }, { "code": "ObjectId(\"63116f45413bc94d7e966242\")Fetched 0 record(s) in 0ms", "text": "@Jason_Tran thank you for reply/output provided!What you have provided I believe is good enough for me, because as long as I get:ObjectId(\"63116f45413bc94d7e966242\")I can then use these ids and fetch securities in question using API calls. It’s also very good that I can immediately view fields with “Manual”, even though I don’t see the top level, that’s not an issue at all.The 5 instances is exactly how many I would expect to be in the output in this case.As for process heavy I think it should be fine. 
Environment is spinning on AWS and there are plenty of resources; really, this is more of a unique situation where I suspect some data may be causing time-outs, because it was manually amended and it is affecting only a particular set of securities and only the staging environment. So indeed it will be a "once in a while" situation.\nIn regards to the re-design, I can bring this up with the developers, as I'm more on the end-to-end testing side of things. Interestingly, I did try your provided query on my local instance with 6 sample documents, one of them having this "Manual" source, but got Fetched 0 record(s) in 0ms - is there anything I need to adjust in the query, or can it be run as is?\nRegards,\nEd", "username": "Ed_Motor" }, { "code": "Fetched 0 record(s) in 0ms", "text": "Glad to hear those details are sufficient for now, and thanks for providing further context regarding the environment.\nFor the above sample document(s) statement, I will send a DM.\nRegards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
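If, as in Ed's case, only the _id values are needed for follow-up API calls, one hedged variation on Jason's pipeline is to keep its stages up through the $filter and then reduce to ids; these two stages are a fragment meant to be appended to that pipeline, not a standalone query:

```javascript
// Appended after the $filter stage of the pipeline above:
{ $match: { "filteredArray.0": { $exists: true } } }, // keep docs with >= 1 "Manual" field
{ $group: { _id: "$_id" } }                           // one row per matching document id
```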
Find deeply nested data in MongoDB
2022-07-22T11:23:09.593Z
Find deeply nested data in MongoDB
4,453
https://www.mongodb.com/…9_2_1024x572.png
[ "saintlouis-mug" ]
[ { "code": "Director, Customer Digital Experience at Ameren", "text": "\nSaintLouis1345×752 89.1 KB\nJoin Confluent and MongoDB at Topgolf in St. Louis: Celebrating a Day in the Life of a Data ProfessionalToday’s customers expect real-time data from businesses. To rise to these expectations, modern organizations are deploying data streaming technology, the most popular of which is Apache Kafka.Join Confluent and MongoDB as we share use cases and customer best practices on how organizations can harness the full power of data in motion to innovate and gain a competitive advantage in the modern digital world.Join us for this opportunity to network and share with other professionals to learn more about their data transformation and play golf!Event Type: In-Person\n Location: Topgolf Chesterfield .\n 16851 N Outer 40 Rd, Chesterfield, MO 63005To register: Join Confluent and MongoDB at Topgolf in St. Louis: Celebrating a Day in the Life of a Data ProfessionalDirector, Customer Digital Experience at AmerenJoin the St Louis group to stay updated with upcoming meetups and discussions in St-Louis.", "username": "Daniel_Hawthorne" }, { "code": "", "text": "Thanks, everyone for attending!\nIMG_06641920×1440 183 KB\n", "username": "Harshit" } ]
Saint Louis MUG: Celebrating a Day in the Life of a Data Professional!
2022-08-03T20:48:40.381Z
Saint Louis MUG: Celebrating a Day in the Life of a Data Professional!
3,124
null
[ "field-encryption" ]
[ { "code": "", "text": "i have this error andi i don’t know how to fix this i tred literrary everithing and couldn’t find fix{“t”:{\"$date\":“2022-09-05T19:18:34.510+02:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:\"-\n“,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:\n{“minWireVersion”:0,“maxWireVersion”:17},“incomingInternalClient”:\n{“minWireVersion”:0,“maxWireVersion”:17},“outgoing”:{“minWireVersion”:6,“maxWireVersion”:17},“isInternalClient”:true}}}\n{“t”:{”$date\":“2022-09-05T19:18:34.511+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285,\n“ctx”:“main”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.512+02:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.513+02:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“namespace”:“config.tenantMigrationDonors”}}\n{“t”:{\"$date\":“2022-09-05T19:18:34.513+02:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“namespace”:“config.tenantMigrationRecipients”}}\n{“t”:{\"$date\":“2022-09-05T19:18:34.513+02:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“ShardSplitDonorService”,“namespace”:“config.tenantSplitDonors”}}\n{“t”:{\"$date\":“2022-09-05T19:18:34.513+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.513+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:14171,“port”:27017,“dbPath”:\"/data/db\",“architecture”:“64-bit”,“host”:“g”}}\n{“t”:{\"$date\":“2022-09-05T19:18:34.513+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“6.0.1”,“gitVersion”:“32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b”,“openSSLVersion”:“OpenSSL 1.1.1f 31 Mar 2020”,“modules”:,“allocator”:“tcmalloc”,“environment”:{“distmod”:“ubuntu2004”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{\"$date\":“2022-09-05T19:18:34.513+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Ubuntu”,“version”:“22.04”}}}\n{“t”:{\"$date\":“2022-09-05T19:18:34.513+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{}}}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“E”, “c”:“CONTROL”, “id”:20557, “ctx”:“initandlisten”,“msg”:“DBException in initAndListen, terminating”,“attr”:{“error”:“NonExistentPath: Data directory /data/db not found. 
Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the ‘storage.dbPath’ option in the configuration file.”}}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“REPL”, “id”:4784900, “ctx”:“initandlisten”,“msg”:“Stepping down the ReplicationCoordinator for shutdown”,“attr”:{“waitTimeMillis”:15000}}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“REPL”, “id”:4794602, “ctx”:“initandlisten”,“msg”:“Attempting to enter quiesce mode”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:\"-\", “id”:6371601, “ctx”:“initandlisten”,“msg”:“Shutting down the FLE Crud thread pool”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784901, “ctx”:“initandlisten”,“msg”:“Shutting down the MirrorMaestro”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784902, “ctx”:“initandlisten”,“msg”:“Shutting down the WaitForMajorityService”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“NETWORK”, “id”:20562, “ctx”:“initandlisten”,“msg”:“Shutdown: going to close listening sockets”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784905, “ctx”:“initandlisten”,“msg”:“Shutting down the global connection pool”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784906, “ctx”:“initandlisten”,“msg”:“Shutting down the FlowControlTicketholder”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:\"-\", “id”:20520, “ctx”:“initandlisten”,“msg”:“Stopping further Flow Control ticket acquisitions.”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784918, “ctx”:“initandlisten”,“msg”:“Shutting down the ReplicaSetMonitor”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784921, “ctx”:“initandlisten”,“msg”:“Shutting down the MigrationUtilExecutor”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“MigrationUtil-TaskExecutor”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784923, “ctx”:“initandlisten”,“msg”:“Shutting down the ServiceEntryPoint”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784925, “ctx”:“initandlisten”,“msg”:“Shutting down free monitoring”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784927, “ctx”:“initandlisten”,“msg”:“Shutting down the HealthLog”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784928, “ctx”:“initandlisten”,“msg”:“Shutting down the TTL monitor”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:6278511, “ctx”:“initandlisten”,“msg”:“Shutting down the Change Stream Expired Pre-images Remover”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784929, “ctx”:“initandlisten”,“msg”:“Acquiring the global lock for shutdown”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:\"-\", “id”:4784931, “ctx”:“initandlisten”,“msg”:“Dropping the scope cache for shutdown”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:20565, “ctx”:“initandlisten”,“msg”:“Now exiting”}\n{“t”:{\"$date\":“2022-09-05T19:18:34.515+02:00”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“initandlisten”,“msg”:“Shutting down”,“attr”:{“exitCode”:100}}", "username": "Stepan_Komis" }, { "code": " /data/db", "text": 
"Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the ‘storage.dbPath’ option in the configuration file.”the folder to hold data (default is /data/db) is missing. an easy-to-miss detail and not-so-easy to manually read from the error log. the solution is in it.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Command "mongod" throws long error
2022-09-05T19:21:46.142Z
Command &ldquo;mongod&rdquo; throws long error
3,285
null
[]
[ { "code": "", "text": "Good Evening,I have installed mongo DB on my Ubunto version 20.04, when i run the command ‘mongo’, i get the following errorMongoDB shell version v4.4.15\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1any help would be great thanks", "username": "Sherelle_Scott" }, { "code": "mongodsystemctl status mongod", "text": "Hi @Sherelle_Scott and welcome to the MongoDB Community forums! It sounds like the mongod process is not running. Can you run systemctl status mongod and report the results here?", "username": "Doug_Duncan" }, { "code": "", "text": "systemctl status mongodThank you for getting back to me @Doug_Duncanthe command returnssystemctl status mongod\n● mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset>\nActive: inactive (dead)\nDocs: https://docs.mongodb.org/manual", "username": "Sherelle_Scott" }, { "code": "sudo systemctl start mongodsystemctl status mongod", "text": "That confirms that the process is not running. Try running sudo systemctl start mongod. After a couple of seconds rerun systemctl status mongod and it should show that it’s running. If not you can show the error log and we can help troubleshoot more.", "username": "Doug_Duncan" }, { "code": "mongod --versionmongodsudo mkdir /data\ncd /data\nsudo mkdir db\nsudo pkill -f mongod\nmongod", "text": "Thank you for you help @Doug_Duncan.I stole the below solution from stack overflow that seems to have it working.Thank you for helping and sorry to bother you.", "username": "Sherelle_Scott" }, { "code": "mongodsudo systemctl start mongodsudosudomongodsudomongodsudo/data/db/var/lib/mongo", "text": "It seems you created a solution to a problem you didn’t have. Based on the fact that you had mongod as a service, the installer would have already created a data directory and a user with limited permissions to run the process. All you needed to do was run sudo systemctl start mongod (there are ways to get around using sudo to start a process, but every time I type sudo it makes me think about what I’m doing and why I’m needing elevated privileges) to get the service up and running.Then use sudo mongod command.There are limited cases where you should run a service using sudo, and mongod is not one of them. Running with sudo introduces potential security risks.As for creating /data/db, that is where MongoDB put the data files when it was first created and for several versions afterwards, but I believe the new location on Ubuntu is something like /var/lib/mongo which is more in line with standards and best practices.", "username": "Doug_Duncan" }, { "code": "", "text": "I’m still new to this,\nopen to listening and understanding everything you have to say", "username": "Sherelle_Scott" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connect to server 127.0.0.1:27017,
2022-09-04T22:34:24.833Z
Connect to server 127.0.0.1:27017,
4,278
null
[ "aggregation", "crud" ]
[ { "code": "updateManyupdateOnedb.bills.insertMany([\n { \n _id: \"111\",\n balance: 40000,\n payments: [ \n { billPaymentId: \"123\", amount: 25000 }, \n { billPaymentId: \"456\", amount: 15000 } ],\n },\n { \n _id: \"222\",\n balance: 12,\n payments: [ \n { billPaymentId: \"99\", amount: 3 }, \n { billPaymentId: \"101\", amount: 9 } ],\n }\n])\npaymentsbalanceupdateOnedb.bills.updateOne(\n{ _id: \"606f028d-9409-4f67-bd5d-f254c258ff0c\" }, [\n { $pull: { billPaymentId: \"427066e5-5a4a-4667-8f86-c10cb2560c77\" } },\n { $set: { balance: { $sum: '$payments.amount' } } }\n])\n Unrecognized pipeline stage name: '$pull'", "text": "Hi, updateMany and updateOne accepts a limited aggregation pipeline.\nWe need to remove an item from an array (e.g via $pull) and update a property based on the result.We are trying to remove some element from payments and update balance with the sum of the payments that are left.\nIf $pull would have worked with updateOne it would look like this:but as mentioned, $pull is not supported and this query return Unrecognized pipeline stage name: '$pull'.\nAny other ideas maybe?", "username": "Benny_Kachanovsky1" }, { "code": "updateOnedb.collection.updateOne()$addFields$set$project$unset$replaceRoot$replaceWithdb.bills.updateOne(\n {\n _id: \"111\"\n },\n [\n {\n $set: {\n payments: {\n $filter: {\n input: \"$payments\",\n as: \"payment\",\n cond: {\n $ne: [\"$$payment.billPaymentId\", \"123\"]\n }\n }\n }\n }\n },\n {\n $set: {\n balance: {\n \"$sum\": \"$payments.amount\"\n }\n }\n }\n ]\n)\n$set", "text": "Hi @Benny_Kachanovsky1 and welcome to the MongoDB Community forums! As you noticed, the pipeline stages available to updateOne are limited to only the following:Update with Aggregation PipelineStarting in MongoDB 4.2, the db.collection.updateOne() can use an aggregation pipeline for the update. The pipeline can consist of the following stages:The following was just a quick test to play around with things, but it could be used as a starting point to get what you want hopefully:I am not sure why I had to put the update for the balance into a second $set block, but it was not working in the original location for some reason.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update document using $pull and $set all together
2022-09-05T16:43:31.336Z
Update document using $pull and $set all together
1,635
https://www.mongodb.com/…8a904c680b83.png
[]
[ { "code": "{\n\t\"_id\": \"1141508\",\n\t\"orderDevice\": \"desktop\",\n\t\"deviceVersion\": \"5.0\",\n\t\"sourceType\": \"ONLINE\",\n\t\"grandTotal\": 3710.0,\n\t\"basketTotal\": 0.0,\n\t\"couponCode\": \"LUCKY200\",\n\t\"attributes\": {},\n\t\"orders\": [{\n\t\t\t\"orderId\": \"1001-66-1141508\",\n\t\t\t\"storeId\": \"66\",\n\t\t\t\"orderStatus\": \"DELIVERY_ADDRESS\",\n\t\t\t\"deliveryType\": \"HYPERLOCAL_DELIVERY\",\n\t\t\t\"carrierPartner\": \"TELYPORT\",\n\t\t\t\"isGiftWrapped\": \"N\",\n\t\t\t\"isOtpVerified\": \"N\",\n\t\t\t\"isSubscription\": \"N\",\n\t\t\t\"isSubscriptionScheduler\": \"N\",\n\t\t\t\"posStatus\": \"N\",\n\t\t\t\"orderEvents\": {\n\t\t\t\t\"orderCreatedDate\": \"\",\n\t\t\t\t\"orderEntryDate\": \"\"\n\t\t\t},\n\t\t\t\"amount\": {\n\t\t\t\t\"remainingSubTotal\": 0.0,\n\t\t\t\t\"totalSkuDiscount\": 0.0,\n\t\t\t\t\"totalOtherDiscount\": 0.0,\n\t\t\t\t\"grandTotal\": 3710.0,\n\t\t\t\t\"giftCardTotal\": 0.0,\n\t\t\t\t\"oldGiftCardTotal\": 0.0,\n\t\t\t\t\"grandTotalInvoice\": 0.0,\n\t\t\t\t\"paymentCharges\": 0.0\n\t\t\t}\n\t\t},\n\t\t{\n\t\t\t\"orderId\": \"2002-999-1141508\",\n\t\t\t\"storeId\": \"999\",\n\t\t\t\"orderStatus\": \"DELIVERY_ADDRESS\",\n\t\t\t\"deliveryType\": \"STANDARD_DELIVERY\",\n\t\t\t\"carrierPartner\": \"ECOM_EXPRESS\",\n\t\t\t\"isGiftWrapped\": \"N\",\n\t\t\t\"isOtpVerified\": \"N\",\n\t\t\t\"isSubscription\": \"N\",\n\t\t\t\"isSubscriptionScheduler\": \"N\",\n\t\t\t\"posStatus\": \"N\",\n\t\t\t\"orderEvents\": {\n\t\t\t\t\"orderCreatedDate\": \"\",\n\t\t\t\t\"orderEntryDate\": \"\"\n\t\t\t},\n\t\t\t\"amount\": {\n\t\t\t\t\"remainingSubTotal\": 0.0,\n\t\t\t\t\"totalSkuDiscount\": 247.0,\n\t\t\t\t\"totalOtherDiscount\": 0.0,\n\t\t\t\t\"grandTotal\": 6520.0,\n\t\t\t\t\"giftCardTotal\": 0.0,\n\t\t\t\t\"oldGiftCardTotal\": 0.0,\n\t\t\t\t\"grandTotalInvoice\": 0.0,\n\t\t\t\t\"paymentCharges\": 0.0\n\t\t\t}\n\t\t}\n\t]\n}\n", "text": "\nimage904×419 59.4 KB\nHi This is my JSON I wanted to sum the key grandTotal that is in the array object inside the amount object and get the total and assign that total to grandTotalFor Reference:As above mentioned Json i wanted the output of grandTotal to be 10230 which is now 3710.0 in the reference so i need to sum up order.amount.grandtotal(that is 3710.0)+order.amount.grandtotal(that is 6520.0)\nKindly do the needfull.", "username": "Dhanush_R_M" }, { "code": "", "text": "What you want to do it use $addFields together with $reduce to compute your grandTotal.", "username": "steevej" }, { "code": "_iddb.test.aggregate(\n [\n {\n \"$project\": {\n \"orders\": 1\n }\n },\n {\n \"$unwind\": \"$orders\"\n },\n {\n \"$group\": {\n \"_id\": \"$_id\",\n \"grandTotal\": {\n \"$sum\": \"$orders.amount.grandTotal\"\n }\n }\n }\n ]\n)\n[ { _id: '1141508', grandTotal: 10230 } ]\norders", "text": "Hi @Dhanush_R_M and welcome to the MongoDB community forums! Steeve gives you one way of doing things, but it really depends on what you need from your output which you don’t show what you’re looking for in a result. If you just need the _id field and a grand total, you could run the following:This will return the following for the sample document you provided:Note that if you have a lot of documents that you’re trying to perform this on, you might run into performance issues so you will want to thoroughly test this before putting it into production. Of course this warning goes for any solution you put into production. 
The above aggregation pipeline will create a new document for every item in the orders array for every document sent through the pipeline.", "username": "Doug_Duncan" } ]
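If the goal is to write the summed value back into each document’s grandTotal field rather than just report it, the same idea works as an update with an aggregation pipeline. A minimal sketch (the collection name is an assumption; the thread does not give one):

db.orders.updateMany(
  {},
  [
    // "$orders.amount.grandTotal" resolves to the array of per-order totals,
    // and $sum adds them up (10230 for the sample document)
    { $set: { grandTotal: { $sum: "$orders.amount.grandTotal" } } }
  ]
)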
Sum of array value and set that value to key
2022-09-01T10:37:10.630Z
Sum of array value and set that value to key
2,651
null
[ "aggregation" ]
[ { "code": "[\n{\n \"one.two.three\": 4,\n \"number.two\": \"B\"\n},\n{\n \"one.two.three\": 7,\n \"number.two\": \"A\"\n},\n{\n \"one.two.three\": 10,\n \"number.two\": \"B\"\n}\n]\n{\n \"one.two.three\": 10,\n \"number.two\": \"A\"\n}\n", "text": "I would like to perform a complex merge:e.g.where the result is:because those are the maximum values…I could have any N+ number of arbitrary KV pairs, so I can’t just sort on a specific field", "username": "Noah_Kreiger" }, { "code": "", "text": "I am not sure aboutthose are the maximum valuesThe value A is usually considered smaller than B.It is not really common to have dots in field names.I usually avoid working with dynamic and arbitrary keys as it makes life harder. I use the attribute pattern and then it becomes easy because a simple $group can be use.You could always use $objectToArray to transform your data to a dynamic attribute pattern and the use $group as above. But if you frequently do this aggregation you might as well store the data using the pattern and save the extra step doing the $objectToArray.", "username": "steevej" } ]
Complex MongoDB Merge
2022-09-03T16:35:55.409Z
Complex MongoDB Merge
1,129
null
[ "aggregation" ]
[ { "code": "$match$groupdb.getCollection(\"mycollection\").aggregate([\n {\n \"$match\": {\n \"person.isVerified\": true\n }\n },\n {\n \"$match\": {\n \"skipped\": false\n }\n },\n {\n \"$match\": {\n \"result.secondsToComplete\": {\n \"$gt\": 0\n }\n }\n },\n {\n \"$match\": {\n \"creationDate\": {\n \"$gt\": ISODate(\"2022-01-02T00:00:00Z\"),\n \"$lt\": ISODate(\"2022-09-01T23:59:59.999Z\")\n \n }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$person.id\",\n \"sessionDate\": {\n \"$max\": \"$creationDate\"\n },\n \"completions\": {\n \"$sum\": {\n \"$ifNull\": [\n \"$completions\",\n 1\n ]\n }\n },\n \"testsMade\": {\n \"$sum\": \"$result.makes\"\n },\n \"testsTaken\": {\n \"$sum\": \"$result.attempts\"\n },\n \"ftMade\": {\n \"$sum\": \"$result.ftMade\"\n },\n \"ftTaken\": {\n \"$sum\": \"$result.ftTaken\"\n },\n \"fgTaken\": {\n \"$sum\": \"$result.fgTaken\"\n },\n \"fgMade\": {\n \"$sum\": \"$result.fgMade\"\n },\n \"threeMade\": {\n \"$sum\": \"$result.threeMade\"\n },\n \"threeTaken\": {\n \"$sum\": \"$result.threeTaken\"\n },\n \"twoMade\": {\n \"$sum\": \"$result.twoMade\"\n },\n \"twoTaken\": {\n \"$sum\": \"$result.twoTaken\"\n },\n \"longesttestsTestStreak\": {\n \"$max\": \"$result.testsTestStreak\"\n },\n \"firstDate\": {\n \"$min\": \"$result.firstDate\"\n },\n \"lastDate\": {\n \"$max\": \"$result.lastDate\"\n },\n \"secondsToCompleteTest\": {\n \"$min\": \"$result.secondsToComplete\"\n },\n \"firstName\": {\n \"$max\": \"$person.snapShot.firstName\"\n },\n \"lastName\": {\n \"$max\": \"$person.snapShot.lastName\"\n },\n \"personName\": {\n \"$max\": \"$person.snapShot.personName\"\n },\n \"personNameLastFirst\": {\n \"$max\": \"$person.snapShot.personNameLastFirst\"\n },\n \"metaTag\": {\n \"$max\": \"$person.snapShot.metaTag\"\n },\n \"membership\": {\n \"$max\": \"$person.snapShot.membership\"\n }\n }\n },\n {\n \"$addFields\": {\n \"hasFt\": {\n \"$cmp\": [\n {\n \"$ifNull\": [\n \"$ftTaken\",\n 0\n ]\n },\n 0\n ]\n },\n \"hasFg\": {\n \"$cmp\": [\n {\n \"$ifNull\": [\n \"$fgTaken\",\n 0\n ]\n },\n 0\n ]\n },\n \"hasTwo\": {\n \"$cmp\": [\n {\n \"$ifNull\": [\n \"$twoTaken\",\n 0\n ]\n },\n 0\n ]\n },\n \"hasThree\": {\n \"$cmp\": [\n {\n \"$ifNull\": [\n \"$threeTaken\",\n 0\n ]\n },\n 0\n ]\n },\n \"hasAttempts\": {\n \"$cmp\": [\n {\n \"$ifNull\": [\n \"$testsTaken\",\n 0\n ]\n },\n 0\n ]\n }\n }\n },\n {\n \"$addFields\": {\n \"freeThrowPercentage\": {\n \"$cond\": [\n \"$hasFt\",\n {\n \"$multiply\": [\n {\n \"$divide\": [\n \"$ftMade\",\n \"$ftTaken\"\n ]\n },\n 100\n ]\n },\n null\n ]\n },\n \"fieldGoalPercentage\": {\n \"$cond\": [\n \"$hasFg\",\n {\n \"$multiply\": [\n {\n \"$divide\": [\n \"$fgMade\",\n \"$fgTaken\"\n ]\n },\n 100\n ]\n },\n null\n ]\n },\n \"fourPointPercentage\": {\n \"$cond\": [\n \"$hasTwo\",\n {\n \"$multiply\": [\n {\n \"$divide\": [\n \"$twoMade\",\n \"$twoTaken\"\n ]\n },\n 100\n ]\n },\n null\n ]\n },\n \"fivePointPercentage\": {\n \"$cond\": [\n \"$hasThree\",\n {\n \"$multiply\": [\n {\n \"$divide\": [\n \"$threeMade\",\n \"$threeTaken\"\n ]\n },\n 100\n ]\n },\n null\n ]\n },\n \"overallPercentage\": {\n \"$cond\": [\n \"$hasAttempts\",\n {\n \"$multiply\": [\n {\n \"$divide\": [\n \"$testsMade\",\n \"$testsTaken\"\n ]\n },\n 100\n ]\n },\n null\n ]\n }\n }\n },\n {\n \"$sort\": {\n \"testsTaken\": -1\n }\n }\n ], {\"allowDiskUse\": true})\n\n\n", "text": "Hi all,I’m trying to figure out how to make this query more efficient. I have indices set up on each of the fields I use in any of the $match parts of the query. 
Looking at a full explain, the $group seems to bear the heavy load. Any ideas would be very welcome.", "username": "John_Peacock" }, { "code": "", "text": "Consolidate your $match stages into a single one. Maybe mongod does this by itself, but it could reduce the server workload.A $sort after a $group cannot use an index.In your first $addFields, you compute flags in order to use them in the following $addFields. You could forgo the first $addFields by using the expressions directly in the second. In principle, the memory used to compute an expression is released as soon as the expression is evaluated. The memory used by your new fields exists until the document is out of the pipeline.", "username": "steevej" } ]
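A minimal sketch of the first and third suggestions applied to the thread’s pipeline, trimmed to one computed field (the field and collection names come from the question):

db.getCollection("mycollection").aggregate([
  // one consolidated $match, so the planner sees all predicates at once
  { $match: {
      "person.isVerified": true,
      "skipped": false,
      "result.secondsToComplete": { $gt: 0 },
      "creationDate": { $gt: ISODate("2022-01-02T00:00:00Z"), $lt: ISODate("2022-09-01T23:59:59.999Z") }
  } },
  { $group: { _id: "$person.id", ftMade: { $sum: "$result.ftMade" }, ftTaken: { $sum: "$result.ftTaken" } /* ...other accumulators... */ } },
  // the flag expression is inlined, so no intermediate hasFt field is carried along
  { $addFields: {
      freeThrowPercentage: {
        $cond: [ { $gt: [ { $ifNull: ["$ftTaken", 0] }, 0 ] },
                 { $multiply: [ { $divide: ["$ftMade", "$ftTaken"] }, 100 ] },
                 null ]
      }
  } }
])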
Making a $group with multiple $sum more efficient
2022-09-01T22:15:10.950Z
Making a $group with multiple $sum more efficient
1,269
null
[ "swift" ]
[ { "code": "class FileItem: Object\n{\n @Persisted(primaryKey: true) var path: String\n ...\n}\nFileItemlet theItem: FileItem = someRealm.object(ofType: FileItem.self, forPrimaryKey: \"some/path\")\nlet pathsToFetch: [String] = ...\nvar items: [FileItem] = []\n\nfor path: String in pathsToFetch\n{\n if let item = someRealm.object(ofType: FileItem.self, forPrimaryKey: path) {\n items.append(item)\n }\n}\nSortDescriptorRealmSwift.SortDescriptorNSSortDescriptorpathsToFetchSetcontainsFileItemobject(ofType:forPrimaryKey:)", "text": "I’m modeling a filesystem and I have 1,000,000 of these in a Realm:Retrieving any single FileItem by the Primary Key is insanely fast:Fetching a handful (say, 10) items is most quickly accomplished like this:But the disadvantage here is that I lose access to sorting the results with Realm’s SortDescriptor. And because there’s no straightforward way to convert from RealmSwift.SortDescriptor to NSSortDescriptor, that’s inconvenient.I know I can make pathsToFetch a Set and query with contains, but that degrades performance, since I’m now evaluating every FileItem in the database. Is there a way to retrieve and sort a handful of objects by Primary Key that is still close to the performance of object(ofType:forPrimaryKey:)?", "username": "Bryan_Jones" }, { "code": "let queryString = \"\"\nfor path: String in pathsToFetch\n{\n if queryString = \"\" {\n queryString = \"_id == \\\"\\(path)\\\"\"\n } else {\n queryString += \" || _id == \\\"\\(path)\\\"\"\n }\n}\nconst items = someRealm.objects(FileItem.self).filtered(queryString)\nResults", "text": "I would typically build a query string and use that instead.Then I would use that to filter for the objets:This should give you a Results object that you can still do all the Realm stuff you want.", "username": "Kurt_Libby1" }, { "code": "FileItem.objects().filtered()==", "text": "@Kurt_Libby1 Thanks! How does the performance of that query scale with the number of FileItem objects in the database, though?In a way, I guess what I’m asking is what’s the difference in performance between fetching an item by Primarykey, versus fetching an item by an indexed string property using .objects().filtered(), assuming that we’re testing for an exact string match (==) in both cases.I just don’t have 10M files at my disposal to try it!", "username": "Bryan_Jones" }, { "code": "let pathsToFetch: [String] = [...]", "text": "I’m not sure. I also don’t have anywhere north of 10K objects to test it on.But in my experience, the reason I’m creating these types of queryString filters is to present some data to a user. The chances that the user actually needs a million objects is very low.As long as this queryString is the end of the line in any sort of manual pipeline/aggregation, I think you can take advantage of all of the quick Realm functions and then apply it.Where are you getting the let pathsToFetch: [String] = [...] from? If these are already in Realm like in a List or something, you don’t really need to do this, you can just get the list, but if they are from another source, and you can just send the string, my experience is that the filtering is very quick.", "username": "Kurt_Libby1" } ]
Fastest Way To Query Multiple Objects by Primary Key?
2022-09-04T07:55:13.827Z
Fastest Way To Query Multiple Objects by Primary Key?
2,258
null
[ "dot-net", "transactions" ]
[ { "code": "using IClientSessionHandle session = await _collection.Database.Client.StartSessionAsync();\nawait session.WithTransactionAsync(async (session, cancellationToken) =>\n{\n ...\n await _collection.InsertOneAsync(document, insertOneOptions, cancellationToken);\n return true;\n}\n await _collection.InsertOneAsync(session, document, insertOneOptions, cancellationToken);\n", "text": "Hello,Taking upon the following example of using transactions:Is the insert operation included in the transaction? Or I have to pass the session to the method?", "username": "Tudor-Radu_Hatos" }, { "code": "", "text": "This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using IClientSessionHandle.WithTransactionAsync
2022-08-25T07:48:05.875Z
Using IClientSessionHandle.WithTransactionAsync
1,922
null
[ "node-js" ]
[ { "code": "", "text": "I used to use this documentation: Class: Collection (mongodb.github.io)However, after version 4, there simply is no documentation. Why is this?", "username": "chs" }, { "code": "", "text": "Documentation is well alive at MongoDB Node.js Driverthis, for example, is the 4.9 version of the one you linked:Documentation for mongodb", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Yes, I’ve found that page many times. That’s it then? It’s absolute garbage. It doesn’t provide any relevant information.", "username": "chs" }, { "code": "", "text": "For example, if I want to find out what findOneAndUpdate returns, it doesn’t provide that information. “Returns Promise”. Yeah, thanks…", "username": "chs" }, { "code": "", "text": "Nope, it is not garbage. it is an API documentation, not a how-to guide. this just shows you are not yet accustomed to using TypeScript or just do not know how to use API documentation.For example, if I want to find out what findOneAndUpdate returns, it doesn’t provide that information. “Returns Promise”. Yeah, thanks…Did you really say that?open all findOneAndUpdate returns:\nv2.2 Promise if no callback passed\nv3.1 Promise if no callback passed\nv3.7 Promise if no callback passed\nv4.0 Promise<ModifyResult>\nv4.9 Promise<ModifyResult>over 6 years and 3 versions, the API doc has stayed almost the same except they started using TypeScript with v4.0.The driver still has the same Javascript nature (with new and deprecated features) when used in a program.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "It’s hot garbage. Look at the example I gave, which gave a list of options that you could send along, and what the method returns.Now look at this stupid bullshit. It literally doesn’t give anything useful. Yes, I know it returns a Promise…", "username": "chs" }, { "code": "value", "text": "Case in point. Older docs:value object Document returned from findAndModify command.\nlastErrorObject\tobject\t\nThe raw lastErrorObject returned from the command.\nok\tNumber\t\nIs 1 if the command executed correctly.Oh look, it returns the document as value. Very useful! And it also shows how to get the new document instead of the old one.The garbage you linked gives this information:Yes, that’s right. Literally nothing.", "username": "chs" }, { "code": "", "text": "It literally doesn’t give anything useful.Or you can just admit you don’t know how to read new documentation which is written with TypeScript.TSchema, Filter, UpdateFilter, FindOneAndUpdateOptions, ModifyResult … these are all type names you would learn once and once needed.They still imply the same things as before, just with an extra “type” layer.I am, too, inclined to see old style but you cannot start a defamation campaign just because developers have chosen a more secure way for the project with the “type safety” of TypeScript. After, what we are talking about is still not a how-to guide, but an API documentation instead.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "It’s not an API documentation, and TypeScript continues to be stupid. I don’t know how “TSchema” in itself conveys any information.", "username": "chs" }, { "code": "interface Movie { plot: string; title: string; }\nconst movies = database.collection<Movie>(\"movies\");\n", "text": "TSchema is just a model interface so you can have type check.But you don’t have to use TypeScript in your project. 
Because TypeScript compiles to plain JavaScript in the end (unless you use ts-node or Deno), all you need in that case is to follow the examples to write your queries (change the version number on the left if you need a different release).Almost all examples are given in both TypeScript and JavaScript.The only problem here is that if you want to do more, you will need to at least understand TypeScript (you don’t have to be a pro for that, just know the basics). It is not a new language, nor does it force you to use types all the time. It is just a “type” layer added on top of JavaScript that helps to catch errors early during coding. Any JavaScript file is a perfectly valid TypeScript file.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
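Since the thread revolves around what findOneAndUpdate returns, a minimal TypeScript sketch of reading the v4 ModifyResult (the database, collection, and fields are made up for illustration):

import { MongoClient, ModifyResult } from "mongodb";

interface Movie { title: string; views: number; }

async function run(): Promise<void> {
  const client = new MongoClient("mongodb://localhost:27017");
  const movies = client.db("test").collection<Movie>("movies");
  // ModifyResult<Movie> carries the same fields the old docs listed:
  // value (the document, or null), ok, and lastErrorObject
  const result: ModifyResult<Movie> = await movies.findOneAndUpdate(
    { title: "Alien" },
    { $inc: { views: 1 } },
    { returnDocument: "after" } // return the updated document instead of the old one
  );
  console.log(result.value?.views, result.ok);
  await client.close();
}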
Why is there no node.js documentation after version 4?
2022-09-03T19:01:02.364Z
Why is there no node.js documentation after version 4?
1,426
https://www.mongodb.com/…ac12084fddac.png
[ "containers", "storage" ]
[ { "code": "", "text": "Hi,\nI see mongostat still show up to 80% cache use even after I upped the wiredTiger cache to 2G.\n\nimage708×82 4.72 KB\n\nimage988×221 82.7 KB\nMy setupWhen I check the container memory usage, it shows about 2G free memory.\n\nCan you shed light?Thanks", "username": "Gunho_Cho" }, { "code": "", "text": "This topic was automatically closed after 60 days. New replies are no longer allowed.", "username": "system" } ]
M312: ch3 response time degradation, part 1 - cache use
2022-09-05T03:25:37.884Z
M312: ch3 response time degradation, part 1 - cache use
1,479
null
[]
[ { "code": "", "text": "my request doesnt even reach 1mb of size and yet i am getting error 413, any idea why?", "username": "Alen_Kogen" }, { "code": "", "text": "This may not be related to Atlas\nHow are you accessing your mongodb\nAre you using nginx websphere?\nSomething related to your environment settings", "username": "Ramachandra_Tummala" } ]
Error 413 on small request
2022-09-03T11:45:43.292Z
Error 413 on small request
1,437
null
[ "dot-net", "compass" ]
[ { "code": "string conn = @\"mongodb://abcdef:[email protected]:27017/?tls=true\";\nvar clientSettings = MongoClientSettings.FromUrl(new MongoUrl(conn));\n clientSettings.AllowInsecureTls = false;\n clientSettings.UseTls = true;\n\n SslSettings sslSettings = new SslSettings\n {\n EnabledSslProtocols = SslProtocols.Tls12,\n ClientCertificates = new[] { \n new X509Certificate(@\"mongodb.pem\"),\n new X509Certificate(@\"rootCA.crt\"),\n },\n };\n\n clientSettings.SslSettings = sslSettings; \n\n MongoClient client = new MongoClient(clientSettings);\n{\"A timeout occurred after 30000ms selecting a server using CompositeServerSelector\n\t{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, \n\tLatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. \nClient view of cluster state is { ClusterId : \\\"1\\\", Type : \\\"Unknown\\\", State : \\\"Disconnected\\\", Servers : [{ ServerId: \\\"\n{ ClusterId : 1, EndPoint : \\\"Unspecified/ec2-52-28-11-2.eu-central-1.compute.amazonaws.com:27017\\\" }\\\", \nEndPoint: \\\"Unspecified/ec2-52-28-11-2.eu-central-1.compute.amazonaws.com:27017\\\", ReasonChanged: \\\"Heartbeat\\\", \nState: \\\"Disconnected\\\", ServerVersion: , TopologyVersion: , Type: \\\"Unknown\\\", \nHeartbeatException: \\\"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.\\r\\n \n---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid because of errors in the certificate chain: PartialChain\\r\\n \nat System.Net.Security.SslStream.SendAuthResetSignal(ProtocolToken message, ExceptionDispatchInfo exception)\\r\\n \nat System.Net.Security.SslStream.CompleteHandshake(SslAuthenticationOptions sslAuthenticationOptions)\\r\\n \nat System.Net.Security.SslStream.ForceAuthenticationAsync[TIOAdapter](TIOAdapter adapter, Boolean receiveFirst, Byte[] reAuthenticationData, Boolean isApm)\\r\\n \nat System.Net.Security.SslStream.AuthenticateAsClient(SslClientAuthenticationOptions sslClientAuthenticationOptions)\\r\\n \nat System.Net.Security.SslStream.AuthenticateAsClient(String targetHost, X509CertificateCollection clientCertificates, SslProtocols enabledSslProtocols, Boolean checkCertificateRevocation)\\r\\n \nat MongoDB.Driver.Core.Connections.SslStreamFactory.CreateStream(EndPoint endPoint, CancellationToken cancellationToken)\\r\\n \nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\\r\\n \n--- End of inner exception stack trace ---\\r\\n \nat MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\\r\\n \nat MongoDB.Driver.Core.Connections.BinaryConnection.Open(CancellationToken cancellationToken)\\r\\n \nat MongoDB.Driver.Core.Servers.ServerMonitor.InitializeConnection(CancellationToken cancellationToken)\\r\\n \nat MongoDB.Driver.Core.Servers.ServerMonitor.Heartbeat(CancellationToken cancellationToken)\\\", \nLastHeartbeatTimestamp: \\\"2022-08-31T14.22.15.1629747Z\\\", LastUpdateTimestamp: \\\"2022-08-31T14.22.15.1629749Z\\\" }] }.\"}\n", "text": "Hello Everyone,\nI have my MongoDB running in Docker in the Amazon Linux EC2 instance in AWS. It has the SSL/TLS Certificate as well. On the Server, I have to add the tlscertificate and CArootfile to open the mongodb.\nFirst question: is it possible to add tls certificate in conf file and restart the mongodb? and if so, how can I add it? 
And what is the command to restart MongoDB so it accepts the TLS certificate? Sudo/systemctl doesn’t work in the Docker container.\nSecond question: I was able to connect to MongoDB from my local MongoDB Compass. Now I am trying to run it in my .NET Core application with those certificates, and it doesn’t work.\nHere is my C# code:And the error I got is:Any suggestions??", "username": "Kris_Kammadanam" }, { "code": "", "text": "Create a folder with a customized config file along with the certificate files, and write a customized “dockerfile” to copy them into the corresponding directories inside the container. Do not forget to set the in/out ports.For accessibility, you will need port forwarding, one from the container to the AWS network, and from there to the WAN (if it is not done in one step).Timeout errors are mostly caused by an incorrect address or port for the server, and then any firewall, proxy, VPN, or connection-limiting program can be the next culprit. I am guessing yours is incomplete port forwarding or an AWS firewall setting. Check them first.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Hello Yilmaz_Durmaz\nThank you so much for the info. I am a bit confused about this. Please bear with me as I am completely new to the Docker and Linux environment, and I am still reading up on MongoDB, TLS, and their commands.\nThe thing is, I can connect directly from MongoDB Compass with the TLS certificate file copied to my local environment. Now I am using the same connection string in my application with those files, but it’s not connecting.Does it make any difference if I add the Dockerfile to the container or add those certificates to my application?", "username": "Kris_Kammadanam" }, { "code": "", "text": "I assume you use your own MongoDB server in a container, not an Atlas cluster. That should give 3 running machines: your local machine, the app somewhere on the cloud, and the container somewhere else.Setting up and running containers is another story, but you seem to have set up your database authorization from your description (connecting with a key from local). Then all you need is to copy the required keys to your app’s host and set your app to connect to the database with a key.All that said, your first code block shows you have already set something up. Unfortunately, I am not a .NET guru, so I cannot say if this is the right way to use the key along with a username/password.Run your app first in your local environment to see if it runs fine. This will eliminate the possibility of a wrong implementation.Then you will need to set firewalls, port forwardings, and CORS (if needed) so that the app’s and database’s hosts can connect to each other.", "username": "Yilmaz_Durmaz" } ]
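To make the first question concrete, here is a minimal sketch of baking the TLS settings into the image, along the lines of the first reply (the file names and paths are assumptions; net.tls.* are the standard mongod options, and restarting the container picks up the config, so no systemctl is needed inside it):

# mongod.conf
net:
  port: 27017
  bindIp: 0.0.0.0
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem
    CAFile: /etc/ssl/rootCA.crt

# Dockerfile
FROM mongo:6.0
COPY mongod.conf /etc/mongod.conf
COPY mongodb.pem rootCA.crt /etc/ssl/
CMD ["mongod", "--config", "/etc/mongod.conf"]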
.NET Core App Connection String to Mongodb running in Docker
2022-09-02T09:39:21.000Z
.NET Core App Connection String to Mongodb running in Docker
4,136
null
[ "aggregation", "dot-net" ]
[ { "code": "using MongoDB.Bson.Serialization;\nusing MongoDB.Driver;\n\nConsole.WriteLine(\"Hello, World!\");\n\npublic interface IDomainEvent\n{\n Guid EventId { get; }\n Guid AggregateId { get; }\n DateTime OccuredAt { get; }\n}\n\npublic abstract class DomainEvent : IDomainEvent\n{\n protected DomainEvent(Guid aggregteId)\n {\n AggregateId = aggregteId;\n EventId = Guid.NewGuid();\n OccuredAt = DateTime.UtcNow;\n }\n public Guid EventId { get; }\n public Guid AggregateId { get; }\n public DateTime OccuredAt { get; }\n}\n\npublic class UserCreated : DomainEvent\n{\n public UserCreated(Guid userId, string name)\n : base(userId)\n {\n UserId = userId;\n Name = name;\n CreatedOn = DateTime.UtcNow;\n }\n public Guid UserId { get; }\n public string Name { get; }\n public DateTime CreatedOn { get; }\n}\n\npublic class UserNameChanged : DomainEvent\n{\n public UserNameChanged(Guid userId, string newName, string oldName)\n : base(userId)\n {\n UserId = userId;\n NewName = newName;\n OldName = oldName;\n CreatedOn = DateTime.UtcNow;\n }\n\n public Guid UserId { get; }\n public string NewName { get; }\n public string OldName { get; }\n public DateTime CreatedOn { get; }\n}\n\n// Create two events\nvar userId = Guid.NewGuid();\nvar name = \"User-1\";\nvar newName = \"User-2\";\nvar userCreatedEvent = new UserCreated(userId, name);\nvar userNameChangedEvent = new UserNameChanged(userId, newName, name);\n\n// Add them to Mongo store\nvar client = new MongoClient(\"mongodb://localhost:27017\");\nvar database = client.GetDatabase(\"test\");\nvar collection = database.GetCollection<IDomainEvent>(\"events\");\n\nBsonClassMap.RegisterClassMap<UserCreated>();\nBsonClassMap.RegisterClassMap<UserNameChanged>();\n\nawait collection.InsertOneAsync(userCreatedEvent);\nawait collection.InsertOneAsync(userNameChangedEvent);\n\nvar userEvents = await collection.FindAsync(x => x.AggregateId == userId); // Failing here as Document doesn't contain base property 'AggregateId'\n\n", "text": "I would like to include properties defined in an interface during the serialization and deserialization to Bson documents. I tried registering derived classes using RegisterClassMap, still interface properties were not serialized and while retrieving through base property it gave me an error.A sample use case is described below.\nAn interface IDomainEvent with properties(EventId, AggregateId and OccuredAt)\nAn abstract class DomainEvent implementing IDomainEvent.\nTwo classes UserCreated and UserNameChanged derived from abstract class DomainEvent. They contain event specific properties apart from base properties.Insertion to DB happens with derived class properties only and no base properties such as EventId, AggregateId.\nWhile retrieving document using base property ‘AggregateId’, property not found exception is thrown which is valid as its not present in the document.Questions:Sample Code: http://pastie.org/p/5B8kZnUV8Vw47MGTf5h1fm\nMongoDB.Driver : 2.17.1", "username": "Praveen_Raghuvanshi" }, { "code": "", "text": "Content of record in mongodb\nimage1302×392 34 KB\n", "username": "Praveen_Raghuvanshi" } ]
How to get interface properties serialized using MongDB C# driver
2022-09-05T06:19:50.136Z
How to get interface properties serialized using MongDB C# driver
2,959
https://www.mongodb.com/…e_2_1024x512.png
[ "kotlin" ]
[ { "code": "", "text": "In this documentation about authentication, I’m seeing a runBlocking{} block. Now my question is, why is it important? Can we use a regular Launch {} block instead?", "username": "111757" }, { "code": "launchrunBlocking", "text": "@111757 : Yes, you can use regular launch instead of runBlocking.", "username": "Mohit_Sharma" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is runBlocking {} important?
2022-08-26T16:14:30.719Z
Is runBlocking {} important?
2,129
null
[ "python", "serverless", "motor-driver" ]
[ { "code": "from fastapi import FastAPI, Body, status, Depends\n\nfrom mangum import Mangum\n\nfrom motor.motor_asyncio import AsyncIOMotorClient\n\nfrom fastapi.responses import JSONResponse\n\nfrom app.utility.config import MONGODB_URL, MAX_CONNECTIONS_COUNT, MIN_CONNECTIONS_COUNT, MAX_DB_THREADS_WAIT_COUNT, MAX_DB_THREAD_QUEUE_TIMEOUT_COUNT\n\napplication= FastAPI()\n\nclient = AsyncIOMotorClient(str(MONGODB_URL),\n\n maxPoolSize=MAX_CONNECTIONS_COUNT,\n\n minPoolSize=MIN_CONNECTIONS_COUNT,\n\n waitQueueMultiple = MAX_DB_THREADS_WAIT_COUNT,\n\n waitQueueTimeoutMS = MAX_DB_THREAD_QUEUE_TIMEOUT_COUNT )\n\nasync def get_database() -> AsyncIOMotorClient:\n\n \n\n return client\n\[email protected](\"/createStudent\")\n\nasync def create_student(student = Body(...), db: AsyncIOMotorClient = Depends(get_database)):\n\n new_student = await db[\"college\"][\"students\"].insert_one(student)\n\n created_student = await db[\"college\"][\"students\"].find_one({\"_id\": new_student.inserted_id})\n\n return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_student)\n\[email protected](\"/createTeacher\")\n\nasync def create_teacher(teacher = Body(...), db: AsyncIOMotorClient = Depends(get_database)):\n\n new_teacher = await db[\"college\"][\"students\"].insert_one(teacher)\n\n created_teacher = await db[\"college\"][\"students\"].find_one({\"_id\": new_teacher.inserted_id})\n\n return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_teacher)\n\nhandler = Mangum(application)\n", "text": "I’m building a serverless application using Python and Mongodb. In documentation I found that I need to write db connection outside handler function. I have used Mangum python package as adapter to handle API gateway.For every API request, new connection is created. How to cache db so that new request uses old connection? Every time new request is created so that lambda compute time is increased dramatically after db hits max connection limit.\nIn documentation , I only found nodejs example but couldnot solve with python", "username": "Rabindra_Acharya" }, { "code": "exports = function({ query, headers, body}, response) {\n const coll = context.services.get(\"mongodb-atlas\").db(\"covid19\").collection(\"global\");\n coll.findOne().then(res => {\n response.setBody(JSON.stringify(res));\n response.setHeader(\"content-type\", \"application/json\");\n });\n};\n", "text": "Hi @Rabindra_Acharya and welcome in the MongoDB Community !I might be wrong here, but it looks like you are trying to build something stateful in a stateless serverless environment.The MongoDB driver - Motor here - usually keeps a pool of connection open and these connections are reused. But if this pool isn’t persisted between calls and a new pool is recreated each time because the serverless environment you are using is stateless, then this just doesn’t work (and won’t).MongoDB Realm keeps a connection pool open and the same connection are reused between each calls.Here is an example of a REST API doing a findOne operation:As you can see, the connection to the Atlas service is retrieved from the context and from there, we can use any MongoDB query we want without recreating a new connection pool.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Now its difficult to switch to another at my project stage ,https://docs.atlas.mongodb.com/best-practices-connecting-from-aws-lambda/ According to this document , mongodb can be cached. 
I am confused how to implement with python", "username": "Rabindra_Acharya" }, { "code": "", "text": "Well I didn’t know MongoDB made this doc so for finding it ! And I’m glad they are explaining more or less the same “caching the MongoDB client / connection” concept.Let me know if you figure out how to do it in Python, but it should be more or less a translation of what they are doing in JS.I only played once with AWS lambdas so are probably ahead of me already.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi PyMongo/Motor maintainer here. I suspect this issue may be caused by a feature added in PyMongo 3.11 (specifically PYTHON-2123). Could you try again using PyMongo 3.10.1 and Motor 2.1.0 and report back if these issues still occur?", "username": "Shane" }, { "code": "from __future__ import annotations\nfrom imp import reload\nimport os\nfrom typing import Optional\nfrom datetime import datetime\nfrom fastapi import FastAPI, Body, HTTPException, status\nfrom fastapi.responses import JSONResponse\nfrom fastapi.encoders import jsonable_encoder\nfrom bson import ObjectId\nfrom pydantic import BaseModel, Field, EmailStr\nimport motor.motor_asyncio\nfrom mangum import Mangum\n\napp = FastAPI()\n\n# client = motor.motor_asyncio.AsyncIOMotorClient(os.environ[\"MONGODB_URL\"])\nclient = motor.motor_asyncio.AsyncIOMotorClient('mongodb+srv://USERNAME:[email protected]/myFirstDatabase?retryWrites=true&w=majority')\n\ndb = client['test']\n\n# both classes below based on https://www.mongodb.com/developer/quickstart/python-quickstart-fastapi/\nclass PyObjectId(ObjectId): # so FastAPI can encode ObjectID as JSON\n @classmethod\n def __get_validators__(cls):\n yield cls.validate\n @classmethod\n def validate(cls, v):\n if not ObjectId.is_valid(v):\n raise ValueError(\"Invalid objectid\")\n return ObjectId(v)\n @classmethod\n def __modify_schema__(cls, field_schema):\n field_schema.update(type=\"string\")\n\nclass FlexyDataInsertion(BaseModel):\n id: PyObjectId = Field(default_factory=PyObjectId, alias=\"_id\")\n dataSeries_id: str = Field(...)\n timestamp: datetime = Field(...)\n value: int = Field(...)\n\n class Config:\n allow_population_by_field_name = True\n arbitrary_types_allowed = True\n json_encoders = {ObjectId: str}\n schema_extra = {\n \"example\": {\n \"dataSeries_id\": \"blah\",\n \"timestamp\": \"2022-03-24T08:48:57Z\",\n \"value\": 26,\n }\n }\n\[email protected]('/dataPoints', response_description=\"Add data point(s)\", response_model=None, tags=[\"insertion\"])\nasync def add_data_point(datapoint: FlexyDataInsertion = Body(...)) -> None:\n datapoint = jsonable_encoder(datapoint)\n new_data = await db['test/dataSeries_SENSOR_Humidity/dataPoints'].insert_one(datapoint)\n created_data = await db['test/dataSeries_SENSOR_Humidity/dataPoints'].find_one({\"_id\": new_data.inserted_id})\n return JSONResponse(content=created_data)\n\[email protected]('/dataPoints', response_model=None, tags=[\"extraction\"])\nasync def retrieve_data_point():\n return JSONResponse(\"hello world\")\n\nhandler = Mangum(app=app)\n\n# uncomment for running on localhost server, not in production\n# import uvicorn\n# if __name__ == \"__main__\":\n# uvicorn.run(\"app:app\", host=\"127.0.0.1\", port=8000, log_level=\"info\", reload=True) \n[ERROR]\t2022-04-09T02:49:34.716Z\tfb7adf6d-f52b-42d5-bd34-72f4fc3bc7b4\tAn error occurred running the application.\nTraceback (most recent call last):\n File \"/var/task/mangum/protocols/http.py\", line 66, in run\n await app(self.scope, self.receive, 
self.send)\n File \"/var/task/fastapi/applications.py\", line 261, in __call__\n await super().__call__(scope, receive, send)\n File \"/var/task/starlette/applications.py\", line 112, in __call__\n await self.middleware_stack(scope, receive, send)\n File \"/var/task/starlette/middleware/errors.py\", line 181, in __call__\n raise exc\n File \"/var/task/starlette/middleware/errors.py\", line 159, in __call__\n await self.app(scope, receive, _send)\n File \"/var/task/starlette/exceptions.py\", line 82, in __call__\n raise exc\n File \"/var/task/starlette/exceptions.py\", line 71, in __call__\n await self.app(scope, receive, sender)\n File \"/var/task/fastapi/middleware/asyncexitstack.py\", line 21, in __call__\n raise e\n File \"/var/task/fastapi/middleware/asyncexitstack.py\", line 18, in __call__\n await self.app(scope, receive, send)\n File \"/var/task/starlette/routing.py\", line 656, in __call__\n await route.handle(scope, receive, send)\n File \"/var/task/starlette/routing.py\", line 259, in handle\n await self.app(scope, receive, send)\n File \"/var/task/starlette/routing.py\", line 61, in app\n response = await func(request)\n File \"/var/task/fastapi/routing.py\", line 227, in app\n raw_response = await run_endpoint_function(\n File \"/var/task/fastapi/routing.py\", line 160, in run_endpoint_function\n return await dependant.call(**values)\n File \"/var/task/app.py\", line 97, in add_data_point\n new_data = await db['test/dataSeries_SENSOR_Humidity/dataPoints'].insert_one(datapoint)\n File \"/var/lang/lib/python3.8/concurrent/futures/thread.py\", line 57, in run\n result = self.fn(*self.args, **self.kwargs)\n File \"/var/task/pymongo/collection.py\", line 695, in insert_one\n self._insert(document,\n File \"/var/task/pymongo/collection.py\", line 610, in _insert\n return self._insert_one(\n File \"/var/task/pymongo/collection.py\", line 599, in _insert_one\n self.__database.client._retryable_write(\n File \"/var/task/pymongo/mongo_client.py\", line 1490, in _retryable_write\n with self._tmp_session(session) as s:\n File \"/var/lang/lib/python3.8/contextlib.py\", line 113, in __enter__\n return next(self.gen)\n File \"/var/task/pymongo/mongo_client.py\", line 1823, in _tmp_session\n s = self._ensure_session(session)\n File \"/var/task/pymongo/mongo_client.py\", line 1810, in _ensure_session\n return self.__start_session(True, causal_consistency=False)\n File \"/var/task/pymongo/mongo_client.py\", line 1763, in __start_session\n server_session = self._get_server_session()\n File \"/var/task/pymongo/mongo_client.py\", line 1796, in _get_server_session\n return self._topology.get_server_session()\n File \"/var/task/pymongo/topology.py\", line 487, in get_server_session\n self._select_servers_loop(\n File \"/var/task/pymongo/topology.py\", line 208, in _select_servers_loop\n raise ServerSelectionTimeoutError(\npymongo.errors.ServerSelectionTimeoutError: connection closed,connection closed,connection closed\n[ERROR] 2022-04-09T02:49:34.716Z fb7adf6d-f52b-42d5-bd34-72f4fc3bc7b4 An error occurred running the application. 
Traceback (most recent call last): File \"/var/task/mangum/protocols/http.py\", line 66, in run await app(self.scope, self.receive, self.send) File \"/var/task/fastapi/applications.py\", line 261, in __call__ await super().__call__(scope, receive, send) File \"/var/task/starlette/applications.py\", line 112, in __call__ await self.middleware_stack(scope, receive, send) File \"/var/task/starlette/middleware/errors.py\", line 181, in __call__ raise exc File \"/var/task/starlette/middleware/errors.py\", line 159, in __call__ await self.app(scope, receive, _send) File \"/var/task/starlette/exceptions.py\", line 82, in __call__ raise exc File \"/var/task/starlette/exceptions.py\", line 71, in __call__ await self.app(scope, receive, sender) File \"/var/task/fastapi/middleware/asyncexitstack.py\", line 21, in __call__ raise e File \"/var/task/fastapi/middleware/asyncexitstack.py\", line 18, in __call__ await self.app(scope, receive, send) File \"/var/task/starlette/routing.py\", line 656, in __call__ await route.handle(scope, receive, send) File \"/var/task/starlette/routing.py\", line 259, in handle await self.app(scope, receive, send) File \"/var/task/starlette/routing.py\", line 61, in app response = await func(request) File \"/var/task/fastapi/routing.py\", line 227, in app raw_response = await run_endpoint_function( File \"/var/task/fastapi/routing.py\", line 160, in run_endpoint_function return await dependant.call(**values) File \"/var/task/app.py\", line 97, in add_data_point new_data = await db['test/dataSeries_SENSOR_Humidity/dataPoints'].insert_one(datapoint) File \"/var/lang/lib/python3.8/concurrent/futures/thread.py\", line 57, in run result = self.fn(*self.args, **self.kwargs) File \"/var/task/pymongo/collection.py\", line 695, in insert_one self._insert(document, File \"/var/task/pymongo/collection.py\", line 610, in _insert return self._insert_one( File \"/var/task/pymongo/collection.py\", line 599, in _insert_one self.__database.client._retryable_write( File \"/var/task/pymongo/mongo_client.py\", line 1490, in _retryable_write with self._tmp_session(session) as s: File \"/var/lang/lib/python3.8/contextlib.py\", line 113, in __enter__ return next(self.gen) File \"/var/task/pymongo/mongo_client.py\", line 1823, in _tmp_session s = self._ensure_session(session) File \"/var/task/pymongo/mongo_client.py\", line 1810, in _ensure_session return self.__start_session(True, causal_consistency=False) File \"/var/task/pymongo/mongo_client.py\", line 1763, in __start_session server_session = self._get_server_session() File \"/var/task/pymongo/mongo_client.py\", line 1796, in _get_server_session return self._topology.get_server_session() File \"/var/task/pymongo/topology.py\", line 487, in get_server_session self._select_servers_loop( File \"/var/task/pymongo/topology.py\", line 208, in _select_servers_loop raise ServerSelectionTimeoutError( pymongo.errors.ServerSelectionTimeoutError: connection closed,connection closed,connection closed\n session_timeout = self._check_session_support()\n File \"/var/task/pymongo/topology.py\", line 504, in _check_session_support\n self._select_servers_loop(\n File \"/var/task/pymongo/topology.py\", line 218, in _select_servers_loop\n raise ServerSelectionTimeoutError(\npymongo.errors.ServerSelectionTimeoutError: connection closed,connection closed,connection closed, Timeout: 30s, Topology Description: <TopologyDescription id: 6250e075e1229bd047cf86b7, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('XXXXXXXXXXX.mongodb.net', 27017) 
server_type: Unknown, rtt: None, error=AutoReconnect('connection closed')>, <ServerDescription ('XXXXXXXXXXX.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('connection closed')>, <ServerDescription ('XXXXXXXXXXX.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('connection closed')>]>\n[ERROR] 2022-04-09T01:32:20.565Z c1103461-149b-4eed-92d2-013b500fceaf An error occurred running the application. Traceback (most recent call last): File \"/var/task/mangum/protocols/http.py\", line 66, in run await app(self.scope, self.receive, self.send) File \"/var/task/fastapi/applications.py\", line 261, in __call__ await super().__call__(scope, receive, send) File \"/var/task/starlette/applications.py\", line 112, in __call__ await self.middleware_stack(scope, receive, send) File \"/var/task/starlette/middleware/errors.py\", line 181, in __call__ raise exc File \"/var/task/starlette/middleware/errors.py\", line 159, in __call__ await self.app(scope, receive, _send) File \"/var/task/starlette/exceptions.py\", line 82, in __call__ raise exc File \"/var/task/starlette/exceptions.py\", line 71, in __call__ await self.app(scope, receive, sender) File \"/var/task/fastapi/middleware/asyncexitstack.py\", line 21, in __call__ raise e File \"/var/task/fastapi/middleware/asyncexitstack.py\", line 18, in __call__ await self.app(scope, receive, send) File \"/var/task/starlette/routing.py\", line 656, in __call__ await route.handle(scope, receive, send) File \"/var/task/starlette/routing.py\", line 259, in handle await self.app(scope, receive, send) File \"/var/task/starlette/routing.py\", line 61, in app response = await func(request) File \"/var/task/fastapi/routing.py\", line 227, in app raw_response = await run_endpoint_function( File \"/var/task/fastapi/routing.py\", line 160, in run_endpoint_function return await dependant.call(**values) File \"/var/task/app.py\", line 96, in add_data_point new_data = await db['test/dataSeries_SENSOR_Humidity/dataPoints'].insert_one(datapoint) File \"/var/lang/lib/python3.8/concurrent/futures/thread.py\", line 57, in run result = self.fn(*self.args, **self.kwargs) File \"/var/task/pymongo/collection.py\", line 705, in insert_one self._insert(document, File \"/var/task/pymongo/collection.py\", line 620, in _insert return self._insert_one( File \"/var/task/pymongo/collection.py\", line 609, in _insert_one self.__database.client._retryable_write( File \"/var/task/pymongo/mongo_client.py\", line 1551, in _retryable_write with self._tmp_session(session) as s: File \"/var/lang/lib/python3.8/contextlib.py\", line 113, in __enter__ return next(self.gen) File \"/var/task/pymongo/mongo_client.py\", line 1948, in _tmp_session s = self._ensure_session(session) File \"/var/task/pymongo/mongo_client.py\", line 1935, in _ensure_session return self.__start_session(True, causal_consistency=False) File \"/var/task/pymongo/mongo_client.py\", line 1883, in __start_session server_session = self._get_server_session() File \"/var/task/pymongo/mongo_client.py\", line 1921, in _get_server_session return self._topology.get_server_session() File \"/var/task/pymongo/topology.py\", line 520, in get_server_session session_timeout = self._check_session_support() File \"/var/task/pymongo/topology.py\", line 504, in _check_session_support self._select_servers_loop( File \"/var/task/pymongo/topology.py\", line 218, in _select_servers_loop raise ServerSelectionTimeoutError( pymongo.errors.ServerSelectionTimeoutError: connection closed,connection 
closed,connection closed, Timeout: 30s, Topology Description: <TopologyDescription id: 6250e075e1229bd047cf86b7, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('XXXXXXXXXXX.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('connection closed')>, <ServerDescription ('XXXXXXXXXXX.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('connection closed')>, <ServerDescription ('XXXXXXXXXXX.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('connection closed')>]>\n", "text": "Hi All,I am having the same issue as Rabindra as of his last post (i.e. searching for working Python implementation of the proposed solution).The first block below is the code I am using (minus MongoDB Atlas credentials). I initially tried this using PyMongo[srv] 3.12.3 and Motor 2.5.1, and then using PyMongo[srv] 3.10.1 and Motor 2.1.0, as Shane suggested.Using POSTMAN, in either case, I can get a “hello world” response from the GET route, but not an MongoDB insertion or the find method from the POST route, for which I instead get a JSON message “{“message”: “Endpoint request timed out”}”, and the CloudWatch log file (2nd & 3rd blocks) for this can be seen below my Python code. (Notably, this same POST route (and the GET) works as expected when hosting this code locally with uvicorn.).Error message from CloudWatch (part of MongoDB URL redacted using “XXXXXXXXXXX”):.With the latest Motor and Pymongo versions, there is the following additional 2 lines (session_timeout & _check_session_support) in the error log, near the end:", "username": "Pawel" }, { "code": "", "text": "yeap in latest pymongo and motor, I am also getting same issue.Still stuck in DB cache in python. Did you use DB cache?", "username": "Rabindra_Acharya" }, { "code": "", "text": "Found the solution for me. Each time I destroyed then again spun up my Lambda using Terraform, the IP would change. Getting that IP on my MongoDB Atlas allow list solved the problem. I did not have to use older PyMongo and Motor versions.", "username": "Pawel" }, { "code": "", "text": "Could you help me with code snippet ? Just how you modify the code you write before here in the post", "username": "Rabindra_Acharya" }, { "code": "", "text": "The only change I made in my code was to my MongoDB URL, changing the URL, username and password. The error message from CloudWatch had part of the MongoDB URL redacted using “XXXXXXXXXXX”.", "username": "Pawel" }, { "code": "", "text": "It would be easy to troubbleshoot in my project if you put your MoNGODB url with username ,password, cluster name hidden.", "username": "Rabindra_Acharya" }, { "code": "", "text": "", "username": "Stennie_X" } ]
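For anyone landing here for the original caching question: the Lambda best-practices doc linked above translates to Python by creating the client once at module scope, outside the handler, so warm invocations reuse the pooled connections; that is what the original code already does, and the remaining failures in this thread turned out to be network access (the IP allowlist), not caching. A minimal sketch (the environment variable name is an assumption; the database and collection names reuse the question’s):

import os
import pymongo

# created once per Lambda execution environment and reused across warm invocations
client = pymongo.MongoClient(os.environ["MONGODB_URL"])

def handler(event, context):
    # reuse the cached client instead of constructing a new one per request
    doc = client["college"]["students"].find_one()
    return {"statusCode": 200, "body": str(doc)}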
Creating Mongodb object once lambda python?
2022-03-08T04:17:21.901Z
Creating Mongodb object once lambda python?
7,819
null
[ "c-driver" ]
[ { "code": "configure = \\\n-DENABLE_ZLIB=OFF \\\n-DZLIB_LIBRARY=path/to/zlib/lib/zlib.lib \\\n-DZLIB_INCLUDE_DIR=path/to/zlib/latest \\\n-DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF \\\n-DCMAKE_BUILD_TYPE=Release \\\n-DCMAKE_INSTALL_PREFIX=$(MAKEFILE_PATH)/$(PLATFORM) \\\n-DCMAKE_PREFIX_PATH=$(MAKEFILE_PATH)/$(PLATFORM) \\\n-DBUILD_SHARED_LIBS=OFF \\\n-DENABLE_TESTS=OFF \\\n-DENABLE_EXAMPLES=OFF \\\n../source\n\ninstall = \\\n--build . \\\n--config Release \\\n--target install\n\ndefault: build\n\nbuild:\n\tmkdir build\n\tmkdir $(PLATFORM)\n\tcd build && cmake $(configure)\n\tcd build && cmake $(install)", "text": "Hi I have been using the c / cxx driver for a project, but am running into some issues.Trying to build the static libs, but as I already link zlib into the project I have to use the-DENABLE_ZLIB=OFFoption to stop linking it twice.\nI have seen that there is a -DENABLE_ZLIB=SYSTEM option, but currently I believe that this does not work, as even when specifying-DZLIB_ROOT=path/to/zlib/latest\n-DZLIB_LIBRARY=path/to/zlib/lib/zlib.libthere is a build error from mongo-compression.c stating that it cannot find zlib.h, implying the build is not respecting ZLIB_ROOTmakefile to call cmake:", "username": "Thomas_Morten" }, { "code": "-DENABLE_ZLIB=SYSTEM-DZLIB_ROOT=path/to/zlib/latestZLIB_ROOTlibinclude/usr/usr/local--prefixZLIB_LIBRARY-DENABLE_ZLIB=OFFadd_subdirectory(mongo-c-driver)target_link_libraries(foo mongo::mongoc_static)", "text": "@Thomas_Morten it is not clear what you are trying to accomplish. It seems like you are building a project that links directly to both zlib and libmongoc. You state that you want to compile the C driver static libraries, so I am assuming that you want to statically link libmongoc, but you do not say whether you are linking zlib statically or dynamically. I am going to assume that you want to want to link both statically.In that case, the correct way is to specify -DENABLE_ZLIB=SYSTEM. Then, if your system zlib is not in a location that can be located by the C driver build, you can also specify -DZLIB_ROOT=path/to/zlib/latest. Note that the ZLIB_ROOT should be the path that contains the lib and include directories created by the zlib build. On a typical system installation this would be either /usr or /usr/local. Most likely, you want to use whatever directory was specified with the --prefix option of the zlib build. The ZLIB_LIBRARY variable has no effect, so you can safely remove that.Your own project that then consumes both the C driver and zlib directly will need to build using the C driver static libraries and the same zlib static library. This will ensure that there are no conflicts between library implementation expected by the C driver static libraries and your own project’s static libraries.Either way, specifying -DENABLE_ZLIB=OFF is going render all the other zlib-related options useless, as the build will not include any zlib references. 
That approach is unlikely to be what you want, unless what you want is a C driver without zlib support.All that said, if you want static linkage of everything and if your own project uses CMake, then the easiest thing is going to be to make the C driver sources a sub-directory of your own build, include the C driver with something like add_subdirectory(mongo-c-driver), then link the C driver library to whichever of your project components need it with target_link_libraries(foo mongo::mongoc_static).", "username": "Roberto_Sanchez" }, { "code": "Directory of <Path To>\\zlib\n<DIR> include\n<DIR> lib\n\n\n\n\nDirectory of <Path To>\\zlib\\include\n<DIR> amiga\n<DIR> contrib\n 31,009 crc32.h\n 13,014 deflate.h\n<DIR> doc\n<DIR> examples\n 4,809 gzguts.h\n 438 inffast.h\n 6,437 inffixed.h\n 6,521 inflate.h\n 2,990 inftrees.h\n<DIR> msdos\n<DIR> nintendods\n<DIR> old\n<DIR> qnx\n 8,600 trees.h\n<DIR> watcom\n<DIR> win32\n 13,826 zconf.h\n 81,246 zlib.h\n 7,427 zutil.h\n\n\n\nDirectory of <Path To>\\zlib\\lib\n 338,596 zlib.lib\ncmake \\\n -DENABLE_ZLIB=SYSTEM \\\n -DZLIB_ROOT=<Path To>\\zlib \\\n -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF \\\n -DCMAKE_BUILD_TYPE=Release \\\n -DCMAKE_INSTALL_PREFIX=<Path To>/install/ \\\n -DCMAKE_PREFIX_PATH=<Path To>/install/ \\\n -DBUILD_SHARED_LIBS=OFF \\\n -DENABLE_TESTS=OFF \\\n -DENABLE_EXAMPLES=OFF \\\n -A x64 \\\n ../source\n\n \n\ncmake \\\n --build . \\\n --config Release \\\n --target install\nProject \"<Path To>\\mongo-c-driver\\build\\ALL_BUILD.vcxproj\" (3) is building \"<Path To>\\mongo-c-driver\\build\\src\\libmongoc\\mongoc-stat.vcxproj\" (6) on node 1 (default targets).\nProject \"<Path To>\\mongo-c-driver\\build\\src\\libmongoc\\mongoc-stat.vcxproj\" (6) is building \"<Path To>\\mongo-c-driver\\build\\src\\libmongoc\\mongoc_shared.vcxproj\" (7) on node 1 (default targets).\nPrepareForBuild:\n Creating directory \"mongoc_shared.dir\\Release\\\".\n Creating directory \"<Path To>\\mongo-c-driver\\build\\src\\libmongoc\\Release\\\".\n Creating directory \"mongoc_shared.dir\\Release\\mongoc_shared.tlog\\\".\nInitializeBuildStatus:\n Creating \"mongoc_shared.dir\\Release\\mongoc_shared.tlog\\unsuccessfulbuild\" because \"AlwaysCreate\" was specified.\nCustomBuild:\n Building Custom Rule D:/GitHub/externalapi/mongodb/mongo-c-driver/source/src/libmongoc/CMakeLists.txt\n CMake does not need to re-run because D:/GitHub/externalapi/mongodb/mongo-c-driver/build/src/libmongoc/CMakeFiles/generate.stamp is up-to-date.\nClCompile:\n C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Enterprise\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x64\\CL.exe /c /I\"<Path To>\\mongo-c-driver\\build\\src\\libmongoc\\src\" /I\"<Path To>\\mongo-c-drive\n r\\build\\src\\libmongoc\\src\\mongoc\" /I\"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\" /I\"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\..\\..\\src\\common\" /I\"<Path To>\\mongo-\n c-driver\\source\\src\\libmongoc\\..\\kms-message\\src\" /I\"<Path To>\\mongo-c-driver\\source\\src\\libbson\\src\" /I\"<Path To>\\mongo-c-driver\\build\\src\\libbson\\src\" /I\"<Path To>\\mongo-c-\n driver\\build\\src\\libbson\\src\\bson\" /nologo /W3 /WX- /diagnostics:classic /O2 /Ob2 /D WIN32 /D _WINDOWS /D NDEBUG /D MONGOC_COMPILATION /D KMS_MSG_STATIC /D KMS_MESSAGE_ENABLE_CRYPTO /D KMS_MESSAGE_ENABLE_CRYPTO_CNG /D _CRT_SECURE_NO_W\n ARNINGS /D _GNU_SOURCE /D _BSD_SOURCE /D _DEFAULT_SOURCE /D COMMON_PREFIX_=_mongoc_common /D \"CMAKE_INTDIR=\\\"Release\\\"\" /D mongoc_shared_EXPORTS /D _WINDLL /D _MBCS /Gm- /MD /GS 
/fp:precise /Qspectre /Zc:wchar_t /Zc:forScope /Zc:inlin\n e /Fo\"mongoc_shared.dir\\Release\\\\\" /Fd\"mongoc_shared.dir\\Release\\vc141.pdb\" /Gd /TC /FC /errorReport:queue \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-aggregate.c\" \"D:\\GitHub\\externalapi\\mongod\n b\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-apm.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-array.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\n \\mongoc-async.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-async-cmd.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-buffer.c\" \"D:\\GitHub\\externalapi\\m\n ongodb\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-bulk-operation.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-change-stream.c\" \"<Path To>\\mongo-c-driver\\source\\\n src\\libmongoc\\src\\mongoc\\mongoc-client.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-client-pool.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-client-\n side-encryption.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cluster.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cluster-aws.c\" \"D:\\GitHub\\external\n api\\mongodb\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cluster-sasl.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-collection.c\" \"<Path To>\\mongo-c-driver\\source\\\n src\\libmongoc\\src\\mongoc\\mongoc-compression.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-counters.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-crypt\n .c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cursor-array.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cursor.c\" \"<Path To>\\mon\n go-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cursor-cmd.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cursor-change-stream.c\" \"<Path To>\\mongo-c-driver\\source\\src\\lib\n mongoc\\src\\mongoc\\mongoc-cursor-cmd-deprecated.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cursor-find.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc\n -cursor-find-cmd.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cursor-find-opquery.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cursor-legacy.c\" \"D:\\\n GitHub\\externalapi\\mongodb\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-database.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-error.c\" \"<Path To>\\mongo-c-driver\\s\n ource\\src\\libmongoc\\src\\mongoc\\mongoc-find-and-modify.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-init.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc\n -gridfs.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-bucket.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-bucket-file.c\" \"D:\\GitHub\\ext\n ernalapi\\mongodb\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-file.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-file-list.c\" \"<Path 
To>\\mongo-c-driv\n er\\source\\src\\libmongoc\\src\\mongoc\\mongoc-gridfs-file-page.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-handshake.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mon\n goc\\mongoc-host-list.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-http.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-index.c\" \"D:\\GitHub\\externalapi\\\n mongodb\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-interrupt.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-list.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\n \\src\\mongoc\\mongoc-linux-distro-scanner.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-log.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-matcher.c\" \"D:\n \\GitHub\\externalapi\\mongodb\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-matcher-op.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-memcmp.c\" \"<Path To>\\mongo-c-driv\n er\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cmd.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-opts-helpers.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc\n -opts.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-queue.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-read-concern.c\" \"<Path To>\n \\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-read-prefs.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-rpc.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mo\n ngoc\\mongoc-server-description.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-server-stream.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-client-sessio\n n.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-server-monitor.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-set.c\" \"<Path To>\\mon\n go-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-socket.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-stream-buffered.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\sr\n c\\mongoc\\mongoc-stream.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-stream-file.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-stream-gridfs.c\" \"D:\\Gi\n tHub\\externalapi\\mongodb\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-stream-gridfs-download.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-stream-gridfs-upload.c\" \"D:\\GitHub\\externala\n pi\\mongodb\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-stream-socket.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-topology.c\" \"<Path To>\\mongo-c-driver\\source\\sr\n c\\libmongoc\\src\\mongoc\\mongoc-topology-background-monitoring.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-topology-description.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libm\n ongoc\\src\\mongoc\\mongoc-topology-description-apm.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-topology-scanner.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\n \\mongoc-uri.c\" \"<Path 
To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-util.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-version-functions.c\" \"D:\\GitHub\\externala\n pi\\mongodb\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-write-command.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-write-command-legacy.c\" \"<Path To>\\mongo-c-driv\n er\\source\\src\\libmongoc\\src\\mongoc\\mongoc-write-concern.c\" \"<Path To>\\mongo-c-driver\\source\\src\\common\\common-b64.c\" \"<Path To>\\mongo-c-driver\\source\\src\\common\\common-md5.c\" \"D:\\GitHub\\external\n api\\mongodb\\mongo-c-driver\\source\\src\\common\\common-thread.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-crypto.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\n \\mongoc-scram.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-stream-tls.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-ssl.c\" \"D:\\GitHub\\externalapi\\mon\n godb\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-crypto-cng.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-rand-cng.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmong\n oc\\src\\mongoc\\mongoc-stream-tls-secure-channel.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-secure-channel.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mon\n goc-sasl.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-cluster-sspi.c\" \"<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-sspi.c\"\n mongoc-aggregate.c\n ...\n mongoc-compression.c\n [<Path To>\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-compression.c(26): fatal error C1083: Cannot open include file: 'zlib.h': No such file or directory [<Path To>\\mongo-c-driver\\build\\\nsrc\\libmongoc\\mongoc_shared.vcxproj]", "text": "specifying -DENABLE_ZLIB=SYSTEM and -DZLIB_ROOT=\\zlib results in the error [\\mongo-c-driver\\source\\src\\libmongoc\\src\\mongoc\\mongoc-compression.c(26): fatal error C1083: Cannot open include file: ‘zlib.h’: No such file or directory [\\mongo-c-driver\\build\nsrc\\libmongoc\\mongoc_shared.vcxproj] during the build.Directory listing of zlib root, and full cmake command included for referencecmake:output:", "username": "Thomas_Morten" }, { "code": "diff --git a/src/libmongoc/CMakeLists.txt b/src/libmongoc/CMakeLists.txt\nindex 1fde3bfee..2f621971d 100644\n--- a/src/libmongoc/CMakeLists.txt\n+++ b/src/libmongoc/CMakeLists.txt\n@@ -55,6 +55,7 @@ configure_file (\n \"${CMAKE_BINARY_DIR}/src/zlib-1.2.11/zconf.h\"\n COPYONLY\n )\n+set (ZLIB_INCLUDE_DIRS \"\")\n if (ENABLE_ZLIB MATCHES \"SYSTEM|AUTO\")\n message (STATUS \"Searching for zlib CMake packages\")\n include (FindZLIB)\n@@ -74,13 +75,12 @@ if (ENABLE_ZLIB MATCHES \"SYSTEM|AUTO\")\n endif ()\n endif ()\n \n-set (PRIVATE_ZLIB_INCLUDES \"\")\n if ( (ENABLE_ZLIB STREQUAL \"BUNDLED\")\n OR (ENABLE_ZLIB STREQUAL \"AUTO\" AND NOT ZLIB_FOUND) )\n message (STATUS \"Enabling zlib compression (bundled)\")\n set (SOURCES ${SOURCES} ${ZLIB_SOURCES})\n set (\n- PRIVATE_ZLIB_INCLUDES\n+ ZLIB_INCLUDE_DIRS\n \"${SOURCE_DIR}/src/zlib-1.2.11\"\n \"${CMAKE_BINARY_DIR}/src/zlib-1.2.11\"\n )\n@@ -723,7 +723,7 @@ add_library (mongoc_shared SHARED ${SOURCES} ${HEADERS} ${HEADERS_FORWARDING})\n set_target_properties (mongoc_shared PROPERTIES CMAKE_CXX_VISIBILITY_PRESET hidden)\n 
target_link_libraries (mongoc_shared PRIVATE ${LIBRARIES} PUBLIC ${BSON_LIBRARIES})\n target_include_directories (mongoc_shared BEFORE PUBLIC ${MONGOC_INTERNAL_INCLUDE_DIRS})\n-target_include_directories (mongoc_shared PRIVATE ${PRIVATE_ZLIB_INCLUDES})\n+target_include_directories (mongoc_shared PRIVATE ${ZLIB_INCLUDE_DIRS})\n target_include_directories (mongoc_shared PRIVATE ${LIBMONGOCRYPT_INCLUDE_DIRECTORIES})\n if (MONGOC_ENABLE_MONGODB_AWS_AUTH)\n target_include_directories (mongoc_shared PRIVATE \"${CMAKE_CURRENT_SOURCE_DIR}/../kms-message/src\")\n@@ -765,7 +765,7 @@ if (MONGOC_ENABLE_STATIC_BUILD)\n message (\"Adding -fPIC to compilation of mongoc_static components\")\n endif ()\n target_include_directories (mongoc_static BEFORE PUBLIC ${MONGOC_INTERNAL_INCLUDE_DIRS})\n- target_include_directories (mongoc_static PRIVATE ${PRIVATE_ZLIB_INCLUDES})\n+ target_include_directories (mongoc_static PRIVATE ${ZLIB_INCLUDE_DIRS})\n target_include_directories (mongoc_static PRIVATE ${LIBMONGOCRYPT_INCLUDE_DIRECTORIES})\n if (MONGOC_ENABLE_MONGODB_AWS_AUTH)\n target_include_directories (mongoc_static PRIVATE \"${CMAKE_CURRENT_SOURCE_DIR}/../kms-message/src\")\n", "text": "@Thomas_Morten Using the additional information you provided I was able to reproduce the error. Using a Windows machine with VS2017, I built zlib and installed it to C:\\zlib, then attempted to build the C driver using the same options you did in your build. I encountered the same “No such file” error.I am writing up a ticket and will be fixing this in our Git repository shortly. In the meantime, you can get the C driver build working by applying this patch to your local copy:", "username": "Roberto_Sanchez" } ]
MongoC -DENABLE_ZLIB=SYSTEM not working
2020-09-30T11:05:18.820Z
MongoC -DENABLE_ZLIB=SYSTEM not working
4,978
null
[ "queries", "java" ]
[ { "code": " String param = \"hello\";\n database. getCollection(\"sample\").find(Filters.eq(\"mongo\", param)).forEach(\n new Block<T>() {\n @Override\n public void apply(ProcessingProtectedRegion region) {\n //my code to handle\n }\n },\n //implementation of SingleResultCallback<T>\n );\ndocumentation of Async driver ForEach opperation\n /* Iterates over all documents in the view, applying the given block to each, and completing the returned future after all documents\n * have been iterated, or an exception has occurred.\n \n * @param block the block to apply to each document\n * @param callback a callback that completed once the iteration has completed\n */\n void forEach(Block<? super TResult> block, SingleResultCallback<Void> callback)\n", "text": "Hi ,\nI have been trying to move from Mongo-DB async driver to Java reactive driver.\nso far I have been successful in migrating most of the operations.\nBut I’m stuck with MongoDbIterable and trying to find a compatible version for reactive driverHere is the code snippet for async driverIm trying to migrate the above snippet to Reactive driver but not able to find the correct operation which would behave similar to the ForEach() of async driver that takes 2 parameter as it react driver operations always needs subscriber", "username": "Sandeep_Chandan" }, { "code": " collection.find().subscribe(new Subscriber<>() {\n @Override\n public void onSubscribe(Subscription s) {\n // this is required by Reactive Streams to indicate \"demand\"\n s.request(Long.MAX_VALUE);\n }\n\n @Override\n public void onNext(Document document) {\n // this method is called for every document\n }\n\n @Override\n public void onError(Throwable t) {\n // this method is called once if there is an error\n }\n\n @Override\n public void onComplete() {\n // this method is called once if there is no error and after all documents have been iterated\n }\n });\n }\nSubscriber", "text": "The equivalent Reactive Streams code would look something like this:In practice you would probably want to define a base class implementing the Subscriber interface that you could re-use across all your queries, or else rely on a third-party library like Project Reactor, which does this for you as well as a whole lot more.Good luck!", "username": "Jeffrey_Yemin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB - Migrating from the Async Driver to the Reactive Driver
2022-09-04T02:30:42.036Z
MongoDB - Migrating from the Async Driver to the Reactive Driver
1,886
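For readers following this thread from Python rather than Java, the same iterate-then-complete shape maps onto async iteration in Motor. This is only an analogy to the Subscriber shown above, with the connection string and collection names invented for the sketch:

```python
# Rough Python/Motor analogue of the reactive-streams iteration shown above.
import asyncio

from motor.motor_asyncio import AsyncIOMotorClient

async def iterate_all() -> None:
    client = AsyncIOMotorClient("mongodb://localhost:27017")
    try:
        # The async-for body plays the role of onNext(); normal loop exit
        # corresponds to onComplete().
        async for doc in client["test"]["sample"].find({"mongo": "hello"}):
            print(doc)
    except Exception as exc:
        # Roughly the onError() path.
        print("query failed:", exc)
    finally:
        client.close()

asyncio.run(iterate_all())
```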
null
[ "text-search" ]
[ { "code": "", "text": "Helloi am looking for any detail around whether full text search capability works in mongoDB Realm mobile embedded database ?", "username": "Pavan_D1" }, { "code": "", "text": "Full text searches are fully supported in Realm. However, the question is a little vague - can you tell us about the use case and coding platform?In Realm it’s called a filter and is fully covered in the documentation. Here’s the Swift docs", "username": "Jay" }, { "code": "", "text": "Hey @Pavan_D1 i think you’re referring to Atlas Search, correct? If so, you can always create a Full text Search endpoint using GraphQL and call that via Mobile.Here’s an example of a custom resolver which uses Autocomplete:https://github.com/rkiesler1/MongoRx/blob/main/realm/graphql/custom_resolvers/query_autocomplete.json", "username": "Ethan_Steininger" }, { "code": "", "text": "I think the author refers to fuzzy/fulltext search “on local realm db” without calling external resources like Atlas Search.The filter feature pointed by @Jay it is pretty basic compared to Atlas Search.On local realm data I am afraid you can’t go beyond string “contains” and “like”.Am i wrong?", "username": "Robson_Tenorio" }, { "code": "", "text": "@Robson_TenorioThere are powerful and flexible querying options in Realm via the SDK; not only do you have the built in filtering features, there is also Realm Swift Query API and then leveraging NSPredicates can really amp up the filtering capability.See the NSPredicate Sheet Sheet along with Realm Filters for some further reading and examples", "username": "Jay" } ]
Full text search in MongoDB Realm embedded database?
2022-01-10T15:04:01.294Z
Full text search in MongoDB Realm embedded database?
5,439
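To make the Atlas Search suggestion above concrete on the server side, here is a minimal `$search` aggregation as it might look from Python. It assumes a cluster with an Atlas Search index (named `default` here) and uses invented database, collection, and field names; it does not run against a local Realm file, which is exactly Robson's point.

```python
# Hypothetical Atlas Search query via PyMongo. Requires an Atlas cluster with
# a search index (assumed name: "default"); local Realm data is out of scope.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
articles = client["app"]["articles"]

pipeline = [
    {"$search": {                       # $search must be the first stage
        "index": "default",
        "text": {"query": "full text search", "path": "body"},
    }},
    {"$limit": 10},
    {"$project": {"title": 1, "score": {"$meta": "searchScore"}}},
]

for doc in articles.aggregate(pipeline):
    print(doc)
```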
https://www.mongodb.com/…e_2_1024x512.png
[]
[ { "code": "", "text": "Hello,I followed the steps in the documentation of Mongo here;I use Macbook Air 2017 (Intel).However, somehow I cannot run the commands like “mongo” or “mongod” successfully.When I command “mongo”, terminal says zsh: command not found mongo.When I try to run “mongod” I get this very long error message that I attach with the post.\n\nEkran Resmi 2022-09-04 17.13.571416×646 219 KB\nCan you please help me on this?", "username": "Samed_Torun" }, { "code": "mongodmongod--portmongomongoshmongomongosh", "text": "Hello @Samed_Torun and welcome to the MongoDB Community forums. The line after what you have highlighted says Address already in use. This means that there is a process alreadly listening on port 27017. Most likely this is another instance of mongod.mongod is the process that runs the database and you can only have a single instance of the process running at a time, unless you specify different --port options for the other servers.We can also tell from your screenshot that you are running MongoDB 6.0. This version no longer ships the mongo executable. This is the older version of the shell and it has been replaced by the mongosh executable which should have been installed on your Mac. If not you can download it separately. Almost anything you can do in the older mongo tool you can do in the newer mongosh. However, most people are not going to notice any difference between the two and you will likely not run across any of the functionality that is missing.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Community Edition can't install on MacBook
2022-09-04T14:19:06.808Z
MongoDB Community Edition can&rsquo;t install on Macbook
1,735
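A quick way to confirm Doug's diagnosis, that something is already listening on port 27017, is simply to try talking to it. A hedged PyMongo probe (any driver or mongosh would do the same job):

```python
# If this ping succeeds, a mongod is already running on 27017, which is why
# starting a second one fails with "Address already in use".
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=2000)
try:
    client.admin.command("ping")
    print("A MongoDB server is already listening on 27017.")
except ConnectionFailure:
    print("No MongoDB answered; some other process may hold the port.")
```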
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi experts,I have a case where I have an array in a collection that might grow unlimited. For performance purpose, going to apply the outlier pattern to avoid unbounded arrays. What do you think which better from performance wise, create new collection to have the overflow (extra) data? Or create new document in the same collection?Thanks", "username": "Rami_Khal" }, { "code": "{\n_id : \"doc1\", \nparent : 'xxx' ,\narray : [ { \"id\" : \"embeeded1\" } ... { \"id\" : \"embeededN\" } ],\noverFlowIndex: 1,\nhasOverflow : true\n}\n...\n{\n_id : \"doc2\", \nparent : 'xxx' ,\narray : [ { \"id\" : \"embeeded1\" } ... { \"id\" : \"embeededN\" }] ,\noverFlowIndex: 2,\nhasOverflow : false\n}\nxxxdb.collection.find({\"parent\" : \"xxx\" , overFlowIndex : { $gt : 0} }\ndb.collection.find({\"parent\" : \"xxx\" , overFlowIndex : { $gt : 0} }.sort({ overFlowIndex : 1})\n{\"parent\" : 1, \"overFlowIndex\" : 1}", "text": "Hi @Rami_Khal ,Its an interesting question, I would say that if you perform a lookup of the overflow documents then it is not that important.But if you be able to cluster those documents on the same index then it might turn into a range query.Now to get all the documents of parent xxx I need to query:If you need to sort the documents based on insert order:Now when indexing {\"parent\" : 1, \"overFlowIndex\" : 1} you will get an indexed query to get all overflow documents. This will have a much better performance then doing a lookup of overflow document from another collection.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks a lot for the reply, @Pavel_Duchovny .\nGoing to start design and implement the same approach.", "username": "Rami_Khal" } ]
Proper way to avoid unbounded arrays
2022-09-02T22:52:24.128Z
Proper way to avoid unbounded arrays
1,556
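For readers who want the insert side of Pavel's layout spelled out, here is a sketch in Python. The bucket size, collection names, and the single-writer assumption (no concurrent appends to the same parent) are all assumptions added for illustration:

```python
# Hypothetical outlier/bucket insert: cap each document at MAX_ITEMS embedded
# entries, then roll over to a new document with the next overFlowIndex.
from pymongo import DESCENDING, MongoClient

MAX_ITEMS = 200  # assumed bucket size
coll = MongoClient()["app"]["buckets"]

def append_item(parent: str, item: dict) -> None:
    # Matching on the absence of array element MAX_ITEMS-1 selects a bucket
    # that still has room; $push then appends to it.
    res = coll.update_one(
        {"parent": parent, f"array.{MAX_ITEMS - 1}": {"$exists": False}},
        {"$push": {"array": item}},
    )
    if res.matched_count == 0:
        # Every existing bucket is full (or none exist): open the next one.
        last = coll.find_one({"parent": parent}, sort=[("overFlowIndex", DESCENDING)])
        next_index = (last["overFlowIndex"] + 1) if last else 1
        coll.insert_one({"parent": parent, "overFlowIndex": next_index, "array": [item]})
```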
null
[ "data-modeling" ]
[ { "code": "{ \"properties\": { \"_id\": { \"bsonType\": \"string\" }, \"_partition\": { \"bsonType\": \"string\" }, \"memberOf\": { \"bsonType\": \"array\", \"items\": { \"bsonType\": \"object\", \"properties\": { \"name\": { \"bsonType\": \"string\" }, \"partition\": { \"bsonType\": \"string\" } }, \"title\": \"Project\" } }, \"name\": { \"bsonType\": \"string\" } }, \"required\": [ \"_id\", \"_partition\", \"name\" ], \"title\": \"User\" }", "text": "I am trying to add a new schema to my app, but I keep getting this error “schema for namespace (AppName.User) must include partition key “pair””. Any ideas about how I can resolve this?{ \"properties\": { \"_id\": { \"bsonType\": \"string\" }, \"_partition\": { \"bsonType\": \"string\" }, \"memberOf\": { \"bsonType\": \"array\", \"items\": { \"bsonType\": \"object\", \"properties\": { \"name\": { \"bsonType\": \"string\" }, \"partition\": { \"bsonType\": \"string\" } }, \"title\": \"Project\" } }, \"name\": { \"bsonType\": \"string\" } }, \"required\": [ \"_id\", \"_partition\", \"name\" ], \"title\": \"User\" }", "username": "Abene_Tester" }, { "code": "{\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\" \n }, \n \"_partition\": {\n \"bsonType\": \"string\" \n }, \n \"memberOf\": {\n \"bsonType\": \"array\", \n \"items\": {\n \"bsonType\": \"object\", \n \"properties\": {\n \"name\": {\n \"bsonType\": \"string\" \n }, \n \"partition\": {\n \"bsonType\": \"string\" \n } \n }, \n \"title\": \"Project\" \n } \n }, \n \"name\": {\n \"bsonType\": \"string\" \n } \n },\n \"required\": [ \"_id\", \"_partition\", \"name\" ], \n \"title\": \"User\" \n}\n\"partition\"\"_partition\"", "text": "Hi @Abene_Tester. Welcome to the forums!I expanded the model out a bit to make it easier to see:I’m not sure exactly, but since you didn’t append partition with the _ in your Project object, it might be looking for _partition and not finding it.I’d try to update the \"partition\" in your Project properties to be \"_partition\"", "username": "Kurt_Libby1" }, { "code": "", "text": "Thank your for the help! I went to my sync configuration and found that I set the partition key as “pair”. After reconfiguring the name for partition key, everything works fine now.", "username": "Abene_Tester" } ]
Setting up partition key for schema
2022-09-04T03:24:48.893Z
Setting up partition key for schema
1,460
null
[ "aggregation", "queries" ]
[ { "code": "{\n '$match': {\n 'bookings': {\n '$not': {\n '$eq': Date('Tue, 09 Aug 2022 16:00:00 GMT')\n }\n }\n }\n}\nconst dates = [\n '2022-09-09T16:00:00.000+00:00',\n '2022-09-08T16:00:00.000+00:00',\n '2022-09-10T16:00:00.000+00:00',\n]\n\n{\n '$match': {\n 'bookings': {\n '$not': {\n '$eq': Date([dates])\n }\n }\n }\n}\n", "text": "Hello,I’m currently using the following $match statement in an aggregation query to exclude entries with a booking date that matches the given date. The bookings field is an array of dates. This works fine:However, I need or would like to pass in an array of dates (that will have been dynamically generated). Is this possible? I’d like to do something like this:What would be the best way of filtering/excluding against an array of data, please?\nCheers,\nMatt", "username": "Matt_Heslington1" }, { "code": "$nin{\n '$match': {\n 'bookings': {\n '$nin': dates\n }\n }\n}\n", "text": "Hi,You can use $nin operator:Working example", "username": "NeNaD" }, { "code": "", "text": "Hi Nenad, brilliant, thank you, $nin was exactly what I was looking for.\nThank you for your help and for the example too!\nHave a great day,\nMatt", "username": "Matt_Heslington1" }, { "code": "", "text": "Hi @Matt_Heslington1,You are welcome. I am glad it helped you!", "username": "NeNaD" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Using an Array of Values for $match $not $eq in an Aggregation Query
2022-09-04T06:01:27.709Z
Using an Array of Values for $match $not $eq in an Aggregation Query
1,210
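The one wrinkle when running the accepted `$nin` answer from a driver is the date type: the values must be real BSON dates (for example Python `datetime` objects), not ISO strings. A small illustrative sketch with made-up names:

```python
# $nin against an array-of-dates field; pass datetime objects, not strings.
from datetime import datetime, timezone

from pymongo import MongoClient

dates = [
    datetime(2022, 9, 8, 16, 0, tzinfo=timezone.utc),
    datetime(2022, 9, 9, 16, 0, tzinfo=timezone.utc),
    datetime(2022, 9, 10, 16, 0, tzinfo=timezone.utc),
]

venues = MongoClient()["app"]["venues"]
pipeline = [{"$match": {"bookings": {"$nin": dates}}}]

for doc in venues.aggregate(pipeline):
    print(doc["_id"])
```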
null
[ "node-js", "react-js" ]
[ { "code": "Auth.jsrouter.get(\"/facebook\", passport.authenticate(\"facebook\", { \n scope: [\"email\"] }));\n\nrouter.get(\"/auth/facebook/callback\", passport.authenticate(\"facebook\", {\n successRedirect: \"http://localhost:3000/\",\n failureRedirect: \"/facebookLogin/failed\"\n}));\n\nrouter.get(\"/facebookLogin/success\", async (req, res)=>{\n if(req.user){\n const user = await User.findOne({provider_id: req.user.id, \n provider: req.user.provider})\n if(user){\n res.status(200).json({\n success: true,\n message: \"success\",\n user: user\n })\n \n }else{\n const checkUserEmail = await User.findOne({email: req.user.email})\n if(checkUserEmail){\n res.status(401).json({\n success: false,\n message: \"User already Exist with this email id\",\n })\n }else{\n const user = await User.create({\n username: req.user.name.givenName+ \"_\" +req.user.name.familyName,\n firstName: req.user.name.givenName,\n lastName: req.user.name.familyName,\n email: req.user.emails[0].value,\n provider: req.user.provider,\n provider_id: req.user.id,\n profilePic: req.user.photos[0].value,\n });\n res.status(200).json({\n success: true,\n message: \"success\",\n user: user\n })\n }\n }\n console.log(\"CURRNT USER: \", user);\n }\n})\n\nrouter.get(\"/facebookLogin/failed\", (req, res)=>{\n if(req.user){\n res.status(401).json({\n success: false,\n message: \"failure\",\n })\n }\n})\n", "text": "I’m getting this error whenever i’m trying to log a user in using passportjs, i checked in mongo collection and there is no username the same !/Users//my-blog/api/node_modules/mongodb/lib/operations/insert.js:53 return callback(new error_1.MongoServerError(res.writeErrors[0])); ^ MongoServerError: E11000 duplicate key error collection: blog.users index: username_1 dup key: { username: “undefined_undefined” }Auth.js Code:What I’m missing here ?", "username": "Sultan_Hboush" }, { "code": "", "text": "your schema seems to have “unique” tag on “username” and also your request does not send “user.name.givenName” and “user.name.familyName” as you expected (typos?), but instead, they are null and thus you get “undefined_undefined”. the first user can get registered and then anyone else hits on this error.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "I don’t get it !! there is no typos in the code, now i’m getting the same error but index: email_1MongoServerError: E11000 duplicate key error collection: blog.users index: email_1 dup keyI don’t get why it’s a problem I have the unique tag on “username” and “email”", "username": "Sultan_Hboush" }, { "code": "req.user.name.givenName+ \"_\" +req.user.name.familyNameundefined_undefinedreq.user.emails[0].valueundefinedreq.userundefined", "text": "I don’t get why it’s a problem I have the unique tag on “username” and “email”Problem is not you having “unique” tag on them.The problem is that you are sending wrong/null data for them.", "username": "Yilmaz_Durmaz" }, { "code": "req.bodyreq.body.user.email{user:{name:\"...\",email:\"...\"}}req.body.email{name:\"...\",email:\"...\"}", "text": "It seems you are using Node.js and Express is your server.In that case, your \"POST\"ed data is carried over in req.body.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoServerError: E11000 duplicate key error collection
2022-09-01T19:17:10.687Z
MongoServerError: E11000 duplicate key error collection
20,375
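Yilmaz's diagnosis in miniature: a unique index stores one key per document even when the field is missing, so two documents without a username both index as null and the second insert throws E11000. The sketch below (PyMongo for neutrality; all names invented) reproduces the error and shows a partial unique index as one common remedy:

```python
# Reproduce the E11000 from this thread and one way around it.
from pymongo import ASCENDING, MongoClient
from pymongo.errors import DuplicateKeyError

users = MongoClient()["blog"]["users_demo"]
users.drop()
users.create_index([("username", ASCENDING)], unique=True)

users.insert_one({"email": "a@example.com"})      # no username -> key is null
try:
    users.insert_one({"email": "b@example.com"})  # second null key -> E11000
except DuplicateKeyError as exc:
    print("duplicate key:", exc.details)

# Remedy: enforce uniqueness only when username is actually a string.
users.drop_indexes()
users.create_index(
    [("username", ASCENDING)],
    unique=True,
    partialFilterExpression={"username": {"$type": "string"}},
)
```

The better fix, as the thread concludes, is to stop sending undefined values in the first place.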
null
[]
[ { "code": "", "text": "I am new to mongodb\nRecently our software developer implement tls and x509 security feature at database server.\nAt mongo cfg file, tls is enabled and pem file path is defined.\nThere are set of certs given by customer.\nIn order to understand and verify the authentication feature is implemented correctly, I use mongo compass to establish connection and view collection data.\nAt mongo compass, first i turned on x509, then at tls tab, i turn on TLS, added the CA cert, added the pem cert, suppy a password, click connect , connection established successfully. I assumed that both x509,tls and certs between client and server works well.\nHowever, i try to do another way round to see if anyone can exploit and access db without certs.\nI turn on x509, turn on tls still, delete both ca and client cert. Enabled the option “allow invalid cert”.\nClock connect, i am able to access db still, anyone can explain to me why?", "username": "Dstest" }, { "code": "net.tls.modemongod.confpreferTLS ", "text": "Hey\nCan you show net.tls.mode section from your mongod.conf file ?\nyou can have the mode set to preferTLS - connections between servers use TLS, for incoming connections, the server accepts both TLS and non-TLS.", "username": "Arkadiusz_Borucki" }, { "code": "net:\n tls:\n mode: requiredTLS\n disabledProtocols: TLS1_0,TLS1_1\n certificateKeyFile: C:/Certs/cert.pem\n", "text": "", "username": "Dstest" } ]
Newbie trying to understand MongoDB TLS and x509 authentication
2022-07-30T15:22:43.433Z
Newbie trying to understand MongoDB TLS and x509 authentication
1,304
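One detail worth separating out from the Compass experiment: the client-side "allow invalid cert" toggle only skips validation of the server's certificate; the connection is still encrypted, and with requireTLS the server still refuses plaintext. Here is how the two client configurations might look from PyMongo, with the host and paths as placeholders:

```python
# Two hypothetical clients against a requireTLS server.
from pymongo import MongoClient

# Strict: verify the server cert and present a client cert for X.509 auth.
strict_client = MongoClient(
    "mongodb://dbhost:27017/?authMechanism=MONGODB-X509",
    tls=True,
    tlsCAFile="C:/Certs/ca.pem",
    tlsCertificateKeyFile="C:/Certs/client.pem",
)

# Lax: still TLS, but skip verifying the server certificate. This is the
# analogue of Compass's "allow invalid cert" option: it can connect, but it
# is not authenticated, so reads should fail if authorization is enabled
# on the server, which is worth checking in the scenario above.
lax_client = MongoClient(
    "mongodb://dbhost:27017",
    tls=True,
    tlsAllowInvalidCertificates=True,
)
```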
null
[ "aggregation", "node-js" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"62908343716f52001696cc9b\"\n },\n \"wards\": [\n {\n \"admissionNumber\": \"Sil 0020\",\n \"schoolCode\": \"SC227043\"\n },\n {\n \"admissionNumber\": \"Sil 0023\",\n \"schoolCode\": \"SC227043\"\n }\n ],\n \"verify\": false,\n \"regPoint\": 4,\n \"completeRegistration\": true,\n \"email\": \"\",\n \"schoolId\": {\n \"$oid\": \"624f9790a900d000165b543e\"\n },\n \"password\": \"$2a$10$60aNQeReDfHohft1P5WDjuWwfvCtJZlr.7sjUbDU8K8TpQgs5U1oG\",\n \"__v\": 0,\n \"otp\": \"3654\",\n \"phoneNumber\": \"8028944791\",\n \"address\": \"22 apata alakia ibadan\",\n \"firstName\": \"taiwo\",\n \"lastName\": \"omotola\",\n \"about\": \"life\",\n \"imgUri\": \"\"\n}\n\n{\n \"_id\": {\n \"$oid\": \"6290958b767b9000169383e9\"\n },\n \"wards\": [\n {\n \"admissionNumber\": \"HOG001\",\n \"schoolCode\": \"H223302\"\n },\n {\n \"admissionNumber\": \"HOG0010\",\n \"schoolCode\": \"H223302\"\n }\n ],\n \"verify\": false,\n \"regPoint\": 4,\n \"completeRegistration\": true,\n \"email\": \"[email protected]\",\n \"schoolId\": {\n \"$oid\": \"621f990511e2b40016a22fad\"\n },\n \"password\": \"$2a$10$xWbINEzm/NvyQw0CWKKsMukxnvh2UC1MjKKoOjb5QtuATuYeI8f52\",\n \"__v\": 0,\n \"otp\": \"7509\",\n \"phoneNumber\": \"8067410631\",\n \"address\": \"60 association road\",\n \"firstName\": \"olayemi\",\n \"lastName\": \"williams\"\n}\n", "text": "Hello Guys, i am having issue with getting data that $match value in an array and get those data by array or ObjectThis is the document that i want to fetch data fromand so on.i want to get all the data of any student that his/her admissionNumber $match\nDia001\nDia002\nDI001\nBHS00200\nBHS00100\nBHS00300\ndia007\nFTS00400\nHOG001\nHOG002\nHOG003\nHOG004Please i need to explain further, kindly help me", "username": "Gbade_Francis" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and update your post so that we see more clearly your issue.", "username": "steevej" }, { "code": "{\n \"_id\": {\n \"$oid\": \"6209599e9ee1d60016315ce0\"\n },\n \"sex\": \"female\",\n \"dateJoined\": {\n \"$date\": {\n \"$numberLong\": \"1659380317083\"\n }\n },\n \"roleName\": null,\n \"isDeleted\": false,\n \"levelHistory\": [],\n \"admissionNumber\": \"Dia001\",\n \"firstName\": \"kehinde\",\n \"middleName\": \"\",\n \"lastName\": \"fatokun\",\n \"dateOfBirth\": {\n \"$date\": {\n \"$numberLong\": \"1280770043041\"\n }\n },\n \"guardianContact\": \"08034356783\",\n \"currentLevel\": \"620956889ee1d60016315bfc\",\n \"schoolId\": \"620956889ee1d60016315bf8\",\n \"__v\": 0\n},\n{\n \"_id\": {\n \"$oid\": \"620959f09ee1d60016315ced\"\n },\n \"sex\": \"male\",\n \"dateJoined\": {\n \"$date\": {\n \"$numberLong\": \"1659380317083\"\n }\n },\n \"roleName\": null,\n \"isDeleted\": false,\n \"levelHistory\": [],\n \"admissionNumber\": \"Dia002\",\n \"firstName\": \"funsho \",\n \"middleName\": \"\",\n \"lastName\": \"adeoye\",\n \"dateOfBirth\": {\n \"$date\": {\n \"$numberLong\": \"1280747131521\"\n }\n },\n \"guardianContact\": \"08035678893\",\n \"currentLevel\": \"620956889ee1d60016315bfa\",\n \"schoolId\": \"620956889ee1d60016315bf8\",\n \"__v\": 0\n}\n{\n \"_id\": {\n \"$oid\": \"62908343716f52001696cc9b\"\n },\n \"wards\": [\n {\n \"admissionNumber\": \"Sil 0020\",\n \"schoolCode\": \"SC227043\"\n },\n {\n \"admissionNumber\": \"Sil 0023\",\n \"schoolCode\": \"SC227043\"\n }\n ],\n \"verify\": false,\n \"regPoint\": 4,\n \"completeRegistration\": true,\n \"email\": \"\",\n \"schoolId\": {\n \"$oid\": \"624f9790a900d000165b543e\"\n },\n 
\"password\": \"$2a$10$60aNQeReDfHohft1P5WDjuWwfvCtJZlr.7sjUbDU8K8TpQgs5U1oG\",\n \"__v\": 0,\n \"otp\": \"3654\",\n \"phoneNumber\": \"8028944791\",\n \"address\": \"22 apata alakia ibadan\",\n \"firstName\": \"taiwo\",\n \"lastName\": \"omotola\",\n \"about\": \"life\",\n \"imgUri\": \"\"\n},\n{\n \"_id\": {\n \"$oid\": \"62bc10586f0d8b00163d02e2\"\n },\n \"wards\": [\n {\n \"admissionNumber\": \"HOG002\",\n \"schoolCode\": \"H223302\"\n }\n ],\n \"verify\": false,\n \"regPoint\": 4,\n \"completeRegistration\": true,\n \"email\": \",\n \"schoolId\": {\n \"$oid\": \"621f990511e2b40016a22fad\"\n },\n \"password\": \"$2a$10$BFM6/58We5rMZGzg9HYNq.PW/0.AwuQ8J.TgZmqEORw3CdwNjycUm\",\n \"__v\": 0,\n \"otp\": \"2268\",\n \"phoneNumber\": \"8029313116\",\n \"address\": \"lagos\",\n \"firstName\": \"simon\",\n \"lastName\": \"simon\"\n}\n", "text": "i have 4 tables,\n(1) wards\n(2) Guardians\n(3) Birthday\n(4) Teacher.I used aggregate function to get all the wards which their birthday will be comming in the next 7 days. and inserting it in the birthday table using node schedular. (This is working perfectly) at the process of getting those wards details, i need to send notification to both teacher and Guardian that has relationship with those wards that will be having birthday…The only mean i can use to get Guardian details to receive the notification is by checking the guadians table and get each guardian for each student…and same thing with Teacher.the table is as followsWards TableThis is where i am getting wards details from and inserting them into birthday tableso from this details, there is AdmissionNumber which stand as the relationship between the wards and the Guardian.Guardian Tablewith this table, i need to get all the guardians detail that has relationship with the wards that will be having birthday next weeks.\nBelow is the output of adminssioinNumber comming from the aggregate that i used to get the wards details\n“Dia001\nDia002\nDI001\nBHS00200\nBHS00100\nBHS00300\ndia007\nFTS00400\nHOG001\nHOG002\nHOG003\nHOG004”.All i need to do is to use this admission numbers to check all the guardians that has a relationship with these number, and get their details out. 
Thank You", "username": "Gbade_Francis" }, { "code": "", "text": "Have you tried to cut-n-paste back your sample documents?We cannot because the quotes are wrong so please:read Formatting code and log snippets in posts and update your post so that we see more clearly your issue.", "username": "steevej" }, { "code": "{\n \"_id\": {\n \"$oid\": \"62908343716f52001696cc9b\"\n },\n \"wards\": [\n {\n \"admissionNumber\": \"Sil 0020\",\n \"schoolCode\": \"SC227043\"\n },\n {\n \"admissionNumber\": \"Sil 0023\",\n \"schoolCode\": \"SC227043\"\n }\n ],\n \"verify\": false,\n \"regPoint\": 4,\n \"completeRegistration\": true,\n \"email\": \"[email protected]\",\n \"schoolId\": {\n \"$oid\": \"624f9790a900d000165b543e\"\n },\n \"password\": \"$2a$10$60aNQeReDfHohft1P5WDjuWwfvCtJZlr.7sjUbDU8K8TpQgs5U1oG\",\n \"__v\": 0,\n \"otp\": \"3654\",\n \"phoneNumber\": \"8028944791\",\n \"address\": \"22 apata alakia ibadan\",\n \"firstName\": \"taiwo\",\n \"lastName\": \"omotola\",\n \"about\": \"life\",\n \"imgUri\": \"https://res.cloudinary.com/asm-web/image/upload/v1654100090/ukv0qydhcpsfu1ssg7nd.jpg\"\n}\n", "text": "sample document for guardian is", "username": "Gbade_Francis" }, { "code": "{\n \"_id\": {\n \"$oid\": \"6209599e9ee1d60016315ce0\"\n },\n \"sex\": \"female\",\n \"dateJoined\": {\n \"$date\": {\n \"$numberLong\": \"1659380317083\"\n }\n },\n \"roleName\": null,\n \"isDeleted\": false,\n \"levelHistory\": [],\n \"admissionNumber\": \"Dia001\",\n \"firstName\": \"kehinde\",\n \"middleName\": \"\",\n \"lastName\": \"fatokun\",\n \"dateOfBirth\": {\n \"$date\": {\n \"$numberLong\": \"1280770043041\"\n }\n },\n \"guardianContact\": \"08034356783\",\n \"currentLevel\": \"620956889ee1d60016315bfc\",\n \"schoolId\": \"620956889ee1d60016315bf8\",\n \"__v\": 0\n}\n", "text": "and sample document for ward is this", "username": "Gbade_Francis" } ]
How to fetch array data from a table using aggregate
2022-07-26T07:55:41.969Z
How to fetch array data from a table using aggregate
3,881
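What the thread ultimately needs, given the admission numbers produced by the birthday aggregation, is a single `$in` query against the embedded wards array. Shown in Python for illustration (the Node.js filter document is identical); the database, collection, and projection names are assumptions:

```python
# Fetch every guardian linked to any ward in the birthday list.
from pymongo import MongoClient

admission_numbers = ["Dia001", "Dia002", "HOG001", "HOG002"]  # from the aggregation

guardians = MongoClient()["school"]["guardians"]

# Dot notation matches any element of the wards array of subdocuments.
cursor = guardians.find(
    {"wards.admissionNumber": {"$in": admission_numbers}},
    {"firstName": 1, "lastName": 1, "phoneNumber": 1, "wards": 1},
)
for guardian in cursor:
    print(guardian["firstName"], guardian["phoneNumber"])
```

The same filter works for the teachers collection if it carries a comparable reference to the wards.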
null
[ "python" ]
[ { "code": "", "text": "I am using mongodb 5.0 to hold a dataset made by myself, but everytime after I update data using pymongo client, the mongodb server takes up much memory and doesnt release… I can only restart to release it. Anyone knows how to solve this ??", "username": "Kyrie_Yan" }, { "code": "", "text": "See the following. It might help you fine-tune your system.", "username": "steevej" }, { "code": "", "text": "Thanks for showing this, I’ve seen this before but it can only adjust the memory usage when i use the database. My problem is , when I have finished my interaction with the database using pymongo client,even if after client.closed( ) , the memory does not release however… what I need is as how it does in mongodb compass, after I close the compass, the memory is released at the same time… Any idea on it ?", "username": "Kyrie_Yan" }, { "code": "", "text": "I do not think that Compass does anything special to have mongod release any memory. The memory released is certainly the one used by Compass.Do you terminate your pymongo application after client.closed?", "username": "steevej" }, { "code": "", "text": "As far as I observe, when I open and inspect the database in the compass, the memory used by mongo server goes up to 60%. and when i closed the compass, it goes down . I guess the memory maintained for information of the specific database is released? But when I connect and manipulate the database using python, when pymongo client is closed and python program is terminated, the memory usage is still at 60%… How can i do with it?", "username": "Kyrie_Yan" } ]
How can the MongoDB server release memory after usage?
2022-08-31T09:07:19.920Z
How can the MongoDB server release memory after usage?
3,364
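The memory that stays resident here belongs to mongod's WiredTiger cache, not to the client process, so closing a PyMongo client cannot shrink it; by design the cache holds data until it is evicted or the server restarts. You can observe it (and cap it at startup with the wiredTigerCacheSizeGB setting) rather than free it from a client. A hedged sketch:

```python
# Inspect the server-side WiredTiger cache: this is the memory that remains
# allocated after clients disconnect. client.close() frees only this process.
from pymongo import MongoClient

client = MongoClient()
cache = client.admin.command("serverStatus")["wiredTiger"]["cache"]
print("bytes currently in the cache:", cache["bytes currently in the cache"])
print("maximum bytes configured:", cache["maximum bytes configured"])
client.close()  # releases the client's memory, not mongod's
```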
null
[ "serverless" ]
[ { "code": "", "text": "Hello everyone,I’m thinking of using Mongodb Serverless, but there are 2 things I’m curious about.1- Is serverless suitable for production? (I will keep the auth token records)\n2- Mongodb in Serverless product; Does serverless crash in case of failure? Or does it run replica sets in the background?Thank you.", "username": "Anil" }, { "code": "", "text": "Hey Anil,Atlas serverless instances are generally available, and we would recommend them for production use for applications where serverless is well-suited, typically sparse and infrequent workloads.Atlas serverless instances are built to be highly available in the backend, and do not crash/become unavailable if only a single VM fails.Best,\nChris\n– Atlas serverless product team", "username": "Christopher_Shum" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is MongoDB Serverless suitable for production?
2022-09-02T20:42:14.140Z
Is MongoDB Serverless suitable for production?
2,336
https://www.mongodb.com/…e4a05242f0e1.png
[ "swift" ]
[ { "code": "**Terminating app due to uncaught exception 'RLMException', reason: 'Cannot write to class ctext when no flexible sync subscription has been created.'**\n\nLogs:\n[\n \"Upload message contains 1 changeset(s) to be integrated\",\n \"Integrated 1 of 1 remaining changeset(s)\",\n \"Integrating changesets required conflict resolution to be performed on 0 of the changesets\",\n \"Latest server version is now 20\",\n \"Number of integration attempts was 1\"\n]\nFunction Call Location:\nUS-OR\nQuery:\n{\n \"course\": \"(TRUEPREDICATE)\"\n}\nWrite Summary:\n{\n \"course\": {\n \"inserted\": [\n \"630ae06a65d95d89d88efefb\"\n ]\n }\n}\nRemote IP Address:\n\nSDK:\nRealm Cocoa v10.28.4\nPlatform Version:\nVersion 15.5 (Build 19F70)\nimport Foundation\nimport RealmSwift\nimport SwiftUI\n\nstruct NavView: View {\n @ObservedResults(ctext.self) var contexts\n @ObservedResults(course.self) var courses\n @EnvironmentObject var app: RealmSwift.App\n @State var realm: Realm\n var body: some View {\n NavigationView {\n VStack {\n Button{\n let newUnit = course()\n newUnit.owner_id = app.currentUser?.id ?? \"noid\"\n newUnit.displayName = \"displayName\"\n $courses.append(newUnit)\n \n let c = ctext()\n c.owner_id = app.currentUser?.id ?? \"noid\"\n c.cpPath = \"base\"\n $contexts.append(c)\n } label: { Text(String(contexts.count)) }\n }\n }.task {\n do {\n let subscriptions = realm.subscriptions\n if let foundSubscriptions = subscriptions.first(named: \"ctext\") {\n print(\"ctext sub already made\")\n } else {\n print(\"creating ctext sub \")\n try await subscriptions.update {\n subscriptions.append(QuerySubscription<ctext>(name: \"ctext\"))\n }\n \n if let foundSubscriptions = subscriptions.first(named: \"course\") {\n print(\"course sub already made\")\n } else {\n print(\"creating course sub \")\n try await subscriptions.update {\n subscriptions.append(QuerySubscription<course>(name: \"course\"))}\n }\n }\n } catch {\n print(\"Error info: \\(error)\")\n }}\n }\n}\n2022-08-27 20:39:38.606451-0700 sample_app_dev[70377:8241788] Version 10.28.6 of Realm is now available: https://github.com/realm/realm-swift/blob/v10.28.6/CHANGELOG.md\n2022-08-27 20:39:38.609030-0700 sample_app_dev[70377:8241787] Sync: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = false\n2022-08-27 20:39:38.609933-0700 sample_app_dev[70377:8241787] Sync: Connection[2]: Session[2]: client_reset_config = false, Realm exists = true, client reset = false\n2022-08-27 20:39:38.676231-0700 sample_app_dev[70377:8241787] Sync: Connected to endpoint '54.202.198.109:443' (from '192.168.1.11:64811')\n2022-08-27 20:39:38.780715-0700 sample_app_dev[70377:8241780] [boringssl] boringssl_metrics_log_metric_block_invoke(153) Failed to log metrics\nctext sub already made\n2022-08-27 20:39:41.509175-0700 sample_app_dev[70377:8241587] *** Terminating app due to uncaught exception 'RLMException', reason: 'Cannot write to class ctext when no flexible sync subscription has been created.'\n*** First throw call stack:\n(\n\t0 CoreFoundation 0x000000010b407604 __exceptionPreprocess + 242\n\t1 libobjc.A.dylib 0x0000000108f61a45 objc_exception_throw + 48\n\t2 sample_app_dev 0x0000000104d5639a _ZN18RLMAccessorContext12createObjectEP11objc_objectN5realm12CreatePolicyEbNS2_6ObjKeyE + 3114\n\t3 sample_app_dev 0x0000000104e61fb8 RLMAddObjectToRealm + 280\n\t4 sample_app_dev 0x000000010515fca5 $s10RealmSwift0A0V3add_6updateySo0aB6ObjectC_AC12UpdatePolicyOtF + 1077\n\t5 sample_app_dev 0x000000010519132e 
$s10RealmSwift15BoundCollectionPAASo0aB6ObjectC7ElementRczAA7ResultsVyAGG5ValueRtzrlE6appendyyAGFyAJXEfU_ + 286\n\t6 sample_app_dev 0x000000010517e8f9 $s10RealmSwift9safeWrite33_06F2B43D1E2DA64D3C5AC1DADA9F5BA7LLyyx_yxXEtAA14ThreadConfinedRzlFyyXEfU_ + 57\n\t7 sample_app_dev 0x000000010517e91f $ss5Error_pIgzo_ytsAA_pIegrzo_TR + 15\n\t8 sample_app_dev 0x00000001051b37f4 $ss5Error_pIgzo_ytsAA_pIegrzo_TRTA + 20\n\t9 sample_app_dev 0x000000010515e833 $s10RealmSwift0A0V5write16withoutNotifying_xSaySo20RLMNotificationTokenCG_xyKXEtKlF + 275\n\t10 sample_app_dev 0x000000010517e798 $s10RealmSwift9safeWrite33_06F2B43D1E2DA64D3C5AC1DADA9F5BA7LLyyx_yxXEtAA14ThreadConfinedRzlF + 1080\n\t11 sample_app_dev 0x00000001051911db $s10RealmSwift15BoundCollectionPAASo0aB6ObjectC7ElementRczAA7ResultsVyAGG5ValueRtzrlE6appendyyAGF + 1419\n\t12 sample_app_dev 0x0000000104d1424e $s14sample_app_dev7NavViewV4bodyQrvg7SwiftUI6VStackVyAE6ButtonVyAE4TextVGGyXEfU_ALyXEfU_yycfU_ + 1406\n\t13 SwiftUI 0x00000001156b3841 $s7SwiftUI18WrappedButtonStyle33_AEEDD090E917AC57C12008D974DC6805LLV8makeBody13configurationQrAA09PrimitivedE13ConfigurationV_tFyycAHcfu_yycfu0_TA + 17\n\t14 SwiftUI 0x0000000115bd6163 $s7SwiftUI25PressableGestureCallbacksV8dispatch5phase5stateyycSgAA0D5PhaseOyxG_SbztFyycfU_ + 32\n\t15 SwiftUI 0x00000001157cca1e $sIeg_ytIegr_TR + 12\n\t16 SwiftUI 0x000000011549653a $sIeg_ytIegr_TRTA + 17\n\t17 SwiftUI 0x00000001154e15bc $sIeg_ytIegr_TRTA.5406 + 9\n\t18 SwiftUI 0x00000001157cca32 $sytIegr_Ieg_TR + 12\n\t19 SwiftUI 0x00000001157cca1e $sIeg_ytIegr_TR + 12\n\t20 SwiftUI 0x000000011549653a $sIeg_ytIegr_TRTA + 17\n\t21 SwiftUI 0x00000001154e15c7 $sIeg_ytIegr_TRTA.5414 + 9\n\t22 SwiftUI 0x00000001157a633e $s7SwiftUI6UpdateO3endyyFZ + 410\n\t23 SwiftUI 0x00000001158a1a11 $s7SwiftUI19EventBindingManagerC4sendyySDyAA0C2IDVAA0C4Type_pGF + 280\n\t24 SwiftUI 0x0000000115dbf209 $s7SwiftUI18EventBindingBridgeC4send_6sourceySDyAA0C2IDVAA0C4Type_pG_AA0cD6Source_ptFTf4nen_nAA22UIKitGestureRecognizerC_Tg5 + 1825\n\t25 SwiftUI 0x0000000115dbd766 $s7SwiftUI22UIKitGestureRecognizerC4send025_062C14327F4C9197D92807A7H6DF7F3BLL7touches5event5phaseyShySo7UITouchCG_So7UIEventCAA10EventPhaseOtF + 66\n\t26 SwiftUI 0x0000000115dbdf0a $s7SwiftUI22UIKitGestureRecognizerC12touchesBegan_4withyShySo7UITouchCG_So7UIEventCtFToTm + 138\n\t27 SwiftUI 0x0000000115dbd7ee $s7SwiftUI22UIKitGestureRecognizerC12touchesEnded_4withyShySo7UITouchCG_So7UIEventCtFTo + 40\n\t28 UIKitCore 0x000000012d3988b5 -[UIGestureRecognizer _componentsEnded:withEvent:] + 217\n\t29 UIKitCore 0x000000012d96337f -[UITouchesEvent _sendEventToGestureRecognizer:] + 662\n\t30 UIKitCore 0x000000012d38cd6b __47-[UIGestureEnvironment _updateForEvent:window:]_block_invoke + 70\n\t31 UIKitCore 0x000000012d38c9f5 -[UIGestureEnvironment _updateForEvent:window:] + 516\n\t32 UIKitCore 0x000000012d90fe24 -[UIWindow sendEvent:] + 5290\n\t33 UIKitCore 0x000000012d8e5eac -[UIApplication sendEvent:] + 820\n\t34 UIKitCore 0x000000012d97df0a __dispatchPreprocessedEventFromEventQueue + 5614\n\t35 UIKitCore 0x000000012d98135d __processEventQueue + 8635\n\t36 UIKitCore 0x000000012d977af5 __eventFetcherSourceCallback + 232\n\t37 CoreFoundation 0x000000010b3744a7 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 17\n\t38 CoreFoundation 0x000000010b37439f __CFRunLoopDoSource0 + 180\n\t39 CoreFoundation 0x000000010b37386c __CFRunLoopDoSources0 + 242\n\t40 CoreFoundation 0x000000010b36df68 __CFRunLoopRun + 871\n\t41 CoreFoundation 0x000000010b36d704 CFRunLoopRunSpecific + 562\n\t42 
GraphicsServices 0x000000010a5d9c8e GSEventRunModal + 139\n\t43 UIKitCore 0x000000012d8c665a -[UIApplication _run] + 928\n\t44 UIKitCore 0x000000012d8cb2b5 UIApplicationMain + 101\n\t45 SwiftUI 0x0000000115d90e5d $s7SwiftUI17KitRendererCommon33_ACC2C5639A7D76F611E170E831FCA491LLys5NeverOyXlXpFAESpySpys4Int8VGSgGXEfU_ + 196\n\t46 SwiftUI 0x0000000115d90d97 $s7SwiftUI6runAppys5NeverOxAA0D0RzlF + 148\n\t47 SwiftUI 0x0000000115753854 $s7SwiftUI3AppPAAE4mainyyFZ + 61\n\t48 sample_app_dev 0x0000000104d451de $s14sample_app_dev15realmSwiftUIAppV5$mainyyFZ + 30\n\t49 sample_app_dev 0x0000000104d45269 main + 9\n\t50 dyld 0x0000000108bd0f21 start_sim + 10\n\t51 ??? 0x000000011526751e 0x0 + 4649809182\n)\nlibc++abi: terminating with uncaught exception of type NSException\ndyld4 config: DYLD_ROOT_PATH=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot DYLD_LIBRARY_PATH=/Users/josephbittman/Library/Developer/Xcode/DerivedData/sample_app-dtakijxdnijtkegdverncnbkrzqg/Build/Products/Debug-iphonesimulator:/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/usr/lib/system/introspection DYLD_INSERT_LIBRARIES=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/usr/lib/libBacktraceRecording.dylib:/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/usr/lib/libMainThreadChecker.dylib:/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/Developer/Library/PrivateFrameworks/DTDDISupport.framework/libViewDebuggerSupport.dylib DYLD_FRAMEWORK_PATH=/Users/josephbittman/Library/Developer/Xcode/DerivedData/sample_app-dtakijxdnijtkegdverncnbkrzqg/Build/Products/Debug-iphonesimulator:/Users/josephbittman/Library/Developer/Xcode/DerivedData/sample_app-dtakijxdnijtkegdverncnbkrzqg/Build/Products/Debug-iphonesimulator/PackageFrameworks\n*** Terminating app due to uncaught exception 'RLMException', reason: 'Cannot write to class ctext when no flexible sync subscription has been created.'\nterminating with uncaught exception of type NSException\nCoreSimulator 802.6.1 - Device: iPhone 11 Pro (99012B61-5D78-46FB-A87D-B3FA37C14553) - Runtime: iOS 15.5 (19F70) - DeviceType: iPhone 11 Pro\n(lldb)\n", "text": "When I attempt to write a new piece of data, I get the following error:I am getting this error even though a subscription exists. The sub already exists, which you can verify thru the print statement output in the console log below.This NavView is a sandbox view for me to experiment in. I included the “course” subscription and $.append call in NavView, as I separately have an actual app’s view working for course. The error does not throw on the NavView’s course subscription. The primary item this sandbox testing is experimenting with is the Mixed sync type that is used within the “ctext” class. An interesting note about the course.append… even though it does not error, the data is never added to the collection in Atlas. 
This is in spite of the Logs area showing a write entry:\n(screenshot: atlas data, 513×668)\nPreviously, I had the subscription code set up using initialSubscriptions when the configuration is declared (higher in the view hierarchy). However, it had the same problem, and I moved it down into this sandbox view to try to isolate things better.Reproduction code:Full console output:", "username": "Joseph_Bittman" }, { "code": "", "text": "So, it turned out that Sync needed to be terminated and restarted.Maybe the subscriptions object allowed creating a subscription to the offline data and not the server, and so it appeared a subscription was made? But then I would think it would have allowed me to .append to the offline data even though the server connectivity was not there… and I could manually recover the changes when it goes back online via a client reset.And I don't know why the Logs for the course .append showed (or seemed to me to show) success even though it didn't?A few things to improve there if there is dev bandwidth… Not the first time I've spent half a day on something to realize it was fixed after restarting Sync. I've learned to check Sync since this has occurred a few times, but when I'm experimenting with new features, I just don't recall to do so.", "username": "Joseph_Bittman" }, { "code": "", "text": "I had another occurrence of this same error being thrown. This time, I terminated Sync and started it again. Same error. Terminated Sync a second time and started it; no error.", "username": "Joseph_Bittman" }, { "code": ".environment(\\.realmConfiguration, user!.flexibleSyncConfiguration())@ObservedResults@Environment(\\.realm) var realm", "text": "@Joseph_Bittman Can you please share the code showing how you inject the realm into the environment values?\nSomething like this:\n.environment(\\.realmConfiguration, user!.flexibleSyncConfiguration())\nThis is actually very important because this configuration is the one being used by the @ObservedResults.\nI would also suggest using\n@Environment(\\.realm) var realm if you are doing the subscription on the same view.", "username": "Diana_Maria_Perez_Af" }, { "code": "**let** realmApp = App(id: theAppConfig.appId, configuration: AppConfiguration(baseURL: theAppConfig.baseUrl, transport: **nil** , localAppName: **nil** , localAppVersion: **nil** ))\n\n ContentView().environmentObject(realmApp)\nlet config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n if let foundSubscriptions = subs.first(named: \"course\") {\n return\n } else {\n\n subs.append(QuerySubscription<course>(name: \"course\"))\n subs.append(QuerySubscription<ctext>(name: \"ctext\"))\n.......
\n }\n })\n.......\n\n OpenRealmView().environment(\\.realmConfiguration, config)\n @AsyncOpen(appId: theAppConfig.appId, timeout: 4000) var asyncOpen\n.........\n [viewname](realm: realm).environment(\\.realm, realm)\n", "text": "in app main:in content view: (based off of realm template)in openrealmview:", "username": "Joseph_Bittman" }, { "code": "class Contact: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: String = UUID().uuidString\n @Persisted var name: String = \"\"\n @Persisted var lastName: String = \"\"\n @Persisted var email: String = \"\"\n @Persisted var birthdate: Date = Date()\n}\n\n// For the purpose of this example, we have to ways of syncing, using @AsyncOpen and @AutoOpen\nstruct ContentView: View {\n var body: some View {\n NavigationView {\n LoginView()\n }\n }\n}\n\n// LoginView, Authenticate User\n// When you have enabled anonymous authentication in the Realm UI, users can immediately log into your app without providing any identifying information:\n// Documentation of how to login can be found (https://docs.mongodb.com/realm/sdk/ios/quick-start-with-sync/)\nstruct LoginView: View {\n @ObservedObject var loginHelper = LoginHelper()\n @State var isLogged = false\n\n var body: some View {\n VStack {\n if isLogged {\n if let user = loginHelper.user {\n let configuration = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n subs.append(QuerySubscription<Contact>(name: \"contacts\"))\n })\n AsyncOpenView()\n .environment(\\.realmConfiguration, configuration)\n } else {\n EmptyView()\n }\n } else {\n Button(\"Login\") {\n loginHelper.login() {\n isLogged = true\n }\n }\n }\n }\n .padding()\n .navigationTitle(\"Logging View\")\n }\n}\n\nclass LoginHelper: ObservableObject {\n var cancellables = Set<AnyCancellable>()\n var user: User?\n\n func login(completion: @escaping () -> Void) {\n let app = RealmSwift.App(id: appId)\n app.login(credentials: .anonymous)\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: { results in\n\n }, receiveValue: { user in\n self.user = user\n completion()\n })\n .store(in: &cancellables)\n }\n}\n\nstruct AsyncOpenView: View {\n @AsyncOpen(appId: appId, timeout: 4000) var asyncOpen\n\n var body: some View {\n VStack {\n switch asyncOpen {\n case .connecting, .waitingForUser, .progress:\n ProgressView()\n case .error:\n EmptyView()\n case .open(let realm):\n ListView()\n .environment(\\.realmConfiguration, realm.configuration)\n }\n }\n }\n}\n\nstruct ListView: View {\n @ObservedResults(Contact.self) var contacts\n @Environment(\\.realm) var realm\n @State var searchString: String = \"\"\n\n var body: some View {\n VStack {\n SwiftUI.List {\n ForEach(contacts) { contact in\n HStack {\n Text(contact.name)\n }\n }\n }\n }\n .navigationBarItems(trailing: HStack {\n Button(\"add\") {\n let contact = Contact()\n contact.name = \"name_\\(Int.random(in: 0...100))\"\n $contacts.append(contact)\n }\n })\n }\n}\n", "text": "Hi @Joseph_Bittman , you have to make sure that the realm used by the ObservedResults which you are using to append new objects is the same realm that you added the subscriptions. Which version of realm are you using?\nI tested the following code, which seems similar to yours and seems to be working fine\nRealm version 10.28.6", "username": "Diana_Maria_Perez_Af" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Flex sync when using Mixed type: error thrown that no subscription exists yet it does
2022-08-28T03:49:59.448Z
Flex sync when using Mixed type: error thrown that no subscription exists yet it does
2,718
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "", "text": "Is there a way to return an object from model.find({}) where the fields are mapped to their alias value. I need to convert _id to some other alias name. I am using mongoose node j", "username": "Jyothin_K_Jayan" }, { "code": "aggregate$projectmodel.aggregate([\n {\n \"$project\": {\n \"new_field\": \"$current_field\"\n }\n }\n])\n", "text": "Hi,You can use aggregate query with $project pipeline:Working Example", "username": "NeNaD" }, { "code": "", "text": "what i need is… I have my collection as\n{ {\n“_id”: ObjectId(“5a934e000102030405000000”), “'name” : “Arnold”\n},\n{\n“_id”: ObjectId(“5a934e000102030405000001”), “'name” : “helen”\n},\n}\ni need to query [model.find({filter}) from this collection to get the name as name and _id as userId", "username": "Jyothin_K_Jayan" }, { "code": "model.aggregate([\n {\n \"$project\": {\n \"name\": 1,\n \"userId\": \"$_id\",\n \"_id\": 0\n }\n }\n])\n", "text": "Hi,You can do it like this:Working example", "username": "NeNaD" } ]
Is there a way to return an object from model.find({}) where the fields are mapped to their alias value. I need to convert _id to some other alias name. I am using mongoose node js
2022-09-02T07:50:55.219Z
Is there a way to return an object from model.find({}) where the fields are mapped to their alias value. I need to convert _id to some other alias name. I am using mongoose node js
3,418
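As a complement to the aggregation approach in the thread above, a minimal sketch of renaming _id at serialization time with a Mongoose toJSON transform, so plain find() results expose userId. The model and field names here are illustrative, not taken from the poster's code:

```javascript
const mongoose = require('mongoose');

// Hypothetical schema; the field follows the example documents above.
const userSchema = new mongoose.Schema({
  name: String,
});

// Rename _id to userId whenever a document is serialized to JSON.
userSchema.set('toJSON', {
  versionKey: false, // drop __v from the output
  transform: (doc, ret) => {
    ret.userId = ret._id;
    delete ret._id;
    return ret;
  },
});

const User = mongoose.model('User', userSchema);

// Usage sketch: find() returns full documents; toJSON() (and JSON.stringify)
// applies the transform, yielding e.g. [{ name: 'Arnold', userId: ... }].
async function listUsers() {
  const users = await User.find({});
  return users.map((u) => u.toJSON());
}
```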
null
[ "node-js" ]
[ { "code": "", "text": "how to install mongo db", "username": "alaa_hussain" }, { "code": "", "text": "Helll @alaa_hussain and welcome to the MongoDB community. Are you asking about how to install the MongoDB database or install the mongodb NPM module?", "username": "Doug_Duncan" } ]
How to install mongo db
2022-09-02T17:22:09.677Z
How to install mongo db
1,183
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "req.params.idmongoose.Types.ObjectId()mongoose.Types.ObjectId()node v16.17.0\n\"async\": \"^3.2.4\",\n\"express\": \"^4.16.1\",\n\"mongoose\": \"^6.5.3\",\nconst Genre = require('../models/genre');\nconst Book = require('../models/book');\nconst mongoose = require('mongoose');\nconst async = require('async');\n........................\n// Display detail page for a specific Genre.\nexports.genre_detail = (req, res, next) => {\n const id = mongoose.Types.ObjectId(req.params.id);\n async.parallel(\n {\n genre(callback) {\n Genre.findById(id).exec(callback);\n },\n\n genre_books(callback) {\n Book.find({ genre: id }).exec(callback);\n },\n },\n (err, results) => {\n if (err) {\n return next(err);\n }\n if (results.genre == null) {\n // No results.\n const err = new Error('Genre not found');\n err.status = 404;\n return next(err);\n }\n // Successful, so render\n res.render('genre_detail', {\n title: 'Genre Detail',\n genre: results.genre,\n genre_books: results.genre_books,\n });\n }\n );\n};\nconst mongoose = require('mongoose');\nconst Schema = mongoose.Schema;\n\nconst GenreSchema = new Schema({\n name: { type: String, required: true, minLength: 3, maxLength: 100 },\n});\n\n// Virtual for genre's URL\nGenreSchema.virtual('url').get(() => {\n return `/catalog/genre/` + this._id;\n});\n\n//Export model & Compile model from schema\nmodule.exports = mongoose.model('Genre', GenreSchema);\n// GET request for one Genre.\nrouter.get('/genre/:id', genre_controller.genre_detail);\n\n// GET request for list of all Genre.\nrouter.get('/genres', genre_controller.genre_list);\n", "text": "Following the MDN Web Docs server side Local Library Tutorial, for the Genre Detail Page, (Genre Detail Page, the tutorial mentions a mongoose error coming from the req.params.id.The tutorial recommends that we use mongoose.Types.ObjectId() to convert the id to a type that can be used. I implemented the mongoose.Types.ObjectId() code, but am now getting a different error: “BSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer”.Does anyone know the fix for the BSON Type Error?JohnsonUsing the following dependencies:This is what I have in my genreController.js:This is what I have in my genre.js:And here are my routes for genre:", "username": "Johnson_Elugbadebo" }, { "code": "mongoose.Types.ObjectId()aggregate()find()req.params.idmongoose.Types.ObjectId()console.log(req.params.id)", "text": "Hi,I think that mongoose.Types.ObjectId() is only required for aggregate() query.For find() queries, Mongoose will parse the string internally. Try just pass req.params.id without mongoose.Types.ObjectId().Also, try to console.log(req.params.id) and check what’s the output.", "username": "NeNaD" }, { "code": "/* Virtual for this genre instance URL. */\nGenreSchema\n.virtual('url')\n.get( () => {\n return '/catalog/genre/'+this._id;\n});\n\n/* Virtual for this genre instance URL. */\nGenreSchema\n.virtual('url')\n.get(function () {\n return '/catalog/genre/'+this._id;\n});\n", "text": "Hi,Thanks for following up on this, much appreciated. I have literally just now finally resolved the issue. 
So, to declare the “Virtual” property in the Mongoose Schema, apparently you have to use the notation as explicitly laid out in the Mongoose docs.I was using arrow function notation (which I generally prefer when writing JavaScript) as in the following:The Mongoose docs use the following notation, where “function” is spelled out:Apparently, the code doesn’t work with arrow notation. Lesson learned.Thanks again for following up.Johnson", "username": "Johnson_Elugbadebo" } ]
Mongoose findById method not properly casting id strings to ObjectId
2022-08-30T03:28:41.297Z
Mongoose findById method not properly casting id strings to ObjectId
12,487
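To make the root cause in the thread above concrete, here is a minimal sketch showing the two virtual declarations side by side; only the schema shape comes from the thread, and the broken form is kept as a comment:

```javascript
const mongoose = require('mongoose');

const genreSchema = new mongoose.Schema({
  name: { type: String, required: true, minLength: 3, maxLength: 100 },
});

// Broken: an arrow function has no `this` of its own, so `this._id`
// resolves against the enclosing module scope, not the document,
// and the generated URL is wrong.
// genreSchema.virtual('url').get(() => `/catalog/genre/${this._id}`);

// Works: a regular function is invoked with `this` bound to the document.
genreSchema.virtual('url').get(function () {
  return `/catalog/genre/${this._id}`;
});

module.exports = mongoose.model('Genre', genreSchema);
```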
null
[ "atlas-cluster", "database-tools" ]
[ { "code": "mongoimport --uri mongodb+srv://Jfreundlich01:<PASSWORD>@seir-cluster.sclij.mongodb.net/Optimizer --collection Players --type csv --file \"Seed.csv\"\nno such file or directory: <part of my password>@seir-cluster.sclij.mongodb.net/Optimizer\n", "text": "Hey y’all, trying to import a csv file into my MongoAtlas collection.This is the command I run in terminal:When I click return it goes to a new line in my terminal but nothing happens.After I run that line, if I type mongoimport --help or --version or any of those options, I get:Any help/guidance would be greatly appreciated!Thanks", "username": "Jordan_Freundlich" }, { "code": "", "text": "Are you able to connect to your cluster with your uri connect string?\nHave you removed “<>” in password?\nNo need of enclosing Seed.csv in quotes unless you are giving full path", "username": "Ramachandra_Tummala" }, { "code": ":", "text": "if you are on windows, enclose your URI in quotes. Windows does not like : in your commands.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Yea I can connect to the cluster using the uri connect string. And yea I have my password in there, just didn’t want to put that up… I am on mac but it still wont work without quotes", "username": "Jordan_Freundlich" }, { "code": "mongodb+srv://Jfreundlich01:<password>@seir-cluster.sclij.mongodb.net/?retryWrites=true&w=majority\nmongodb+srv://Jfreundlich01:<password>@seir-cluster.sclij.mongodb.net/optimize\n", "text": "Well to clarify I can connect withBut if i tryThe same issue occurs, goes to new line, but nothing happens.", "username": "Jordan_Freundlich" }, { "code": "", "text": "Try to split your command into multiple lines using back slash", "username": "Ramachandra_Tummala" }, { "code": "", "text": "mind showing what that would look like? Thanks", "username": "Jordan_Freundlich" }, { "code": "", "text": "Just tried it after every mongoimport option, and it still did the same thing.", "username": "Jordan_Freundlich" }, { "code": "*", "text": "Does your password have an * or any other special characters in it by chance? If so you will want to URL encode those characters.", "username": "Doug_Duncan" }, { "code": "", "text": "Hey Doug thanks for chirping in! It does have special characters, but I have been able to connect to the db several times using the same password in projects. mongoimport is the first time it has given me an issue. I’ll check out how to URL encode those characters and give it ago.", "username": "Jordan_Freundlich" }, { "code": "", "text": "OMG that worked! Wow thanks so much. Would have never thought to do that. Any guesses why you have to do that for mongoimport and not other times you connect to database?", "username": "Jordan_Freundlich" }, { "code": "", "text": "It depends on the tool and if it works some magic behind the scenes for you in this regard. I think that all tools would be able to use the URL encoded values without issue.", "username": "Doug_Duncan" }, { "code": "'\"`\\", "text": "When I click return it goes to a new line in my terminal but nothing happens.some characters are used by the shell unless quoted. few of them are very important as they cause a start of a multi-line command mode, single quote ' , double quote \", backtick ` and backslash \\. 
your new line description fits in this type of control character.however, in most situations, simply surrounding the whole URI with double quotes (as it should not have double quote in it anyways) should solve the problem without extra encoding.", "username": "Yilmaz_Durmaz" }, { "code": "mongoimportno such file or directory: <part of my password>@seir-cluster.sclij.mongodb.net/Optimizer\n&&ctrl + c[1]+ Exit 1 mongoimport <any characters up to the &>\n&*\\\\&", "text": "After I had a chance to think about this (was busy during my initial response), the problem is not with mongoimport but with the special character.Since you were getting en error message with just the end portion of your password and cluster name per your original message:I’m assuming the special character was an & and you had everything after that character in the error message.The & character is a special character in Linux which will run the command in the background. If you typed ctrl + c to end that command you should have seen something similar to the following:You could also just escape the & (or any other shell specific special character such as *) by preceding that character with a \\ (e.g. \\&). This will stop the shell from treating that character as special and treating it as the literal value that it is.This would happen on any tool that you typed that password into on the command line (at least in a Linux/Mac based system). I’m assuming most times, you’re typing the password in at a prompt, in which case you don’t need to escape the special character as you’re no longer working with prompt directly.Hopefully this clears up what was going on and that I didn’t muddy the waters trying to explain what was happening.", "username": "Doug_Duncan" }, { "code": "", "text": "however, in most situations, simply surrounding the whole URI with double quotes (as it should not have double quote in it anyways) should solve the problem without extra encoding.This works as well, but I don’t like using quotes which is why I never think to mention it. Muscle memory has gotten me used to escaping special shell characters and I don’t think about quoted strings.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongoimport doing nothing
2022-09-02T14:55:13.579Z
Mongoimport doing nothing
1,282
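A small Node.js sketch of the URL-encoding fix discussed above; the password shown is a made-up placeholder, not the poster's real credential:

```javascript
// URL-encode credentials before splicing them into a connection string.
const user = 'Jfreundlich01';
const password = 'p@ss&word!'; // contains shell/URI special characters

const encoded = encodeURIComponent(password); // -> 'p%40ss%26word!'
const uri = `mongodb+srv://${user}:${encoded}@seir-cluster.sclij.mongodb.net/Optimizer`;

// Safe to pass to: mongoimport --uri "<uri>" (still quote it in the shell)
console.log(uri);
```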
null
[ "queries", "node-js", "mongodb-shell", "app-services-user-auth", "serverless" ]
[ { "code": "", "text": "I’m unable to connect to the Atlas database, using the x.509 certificate generated by Atlas.I use the paid serverless option.I used two ways to connect one with mongosh and one with Node.js and both are not working.This is the error I’m getting:MongoDB connection error MongoServerError: authentication failed. User not foundThe name of username is the same as the one in the certificate.", "username": "Tsu_Sun" }, { "code": "", "text": "Hi @Tsu_Sun - Welcome to the community.I use the paid serverless option.Is the issue you’re experiencing specific to X.509 authentication AND serverless instances? I.e. Have you had any issues with the same authentication to a non-serverless instance?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Tsu_Sun, what steps are you using to connect to your serverless instance? You can easily find the connection string and relevant parameters by going to cloud.mongodb.com and going to your instance and selecting on “Connect”. This will bring you a pop-in window which shows various options to connect to your instance.", "username": "Vishal_Dhiman" } ]
Atlas x.509 managed certificate is not working
2022-08-10T18:49:26.093Z
Atlas x.509 managed certificate is not working
2,436
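For reference, a minimal sketch (with placeholder hostname and certificate path) of how an X.509 connection is typically configured with the Node.js driver. This illustrates the mechanism rather than diagnosing the error above; "User not found" generally indicates the database user defined in Atlas does not exactly match the certificate's subject:

```javascript
const { MongoClient } = require('mongodb');

// Placeholder host and certificate path; the PEM file holds the client
// certificate plus its private key, as downloaded from Atlas.
const client = new MongoClient('mongodb+srv://cluster0.example.mongodb.net/', {
  tls: true,
  tlsCertificateKeyFile: '/path/to/atlas-x509-client.pem',
  authMechanism: 'MONGODB-X509',
});

async function run() {
  try {
    await client.connect();
    // Prints the authenticated user as the server sees it, handy for
    // comparing against the certificate subject.
    console.log(await client.db('admin').command({ connectionStatus: 1 }));
  } finally {
    await client.close();
  }
}
run().catch(console.error);
```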
null
[ "atlas-functions", "atlas-triggers" ]
[ { "code": "", "text": "I inserted trigger date using atlas trigger when document insert or update operations. But trigger date field updated every time. Because that trigger method running infinite. How I avoiding it?", "username": "Salitha_Shamood" }, { "code": "{ updateDescription.updatedFields.fromTriggerCount`: {$exists: false}}\n", "text": "I am not sure I follow your question. My interpretation is that you have a trigger on a collection where you then update that same collection which then causes the trigger to fire again.One way to avoid this is to tag the “trigger’s update” with some information that you can use the MatchExpression to filter it out. IE, you can add a fromTriggerCount field to the document and have the trigger function do a $inc on that field. Then your match expression for the trigger can include something like:", "username": "Tyler_Kaye" } ]
How to stop iteration of trigger when update document?
2022-09-02T11:58:11.604Z
How to stop iteration of trigger when update document?
2,084
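A sketch of the counter approach described in the thread above, written as an Atlas trigger function; aside from fromTriggerCount, the service, database, collection, and field names are placeholders:

```javascript
exports = async function (changeEvent) {
  // Service/database/collection names are placeholders.
  const coll = context.services
    .get('mongodb-atlas')
    .db('myDb')
    .collection('myColl');

  await coll.updateOne(
    { _id: changeEvent.documentKey._id },
    {
      $set: { triggerDate: new Date() },
      $inc: { fromTriggerCount: 1 }, // tags this write as trigger-made
    }
  );
};

// Match expression for the trigger configuration, so the writes that carry
// the counter bump do not re-fire the trigger:
// { "updateDescription.updatedFields.fromTriggerCount": { "$exists": false } }
```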
null
[ "aggregation", "node-js" ]
[ { "code": " {$project: {$cond: [ { $weekday: true }, then: current_day: 1, total_work_time_hours_base10_rounded: 1, total_worktime_seconds: 1, \n weekday: 1,\n weekend_day: 1, else: current_day: 1, total_work_time_hours_base10_rounded: 0, total_worktime_seconds: 0, \n weekday: 0,\n weekend_day: 0,] }\n }\n", "text": "Is it possible to project based upon $cond?Displaying only certain fields in a $project stage?Cheers,\nDaniel", "username": "Daniel_Stege_Lindsjo" }, { "code": "$$REMOVEdb.foo.drop();\ndb.foo.insertMany([\n { current_day: 1 },\n { current_day: 2 },\n { current_day: 3 },\n { current_day: 4 },\n { current_day: 5 },\n { current_day: 6 },\n { current_day: 7 }\n])\ndb.foo.aggregate([\n{\n $project: {\n _id: 0, \n current_day: 1,\n weekday: { $and: [{ $gte: [\"$current_day\", 1] }, { $lte: [\"$current_day\", 5] } ] },\n weekend_day: { $or: [{ $eq: [\"$current_day\", 6] }, { $eq: [\"$current_day\", 7] } ] }\n }},\n { $project: {\n weekday: { $cond: { if: \"$weekday\", then: \"$weekday\", else: \"$$REMOVE\" } },\n weekend_day: { $cond: { if: \"$weekend_day\", then: \"$weekend_day\", else: \"$$REMOVE\" } }\n }}\n]);\n// output\n[\n {\n \"weekday\": true\n },\n {\n \"weekday\": true\n },\n {\n \"weekday\": true\n },\n {\n \"weekday\": true\n },\n {\n \"weekday\": true\n },\n {\n \"weekend_day\": true\n },\n {\n \"weekend_day\": true\n }\n]\n", "text": "@Daniel_Stege_Lindsjo,Conditional projection can be done using the $$REMOVE variable.For example:", "username": "alexbevi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it possible to $project based upon $cond?
2022-09-02T14:17:38.564Z
Is it possible to $project based upon $cond?
1,116
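For completeness, the two $project stages in the accepted answer can also be folded into one, computing each flag and discarding it with $$REMOVE in the same expression (same sample collection as above, keeping current_day for readability):

```javascript
db.foo.aggregate([
  {
    $project: {
      _id: 0,
      current_day: 1,
      // present (true) only for days 1..5, otherwise removed from the output
      weekday: {
        $cond: [
          { $and: [{ $gte: ['$current_day', 1] }, { $lte: ['$current_day', 5] }] },
          true,
          '$$REMOVE',
        ],
      },
      // present (true) only for days 6..7
      weekend_day: {
        $cond: [{ $in: ['$current_day', [6, 7]] }, true, '$$REMOVE'],
      },
    },
  },
]);
```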
null
[ "aggregation", "node-js" ]
[ { "code": "", "text": "I have the following $project code:current_day: {$isoDayOfWeek: “$time_stamp_sign_on_ISODate”}{$project: {current_day: 1, total_work_time_hours_base10_rounded: 1, total_worktime_seconds: 1, weekday: { $or: [ { $gte: [ “$current_day”, 1 ] },\n{ $lte: [ “$current_day”, 5 ] } ] }, weekend_day: { $or: [ { $eq: [ “$current_day”, 6 ] },{ $eq: [ “$current_day”, 7 ] } ] }}}The code should classify current_day as a weekend_day when the $isoDayOfWeek value is either 6 or 7. The current_day should be classified as a weekday if the $isoDayOfWeek value is between 1 and 5.The query is able to print the correct $isoDayOfWeek value, but the logic implemented with - $gte, $lte, $or - fails.Do you have any ideas about what could be wrong?Cheers,\nDaniel", "username": "Daniel_Stege_Lindsjo" }, { "code": "weekday$or$anddb.foo.drop();\ndb.foo.insertMany([\n { current_day: 1 },\n { current_day: 2 },\n { current_day: 3 },\n { current_day: 4 },\n { current_day: 5 },\n { current_day: 6 },\n { current_day: 7 }\n])\ndb.foo.aggregate([\n{\n $project: {\n _id: 0, current_day: 1,\n weekday: { $and: [{ $gte: [\"$current_day\", 1] }, { $lte: [\"$current_day\", 5] } ] },\n weekend_day: { $or: [{ $eq: [\"$current_day\", 6] }, { $eq: [\"$current_day\", 7] } ] }\n }\n}]);\n// output\n[\n {\n \"current_day\": 1,\n \"weekday\": true,\n \"weekend_day\": false\n },\n {\n \"current_day\": 2,\n \"weekday\": true,\n \"weekend_day\": false\n },\n {\n \"current_day\": 3,\n \"weekday\": true,\n \"weekend_day\": false\n },\n {\n \"current_day\": 4,\n \"weekday\": true,\n \"weekend_day\": false\n },\n {\n \"current_day\": 5,\n \"weekday\": true,\n \"weekend_day\": false\n },\n {\n \"current_day\": 6,\n \"weekday\": false,\n \"weekend_day\": true\n },\n {\n \"current_day\": 7,\n \"weekday\": false,\n \"weekend_day\": true\n }\n]\nweekday$or>= 1 OR <= 5>= 1", "text": "The query is able to print the correct $isoDayOfWeek value, but the logic implemented with - $gte, $lte, $or - fails.Do you have any ideas about what could be wrong?The weekday criteria needs to be an inclusive range, so changing the $or to an $and should produce the desired result.For example:By having the weekday filter include an $or you’re matching ANY value in the range >= 1 OR <= 5. 
Since 6 and 7 are >= 1 you’d be matching EVERY possible value greater than 1.", "username": "alexbevi" }, { "code": "", "text": "/// This works with all fields added!\ndb.WorkTimeData.aggregate([{$match : { “time_stamp_sign_on_ISODate”: { $gte: new Date(“2020-01-01:00:00:00”), $lte: new Date(“2022-12-31:23:59:59”) } }},{$group: {\n_id: { id: “$_id”,\ncurrent_year: {$year: “$time_stamp_sign_on_ISODate”},\ncurrent_week: {$week: “$time_stamp_sign_on_ISODate” },\ncurrent_day: {$isoDayOfWeek: “$time_stamp_sign_on_ISODate”}\n},\ntotal_worktime_seconds: {$sum: “$work_time_seconds”},\n}}, {$sort: {_id: 1}},\n{$project: {total_work_time_hours_base10: {$divide: [\"$total_worktime_seconds\", 3600] }, total_worktime_seconds: 1,}},\n{$project: {total_work_time_hours_base10_rounded: {$round: [\"$total_work_time_hours_base10\", 2] }, total_worktime_seconds: 1,}},{$project: {current_day: 1, total_work_time_hours_base10_rounded: 1, total_worktime_seconds: 1,\nweekday: { $and: [{ $gte: [\"$_id.current_day\", 1] }, { $lte: [\"$_id.current_day\", 5] } ] },\nweekend_day: { $or: [{ $eq: [\"$_id.current_day\", 6] }, { $eq: [\"$_id.current_day\", 7] } ] }\n}\n}\n])", "username": "Daniel_Stege_Lindsjo" }, { "code": "", "text": "Glad I could help get this sorted out for you ", "username": "alexbevi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Issues using $isoDayOfWeek with logical operators: $gte, $or, $lte
2022-09-02T11:46:40.994Z
Issues using $isoDayOfWeek with logical operators: $gte, $or, $lte
1,730
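A small optional refinement to the accepted answer above: the aggregation $in operator expresses the same day classification without chained comparisons (same sample collection as in the answer):

```javascript
db.foo.aggregate([
  {
    $project: {
      _id: 0,
      current_day: 1,
      // true when the ISO day number falls in the listed set
      weekday: { $in: ['$current_day', [1, 2, 3, 4, 5]] },
      weekend_day: { $in: ['$current_day', [6, 7]] },
    },
  },
]);
```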
null
[]
[ { "code": "majoritylocallocalmajoritymajority{w:0}local", "text": "I’m posting in order to get some feedback on a general approach to selecting read/write concerns, in particular when isolated script executions don’t know they’re part of a wider “session”.I’ve been working for years with MongoDB, but have always had a hard time with read/write concerns. In particular how to set the appropriate concerns when the different parts of my application don’t necessarily know the full context of what has recently happened. The concept of a consistent session is easy enough to model in a single script execution, but introduce (for example) stateless API calls in quick succession and the concept of a “session” starts to span operations that are actually related, but have no knowledge of each other.Perhaps the safe thing to do would be to read and write majority for all operations, but some of my operations insert many thousands of documents and I want them done quickly. What I’ve ended up with is my application switching to unacknowledged writes when speed is required, then switching reads to local in case the next operation in the same script needs to read it back. It doesn’t really matter if unacknowledged writes get rolled back, but what does matter is that the next script execution has no context of the previous and so can’t choose the best read concern. To get around that I end up reading from the primary with local in most cases, which means no benefit from secondaries!I want to sort this mess out in a simple fashion, while ensuring that:One thing my application does know is how recently its state was modified. This is due to tracking all updates with a timestamp and incrementing number (in separate storage). So based on this I’m considering that my application could have two operating modes (Safe, and Fast) and switch between them based on recent activity.Safe: Write majority, Read majority from Secondary.\nUsed when there have been no recent updates and fast writes aren’t necessary.Fast: Write {w:0} Read local from Primary.\nUsed when write speed is required, or when there have been recent updates (which may still be running).What do you think? How have other people solved this problem? Feedback much appreciated.", "username": "timw" }, { "code": "", "text": "Hi @timw and welcome to the community!!If I understand correctly, you have an application that basically needs to do two things that appear to be in contradiction: one mode of working is fast writes, read the most recent data, don’t care if it’s get rolled back, and the other mode is consistency. Typically an application is one or the other (but you wanted both), so I’m not sure I fully understand the requirements. Could you perhaps post some scenario that the application handles that would help illustrate the day to day operation of the application?Also, to be more specific, could you please elaborate on a couple of points:It doesn’t really matter if unacknowledged writes get rolled back, but what does matter is that the next script execution has no context of the previous and so can’t choose the best read concern.This feels contradictory to me: since the writes are unacknowledged, there is no guarantee that the write even happened. Also what if the “next script execution” reads data that are rolled back? 
Is there a scenario that you can post so we can understand this better?This is due to tracking all updates with a timestamp and incrementing number (in separate storage).This follows the previous point: if the writes are unacknowledged and can be rolled back, why do you need to track these writes (that can disappear) on a separate storage?To cater for different application needs, MongoDB provides various combinations of read and write concerns settings depending on the level of consistency and availability required. However this requirement is typically consistent application-wide (i.e. consistency, or availability), so I might misunderstand your use case that appears to call for both requirements in a single application.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Thanks for the reply.The idea that an application has one set of read/write concerns for all operations makes perfect sense. I guess the short answer to my very broad question is “if you care about state at all then go for consistency across the board”. The presence of any unacknowledged writes anywhere in an application potentially breaks this model.I feel I’m trying to justify my requirements now, but I’ll try to clarify some points you highlighted…Re this contradiction: Some functions of my application can write tens of thousands of documents while many others can write a maximum of one. Choosing speed in all cases would increase risk in my application for the many operations that don’t need it. Choosing consistency in all cases would make the largest operations much slower. If I could choose only one approach I would choose consistency, but selecting the right trade-off for the right situation seemed desirable (when I built the system 10 years ago!). It didn’t seem strange to me that I’d want to benefit from MongoDB’s write speed in select contexts, but maintain a stateful application in general.When I say that failures “don’t matter”. I mean that they are very rare and any inconsistencies created by rollbacks are corrected soon afterwards. Hence accepting some risk in exchange for a lot of speed seemed reasonable, but only when necessary. I didn’t mean to say that I care about consistency sometimes, but not at other times. All operations are equal in this regard, but very occasional failure is tolerable.I track updates mainly for the purpose of cache invalidation. Clients requesting unmodified data will get 304 responses (and this saves me a lot of juice). I figured the same mechanism could be used to put the application into a secondary read mode. The worst case here (in the event of failed writes) would be the client getting a 200 response with discarded data from the primary. This is actually the default mode of the current system, so I thought a safe incremental improvement.I don’t mean to answer my own question, rather I’m curious if others have been through the same though process of choosing between consistency and speed at different times, and how they managed the potential consistency problems this creates.", "username": "timw" } ]
How to choose read/write concerns from your application?
2022-08-22T18:01:03.295Z
How to choose read/write concerns from your application?
1,271
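As a concrete illustration of the per-operation trade-off discussed in this thread, a hedged Node.js driver sketch that mixes a fire-and-forget bulk write with a majority write and a majority read served from a secondary. All names and the URI are placeholders, and this is one possible arrangement rather than a recommendation:

```javascript
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb+srv://user:pass@cluster.example.net'); // placeholder

async function demo() {
  const db = client.db('app'); // hypothetical database and collections

  // "Fast" path: unacknowledged bulk insert, maximizing throughput.
  const events = Array.from({ length: 10000 }, (_, i) => ({ n: i }));
  await db.collection('events').insertMany(events, { writeConcern: { w: 0 } });

  // "Safe" path: a majority-acknowledged write...
  await db.collection('orders').insertOne({ total: 42 }, { writeConcern: { w: 'majority' } });

  // ...paired with a majority read that may be served by a secondary.
  return db
    .collection('orders', {
      readConcern: { level: 'majority' },
      readPreference: 'secondaryPreferred',
    })
    .find({})
    .toArray();
}

demo().then(console.log).finally(() => client.close());
```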
null
[ "queries" ]
[ { "code": "db.getCollection('Test').find({\n $expr: { \"$eq\": [ { \"$arrayElemAt\": [ \"$CLOSING.STATUS\", -1 ] }, \"WRITABLE\" ]}\n})\ndb.getCollection('Test').find({\n $or: [ \n { CLOSING: null },\n { $expr: { \"$eq\": [ { \"$arrayElemAt\": [ \"$CLOSING.STATUS\", -1 ] }, \"WRITABLE\" ]} }\n ]\n})\nError: error: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"$arrayElemAt's first argument must be an array, but is string\",\n\t\"code\" : 28689,\n\t\"codeName\" : \"Location28689\"\n}\n", "text": "Hi,I want to get the records where the last value of my object is an array and in this array, i check “STATUS” attribute (string value).I find this query :This query is ok but i want put this into $orHowever, it’s not successfull. MongoDB return this :Can you help me ?Thanks in advance.", "username": "Florian_CHENE" }, { "code": "{\n \"_id\": {\n \"$oid\": \"6310a1622786520682ce36e8\"\n },\n \"CLOSING\": [\n {\n \"STATUS\": \"WRITABLE\"\n }\n ]\n}\n", "text": "Hi @Florian_CHENE ,For me the queries work with the following document:Maybe some of the documents you have does not have “CLOSING” as array and then obviously elementArrayAt fails.Ty", "username": "Pavel_Duchovny" }, { "code": "{\n \"UID\" : \"377D3DE711704D03BE0C3008F6641A8D\",\n \"CREATIONTIMESTAMP\" : ISODate(\"2022-09-01T12:14:00.249Z\"),\n \"USER\" : {\n \"UID\" : \"9121B944EFA7471DB313C88A8526034B\",\n \"IDENTIFICATION\" : {\n \"DENOMINATION1\" : \"CHENE\",\n \"DENOMINATION2\" : \"Florian\"\n }\n },\n \"STATUS\" : \"WRITABLE\"\n }\n", "text": "This is CLOSING example :i’m agree with you but without $or my queries is ok. How can i fix my problem ?", "username": "Florian_CHENE" }, { "code": "CLOSINGSTATUSdb.test.insertOne( { \"CLOSING\": { \"STATUS\": \"WRITABLE\" } } )", "text": "Is it possible that sometimes CLOSING is a subdocument with a STATUS field? I can get the error mentioned if I run db.test.insertOne( { \"CLOSING\": { \"STATUS\": \"WRITABLE\" } } )\nimage1401×469 62 KB\n", "username": "Doug_Duncan" }, { "code": "", "text": "I changed my attribute “CLOSING” by “CLOSED” and the queries work because i have another records with CLOSING attribute (object type) but i don’t understand why my querie work without $or. Can you explain me ?Thanks for your answer.", "username": "Florian_CHENE" }, { "code": "", "text": "Queries are not executed directly, but the query planner first makes optimizations on them.I think your first query is optimized so that the documents without an array in “CLOSING.STATUS” are filtered. but when you use “CLOSING:null” it gives up that optimization and uses all documents whether STATUS is an array or not, causing this error.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "Okay, i understand your answer.Thanks all", "username": "Florian_CHENE" } ]
Error in $or and $arrayElemAt
2022-09-01T11:55:13.473Z
Error in $or and $arrayElemAt
2,660
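One more defensive variant for the thread above, assuming the mixed data stays as-is: wrapping $arrayElemAt in a $cond guarded by $isArray means the array branch is only evaluated for documents where CLOSING really is an array, so the type error cannot occur:

```javascript
db.getCollection('Test').find({
  $or: [
    { CLOSING: null }, // missing or null CLOSING
    {
      $expr: {
        $eq: [
          {
            // documents where CLOSING is a plain subdocument (or any
            // non-array) fall through to null and simply fail the $eq
            $cond: [
              { $isArray: '$CLOSING' },
              { $arrayElemAt: ['$CLOSING.STATUS', -1] },
              null,
            ],
          },
          'WRITABLE',
        ],
      },
    },
  ],
});
```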
https://www.mongodb.com/…504ab5c257a.jpeg
[ "lebanon-mug" ]
[ { "code": "Community Manager, Mobile at MongoDB", "text": "The MongoDB User Group in Lebanon is pleased to invite you to the its online workshop: From MongoDB to Mobile. Join us in the 10th of September at 7pm (GMT+3) with @henna.s , the Mobile community manager at MongoDB to master this in demand technologies togetherIn the workshop, you will understandIntro to MongoDB Developer Data PlatformRealm Database on mobileSchema Generation based on your collections in AtlasSync Development Types with focus on flexible sync for a restaurant appOffline access to your dataBy the end of the talk, you will have a Restaurant App shared among your family to decide on a restaurant and meet there for lunch or dinner.2022-09-10T16:00:00Z→2022-09-10T17:00:00ZCommunity Manager, Mobile at MongoDBLooking forward to seeing you all !!", "username": "eliehannouch" }, { "code": "", "text": "amazing ,cant wait!!!", "username": "show_maker" } ]
Lebanon MUG: From MongoDB to Mobile workshop
2022-09-02T07:05:04.506Z
Lebanon MUG: From MongoDB to Mobile workshop
4,377
null
[]
[ { "code": "", "text": "coffeeBean_collection\nname, brand, roast profile and price\nThe brew options are, type of coffee beans, grinding settings\nfrom 1 (coarse) to 7 (fine), litres of water (i.e. 0.5, 1.1, 2.2 litres),\nand grams of coffee", "username": "Mohammed_Mokhtar" }, { "code": "{\nname : ... ,\nbrand: ... ,\nroastProfile : { ... } ,\nprice : ... ,\nbrewOptions : { \n beansType : ... ,\n grindingSettings : ... ,\n water : { amount : ... , unit : \"liters\" } ,\n grams: ...\n }\n}\n", "text": "Hi @Mohammed_Mokhtar ,Not sure what do you specifically mean? Do you need a schema design to hold this information?What are the queries you will perform and what is the purpose of the data.Based on the provided information it could be a document like the following (but maybe completely different considering the needed use case):Thanks\nPavel}", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny\nThe queries will be CRUD. it will be nice if you can help me with schema design also.\nThanks a lot\nMohammed", "username": "Mohammed_Mokhtar" }, { "code": "", "text": "@Pavel_Duchovny Should the roastProfile be an object?\nNot like this{\nname : … ,\nbrand: … ,\nroastProfile : … ,\nprice : … ,\nbrewOptions : {\nbeansType : … ,\ngrindingSettings : … ,\nwater : { amount : … , unit : “liters” } ,\ngrams: …\n}\n}", "username": "Mohammed_Mokhtar" }, { "code": "", "text": "Hi @Mohammed_Mokhtar ,CRUD is a super generic description. To help you with specific schema consideration you should provide more specific access patterns…Like would you seek for a specific coffee Bean machine or brand and what would the application pages or consumer search for a wider range of documents to present?What and how often be updated , tracked etc…The roastProfile can be anything you need : object, array of objects, just a string … Depending on type of information and its relationship, quantity with a coffee bean document…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,It will be for variety of coffee available. Each document will be for different brand available in the market.Thanks\nMohammed", "username": "Mohammed_Mokhtar" }, { "code": "", "text": "Hi @Mohammed_Mokhtar ,So I would start by a document for an item with all of the data for that item embeeded in the document like I mentioned…Thanks", "username": "Pavel_Duchovny" }, { "code": "", "text": "It will not updated that much", "username": "Mohammed_Mokhtar" }, { "code": "", "text": "Thanks @Pavel_Duchovny\nCan you help with schema", "username": "Mohammed_Mokhtar" }, { "code": "{\n_id : ... ,\nname : ... ,\nbrand: ... ,\nroastProfile : { ... } ,\nprice : ... ,\nbrewOptions : { \n beansType : ... ,\n grindingSettings : ... ,\n water : { amount : ... 
, unit : \"liters\" } ,\n grams: ...\n }\n}\n{ name : 1, price :1}{brand :1 , price : 1}", "text": "@Mohammed_Mokhtar ,With the minimal information you provided it sounds like a document per coffee product like I presented is a way\nTo start:Now if any of the values needs to be a list of limited amount of values (under 500 per doc) yiu can consider storing them in an array…I assume you will search based on a brand or name of product and perhaps sort by price , therefore I would index :\n{ name : 1, price :1} and {brand :1 , price : 1}Fior further guidance I would suggest to do one of our online courses for schema design and read:Have you ever wondered, \"How do I model a MongoDB database schema for my application?\" This post answers all your questions!Get a summary of the six MongoDB Schema Design Anti-Patterns. Plus, learn how MongoDB Atlas can help you spot the anti-patterns in your databases.A summary of all the patterns we've looked at in this seriesYou can also listen to a podcast I done recentlyListen to this episode from FedBites on Spotify. First podcast guest - Pavel Duchovny, Lead Developer Advocate at MongoDB. On this special episode Yoav and Roman talk with Pavel about what are the best practices and ways to use MongoDB. They discuss...Thanks", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks a lot @Pavel_Duchovny", "username": "Mohammed_Mokhtar" } ]
How should the document be structured?
2022-03-20T09:28:19.949Z
How should the document be structured?
2,398
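A runnable mongosh sketch of the document shape and indexes suggested in the thread above; all values are invented for illustration:

```javascript
db.coffeeBean_collection.insertOne({
  name: 'Morning Roast',
  brand: 'Acme Beans',
  roastProfile: { level: 'medium', notes: ['chocolate', 'citrus'] },
  price: 12.5,
  brewOptions: {
    beansType: 'arabica',
    grindingSettings: 4, // 1 (coarse) .. 7 (fine)
    water: { amount: 1.1, unit: 'liters' },
    grams: 18,
  },
});

// Indexes matching the expected lookups: by name or brand, sorted by price.
db.coffeeBean_collection.createIndex({ name: 1, price: 1 });
db.coffeeBean_collection.createIndex({ brand: 1, price: 1 });
```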
null
[ "serverless" ]
[ { "code": "", "text": "I haven’t found any literature on the performance of a serverless cluster. Is there any rough equivalent to one of the M-tiers?", "username": "Jared_Lindsay1" }, { "code": "", "text": "Hi @Jared_Lindsay1I don’t believe there is an easy comparison between serverless instances and regular instances. They are catering for two different use cases.Please have a look at the post Frequently Asked Questions - Atlas Serverless Instances for a more detailed answer.Best regards\nKevin", "username": "kevinadi" } ]
Serverless — Equivalent performance?
2022-09-02T00:22:47.231Z
Serverless — Equivalent performance?
2,520
null
[ "aggregation", "queries", "node-js", "data-modeling", "java" ]
[ { "code": "{\n \"customer\": \"62f75f6204a24bb48edae723\",\n \"product\": \"62cd46a3b325452b3efc6dd3\",\n \"downPayment\": 140,\n \"planOfInstallment\": 12,\n \"moneyRequiredToPay\": 629,\n \"contractInitiated\": false,\n \"contractStatus\": \"Normal\",\n \"moneyRecieved\": 0,\n \"investor\": [\n {\n \"investorDetail\": \"62f7542289326e783ae7feba\",\n \"money\": 200,\n \"date\": \"2022-08-13T09:38:33.476Z\",\n \"_id\": \"62f7711d932b45b68c7ae813\"\n },\n {\n \"investorDetail\": \"62f7542289326e783ae7feba\",\n \"money\": 170,\n \"date\": \"2022-08-13T09:38:33.476Z\",\n \"_id\": \"62f7711d932b45b68c7ae814\"\n }\n ],\n \"createdDate\": \"2022-08-13T09:38:33.476Z\",\n \"_id\": \"62f7711d932b45b68c7ae812\",\n \"paymentschedule\": [\n {\n \"monthName\": \"September\",\n \"dateToPay\": \"2022-09-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae815\"\n },\n {\n \"monthName\": \"October\",\n \"dateToPay\": \"2022-10-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae816\"\n },\n {\n \"monthName\": \"November\",\n \"dateToPay\": \"2022-11-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae817\"\n },\n {\n \"monthName\": \"December\",\n \"dateToPay\": \"2022-12-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae818\"\n },\n {\n \"monthName\": \"January\",\n \"dateToPay\": \"2023-01-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae819\"\n },\n {\n \"monthName\": \"February\",\n \"dateToPay\": \"2023-02-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae81a\"\n },\n {\n \"monthName\": \"March\",\n \"dateToPay\": \"2023-03-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae81b\"\n },\n {\n \"monthName\": \"April\",\n \"dateToPay\": \"2023-04-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae81c\"\n },\n {\n \"monthName\": \"May\",\n \"dateToPay\": \"2023-05-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae81d\"\n },\n {\n \"monthName\": \"June\",\n \"dateToPay\": \"2023-06-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae81e\"\n },\n {\n \"monthName\": \"July\",\n \"dateToPay\": \"2023-07-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae81f\"\n },\n {\n \"monthName\": \"August\",\n \"dateToPay\": \"2023-08-25T21:00:00.000Z\",\n \"paid\": false,\n \"payment\": 52.416666666666664,\n \"paymentRecieveDate\": null,\n \"_id\": \"62f7711d932b45b68c7ae820\"\n }\n ],\n \"documentContract\": [],\n \"__v\": 0,\n \"id\": \"62f7711d932b45b68c7ae812\"\n}\n", "text": "i have one query about the mongodb\nthere is one issue i want to change the field inside the collection of array of object i want to change it .\ni want to change the paid to true 
if in the body req.body._id of is match to inside the object of paymentschedule _id.", "username": "arbabmuhammad_ramzan" }, { "code": "$[<identifier>]", "text": "Hi @arbabmuhammad_ramzan,Have you checked out or considered using the filtered positional operator $[<identifier>]? It’s possible this may help you in updating the nested objects within the array fields of the sample document you provided.If you require further assistance, please provide the following:if in the body req.body._id of is match to inside the object of paymentschedule _id.Regards,\nJason", "username": "Jason_Tran" }, { "code": "const payMonthlyInstallment = async (req, res) => {\n const contractDetails = await Contract.findOneAndUpdate(\n \n { _id: ObjectId(req.params.id),\"paymentschedule._id\": ObjectId(req.body.month) },\n {\n $set: { \"paymentschedule.$.paid\": true,\"paymentschedule.$.paymentRecieveDate\": new Date()},\n }, \n ).populate(\"customer\").populate({\n path : 'product',\n populate : {\n path : 'category'\n }\n });\n contractDetails.investor.map(async item => {\n let monthlyInstallment;\n switch(contractDetails?.product?.category?.profit){\n case 70: monthlyInstallment = ((((((contractDetails?.product?.price - contractDetails?.downPayment) * 25)/100) + (contractDetails?.product?.price - contractDetails?.downPayment)) / contractDetails?.planOfInstallment) ); break;\n case 30: monthlyInstallment = ((((((contractDetails?.product?.price - contractDetails?.downPayment) * 15)/100) + (contractDetails?.product?.price - contractDetails?.downPayment)) / contractDetails?.planOfInstallment) ); break;\n case 15: monthlyInstallment = ((((((contractDetails?.product?.price - contractDetails?.downPayment) * 5)/100) + (contractDetails?.product?.price - contractDetails?.downPayment)) / contractDetails?.planOfInstallment) ); break;\n default: monthlyInstallment = ((((((contractDetails?.product?.price - contractDetails?.downPayment) * 5)/100) + (contractDetails?.product?.price - contractDetails?.downPayment)) / contractDetails?.planOfInstallment) ) ; break;\n }\n \n const investorPercentage = ((item?.money/(contractDetails?.product?.price - contractDetails?.downPayment)) * 100)\n const amount = monthlyInstallment * investorPercentage / 100\n const updateContract = await Contract.findOneAndUpdate(\n \n { _id: ObjectId(req.params.id),\"investor._id\": ObjectId(item?._id) },\n {\n $set: {\n \"investor.$.moneyRecieved\": {$add: [ \"investor.$.moneyRecieved\", amount ]},\n }\n }\n )\n console.log(\"contractDetails\",updateContract)\n\n })\n res.send(contractDetails)\n}\n", "text": "Thanks a-lot for explaining I am stuck now in one place in the investors array of object its receivedMoney is not updates what api i have implemented let me share but its not working properly\nThe paymentSchedule is working but the investor field is not updating.", "username": "arbabmuhammad_ramzan" }, { "code": "mongosh const updateContract = await Contract.findOneAndUpdate(\n \n { _id: ObjectId(req.params.id),\"investor._id\": ObjectId(item?._id) },\n {\n $set: {\n \"investor.$.moneyRecieved\": {$add: [ \"investor.$.moneyRecieved\", amount ]},\n }\n }\n )\n\"_id\"\"investor._id\"amount\"moneyReceived\"\"investor\"\"investor._id\"findOneAndUpdate(){\n...\n\"investor\": [\n {\n \"investorDetail\": \"62f7542289326e783ae7feba\",\n \"money\": 200,\n \"date\": \"2022-08-13T09:38:33.476Z\",\n \"_id\": \"62f7711d932b45b68c7ae813\" /// <--- String\n },\n {\n \"investorDetail\": \"62f7542289326e783ae7feba\",\n \"money\": 170,\n \"date\": 
\"2022-08-13T09:38:33.476Z\",\n \"_id\": \"62f7711d932b45b68c7ae814\" /// <--- String\n }\n ],\n...\n}\n", "text": "Hi @arbabmuhammad_ramzan,For the code portion of your response, I’m not too familiar with Javascript so I will try any testing in mongosh .The paymentSchedule is working but the investor field is not updating.Can you advise which portion of the code you’re having issues with? My assumption is that it is this portion based off your comment but please confirm.Additionally, using the document you posted in the initial post, what are you expecting the output to be? If you could provide static values for the \"_id\", \"investor._id\", and amount in relations to the sample document you provided initially that would help greatly.Lastly, please advise if you’re receiving any errors or unexpected behaviour regarding the investor field update. E.g. is the \"moneyReceived\" field within the \"investor\" array objects not being created or being created with the incorrect value or even in the incorrect object within the array etc.One thing I also noted in the above code is that the \"investor._id\" in the query section of the findOneAndUpdate() refers to an ObjectId() value where as the sample document provided has the value in these fields as string. Is this expected? i.e.:Regards,\nJason", "username": "Jason_Tran" } ]
I want to edit a field inside an array of objects in a collection
2022-08-13T13:39:18.496Z
I want to edit a field inside an array of objects in a collection
2,618
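One hedged observation on the last code sample in this thread: $set stores its right-hand side literally, so the embedded {$add: [...]} is saved as a document rather than evaluated (evaluating aggregation expressions requires the pipeline form of update). If the goal is simply to add amount to the matched investor's total, $inc avoids the issue entirely. contractId, investorId, and amount are placeholders, and "moneyRecieved" keeps the spelling used throughout the thread:

```javascript
// Sketch: increment the matched array element's total in one operation.
async function creditInvestor(contractId, investorId, amount) {
  return Contract.findOneAndUpdate(
    { _id: contractId, 'investor._id': investorId },
    { $inc: { 'investor.$.moneyRecieved': amount } }, // creates the field at `amount` if absent
    { new: true } // Mongoose option: return the post-update document
  );
}
```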
https://www.mongodb.com/…6_2_1024x175.png
[ "server", "installation" ]
[ { "code": "", "text": "Hello,\nI just installed mongodb community edition, and I would like to get your help regarding the file mongod.exe. I tried to start the mongodb database by going to C:\\program files\\mongodb\\server\\5.0\\bin to execute the mongod.exe . But there is no mongod.exe file there.\n\nCapture1062×182 6.4 KB\nI am trying to figure out why mongod.exe is not there , could you help?", "username": "Dung_Tran1" }, { "code": "", "text": "What type of installation you did?\nSame issue reported by another user on Windows\nMay be you did not choose all the required components like server,service etc", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I am having the same issue. Did you get this resolved? If so, what did you do? I am think of installing mongod.exe or uninstalling and downloading again. I hope I am on the right track here.", "username": "James_Justis" }, { "code": "", "text": "Hi,\nI am having the same issue, may I know if anyone know how to solve it?\nthank you", "username": "Amani_Rosman" }, { "code": "", "text": "Hi @Amani_Rosman and welcome to the MongoDB community forums. Can you provide the following information?", "username": "Doug_Duncan" }, { "code": "", "text": "Hi Sir, I am installing mongodb in windows. I have install it according to few YouTube video.\nHere is a screenshot of command prompt, services , system environment variable and MongoDB Compass.\nI notice that there’s no .exe file in bin.Thanl you for your time.\nMongo1280×720 277 KB\n", "username": "Amani_Rosman" }, { "code": "mongo.exemongoshmongoshmongomongoshmongo", "text": "The mongo.exe tool is no longer being distributed as it’s been superseded by the newer mongosh tool. I would have thought the installer would have installed the tool, but if not, you can always download it separately.mongosh has almost all of the functionality that the older mongo tool had. Most users won’t even know that there are some lesser used functionality. You should be able to use mongosh anywhere you see mongo in documentation/blogs/videos.", "username": "Doug_Duncan" } ]
Can not find mongod.exe
2022-01-12T17:13:42.053Z
Can not find mongod.exe
8,506
https://www.mongodb.com/…_2_1024x568.jpeg
[ "atlas-search", "atlas", "serverless", "singapore-mug" ]
[ { "code": "Senior Solutions Architect, MongoDB", "text": "\nMUG-SG1920×1065 110 KB\n\nSingapore vector created by freepikSingapore, MongoDB User Group is excited to launch and announce their first meetup in collaboration with Google Developers Space on Sept 1st.The event will include two sessions, a demo based See in Action session, that will provide you with a quick introduction to MongoDB and MongoDB Atlas. It will be followed up by a demo of getting your own MongoDB Atlas cluster on Google Cloud that is free forever.The next session would be a Do and Learn session, in which we’re going have you all add a search capability to your app with MongoDB Atlas Search. We will demo how can you build your own movie finder app with MongoDB. The demo will cover MongoDB’s document data storage model, aggregation pipeline, and search capabilities that power the application.We will also have fun Networking Time to meet some of the developers, customers, architects, and experts in the region. Not to forget there will also be, Trivia Swags , and Dinner. If you are a beginner or have some experience with MongoDB already, there is something for all of you!*Registration open at 7:00 PM SGTEvent Type: In-Person\n Location: Google Developer Space, Singapore.\n Building 80 Pasir Panjang Rd, Level 3, Singapore 117372To RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Senior Solutions Architect, MongoDBJoin the Singapore group to stay updated with upcoming meetups and discussions in Singapore.", "username": "DerrickChua" }, { "code": "", "text": "Hello Everyone!\nWe are excited to see you tomorrow at Google Developer Space, Singapore!Location: Building 80 Pasir Panjang Rd, Level 3, Singapore 117372Event schedule :Doors open at 19.00. Make sure to be on time so you don’t miss the dinner and we all can have time to chat before the talks start.07:00 PM Networking and Dinner\n07:30 PM MongoDB Atlas on Google Cloud\n08:00 PM Break\n08:15 PM Do and Learn: MongoDB Atlas Search\n08:45 PM Trivia and FunFeel free to reach out on this forum if you have any questions!Looking forward to seeing you all tomorrow at the event!", "username": "Harshit" }, { "code": "", "text": "Yes really looking forward to meeting everyone tomorrow and having a nice geek out session!", "username": "DerrickChua" }, { "code": "", "text": "Hello everyone! We had a great turnout last night and I hope all of you have enjoyed yourself and learnt something new about MongoDB. For those who were unable to make it, we hope to see you next time.Here are some pictures from last night and congrats again to all the winners of the trivia. Enjoy your swag!\nimage1920×1440 348 KB\n\nimage1600×1200 235 KB\n\nimage1920×1440 228 KB\n\nimage1920×1440 212 KB\n\nimage1920×1440 203 KB\n\nimage1920×1440 207 KB\n", "username": "DerrickChua" } ]
Singapore MUG: *Inaugural Meetup!*
2022-08-08T08:32:40.780Z
Singapore MUG: *Inaugural Meetup!*
4,396
null
[ "java", "spring-data-odm" ]
[ { "code": "", "text": "his application is connected to a comtainner mongoDB how to lunch the server port to a remote server i use postman for this test\nexemple: in local http://localhost:8080/hello\nport 8080 is running in local\nmy qyestion is :\nhow to start this port on the remote server", "username": "ebbe_AHMED" }, { "code": "", "text": "it is mostly about container port forwarding but try to be a bit more specific about your settings. include at least these details.what type is that container? docker?\nwhere is that remote server? on cloud?\nhow much apart are your web app and mongodb servers? same network?", "username": "Yilmaz_Durmaz" } ]
Testing my containerized back-end application (Spring Boot) with MongoDB
2022-08-30T11:21:48.631Z
Testing my containerized back-end application (Spring Boot) with MongoDB
1,757
null
[ "sharding" ]
[ { "code": "", "text": "How I list the entire cluster information from mongos - mongo shell? Like I am looking a single command to list similar to below information:\nmongos:\nmongos1:27017\nmongos2:27017\nconfig:\nmongoc1:27018\nmongoc2:27018\nmongoc3:27018\nshards:\nsh1:\nmongod1:27019\nmongod2:27019\nmongod3:27019\nsh2:\nmongod4:27019\nmongod5:27019\nmongod6:27019", "username": "Rama_Mekala1" }, { "code": "db.runCommand(\"getShardMap\")[direct: mongos] admin> db.runCommand(\"getShardMap\")\n{\n map: {\n config: 'configRepl/localhost:27024',\n shard01: 'shard01/localhost:27018,localhost:27019,localhost:27020',\n shard02: 'shard02/localhost:27021,localhost:27022,localhost:27023'\n },\n ...\n}\nmongossh.status()db.runCommand(\"getShardMap\")--evalmongosh", "text": "Hi @Rama_Mekala1 - Welcome to the community.How I list the entire cluster information from mongos - mongo shell? Like I am looking a single command to list similar to below informationThe closest single command I could think of that would produce a similar output would be db.runCommand(\"getShardMap\") which results in the following output based off my test environment:It does not include the mongos hostname or port info though from what I have seen from the output (from my test environment).Perhaps you could write a script that runs multiple commands to gather the required information (e.g. sh.status() for mongos details combined with the above db.runCommand(\"getShardMap\") ) . Additionally, there are some examples using multiple --eval options in the mongosh document here.Hope this helps.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I list all servers (mongos, config, shards) of a sharded cluster from the mongos shell?
2022-08-08T15:20:03.395Z
How do I list all servers (mongos, config, shards) of a sharded cluster from the mongos shell?
2,191
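Building on the answer above, a mongosh script (run while connected to a mongos) that assembles roughly the requested overview in one go. The config.mongos collection and the getShardMap/listShards admin commands are standard, but the output shape here is just one possible layout:

```javascript
const shardMap = db.adminCommand({ getShardMap: 1 }).map;

const topology = {
  // mongos instances register themselves in the config database
  mongos: db.getSiblingDB('config').mongos.find({}, { _id: 1 }).toArray().map(m => m._id),
  // e.g. 'configRepl/localhost:27024'
  config: shardMap.config,
  // one entry per shard replica set, e.g. 'shard01: shard01/host:27018,...'
  shards: db.adminCommand({ listShards: 1 }).shards.map(s => `${s._id}: ${s.host}`),
};

printjson(topology);
```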
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "Is there a way to change the name of the web site mentioned in the login-via-Google window that appears when authenticating into a Realm app via Google? The Google sign-in dialog says “Choose an account to continue to mongodb.com”, but I am developing an app for a client and they would like “mongodb.com” to be replaced with their domain name (which is what the end-user is logging into).", "username": "Jeffrey_Pinyan1" }, { "code": "", "text": "Hi all! Do we have any news about this issue with the “mongodb.com” on the Google Consent screen?\nI’m facing the same problem with my project.\nThanks!", "username": "Pablo_Costa" } ]
Changing the web site named in the logIn() window message "Choose an account to continue to mongodb.com"
2022-04-06T14:20:32.410Z
Changing the web site named in the logIn() window message “Choose an account to continue to mongodb.com”
2,527
null
[ "stitch" ]
[ { "code": "", "text": "I followed MongoDb Stitch docs to setup Google OAuth. Everything works fine, except that the user consent screen shows “mongodb.com” instead of my own domain. Google Consent Screen shows the domain from which the request is coming from, not the app name set in the Google console.I thought that I have to let Google verify my web app to make it show my web apps name and logo. Unfortunately, after 5-6 email exchanges I received an email from Google telling me that I have to be the owner of mongodb.com to proceed with my verification. All of the Authorized Javascript Origins and Redirect URLs should be owned by me.So in summary, with current setup it is not possible to show your own web app’s name, because there is no way for me to claim ownership of mongodb.com.Any ideas on how this could be solved this? Thank you!", "username": "Dimitar_Kurtev" }, { "code": "", "text": "Are you saying that your are using a custom domain (and stitch is configured to server your app at <yourdomain.extention> but you still see this in Google Auth screen:\nI think that comes from Google Auth being configured to redirect back to mongodb.com stitch…\nI haven’t seen a way to get that altered but I’ll check with the Stitch team and see if they know of any workarounds…", "username": "Asya_Kamsky" }, { "code": "", "text": "I think that comes from Google Auth being configured to redirect back to mongodb.com stitch…Exactly. I think the initial auth request is also triggered by stitch, so all that Google sees is mongodb domain. Thank you for your fast reply! I’m really interested if there is a workaround.", "username": "Dimitar_Kurtev" }, { "code": "https://us-east-1.aws.stitch.mongodb.commongodb", "text": "Ok I suspect this has something to do with the underlying URL that it will be returning to - in my case that’s https://us-east-1.aws.stitch.mongodb.com In other words, they are saying, whatever this looks like, the web page you will be returned to is actually hosted by mongodb.I’m checking with the Stitch team to see if I’m overlooking something but it seems like this feature either doesn’t exist or doesn’t work - either way the team will let me know (or maybe they’ll post here themselves).", "username": "Asya_Kamsky" }, { "code": "", "text": "Hi again,\nDrew DiPalma answered at https://mongodb.canny.io/ to the same question:Hi Dimitar – After some additional investigation with the engineering team, there doesn’t seem to be a way around this at this time. We are looking into an improvement for later in the year that would allow greater flexibility with using Custom Domains + Stitch and will keep this in mind.This is unfortunate. More than 50% of users in my beta test used Google login.\nI assume, as a workaround, I can implement custom JWT Google authentication.", "username": "Dimitar_Kurtev" }, { "code": "mongodb.com", "text": "That’s really weird because I just tried it again and this time it didn’t show mongodb.com like before!!!\n\nScreen Shot 2020-02-20 at 4.43.40 PM1080×446 36.9 KB\n", "username": "Asya_Kamsky" }, { "code": "", "text": "That’s strange. I tried but I still see the mongodb.com link.", "username": "Dimitar_Kurtev" }, { "code": "", "text": "any update on a fix for this?? I just ran into the same problem ", "username": "Colin_Green" }, { "code": "", "text": "After reading this thread, just wondering if there has been any additional progress made on this issue? 
I’d rather not write a custom JWT handler (and external application) just so I can show something other than “mongodb.com” on the login screen.", "username": "Justin_Jarae" }, { "code": "", "text": "Hi ! Any news about google consent validation ?", "username": "Jonathan_Gautier" }, { "code": "", "text": "Hi all! Do we have any news about this issue with the “mongodb.com” on the Google Consent screen?", "username": "Pablo_Costa" } ]
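To make the custom-JWT workaround Dimitar mentions concrete, here is a hedged Python sketch of a small backend endpoint that verifies the Google ID token itself (so Google's consent screen is tied to your own domain and client ID) and then mints a token for the Stitch/App Services Custom JWT authentication provider. All identifiers below are placeholders, and the required claims (aud, sub, exp) should be checked against the Custom JWT provider docs for your app:

import time
import jwt  # PyJWT
from google.oauth2 import id_token
from google.auth.transport import requests as google_requests

GOOGLE_CLIENT_ID = "<your-google-client-id>"   # registered under YOUR domain
APP_ID = "<your-stitch-app-id>"                # becomes the JWT audience
JWT_SIGNING_KEY = "<secret configured in the Custom JWT provider>"

def google_token_to_app_jwt(google_id_token: str) -> str:
    # Raises ValueError if the token is invalid or issued for another client ID.
    info = id_token.verify_oauth2_token(
        google_id_token, google_requests.Request(), GOOGLE_CLIENT_ID
    )
    now = int(time.time())
    payload = {
        "aud": APP_ID,        # audience must match the app the provider belongs to
        "sub": info["sub"],   # stable Google account id becomes the user id
        "iat": now,
        "exp": now + 3600,
    }
    return jwt.encode(payload, JWT_SIGNING_KEY, algorithm="HS256")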
Google OAuth Consent Screen shows mongodb.com
2020-02-13T19:00:36.718Z
Google OAuth Consent Screen shows mongodb.com
5,614
null
[ "swift", "atlas-device-sync" ]
[ { "code": " \"type\": \"partition\",\n \"state\": \"enabled\",\n \"development_mode_enabled\": true,\n ....\nSDK: Realm Cocoa v10.28.6ending session with error: user cannot perform additive schema changes without write access: non-breaking schema change: adding schema for Realm table \"Formation\", schema changes from clients are restricted when developer mode is disabled (ProtocolErrorCode=206)", "text": "I’m setting up a new Atlas App Service, and have my “Device Sync” setting to developer mode enabled. I can verify this in the web UI and also in the “sync.json” which shows:However, when I try to sync from my iOS (swift, but not SwiftUI, SDK: Realm Cocoa v10.28.6) app, I get a “permission denied” error on the client, and on the web UI Logs , I see:ending session with error: user cannot perform additive schema changes without write access: non-breaking schema change: adding schema for Realm table \"Formation\", schema changes from clients are restricted when developer mode is disabled (ProtocolErrorCode=206)I don’t understand what’s wrong. Development mode is enabled.Any ideas? Thanks!", "username": "Alex_Tang1" }, { "code": "", "text": "Ha, funny. Apparently i had a similar problem a year or so ago and it was referenced by: Create synched realm-sync schemas server side using development mode, which had the solution: I had write permissions blocking the creation of the schema.", "username": "Alex_Tang1" } ]
Getting "permission denied... schema changes from clients are restricted when developer mode is disabled" when developer mode is enabled
2022-09-01T06:52:51.011Z
Getting &ldquo;permission denied&hellip; schema changes from clients are restricted when developer mode is disabled&rdquo; when developer mode is enabled
2,259
null
[ "dot-net" ]
[ { "code": "", "text": "HiI am resolving the obsolete methods left over after the MongoDB package update and I am stuck in the middle of the client reset functionality. I have tested the DiscardLocalResetHandler and it behaves a little bit odd compared to the documentation. I do destructive change and I receive the HandleBeforeResetCallback but I never receive the HandleAfterResetCallback even If I do not close the app for a long period of time. Besides, when I close the app and reopen it again I receive the HandleBeforeResetCallback again. And eventually, when I close the app and open it for the third time I get the app with proper sync. So is this behaviour correct or am I doing something wrong?Thanks.", "username": "Vardan_Sargsyan92" }, { "code": "DiscardLocalResetHandlerDiscardLocalResetHandlerOnAfterResetManualResetFallbackDiscardLocalResetHandlerDiscardLocalResetHandlerManualResetFallback", "text": "Hi @Vardan_Sargsyan92,The DiscardLocalResetHandler strategy discards the local changes and gets a fresh copy of the realm stored on the sync server. A destructive schema change makes this impossible as downloading the fresh realm is impossible given the mismatching schemas. In that case, your client and server need to re-align with the schema first, then sync can be restarted.I do destructive change and I receive the HandleBeforeResetCallback but I never receive the HandleAfterResetCallback even If I do not close the app for a long period of time. Besides, when I close the app and reopen it again I receive the HandleBeforeResetCallback againGenerally, when the DiscardLocalResetHandler strategy fails because of an error, it fallsback to the ManualResetFallback. So it’s normal that you don’t see the OnAfterReset being triggered. However, you’d see the ManualResetFallback triggered.And eventually, when I close the app and open it for the third time I get the app with proper sync.This is a little strange. Are you sure that in the meanwhile you haven’t updated the schema on the client to match the server?As a little conclusion, DiscardLocalResetHandler is expected to be really useful in a situation where the client and the server don’t share anymore the same history; but still share the same schema.It’s generally really helpful to have an overview of what could cause a client reset. I’d recommend to take a look at our docs about this subject.Additionally, we specifically explain in our docs that a destructive schema change is not something that DiscardLocalResetHandler can handle, and it’ll fallback to ManualResetFallback.", "username": "Andrea_Catalini" }, { "code": "", "text": "Hi @Andrea_Catalini\nthe question is based on your docs and thank you for just re-denoting the links which I have already read. Indeed the DiscardLocalResetHandler can not handle the destructive change you have mentioned. The destructive change is one of the ways to imitate a client reset which I am trying to achieve. So I am not asking what the DiscardLocalResetHandler do or what it can handle. I am just asking why I am not receive the after-reset callback.\nNow Regarding the points that you have mentioned above.So it’s normal that you don’t see the OnAfterReset being triggered. However, you’d see the ManualResetFallback triggered.This is a little strange. Are you sure that in the meanwhile you haven’t updated the schema on the client to match the server?", "username": "Vardan_Sargsyan92" }, { "code": "OnBeforeResetOnAfterReset", "text": "As a one off, you could simply disable and re-enable sync. 
That’ll trigger both OnBeforeReset and OnAfterReset.\nBut if you’re writing integration tests, this is clearly not a good avenue as you need automation. I’ll get back to you with some advice for triggering a client reset in a programmatic way.", "username": "Andrea_Catalini" }, { "code": "", "text": "@Andrea_Catalini As a one-off, you could simply disable and re-enable sync. That’ll trigger both OnBeforeReset and OnAfterReset.I have tried, but no luck. Thank you for the info. I will do a manual reset.", "username": "Vardan_Sargsyan92" }, { "code": "", "text": "I’m investigating why disabling and re-enabling sync isn’t doing it.\nIf integration tests are not what you’re after, but you’re just after hitting the before and after callbacks, you could use a method we expose on the session exactly to simulate a client reset. The method is SimulateAutomaticClientResetFailure and you can see how we use it in our tests.I hope this does it for you.", "username": "Andrea_Catalini" }, { "code": "", "text": "Hi @Andrea_Catalini. Thank you for your efforts. SimulateAutomaticClientResetFailure would be a good option for my case.", "username": "Vardan_Sargsyan92" }, { "code": "", "text": "Hi @Andrea_Catalini.\nCould you please answer the following question?I am using the methods inside the TestingExtensions. The application crashes when I use the SimulateError method and pass the DivergingHistories error code. Besides, I have implemented error handling for session exceptions as discussed here, but the problem is that the app crashes before reaching the exception handling method.\nSo is that acceptable behaviour or not?", "username": "Vardan_Sargsyan92" }, { "code": "ClientResetHandlerOnSessionErrorSimulateError", "text": "I take it that this is a completely different issue. You are now asking about testing session errors.\nWhat you describe is not supposed to happen. Can you show the full test? I’m interested in seeing your ClientResetHandler, your OnSessionError, the overall logic of the test and how you use SimulateError.", "username": "Andrea_Catalini" }, { "code": "var config = new PartitionSyncConfiguration(partition, user)\n{\n ClientResetHandler = new DiscardLocalResetHandler\n {\n OnBeforeReset = (beforeFrozen) =>\n {\n // executed right before a client reset is about to happen\n },\n OnAfterReset = (beforeFrozen, after) =>\n {\n // executed right after an automatic recovery from a client reset has completed\n },\n ManualResetFallback = (session, err) =>\n {\n // handle the reset manually\n }\n }\n};\n\nvar realm = Realm.GetInstance(config);\n\n// ... do whatever you need to do\n// let the app wait for sync to be terminated\nawait Task.Delay(20000);\n\n// manually terminate sync\n\nrealm.Dispose();\nwhile (!realm.IsClosed)\n{\n await Task.Delay(500);\n}\n\nrealm = Realm.GetInstance(config);\n\n// your `OnBeforeReset` and `OnAfterReset` callbacks should be hit now\n", "text": "I just tried terminating and restarting sync myself and it does trigger a client reset. The only detail, which I understand is not obvious, is that the client generally tries to reconnect for up to an hour.\nTo avoid the waiting time, once you know you’ve terminated sync on the server, just close and reopen the realm. The skeleton of a test of this type would look like thisI hope this works for you. Let me know if you still encounter issues.Andrea", "username": "Andrea_Catalini" }, { "code": "", "text": "Hi @Andrea_Catalini Thanks for the info. Apparently, the whole client reset process is not much improved from before. 
Thus I will keep the workaround the same as discussed here.", "username": "Vardan_Sargsyan92" }, { "code": "private async Task<bool> ExecuteCoreAsync(object parameter, CancellationToken token = default)\n {\n using var realm = await _realmFactory.GetRealmAsync(_databaseManager.CurrentDatabase);\n realm.SyncSession.SimulateError(_viewModel.RealmErrorCode, _viewModel.RealmErrorCode.ToString());\n return true;\n }\nprivate async Task<RealmConfigurationBase> GetSyncedConfigurationAsync(string realmDbPath, string databasePartitionKey)\n\n {\n\n IRealmUserContext userContext = await _realmAuthService.SignInAsync();\n\n var syncedConfiguration = new PartitionSyncConfiguration(databasePartitionKey, userContext.User, realmDbPath)\n\n {\n\n Schema = _realmTypes,\n\n OnSessionError = HandleSessionError,\n\n ClientResetHandler = new DiscardLocalResetHandler\n\n {\n\n ManualResetFallback = HandleManualReset\n\n }\n\n };\n\n return syncedConfiguration;\n\n }\n\n private void HandleSessionError(Session session, SessionException error)\n\n {\n\n...\n\n }\n\n private void HandleManualReset(ClientResetException clientResetException)\n\n {\n\n...\n\n }\n", "text": "I have a button inside the app which executes the following part of the code.Here is the part of the PartitionSyncConfiguration setup.So when I tap on the button, the app simulates a session exception with the DivergingHistories error code and nothing happens. When I tap on the button again, the app crashes before reaching the HandleSessionError method. If I just call the method SimulateAutomaticClientResetFailure, then the app enters the HandleSessionError method.", "username": "Vardan_Sargsyan92" }, { "code": "", "text": "What we worked on for the client reset was adding strategies that greatly simplify developers’ lives when it comes to data handling and recovery when compared to the previous manual handler.\nWhen it comes to testing, if you are fine with not doing an integration test but just testing code paths, then the suggested SimulateAutomaticClientResetFailure should do.\nIf you are after integration testing, I suggested a working way that unfortunately needs some user interaction, namely terminating and re-enabling sync. If it’s very important for you to programmatically trigger a client reset on the server, please open a GitHub issue so that the whole team can investigate the feature.Let me know if there is still something unclear.", "username": "Andrea_Catalini" }, { "code": "", "text": "Thank you @Andrea_Catalini. I’ll do that", "username": "Vardan_Sargsyan92" }, { "code": "", "text": "Overall the code shown seems reasonable and should work, granted that the missing parts of the code, like what the factory does, have no errors.So when I tap on the button, the app simulates a session exception with the DivergingHistories error code and nothing happens. When I tap on the button again, the app crashes before reaching the HandleSessionError method.None of this is normal, but our tests don’t show such behaviour. If you can extract the issue into a small project and send it to us, we can investigate.", "username": "Andrea_Catalini" } ]
HandleAfterResetCallback never hits
2022-08-31T13:16:42.693Z
HandleAfterResetCallback never hits
3,942
null
[ "flexible-sync" ]
[ { "code": "@RealmModel()\nclass _Pet {\n ...\n}\n\n@RealmModel()\nclass _Person {\n List<_Pet> pets = [];\n}\n@RealmModel()\nclass _Person {\n List<ObjectId> pets = [];\n}\n", "text": "I was wondering how to work with object-links and Flexible Sync.\nConsidering one very simple example:In this case, I cannot create a query on the pets of a person, meaning I can not query them with Device Sync. My idea was to use ObjectId instead of the real reference:But now I am basically losing the real connection to the database and am required to query the objects manually.Is there a proposed solution for this scenario as it seems very common to me.", "username": "Thomas_Anderl" }, { "code": "ownerId == user.idownerIdregion", "text": "I think the easiest solution, if you have a sufficiently small data set, is to subscribe to all Person objects and all Pet objects. Then, you know the object links will always work.If you have sufficiently restricted permissions model, such as the read and write own data model proposed in the Flexible Sync Permissions Guide, you can sync on a field that must be in all objects the user can access - i.e. ownerId == user.id. Then both linked objects would need the ownerId field, but the link would be preserved.Otherwise, you’d probably want to look for common data patterns to sync on. For example, maybe your Pet and Person both have a region field, and you could sync all Pet and Person objects within a region.Curious if any folks have other suggestions for how they’re handling this.", "username": "Dachary_Carey" }, { "code": "", "text": "Thank you for your input. But if I add a field “ownerId”, it is of type ObjectId and no longer of the original object type.I thought about having both, a real object reference and an objectId reference, but this seems very wrong.", "username": "Thomas_Anderl" }, { "code": "", "text": "@Thomas_Anderl did you get this worked thru? I just went thru implementing similar to Dachary’s suggestion and can share my experience if you need.Dachary’s suggestion is the implementation I settled on.", "username": "Joseph_Bittman" }, { "code": "", "text": "Hey, @Joseph_Bittman.I would be happy if you could share your solution. I am not 100% sure if I fully got the idea.", "username": "Thomas_Anderl" }, { "code": "", "text": "@Thomas_Anderl, are you using flexible sync? That is what I am using. 
I can share soon.", "username": "Joseph_Bittman" }, { "code": "const Person = {\n name: \"Person\",\n properties: {\n name: \"string\",\n birthdate: \"date\",\n dogs: \"Dog[]\"\n }\n};\nconst Dog = {\n name: \"Dog\",\n properties: {\n name: \"string\",\n age: \"int\",\n breed: \"string?\"\n }\n};\n let config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n\n subs.append(QuerySubscription<ItemGroup>(name: \"user_groups\") {\n $0.ownerId == user.id\n })\n subs.append(QuerySubscription<Item>(name: \"user_items\") {\n $0.ownerId == user.id\n })\n }\n", "text": "Here is some better documentation from another SDK (I’ve found some SDK flavors have more detailed documentation on specific topics).In the “to-many” section example:Again, this is the object model in the SDK, and your SDK syntax will be a bit different, but you want essentially to use this object model example and translate it to yours.Regardless of the SDK you use this with, here is the experience you will get:\nEach person instance will be in its own collection.\nEach dog instance will be in its own collection.personInstance.dogs will be an array of RealmObjects\npersonInstance.dogs[n] will be an actual, fully fleshed out data object. You don’t have to manually query them to resolve them from object ids. Also, each realmobject will have an id value in case you want the object id.You can do things like:\npersonInstance.dogs[n].name = “new doggie name”\nAnd this will persist back to the instance of the dog in the dogs collection. The person instance in the person collection just contains a reference (that is resolved automatically for you) to the instance in the dog collection.Also, with flexible sync, you don’t need to pull every person or dog in the collections for this to work. If you use a subscription, you can pass a query/filter/where syntax to pull down just the Person instances (and dogs as applicable) that are relevant. This is what Dachary was referencing:In this example from the swift-ui-tutorial link above, in your case, “user_groups” is Person and “user_items” is dogs. You need a subscription to each collection, and can filter down to some more limited dataset as appropriate. “ownerId” would have been set up as a “queryable” field.Make sure you have set permissions properly on the server’s collection permissions and not just filtering in the end client interface.Again, check out some subscription articles from various SDKs to get a more complete understanding of how subscriptions are handled.I was able to track down this documentation, so I think this is more helpful than my implementation as it will allow you to discover more answers to questions around these topics. I basically did what is above where I have a queryable field by ownerId, and subscriptions to both collections, and then can interact with the data objects without having to manually resolve the references.Hope this helps!", "username": "Joseph_Bittman" }, { "code": "const Dog = {\n name: \"Dog\",\n properties: {\n name: \"string\",\n age: \"int\",\n breed: \"string?\",\n ownerId: \"objectId\" //NEW\n }\n};\n", "text": "@Joseph_Bittman Thank you for your response. The subscription is however the part I struggle with. The swift-example you are refering to uses$0.ownerId == user.idSo here, we have a field called ownerId which is an ObjectId and not an Object. That means, this QuickStart does also not use real objects. For a field to be queryable, it cannot be a link if I understood correctly. 
So for your example to work, Dog needs to be extended:And this is where my problem starts. I don’t want to model all references by ObjectId just because I need to query them. I would prefer having owner: ‘Person’, which would be the “natural” way to solve this, but then I cannot query the field anymore.", "username": "Thomas_Anderl" }, { "code": "", "text": "I see. Yes, you are right.As you said, it would in some cases be more natural to refer to an object than to an object id. MongoDB historically, clearly, has not really enabled this, as demonstrated by the partition key method of permissioning user data… or defining your own field to hold an id to then use when applying permissions.Flexible Sync has added additional roles/permission rules but still doesn’t support permissions (or subscriptions) being evaluated based on embedded/linked/referenced objects.Their documentation still shows examples using ownerId, which is an ObjectId, as you have pointed out.I guess the difference between my use cases and yours is that I don’t really go from any particular object to its owner often. Therefore, I can use the object id and work with it in permission rules and subscriptions without inconvenience. In the infrequent cases where I need the actual user data (such as in Account), I’m just querying for the specific user. Hence being able to implement similar to the examples and Dachary’s suggestion.", "username": "Joseph_Bittman" }, { "code": "", "text": "@Joseph_Bittman Thank you. This solution is the one I took now. But it feels wrong to me. I wanted to make sure I wasn’t missing anything here.Doing it that way just adds an unnecessary layer of complexity and inconsistency for the developers and results in unforeseeable issues (e.g. GraphQL won’t work the way it is supposed to anymore).", "username": "Thomas_Anderl" }, { "code": "ownerId", "text": "It may not be optimal, but you could have an ownerId property on both objects that is not the object link. For example, person could still have the list of dogs to represent the to-many relationship, and both the person and dog objects have an owner id. So you’d only be syncing objects where ownerId matches the user.id, but the ownerId isn’t actually the linking field - the relationship field is the linking field as you’d expect. In this case, ownerId is just a piece of metadata that allows you to sync the objects you want. This gives you some future-proofing for when linked objects are supported; you’ll already have the relationship field so you’ll have the natural link, and can just start ignoring the ownerId field at that point.", "username": "Dachary_Carey" }, { "code": "", "text": "@Dachary_Carey I thought about this too. But this only covers this specific use case. I also have a chat in my application, and there I need a different approach (saving a copy of the chatIds in the user metadata and the messages referencing the chat via ObjectId instead of a link).There is a workaround for all scenarios, but I was hoping there was a cleaner way from MongoDB to deal with this issue. These seem like kind of nasty workarounds to me.", "username": "Thomas_Anderl" } ]
Flexible Sync on links
2022-08-09T07:23:47.494Z
Flexible Sync on links
3,837
https://www.mongodb.com/…3_2_1024x451.png
[ "connecting" ]
[ { "code": "", "text": "Hello Folks, just could not find what is wrong here. Trying to connect to my free cluster from my home lan but didn’t work. From my azure VM works fine, so it’s not a database user or permission problem. I have whitelisted everything with 0.0.0.0. Ports 27017,27016, and 27015 are working fine on my lan and on my notebook (Kubuntu 20). Tested with portquiz.net successfully and with curl, I can ping the cluster fine too, so I believe DNS is resolving fine. Anyway, I saw some stackoverflow posts and changed my DNS to google 8.8.8.8, prior was to OpenDns, but it didn’t work either. I’m running out of solutions, any tip of what’s going on? thanks.\nScreenshot_20210409_1156441045×461 63.5 KB\n", "username": "Fabio_Muller" }, { "code": "", "text": "Did you try with alternate internet connection like your mobile hotspot?\nor Google 8.8.4.4 DNS.Try long form of connect string and see if it works instead of SRV string", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks for replying Ramachandra, After your suggestions I tested with my mobile hotspot and get the same error. Changed the DNS to google second and nothing yet. As I got the same error with my mobile hotspot I became distrustful, it’s was not a network problem. My note is dual boot, so I run Windows and volia it works on the first trial, shell and compass, no config, no changes. So it seems the problem is with Kubuntu (20.04) nothing connecting there, shell or compass. WIll try to investigate a little bit more with Kubuntu and put the results here.", "username": "Fabio_Muller" }, { "code": "", "text": "After a lot of reading and tests, I found a string that worked, it was basically added --tls and --tlsAllowInvalidCertificates to the connection string. I search a little bit but didn’t find any quick solutions to solve that. I know that allow invalid certificates is not supposed to be a good security thing but this database is for tests purposes only. and is a Linux thing of my configuration but I would like to understand it a little bit better. I could not pass to compass and Mongbooster this string so If there is a solution to connect without these constraints would be better. I created another Cluster in Azure, instead of AWS just to see if it was some kind of key issue with AWS server as they changed the certificates recently but it has nothing to do with that. The same is happening when connecting to Azure cluster, I have to put the tls’s strings. What do you think ? tks…", "username": "Fabio_Muller" }, { "code": "", "text": "invalid certificates is not supposed to be a good security thingYes not recommended\nWhat is the issue with Compass.What error are you getting\nYou can get your Compass string from Atlas or try fill connection details option where yu specify individual params like hostname etcAre you using any VPN,anti virus when attempting to connect to mongodb which cause port blocking,firewall issues\nOr your home LAN has high security which does not allow connection", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Compass cannot connect, gives me time-out after some time. I tried many string variations and none worked. The lan is working fine. I can totally connect through Windows 10 on the same machine. The problem now is with Kubuntu and security more specifically. It’s not a usual thing that I can connect with WIndows without any extra configuration, just install and run but had to make adjusts to Linux. 
Generally it is the other way round, but nothing is perfect, and the question now is just TLS. The Mongo client is connecting, but insecurely. I tried to understand how --tlsAllowInvalidCertificates works but could not understand it very well at first sight; I will have to dig a little bit to see what its relation is to SSL/TLS and this kind of security in Linux. If you have any tips please let me know. Thanks.", "username": "Fabio_Muller" }, { "code": "", "text": "@Fabio_Muller Can you share your connection string? I am facing the ServerSelectionError.", "username": "Riyad_Sm" }, { "code": "", "text": "Have you whitelisted your IP?\nOr try from another location/network.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Yes. I listed my IP.", "username": "Riyad_Sm" }, { "code": "", "text": "Are you trying by shell or Compass?\nAre both failing?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Your IP is not necessarily the one seen by the server.See Cannot connect remotely to mongodb - #2 by steevej", "username": "steevej" }, { "code": "", "text": "Hi @Riyad_Sm, I know there was another thread you had where you were getting timeouts even though others were able to connect to your Atlas cluster without issue.According to this post you got things to work. Are you having problems once more? This post (the one we’re in now) appears to be dated before the post I linked above.It’s best not to put the same problem in multiple posts, as it makes it difficult for those trying to help out to follow along with the progress of resolution to the problem.", "username": "Doug_Duncan" }, { "code": "", "text": "@Doug_Duncan yes, I am facing the problem again. I did mention that my problem was solved (I created a new Mongo account). It worked for a few days as well, but again I am facing the problem! I am completely out of ideas right now!\nPlease help", "username": "Riyad_Sm" }, { "code": "0.0.0.0/00.0.0.0/0", "text": "Sorry to hear that the issue has returned. That doesn’t make any sense, but the original problem seemed to reside on your side with networking, as I was able to connect to your Atlas instance with no issues.All I can suggest is to look at the network access section for your Atlas cluster and verify that your IP address is in the allow list. For testing you can always add 0.0.0.0/0 to allow access from anywhere, but I would remove that after testing for security reasons. If you can’t connect even with 0.0.0.0/0 then you have networking issues that are keeping you from accessing the cluster, and I wouldn’t be able to help out much with that since I don’t have access to your machine to see what’s going on.", "username": "Doug_Duncan" } ]
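On the certificate question Fabio raises: on some Linux installs the distro's CA store is outdated, so Atlas certificate validation fails even though the network path is fine. Rather than --tlsAllowInvalidCertificates, the usual fix is to point the driver at a current CA bundle. A minimal pymongo sketch (the URI is a placeholder):

import certifi
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://user:[email protected]/test",
    tls=True,
    tlsCAFile=certifi.where(),  # validate Atlas certs against certifi's up-to-date CA bundle
)
print(client.admin.command("ping"))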
Connection Problems to Atlas from my home Lan
2021-04-09T14:58:50.854Z
Connection Problems to Atlas from my home Lan
8,562
null
[ "configuration" ]
[ { "code": "", "text": "Hi i tried to work with mongoDb today . I just installed iton my pc and tried to start a port using Hyper terminal\nAfter that i issued the command -\nmongobut it returned\nbadValue: error: no args for configdb–can anyone please tell me what this means", "username": "Tarang_Rastogi" }, { "code": "mongosmongosmongosmongodmongosh", "text": "I am going to assume that you ran the mongos command with no parameters. mongos is the interface between clients and a sharded cluster. You can read about mongos in the documentation.If you have installed MongoDB and have the mongod (this is the database daemon) process up and running, then you can connect to is using the mongosh command line tool.If you have a couple of hours, I would recommend going through the M001 - MongoDB Basics course on MongoDB University. This is a free course and should help you get up to speed on using MongoDB.", "username": "Doug_Duncan" }, { "code": "", "text": "A post was split to a new topic: Windows install problem: This app can’t run on your PC", "username": "Stennie_X" }, { "code": "", "text": "Hello there! I also run into the same problem and even followed your advice in taking the course and successfully completed it (Thank you for that!!), however, I am still very confused on how we can solve the original question. Typing mongos prints out\n“badValue: error: no args for configdb–\ntry ‘C:\\Program Files\\MongoDB\\Server\\6.0\\bin\\mongos.exe --help’ for more information”\non Windows 11.\nIn Hyper Terminal using vim I put\nalias mongos=\"/c/Program\\ Files/MongoDB/Server/6.0/bin/mongos.exe\"\nbut it still didn’t help me at all.\nHow do I get this to respond using mongosh as you said?\nThank you in advance!!!", "username": "Christos_Ioannidis" }, { "code": "mongos--configdbmognoshmongosh", "text": "mongos is the interface between your application and a sharded cluster. As the error states, you need to pass in the --configdb parameter with your list of config servers.Are you instead meaning to run mognosh which is the new MongoDB shell command line interface? If so, have you downloaded the tool yet? mongosh is not included with the server install.", "username": "Doug_Duncan" }, { "code": "", "text": "I have downloaded mongosh and it is working just fine, I even use it instead of the default IDE provided by the MongoDB courses when each chapter ends.\nRegarding mongos, what you said raises another question about the list of config servers, because I tried the command --configdb and after that the path to the mongos.exe file.\n$ mongos --configdb “C:\\Program Files\\MongoDB\\Server\\6.0\\bin\\mongos.exe”\nThis prints out:\nFailedToParse: Did not consume whole string.\nAm I doing something wrong?\nAnd If so, could you explain what exactly you mean about list of config servers. I searched it a bit and still couldn’t understand.", "username": "Christos_Ioannidis" }, { "code": "mongos--configdbmongosmongos", "text": "The error comes from the fact that you are passing in an executable file and not a replicaset configuration.I would recommend reading the mongos documentation, especially the part talking about the --configdb parameter to better understand what’s going on. If you are not trying to run a sharded cluster, then there is no need to run the mongos command.You might want to look at taking M103 - Basic Cluster Administration as well. 
That course should give some overview of sharded clusters and how to use the mongos command.", "username": "Doug_Duncan" }, { "code": "C:\\\\Program Files\\\\MongoDB\\\\Server\\\\\\\\bin", "text": "I’m new to programming, in case I can save anyone else a similar headache;For whatever reason, the installation didn’t create a mongo.exe in the bin, meaning the “mongo” command won’t work.Try reinstalling an earlier version (5.0 vs 6.0) and verify mongo.exe is in that C:\\\\Program Files\\\\MongoDB\\\\Server\\\\\\\\bin", "username": "Ryan_Chapman" }, { "code": "mongomongoshmongomongosh", "text": "Hi @Ryan_Chapman and welcome to the MongoDB Community forums. Sorry for my delayed response.For whatever reason, the installation didn’t create a mongo.exe in the bin, meaning the “mongo” command won’t work.The mongo command line tool has been superseded by the new mongosh tool. Starting in MongoDB version 6.0 mongo is no longer being installed, but the installer should have installed mongosh.", "username": "Doug_Duncan" } ]
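To make "list of config servers" concrete: the --configdb argument names the config server replica set and its members, not a file path. A hedged example with made-up hostnames:

mongos --configdb cfgRS/cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019 --port 27017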
Hey how to get rid of no args for --configdb error ...i have just started learning mongodb . Can anyone help
2022-07-30T14:51:33.694Z
Hey how to get rid of no args for &ndash;configdb error &hellip;i have just started learning mongodb . Can anyone help
21,866
null
[ "java", "kafka-connector" ]
[ { "code": "", "text": "Hello! Is it possible to use java 17 when working with the mongodbd kafka connector?", "username": "Pierre_Stridsberg1" }, { "code": "", "text": "I can’t think of a reason why it would not work. The more likely issues though would not be with the MongoDB connector but with the Kafka framework itself, so you should check Kafka documentation as well.", "username": "Jeffrey_Yemin" } ]
Java 17 support
2022-09-01T12:55:12.413Z
Java 17 support
2,099
null
[ "crud" ]
[ { "code": "=== Existing Document ===\n{\n name: 'abc',\n _id: 123\n}\n\n=== Document To Insert ===\n{\n name: 'abc',\n _id: 456\n}\n\n=== Final Result ===\n{\n name: 'abc',\n _id: 123\n},\n{\n name: 'abc #2',\n _id: 456\n}\n", "text": "Hey,\nI was wondering if there is a way to insert a document with an indexed field value (not _id), and if the value exists already, then insert it with some change.\nFor example:Thanks", "username": "Efrat_Harel" }, { "code": "", "text": "Hi @Efrat_Harel ,There is no way I know of in the server to do this logic.Usually, you can either query for that value and if exist have an application logic to change it, or wait for a unique index error and add the additional post fix.Ty", "username": "Pavel_Duchovny" }, { "code": "", "text": "That’s what I was afraid of, was hoping there is some way I’m unaware of.\nThanks!", "username": "Efrat_Harel" } ]
If indexed field's value exists upon insert, insert the field as value + 1
2022-08-31T07:20:47.211Z
If indexed field&rsquo;s value exists upon insert, insert the field as value + 1
1,068
https://www.mongodb.com/…e_2_1024x512.png
[ "flutter" ]
[ { "code": "", "text": "Hi. I’m refer and coding below document with using flutter sdk.But I faced that a new login is required every time the app is launched.This is very inconvenient. Is there any kind of automatically login like a “remember me”?", "username": "Chance_Lucky" }, { "code": "app.currentUservar user = app.currentUser;\nif (user == null) {\n // when the login call is successful, app.currentUser will become user\n user = await app.logIn(Credentials.anonymous());\n}\n\nreturn user;\n", "text": "Hey, this doesn’t seem to be well covered in the docs, but the default behavior for the SDK is to remember logged-in users. You can access the current user by calling app.currentUser. So your app could work something like this:If you want to support multiple users in your app (similar to Netflix/Youtube), you can refer to this page in the docs for help.", "username": "nirinchev" } ]
Is there a "remember me" in the login?
2022-09-01T03:06:43.764Z
Is there a &ldquo;remember me&rdquo; in the login?
2,036
null
[ "database-tools" ]
[ { "code": "{\n \"_id\" : ObjectId(\"630f28b72e83ac2a6aa5a0ec\"),\n \"firstName\" : {\n \"string()\" : \"John\"\n },\n \"lastName\" : {\n \"string()\" : \"Doe\"\n },\n \"email\" : {\n \"string()\" : \"[email protected]\"\n }\n}\n{\n \"_id\" : ObjectId(\"630f29402e83ac2a6aa5a0fb\"),\n \"firstName\" : \"John\",\n \"lastName\" : \"Doe\",\n \"email\" : \"[email protected]\"\n}\n", "text": "In CSV import, on the 1st line, if i remove the column types, it imports just fine. Else it creates weird documents.Example1: when CSV file has firstName.string(),lastName.string(),email.string() and 1 row of data:\nResulting Document isIf I remove the column types, and do the CSV import, then resulting document isThe thing is i don’t care about the string type but for many other imports, i want to specify, boolean, timestamp, date, etc.Has this changed in recent versions of mongo? This used to work earlier. I couldn’t find coumentation on this, hence askingThanks", "username": "VenkatMSN" }, { "code": "mongoimport --version--columnsHaveTypes--headerlinefirstName.string(),lastName.string(),email.string()\nJohn,Doe,[email protected]\n{\n _id: ObjectId(\"6310098a14219fbdb46a04c5\"),\n firstName: 'John',\n lastName: 'Doe',\n email: '[email protected]'\n}\n--columnsHaveTypes", "text": "Welcome to the MongoDB Community @VenkatMSN !What does mongoimport --version report and what command line parameters are you using?You can specify field types with --columnsHaveTypes and would also need --headerline if the source of types is the first row in your CSV file.I created a test CSV with:… and imported using:mongoimport --columnsHaveTypes --headerline test.csv --type csv -d foo -c barThe resulting document is:I believe your command line is missing the --columnsHaveTypes parameter, so the types in the header line will be interpreted as field names (which would be embedded documents using dot notation).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Ugggh…Thanks! I was importing directly on Atlas etc and when i had used the import tool on a shell, I had only /headerline and not the columns have types.Problem SolvedThanks!", "username": "VenkatMSN" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Weird results with CSV import
2022-08-31T09:43:57.497Z
Weird results with CSV import
2,469
null
[ "c-driver" ]
[ { "code": "", "text": "Hi,In simple application that inserts lots of documents (calling mongoc_collection_replace_one) we notice when profiling the run that hello command seems to be run each time a new document (mongoc_server_description_handle_hello is called same times as mongoc_collection_replace_one).\nIs this correct/expected, and is there a way to reduce these hello calls as they take significant time ?Also, MongoDB wire protocol is documented but is there a set of guidelines for drivers implementors available to document what needs to be done when client application interacts with MongoDB server ?Thanks\nBest regards", "username": "JoeC" }, { "code": "mongoc_server_description_handle_hellomongoc_server_description_tmongoc_server_description_new_copymongoc_server_description_handle_hellohellohellomongoc_apm_set_server_heartbeat_succeeded_cb", "text": "mongoc_server_description_handle_hello is also called when a mongoc_server_description_t is copied in mongoc_server_description_new_copy.Most calls mongoc_server_description_handle_hello do not suggest a new hello response is being handled.To observe when a hello new response is being handled, use mongoc_apm_set_server_heartbeat_succeeded_cb to observe when the driver completes a “hello” command to check the status of a server. Application Performance Monitoring (APM) — libmongoc 1.23.2 includes an example.Also, MongoDB wire protocol is documented but is there a set of guidelines for drivers implementors available to document what needs to be done when client application interacts with MongoDB server ?GitHub - mongodb/specifications: Specifications related to MongoDB is intended for driver implementors. specifications/server-monitoring.rst at master · mongodb/specifications · GitHub may be the most relevant.Sincerely,\nKevin", "username": "Kevin_Albertson" }, { "code": "", "text": "Thanks for replying. From profiling done on a simple client that only writes documents it seems that hello command gets issued at each time a document is written. We were wondering if this is was by design or if it was possible to reduce the amount of these hello pings as they take some time ? We tried changing some config values without success.\nThanks", "username": "JoeC" } ]
Hello command when inserting documents?
2022-07-21T21:43:53.309Z
Hello command when inserting documents?
2,494
null
[ "aggregation", "python" ]
[ { "code": "", "text": "Hi,I have a job that triggers the lambda and get the count from mongodb everyday. When i try to retrieve the data from mongodb using the below query {’$and’: [{’_id.a’: 402}, {‘t.u’: {’$gte’: datetime.datetime(2022, 6, 29, 22, 0, tzinfo=datetime.timezone.utc), ‘$lte’: datetime.datetime(2022, 8, 9, 22, 0, tzinfo=datetime.timezone.utc)}}, {’_id.s’: {’$gt’: 99999999}}]} . I’m getting two different counts. when the job ran , i got the count as 661 whereas the actual count was 340. But when i ran it separately in the morning i got the count as 340. No changes were made in the query. Even when the job ran the same query fetched the different count.Can someone help me on this issue", "username": "Navaneethan_Sukumaran" }, { "code": "", "text": "Hi @Navaneethan_Sukumaran and welcome to the community!!Could you help me with some informations for the above issue mentioned:You might want to check out:Please help me with the above details to assist further.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "HI Aasawari,Please find the details below,", "username": "Navaneethan_Sukumaran" }, { "code": "$count", "text": "Hi @Navaneethan_SukumaranThank you for sharing the above information, but I think we do not have enough information to determine what’s going on.Could you help with a few more details which may be helpful:I think a better analysis could be made when we have more logging from the Lambda function’s execution. I would suggest adding more logging statements to the function, that way we can see whether the function was executed differently when it was manually triggered.Aside from logging, perhaps you can also check out:Best Regards\nAasawari", "username": "Aasawari" } ]
Same query returning two different count
2022-08-11T12:27:38.603Z
Same query returning two different count
2,354
null
[]
[ { "code": "final class Customer: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n \n @Persisted var nameFirst = \"\"\n @Persisted var nameLast = \"\"\n @Persisted var active = true\n @Persisted var email = \"\"\n @Persisted var phone = \"\"\n @Persisted var notes = \"\"\n \n @Persisted var address: Address?\n \n @Persisted(originProperty: \"customers\") var custGroup: LinkingObjects<CustomerGroup>\n}\n\nfinal class Address: EmbeddedObject {\n @Persisted var city: String?\n @Persisted var postalCode: String?\n @Persisted var provinceCode: String?\n @Persisted var street1: String?\n}\nimport SwiftUI\nimport RealmSwift\n\nstruct CustomerForm: View {\n\n @ObservedRealmObject var customer: Customer\n\n var body: some View {\n ...\n TextField(\"Street\", text: $customer.address.street1)\n ...\n }\n}\nFailed to produce diagnostic for expression; please submit a bug report (https://swift.org/contributing/#reporting-bugs) and include the project", "text": "I have the following object and embedded object in my code:Then in my view object, I attempt to access the embedded object like so:Which produces the error: Failed to produce diagnostic for expression; please submit a bug report (https://swift.org/contributing/#reporting-bugs) and include the projectSomewhat new to swift and realm but the error strikes me as odd.Thanks,\nRon", "username": "Ron_Dyck" }, { "code": " @ObservedRealmObject var parentData: type\n\n VStack{\n\n TextField(\"displayName\", text: $parentData.displayName)\n ........ other parent data fields\n\n embedded_edit(embedded: parentData.nestedObj )\n }\n\nstruct embedded_edit: View {\n @State var embedded: embeddedType\n.......\n TextField(\"displayName\", text: $embedded.displayName)\n", "text": "Old thread, but wanted to post solution here. Just worked thru the same issue with a realm embedded object, and found out that this is more of a swift problem than a realm one.According to this, swift does not like binding nested observableobjects.The best solution is to create a separate, small view just to give the UI elements for the nested object, and pass only the nested object to it. In this way, the separate view only appears to have one level of object, and things work.In my case, I have a parent object with a nested embedded object containing optional data. I was planning on having one edit view for the parent & embedded data combined. However, I ended up having a parent_edit view and an embedded_edit view. Seems harmless enough and was only a tiny bit of boiler plate extra.In the parent view:In the embedded_edit view:", "username": "Joseph_Bittman" } ]
Access EmbeddedObject with RealmSwift in SwiftUI
2021-09-23T18:56:21.523Z
Access EmbeddedObject with RealmSwift in SwiftUI
2,059
null
[ "node-js", "connecting" ]
[ { "code": "", "text": "Hi there. Sorry if I’ve not posted this on the correct forum.\nI’ve got a general question about database connections to Mongo. I’m developing an app using Nodejs, Express and a local hosted MongoDB.The first aspect is more a standard, do a query and then render the results to the browser. Which I open a DB connection, perform the query and close the connection.The second is an AJAX element, where there could be a number of queries, in quick succession. I have read in this instance it’s best not to close the connection, as this will downgrade the performance.My question is this, First, is that correct with an AJAX type application. Second, never closing the database connection, will this cause a problem. EG. some kind of out of memory issue when there is a lot of traffic. Or does the connection automatically close after a period of inaction.Thanks.", "username": "Andy_Bryan" }, { "code": "", "text": "You should take the course M220JS as it goes over an application using nodejs.", "username": "steevej" }, { "code": "", "text": "Hello @Andy_Bryan ,I think @steevej’s suggestion is a solid one. You can go through the M220JS course to get more understanding about implementing the application’s communication with MongoDB using Node.js driver.Having said that I’d like to add a little into the discussion. The general advice is actually to have a pool of established, ready-to-use connections during the life of the application instead of connecting & disconnecting on every operation. All official drivers do this by default. Please refer to Connection Pool Overview for more details.With regard to your question:Second, never closing the database connection, will this cause a problemNo it won’t cause problems as long as the hardware is sized correctly for the workload. Official drivers manage the connections automatically for you to ensure this is as trouble-free as possible. See Connection Monitoring and Pooling if you want to see in detail how this is done.Regards,\nTarun", "username": "Tarun_Gaur" } ]
General question about opening and closing database connections
2022-08-26T09:21:32.171Z
General question about opening and closing database connections
3,770
null
[ "indexes" ]
[ { "code": "", "text": "Hi.After monitoring long queries I created index and the index statistic counter started to increase from 0 to 15, than stopped. I realized also that after reboot/restart all the index statistic counters zeroing and some of it stay zeroed despite of queries. Deleting the particular index with zero counter causing slowness, recreating obviously fix the slowness.\nSo the question is-why in some cases index usage statistic counter stop without obvious reason?", "username": "SeventhSon" }, { "code": "$indexStatsmongod\"accessess.since\"mongoddb.collection.find().explain()", "text": "Hi @SeventhSon,After monitoring long queries I created index and the index statistic counter started to increase from 0 to 15, than stopped.Would you be able to advise how you are checking the index statistics? I assume it would be through the $indexStats aggregation stage but please correct me if I am incorrect in my assumption here.I realized also that after reboot/restart all the index statistic counters zeroing and some of it stay zeroed despite of queries.Upon mongod restarts, the index statistics are refreshed (The \"accessess.since\" value is the time from when the mongod begins recording the index stats). However, the behaviour regarding the counter staying at zero does sound odd (assuming that the index is being used by said queries)So the question is-why in some cases index usage statistic counter stop without obvious reason?One example I can think of is if you’re running the query which uses an index with the explain output. I.e. db.collection.find().explain() won’t increment the counter. However, would you be able to provide the following information so that I can try reproduce the behaviour you’ve mentioned regarding the index counter not increasing:Regards,\nJason", "username": "Jason_Tran" } ]
Index statistic zeroing
2022-08-01T14:50:02.555Z
Index statistic zeroing
1,914
null
[]
[ { "code": "", "text": "Hi,Quite new to mindset of NoSQL in general and MongoDB and Realm in particular.Let’s say I have an application where documents can be:What type of access control, permissions do I use? It seems to depend on variety of factors, i.e. Partition-Based Sync or Flexible Sync.It’s important that only systemwide documents and documents that the user owns (has created themselves) are synced to their device. Is this controlled by “read”?BR,\nJimisola", "username": "Jimisola_Laursen" }, { "code": "edited serverside", "text": "So while this is a pretty good question, it’s going to be hard to answer because there are too many variables and use cases. Also, some of the terminology being used is not clear; for example edited serverside could mean a LOT of things. I would doubt that you’re planning on sitting on a computer accessing the “server” (Realm console) to input data - you may, but that’s highly inefficient.You’re overall setup will also be be different between partition and flex sync. With partition, you could have all of the items that are read-only on one partition that everyone has access to, and then each user could have their own partition. The downside there is that while the user sync’s to their partition which could be a small amount of data, the read-only partition dataset could be massive - or maybe not. So then you consider flex syncWithout knowing a lot more info, it’s going to be hard to make a suggestion, other than think through those use cases and write some code to see what works.", "username": "Jay" }, { "code": "", "text": "I agree with Jay that there are a lot of variables here.That said, in this particular situation, I have a suggestion to learn about Flexible Sync and focus on optimizing that for your situation.Flexible Sync appears to be the future of Realm and partition is pretty legacy. Even if there was a crystal ball showing both would fit your data situation equally well, from a skills perspective I think it makes sense to go with Flex sync. More likely to continued to be used in the future with the current product or others, and then you don’t have a possible deprecation situation later on. Also cross training/documentation will live longer.Anyway, enough hypothetical… sounds like flex sync is the way to go for your situation with a “ownerId” queryable field on your documents that you apply permissions to.For all the ‘system’ documents, have the owner id set to a specific value and limit who can modify based off that. You can still give read access to everyone.For user documents, set the ownerId to the given user and limit permission based on that.", "username": "Joseph_Bittman" } ]
How to handle different types of Documents
2022-08-28T17:16:20.919Z
How to handle different types of Documents
1,202
null
[ "aggregation", "compass" ]
[ { "code": "{\n from: \"stores\",\n localField: \"stores\",\n foreignField: \"_id\",\n pipeline: [\n {\n $geoNear: {\n near: {\n type: \"Point\",\n coordinates: [0, 0]\n },\n distanceField: \"distance\",\n maxDistance: 10000\n }\n }\n ],\n as: \"stores\"\n}\n", "text": "In my aggregation pipeline I eventually get an array of store IDs, the next step is to run a lookup and get the stores documents but I need to limit them to a 10km max distance.The problem I’m facing is that, for some reason, I get an error saying that “$geoNear is only valid as the first stage in a pipeline.”, even though it is the first stage in the $lookup pipeline.Lookup stage:Not sure what I’m missing here.", "username": "Gabriel_Gaspar" }, { "code": "", "text": "MongoDB Version: 5.0.11", "username": "Gabriel_Gaspar" }, { "code": "localFieldforeignField{\n from: \"stores\",\n let: { ids: \"$stores\" },\n pipeline: [\n {\n $geoNear: {\n near: {\n type: \"Point\",\n coordinates: [0, 0]\n },\n distanceField: \"distance\",\n maxDistance: 10000,\n query: {\n $expr: {\n $in: [\"$_id\", \"$$ids\"]\n }\n }\n }\n }\n ],\n as: \"stores\"\n}\n", "text": "Found the problem!It seems that when you use localField and foreignField in the lookup, they run similar to a first stage (like a $match). Removed them and moved all the logic to the pipeline.Works like a charm ", "username": "Gabriel_Gaspar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trying to use $geoNear inside $lookup pipeline
2022-08-31T19:37:13.331Z
Trying to use $geoNear inside $lookup pipeline
2,079
null
[ "aggregation" ]
[ { "code": "let news= NewsRoomCollection.aggregate(\n [\n {$unwind: \"$deals\"},\n {$unwind: \"$users\"}, \n queryParams.filters,\n {$match:{ \"users.userId\": queryParams.userId }},\n {$group:{\n _id: \"$_id\",\n companyId : { $first: '$companyId' },\n companyName : { $first: '$companyName' },\n newsId: {$first: \"$newsId\"},\n newsTitle: {$first: \"$newsTitle\"},\n newsLink: {$first: \"$newsLink\"},\n newsPublishedAt: {$first: \"$newsPublishedAt\"},\n deals: {$push: \"$deals\"},\n users:{$push: \"$users\"}\n }},\n {\n $facet: {\n \"data\": [\n { $sort: queryParams.sortQuery },\n { $skip: skip },\n { $limit: limit },\n ],\n \"pagination\": [\n { $count: \"total\" }\n ]\n }\n }, \n ]\n ).toArray();\n", "text": "Hi, I am new to MongoDB and below is my query and it is working fine when trying to get 50k data performance is very slow, May I know that following query is fine or need any modification.", "username": "Naveen_hm" }, { "code": "[\n {$match:{ \"users.userId\": queryParams.userId }},\n {$unwind: \"$deals\"},\n {$unwind: \"$users\"}, \n queryParams.filters,\n {$group:{\n _id: \"$_id\",\n companyId : { $first: '$companyId' },\n companyName : { $first: '$companyName' },\n newsId: {$first: \"$newsId\"},\n newsTitle: {$first: \"$newsTitle\"},\n newsLink: {$first: \"$newsLink\"},\n newsPublishedAt: {$first: \"$newsPublishedAt\"},\n deals: {$push: \"$deals\"},\n users:{$push: \"$users\"}\n }},\n {\n $facet: {\n \"data\": [\n { $sort: queryParams.sortQuery },\n { $skip: skip },\n { $limit: limit },\n ],\n \"pagination\": [\n { $count: \"total\" }\n ]\n }\n }, \n ]\n", "text": "Hi @Naveen_hm ,The best practices of an aggregation is to perform the match stages as early as possible with an index on the matched fields:In your case is { “users.userId”: 1 } indexed?Why not first match and only then perfom the unwinds?I also don’t really get why you unwind to then group with push? Is it only to find documents with a user in an array? 
This will work also when comparing an array with a single value, as it works like $in…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n \"_id\": \"MhHnLT7frymsjp2Mr\",\n \"newsId\": \"CBMibWh0dHBzOi8vd3d3LmtmdHYuY29tL25ld3MvMjAyMi8wNS8yNi9uZW9tcy13YXluZS1ib3JnLXRhbGtzLXByb2R1Y3Rpb24tYW5kLXN1c3RhaW5hYmlsaXR5LWFtYml0aW9ucy1pbi1jYW5uZXPSAXFodHRwczovL3d3dy5rZnR2LmNvbS9hbXAvbmV3cy8yMDIyLzA1LzI2L25lb21zLXdheW5lLWJvcmctdGFsa3MtcHJvZHVjdGlvbi1hbmQtc3VzdGFpbmFiaWxpdHktYW1iaXRpb25zLWluLWNhbm5lcw\",\n \"newsTitle\": \"NEOM's Wayne Borg talks production and sustainability ambitions in Cannes - KFTV\",\n \"newsLink\": \"https://www.kftv.com/news/2022/05/26/neoms-wayne-borg-talks-production-and-sustainability-ambitions-in-cannes\",\n \"companyId\": \"LDMXHxSEYXtNXGFEr\",\n \"companyName\": \"NEOM\",\n \"deals\": [\n {\n \"dealId\": \"P9v4RMWRr7zcbnptd\",\n \"clusterId\": \"AutonomousAndSustainableMobility\"\n }\n ],\n \"users\": [\n {\n \"userId\": \"iChz62XNeMfA7oB9A\",\n \"isReadNotification\": false\n },\n {\n \"userId\": \"zHruHDSjyWhkvF398\",\n \"isReadNotification\": false\n },\n {\n \"userId\": \"hPS7K5it6nZstj6M3\",\n \"isReadNotification\": false\n },\n {\n \"userId\": \"g3ctNC7rykc8BsoiM\",\n \"isReadNotification\": true\n },\n {\n \"userId\": \"jR9Xjc5Rrcnt3beGe\",\n \"isReadNotification\": false\n },\n {\n \"userId\": \"ZjtNtCHHt3SdPLGYD\",\n \"isReadNotification\": false\n },\n {\n \"userId\": \"xbf6pHLx4CKSjF8rM\",\n \"isReadNotification\": false\n },\n {\n \"userId\": \"yAbR4wmaAwqL9qwsQ\",\n \"isReadNotification\": true\n },\n {\n \"userId\": \"RvfvkH9o3FpEbWKmR\",\n \"isReadNotification\": true\n },\n {\n \"userId\": \"YR9sGETsDyEJAQ9wy\",\n \"isReadNotification\": true\n },\n {\n \"userId\": \"2ehdDcK9QoEzye6jj\",\n \"isReadNotification\": true\n }\n ],\n \"createdAt\": 1655963402992,\n \"newsPublishedAt\": 1653523200000\n}\n", "text": "“I also don’t really get why you unwind and then group with push?” I used this because once I get the result set I loop over it, among other reasons.Is it only to find documents with a user in an array? Not only based on userId, but also on some other fields: queryParams.filters is an object which has other fields for the match.In your case, is { “users.userId”: 1 } indexed? It is not indexed; I am just saving values in an array.Below is the example collection data:", "username": "Naveen_hm" } ]
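Putting Pavel's two suggestions together in runnable form (pymongo here; the index and the match-first ordering mirror the pipeline above, and the example user id is taken from the sample document):

from pymongo import MongoClient

coll = MongoClient().newsroom.NewsRoomCollection
coll.create_index([("users.userId", 1)])  # lets the first $match use an index

user_id = "iChz62XNeMfA7oB9A"  # example user from the sample document

pipeline = [
    {"$match": {"users.userId": user_id}},  # first stage: index-eligible
    {"$unwind": "$deals"},
    {"$unwind": "$users"},
    {"$match": {"users.userId": user_id}},  # re-filter after unwinding the array
    # ... $group / $facet stages as in the thread ...
]
results = list(coll.aggregate(pipeline))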
Aggregate functions causing performance issue
2022-08-29T12:36:45.717Z
Aggregate functions causing performance issue
1,589
null
[ "replication", "python", "transactions" ]
[ { "code": "def find_record():\n # Connect to an existing database\n mongodb_client = pymongo.MongoClient(\"mongodb://localhost:27019/?replicaSet=dmitriy_test\")\n print(\"============== find_record =============\")\n claim_details = mongodb_client.local.claim_details\n print(claim_details)\n with mongodb_client.start_session() as session:\n with session.start_transaction():\n print(f\"+++++++++++++++++SEARCHING FOR===={pnr_id_to_search}\")\n # result_from_search = claim_details.find_one({\"claim_detail.assetDetails.bookingInformation.pnrId\": pnr_id_to_search}, {'_id': False})\n result_from_search = claim_details.find_one({\"claim_detail.assetDetails.bookingInformation.pnrId\": pnr_id_to_search}, {'_id': False}, session=session)\n if(result_from_search is not None):\n print(\"=========================SUCCESS===============================\")\n else:\n print(\"=========================FAILURE===============================\")\n print(f\"=== RESULT OF SEARCH ===\")\n print(result_from_search)\npymongo.errors.OperationFailure: Cannot run command against the 'local' database in a transaction., full error: {'ok': 0.0, 'errmsg': \"Cannot run command against the 'local' database in a transaction.\", 'code': 263, 'codeName': 'OperationNotSupportedInTransaction', '$clusterTime': {'clusterTime': Timestamp(1661955990, 1), 'signature': {'hash': b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', 'keyId': 0}}, 'operationTime': Timestamp(1661955990, 1)}\n", "text": "I am not sure if my setup is wrong or something else but I am not able to run a simple request in a transaction. I have 3 replicas with an arbiter locally. I try to do a simple find, but I keep getting an error that operation is not supportedthis is the error that I am getting", "username": "Dmitriy_Mestetskiy" }, { "code": "Cannot run command against the 'local' database in a transactionlocalmongodconfigadminlocalsystem.*", "text": "Welcome back, @Dmitriy_Mestetskiy !Cannot run command against the 'local' database in a transactionThe local database is a system database intended for use by the mongod process for instance-specific data like the replication oplog.Per the Transactions documentation, a transaction cannot read/write collections in system databases or write to system collections:You should be able to use transactions with any non-system database (and non-system collections).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "THANK YOU!!! This was it!!!", "username": "Dmitriy_Mestetskiy" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot run a command in transaction
2022-08-31T14:28:01.418Z
Cannot run a command in transaction
1,813
null
[ "atlas-cluster" ]
[ { "code": "", "text": "Our system have lots of routines which needs long running queries so we need primary node and it’s connection to stay available while the queries are running.Is there an option to disable automatic version updates ?\nSince on the process of updating, at the end of all secondaries, the primary still need to be re-elected and switched.", "username": "Ittipan_Langkulanon" }, { "code": "", "text": "Welcome to the MongoDB Community @Ittipan_Langkulanon !It is not possible to completely disable automatic version updates as this is a core feature to ensure deployments have the most recent security and stability improvements.However, if you have a dedicated Atlas cluster (M10+) you can Configure a Maintenance Window and use the Test Failover feature to confirm your application correctly handles elections and failover.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there an option to disable automatic version updates?
2022-09-01T01:29:23.786Z
Is there an option to disable automatic version updates?
2,396
null
[ "replication", "java" ]
[ { "code": "", "text": "Hihow to fix this error:No server chosen by ReadPreferenceServerSelector{readPreference=primary} from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=192.168.71.130:27017, type=REPLICA_SET_SECONDARY, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 4, 11]}, minWireVersion=0, maxWireVersion=9, maxDocumentSize=16777216, roundTripTimeNanos=82811743, setName=‘rs0’, canonicalAddress=192.168.71.130:27017, hosts=[192.168.71.128:27017, 192.168.71.129:27017, 192.168.71.130:27017], passives=, arbiters=, primary=‘null’, tagSet=TagSet{}, electionId=null, setVersion=1, lastWriteDate=Mon Aug 29 15:48:46 IRDT 2022, lastUpdateTimeNanos=21968293986802}, ServerDescription{address=192.168.71.129:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketException: Malformed reply from SOCKS server}}, ServerDescription{address=192.168.71.128:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.SocketException: Malformed reply from SOCKS server}}]}. Waiting for 30000 ms before timing outim 3 node (vm) ubuntu 20.04.4 and installed mongo 4.4 and clustered mongodb replicaSet, after down one node other node success work but down 2 node my jar file is errord, how tind and fix is error?thanks.", "username": "omidzamani" }, { "code": "192.168.*", "text": "Welcome to the MongoDB Community @omidzamani !java.net.SocketException: Malformed reply from SOCKS serverIs your application connecting to your deployment via a proxy? What options are you using in your connection string and what version of the MongoDB Java driver?I notice your replica set is using private IPs (192.168.*) which will require the calling application to be on the same network (or have routing via VPN/VPC) in order to establish a replica set connection.The expected behaviour for a replica set connection is that clients use the hostnames, IPs, and ports specified in the replica set configuration.Regards,\nStennie", "username": "Stennie_X" } ]
How to fix No server chosen by ReadPref
2022-08-31T08:10:49.835Z
How to fix No server chosen by ReadPref
6,325
null
[ "queries", "compass", "database-tools", "backup" ]
[ { "code": "", "text": "Hi Team,While restoring i don’t want to over write original collection.", "username": "Vijay_Kumar8" }, { "code": "", "text": "Have you tried using the --db and --collection arguments from my best friend the mongorestore documentation? Other options related to your use-case are --nsFrom and --nsTo.It looks like it will do what you want.", "username": "steevej" } ]
Restore single collection with different name
2022-08-31T23:27:00.600Z
Restore single collection with different name
2,648
null
[ "node-js", "crud", "mongoose-odm" ]
[ { "code": "export const updateEmployee=async(req,res)=>{\n try{ \n const {FirstName,LastName,Email,Password,Mobile,ReportingManager,EmployeeCode,Salary,Location,Country,State,Department,\n }=req.body;\n \n const _id=req.params.id;\n const updatedResult=await PostMessage.updateOne({_id},\n {FirstName,LastName,Email,Password,Mobile,ReportingManager,EmployeeCode,Salary,Location,Country,State,Department},\n {\n new:true,\n })\n console.log('data was updated',updatedResult);\n res.status(200).json(updatedResult);\n }catch(error){\n console.log(error.message);\n res.status(501).json({message:error.message})\n } \n}\n", "text": "How to update multiple fields in a single documentWhenever I try to update multiple fields only one values gets changed irrespective of how many changes I made in my form", "username": "Ayush_N_A3" }, { "code": "", "text": "I have used findByIdandUpdate() method too but same result", "username": "Ayush_N_A3" }, { "code": "c.updateOne( { _id } , { First , Last } , { new : true })\nMongoInvalidArgumentError: Update document requires atomic operators\n{ \"$set\" : \n { FirstName, LastName, Email, /* ... */ }\n}\n", "text": "Are you using an abstraction layer, like mongoose? Probably since youhave used findByIdandUpdate()I ask because when I try your code I get an error.This is expected as you are missing an operator like $set.The new:true option is not one that I recognize for MongoDB nodejs driver.If using pure MongoDB driver try with the following as the 2nd parameter of updateOne().", "username": "steevej" }, { "code": "", "text": "Hii Steevej !!\nYes I am using mongoose, its a mern app could you tell how do i resolve this issue if using moongose?new:true option I saw it on a youtube tutorial in the creator’s video any suggestions how can I solve this issue to update multiple fields in a single document when using moongose also", "username": "Ayush_N_A3" }, { "code": "", "text": "I know nothing about Mongoose. I try to stay away from abstraction layers. Hopefully, someone with mongoose experience will jump it. Other you may try a mongoose specific forum if such a thing exists. There is always stackoverflow.", "username": "steevej" } ]
Updating multiple fields within single document
2022-08-26T14:54:01.599Z
Updating multiple fields within single document
4,587
null
[]
[ { "code": "", "text": "Ok I want at least 3 versions of the database: LOCAL-DEV, TEST, and PRODUCTION, corresponding to 3 environments. Pretty standard.My data is pretty simple, each environment has the same half-dozen tables, would each fit in 1 DB.How do I set this up? Is this three different Clusters? Projects? Databases? Collections? What?(I’ve been here so many times… I’m starting a new platform or service. One of the first thing you do is setup your dev, text, and prod environments. But I’m not familiar enough with the completely new nomenclature to know what corresponds to what… and intro documentation never covers this stuff.)", "username": "Tim_N_A" }, { "code": "", "text": "Hi Tim,There’s no one way to do this but our generally recommendation is to use different Atlas Projects within your Organization for these different environments. In each project you’d then have a different cluster (you might use a much smaller cluster for dev, and maybe you frequently pause your test cluster, etc).Projects offer security/authorization level isolation (e.g. which team members have access) as well as isolation of configurations like Alerts which make them ideal for separating Prod from non-prod environments. A common pattern is to restore a backup from prod to a non-prod environment for test purposes. https://docs.atlas.mongodb.com/best-practices/#the-project-level may be of use.Taking a step back, I love the question because it’s good for thought for us to figure out how to make it easier for users to set up best practices.Cheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "I went ahead with that: organized environments at the project level: project-dev, project-test, project-production.I’ve started using Realm (to provide id and user access, plus expose part of the data to an open api) … now I’m super confused again. Realm Application also have environments?Are Realm environments supposed to all use the same database?Are we expected to migrate entire Realm applications from project to project throughout a development cycle?What’s the correct to organize MongoDB and Realm intro environments?Why do I always have trouble finding documentation on this online?Thanks for an help!", "username": "Tim_N_A" }, { "code": "", "text": "Hi Tim – We actually cover this in a recent blog and .Live talk and are working on incorporating this guidance into our documentation as well. Hope this helps!", "username": "Drew_DiPalma" }, { "code": "", "text": "My understanding of this isssue is that App Services has environments and each environment has values and secrets and can point to its own Atlas cluster in linked data sources.\nSo for instance an application might have something like this.Development(environment in App Services)Staging(environment in App Services)Production(environment in App Services)Then changes to the app configuration can be moved to each environment during the development process. For instance following the guide above. Can someone confirm this?", "username": "Matthew_Brimmer" } ]
How do I organize LOCAL-DEV, TEST, and PRODUCTION environments?
2021-05-17T18:38:06.994Z
How do I organize LOCAL-DEV, TEST, and PRODUCTION environments?
12,361
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.2.23-rc0 is out and ready for testing. This is a release candidate containing only fixes since 4.2.22. The next stable release 4.2.23 will be a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Aaron_Morand" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.2.23-rc0 is released
2022-08-31T18:11:35.773Z
MongoDB 4.2.23-rc0 is released
2,186