Columns: image_url (string or null) · tags (list) · discussion (list) · title (string) · created_at (string) · fancy_title (string) · views (int64)
null
[ "aggregation" ]
[ { "code": "{\n name : 'steve',\n TopCountry : 'HKG'\n}\n// but presently getting:\n{ name : 'steve', TopCountry : 1 }", "text": "Been trying to find the most occurrences of a string. I wanted to do this in a $addFields. Here is the playground, showing demo data, and my query so far. Mongo playground: a simple sandbox to test and share MongoDB queries online. This is my playground so far, but I feel it's very messy.\nI am using $map and $filter. I was told this was better than always using $unwind. My expected output is above, but presently it is showing me the total count: { name : 'steve', TopCountry : 1 }. I would also like to do a sort in the $addFields, so it sorts by the most occurrences of country.\nMaybe use of a $reduce would be better, so I could reduce to the number of times a country was listed:\nHKG : 4\nUSA : 3\nCHN : 5\nThanks.", "username": "Rishi_uttam" }, { "code": "unwind = { \"$unwind\" : \"$downloads\" }\ngroup = { \"$group\" : {\n _id : { \"name\" : \"$name\" , \"country\" : \"$downloads.country\" } ,\n count : { \"$sum\" : 1 }\n} }\nsort = { \"$sort\" : { \"count\" : -1 } }\ngroup_by_name = { \"$group\" : {\n \"_id\" : \"$_id.name\" ,\n \"country\" : { \"$first\" : \"$_id.country\" } ,\n \"count\" : { \"$first\" : \"$count\" }\n} }\npipeline = [ unwind , group , sort , group_by_name ]\nc.aggregate( pipeline )\n{ _id: 'elon', country: 'JPN', count: 2 }\n{ _id: 'mark', country: 'USA', count: 5 }\n{ _id: 'steve', country: 'HKG', count: 4 }\n", "text": "Thanks for providing sample input documents, expected results and what you have tried. Here is what I came up with; what you get is shown above. Not quite in the format you want, but a simple $project should get you to your expected results. I usually prefer not to do this last $project and to perform the final formatting at the application layer.", "username": "steevej" }, { "code": "$unwind$group array.object.nestedArray.object", "text": "Thanks for this, I will have a look. I too thought I needed to $unwind the downloads array; however, the above is a trivial example, and my real data has lots of other steps, including 2 other unwinds for nested arrays. Doesn't this create a huge amount of workload and documents for MongoDB Atlas to process before sending data back? Granted, I am not looking for the best performance, but I did read this; please scroll to the last comment made by a MongoDB employee (I think), Asya_Kamsky. Asya says that I should NOT, and she emphasized in capitals NOT: \" You should never use $unwind and then $group when you just need to transform a single document! \"\nSo that's why I went down the rabbit hole of $map and $filter but still could not get the data I needed. Please comment when you have a moment. Thanks.", "username": "Rishi_uttam" }, { "code": "pipeline = [ group , sort , group_by_name ]\n", "text": "when you just need to transform a single document\nYour requirement was not to transform a single document. The unwind is bad when you unwind and then group using the _id to recreate the original document with the modification. This is not what you are doing.\nabove is a trivial example\nPlease publish real documents so that we do not lose time working on trivial examples that you cannot adapt to your use case.\nSince my downloads array isn't nested, and I only want to reach into downloads.country, why do I need an unwind here?\nBecause if you don't, you don't get the correct result. With the code I shared, it is very easy to remove the $unwind. Simply do:", "username": "steevej" }, { "code": "count : { \"$sum\" : 1 }", "text": "Thank you, this makes sense. So basically, for any array with nested objects that I need to run an expression on, I would always need to unwind first before applying any expression.\nWhat happens if downloads.country does not exist? In that case it will still register with a count of 1, instead of 0: count : { \"$sum\" : 1 }", "username": "Rishi_uttam" }, { "code": "count : { \"$sum\" : 1 }", "text": "i would always need to unwind first before applying any expression\nNO. The post you shared above is an example where you do not have to unwind. The operations $filter, $reduce and $map are quite powerful.\nWhat happens if downloads.country does not exist?\nThis is something you can easily try. You can always $match out edge cases in an earlier stage.", "username": "steevej" }, { "code": "", "text": "Thanks… could you help me do this without unwind and with a filter, reduce or map? I did try that initially in the playground, but could never get it working. Thanks.", "username": "Rishi_uttam" }, { "code": "", "text": "i did try that initially in the playground, but could never get it working\nShare what you tried and explain to us how it failed so that we do not lose time investigating a direction you already know does not work.", "username": "steevej" }, { "code": "", "text": "I finally was able to look at your playground. The good news is that the $addFields correctly sums up the occurrences. Then, to get the document that has the max count, you will have to do something with $filter, $reduce or $arrayElemAt, as explained at mongodb - Find max element inside an array - Stack Overflow.", "username": "steevej" } ]
Getting the value of the most occurrences
2022-05-18T13:06:33.270Z
Getting the value of the most occurrences
4,464
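
The thread ends without a worked $unwind-free pipeline, so here is a minimal sketch of the $filter/$reduce approach discussed above. It assumes the playground's document shape ({ name, downloads: [ { country: ... } ] }); the collection name requests is a placeholder.

    db.requests.aggregate([
      { $addFields: {
          TopCountry: {
            $reduce: {
              input: "$downloads.country",                 // array of country codes
              initialValue: { country: null, count: 0 },
              in: {
                $let: {
                  vars: {
                    n: { $size: { $filter: {
                      input: "$downloads.country",
                      as: "c",                              // named so $$this still refers to $reduce's element
                      cond: { $eq: [ "$$c", "$$this" ] }
                    } } }
                  },
                  in: { $cond: [
                    { $gt: [ "$$n", "$$value.count" ] },
                    { country: "$$this", count: "$$n" },   // new best so far
                    "$$value"
                  ] }
                }
              }
            }
          }
      } },
      { $set: { TopCountry: "$TopCountry.country" } }      // keep only the winning code
    ])

For steve's sample data this should return { name : 'steve', TopCountry : 'HKG' }. Note it is O(n²) per document, so for large arrays the $unwind/$group pipeline steevej shows remains the better fit.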
null
[ "queries", "mongodb-shell", "atlas-search" ]
[ { "code": "db.posts.find({text: {$search : \"\\Post O\\\"\"} })", "text": "Hi fellows,\nI am currently learning how to use mongosh following this video:\nMongoDB Crash Course - YouTube\nTrying this: db.posts.find({text: {$search : \"\\Post O\\\"\"} })\nI get:\nMongoServerError: unknown operator: $search\nWhat do I have to do to fix this error?\nMany thanks in advance,\nUli", "username": "Ulrich_Kleemann" }, { "code": "", "text": "The $search operator is only available through the aggregation pipeline and to collections hosted on Atlas. So using db.posts.aggregate() instead of find() might help. Have a further read of the docs here: https://www.mongodb.com/docs/manual/reference/operator/aggregation/search/", "username": "Abdulhakeem_Ibrahim" }, { "code": "", "text": "Hello Ibrahim, many thanks for your help. I swapped $search for $aggregate, which worked without error, but now I get:\nUnrecognized pipeline stage name: 'views'\nwhen I try: db.posts.aggregate({views: { $gte: 10 } } ).pretty()\nStill searching for a solution to fix it.\nGreetings,\nUli", "username": "Ulrich_Kleemann" }, { "code": "", "text": "To make such a query you need a $match stage.", "username": "steevej" } ]
Unknown operator $search
2022-05-24T14:02:21.298Z
Unknown operator $search
4,494
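
To make the two fixes in this thread concrete, here is a sketch in mongosh: the views filter belongs inside a $match stage, and classic text search on a self-hosted server uses $text against a text index rather than the Atlas-only $search stage (the field name text is taken from the original query):

    db.posts.aggregate([ { $match: { views: { $gte: 10 } } } ])

    db.posts.createIndex({ text: "text" })        // $text requires a text index
    db.posts.find({ $text: { $search: "Post O" } })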
null
[ "database-tools", "mdbw22-hackathon" ]
[ { "code": "- GDELT2.eventscsv\n\n FILTER: {\"GlobalEventId\" : 1043574262}\n\n GlobalEventId 1043574262\n\n ActionGeo_Lat -26.0833\n\n ActionGeo_Long 28.25\n\n ActionGeo_FeatureID 204226\n\n- GDELT2.events\n\n FILTER: {\"GlobalEventId\" : 1043574262}\n\n Action > Geo > coordinates > Array\n 0 28.25 <=== ActionGeo_Long\n 1 -26.0833 <=== ActionGeo_Lat\n", "text": "Hi @Joe_Drumgoole @Shane_McAllister @Mark_Smith @nraboy\nI would like to warn you of something that I think may lead to confusion. In the MongoDB World 2022 Hackathon videos, they first download the file masterfilelist.txt to fetch multiple *.export.CSV.zip files, and afterwards use mongoimport.sh to import the initial data into the collection eventscsv. They then use reshapeData.js to derive the events collection from the eventscsv collection, but they invert the order of the geolocation for Actor1, Actor2 and Action. For example, see above.\nI think it would have been better to store the latitude in the array at index 0 and the longitude at index 1 to maintain a format similar to the previous one.", "username": "Manuel_Martin" }, { "code": "", "text": "I believe that is the way GeoJSON is stored by default in MongoDB.\n[image 788×463]\nSee the documentation", "username": "Fiewor_John" }, { "code": "", "text": "Thank you @Fiewor_John, I thought it was a mistake", "username": "Manuel_Martin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Warning: The script reshapeData.js inverts the order of the Geolocation for Actor1, Actor2 and Action
2022-05-24T19:22:15.884Z
Warning: The script reshapeData.js inverts the order of the Geolocation for Actor1, Actor2 and Action
2,634
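
A one-line illustration of the point settled above: MongoDB's GeoJSON follows the GeoJSON specification, so coordinates are stored [ longitude, latitude ], the reverse of the familiar lat/long order. A sketch using the thread's own values:

    db.events.insertOne({
      name: "Central Metrics",
      location: { type: "Point", coordinates: [ 28.25, -26.0833 ] }  // [ lng, lat ]
    })
    db.events.createIndex({ location: "2dsphere" })  // 2dsphere queries expect this order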
null
[]
[ { "code": "", "text": "Hi, I have this kind of message in the log.\nIs it a problem?\n{\"t\":{\"$date\":\"2021-10-18T11:38:03.494+04:00\"},\"s\":\"D1\", \"c\":\"QUERY\", \"id\":22790, \"ctx\":\"conn229550502\",\"msg\":\"Received interrupt request for unknown op\",\"attr\":{\"opId\":711841748}}\n{\"t\":{\"$date\":\"2021-10-18T11:38:03.494+04:00\"},\"s\":\"D2\", \"c\":\"QUERY\", \"id\":22783, \"ctx\":\"conn229550502\",\"msg\":\"Ops known during interrupt\",\"attr\":{\"ops\":[]}}", "username": "Nanuka_Zedginidze" }, { "code": "", "text": "Received interrupt request for unknown op\nI have seen a huge amount of such messages since upgrading to 4.4.13:\n{\"t\":{\"$date\":\"2022-05-24T20:24:29.069+00:00\"},\"s\":\"D1\", \"c\":\"QUERY\", \"id\":22790, \"ctx\":\"conn11\",\"msg\":\"Received interrupt request for unknown op\",\"attr\":{\"opId\":18786}}\nWhat could be the reason for this D1 message?\nThanks,\nHank", "username": "Hank_Su" } ]
Received interrupt request for unknown op
2021-10-18T08:06:34.770Z
Received interrupt request for unknown op
2,739
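
For context: the "s":"D1" and "s":"D2" markers in these entries are debug-level severities, which only appear when the query component's log verbosity has been raised above the default, so they are verbose noise rather than an error. A sketch of checking and resetting the level in the shell:

    db.getLogComponents()        // inspect current per-component verbosity
    db.setLogLevel(0, "query")   // 0 restores the default, info-level logging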
null
[ "node-js" ]
[ { "code": "{ \"_id\" : ObjectId(\"5cefb17eef71edecf6a1f6a8\"), \"Name\" : \"John\" }\n{ \"_id\" : ObjectId(\"5cefb181ef71edecf6a1f6a9\"), \"Name\" : \"Chris\" }\n{ \"_id\" : ObjectId(\"5cefb185ef71edecf6a1f6aa\"), \"Name\" : \"Robert\" }\n", "text": "I am having a problem finding data in MongoDB. I am trying to get the data from last inserted to first inserted. If you know an easy way I can get the data in that order, please help me. Below is the MongoDB data; I want to show this data on my website from last inserted to first inserted. I hope I will get my answer. Thank you.", "username": "Tusar_N_A" }, { "code": "db.collection.find().sort({_id:-1}) \n", "text": "You can simply sort on the _id field. This will sort from newest to oldest document.", "username": "tapiocaPENGUIN" } ]
How to get data from last inserted to first inserted in MongoDB
2022-05-24T19:50:50.502Z
How to get data from last inserted to first inserted in MongoDB
4,759
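
This works because a default ObjectId embeds its creation timestamp, so sorting on _id descending is effectively newest-first; adding a limit keeps the page size bounded. A sketch in mongosh:

    db.collection.find().sort({ _id: -1 }).limit(10)
    ObjectId("5cefb17eef71edecf6a1f6a8").getTimestamp()  // shows the embedded creation time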
https://www.mongodb.com/…_2_1024x576.jpeg
[ "punjab-mug" ]
[ { "code": "Software Engineer, MongoDBSoftware Engineer, MongoDBPunjab MongoDB User Group, Leader", "text": "\nMUG-Punjab1887×1063 227 KB\nPunjab, India MongoDB User Group is excited to launch and announce their first meetup in collaboration with Guru Nanak Dev University, Amritsar.The session will help you ramp up your knowledge of NoSQL Databases and MongoDB. You will also learn how you can set up your free MongoDB Atlas cluster and query a sample database. At the end, we will be telling you about our exciting MongoDB World Hackathon '22 and all the prizes and swag you can win! Trivia - We will have a quick trivia at the end and winners will take home some exciting MongoDB Swag.Event Type: Online\n Join here: Video Conferencing URLTo RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.\nimage738×738 75 KB\nSoftware Engineer, MongoDB–\nSoftware Engineer, MongoDBPunjab MongoDB User Group, LeaderJoin the Punjab Group to stay updated with upcoming meetups and discussions.", "username": "Tarun_Gaur" }, { "code": "", "text": "Hello All,\nGentle Reminder: The event begins in 40 mins from now.Here’s the zoom link to join the event: Launch Meeting - ZoomZoom is the leader in modern enterprise video communications, with an easy, reliable cloud platform for video and audio conferencing, chat, and webinars across mobile, desktop, and room systems. Zoom Rooms is the original software-based conference...", "username": "Harshit" }, { "code": "", "text": "We are about to start with Trivia Game.\nJoin in if you are still around to win MongoDB Swag:Join a game of kahoot here. Kahoot! is a free game-based learning platform that makes it fun to learn – any subject, in any language, on any device, for all ages!", "username": "Harshit" }, { "code": "", "text": "Here’s the correct link: Kahoot!", "username": "Harshit" }, { "code": "", "text": "Hey, Thank you guys for the wonderful event. Just loved the Demo part. I won in some contests, do let me know what’s the procedure for sways at [email protected]. Thanks a lot again.", "username": "Akhil_Aggarwal" }, { "code": "", "text": "Hello Punjab MUG team.", "username": "BabbarOP_N_A" }, { "code": "", "text": "Hello Punjab MUG team. Thank you for this amazing session. The session was great and we learnt a lot of new things with demo, practice and trivia. I was the winner of Kahoot trivia game so do let me know about swags → [email protected]. Looking forward to more such sessions. ", "username": "BabbarOP_N_A" }, { "code": "", "text": "hey everyone ,first of all thank you guys for such a wonderful and lovely session , you guys kept interacting and made this event so much fun and filled with learnings , I’m currently working on a project using mongodb , so i’ll ping you guys for any help\nregards Nitish", "username": "tung_singh_469" }, { "code": "", "text": "Hi , this is nitish , i almost forgot about the swags and currently im working on web app using mongodb and then i realised i’ve a mongodb shirt coming my way, this is my email [email protected] , let me know about swags , and looking forward for such sessions .\nRegards nitish", "username": "tung_singh_469" }, { "code": "", "text": "Hey Nitish,\nGreat to know about your web app. 
We would love it if you would want to share more about it with the community in the upcoming MUG event or forums here.We will reach out to winners this week ", "username": "Harshit" }, { "code": "", "text": "Hey, Actually I have not received any mail regarding swags. Have you sent the mails?", "username": "Akhil_Aggarwal" }, { "code": "", "text": "Hey Akhil,\nYou should have received the email now ", "username": "Harshit" }, { "code": "", "text": "Yes I have received. Thanks a lot", "username": "Akhil_Aggarwal" } ]
Punjab MUG: Introduction to NoSQL Databases and MongoDB - May 14, 2022
2022-05-06T10:46:10.602Z
Punjab MUG: Introduction to NoSQL Databases and MongoDB - May 14, 2022
5,545
null
[]
[ { "code": "", "text": "My M10 cluster auto scaled the disk space and has been unavailable for 90 minutes now. What is the expected down time for events like this?", "username": "Chris_Norris" }, { "code": "", "text": "The atlas cluster came back online 2.5 hours later. I’ve disabled auto-scaling disk space because it causes unacceptable downtime.I traced the increased disk usage that triggered this event to the continuous backups being overwhelmed by database updates.I’d like to hear others’ experience with autoscaling disk space. I cannot see how this is a useful feature as it is now.", "username": "Chris_Norris" }, { "code": "", "text": "Hi Chris,My name is Lori and I’m a product manager at MongoDB on the Atlas team. Thanks for raising this issue - 2.5 hour downtime due to autoscaling is completely unacceptable and we’re sorry you had this experience.If you’re able, could you please send me an email with your organization ID so I can further investigate what happened here? My email is [email protected] you!Lori", "username": "Lori_Berenberg" } ]
Atlas unavailable during autoscale?
2022-05-14T04:30:22.390Z
Atlas unavailable during autoscale?
2,309
null
[ "dot-net", "connecting" ]
[ { "code": "System.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : \"2\", ConnectionMode : \"ReplicaSet\", Type : \"ReplicaSet\", State : \"Disconnected\", Servers : [{ ServerId: \"{ ClusterId : 2, EndPoint : \"Unspecified/xyz:27017\" }\", EndPoint: \"Unspecified/xyz:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. public void connection()\n {\n string template = \"mongodb://{0}:{1}@{2}/admin?replicaSet=rs0&readPreference={3}&retryWrites=false\";\n string username = \"unknown\";\n string password = \"unknown\";\n string readPreference = \"secondaryPreferred\";\n string clusterEndpoint = \"unknown\";\n string connectionString = String.Format(template, username, password, clusterEndpoint, readPreference);\n\n string pathToCAFile = @\"E:\\path\\rds-combined-ca-bundle.pem\";\n\n X509Store localTrustStore = new X509Store(StoreName.Root);\n X509Certificate2Collection certificateCollection = new X509Certificate2Collection();\n certificateCollection.Import(pathToCAFile);\n try\n {\n localTrustStore.Open(OpenFlags.ReadWrite);\n localTrustStore.AddRange(certificateCollection);\n }\n catch (Exception ex)\n {\n Console.WriteLine(\"Root certificate import failed: \" + ex.Message);\n throw;\n }\n finally\n {\n localTrustStore.Close();\n }\n\n var settings = MongoClientSettings.FromUrl(new MongoUrl(connectionString));\n var client = new MongoClient(settings);\n\n var database = client.GetDatabase(\"admin\");\n var collection = database.GetCollection<BsonDocument>(\"samplecollection\");\n var docToInsert = new BsonDocument { { \"pi\", 3.14159 } };\n collection.InsertOne(docToInsert);\n }\n", "text": "I am running a C# script to perform insert operations to a cluster. Whenever I try to perform a insert operation to the database, I get an error starting with. The cluster is a part of AWS DocumentDB.A timeout occured after 30000ms selecting a server using CompositeServerSelectorSystem.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : \"2\", ConnectionMode : \"ReplicaSet\", Type : \"ReplicaSet\", State : \"Disconnected\", Servers : [{ ServerId: \"{ ClusterId : 2, EndPoint : \"Unspecified/xyz:27017\" }\", EndPoint: \"Unspecified/xyz:27017\", ReasonChanged: \"Heartbeat\", State: \"Disconnected\", ServerVersion: , TopologyVersion: , Type: \"Unknown\", HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.Hiding the credentials as it is confidential. But please help me know how can I resolve the issue?", "username": "Sawee_Rawal" }, { "code": "", "text": "Hi, @Sawee_Rawal,Welcome to the MongoDB Community Forums. A timeout exception such as this is indicative that the driver is unable to connect to the cluster. AWS DocumentDB is not a MongoDB product. 
I suggest reaching out to AWS DocumentDB support on how to troubleshoot this issue further.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to connect to AWS DocumentDB got "A timeout occurred after 30000ms" using C# MongoClient
2022-05-24T10:36:55.883Z
Unable to connect to AWS DocumentDB got “A timeout occurred after 30000ms” using C# MongoClient
4,788
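
Since the C# error is a generic server-selection timeout, a quick connectivity probe from outside the application can separate network/TLS problems from driver code before involving AWS support. A sketch using the Node.js driver; the endpoint, credentials and CA file name are placeholders based on the thread:

    const { MongoClient } = require("mongodb");

    const uri = "mongodb://user:pass@cluster-endpoint:27017/" +
                "?tls=true&tlsCAFile=rds-combined-ca-bundle.pem&replicaSet=rs0&retryWrites=false";

    async function probe() {
      const client = new MongoClient(uri, { serverSelectionTimeoutMS: 5000 });
      try {
        await client.connect();                         // fails fast if host/TLS is unreachable
        await client.db("admin").command({ ping: 1 });
        console.log("server reachable");
      } finally {
        await client.close();
      }
    }
    probe().catch(console.error);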
null
[ "data-modeling", "compass", "atlas-cluster" ]
[ { "code": "", "text": "I am a beginner in MongoDB and learning MongoDB Atlas. How can I fix it? Looking for kind help. Thanks in advance.", "username": "GOURAV_SINGH2" }, { "code": "", "text": "cluster0.9dmds.mongodb.net\nThis is not a valid cluster.", "username": "steevej" }, { "code": "", "text": "Ensure that you whitelist your IP address under Network Access → Add IP Address. The simple reason is that MongoDB Atlas only allows connections from trusted IP addresses.", "username": "Abdulhakeem_Ibrahim" }, { "code": "", "text": "This is system generated; how can I fix it?", "username": "GOURAV_SINGH2" }, { "code": "", "text": "I tried adding my current IP address under Network Access, but I am still facing the same issues. Another way to whitelist an IP address is to do it under the Security tab, but the Security tab is not available for the free community version. Looking for kind help.", "username": "GOURAV_SINGH2" }, { "code": "", "text": "You must have a typo in the name. There is no DNS entry for the name cluster0.9dmds.mongodb.net.", "username": "steevej" }, { "code": "", "text": "Thank you sir, but I generated another URL, mongodb+srv://gouravsingh:[email protected]/test,\nand am still facing the same issues.", "username": "GOURAV_SINGH2" }, { "code": "id 36076\nopcode QUERY\nrcode NOERROR\nflags QR RD RA\n;QUESTION\ncluster0.3iv7ree.mongodb.net. IN ANY\n;ANSWER\ncluster0.3iv7ree.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-122qwu-shard-0\"\ncluster0.3iv7ree.mongodb.net. 60 IN SRV 0 0 27017 ac-noap5ca-shard-00-00.3iv7ree.mongodb.net.\ncluster0.3iv7ree.mongodb.net. 60 IN SRV 0 0 27017 ac-noap5ca-shard-00-01.3iv7ree.mongodb.net.\ncluster0.3iv7ree.mongodb.net. 60 IN SRV 0 0 27017 ac-noap5ca-shard-00-02.3iv7ree.mongodb.net.\n;AUTHORITY\n;ADDITIONAL\n", "text": "This one is correct.\nNow allow network access from everywhere.", "username": "steevej" }, { "code": "", "text": "Yes sir, I allowed access from anywhere.", "username": "GOURAV_SINGH2" }, { "code": "", "text": "Could you please check whether this works for you? It is still not working for me. I can connect to the local server, which works well, but the cloud isn't working.", "username": "GOURAV_SINGH2" }, { "code": "", "text": "It works fine now.\nMaybe your VPN or ISP stops you from getting through. What do you get when you try\nhttp://portquiz.net:27017/?", "username": "steevej" }, { "code": "", "text": "This is the error I am consistently getting, sir.", "username": "GOURAV_SINGH2" }, { "code": "", "text": "Your ISP, VPN or firewall is blocking you.", "username": "steevej" }, { "code": "", "text": "Yes sir, maybe. When I clicked your link, it said the site is not reachable.", "username": "GOURAV_SINGH2" }, { "code": "", "text": "Thank you so much sir for the kind help. I will try to fix the ISP, VPN or firewall, though I am not sure how to do it. Thanks a lot.", "username": "GOURAV_SINGH2" }, { "code": "", "text": "For now, change the password you shared.", "username": "steevej" }, { "code": "", "text": "Sure sir, I will change it.", "username": "GOURAV_SINGH2" } ]
Error while connecting MongoDB ATLAS TO MONGODB COMPASS
2022-05-24T10:23:14.141Z
Error while connecting MongoDB Atlas to MongoDB Compass
4,176
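
steevej's diagnosis can be reproduced programmatically: a valid Atlas mongodb+srv hostname must resolve as an SRV record named _mongodb._tcp.<host>. A sketch in Node.js, using the cluster name from the thread:

    const dns = require("dns");

    dns.resolveSrv("_mongodb._tcp.cluster0.3iv7ree.mongodb.net", (err, records) => {
      if (err) return console.error(err.code);  // ENOTFOUND would mean a mistyped cluster address
      records.forEach(r => console.log(`${r.name}:${r.port}`));
    });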
null
[ "node-js", "replication", "mongoose-odm", "atlas-cluster" ]
[ { "code": "patre@LAPTOP-QGT1UHO7 MINGW64 /c/Programování/React/Projects/elektronickaEvidenceAstronautu (master)\n$ node server\nServer 5000!\nC:\\Programování\\React\\Projects\\elektronickaEvidenceAstronautu\\node_modules\\mongoose\\lib\\helpers\\promiseOrCallback.js:20\n throw error;\n ^\n\nError: Connect error to MongoDB\n at C:\\Programování\\React\\Projects\\elektronickaEvidenceAstronautu\\database\\connect.js:12:23\n at C:\\Programování\\React\\Projects\\elektronickaEvidenceAstronautu\\node_modules\\mongoose\\lib\\helpers\\promiseOrCallback.js:17:1\n1\n at C:\\Programování\\React\\Projects\\elektronickaEvidenceAstronautu\\node_modules\\mongoose\\lib\\index.js:344:16\n at C:\\Programování\\React\\Projects\\elektronickaEvidenceAstronautu\\node_modules\\mongoose\\lib\\connection.js:825:14\n at processTicksAndRejections (node:internal/process/task_queues:96:5)\n\nconst Mongoose = require(\"mongoose\");\nconst URL = \"mongodb://******:******@cluster0-shard-00-00.mq3ik.mongodb.net:27017,cluster0-shard-00-01.mq3ik.mongodb.net:27017,cluster0-shard-00-02.mq3ik.mongodb.net:27017/?ssl=true&replicaSet=atlas-iiut2k-shard-0&authSource=admin&retryWrites=true&w=majority\";\n\nclass dbConnect {\n connect() {\n Mongoose.connect(URL,{\n useNewUrlParser: true,\n useUnifiedTopology: true,\n useFindAndModify: false,\n useCreateIndex: true\n },(err) => {\n if(err) throw new Error(\"Connect error to MongoDB\");\n console.log(\"MongoDB connected!\");\n });\n }\n}\n\nmodule.exports = dbConnect;\n\n", "text": "Hello, this is the first time I am working with MongoDB and I am struggling to connect it to my Node.js project.\nCan someone please help with this error?\nTerminal and code are above.\nThank you for any advice.", "username": "Patrik_Tomek" }, { "code": "", "text": "It is failing at the db connect stage.\nCan you connect to your db via the shell using the same connect string?\nWe need the exact error. From your code it appears to be a generic error message.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Today I had the exact same code and problem. I tried deleting some options and it started working fine. I deleted:", "username": "juan99_N_A" }, { "code": "", "text": "I figured out that recent versions of mongoose no longer support those options. If you still want to use them, just change the mongoose version to 5.0.12.\nMongoParseError: options usecreateindex, usefindandmodify are not supported", "username": "juan99_N_A" }, { "code": "", "text": "It works, thank you!", "username": "Patrik_Tomek" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Node.js connection error
2022-05-18T16:43:48.830Z
Node.js connection error
9,280
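
For readers on current Mongoose rather than the 5.0.12 downgrade: Mongoose 6 removed useCreateIndex and useFindAndModify and made the other two flags the default, so the options object can simply be dropped. A minimal sketch reusing the thread's URL variable:

    const Mongoose = require("mongoose");

    Mongoose.connect(URL)   // no legacy options needed on Mongoose 6+
      .then(() => console.log("MongoDB connected!"))
      .catch((err) => console.error("Connect error to MongoDB", err));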
null
[ "aggregation" ]
[ { "code": "{\n\"_id\":\"uXDyX3Mwqx3mQgsRP\",\n\"projectNumber\":\"ABC-123\",\n\"extraInfo\":\"info\"\n}\n{\n\"_id\":\"Cfp2NpwoJFAo22yY3\",\n\"number\":\"ABC-123\",\"name\":\"Central Metrics\",\n\"location\":{\"type\":\"Point\",\"coordinates\":[51.206764,7.030342]}\n}\n[\n{\n\"_id\":\"uXDyX3Mwqx3mQgsRP\",\n\"projectNumber\":\"ABC-123\",\n\"extraInfo\":\"info\",\n\"location\":{\"type\":\"Point\",\"coordinates\":[51.206764,7.030342]}\n}\n]\n{\n'from': 'projectsPCT',\n// 'localField': 'number',//can't use with pipeline\n// 'foreignField': 'projectNumber',//can't use with pipeline\n'let': {\n 'projectNumber': '$projectNumber'\n},\n'pipeline': [\n {\n '$geoNear': {\n 'near': {\n 'type': 'Point', \n 'coordinates': [\n 51.206764, 7.030342\n ]\n }, \n 'distanceField': 'distance', \n 'maxDistance': 70000, \n 'spherical': true,\n //'query': {\"number\": \"$projectNumber\"} //<--this doesn't work for some reasons?\n 'query': {\"number\": \"ABC-123\"} //<- hardcoding number works though\n }\n },\n {\n $set: {\n projectNumberTest: '$$projectNumber'\n }\n },\n],\n'as': 'projectsPCT',\n}\n", "text": "Hi,Trying to fetch “requests” with coordinates that come from “projectsPCT” collection. Coordinates are used as filters and collection fields that connect documents are: requests.projectNumber === projectsPCT.numberData samples:\nRequests:ProjectsPCT:Expected outcome are filtered requests based on location:My initial aggregation attempt:MongoDB server version: 4.4.4", "username": "Express_Me" }, { "code": "", "text": "I would try with $$projectNumber like you did inside the $set: to access the variables defined in the let: parameter.", "username": "steevej" }, { "code": "", "text": "with $$projectNumber or $projectNumber i receive empty results in returned parameter projectsPCT,\nwhile with hardcoded ABC-123 i do receive an array in parameter projectsPCT containing located projectsPCT.", "username": "Express_Me" }, { "code": "'query': { \"$expr\" : { \"$eq\" : [ \"$number\" , \"$$projectNumber\"] } }\n", "text": "Next thing to try is:", "username": "steevej" } ]
($geoNear + query with $$variable) Filtering collection based on location data from another collection
2022-05-19T11:42:46.032Z
($geoNear + query with $$variable) Filtering collection based on location data from another collection
1,814
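
For context on why the hardcoded value worked while "$$projectNumber" did not: the query option takes a plain match document, in which "$$projectNumber" is just a string literal; wrapping the comparison in $expr makes it an evaluated aggregation expression. Here is steevej's last suggestion assembled into the full stage, untested in the thread; note that some server versions reject $expr inside $geoNear's query, in which case a separate $match stage after $geoNear does the same filtering:

    {
      $lookup: {
        from: "projectsPCT",
        let: { projectNumber: "$projectNumber" },
        pipeline: [
          { $geoNear: {
              near: { type: "Point", coordinates: [ 51.206764, 7.030342 ] },
              distanceField: "distance",
              maxDistance: 70000,
              spherical: true,
              query: { $expr: { $eq: [ "$number", "$$projectNumber" ] } }
          } }
        ],
        as: "projectsPCT"
      }
    }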
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "So don’t leave it 'till the last minute - make sure to get your submissions in early if you can as Friday 27th is closing in on us!.Remember - all you need to submit is -and all the details can be found HEREAll eligible submissions will receive prizes, and the top judged entries are in the running for some superb tech gear and much more!Don’t delay - submit soon!!", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
It's Submissions week! It will not be extended!
2022-05-24T12:26:30.469Z
It’s Submissions week! It will not be extended!
2,715
null
[ "dot-net", "android", "xamarin" ]
[ { "code": " public IEnumerable<Test> List { get; }\n Realm realm;\n public MainPageViewModel()\n {\n realm = Realm.GetInstance();\n\n\n Test t1 = new Test(\"1\");\n Test t2 = new Test(\"2\");\n Test t3 = new Test(\"3\");\n Test t4 = new Test(\"4\");\n realm.Write(() =>\n {\n realm.RemoveAll<Test>();\n realm.Add(t1);\n realm.Add(t2);\n realm.Add(t3);\n realm.Add(t4);\n });\n List = realm.All<Test>();\n }\n\n internal void Dispose()\n {\n realm.Dispose();\n }\n\n [ICommand]\n public void Add()\n {\n Test t5 = new Test(\"5\");\n try\n {\n realm.Write(() =>\n {\n realm.Add(t5);\n\n });\n }\n catch (Exception ex)\n {\n Debug.WriteLine(ex.Message);\n\n }\n Debug.WriteLine(realm.All<Test>().ToList().Count.ToString());\n Debug.WriteLine(List.ToList().Count.ToString());\n\n }\n}\n }\n public Test(string name):this()\n {\n Name = name;\n }\n public string Id { get; set; } = Guid.NewGuid().ToString();\n public string Name { get; set; }\n}\n", "text": "Hello folks,I am doing my first steps with Realm .Net SDK and found a probleme where I don’t see how to solve it:I was examining howto use Realm .Net SDK in Xamarin and automatically update the view using the MVVM pattern.While it is working with Android and iOS, it is not working on UWP.Can anyone leave a comment on my example code what I am not implementing correctly or not seeing?Kind regarsThe Example Code starting with the View:\n\n\n<CollectionView.ItemTemplate>\n\n\n\n\n\n</CollectionView.ItemTemplate>\n\n\n\nthe ViewModel:using CommunityToolkit.Mvvm.ComponentModel;\nusing System;\nusing System.Collections.Generic;\nusing Realms;\nusing CommunityToolkit.Mvvm.Input;\nusing System.Diagnostics;\nusing System.Linq;namespace RealmLiveUpdateTest\n{\npublic partial class MainPageViewModel : ObservableObject\n{}the model:using System.Text;namespace RealmLiveUpdateTest\n{\npublic class Test : RealmObject\n{\npublic Test()\n{}", "username": "UltimateWidder" }, { "code": "CollectionViewcsharpxml//```csharp\n var i = 5;\n//```\n", "text": "Hi @UltimateWidder,Your view seems to lack code. Could you show us the whole view? We need to see the source used for the CollectionView and how you bind to the command.\nFor formatting the code in a more readable manner, you can use 3 backticks to open a multiline code section and again 3 backticks to close the multiline code section. You can also specify the syntax highlighting by using the csharp tag or the xml tag right after the opening backticks, then new line. 
It’d look like the following but without the comment slashes", "username": "Andrea_Catalini" }, { "code": "```xaml\n <?xml version=\"1.0\" encoding=\"utf-8\" ?>\n<ContentPage\n x:Class=\"RealmLiveUpdateTest.MainPage\"\n xmlns=\"http://xamarin.com/schemas/2014/forms\"\n xmlns:x=\"http://schemas.microsoft.com/winfx/2009/xaml\">\n <StackLayout>\n <CollectionView HeightRequest=\"300\" ItemsSource=\"{Binding List}\">\n <CollectionView.ItemTemplate>\n <DataTemplate>\n <StackLayout>\n <Label\n BackgroundColor=\"Red\"\n Text=\"{Binding Name}\"\n TextColor=\"White\" />\n </StackLayout>\n </DataTemplate>\n </CollectionView.ItemTemplate>\n </CollectionView>\n <Button Command=\"{Binding AddCommand}\" Text=\"Add\" />\n </StackLayout>\n</ContentPage>\n\n\n using Xamarin.Forms;\n\nnamespace RealmLiveUpdateTest\n{\n public partial class MainPage : ContentPage\n {\n MainPageViewModel context;\n public MainPage()\n {\n InitializeComponent();\n BindingContext = new MainPageViewModel();\n context = BindingContext as MainPageViewModel;\n }\n protected override void OnDisappearing()\n {\n base.OnDisappearing();\n context.Dispose();\n }\n }\n}\n\nusing CommunityToolkit.Mvvm.ComponentModel;\nusing System;\nusing System.Collections.Generic;\nusing Realms;\nusing CommunityToolkit.Mvvm.Input;\nusing System.Diagnostics;\nusing System.Linq;\n\nnamespace RealmLiveUpdateTest\n{\n public partial class MainPageViewModel : ObservableObject\n {\n \n public IEnumerable<Test> List { get; }\n Realm realm;\n public MainPageViewModel()\n {\n realm = Realm.GetInstance();\n\n\n Test t1 = new Test(\"1\");\n Test t2 = new Test(\"2\");\n Test t3 = new Test(\"3\");\n Test t4 = new Test(\"4\");\n realm.Write(() =>\n {\n realm.RemoveAll<Test>();\n realm.Add(t1);\n realm.Add(t2);\n realm.Add(t3);\n realm.Add(t4);\n });\n List = realm.All<Test>();\n }\n\n internal void Dispose()\n {\n realm.Dispose();\n }\n\n [ICommand]\n public void Add()\n {\n Test t5 = new Test(\"5\");\n try\n {\n realm.Write(() =>\n {\n realm.Add(t5);\n\n });\n }\n catch (Exception ex)\n {\n Debug.WriteLine(ex.Message);\n\n }\n Debug.WriteLine(realm.All<Test>().ToList().Count.ToString());\n Debug.WriteLine(List.ToList().Count.ToString());\n\n }\n }\n}\n\nusing Realms;\nusing System;\n\nnamespace RealmLiveUpdateTest\n{\n public class Test : RealmObject\n {\n public Test()\n {\n\n }\n public Test(string name):this()\n {\n Name = name;\n }\n public string Id { get; set; } = Guid.NewGuid().ToString();\n public string Name { get; set; }\n }\n}\n\n", "text": "Hello @ Andrea_Catalini,thank you for the fast response. The formatting destroyed the copied code[…].\nHopefully this time it is shown completly:", "username": "UltimateWidder" }, { "code": "INotifyPropertyChange", "text": "The Binding of the command is functioniong well, the problem is the Binding of the ItemsSource of the CollectionView. While the List is binding correctly on Android and iOS, in UWP it is also bound, but the View is not updating when I add another Test object to the realm. 
It seems like INotifyPropertyChanged is not being invoked on UWP, but it is on Android and iOS. Based on the information in the documentation https://www.mongodb.com/docs/realm/sdk/dotnet/fundamentals/object-models-and-schemas/\nquoting \"if you bind a ListView to a live query, then the list will update automatically when the results of the query change; you do not need to implement the INotifyPropertyChange interface.\", it should also work on UWP, so what am I missing?", "username": "UltimateWidder" }, { "code": "", "text": "Thank you for re-posting the code. We're gonna take a look at this next week. We'll keep you updated.", "username": "Andrea_Catalini" }, { "code": "", "text": "Thank you for your support; looking forward to the solution. I also wish you a nice weekend.", "username": "UltimateWidder" }, { "code": "", "text": "Unfortunately I was sick the whole week. I hope to find time this week. But I can't make promises.", "username": "Andrea_Catalini" }, { "code": "", "text": "Hello Andrea,\nthank you for the update. I wish you a good recovery. ", "username": "UltimateWidder" }, { "code": "", "text": "I haven't forgotten about this. I'm just waiting to have the time to check this. I'll update you when I've tested the code.", "username": "Andrea_Catalini" }, { "code": "", "text": "Thank you Andrea for the information. Looking forward to the result ;).", "username": "UltimateWidder" }, { "code": "", "text": "Hi @UltimateWidder,\n\nSorry for the long wait. I have finally had time to take a look at the issue you reported.\nThere's nothing wrong with your code. I can indeed reproduce the problem that you are seeing. Unfortunately this is a known issue that hasn't yet had enough priority to be revisited.\nYou can read about the issue here. The summary is that this known issue is generated by a mix of 2 non-Realm bugs, one in UWP's ListView and another one in Xamarin.Forms Android.\n\nIf you really need this to work, a workaround would be to use a 3rd-party library that implements its own CollectionView that pleases UWP.\n\nI hope this helps you.", "username": "Andrea_Catalini" }, { "code": "", "text": "Hello Andrea,\nthank you very much for your response. Your response clarifies my thoughts. At least it shows me I have to change the component.\nIt would be nice if this information were also provided in the .NET SDK documentation, as it is important to know how Realm should be used with UWP.\nKind regards", "username": "UltimateWidder" }, { "code": "", "text": "Thank you for the feedback. Our hope is that the Android bug has finally been resolved and we can fix the UWP problem without breaking Android.\nYou could consider marking the issue as resolved if the help you received did what you expected.\nAndrea", "username": "Andrea_Catalini" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
View not Updating in UWP, but in Android and iOS
2022-04-21T10:42:37.781Z
View not Updating in UWP, but in Android and iOS
4,718
null
[ "crud" ]
[ { "code": "", "text": "Hi all,\nI’m having strange problem with updateOne, probably because I’m fresh on MongoDB and I’m doing some rookie error \nMy collection looks like this:{\n_id: ObjectId(“628b3de8b94528801b9c50d0”),\nname: ‘Piccolo’,\ntheatre: ‘Cinestar’,\nrows: 2,\ncolumns: 2,\nseats: [\n{ row: 0, col: 0, type: ‘X’ },\n{ row: 1, col: 0, type: ‘X’ },\n{ row: 0, col: 1, type: ‘X’ },\n{ row: 1, col: 1, type: ‘X’ }\n]\n}I want to be able to update property Type for every element inside seats array so I use this:db.auditoriums.updateOne({_id: ObjectId(“628b3de8b94528801b9c50d0”), “seats.row”: 1, “seats.col”: 0},{$set: {“seats.$.type”: “Z”}})and this works every time I change only row attribute. But if I change col attribute too it doesn’t match and doesn’t do any change.{\nacknowledged: true,\ninsertedId: null,\nmatchedCount: 1,\nmodifiedCount: 0,\nupsertedCount: 0\n}What I’m doing wrong?", "username": "Misko_Misic" }, { "code": "$elemMatch", "text": "Hello @Misko_Misic, welcome to the MongoDB community forum!You can find the solution in the topic of the manual, Update Embedded Documents Using Multiple Field Matches, linked below. When matching multiple fields in the array of sub-documents you need to use the $elemMatch operator.", "username": "Prasad_Saya" }, { "code": "", "text": "Hello @Prasad_Saya\nthank you for your answer, I solved my problem.cheers", "username": "Misko_Misic" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't update specific element of nested array inside collection
2022-05-23T09:21:55.268Z
Can’t update specific element of nested array inside collection
1,282
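
Prasad's linked fix applied to the thread's own query, as a sketch: $elemMatch forces row and col to match within the same array element, and the positional $ then updates exactly that element. Without it, "seats.row": 1 and "seats.col": 0 may be satisfied by different elements, which is why the original query matched but modified nothing.

    db.auditoriums.updateOne(
      { _id: ObjectId("628b3de8b94528801b9c50d0"),
        seats: { $elemMatch: { row: 1, col: 0 } } },  // both fields on the same element
      { $set: { "seats.$.type": "Z" } }
    )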
null
[ "mobile-bytes" ]
[ { "code": "", "text": "We spoke about error handling last week, so will continue on the same trend this week as the “Bad Changeset” error has been frequently asked by a lot of you and it sometimes gets tricky to troubleshoot why this is happening or how this can be fixed.This week, let’s learn some tips and tricks around the error, and some proactive steps you can take to prevent this error from happening.*Depending on the use-case and implementation of code in your application, sometimes the error can still happen, but hopefully with the information below you will have more understanding of the Sync process.What is a changeset?A changeset is a set of instructions that specify a change(s) to a Realm object(s) after a write operation. This along with some metadata is exchanged between a client(mobile) and the Sync Server to keep your data in Sync. Check Realm Sync Protocol section to learn more.What is a Bad Changeset?A changeset is termed as Bad when instructions from the client do not match the server (UPLOAD) that leads to Server-side error or when the instructions from the Server do not match the client (DOWNLOAD), that leads to a Bad Changeset on the client and hence the data cannot be merged and Syncing of the data stops for the client.Different Error Variations(but not limited to)What Causes a Bad Changeset and How to Resolve it?The reasons can vary every time. The Bad Changeset occurs when you make changes that are not permitted or do not update SDK to a newer version that has fixes.Schema InconsistenciesPartition IssuesContinuing to access a Synced Collection after it has been droppedUsing the old Realm SDK version that should have been updatedThe recommended way to resolve this is to prevent these inconsistencies in your code. Some helpful documentation links areDepending on the use case you may be required to Terminate and Re-Enable Sync to fix the state of the application. Or if this is only happening for a specific device, having that client perform a client reset may solveIf you cannot limit the cause to any of this in the list, then you may be experiencing server-side inconsistency. The recommendation is to open a post on the forum with complete details of the error log.I hope the provided information is helpful.Please feel free to share any information that came useful to you or any different methods you used to resolve the Bad Changeset situation.Happy Realming! ", "username": "henna.s" }, { "code": ".discardLocal", "text": "Hi Henna,Thanks for the post, this is helpful.I have a question regarding performing a client reset when a “Bad changeset (UPLOAD)” occurs.\nFirstly, note that I’m fixing all cases where this could happen, so it shouldn’t happen, in theory But better safe than sorry.\nI managed to create a simple example where the code provokes a bad upload. The realm still opens correctly upon app startup, but disconnects shortly after. Even though I pass the .discardLocal reset strategy to the realm config before the realm is opened, this error does not seem to trigger a client reset. Why so?I find myself unable to “catch” this error, and wipe the local changes to fix this state.Thanks!", "username": "Baptiste_Malaguti" }, { "code": "clientSessionError", "text": "G’Day @Baptiste_Malaguti,Glad to know that the post was helpful to you.“Bad changeset(UPLOAD)” happens when you are uploading data in an incorrect partition or there is a schema mismatch error. 
The check is done server-side and an error handler can be used to catch it on the client, it may come in as clientSessionErrorThis error does not trigger a client reset. This error can be corrected by uploading to the correct partition and/or fixing any schema mismatch errors.I hope this clarifies your doubts.Please feel free to ask if you have any more questions.Cheers, ", "username": "henna.s" } ]
Mobile Bytes #3: Lets Talk Bad Changeset Error
2022-02-02T09:05:44.999Z
Mobile Bytes #3: Lets Talk Bad Changeset Error
4,797
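
The error-handler advice above, sketched with the Realm JS SDK for partition-based Sync; the schema and partition names are placeholders, and this callback is where errors such as a Bad Changeset surface on the client:

    const config = {
      schema: [ TaskSchema ],                 // placeholder object model
      sync: {
        user: app.currentUser,
        partitionValue: "myPartition",
        error: (session, syncError) => {
          // a Bad Changeset arrives here rather than throwing in your write code
          console.error(`sync error: ${syncError.name}: ${syncError.message}`);
        },
      },
    };
    Realm.open(config).then((realm) => { /* use the synced realm */ });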
null
[ "aggregation" ]
[ { "code": " \"$out\": {\n \"s3\": {\n \"bucket\": \"testmdbbucket\",\n \"region\": \"eu-central-1\",\n \"filename\": \"secairbnb\",\n \"format\": {\n \"name\": \"csv\" \n }\n }\n }\n", "text": "Kindly, I need a way to push data from MongoDB to an S3 bucket in CSV format. The issue is that the data has a huge number of objects and arrays, so it is not practical to have a script take it field by field. And when the code is as above, it gives an error in S3. Any help with that will be really appreciated; thanks in advance.", "username": "yousef_osama" }, { "code": "errorModemaxFileSize", "text": "Hi @yousef_osama and welcome to the MongoDB Community!\nSharing the exact error you get might help.\nThere are a few other options that you can use to resolve your problem: https://www.mongodb.com/docs/datalake/reference/pipeline/out/#syntax\nerrorMode could be one solution, but if the size is the problem then probably maxFileSize can help?\nCheers,\nMaxime.", "username": "MaBeuLux88" } ]
Push collection to S3 in csv
2022-04-26T19:53:27.550Z
Push collection to S3 in csv
1,780
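
Since CSV is tabular, the root cause here is often the nested objects and arrays themselves: they have no natural CSV representation, so flattening or stringifying them in a stage before $out tends to resolve the error. A sketch that joins one assumed array-of-strings field (amenities is a hypothetical field name) into a single delimited string:

    { $project: {
        _id: 1,
        name: 1,
        amenities: {
          $reduce: {
            input: "$amenities",
            initialValue: "",
            in: { $concat: [
              "$$value",
              { $cond: [ { $eq: [ "$$value", "" ] }, "", ";" ] },  // separator after first item
              "$$this"
            ] }
          }
        }
    } }

The errorMode and maxFileSize options Maxime links are the right levers when the failure is instead about bad rows or output file size.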
null
[ "database-tools", "backup", "ops-manager" ]
[ { "code": "", "text": "Hi,\nI have a question regarding the best practice for obtaining backups from a replica set that is configured in Kubernetes.\nAt the moment there is no tool like Ops Manager implemented; therefore, the way to obtain them is through mongodump.\nIs this the most optimal approach? Or do they have to be obtained in some other way?", "username": "Bet_TS" }, { "code": "", "text": "Welcome to the community, @Bet_TS!!\nMongoDB has a variety of backup options, and you can use any of them to back up the data. Please see the MongoDB Backup Methods manual (https://www.mongodb.com/docs/manual/core/backups/) for further information.\nAdditionally, Kubernetes provides backup support with VolumeSnapshot, for which we recommend approaching the Kubernetes community for better clarity.\nThanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Backups from kubernetes
2022-05-04T16:09:00.814Z
Backups from kubernetes
2,826
null
[]
[ { "code": "", "text": "I’m new to MongoDB and Realm and after being excited and encouraged from completing the tutorial I’m now having several problems.Created a new database and after deleting the tutorial database I can’t remove the schema’s associated with the deleted database (“tracker”). Clicking on “Remove Configuration” just silently fails and does nothing – no error message or anything. It really should respond with some error info.Turned dev mode off and tried to update the schema for my new database, and it complains about the schemas for the tutorial that I can’t delete. Very strange as it’s different schemas than the one I’m editing and as mentioned above can’t edit them to correct the errors or delete them.In the Sync panel I can’t Pause or Terminate it. UI just endlessly displays the 3 dot loader animation and doesn’t update. Can’t pause or terminate. Pretty much out of ideas on how to get this working properly. Have tried everything and now this isn’t giving me much confidence in the product at all.Does anyone know of ways to correct things, from the CLI or otherwise? I don’t want to start over again etc. Thank you for reading and any tips appreciated!", "username": "Andrew_Hargreaves" }, { "code": "", "text": "I’m facing the exact same issue, used the todo’s boilerplate for swift. Wanted to re-name my atlas (realm) database (by creating a new one, and deleting the todo). Finally I got it erased, with now a broken sync. Can’t terminate the sync, nor restart atlas sync.I did discover that the sync is attached to the todo, yet I can’t fix itImprovements:Could be easier to rename / delete collections and databases.Notifications within atlas UI could be improved to alert about the stopped sync (you have to dig before you see it, yes you do get emails, but)Error handling with atlas, giving some sort of notification about the sync relationship when you delete the collection.Swift Boilerplate uses asyncOpen yet in documentation this is marked as legacy and flexible sync in preview", "username": "Jimmi_Andersen" }, { "code": "", "text": "Hi Jimmi_Andersen – If I remember correctly, as a workaround I think I went to the Deployment section and did a “re-deploy” of a previous deployment. Then I was able to stop Sync and clear the schema configurations etc. I was off and running after that. I don’t know what I did exactly that Realm didn’t like but I was ok after that experience.", "username": "Andrew_Hargreaves" }, { "code": "", "text": "SolutionSupport helped me fix the problem, a bug related to the sync. And deleted all references to old (renamed) collection, from syncs and other cloud realm apps (I had two referencing the todo schema in sync)", "username": "Jimmi_Andersen" } ]
Can't remove Schema's, can't terminate Sync
2022-02-05T21:29:32.330Z
Can’t remove Schemas, can’t terminate Sync
2,626
null
[ "storage" ]
[ { "code": "", "text": "Friends, I have been trying to find a good article or KB that helps us understand the number of bytes used to store each of the data types in MongoDB. For example, someone may store a monetary value as a string or a double data type, while someone else may store it as an integer with the decimal handled in the UI (for example, stored as 1199 but the application shows it as 11.99).\nBasically, the main ask is: in general, how do we calculate the storage of a document using various data types in MongoDB? The thought is with regard to storage and the performance related to it.\nAny guidance will be a great help.\nThanks,\nVB", "username": "Vikram_Bade" }, { "code": "$bsonSize", "text": "the $bsonSize operator should do the job:", "username": "Joe_Drumgoole" }, { "code": "", "text": "Thanks @Joe_Drumgoole, I will check. Hopefully, this should help to see how much storage is taken from a data type perspective.", "username": "Vikram_Bade" } ]
Data Storage and Impact on Performance
2022-05-22T10:42:37.362Z
Data Storage and Impact on Performance
2,097
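
A sketch of Joe's suggestion in use: $bsonSize (MongoDB 4.4+) reports a document's BSON size in bytes, so the same value stored as a string, double or integer can be compared directly. The collection name is a placeholder:

    db.products.aggregate([
      { $project: { _id: 1, bytes: { $bsonSize: "$$ROOT" } } },  // whole-document size
      { $sort: { bytes: -1 } },
      { $limit: 5 }
    ])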
null
[ "replication", "database-tools", "backup" ]
[ { "code": "mongodump --oplogrs0:PRIMARY> rs.conf()\n{\n\t\"_id\" : \"rs0\",\n\t\"version\" : 1,\n\t\"protocolVersion\" : NumberLong(1),\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"host\" : \"smartshape.io.test:27017\",\n\t\t\t\"arbiterOnly\" : false,\n\t\t\t\"buildIndexes\" : true,\n\t\t\t\"hidden\" : false,\n\t\t\t\"priority\" : 1,\n\t\t\t\"tags\" : {\n\t\t\t\t\n\t\t\t},\n\t\t\t\"slaveDelay\" : NumberLong(0),\n\t\t\t\"votes\" : 1\n\t\t}\n\t],\n\t\"settings\" : {\n\t\t\"chainingAllowed\" : true,\n\t\t\"heartbeatIntervalMillis\" : 2000,\n\t\t\"heartbeatTimeoutSecs\" : 10,\n\t\t\"electionTimeoutMillis\" : 10000,\n\t\t\"getLastErrorModes\" : {\n\t\t\t\n\t\t},\n\t\t\"getLastErrorDefaults\" : {\n\t\t\t\"w\" : 1,\n\t\t\t\"wtimeout\" : 0\n\t\t},\n\t\t\"replicaSetId\" : ObjectId(\"628b94350b78e951119f508c\")\n\t}\n}\nrs0:PRIMARY> rs.status()\n{\n\t\"set\" : \"rs0\",\n\t\"date\" : ISODate(\"2022-05-23T14:29:01.664Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(1),\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 0,\n\t\t\t\"name\" : \"smartshape.io.test:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 1541,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1653314744, 2),\n\t\t\t\t\"t\" : NumberLong(1)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2022-05-23T14:05:44Z\"),\n\t\t\t\"electionTime\" : Timestamp(1653314614, 1),\n\t\t\t\"electionDate\" : ISODate(\"2022-05-23T14:03:34Z\"),\n\t\t\t\"configVersion\" : 1,\n\t\t\t\"self\" : true\n\t\t}\n\t],\n\t\"ok\" : 1\n}\nroot@smartshape:~# mongod --version\ndb version v3.2.22\ngit version: 105acca0d443f9a47c1a5bd608fd7133840a58dd\nOpenSSL version: OpenSSL 1.0.2n 7 Dec 2017\nallocator: tcmalloc\nmodules: none\nbuild environment:\n distmod: ubuntu1404\n distarch: x86_64\n target_arch: x86_64\n", "text": "I want to turn on oplog in a standalone instance in order use mongodump --oplog on it however I fail to achieve that.I follow 2 ways without success:After trying, I have this configuration:Context:", "username": "Vincent_Herlemont" }, { "code": "", "text": "Not working means what error are you getting?\nmongodump with --oplog works only for full DB\nPlease show the error logs from your mongodump", "username": "Ramachandra_Tummala" }, { "code": "oplogoplogroot@smartshape:~# mongodump -v --oplog --out ./backup\n2022-05-23T09:25:47.804+0000\tgetting most recent oplog timestamp\n...\n2022-05-23T09:25:47.828+0000\twriting captured oplog to \n2022-05-23T09:25:47.828+0000\t\tdumped 0 oplog entries\n", "text": "My bad, I forgot the result.Here is the result after activating the oplog and update/insert some document in MongoDB collections, oplog are always empty:", "username": "Vincent_Herlemont" }, { "code": "", "text": "How big is your db?\nHow long it takes to dump?\nAny activity going while backup runs?\nDo some activity and see", "username": "Ramachandra_Tummala" } ]
Activate oplog on a standalone instance and to dump
2022-05-23T14:30:16.567Z
Activate oplog on a standalone instance and to dump
3,622
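
Two quick shell checks, as a sketch, that the single-node replica set is actually writing its oplog; running these before the dump shows whether the problem is the oplog itself or the mongodump invocation:

    const local = db.getSiblingDB("local");
    local.oplog.rs.find().sort({ $natural: -1 }).limit(3)  // newest oplog entries, if any
    rs.printReplicationInfo()                              // configured oplog size and time window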
null
[ "replication", "containers" ]
[ { "code": "", "text": "Hi,We have a replica of 3 nodes in kubernetes, however, a couple of days ago one of the nodes went down and so far we have not been able to get it up.The database log, the available FileSystem space and the status of the “pod” were reviewed, the latter returns the error “CrashLoopBackOff”In the database log, no error is observed that could have caused the node to crash, as for the FileSystem, it has enough space.We’ve been doing some research, however still haven’t found anything that might be of use, as far as the solution we’re considering is deleting the pod and turning it back on.This would be correct?\nIs there anything else we could check to determine the cause of the error?Regards.", "username": "Bet_TS" }, { "code": "kubectl describe pod <pod-name> -n <namespace>", "text": "Hi @Bet_TS\nWelcome to the community forum!!CrashLoopBackOff”There could be various reasons why this status may be observed for the pod failure.\nCan you please share the output for\nkubectl describe pod <pod-name> -n <namespace>Is the pod deployment a stateful set model o a deployment state model?as far as the solution we’re considering is deleting the pod and turning it back on.Does the pod immediately comes up after the pod has been deleted?This would be correctThis would not be a recommended as the pod should start without having to delete it explicitly and make it to restart.Let us know the describe output for the above kubernetes command and we can assist you better.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @Aasawari ,I share the output of the pod with status “CrashLoopBackOff”\nout_pod_describe.txt (3.1 KB)Is the pod deployment a stateful set model o a deployment state model?It is a deployment to stateful set modelDoes the pod immediately comes up after the pod has been deleted?yes.Thanks for the support\nRegards.", "username": "Bet_TS" }, { "code": "back-off restarting failed container", "text": "Hi @Bet_TSThank you for sharing the above document.back-off restarting failed container error occurs mostly because of many reasons which could be one among them:Apart from these, the sidecar image could also be a culprit.\nHowever, please share the mongod log files and any relevant docker/kubernetes log files for the failing pod for better understanding.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @Aasawari ,I share the log of the replica node that we have not been able to start\nmongo-2.log (22.1 KB)Regarding the docker/kubernetes logs, we are still reviewing it, however, we would like to know what are the logs that could be useful to you?The answer “Mounting volume problem” called our attention, if that were the cause, why are the other nodes not presenting the same error if it is a persistent volume?Regards", "username": "Bet_TS" }, { "code": "", "text": "Hi @Bet_TSCan you please share the logs in .txt format as I am not able to access the logs shared in the above format.Thanks\nAasawari", "username": "Aasawari" }, { "code": "2022-05-13T18:03:29.518+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2022-05-13T18:03:29.624+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/path 64-bit host=mongo-2\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] db version v4.2.7\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] git 
version: 51d9fe12b5d19720e72dcd7db0f2f17dd9a19212\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] allocator: tcmalloc\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] modules: none\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] build environment:\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] distmod: ubuntu1804\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] distarch: x86_64\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] target_arch: x86_64\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] 4096 MB of memory available to the process out of 31538 MB total system memory\n2022-05-13T18:03:29.625+0000 I CONTROL [initandlisten] options: { net: { bindIp: \"*\" }, replication: { replSet: \"db\" } }\n2022-05-13T18:03:29.634+0000 W STORAGE [initandlisten] Detected unclean shutdown - /path/mongod.lock is not empty.\n2022-05-13T18:03:29.636+0000 I STORAGE [initandlisten] Detected data files in /path created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.\n2022-05-13T18:03:29.636+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.\n2022-05-13T18:03:29.637+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1536M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],\n2022-05-13T18:03:42.173+0000 I STORAGE [initandlisten] WiredTiger message [1652465022:173115][1:0x7f9b74405b00], txn-recover: Recovering log 9444 through 9445\n2022-05-13T18:03:43.110+0000 I STORAGE [initandlisten] WiredTiger message [1652465023:110679][1:0x7f9b74405b00], txn-recover: Recovering log 9445 through 9445\n2022-05-13T18:03:44.047+0000 I STORAGE [initandlisten] WiredTiger message [1652465024:47729][1:0x7f9b74405b00], txn-recover: Main recovery loop: starting at 9444/256 to 9445/256\n2022-05-13T18:03:44.055+0000 I STORAGE [initandlisten] WiredTiger message [1652465024:55551][1:0x7f9b74405b00], txn-recover: Recovering log 9444 through 9445\n2022-05-13T18:03:44.192+0000 I STORAGE [initandlisten] WiredTiger message [1652465024:192749][1:0x7f9b74405b00], file:collection-10--8124160588626690814.wt, txn-recover: Recovering log 9445 through 9445\n2022-05-13T18:03:44.244+0000 I STORAGE [initandlisten] WiredTiger message [1652465024:244156][1:0x7f9b74405b00], file:collection-10--8124160588626690814.wt, txn-recover: Set global recovery timestamp: (1649415891, 1)\n2022-05-13T18:03:44.267+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. 
Ts: Timestamp(1649415891, 1)\n2022-05-13T18:03:44.288+0000 I STORAGE [initandlisten] Starting OplogTruncaterThread local.oplog.rs\n2022-05-13T18:03:44.288+0000 I STORAGE [initandlisten] The size storer reports that the oplog contains 6751366 records totaling to 6973228487 bytes\n2022-05-13T18:03:44.288+0000 I STORAGE [initandlisten] Sampling the oplog to determine where to place markers for truncation\n2022-05-13T18:03:44.295+0000 I STORAGE [initandlisten] Sampling from the oplog between Oct 8 23:00:37:1 and Apr 8 11:11:16:1 to determine where to place markers for truncation\n2022-05-13T18:03:44.295+0000 I STORAGE [initandlisten] Taking 260 samples and assuming that each section of oplog contains approximately 259295 records totaling to 267815917 bytes\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Oct 26 15:38:56:74\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Nov 2 16:25:53:3\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Nov 10 15:34:18:85\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Nov 30 15:21:50:38\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Nov 30 15:25:21:465\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Dec 19 08:36:04:6\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Dec 28 16:36:19:73\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Dec 31 18:08:42:11\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Dec 31 18:15:10:91\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Feb 11 20:19:52:68\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Feb 11 20:24:39:55\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Feb 24 22:44:37:2\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Mar 10 16:15:09:850\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Mar 10 16:15:26:2114\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Apr 22 03:15:26:1\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Apr 28 22:36:22:4517\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Apr 28 22:36:36:10\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime May 28 20:52:38:23836\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime May 28 20:53:00:84\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Jun 18 16:07:41:10544\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Jun 18 16:08:09:133\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Sep 2 17:12:37:12546\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Oct 13 09:54:27:2\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Oct 13 10:23:03:18\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Nov 30 20:50:24:1\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] Placing a marker at optime Apr 7 21:00:57:1\n2022-05-13T18:03:51.990+0000 I STORAGE [initandlisten] WiredTiger record store oplog processing took 
7701ms\n2022-05-13T18:03:52.009+0000 I STORAGE [initandlisten] Timestamp monitor starting\n2022-05-13T18:03:52.011+0000 I CONTROL [initandlisten] \n2022-05-13T18:03:52.011+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2022-05-13T18:03:52.011+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2022-05-13T18:03:52.011+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.\n2022-05-13T18:03:52.011+0000 I CONTROL [initandlisten] \n2022-05-13T18:03:52.019+0000 I CONTROL [initandlisten] \n2022-05-13T18:03:52.019+0000 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.\n2022-05-13T18:03:52.019+0000 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:\n2022-05-13T18:03:52.019+0000 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]\n2022-05-13T18:03:52.020+0000 I CONTROL [initandlisten] \n2022-05-13T18:03:52.020+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.\n2022-05-13T18:03:52.020+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'\n2022-05-13T18:03:52.020+0000 I CONTROL [initandlisten] \n2022-05-13T18:03:52.020+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.\n2022-05-13T18:03:52.020+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'\n2022-05-13T18:03:52.020+0000 I CONTROL [initandlisten] \n2022-05-13T18:03:52.093+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>\n2022-05-13T18:03:52.116+0000 I STORAGE [initandlisten] Flow Control is enabled on this deployment.\n2022-05-13T18:03:52.126+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>\n2022-05-13T18:03:52.127+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>\n2022-05-13T18:03:52.127+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>\n2022-05-13T18:03:52.128+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/path/diagnostic.data'\n2022-05-13T18:03:52.148+0000 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version: <unsharded>\n2022-05-13T18:03:52.148+0000 I SHARDING [initandlisten] Marking collection local.replset.election as collection version: <unsharded>\n2022-05-13T18:03:52.160+0000 I REPL [initandlisten] Rollback ID is 1\n2022-05-13T18:03:52.167+0000 I REPL [initandlisten] Recovering from stable timestamp: Timestamp(1649415891, 1) (top of oplog: { ts: Timestamp(1649416276, 1), t: 30 }, appliedThrough: { ts: Timestamp(0, 0), t: -1 }, TruncateAfter: Timestamp(0, 0))\n2022-05-13T18:03:52.167+0000 I REPL [initandlisten] Starting recovery oplog application at the stable timestamp: Timestamp(1649415891, 1)\n2022-05-13T18:03:52.167+0000 I REPL [initandlisten] Replaying stored operations from Timestamp(1649415891, 1) (inclusive) to Timestamp(1649416276, 1) (inclusive).\n2022-05-13T18:03:52.167+0000 I SHARDING [initandlisten] Marking collection local.oplog.rs as collection version: <unsharded>\n2022-05-13T18:03:52.175+0000 I REPL [initandlisten] Applied 3 operations in 1 batches. 
Last operation applied with optime: { ts: Timestamp(1649416276, 1), t: 30 }\n2022-05-13T18:03:52.176+0000 I SHARDING [initandlisten] Marking collection config.transactions as collection version: <unsharded>\n2022-05-13T18:03:52.182+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.system.sessions as collection version: <unsharded>\n2022-05-13T18:03:52.182+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock\n2022-05-13T18:03:52.182+0000 I NETWORK [listener] Listening on 0.0.0.0\n2022-05-13T18:03:52.182+0000 I NETWORK [listener] waiting for connections on port 27017\n2022-05-13T18:03:52.183+0000 I CONTROL [LogicalSessionCacheRefresh] Sessions collection is not set up; waiting until next sessions refresh interval: Replication has not yet been configured\n2022-05-13T18:03:52.234+0000 I CONTROL [LogicalSessionCacheReap] Failed to reap transaction table: NotYetInitialized: Replication has not yet been configured\n2022-05-13T18:03:52.266+0000 I REPL [replexec-0] New replica set config in use: { _id: \"db\", version: 178016725, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: \"mongo-1:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: \"mongo-0:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: \"mongo-2:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5f7f9a0a07508370da2f5327') } }\n2022-05-13T18:03:52.266+0000 I REPL [replexec-0] This node is mongo-2:27017 in the config\n2022-05-13T18:03:52.266+0000 I REPL [replexec-0] transition to STARTUP2 from STARTUP\n2022-05-13T18:03:52.266+0000 I REPL [replexec-0] Starting replication storage threads\n2022-05-13T18:03:52.286+0000 I CONNPOOL [Replication] Connecting to mongo-1:27017\n2022-05-13T18:03:52.286+0000 I CONNPOOL [Replication] Connecting to mongo-0:27017\n2022-05-13T18:03:52.286+0000 I REPL [replexec-0] transition to RECOVERING from STARTUP2\n2022-05-13T18:03:52.286+0000 I REPL [replexec-0] Starting replication fetcher thread\n2022-05-13T18:03:52.287+0000 I REPL [replexec-0] Starting replication applier thread\n2022-05-13T18:03:52.287+0000 I REPL [replexec-0] Starting replication reporter thread\n2022-05-13T18:03:52.287+0000 I REPL [rsSync-0] Starting oplog application\n2022-05-13T18:03:52.287+0000 I REPL [rsBackgroundSync] waiting for 4 pings from other members before syncing\n2022-05-13T18:03:52.288+0000 I REPL [rsSync-0] transition to SECONDARY from RECOVERING\n2022-05-13T18:03:52.288+0000 I REPL [rsSync-0] Resetting sync source to empty, which was :27017\n2022-05-13T18:03:52.301+0000 I REPL [replexec-0] Cannot find self in new replica set configuration; I must be removed; NodeNotFound: No host described in new configuration 178016788 for replica set db maps to this node\n2022-05-13T18:03:52.302+0000 I REPL [replexec-0] New replica set config in use: { _id: \"db\", version: 178016788, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: \"mongo-1:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, 
slaveDelay: 0, votes: 1 }, { _id: 1, host: \"mongo-0:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5f7f9a0a07508370da2f5327') } }\n2022-05-13T18:03:52.302+0000 I REPL [replexec-0] This node is not a member of the config\n2022-05-13T18:03:52.302+0000 I REPL [replexec-0] transition to REMOVED from SECONDARY\n2022-05-13T18:03:52.463+0000 I NETWORK [listener] connection accepted from 192.168.247.168:33884 #7 (1 connection now open)\n2022-05-13T18:03:52.464+0000 I NETWORK [conn7] received client metadata from 192.168.247.168:33884 conn7: { driver: { name: \"NetworkInterfaceTL\", version: \"4.2.7\" }, os: { type: \"Linux\", name: \"Ubuntu\", architecture: \"x86_64\", version: \"18.04\" } }\n2022-05-13T18:03:52.569+0000 I NETWORK [listener] connection accepted from 192.168.247.168:33886 #8 (2 connections now open)\n2022-05-13T18:03:52.570+0000 I NETWORK [conn8] received client metadata from 192.168.247.168:33886 conn8: { driver: { name: \"NetworkInterfaceTL\", version: \"4.2.7\" }, os: { type: \"Linux\", name: \"Ubuntu\", architecture: \"x86_64\", version: \"18.04\" } }\n2022-05-13T18:03:52.608+0000 I NETWORK [listener] connection accepted from 192.168.247.169:39288 #9 (3 connections now open)\n2022-05-13T18:03:52.609+0000 I NETWORK [conn9] received client metadata from 192.168.247.169:39288 conn9: { driver: { name: \"NetworkInterfaceTL\", version: \"4.2.7\" }, os: { type: \"Linux\", name: \"Ubuntu\", architecture: \"x86_64\", version: \"18.04\" } }\n2022-05-13T18:03:54.223+0000 I NETWORK [listener] connection accepted from 127.0.0.1:41000 #10 (4 connections now open)\n2022-05-13T18:03:54.225+0000 I NETWORK [conn10] received client metadata from 127.0.0.1:41000 conn10: { driver: { name: \"nodejs\", version: \"2.2.36\" }, os: { type: \"Linux\", name: \"linux\", architecture: \"x64\", version: \"3.10.0-1127.el7.x86_64\" }, platform: \"Node.js v11.2.0, LE, mongodb-core: 2.1.20\" }\n2022-05-13T18:03:54.243+0000 I REPL [conn10] replSetReconfig admin command received from client; new config: { _id: \"db\", version: 178016789, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: \"mongo-1:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: \"mongo-0:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: \"mongo-2:27017\" } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5f7f9a0a07508370da2f5327') } }\n2022-05-13T18:03:54.251+0000 I REPL [conn10] replSetReconfig config object with 3 members parses ok\n2022-05-13T18:03:54.251+0000 I REPL [conn10] New replica set config in use: { _id: \"db\", version: 178058453, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: \"mongo-1:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: 
\"mongo-0:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: \"mongo-2:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5f7f9a0a07508370da2f5327') } }\n2022-05-13T18:03:54.251+0000 I REPL [conn10] This node is mongo-2:27017 in the config\n2022-05-13T18:03:54.251+0000 I REPL [conn10] transition to SECONDARY from REMOVED\n2022-05-13T18:03:54.251+0000 I REPL [conn10] Resetting sync source to empty, which was :27017\n2022-05-13T18:03:54.253+0000 I REPL [replexec-2] Member mongo-1:27017 is now in state SECONDARY\n2022-05-13T18:03:54.253+0000 I NETWORK [conn10] end connection 127.0.0.1:41000 (3 connections now open)\n2022-05-13T18:03:54.253+0000 I REPL [replexec-0] Member mongo-0:27017 is now in state PRIMARY\n2022-05-13T18:03:54.259+0000 I NETWORK [listener] connection accepted from 192.168.247.168:33894 #13 (4 connections now open)\n2022-05-13T18:03:54.260+0000 I NETWORK [conn13] end connection 192.168.247.168:33894 (3 connections now open)\n2022-05-13T18:03:54.260+0000 I NETWORK [listener] connection accepted from 192.168.247.169:39296 #14 (4 connections now open)\n2022-05-13T18:03:54.260+0000 I NETWORK [conn14] end connection 192.168.247.169:39296 (3 connections now open)\n2022-05-13T18:03:54.530+0000 I NETWORK [listener] connection accepted from 192.168.222.195:42162 #15 (4 connections now open)\n2022-05-13T18:03:54.531+0000 I NETWORK [conn15] received client metadata from 192.168.222.195:42162 conn15: { driver: { name: \"mongo-java-driver|sync|spring-boot\", version: \"4.0.5\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"amd64\", version: \"3.10.0-1127.el7.x86_64\" }, platform: \"Java/Oracle Corporation/1.8.0_312-b07\" }\n2022-05-13T18:03:55.288+0000 I REPL [rsBackgroundSync] sync source candidate: mongo-1:27017\n2022-05-13T18:03:55.288+0000 I CONNPOOL [RS] Connecting to mongo-1:27017\n2022-05-13T18:03:55.293+0000 I REPL [rsBackgroundSync] Changed sync source from empty to mongo-1:27017\n2022-05-13T18:03:55.502+0000 I REPL [rsBackgroundSync] Starting rollback due to OplogStartMissing: Our last optime fetched: { ts: Timestamp(1649416276, 1), t: 30 }. 
source's GTE: { ts: Timestamp(1649417431, 1), t: 31 }\n2022-05-13T18:03:55.502+0000 I REPL [rsBackgroundSync] Replication commit point: { ts: Timestamp(0, 0), t: -1 }\n2022-05-13T18:03:55.502+0000 I REPL [rsBackgroundSync] Rollback using 'recoverToStableTimestamp' method.\n2022-05-13T18:03:55.502+0000 I REPL [rsBackgroundSync] Scheduling rollback (sync source: mongo-1:27017)\n2022-05-13T18:03:55.502+0000 I ROLLBACK [rsBackgroundSync] transition to ROLLBACK\n2022-05-13T18:03:55.503+0000 I REPL [rsBackgroundSync] State transition ops metrics: { lastStateTransition: \"rollback\", userOpsKilled: 0, userOpsRunning: 5 }\n2022-05-13T18:03:55.503+0000 I REPL [rsBackgroundSync] transition to ROLLBACK from SECONDARY\n2022-05-13T18:03:55.503+0000 I NETWORK [rsBackgroundSync] Skip closing connection for connection # 9\n2022-05-13T18:03:55.503+0000 I NETWORK [rsBackgroundSync] Skip closing connection for connection # 8\n2022-05-13T18:03:55.503+0000 I NETWORK [rsBackgroundSync] Skip closing connection for connection # 7\n2022-05-13T18:03:55.503+0000 I NETWORK [conn15] end connection 192.168.222.195:42162 (3 connections now open)\n2022-05-13T18:03:55.503+0000 I ROLLBACK [rsBackgroundSync] Waiting for all background operations to complete before starting rollback\n2022-05-13T18:03:55.503+0000 I ROLLBACK [rsBackgroundSync] Finished waiting for background operations to complete before rollback\n2022-05-13T18:03:55.503+0000 I ROLLBACK [rsBackgroundSync] finding common point\n2022-05-13T18:03:56.399+0000 I NETWORK [listener] connection accepted from 192.168.222.196:58692 #18 (4 connections now open)\n2022-05-13T18:03:56.400+0000 I NETWORK [conn18] received client metadata from 192.168.222.196:58692 conn18: { driver: { name: \"mongo-java-driver|sync|spring-boot\", version: \"4.0.5\" }, os: { type: \"Linux\", name: \"Linux\", architecture: \"amd64\", version: \"3.10.0-1127.el7.x86_64\" }, platform: \"Java/Oracle Corporation/1.8.0_312-b07\" }\n2022-05-13T18:03:56.843+0000 I ROLLBACK [rsBackgroundSync] Rollback common point is { ts: Timestamp(1649410978, 1), t: 13 }\n2022-05-13T18:03:56.843+0000 F ROLLBACK [rsBackgroundSync] Common point must be at least stable timestamp, common point: Timestamp(1649410978, 1), stable timestamp: Timestamp(1649415891, 1)\n2022-05-13T18:03:56.843+0000 F - [rsBackgroundSync] Fatal Assertion 51121 at src/mongo/db/repl/rollback_impl.cpp 969\n2022-05-13T18:03:56.843+0000 F - [rsBackgroundSync] \n\n***aborting after fassert() failure\n\n\n", "text": "Hi @Aasawari,\nI share the log again.\nRegards!", "username": "Bet_TS" }, { "code": "", "text": "Hi @Bet_TS\nThank you for sharing the logs. From the logs you shared, it appears that the secondary in question has fallen off the oplog, i.e. the oplog on the other 2 nodes has rolled over, and thus there is no common point anymore between this node and the others. This necessitates a resync of the member. Please refer to the following documentation to understand more on this: resync-replica-set-member. P.S.: I notice that you have been running an older version (4.2.7, released around 2020); I would recommend you upgrade to the latest 4.2.x patch release (May 2022). Since your current MongoDB server release series was released almost two years ago, bug fixes and improvements have been implemented since then. Minor production releases do not contain breaking changes, and upgrading or downgrading within the same release series is straightforward: MongoDB 4.2 Manual: Upgrade to the Latest Revision of MongoDB. Thanks\nAasawari", "username": "Aasawari" } ]
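For readers hitting the same rollback fassert: a minimal sketch of the resync approach described above, assuming a StatefulSet where each member gets its data volume from a volumeClaimTemplate. All object names here are hypothetical and must be adjusted to your deployment; emptying the stale member's volume makes mongod perform an automatic initial sync from the healthy members when the pod comes back.

```sh
# Hypothetical names; adjust to your StatefulSet / PVC naming.
# 1. Mark the stale member's volume claim for deletion (the PVC protection
#    finalizer removes it only once the pod releases it):
kubectl delete pvc data-mongo-2

# 2. Delete the crashing pod; the StatefulSet controller recreates the pod
#    and, since the PVC is gone, provisions a fresh empty volume:
kubectl delete pod mongo-2

# 3. From a healthy member, watch the node progress STARTUP2 -> SECONDARY:
kubectl exec mongo-0 -- mongo --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'
```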
Replica set in Kubernetes
2022-05-05T21:50:20.432Z
Replica set in Kubernetes
5,949
null
[]
[ { "code": "$addToSet: { myArray: 'String' }\n$pull: { myArray: 'String' }\n$addToSet$pullString$addToSet$pull", "text": "I have a simple array in my collection and I call one of 2 update events against it:ORI need to have a trigger run on both events.So my questions are:Is there anyway to know if a $addToSet OR $pull command was run?How can I get the String that was added or removed by the $addToSet OR $pull command?Thanks", "username": "Dev_Ops" }, { "code": "$addToSet$pulldb.collection.update()db.collection.updateOne()db.collection.updateMany()$addToSet$pullString$addToSet$pullM0M2M5mongodb-audit-log/// For db.testcollection.updateOne({a:1},{$addToSet:{colours:\"red\"}})\n{ \"atype\" : \"authCheck\", \"ts\" : { \"$date\" : \"2022-05-18T04:11:31.515+00:00\" },...\"param\" : { \"command\" : \"update\", \"ns\" : \"myFirstDatabase.testcollection\", \"args\" : { \"update\" : \"testcollection\", \"updates\" : [ { \"q\" : { \"a\" : 1 }, \"u\" : { \"$addToSet\" : { \"colours\" : \"red\" } } } ],...\n\n/// For db.testcollection.updateOne({a:1},{$pull:{colours:\"red\"}})\n{ \"atype\" : \"authCheck\", \"ts\" : { \"$date\" : \"2022-05-18T04:11:47.555+00:00\" },...\"param\" : { \"command\" : \"update\", \"ns\" : \"myFirstDatabase.testcollection\", \"args\" : { \"update\" : \"testcollection\", \"updates\" : [ { \"q\" : { \"a\" : 1 }, \"u\" : { \"$pull\" : { \"colours\" : \"red\" } } } ],...\n\"colours\"$addToSet$pull\"red\"", "text": "Hi @Dev_Ops,I have a simple array in my collection and I call one of 2 update events against itI presume the $addToSet and $pull operators are used in either a db.collection.update(), db.collection.updateOne() or db.collection.updateMany() operation but please correct me if I am wrong here.Depending on your use case and requirements, you may wish to Set Up Database Auditing although please note that this feature is not available for M0 free clusters, M2 , and M5 clusters. There are a few lines from a mongodb-audit-log file from my test environment for your reference to see if it contains details you are after:Please see the two below example log entries for the following operations in my test environment:You can see from the above examples the value for the \"colours\" field (which is of type array) when using both the $addToSet and $pull operator (in this case the value \"red\") are recorded.Please note that i’ve redacted some of the information from the log lines. Additionally, Turning on this feature will increase your daily cluster pricing. Read more.If this sounds like this may help, you may find the procedure on how to enable Database Auditing documentation useful.Another possible other method that may work for you depending on your use case is to log the operations and variable values at the application level. Of course this won’t depend on your Atlas cluster tier.Lastly, if you’re wishing to compare the before and after documents then the following SERVER-36941 ticket may be relevant to you which is pending release in MongoDB 6.0.Hope this helps. 
If you require further assistance, please let us know more details about your environment, use case and requirements. Regards,\nJason", "username": "Jason_Tran" }, { "code": "db.collection.update()audit-log-file$addToSet$pull", "text": "Thanks for your response @Jason_Tran. The updates will be taking place via a db.collection.update() event as you assumed. How would I be able to access the audit-log-file in a Trigger that is configured to use AWS EventBridge? This trigger is simply to notify a user of a change to their account based on a $addToSet or $pull event taking place against a specific array in their profile. I really need to know if what I am trying to do is even possible from a MongoDB Atlas standpoint; that way I can look at other options of notifying the user. Thank you", "username": "Dev_Ops" }, { "code": "audit-log-filemongodb-audit-log$addToSet$pullmongodb-audit-log example lines:\n/// For db.testcollection.updateOne({a:1},{$addToSet:{colours:\"red\"}})\n{ \"atype\" : \"authCheck\", \"ts\" : { \"$date\" : \"2022-05-18T04:11:31.515+00:00\" },...\"param\" : { \"command\" : \"update\", \"ns\" : \"myFirstDatabase.testcollection\", \"args\" : { \"update\" : \"testcollection\", \"updates\" : [ { \"q\" : { \"a\" : 1 }, \"u\" : { \"$addToSet\" : { \"colours\" : \"red\" } } } ],...\n\n/// For db.testcollection.updateOne({a:1},{$pull:{colours:\"red\"}})\n{ \"atype\" : \"authCheck\", \"ts\" : { \"$date\" : \"2022-05-18T04:11:47.555+00:00\" },...\"param\" : { \"command\" : \"update\", \"ns\" : \"myFirstDatabase.testcollection\", \"args\" : { \"update\" : \"testcollection\", \"updates\" : [ { \"q\" : { \"a\" : 1 }, \"u\" : { \"$pull\" : { \"colours\" : \"red\" } } } ],...\nfullDocumentfullDocumentBeforeChange/// Performed `db.arraycoll.update({j:1},{$addToSet:{colours:\"grey\"}})`\nLogs:\n[\n \"full document BEFORE change = {\\\"_id\\\":\\\"6285a86fbf866c9d2ca4b991\\\",\\\"j\\\":1,\\\"colours\\\":[\\\"blue\\\",\\\"green\\\"]}\",\n \"full document AFTER change ={\\\"_id\\\":\\\"6285a86fbf866c9d2ca4b991\\\",\\\"j\\\":1,\\\"colours\\\":[\\\"blue\\\",\\\"green\\\",\\\"grey\\\"]}\"\n]\n\n/// Performed `db.arraycoll.update({j:1},{$pull:{colours:\"grey\"}})`\nLogs:\n[\n \"full document BEFORE change = {\\\"_id\\\":\\\"6285a86fbf866c9d2ca4b991\\\",\\\"j\\\":1,\\\"colours\\\":[\\\"blue\\\",\\\"green\\\",\\\"grey\\\"]}\",\n \"full document AFTER change ={\\\"_id\\\":\\\"6285a86fbf866c9d2ca4b991\\\",\\\"j\\\":1,\\\"colours\\\":[\\\"blue\\\",\\\"green\\\"]}\"\n]\n", "text": "“How would I be able to access the audit-log-file in a Trigger that is configured to use AWS EventBridge?” After reading your most recent comment, the mongodb-audit-log file contents may not be necessary, as it seems you are after a more immediate notification-style alert rather than viewing / auditing the information in a log. Please correct me if I am wrong here. In any case, the trigger won’t be able to access the contents of the log file. “I really need to know if what I am trying to do is even possible from a MongoDB Atlas standpoint, that way I can look at other options of notifying the user.” Via triggers, you won’t be able to see the $addToSet or $pull operation in detail like what was shown in the log examples in my previous comment. In saying so, perhaps triggers, using the contents of fullDocument and fullDocumentBeforeChange, may work for you. The following examples are log lines from a test trigger function: In saying so, I feel like the functionality you need is perhaps easier to implement in the application, e.g. 
when a change is happening, send a notification to the user right away. My thinking is, if you employ triggers/auditing, wouldn’t you need a separate process to monitor such events and fire the notification as well? If this notification is implemented in the application, this monitoring wouldn’t be required. Let me know your thoughts here. Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
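If it helps, here is a minimal sketch of a database trigger function (event type Function, with Full Document and Document Preimage enabled on the trigger) that derives which strings were added to or removed from the array by comparing the pre- and post-images. The field name myArray follows the original question; everything else is an assumption, not Atlas's built-in behaviour:

```javascript
exports = function (changeEvent) {
  // Requires "Full Document" and "Document Preimages" enabled on the trigger.
  const before = (changeEvent.fullDocumentBeforeChange || {}).myArray || [];
  const after = (changeEvent.fullDocument || {}).myArray || [];

  // Elements present after but not before were added ($addToSet);
  // elements present before but not after were removed ($pull).
  const added = after.filter(v => !before.includes(v));
  const removed = before.filter(v => !after.includes(v));

  if (added.length) console.log(`added: ${JSON.stringify(added)}`);
  if (removed.length) console.log(`removed: ${JSON.stringify(removed)}`);
  // From here you could notify the user, e.g. by forwarding an event
  // to AWS EventBridge as discussed above.
};
```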
Differentiate trigger events
2022-05-05T18:36:47.892Z
Differentiate trigger events
1,751
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Hi, beginner with Mongodb on windows10 here, not sure whether it’s the problem of GitBash or mongo shell. There are several apparent problems.", "username": "Luke_He" }, { "code": "mongosh", "text": "Hi @Luke_He - Welcome to the community I’m having a bit of trouble trying to understand what may be going on here but it seems it might be easier with a screenshot. Would you be able to provide a screenshot with perhaps some annotations for this?Additionally, I myself have not used gitbash. but are you having the same troubles using mongosh direct from your terminal / command prompt to perform your tasks? If not, I would recommend giving this a go. Otherwise, please provide details on the use case for gitbash and what you are attempting to do.Thanks,\nJason", "username": "Jason_Tran" } ]
Mongo shell doesn't seem to work properly
2022-05-04T18:43:13.004Z
Mongo shell doesn&rsquo;t seem to work properly
2,113
null
[ "storage" ]
[ { "code": "", "text": "I have incremental backups at Atlas MongoDB cluster Cloud Backup. I need to check some data from the past. I’ve downloaded a big file with wiredtiger structure, but don’t know how to open or use it.", "username": "Mirko_Geest" }, { "code": ".tar.gz", "text": "Hi @Mirko_Geest - Welcome to the community I have incremental backups at Atlas MongoDB cluster Cloud Backup.Can you clarify if you mean you have Continuous Cloud Backups enabled?I’ve downloaded a big file with wiredtiger structure, but don’t know how to open or use it.Additionally, could you confirm the process you used to download the file you’ve mentioned and the file extension? I presume it would be the .tar.gz file but please advise.In saying so, you may find the Restore a Cluster from a Cloud Backup documentation useful. More specifically, for the downloaded backups, please check out the Manually Restore One Snapshot procedure.Regards,\nJason", "username": "Jason_Tran" } ]
How can I consult data from a Cloud Backup snapshot?
2022-05-06T05:51:14.621Z
How can I consult data from a Cloud Backup snapshot?
1,870
null
[ "node-js", "connecting", "devops" ]
[ { "code": "const SSH2Promise = require('ssh2-promise');\nconst MongoClient = require('mongodb').MongoClient\n\nconst database = \"<database name>\";\nconst mongoUsername = auths.mongodb.username;\nconst mongoPassword = auths.mongodb.password;\n\nconst { \n host, \n port,\n username, \n privateKey,\n} = auths.ssh\n\nconst ssh = new SSH2Promise({\n host,\n username,\n privateKey,\n})\n\nconst tunnel1 = await ssh.addTunnel({\n remoteAddr: \"<shard00>.55gfk.mongodb.net\", \n remotePort: 27017,\n localHost: \"127.0.0.1\"\n})\n\nconst tunnel2 = await ssh.addTunnel({\n remoteAddr: \"<shard01>.55gfk.mongodb.net\", \n remotePort: 27017,\n localHost: \"127.0.0.1\"\n})\n\nconst tunnel3 = await ssh.addTunnel({\n remoteAddr: \"<shard02>.55gfk.mongodb.net\", \n remotePort: 27017,\n localHost: \"127.0.0.1\"\n})\n\nconsole.log(\"tunnel established\");\n\nconst url = `mongodb://${mongoUsername}:${mongoPassword}@${tunnel1.localHost}:${tunnel1.localPort},${tunnel2.localHost}:${tunnel2.localPort},${tunnel3.localHost}:${tunnel3.localPort}/${database}?ssl=true&replicaSet=atlas-<cluster>-shard-0&authSource=admin&retryWrites=true&w=majority`\n\nconsole.log(url)\n\nconst client = await MongoClient.connect(url, { \n useNewUrlParser: true, \n useUnifiedTopology: true \n});\n\nconsole.log(\"db connection established\");\n\nclient.close();\nssh.close();", "text": "Hi all,I’m trying to connect to an Atlas cluster via tunneling through a bastion host (since I’m executing code from Pipedream which launches from a non-static set of IPs so I can’t know which to whitelist unless I use my own bastion server).Using node.js packages, I’m running this code below wrapped in an async function, but it seems to not be able to connect to the MongoDB Atlas cluster through the tunnel. I’ve verified that the tunnel actually works for various other purposes. I’m using the ssh package “ssh2-promise”, and have also tried “tunnel-ssh”. If I don’t use tunneling and just connect the driver straight to the cluster via the +SRV name or the explicit standard connection string, it’ll work (with 0.0.0.0/0 access allowed, of course). But I really want to get this tunnel working. What am I doing wrong here?", "username": "Nghia_Nguyen" }, { "code": "", "text": "Hi! Did you find a solution?", "username": "Yuri_Lima" }, { "code": "", "text": "Nope. I’ve resigned to just making really secure credentials and allowing 0.0.0.0/0 for now until Pipedream can NAT out its compute somehow to a static IP.", "username": "Nghia_Nguyen" }, { "code": "const url = `mongodb://${mongoUsername}:${mongoPassword}@localhost:${tunnel1.localPort},localhost:${tunnel2.localPort},localhost:${tunnel3.localPort}/${database}?ssl=true&replicaSet=atlas-<cluster>-shard-0&authSource=admin&retryWrites=true&w=majority`", "text": "@Nghia_Nguyen looking at the ssh2promise docs, addTunnel() accepts localPort not localHost for configuration. tunnelX.localHost returns undefined when I try it. It could be that you are using an older version of the package that did support that. So maybe the connection string is incorrect? The connection string should be:", "username": "Govind_Rai" }, { "code": "", "text": "Did that solve it, @Nghia_Nguyen ?", "username": "Mitchell_Rogers" }, { "code": "", "text": "I’m pretty sure I made sure to check the local host and local port resolved to values when I tried this a while back. Never got it to work–but you should take a crack at it if you’re facing similar problems.", "username": "Nghia_Nguyen" } ]
Node.js tunneling to MongoDB Atlas replica set via bastion host
2021-04-19T20:33:37.454Z
Node.js tunneling to MongoDB Atlas replica set via bastion host
7,254
null
[ "xamarin" ]
[ { "code": " [MapTo(\"end_date\")]\n public DateTimeOffset End_Date { get; set; }\npublic class A : RealmObject\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(Constants.Partition)]\n [Required]\n public string Partition { get; set; } = Constants.Partition;\n\n [MapTo(\"name\")]\n public string Name { get; set; }\n\n [MapTo(\"event_dates\")]\n public IList<Event_Date> Event_Dates { get; }\n\n [MapTo(\"test_date\")]\n public DateTimeOffset now { get; set; } = DateTimeOffset.Now;\n", "text": "I’m loading a set of new records into a Realm database, which all is working as it should, except for the DateTimeOffset datatype property in an EmbeddedObject in the RealmObject (the property member in the RealmObject is a List).Below is the classes I’m using to load and write to the Realm. I was chasing my tail and reading lots of documentation about the DateTimeOffset and how Realm stores it and thinking I misunderstood or overlooked something until I tested it by adding a DateTimeOffset to the RealmObject itself and seeing it worked fine.But, where I’m loading and storing in the EmbeddedIObject DateTimeOffset the incorrect value “0001-01-01T00:00:00.000+00:00” is always what ends up in the Realm database no matter what it might be set to by my code.I added a DateTimeObject as a test to the RealmObject and it is working as expected, only the behavior in the List of EmbeddedObject(s) is storing the incorrect value.I’m not sure if the problem is related to it being an EmbeddedObject kept as a List in the RealmObject or just a problem with the EmbeddedObject handing DateTimeOffsets.public class Event_Date : EmbeddedObject\n{\n[MapTo(“start_date”)]\npublic DateTimeOffset Start_Date { get; set; }}}", "username": "Josh_Whitehouse" }, { "code": "", "text": "of course, after putting this here, at a meeting 2 hours later, I find the problem is on my end, with a missing underscore typo that propagated via the person building the JSON file I was loading. the missing underscore for the DateTimeOffset field was used to copy/paste/edit new records incorrectly and here we are now, problem found! Consider this not an issue, just stupid.", "username": "Josh_Whitehouse" }, { "code": "", "text": "We’re glad you found the issue.You could consider marking this thread as resolved, thank you.Have a good day.Andrea", "username": "Andrea_Catalini" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
DateTimeOffset in EmbeddedObjects not written to database correctly
2022-05-22T17:28:05.268Z
DateTimeOffset in EmbeddedObjects not written to database correctly
2,420
null
[ "queries" ]
[ { "code": " {\n \"_id\": \"628b59f072d05a2d51b0a2e6\",\n \"campaignID\": \"61c6a74d0c61c7ef2aeb24d4\",\n \"data\": [\n \"628b59f072d05a2d51b0a2e2\"\n ],\n \"updated\": \"Mon May 23 2022 12:50:47 GMT+0300 (East Africa Time)\",\n \"__v\": 0\n },\n {\n \"_id\": \"628b59f272d05a2d51b0a2ed\",\n \"campaignID\": \"61c6a74d0c61c7ef2aeb24d4\",\n \"data\": [\n \"628b59f272d05a2d51b0a2e9\"\n ],\n \"updated\": \"Mon May 23 2022 12:50:47 GMT+0300 (East Africa Time)\",\n \"__v\": 0\n },\n {\n \"_id\": \"628b5a3c72d05a2d51b0a2f6\",\n \"campaignID\": \"61c6a74d0c61c7ef2aeb24d4\",\n \"data\": [\n \"628b5a3c72d05a2d51b0a2f2\"\n ],\n \"updated\": \"Mon May 23 2022 12:50:47 GMT+0300 (East Africa Time)\",\n \"__v\": 0\n }\n", "text": "", "username": "kerenke_tepela" }, { "code": "db.collection.find( ­{ \"campaignID\": \"61c6a74d0c61c7ef2aeb24d4\" } )\n", "text": "This is one of the most basic query.I strongly recommend that you take MongoDB Courses and Trainings | MongoDB University.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I get all data which has common data in some fields? I want to get all data with the same campaignID
2022-05-23T11:39:11.581Z
How do I get all data which has common data in some fields? I want to get all data with the same campaignID
1,346
https://www.mongodb.com/…654f988cdb5d.png
[ "schema-validation" ]
[ { "code": "", "text": "Let’s say I configure a $schema validation rule for my collection.If I set the Validation level to off (so update/insert will work no matter what)\nHow can I query a list of documents that pass/fail the schema validation?", "username": "Alex_Bjorlig" }, { "code": "$jsonSchema$jsonSchema", "text": "Hi @Alex_Bjorlig,You can use the $jsonSchema operator to find existing documents that that match (or do not match) a JSON schema validator.There are some examples in the Query Conditions section of the $jsonSchema operator documentation.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Ahh thanks @Stennie_X !Now that I have your attention, could I politely ask if you have Typescript <–> JSON Schema expertise - or maybe know someone at MongoDB?Our team is still debating how to maintain Typescript interfaces and MongoDB JSON schemas. I tried to ask for inputs here, but to my surprise nobody answered I imagine this is a task everyone using Typescript/Mongodb-schema-validation would have, but there is supringsly few results when I google this.", "username": "Alex_Bjorlig" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
With schema validation, can you query documents that don't match?
2022-05-23T09:52:49.309Z
With schema validation, can you query documents that don&rsquo;t match?
3,580
null
[ "node-js", "mdbw22-hackathon", "mdbw-hackhelp" ]
[ { "code": "", "text": "Hi @nraboy, I think I heard you comment in a video that a command that would install all the dependencies of the package.json file in a project would be good, that is precisely what the following command does:$ npm installInstall all dependencies from package.json", "username": "Manuel_Martin" }, { "code": "npm iyarn", "text": "Using just npm i works too or yarn if you’re using yarn for your project", "username": "Fiewor_John" }, { "code": "", "text": "I don’t recall my comment, but now I’m curious if you find the video :-).You sure I wasn’t referring to installing dependencies that were actually used rather than dependencies that were listed in the package.json?", "username": "nraboy" }, { "code": "", "text": "@nraboy maybe you are right and you meant installing dependencies that were actually used rather than dependencies that were listed in the package.json, I thought you refer to dependencies listed in the package.json, but I should be wrong, don’t remember exactly in which video I listened it.Maybe installing dependencies that were actually used could be achieved with some VSCode extension.", "username": "Manuel_Martin" }, { "code": "", "text": "I think you should check out Quokka.js PRO which is a nice VS Code extension that provides this package install feature.", "username": "Fiewor_John" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Install all dependencies from package.json
2022-05-20T15:20:36.039Z
Install all dependencies from package.json
4,076
null
[ "mdbw22-hackathon", "mdbw-hackhelp" ]
[ { "code": "", "text": "We’re so glad to have so many participants onboard already, but for new joiners, or for those would couldn’t make the orientation, don’t forget, we have $100 of Atlas Credits for all participants to use. Simply register for Atlas (see Resources for details) and then once registered, go to the Billing tab and enter the code -WORLDHACK22to redeem the $100 of credits!!(if you’ve any issues with this, just post a reply below)", "username": "Shane_McAllister" }, { "code": "Invalid Code", "text": "Hi Shane. I recently imported more data into my cluster and even though I have added the promo code as instructed above, I still get an error saying that I have exceeded my storage limit (512MB for the free tier)I then tried to upgrade my cluster and while adding the promo code there again, I get an Invalid Code error probably because I have used it already?Please, I would like some help with this.\nCC: @Avik_Singha", "username": "Fiewor_John" }, { "code": "", "text": "Hello @Fiewor_John\ndid you upgrade your cluster? Adding the code will add the credits to your account but not change the sizing of your cluster.To upgrade click on Edit Configuration\nimage722×349 29.4 KB\nThen, on the next screen, click on “dedicated” (upper left side) and on M10 (further down).\nThe settings are well documented, if in doubt go with the default.You have to confirm your choice to upgrade, wait shortly and then you should be all set.\nPlease do not forget to terminate your cluster when you are done. The credits do not last forever Cheers,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Thank you for your helpful answer!", "username": "Fiewor_John" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB ATLAS Credits for Hackathon
2022-04-15T09:28:55.296Z
MongoDB ATLAS Credits for Hackathon
5,408
null
[ "dot-net" ]
[ { "code": " private async Task<Realms.Realm> InitializeAsync()\n {\n var appconfig = new Realms.Sync.AppConfiguration(_AppID);\n\n Realms.Realm realm;\n\n Realms.Sync.PartitionSyncConfiguration syncConfig;\n\n try\n {\n var app = Realms.Sync.App.Create(appconfig);\n\n if (app.CurrentUser == null)\n {\n Realms.Sync.User user = await app.LogInAsync(Realms.Sync.Credentials.Anonymous());\n syncConfig = new Realms.Sync.PartitionSyncConfiguration(_Partition, user);\n realm = await Realms.Realm.GetInstanceAsync(syncConfig);\n } \n else\n {\n syncConfig = new Realms.Sync.PartitionSyncConfiguration(_Partition, app.CurrentUser);\n realm = Realms.Realm.GetInstance(syncConfig);\n }\n\n return realm;\n }\n catch\n {\n throw;\n }\n } \n", "text": "In my MAUI project i use Ralm package 10.13.0.\nWhe i try to login with LogInAsync method the application hang without exceptions.\nHere is the code:What is wrong?", "username": "M_Walter" }, { "code": "", "text": "Do you have a simple project that reproduces this? We do have some smoke tests for MAUI and those pass, but it’s very possible we’ve missed a corner case or an update to the tooling broke something. If possible, create a repro project and file an issue here: Issues · realm/realm-dotnet · GitHub so that we can take a look.", "username": "nirinchev" }, { "code": "", "text": "I have found the mistake. The problem was a method that was called synchronously.\nThank you for response.", "username": "M_Walter" }, { "code": "", "text": "Glad to hear it’s sorted out And as always, if you face other issues or have suggestions for improvements, we’re here to help. If the issue you’re facing seems like a bug, feel free to just go ahead and file a github ticket as we typically respond to those faster.", "username": "nirinchev" } ]
Method LogInAsync hang in MAUI Project
2022-05-22T08:50:22.233Z
Method LogInAsync hang in MAUI Project
1,719
null
[ "swift", "kotlin" ]
[ { "code": "", "text": "I have several Realm instances running on the same cluster, each tied to a different client-app (Kotlin, Swift). I’ve noticed that one client in particular has far more writes to a specific collection (“History”) than expected.\nI know about pausing sync for the entire Realm, but is there an easy way to pause/disable syncing of just the History collection without impacting syncing of other collections, and without affecting the local creation of History entries for end-users? Thanks!", "username": "Johannes_Deva-Koch" }, { "code": "", "text": "Hi Johannes,Thanks for posting and welcome to the community.Are you referring to the History collection of the __realm_sync database or is this a collection in a database you created?Please know that __realm_sync.history is used to store metadata about Sync operations which is crucial to how Sync works. This db is hidden in Atlas data explorer so you will need to use Compass or MongoShell to read it.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Hi Mansoon, thanks for you reply. This is a collection that I created, not the internal one.\nMy main concern here is performance - I’d like to know if there’s a good way I can pause syncing on less-important collections at times of high load.", "username": "Johannes_Deva-Koch" }, { "code": "", "text": "@Johannes_Deva-Koch did you manage to find a solution ?", "username": "Ibrahem" }, { "code": "", "text": "Hello,Since the term “collection” doesn’t exist on the client side I’m not sure I understand the use case. Could you please describe in more detail the reasons of this request and the nature of the performance issues you are trying to solve?Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Hi guys, thanks for getting back. I’ve been in touch with some of your colleagues in support, and we’ve figured out the performance problems (mostly too many writes from our apps). Thanks anyway!", "username": "Johannes_Deva-Koch" } ]
Disabling sync on a specific collection
2022-04-05T12:12:09.454Z
Disabling sync on a specific collection
3,261
null
[]
[ { "code": "", "text": "I have two tables 1. Users 2. Products\nAnd I want to manage one array named as “viewUsers”. I want to give a user table reference in this array field but the problem is we are not using MongoDB auto-generated _id field. The productId we are considering is having a field name as “id”.So there is any way we can specify the field name while refencing?\nI have tried below in my model:\nviewUsers: { type: [{ type: Number, ref: ‘users’, field: “id” }] }\nBut it’s not working.", "username": "Zil_D" }, { "code": "", "text": "You might not be using theMongoDB auto-generated _id fieldbut all documents do have one and it is unique. Nothing stops you from using this _id for the ref. despite the presence of another id field.Why using another field name? Nothings stops you from using _id for your productId. The _id field will be there, use it or not, will take some space use it or not, will have an unique index will updated when inserting, use it or not.", "username": "steevej" }, { "code": "", "text": "@steevej , yeah you are right we can use _id but from starting we are managing id in this project so I got your point their is no way to define custom field & it will point to _id by default right?", "username": "Zil_D" }, { "code": "", "text": "I got your point their is no way to define custom field & it will point to _id by default right?This is not really what I meant. My point was why are you trying to use another one when the one that is there by default (and potentially optimized) is sufficient.The following makes me think that you must be using something like mongoose.my model:\nviewUsers: { type: [{ type: Number, ref: ‘users’, field: “id” }] }I don’t know mongoose. With simple $lookup you may use what ever fields you which to refer to another object. If you are indeed using mongoose, tag your post accordingly so that mongoose people can see it better.", "username": "steevej" }, { "code": "", "text": "Yes, I’m using mongoose. I got your concern @steevej. Thank you", "username": "Zil_D" } ]
How to add reference of different field?
2022-05-19T10:43:15.680Z
How to add reference of different field?
4,694
https://www.mongodb.com/…1_2_1024x577.png
[]
[ { "code": "", "text": "Hi,I am getting this error continuously\nError: Invalid code point\nimage1563×881 49.7 KB\nI am trying to debug this issue but i don’t know where to check becuse it doesn’t show document id.\nPlease tell me what this error is about?\nwhat can possibly generate this error?\nor how do i know which document is throwing this error?Trigger Type: Database\nOperation Type: Update\nFull Document: ON\nEvent Type: FunctionNote: i am sending indexing document to elasticsearch in this trigger", "username": "Nirali_88988" }, { "code": "failed to lookup path for module 'supports-color': module not found: supports-color", "text": "Hi Nirali,Please tell me what this error is about?This is the error that corresponds to the log shown:failed to lookup path for module 'supports-color': module not found: supports-colorIt appears it is related to the @ elastic/elasticsearch dependency being used in the function.or how do i know which document is throwing this error?If the ID isn’t shown in the Realm logs it’s possible that the operation which triggered the trigger is projecting out the _id field.Please check your MongoDB Logs on the cluster at around 19/05/2022 09:43:41.849 UTC (when error occurred) to find the update operation which would have fired the trigger.Regards\nManny", "username": "Mansoor_Omar" } ]
Realm trigger error: Invalid code point
2022-05-19T10:04:28.425Z
Realm trigger error: Invalid code point
1,815
null
[ "node-js" ]
[ { "code": "", "text": "MongoError: bad auth : Authentication failed.\nat MessageStream.messageHandler (/home/runner/Saturn-bot/node_modules/mongodb/lib/cmap/connection.js:299:20)", "username": "NIGHT_SWORD" }, { "code": "MongoError: bad auth : Authentication failed.\n at MessageStream.messageHandler (/home/runner/Saturn-bot/node_modules/mongodb/lib/cmap/connection.js:299:20)\n at MessageStream.emit (events.js:375:28)\n at processIncomingData (/home/runner/Saturn-bot/node_modules/mongodb/lib/cmap/message_stream.js:144:12)\n at MessageStream._write (/home/runner/Saturn-bot/node_modules/mongodb/lib/cmap/message_stream.js:42:5)\n at writeOrBuffer (internal/streams/writable.js:358:12)\n at MessageStream.Writable.write (internal/streams/writable.js:303:10)\n at TLSSocket.ondata (internal/streams/readable.js:726:22)\n at TLSSocket.emit (events.js:375:28)\n at addChunk (internal/streams/readable.js:290:12)\n at readableAddChunk (internal/streams/readable.js:265:9)\n at TLSSocket.Readable.push (internal/streams/readable.js:204:10)\n at TLSWrap.onStreamRead (internal/stream_base_commons.js:188:23) {\n ok: 0,\n code: 8000,\n codeName: 'AtlasError'\n} Promise {\n <rejected> MongoError: bad auth : Authentication failed.\n at MessageStream.messageHandler (/home/runner/Saturn-bot/node_modules/mongodb/lib/cmap/connection.js:299:20)\n at MessageStream.emit (events.js:375:28)\n at processIncomingData (/home/runner/Saturn-bot/node_modules/mongodb/lib/cmap/message_stream.js:144:12)\n at MessageStream._write (/home/runner/Saturn-bot/node_modules/mongodb/lib/cmap/message_stream.js:42:5)\n at writeOrBuffer (internal/streams/writable.js:358:12)\n at MessageStream.Writable.write (internal/streams/writable.js:303:10)\n at TLSSocket.ondata (internal/streams/readable.js:726:22)\n at TLSSocket.emit (events.js:375:28)\n at addChunk (internal/streams/readable.js:290:12)\n at readableAddChunk (internal/streams/readable.js:265:9)\n at TLSSocket.Readable.push (internal/streams/readable.js:204:10)\n at TLSWrap.onStreamRead (internal/stream_base_commons.js:188:23) {\n ok: 0,\n code: 8000,\n codeName: 'AtlasError' \nFull message\n", "text": "", "username": "NIGHT_SWORD" }, { "code": "", "text": "bad auth : Authentication failed.Means you specified the wrong user name or password in your connection string.", "username": "steevej" } ]
I cannot turn on my discord bot with this message
2022-05-22T20:59:44.947Z
I cannot turn on my discord bot with this message
1,822
https://www.mongodb.com/…5_2_1024x512.png
[ "swift", "mdbw22-hackathon", "mdbw-hackhelp" ]
[ { "code": "Choose Package Repository\n\n https://github.com/mongodb/mongo-swift-driver\n\n Rules: Version: Up to next Major > 1.3.1\n- MongoSwift (Asynchronous API)\n\n- MongoSwiftSync (Synchronous API)\n- import MongoSwift // No such module 'MongoSwift' \nThe package product 'SwiftBSON' requires minimum platform version 13.0 for the iOS platform, but this target supports 9.0\n- Package.swift\n\n dependencies: [\n .package(url: \"https://github.com/mongodb/swift-bson\", .upToNextMajor(from: \"3.0.0\"))\n ],\n", "text": "I have issues trying to install MongoSwift (The official MongoDB driver for Swift applications on macOS and Linux).The official MongoDB driver for Swift. Contribute to mongodb/mongo-swift-driver development by creating an account on GitHub.I mention this people because I see that they are related with Swift and works in MongoDB so maybe they will see the post and can give me some advice:@kmahar @Michael_LynnXcode > File > Swift Packages > Add Package DependencyThen I have the choice to choose the dependencies that I want to install:I tried in one project choose MongoSwift and in another project choose MongoSwiftSync but in both projects I have the same errors:The No such module is because the following error:In mongo-swift-driver (GitHub - mongodb/mongo-swift-driver: The official MongoDB driver for Swift)The latest version of swift-bson in GitHub - mongodb/swift-bson: pure Swift BSON library is 3.1.0I am using Xcode Version 12.4 (12D4e) in a MacOS Catalina Version 10.15.7 (19H1715)I need some help to solve this issue.", "username": "Manuel_Martin" }, { "code": "MongoClientmaxPoolSize", "text": "Hi @Manuel_Martin, thank you for getting in touch.This error occurs because:Sorry that the error here isn’t very clear; I’m currently investigating to see if there’s some way we can specify a custom error message to emit in this case.If you’d like to use the Swift driver for building an iOS app, the architecture I would recommend is to build a backend server using a web framework like Vapor which handles interactions with the database, and having your iOS app communicate with that backend via HTTP. We have an example app doing exactly that available here.To name a few reasons why it is not recommended to use a database driver directly from an iOS app:Another alternative to consider if you don’t want to write a backend would be using Realm, which allows you to both store data on-device and also sync it to a MongoDB cluster running on MongoDB Atlas via Realm Sync.", "username": "kmahar" }, { "code": "", "text": "Thank you very much for your detailed answer. ", "username": "Manuel_Martin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
I have issues trying to install MongoSwift (The official MongoDB driver for Swift applications on macOS and Linux)
2022-05-22T05:08:31.341Z
I have issues trying to install MongoSwift (The official MongoDB driver for Swift applications on macOS and Linux)
4,118
null
[ "dot-net", "java", "python", "php", "cxx" ]
[ { "code": " },\n },\n],\nrequired: false,\n", "text": "It’s my first time using mongo and I’m facing a problem now I want to delete a certain element in the collection at a specific date.\nI tried to use expirationDate but it deleted the whole collection not just the element\nHere is my schema and the element that I want to delete from the schedule.\n{\nuserId: {\n$ref: ‘User’,\ntype: Schema.Types.ObjectId,\nrequired: true,\n},\nlanguages: {\ntype: [String],\nenum: [‘JS’, ‘PHP’, ‘C++’, ‘C#’, ‘RUBY’, ‘PYTHON’, ‘JAVA’, ‘C’, ‘GO’],\nrequired: true,\n},\nspecialization: {\ntype: String,\nenum: [‘FRONTEND’, ‘BACKEND’, ‘DEVOPS’, ‘SECURITY’, ‘DATA STRUCTURE’, ‘FULL STACK’],\nrequired: true,\n},\ninterviews: {\ntype: [\n{\nintervieweeId: {\n$ref: ‘Interviewee’,\ntype: String,\nrequired: true,\n},\ndate: {\ntype: Date,\nrequired: true,\n},\ntime: {\ntype: Number,\nrequired: true,\n},\nlanguage: {\ntype: String,\nenum: [‘JS’, ‘PHP’, ‘C++’, ‘C#’, ‘RUBY’, ‘PYTHON’, ‘JAVA’, ‘C’, ‘GO’],\nrequired: true,\n},\nspecialization: {\ntype: String,\nenum: [‘FRONTEND’, ‘BACKEND’, ‘DEVOPS’, ‘SECURITY’, ‘DATA STRUCTURE’, ‘FULL STACK’],\nrequired: true,\n},\nquestionCategory: {\ntype: String,\nrequired: true,\nenum: [‘Technical’, ‘Analytical’, ‘Algorithms’, ‘System Design’],},\nschedule: {\ntype: [\n{\ndate: {\ntype: Date,\nrequired: true,\n},\ntime: {\ntype: Array,\nrequired: true,\n},\n},\n],\nrequired: false,\n},\n}\nSchedule it an array of objects and I need to delete a specific object in the schedule array.", "username": "Mahmoud_Ahmad" }, { "code": "", "text": "Please read Formatting code and log snippets in posts and the publish real sample documents that we can cut-n-paste into our db to experiment.Also publish what you have tried and indicate how it fails to provide the desired results. This way we will not waste time pursuing a solution that you already know is wrong.", "username": "steevej" } ]
How can I delete a certain element in the collection at a specific date?
2022-05-21T20:13:02.025Z
How can I delete a certain element in the collection at a specific date?
3,665
null
[ "queries" ]
[ { "code": "", "text": "db.student.find($regex:{“spn”})", "username": "Prathamesh_N" }, { "code": "SyntaxError: Unexpected token, expected \",\" (1:22)\n\n> 1 | db.student.find($regex:{“spn”})\n | ^\n", "text": "That query produce a syntax error:", "username": "steevej" }, { "code": "", "text": "db.products.find( { name: { $regex: /^ABC/i } } )", "username": "Prathamesh_N" }, { "code": "", "text": "Do you have an index on this field?Are you sure you are using the correct collection? You went fromdb.student.findtodb.products.findIf you do not have an index on the queried field on the appropriate collection, then a collection scan will occur. If you created an index on the student collection and do the find on the products collection, a collection scan will occur. If you created an index on the products collection and do the find on the student collection, a collection scan will occur.The very first step is to make sure you have an index on the appropriate field name of the appropriate collection name and to perform find on the same appropriate field name of the same appropriate collection name. Any typos or confusion about the field name or collection name will result in a slow collection scan for the find.Some reading to do about indexes: https://www.mongodb.com/docs/manual/applications/indexes/ , Performance Best Practices: Indexing | MongoDB Blog , https://www.mongodb.com/docs/manual/indexes/ and https://www.mongodb.com/docs/manual/reference/method/cursor.explain/.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
CPU utilization is high when searching 5 million records with a regex expression; is there a more optimized way to search?
2022-05-21T10:36:42.769Z
CPU utilization is high when searching 5 million records with a regex expression; is there a more optimized way to search?
1,855
https://www.mongodb.com/…a22341b2293c.png
[ "app-services-user-auth", "android" ]
[ { "code": "", "text": "So I have set up my android app to include google login as a means of authentication. In the google api console, I created a OAuth 2.0 Client with the type as a web application using “https://realm.mongodb.com/api/client/v2.0/auth/callback” as an authorized uri. However, there is a 100 person user cap unless the the app is verified in google api console. Verification requires that I verify ownership of the callback domain. Has anyone ran into this? I obviously don’t own mongodb.com, so how do I progress with the verification process?\n\n\nScreen Shot 2021-09-27 at 4.43.09 PM587×843 40.8 KB\n\nAny advice will be appreciated.", "username": "Deji_Apps" }, { "code": "", "text": "Hi @Deji_Apps,I don’t quite understand your problem here. Please, add more information:As you can see, your users, interacting with Realm will use Google Login to log in your Realm App. Right now I feel you’re doing it the other way around (from Google → MongoDB Realm)Does this help you? Let me know if we can help with anything else!", "username": "Diego_Freniche" }, { "code": "", "text": "this guideThanks for replying.\nI am workin on an Android App, and I am using the Android Realm SDK.\nI do have a MongoDB Realm App created, and I did add Google Login Auth. I did follow the guide to set up Google Login except for the part in the screenshot below.\n\nScreen Shot 2021-09-28 at 6.30.20 AM1592×1000 116 KB\n\nIn the screenshot above, it says to set up the application type in Google api console as android, but if you do that it does not give you a client secret as needed in step 3 for mongoDB’s authentication provider configuration. So I choose the application type as a web app so that I can have a client secret to put into MongoDB authentication provider.Setting up OAuth as a web app in google api console however requires a redirect URI’s for which I used “https://realm.mongodb.com/api/client/v2.0/auth/callback”.What I was unaware of until I started testing my app was that in google api, your redirect uri’s needs to be verified that you own the domain or it sets a limit of 100 users that can use the google login.So my problem is that if I follow the mongodb guide to the letter and set up my OAuth in google api console as an android app, I won’t get a client secret to put into mongo’s authentication provider configuration. But if I set up my OAuth as a web app, I get a client secret, but I am required to verify that I own the mongodb callback url, which I don’t.", "username": "Deji_Apps" }, { "code": "", "text": "Did you come to any resolution to this problem?", "username": "Ada_Lovelace" } ]
Google OAuth needs domain verification
2021-09-27T21:03:35.630Z
Google OAuth needs domain verification
4,844
null
[ "node-js", "crud", "mongoose-odm" ]
[ { "code": "connectDB();\n\nconst importData = async () => {\n try {\n await Order.deleteMany();\n await Product.deleteMany();\n await User.deleteMany();\n\n const createdUsers = await User.insertMany(users);\n const adminUser = createdUsers[0]._id;\n const sampleProducts = products.map(product => {\n return { ...product, user: adminUser, slug: product.name.toLowerCase().replace(/ /g, '-').replace(/[^\\w-]+/g, '') }\n })\n\n await Product.insertMany(sampleProducts);\n console.log('Data Imported!'.green.inverse)\n process.exit()\n } catch (error) {\n console.error(`${error}`.red.inverse);\n process.exit(1);\n }\n};\n\nconst destroyData = async () => {\n try {\n await Order.deleteMany();\n await Product.deleteMany();\n await User.deleteMany();\n\n console.log('Data Destroyed!'.red.inverse)\n process.exit()\n } catch (error) {\n console.error(`${error}`.red.inverse);\n process.exit(1);\n }\n};\n\nif(process.argv[2] === '-d') {\n destroyData();\n} else {\n importData();\n}\nconst products = [\n {\n _id: '1',\n name: 'The Legend of Zelda: Breath Of The Wild',\n image: '/images/zelda.jpg',\n description:\n 'The Legend of Zelda: Breath of the Wild[b] is a 2017 action-adventure game developed and published by Nintendo for the Nintendo Switch and Wii U consoles. The game is an installment of The Legend of Zelda series and is set at the end of its timeline. The player controls an amnesiac Link, who awakens from a hundred-year slumber, and attempts to regain his memories and prevent the destruction of Hyrule by Calamity Ganon. Similar to the original 1986 The Legend of Zelda game, players are given little instruction and can explore the open world freely. Tasks include collecting various items and gear to aid in objectives such as puzzle-solving or side quests. The world is unstructured and designed to encourage exploration and experimentation, and the main story quest can be completed in a nonlinear fashion. ',\n platforms: ['WiiU', 'Nintendo Switch'],\n category: 'Action-Adventure',\n price: 59.99,\n countInStock: 10,\n rating: 5,\n numReviews: 12,\n },\n....\n....\nValidationError: _id: Cast to ObjectId failed for value \"1\" (type string) at path \"_id\" because of \"BSONTypeError\"const reviewSchema = mongoose.Schema({\n name: {\n type: String,\n required: true\n },\n rating: {\n type: Number,\n required: true\n },\n comment: {\n type: String,\n required: true\n },\n},{\n timestamps: true,\n});\n\nconst productSchema = mongoose.Schema({\n user: {\n type: mongoose.Schema.Types.ObjectId,\n required: true,\n ref: 'User'\n },\n name: {\n type: String,\n required: true\n },\n slug: {\n type: String,\n required: true\n },\n image: {\n type: String,\n required: true,\n },\n platforms: {\n type: Array,\n required: true,\n default: [],\n },\n category: {\n type: String,\n required: true,\n },\n description: {\n type: String,\n required: true,\n },\n price: {\n type: Number,\n required: true,\n default: 0\n },\n countInStock: {\n type: Number,\n required: true,\n default: 0\n },\n reviews: [reviewSchema],\n rating: {\n type: Number,\n required: true,\n default: 0\n },\n numReviews: {\n type: Number,\n required: true,\n default: 0\n }\n\n\n}, {\n timestamps: true\n});\n\n//productSchema.plugin(sluggy, { tmpl: '<%name%>' })\nconst Product = mongoose.model('Product', productSchema);\n\nexport default Product;\n", "text": "Hello everyone, I’m very fresh in the MongoDB world and I’m still trying to grasp it considering I’m very used to SQL. 
I’m following a course from Brad Traversy about the MERN stack, first time switching to a full JavaScript environment. Naturally MongoDB is used, with the mongoose package to work with it. I’m asking here for help because the course is not exactly recent and up-to-date (it's at least a year old), so there might be differences I'm not aware of nowadays.Here’s the issue: I’m trying to populate an Atlas database. I’m already successfully connected to it, and I created the Schemas for the models I wanna use; the application is basically a small barebone e-commerce. The problem is found in the products Schema/importing.\nI made a seeder.js file that aims to populate the database from a static local file in which I manually inserted products when I started coding. I’m gonna show you both files for context: seeder.js and products.js. Now if I try to run the seeder script, I get back from the console the following error:ValidationError: _id: Cast to ObjectId failed for value \"1\" (type string) at path \"_id\" because of \"BSONTypeError\"Which in my ignorance might mean one of two things: either there’s a problem with how I structured my products.js file, or there's a problem with the Product Schema itself, which I'm gonna include now: productModel.js. Is there something that was explained in the course when coding these that is different now, a year later? I’m almost sure that might be the issue here. Google searching for that error didn’t help much, it being somewhat generic, so I ask you guys; I’d appreciate some advice. Thanks to whoever takes the time to take a look.", "username": "K3nzie" }, { "code": "", "text": "delete all _id in file products", "username": "ardi_zariat" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
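ardi_zariat's fix can also be applied without editing products.js, by stripping the hand-written string _id while mapping so Mongoose/MongoDB generates real ObjectIds; a sketch based on the seeder above:

```javascript
// Drop the string _id ('1', '2', ...) so insertMany assigns valid ObjectIds.
const sampleProducts = products.map(({ _id, ...product }) => ({
  ...product,
  user: adminUser,
  slug: product.name.toLowerCase().replace(/ /g, '-').replace(/[^\w-]+/g, ''),
}));

await Product.insertMany(sampleProducts);
```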
Mongoose populating empty atlas from file.js gives ValidationError because of "BSONTypeError"
2022-05-21T09:18:48.222Z
Mongoose populating empty atlas from file.js gives ValidationError because of “BSONTypeError”
4,191
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "hi,i have existing users in a realm app,\ni could copy and redeploy all functions , triggers and schemas and rules from 1 realm to another realm.\nbut how do i migrate existing users to the other realm app and retaining user.id ?\nthis user id is not the collection _id, im referring to context.user.idthank you", "username": "James_Tan1" }, { "code": "", "text": "i could do this with firebase, but i do not know how on realm", "username": "James_Tan1" } ]
Migrate users from one Realm app to another
2022-05-20T06:53:50.903Z
Migrate users from one Realm app to another
1,703
null
[]
[ { "code": "MongoDB Enterprise atlas-o1jdjg-shard-0:PRIMARY> db.listingsAndReviews.aggregate([ {\"$project\": {\"room_type\":1, \"_id\":0}}, {\"$group\": {\"_id\": \"$room_type\"}} ]).pretty()\nMongoDB Enterprise atlas-o1jdjg-shard-0:PRIMARY> \nMongoDB Enterprise atlas-o1jdjg-shard-0:PRIMARY> \nMongoDB Enterprise atlas-o1jdjg-shard-0:PRIMARY> **db.listingsAndReviews.aggregate([**\n**... {\"$project\": {\"room_type\":1, \"_id\":0}},**\n**... {\"$group\": {\"_id\": \"room_type\", \"count\":{\"sum\":1}}}**\n**... ]).pretty()**\n2022-01-24T05:15:01.353+0000 E QUERY [js] Error: command failed: {\n \"operationTime\" : Timestamp(1643001301, 7),\n \"ok\" : 0,\n \"errmsg\" : \"The field 'count' must be an accumulator object\",\n \"code\" : 40234,\n \"codeName\" : \"Location40234\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1643001301, 7),\n \"signature\" : {\n \"hash\" : BinData(0,\"PJNZrRUuC/BKQjlFv6tNRK5f5iw=\"),\n \"keyId\" : NumberLong(\"6990363765946449922\")\n }\n }\n} : aggregate failed :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\ndoassert@src/mongo/shell/assert.js:18:14\n_assertCommandWorked@src/mongo/shell/assert.js:534:17\nassert.commandWorked@src/mongo/shell/assert.js:618:16\nDB.prototype._runAggregate@src/mongo/shell/db.js:260:9\nDBCollection.prototype.aggregate@src/mongo/shell/collection.js:1062:12\n@(shell):1:1\nMongoDB Enterprise atlas-o1jdjg-shard-0:PRIMARY>\n", "text": "I get this error message(“The field ‘count’ must be an accumulator object”, ), on lab 1, and I still cannot see what is wrong with the code. Please kindly help…!!", "username": "Agatha_Ndalichako" }, { "code": "", "text": "You have to use one of the accumulator rather that sum.", "username": "steevej" }, { "code": "", "text": "I have a similar problem, but I am not doing the count. I was just trying to find what room types are included in the airbnb database. I did this:\ndb.listingsAndReviews.aggregate( [{$project:{“room_type”:1, _id:0}}, {$group:{\"_id\" : “room_type”}}]).I got no results. I did it exactly as the example during the lecture. What am I missing? Thanks to whoever reads these things.", "username": "Zio_Bonacci" }, { "code": "", "text": "Read\nand look for examples to see what is wrong with the <expression> part in{“_id” : “room_type”}", "username": "steevej" }, { "code": "", "text": "The aggregation page doesn’t explain that an expression format should be different than what I have-- it mostly provides a list of the possible aggregate functions. The example used in the lesson shows the expression as “address.country.” But since in my task the field I want is a top level field, I do not need a period between the top level and sub-level. Otherwise, my script is identical. Also, the lesson page has a section that says // Group By Expression (which I assume is an instruction note and not to be part of the query). I tried to write it to match the format : { : }, like this: …{\"_id\":{ “$mergeObject”:“room_type”}}, and that returned an error. Same problem when I write {\"_id\":{“room_type”:{}}}. Can you direct me to the specific lesson? Thanks for your help.", "username": "Zio_Bonacci" }, { "code": "", "text": "Are you saying all I needed was to put the $ sign in front of the field? Like this:\ndb.listingsAndReviews.aggregate( [ {\"$project\":{“room_type”:1, “_id”:0}},{\"$group\":{\"_id\":\"$room_type\"}}])? I hope so. That sucks and was such a small item to overlook. 
It would have been nice if you just reminded me that I want to get the value of the field, which requires a slightly different format.So I think I got it. Thanks again for your help.", "username": "Zio_Bonacci" }, { "code": "", "text": "The extra reading and thinking you did to figure out by yourself is worth more than any direct straight answer.Give a man a fish,\nyou feed him for the dayTeach a man to fish,\nyou feed him for the rest of his life.", "username": "steevej" } ]
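For future readers, the pipeline the thread converges on is below; the $ prefix makes "$room_type" an expression that reads each document's field value, whereas the bare string "room_type" would group everything under one literal key:

```javascript
db.listingsAndReviews.aggregate([
  { $project: { room_type: 1, _id: 0 } },
  { $group: { _id: "$room_type" } }   // one output document per distinct room type
]);
```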
M001: MongoDB Basics, Chapter 5: Indexing and Aggregation Pipeline Lab: Aggregation Framework
2022-01-24T05:32:13.154Z
M001: MongoDB Basics, Chapter 5: Indexing and Aggregation Pipeline Lab: Aggregation Framework
8,855
null
[ "aggregation" ]
[ { "code": "", "text": "I have two collections, one with one million records and the other with ten million documents. Using aggregate matching and look up, I’m attempting to retrieve all collections from collection 2 with a common local-field/foreign field value in collection 1. However, it is not as efficient in terms of time. Can anyone suggest anything else I might do to get around this time limit on extensive data?", "username": "Aman_Jaiswal2" }, { "code": "$lookup", "text": "Hi @Aman_Jaiswal2, welcome to the community. \nDo you have an index on the fields that you are using for $lookup? Having an index on those fields can significantly increase the performance of your pipeline.\nWould it be okay for you to share sample documents from both collections?\nAlso, please share your aggregation pipeline with us as well.If you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "{\n \"_id\": {\n \"$oid\": \"6267950f0ef8c21faf3ac0ac\"\n },\n \"OwnerID\": 3650185,\n \"TeamID\": 9697,\n \"MemberID\": 2308,\n \"ReportsTo\": 15,\n \"ID\": 1046414917,\n \"LastMet\": {\n \"$date\": \"2017-10-12T00:00:00.000Z\"\n },\n \"LastUpdated\": {\n \"$date\": \"2021-03-26T00:00:00.000Z\"\n },\n \"LastContacted\": {\n \"$date\": \"2018-06-07T00:00:00.000Z\"\n },\n \"Date\": {\n \"$date\": \"2019-01-13T00:00:00.000Z\"\n }\n}\n{\n \"_id\": {\n \"$oid\": \"6257c13ce98303a96f4c8c94\"\n },\n \"AccountID\": 61320380681,\n \"AssignedToID\": 297,\n \"AssignedToName\": \"50579\",\n \"Caption\": \"8244853\",\n \"CreatedBy\": \"6697955755\",\n \"LastActionID\": 360319,\n \"MappingId\": 78298773,\n \"RMModifiedbyO1\": \"66877527\",\n \"Status\": \"7978224\",\n \"StatusCode\": 3769242,\n \"SubCategoryCCRAO1\": \"8493861710\",\n \"RelatedToTypeID\": 17095,\n \"Gender\": \"Female\",\n \"Age\": 39,\n \"Date\": {\n \"$date\": \"2020-04-28T00:00:00.000Z\"\n }\n}\nmy_db.aggregate([\n {'$match': {'ReportsTo': ReportsID}},\n {'$lookup':\n {\n 'from': 'TestData',\n 'localField': 'MemberID',\n 'foreignField': 'AssignedToID',\n 'as': 'Table_Test'\n }\n }\n])\n", "text": "Hello, @SourabhBagrecha Thanks for the quick answer.No, I don’t think I’ll use an index on any fields for the time being.\nYes, sample documents for both collections are as follows:Collection 1 (1 Crore document )Collection 2 (10 Crore documents)Both these collections have a common field as ReportsTo (Collection1) and AssignedTo (Collection2).The query I’m now executing is as follows…", "username": "Aman_Jaiswal2" }, { "code": "", "text": "No, I don’t think I’ll use an index on any fields for the time being.This is a contradiction compare to your goal of Efficient Lookup for data fetching from databases. You wroteCan anyone suggest anythingThe very first step to increased performance has been suggestedhave an index on the fields that you are usingAlso have an index on the $match-ed fields.Without indexes it will always benot as efficient in terms of timeTo understand the importance of indexes, take MongoDB Courses and Trainings | MongoDB University.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Efficient Lookup for data fetching from databases (two collections)
2022-05-20T10:28:07.249Z
Efficient Lookup for data fetching from databases (two collections)
2,820
null
[]
[ { "code": "index.ts:32 Uncaught (in promise) Error: reason=\"role \\\"default\\\" in \\\"maintenance-tracker.User\\\" does not have update permission for document with _id: 6286bc0194c80f7c64c60124: could not validate document: \\n\\t_id: Invalid type. Expected: type: undefined, bsonType: objectId, given: [string mixed]\"; code=\"SchemaValidationFailedWrite\"; untrusted=\"update not permitted\"; details=map[]\n", "text": "Hi all,I’m running into the following error when trying to update a User document:I’m not sure what this means. I do not have trouble updating any other documents. Why would updating this document lead to issues? Does anyone have any tips on figure out what is going on? I’m quite new to this so I’d appreciate some guidance.The difference between the User documents and other documents is that I create User documents using a function which is triggered whenever a new user signs up.", "username": "Nedim_Bayrakdar" }, { "code": "exports = async function createNewUserDocument({user}) {\n const cluster = context.services.get(\"mongodb-atlas\");\n const users = cluster.db(\"maintenance-tracker\").collection(\"User\");\n return users.insertOne({\n _id: user.id,\n id: user.id.toString(),\n _partition: `user=${user.id}`,\n name: user.data.email,\n // canReadPartitions: [`user=${user.id}`],\n // canWritePartitions: [`project=${user.id}`],\n // memberOf: [\n // {\"name\": \"My Project\", \"partition\": `project=${user.id}`}\n // ],\n });\n};\nidObjectId", "text": "With the help of SchemaValidationFailedRead on GraphQL I resolved the issue.It apparently was related to the User document being generated by a function within realm, the function being:the id field is here created as a string, but in the schema was denoted as an ObjectId.", "username": "Nedim_Bayrakdar" }, { "code": "permission", "text": "I was too focused on the permission part of the error message, and not the validation error ", "username": "Nedim_Bayrakdar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error due to not having update permissions as a user
2022-05-20T10:32:23.141Z
Error due to not having update permissions as a user
3,668
null
[ "node-js" ]
[ { "code": "", "text": "Hello I just started the mongodb with nodejs and react on the front side.I’m looking for a way to do a “Put” request and change all the data at once\nI have a model called “Vote” in which I want to add +1 , I would like to do it for all my databasethank you", "username": "Mielpops" }, { "code": "", "text": "Please provide sample input documents and expected result. Also share what you have tried and explain how it fails to provide you with the expected result.Read before posting documents and code:\nMethod you need to update all documents:\nThe filter argument is {} to specify all documents.\nThe update parameter you need is", "username": "steevej" }, { "code": "exports.vote = (req, res) => { \n let edition;\n edition = {vote: (req.body.vote + 1) };\n const update = { $set: edition };\n /* const update = { $set: edition }; */\n const id = req.params.id;\n const conditions = { _id: id };\n const options = {\n upsert: true,\n new: true\n };\n Project.findOneAndUpdate(conditions, update, options, (err, response) => {\n if (err) return res.status(500).json({ msg: 'update failed', error: err });\n res.status(200).json({ msg: `document with id ${id} updated`, response: response });\n });\n\n};\n", "text": "Thank you for your replyCurrently i do it individually against each idBut I would like to do it for all at onceI would like to make + 1 to the vote, a on all my database and not only on the selected id", "username": "Mielpops" }, { "code": "", "text": "Thanks I succeeded using updateMany() and $inc", "username": "Mielpops" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Modify all the documents in my database
2022-05-21T12:36:10.897Z
Modify all the documents in my database
3,767
null
[ "queries" ]
[ { "code": "{\n\t\"_id\" : ObjectId(\"6287e3c6ae6cd20f3571fd80\"),\n\t\"attendanceStaff\" : [\n\t\t{\n\t\t\t\"attendanceId\" : \"6287e3c6ae6cd20f3571fd9a\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"20-05-2022\",\n\t\t\t\"dateString\" : \"20220520\",\n\t\t\t\"day\" : 20,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"morning\"\n\t\t},\n\t\t{\n\t\t\t\"attendanceId\" : \"62885168ae6cd2094074eb5a\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"21-05-2022\",\n\t\t\t\"dateString\" : \"20220521\",\n\t\t\t\"day\" : 21,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"morning\"\n\t\t},\n\t\t{\n\t\t\t\"attendanceId\" : \"628855b2ae6cd209405267f2\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"21-05-2022\",\n\t\t\t\"dateString\" : \"20220521\",\n\t\t\t\"day\" : 21,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"afternoon\"\n\t\t}\n\t],\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-05-20T18:53:58.086391Z\",\n\t\"isActive\" : true,\n\t\"month\" : 5,\n\t\"updatedAt\" : \"2022-05-20T18:53:58.086385Z\",\n\t\"userId\" : ObjectId(\"606b6c5fa0ccf722221c7319\"),\n\t\"year\" : 2022\n}\n{\n\t\"_id\" : ObjectId(\"6287e3c6ae6cd20f3571fd81\"),\n\t\"attendanceStaff\" : [\n\t\t{\n\t\t\t\"attendanceId\" : \"6287e3c6ae6cd20f3571fd9a\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"20-05-2022\",\n\t\t\t\"dateString\" : \"20220520\",\n\t\t\t\"day\" : 20,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"morning\"\n\t\t},\n\t\t{\n\t\t\t\"attendanceId\" : \"62885168ae6cd2094074eb5a\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"21-05-2022\",\n\t\t\t\"dateString\" : \"20220521\",\n\t\t\t\"day\" : 21,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"morning\"\n\t\t},\n\t\t{\n\t\t\t\"attendanceId\" : \"628855b2ae6cd209405267f2\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"21-05-2022\",\n\t\t\t\"dateString\" : \"20220521\",\n\t\t\t\"day\" : 21,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"afternoon\"\n\t\t}\n\t],\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-05-20T18:53:58.086403Z\",\n\t\"isActive\" : true,\n\t\"month\" : 5,\n\t\"updatedAt\" : \"2022-05-20T18:53:58.086397Z\",\n\t\"userId\" : ObjectId(\"606b6c77a0ccf72222c5d301\"),\n\t\"year\" : 2022\n}\n{\n\t\"_id\" : ObjectId(\"6287e3c6ae6cd20f3571fd82\"),\n\t\"attendanceStaff\" : [\n\t\t{\n\t\t\t\"attendanceId\" : \"6287e3c6ae6cd20f3571fd9a\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"20-05-2022\",\n\t\t\t\"dateString\" : \"20220520\",\n\t\t\t\"day\" : 20,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"morning\"\n\t\t},\n\t\t{\n\t\t\t\"attendanceId\" : \"62885168ae6cd2094074eb5a\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"21-05-2022\",\n\t\t\t\"dateString\" : \"20220521\",\n\t\t\t\"day\" : 21,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : 
\"morning\"\n\t\t},\n\t\t{\n\t\t\t\"attendanceId\" : \"628855b2ae6cd209405267f2\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"21-05-2022\",\n\t\t\t\"dateString\" : \"20220521\",\n\t\t\t\"day\" : 21,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"afternoon\"\n\t\t}\n\t],\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-05-20T18:53:58.086415Z\",\n\t\"isActive\" : true,\n\t\"month\" : 5,\n\t\"updatedAt\" : \"2022-05-20T18:53:58.086409Z\",\n\t\"userId\" : ObjectId(\"606b6cc1a0ccf72222117b3e\"),\n\t\"year\" : 2022\n}\n{\n\t\"_id\" : ObjectId(\"6287e3c6ae6cd20f3571fd83\"),\n\t\"attendanceStaff\" : [\n\t\t{\n\t\t\t\"attendanceId\" : \"6287e3c6ae6cd20f3571fd9a\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"20-05-2022\",\n\t\t\t\"dateString\" : \"20220520\",\n\t\t\t\"day\" : 20,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"morning\",\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"time\" : \"08:29\"\n\t\t},\n\t\t{\n\t\t\t\"attendanceId\" : \"62885168ae6cd2094074eb5a\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"21-05-2022\",\n\t\t\t\"dateString\" : \"20220521\",\n\t\t\t\"day\" : 21,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"morning\"\n\t\t},\n\t\t{\n\t\t\t\"attendanceId\" : \"628855b2ae6cd209405267f2\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"21-05-2022\",\n\t\t\t\"dateString\" : \"20220521\",\n\t\t\t\"day\" : 21,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"afternoon\",\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"time\" : \"08:30\"\n\t\t}\n\t],\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-05-20T18:53:58.086427Z\",\n\t\"isActive\" : true,\n\t\"month\" : 5,\n\t\"updatedAt\" : \"2022-05-20T18:53:58.086421Z\",\n\t\"userId\" : ObjectId(\"606b6cd7a0ccf7222269ae8d\"),\n\t\"year\" : 2022\n}\n{\n\t\"_id\" : ObjectId(\"6287e3c6ae6cd20f3571fd84\"),\n\t\"attendanceStaff\" : [\n\t\t{\n\t\t\t\"attendanceId\" : \"6287e3c6ae6cd20f3571fd9a\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"20-05-2022\",\n\t\t\t\"dateString\" : \"20220520\",\n\t\t\t\"day\" : 20,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"morning\",\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"time\" : \"08:29\"\n\t\t},\n\t\t{\n\t\t\t\"attendanceId\" : \"62885168ae6cd2094074eb5a\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"21-05-2022\",\n\t\t\t\"dateString\" : \"20220521\",\n\t\t\t\"day\" : 21,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"morning\"\n\t\t},\n\t\t{\n\t\t\t\"attendanceId\" : \"628855b2ae6cd209405267f2\",\n\t\t\t\"attendanceTakenById\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"attendanceTakenByName\" : \"Admin \",\n\t\t\t\"date\" : \"21-05-2022\",\n\t\t\t\"dateString\" : \"20220521\",\n\t\t\t\"day\" : 21,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"session\" : \"afternoon\",\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"time\" : \"08:30\"\n\t\t}\n\t],\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"insertedAt\" : \"2022-05-20T18:53:58.086439Z\",\n\t\"isActive\" : true,\n\t\"month\" : 
5,\n\t\"updatedAt\" : \"2022-05-20T18:53:58.086433Z\",\n\t\"userId\" : ObjectId(\"606b6cfaa0ccf722224daddc\"),\n\t\"year\" : 2022\n}\ndb.staff_attendance_database.find({\"userId\" : ObjectId(\"606b6cfaa0ccf722224daddc\"),\"month\" : 5,\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\")},{\"attendanceStaff.day\":21}).pretty()\n{\n\t\"_id\" : ObjectId(\"6287e3c6ae6cd20f3571fd84\"),\n\t\"attendanceStaff\" : [\n\t\t{\n\t\t\t\"day\" : 20\n\t\t},\n\t\t{\n\t\t\t\"day\" : 21\n\t\t},\n\t\t{\n\t\t\t\"day\" : 21\n\t\t}\n\t]\n}\n\n", "text": "The query us to filterThe result I’m gettingI just wanted to fetch all document of day 21", "username": "Prathamesh_N" }, { "code": "", "text": "This is the same question as your other thread How to fetch all the values of the specified date in the particular arrayChanging from day 18 to day 21 won’t provide you with a different answer. So here is the same answer:Try to do that in an aggregation $set stage using $filter.There are examples that you may use to help you.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to select the document based on day
2022-05-21T03:05:15.028Z
How to select the document based on day
1,738
https://www.mongodb.com/…1_2_1024x424.png
[ "app-services-user-auth", "react-js" ]
[ { "code": "", "text": "Hi, I am new to MongoDB Realm. I’m trying to build a simple app using react js and MongoDB realm. I have setting authentication using anonymous and email/password. In email/password authentication, I have set the confirmation method to automatically confirm user. I also have saved the draft and deploy the config, but when I am trying to register a new user from my web app, the user does not automatically confirmed by MongoDB, so the status is still pending, like this\n\nwhyPending1181×490 18.7 KB\n\nI have read the documentation and it says “… Realm immediately confirms new Email/Password users after registration …”. I would like to know if anyone has experienced this, and how to deal with this?Thanks in advance", "username": "fajarZuhri" }, { "code": "", "text": "[Update]\nI realized that the user status was “Pending User Login”, so I tried to log in to my web app, and apparently, it worked. But, I am still confused as, why the user goes to the pending list when first-time registration. any thought?", "username": "fajarZuhri" }, { "code": "app.emailPasswordAuth.registerUser(email, password)", "text": "I have the same issue. I selected ‘automatically confirm users’ in Atlas Realm but when I register a new user with app.emailPasswordAuth.registerUser(email, password), the new user is flagged as ‘pending confirmation’ and they cannot log in until they are confirmed. What am I missing?", "username": "Laekipia" }, { "code": "", "text": "I don’t think the user record is technically created until the confirmed user signs in for the first time.I just tested it. I logged in on a confirmed user, then my authentication trigger (which triggers after user creation) ran and created my additional user info.", "username": "Alex_Wells" }, { "code": "var xd = app.EmailPasswordAuth.RegisterUserAsync(\"[email protected]\", \"19sd5998984\");\nxd.GetAwaiter().GetResult();\n", "text": "Esto sucede por que nunca se llego a ejecutar esta tarea, en c# seria algo comola variable xd contendrá todo lo resuelto.", "username": "Pier_Castaneda" } ]
Unable to confirm user
2021-10-15T09:07:56.744Z
Unable to confirm user
5,773
null
[ "aggregation", "queries", "dot-net" ]
[ { "code": "$arrayElemAt: [\n '$Invoice',\n {\n $indexOfArray: [\n '$Invoice.LastModifiedDateTime',\n {\n $max: '$Payments.LastModifiedDateTime'\n }\n ]\n }\n ]\ndatabase.GetCollection<Invoice>(nameof(Invoice)).Aggregate()\n\t\t\t\t.Project(x => new ProjectResult\n {\n Payments = x.Payments == null ? null : x.Payments.ElementAt(0)\n });\n{\n \"$project\": {\n \"Payments\": {\n \"$cond\": [\n {\n \"$eq\": [\n \"$Payments\",\n null\n ]\n },\n null,\n {\n \"$arrayElemAt\": [\n \"$Payments\",\n 0\n ]\n }\n ]\n },\n \"_id\": 0\n }\n}\n", "text": "Below is the sample Mongo query to fetch the array elements based on the last modified date.I tried the code in the following wayTranslated QueryHow to implement the $indexOfArray in the code?", "username": "Sudhesh_Gnanasekaran" }, { "code": "$indexOfArrayElementAtint32ElementMatching$arrayElemAt$indexOfArrayvar query = coll.Aggregate()\n .Project(x => new ProjectResult\n {\n Payment = x.Payments == null ? null : x.Payments.First(y => y.Date == x.Payments.Max(y => y.Date))\n });\nConsole.WriteLine(query);\naggregate([{ \"$project\" : { \"Payment\" : { \"$cond\" : { \"if\" : { \"$eq\" : [\"$Payments\", null] }, \"then\" : null, \"else\" : { \"$arrayElemAt\" : [{ \"$filter\" : { \"input\" : \"$Payments\", \"as\" : \"y\", \"cond\" : { \"$eq\" : [\"$$y.Date\", { \"$max\" : \"$Payments.Date\" }] } } }, 0] } } }, \"_id\" : 0 } }])\n$sortArrayLastModifiedDateTime", "text": "Hi, @Sudhesh_Gnanasekaran,I understand that you’re trying to find a particular matching element of an array using $indexOfArray. There is no easy way to write this in C# as ElementAt takes an int32 and not a predicate. We would need to implement an extension method such as ElementMatching that could take a predicate and translate it into $arrayElemAt and $indexOfArray.You could try something like this:Output is:Alternatively CSHARP-3958 will allow you to sort arrays using the new $sortArray operator introduced in MongoDB 5.2. You could then sort by LastModifiedDateTime descending and take the first element. CSHARP-3958 is still a work in progress, but will be available in an upcoming release.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Thank you @James_Kovacs and for the update of CSHARP-3958", "username": "Sudhesh_Gnanasekaran" }, { "code": "var query = coll.Aggregate()\n .Project(x => new ProjectResult\n {\n Payment = x.Payments == null ? null : x.Payments.First(y => y.Date == x.Payments.Max(y => y.Date))\n });\n", "text": "Hi @James_KovacsThe expected translated query is wrongIn the below query “$Payments.Date” should come instead “$$y.Payments.Date” is replaced. 
Is this an existing bug, or is any workaround available? aggregate([{ \"$project\" : { \"Payment\" : { \"$cond\" : { \"if\" : { \"$eq\" : [\"$Payments\", null] }, \"then\" : null, \"else\" : { \"$arrayElemAt\" : [{ \"$filter\" : { \"input\" : \"$Payments\", \"as\" : \"y\", \"cond\" : { \"$eq\" : [\"$$y.Date\", { \"$max\" : \"$$y.Payments.Date\" }] } } }, 0] } } }, \"_id\" : 0 } }]) Thanks,\nSudhesh", "username": "Sudhesh_Gnanasekaran" }, { "code": "$field$$CURRENT.field{$max: \"$Payments.Date\"}{$max: \"$$CURRENT.Payments.Date\"}$$y$$CURRENTmongo> db.remittance.insert({Payments: [{Date: ISODate(\"2020-01-01\")}, {Date: ISODate(\"2021-01-01\")}, {Date: ISODate(\"2000-01-01\")}]})\nWriteResult({ \"nInserted\" : 1 })\n> db.remittance.aggregate([{ \"$project\" : { \"Payment\" : { \"$cond\" : { \"if\" : { \"$eq\" : [\"$Payments\", null] }, \"then\" : null, \"else\" : { \"$arrayElemAt\" : [{ \"$filter\" : { \"input\" : \"$Payments\", \"as\" : \"y\", \"cond\" : { \"$eq\" : [\"$$y.Date\", { \"$max\" : \"$Payments.Date\" }] } } }, 0] } } }, \"_id\" : 0 } }])\n{ \"Payment\" : { \"Date\" : ISODate(\"2021-01-01T00:00:00Z\") } }\n", "text": "Hi, @Sudhesh_Gnanasekaran,In MQL, $field is equivalent to $$CURRENT.field. (See $$CURRENT.) Thus {$max: \"$Payments.Date\"} is equivalent to {$max: \"$$CURRENT.Payments.Date\"}, and since $$y is the same as $$CURRENT, the two formulations are the same.Inserting some test data and running the MQL in the mongo shell, we see the expected result returned.If you have a test case that does not produce the expected result, please consider filing a CSHARP or SERVER ticket so that we can investigate further.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
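For reference, the $arrayElemAt/$indexOfArray expression from the top of the thread, assembled into a runnable shell pipeline (the collection name is an assumption): $indexOfArray finds the position of the maximum Date, and $arrayElemAt extracts the element at that position.

```javascript
db.Invoice.aggregate([
  { $project: {
      _id: 0,
      Payment: {
        $arrayElemAt: [
          "$Payments",
          { $indexOfArray: ["$Payments.Date", { $max: "$Payments.Date" }] }
        ]
      }
  } }
]);
```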
What is the Equivalent of $indexOfArray in C# Fluent Aggregate?
2022-05-16T14:15:42.462Z
What is the Equivalent of $indexOfArray in C# Fluent Aggregate?
2,731
https://www.mongodb.com/…592a19df4b9f.png
[ "dot-net", "swift", "atlas-device-sync", "android", "kotlin" ]
[ { "code": "", "text": "It’s been a busy month for us here at Realm, and we’re sure you could say the same! Here’s a quick recap of the latest Realm news and updates.We are excited to announce that Realm has officially launched preview support for .NET 6 which includes support for iOS, Android, and MAUI apps. Try it out now!Use the new AsyncWrite API to automatically dispatch a write to the background thread – eliminating all the custom code you used to have to spend time writing yourself.Join us in-person in NYC or virtually for our 7th annual MongoDB World, June 7 - 9. Hear the strategic business case for why the future runs on MongoDB (and Realm), discover the latest technologies to boost your business, and get tactical advice from experts who have built mission-critical applications at scale. You won’t want to miss:From Zero to Mobile Developer in 120 Minutes With Realm & SwiftUISimplify Your Mobile Front End with Realm Kotlin Multiplatform Mobile (KMM)Atlas Device Sync – Flexible Sync: The Future of Mobile to Cloud SynchronizationRegister NowNew video series from Stewart Lynch on incorporating Realm into your SwiftUI appsIn @StewartLynch’s new 5 part video series, he shows you step by step how to incorporate Realm into a SwiftUI application. Get started with part 1!New updates to John O’Reilly’s #KMM samplesLooking for new examples of using Realm in your KMM apps? Thanks to @joreilly – we’ve got you covered. Check out his updated BikeShare and FantasyPremierLeague sample apps.Say hi to some of our Developer Advocates at plSwiftRealm Developer Advocates @jdortiz and @dfreniche will be speaking at plSwift on topics like what makes a great architecture for iOS apps using SwiftUI and a day in the life of a developer advocate. Make sure to stop by and say hi!", "username": "Emma_Lullo" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Realm May 2022 Newsletter: .NET MAUI support, KMM samples, and MongoDB World
2022-05-20T20:38:41.314Z
Realm May 2022 Newsletter: .NET MAUI support, KMM samples, and MongoDB World
3,194
null
[ "queries", "python" ]
[ { "code": "[{'_id': \n ObjectId('6179b174d5611ff2a0d09896'),\n 'Visible': True,\n 'fechaCreacion': datetime.datetime(2021, 10, 27, 20, 6, 46, 572000),\n 'fechaUpdate': datetime.datetime(1, 1, 1, 0, 0),\n 'name': 'Cierre-1',\n 'fechaOrdenes': datetime.datetime(2021, 10, 28, 20, 6, 46, 572000),\n 'usuario': 'auser',\n 'jobId': 'jfe56c914ca41450e9bac2147bcd41767',\n 'stops': {'value': \n {'features': [\n {'attributes': {'ObjectID': 1,\n 'Name': 'KK--424|san #11 #31, sfe',\n 'RouteName': '07081211',\n \t\t\t\t\t 'Sequence': 4,\n 'SnapX': -100.92710374050219,\n 'SnapY': 25.508205162504694}},\n {'attributes': {'ObjectID': 2,\n 'Name': 'MX--461|Lorenzo Garza',\n 'RouteName': '07081211',\n 'Sequence': 5,\n 'SnapX': -100.90922743603264,\n 'SnapY': 25.450843660104084}},\n {'attributes': {'ObjectID': 3,\n 'Name': 'MX--487|aris',\n 'RouteName': '07081210',\n 'Sequence': 6,\n 'SnapX': -100.97002999999995,\n...\n {'attributes': {'ObjectID': 6, 'Name': '07081298', 'OrderCount': None}},\n {'attributes': {'ObjectID': 7, 'Name': '07081297', 'OrderCount': None}},\n {'attributes': {'ObjectID': 8,\n 'Name': '07081218',\n 'OrderCount': None}}]}}}]\nimport pdmongo as pdm\n\nmongouser = 'user'\nmongopsw = 'PsW2.www'\nmongohost = '202.mex-east-2.compute.amazonaws.com'\nmongodb = 'DataDB'\nmongouri = f\"mongodb://{mongouser}:{mongopsw}@{mongohost}\"\nmongourl = f\"mongodb://{mongouser}:{mongopsw}@{mongohost}/{mongodb}\"\n\ntry: \n client = MongoClient(mongouri)\n print(f\"Connected to MongoDB Successfully\")\n db = client\n rutas = db['Rutas'] \n cierres = db['Cierres'] \n \nexcept ConnectionFailure:\n print(f\"Could not connect to MongoDB\")\n sys.exit(1)\ncursor_rutas = rutas.find()\ndf_rutas = pd.json_normalize(list(cursor_rutas), max_level = 1)\n\n\ncursor_cierres = cierres.find({\"stops.features.attributes.name\":\"$exists\"},\n {\"stops.features.attributes.Name\": 1},\n {\"stops.features.attributes.RouteName\": 1},\n {\"stops.features.attributes.Sequence\": 1})\ndf_cierres = pd.json_normalize(cursor_cierres, max_level = 1)\n\ncursor_rutas.close()\ncursor_cierres.close()\n", "text": "Hi, I’ve beeing working with json files or json format about one year but all the work I’ve done is in Snowflake, but now I’m using SQL in DBeaver and most of all Python. I’ve beeing workin’ all night long with no sleeping and I can’t figure it out how to achieve the goal of gettin’ the information from this collectionThis is the example from the collection named “Cierres” and I want to obtain the information of “attributes” just “Name”, “RouteName” and “Sequence”.Here’s the work I’ve done and can paste all the work from last 24hrs with no sleepin’Define de DataFrame in pandas named “df_rutas” has beeing represented no problem to obtain the information but I want to obtain the DataFrame from collection ‘Cierres’ as mentioned before.PLEASE!!! 
I’m begging for some help!!", "username": "Aldo_Leal" }, { "code": "\"stops.features.attributes.name\":\"$exists\"\"stops.features.attributes.Name\": 1{\"stops.features.attributes.name\":\"$exists\"}{ \"stops.features.attributes.Name\": 1,\n \"stops.features.attributes.RouteName\": 1,\n \"stops.features.attributes.Sequence\": 1 }\n", "text": "Field names are case sensitive, so either name (lower case n) in \"stops.features.attributes.name\":\"$exists\" is wrong, or Name (upper case N) in \"stops.features.attributes.Name\": 1 is. Read the $exists documentation to see what is wrong with your syntax. The query is the first argument of cierres.find(); currently you have 4 arguments, and only {\"stops.features.attributes.name\":\"$exists\"} is the query. The second one is the projection. The closing brace after …Name:1 terminates the second parameter, so RouteName:1 and Sequence:1 are not part of the projection document. To have all 3 projected, try:", "username": "steevej" } ]
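Translating steevej's correction into a single shell query may help. Note that in the sample document the features array actually sits under stops.value, so the full path likely needs that extra value segment; this is an inference from the posted document, not from the replies:

```javascript
// One query document, one projection document, and $exists used as an
// operator rather than a string.
db.Cierres.find(
  { "stops.value.features.attributes.Name": { $exists: true } },
  { "stops.value.features.attributes.Name": 1,
    "stops.value.features.attributes.RouteName": 1,
    "stops.value.features.attributes.Sequence": 1 }
);
```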
Get information from nested objects and arrays within a collection
2022-05-20T19:27:01.828Z
Get information from nested objects and arrays within a collection
1,786
https://www.mongodb.com/…2e348a1c84fa.png
[]
[ { "code": "", "text": "Hello,I’ve just completed an upgrade from 4.2 to 4.4 and it worked smoothly.\nNow, I wanted to upgrade to 5 and then I read\n\nimage802×174 24.6 KB\n\nbut didn’t find any further instructions.So do I have to simply go back to the 4.4 docs and do steps 1 & 2 of the uninstallation process:\n\nimage802×536 31.8 KB\n\nand then install 5 and 5 will simply pick all my files and everything will run?Thanks,\nChris", "username": "Chris_Haus" }, { "code": "chown mongodb:mongodb /tmp/mongodb*\n", "text": "I’ve been impatient and went ahead.\nIndeed, it works as described.Uninstall 4.4 with the first 2 steps only.\nInstall 5.I had to change the owner of my socket file:And the server is up and running on v5.", "username": "Chris_Haus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Upgrade from 4.4 to 5 on Debian 10
2022-05-20T19:07:14.820Z
Upgrade from 4.4 to 5 on Debian 10
1,748
null
[ "aggregation" ]
[ { "code": "daymonthyearshop_idtypecostshop_idtype", "text": "Hello, I’m fairly new to MongoDB Aggregation Pipelines. I’m trying to test the performance of MongoDB pipelines for possible use in my company.\nFor this test, I have generated 50 million documents (using this amazing tool). The document structure it’s fairly simple:The main index is a composite index for day, month, year + shop_id and type.\nOne of my requirements for aggregation queries is to sum cost for a specific day (or month), filtering by shop_id and type. The pipeline I designed is quite simple, but it does the job and it’s also quite fast, except for the first “execution”.To give an idea of the performances, on my 2.4Ghz laptop, using a 50 million documents collection, the first execution takes between 2 and 3 seconds and the following executions between 500 and 600 ms.\nOverall, these are excellent response times, but I’m wondering why the first execution is always a bit slower?\nIs MongoDB caching the “correct” exec plan? Or, am I getting cached results after the second execution?I would love to understand more, and also how should I go about understanding what MongoDB is doing under the hood in cases like this.Thanks!", "username": "Luciano_Fiandesio" }, { "code": "", "text": "Probably your PC operating system’s disk caching is speeding up the subsequent execution times.", "username": "Jack_Woehr" } ]
Understanding MongoDB's Aggregation Pipeline performance
2022-05-20T18:01:39.198Z
Understanding MongoDB’s Aggregation Pipeline performance
1,408
null
[ "replication" ]
[ { "code": "{\"schema\":{\"type\":\"string\",\"optional\":false},\"payload\":\"<updated_document>\"}\n{<updated_document>}\n\"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n\"connection.uri\": \"mongodb://mongo/?replicaSet=rs0\",\n\"database\": \"<db_name>\",\n\"collection\": \"<collection_name>\",\n\"publish.full.document.only\": \"true\",\n\"topic.namespace.map\": \"{\\\"<db_name>.<collection_name>\\\": \\\"<kafka_topic_name>\\\"}\"\n", "text": "How can I change structure of the message that is written to kafka by mongodb kafka connector?\nBy default it writes message like belowBut I just want the payload i.e. the message I want to be written is likeMy source config is", "username": "Animesh_Pathak" }, { "code": "key.converter.schemas.enable=false\nvalue.converter.schemas.enable=false\nkey.converter=org.apache.kafka.connect.storage.StringConverter\nvalue.converter=org.apache.kafka.connect.storage.StringConverter\n", "text": "Is that your complete source config? What converter are you using? StringConverter or JsonConverter?Try String Converter something like this:", "username": "Robert_Walters" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Change Kafka message structure of the MongoDB Kafka connector
2022-05-20T12:41:31.498Z
Change Kafka message structure of the MongoDB Kafka connector
1,464
null
[ "aggregation" ]
[ { "code": "// Schema for styles\nconst styles = mongoose.Schema({\n id: Number,\n productId: Number,\n name: String,\n sale_price: String,\n original_price: String,\n default_price: Boolean,\n}, {\n strict: false,\n});\n\n// Schema for skus\nconst skus = mongoose.Schema({\n id: Number,\n styleId: Number,\n size: String,\n quantity: String,\n}, {\n strict: false,\n});\n{\n \"style_id\": 1,\n \"name\": \"Forest Green & Black\",\n \"original_price\": \"140\",\n \"sale_price\": \"0\",\n \"default?\": true,\n \"skus\": [\n { \n \"id\": 37,\n \"styleId\": 1,\n \"size\": \"XS\",\n \"quantity\": 16 \n },\n { \n \"id\": 38,\n \"styleId\": 1,\n \"size\": \"S\",\n \"quantity\": 8\n } \n ] \n\n}\n{\n \"style_id\": 1,\n \"name\": \"Forest Green & Black\",\n \"original_price\": \"140\",\n \"sale_price\": \"0\",\n \"default?\": true,\n \"skus\": {\n \"37\": {\n \"styleId\": 1,\n \"size\": \"XS\",\n \"quantity\": 16 \n },\n \"38\": {\n \"styleId\": 1,\n \"size\": \"S\",\n \"quantity\": 8\n }\n }\n \n\n}\n", "text": "I am currently doing a project where my Database Management System is MongoDB. I am writing an aggregation pipeline where there are several stages. I am currently struggling at a particular stage where I want to obtain the following output. MongoDB has so many operator expressions that I am confused about which one to use to achieve this. I have a collection called Styles and Skus which are as follows:Each style can have several SKUs, one-to-many relationships. In my aggregation pipeline, I am using $lookup to find all the SKUs of that particular style and adding a new field called SKUs in the styles document. I am getting results like this after the $lookup.Which is expected as $lookup returns a matching array. But I want my Styles document to look like this.Can someone give any idea how to structure the data like above in aggregation pipeline? Any help would be greatly appreciated. Thanks in advance.", "username": "Ashequl_Haque" }, { "code": "", "text": "You are in luck. See https://docs.mongodb.com/manual/reference/operator/aggregation/arrayToObject/ you might need to do some $project or $map first.Be aware that some client drivers do not like numbers as field keys and might build a sparse array anyway. See Mongodb import object with numbers as keys results in arrayPersonally, I would keep it as an array as I find it is cleaner and more representative of what the data is. You could pass skus[n] to a function and you would have everything about the sku. Otherwise you wound need to pass the key (ie: 37) and the object to have all the info about the sku because sku.37 does not have the key. It would make further $match harder to do because 37 is really data not a key.", "username": "steevej" }, { "code": "", "text": "\ngetting this error($arrayToObject requires an object keys of ‘k’ and ‘v’. Found incorrect number of keys:5)\nplease help me", "username": "vamshi_krishna_jiguru" }, { "code": "", "text": "If you look at the documentation you will see that your array product does not fulfill the initial requirements.You will need to first use $map to modify your array to match the requirement.More importantly, why do you want to apply this transformation?", "username": "steevej" } ]
How to convert array of objects into nested object in MongoDB aggregation pipeline
2021-10-23T00:37:27.503Z
How to convert array of objects into nested object in MongoDB aggregation pipeline
6,473
null
[ "aggregation", "node-js" ]
[ { "code": "", "text": "Apologies first of all. I’m migrating from a SQL platform to MongoDB and still finding it strange going from a SQL relational querying to the MongoDB querying.I’m using NodeJS to write an application to search and select records from my migrated database. I have a collection called Dealers which contains within it an array of objects called Vehicles.I’m using an aggregation to search for vehicles with specific values. My aggregate matches the specific Dealer document, then unwinds the vehicles array and then matches this with things like Vehicles.Model: ‘FORD’, Vehicles.FuleType: ‘Petrol’ etc.And this is where my logic hits a bit of a wall with my SQL way of thinking. My aggregate returns a number of records, and I’m not sure how to then select one of those returned records to return the vehicles full data.Where as before the Dealers would by in a table of their own all with IDs. And the Vehicles would be in a separate table with its own ID linking to the Dealers ID.Like I say, sorry if it’s a stupid question, but the none relational database environment is totally different a SQL one. I like it a lot. Just need to adjust my thinking.Thanks in advance.", "username": "Andy_Bryan" }, { "code": "", "text": "I think I’ve figured it out.\nI added a ‘includeArrayIndex’: ‘vehicleIndex’ to my $unwind stage.Not sure if this is the best way to do it or not.", "username": "Andy_Bryan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filtering and selecting an Array object within a Collection
2022-05-19T15:35:50.070Z
Filtering and selecting an Array object within a Collection
1,295
null
[ "queries" ]
[ { "code": "{\n\t\"_id\" : ObjectId(\"627e2e35ae6cd2104c58ef38\"),\n\t\"attendance\" : [ ],\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"isActive\" : true,\n\t\"month\" : 5,\n\t\"teamId\" : ObjectId(\"6069a681a0ccf704e78a720c\"),\n\t\"userId\" : ObjectId(\"6070b1d5b6d3d082e72c0cd8\"),\n\t\"year\" : 2022,\n\t\"offlineAttendance\" : [\n\t\t{\n\t\t\t\"attendance\" : true,\n\t\t\t\"attendanceAt\" : \"2022-05-13T10:08:53.322930Z\",\n\t\t\t\"attendanceId\" : \"627e2e35ae6cd2104c58ef63\",\n\t\t\t\"date\" : \"13-05-2022\",\n\t\t\t\"dateString\" : \"20220513\",\n\t\t\t\"day\" : 13,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"subjectId\" : \"606b6fa1a0ccf7222260a570\",\n\t\t\t\"subjectName\" : \"English\",\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"15:38\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-13T11:30:18.734192Z\",\n\t\t\t\"attendanceId\" : \"627e414aae6cd2104c8cd4d9\",\n\t\t\t\"date\" : \"13-05-2022\",\n\t\t\t\"dateString\" : \"20220513\",\n\t\t\t\"day\" : 13,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"subjectId\" : \"606b6fa1a0ccf7222260a570\",\n\t\t\t\"subjectName\" : \"English\",\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"17:00\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-14T04:36:47.146191Z\",\n\t\t\t\"attendanceId\" : \"627f31dfae6cd20cef0878bb\",\n\t\t\t\"date\" : \"14-05-2022\",\n\t\t\t\"dateString\" : \"20220514\",\n\t\t\t\"day\" : 14,\n\t\t\t\"isApproved\" : true,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:06\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-15T09:41:34.588895Z\",\n\t\t\t\"attendanceId\" : \"6280caceae6cd208d95bda7a\",\n\t\t\t\"date\" : \"15-05-2022\",\n\t\t\t\"dateString\" : \"20220515\",\n\t\t\t\"day\" : 15,\n\t\t\t\"isApproved\" : true,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"15:11\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-18T04:58:28.541419Z\",\n\t\t\t\"attendanceId\" : \"62847cf4ae6cd20eb1e1ebb0\",\n\t\t\t\"date\" : \"18-05-2022\",\n\t\t\t\"dateString\" : \"20220518\",\n\t\t\t\"day\" : 18,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:28\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-18T05:02:01.584568Z\",\n\t\t\t\"attendanceId\" : \"62847dc9ae6cd20eb1a772be\",\n\t\t\t\"date\" : \"18-05-2022\",\n\t\t\t\"dateString\" : \"20220518\",\n\t\t\t\"day\" : 18,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:32\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-18T05:11:53.673496Z\",\n\t\t\t\"attendanceId\" : \"62848019ae6cd20eb1e54319\",\n\t\t\t\"date\" : \"18-05-2022\",\n\t\t\t\"dateString\" : \"20220518\",\n\t\t\t\"day\" : 18,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:41\"\n\t\t}\n\t]\n}\n{\n\t\"_id\" : ObjectId(\"627e2e35ae6cd2104c58ef39\"),\n\t\"attendance\" : [ ],\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"isActive\" : 
true,\n\t\"month\" : 5,\n\t\"teamId\" : ObjectId(\"6069a681a0ccf704e78a720c\"),\n\t\"userId\" : ObjectId(\"6070b1d5b6d3d082e72c0cdb\"),\n\t\"year\" : 2022,\n\t\"offlineAttendance\" : [\n\t\t{\n\t\t\t\"attendance\" : true,\n\t\t\t\"attendanceAt\" : \"2022-05-13T10:08:53.322930Z\",\n\t\t\t\"attendanceId\" : \"627e2e35ae6cd2104c58ef63\",\n\t\t\t\"date\" : \"13-05-2022\",\n\t\t\t\"dateString\" : \"20220513\",\n\t\t\t\"day\" : 13,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"subjectId\" : \"606b6fa1a0ccf7222260a570\",\n\t\t\t\"subjectName\" : \"English\",\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"15:38\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-13T11:30:18.734192Z\",\n\t\t\t\"attendanceId\" : \"627e414aae6cd2104c8cd4d9\",\n\t\t\t\"date\" : \"13-05-2022\",\n\t\t\t\"dateString\" : \"20220513\",\n\t\t\t\"day\" : 13,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"subjectId\" : \"606b6fa1a0ccf7222260a570\",\n\t\t\t\"subjectName\" : \"English\",\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"17:00\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-14T04:36:47.146191Z\",\n\t\t\t\"attendanceId\" : \"627f31dfae6cd20cef0878bb\",\n\t\t\t\"date\" : \"14-05-2022\",\n\t\t\t\"dateString\" : \"20220514\",\n\t\t\t\"day\" : 14,\n\t\t\t\"isApproved\" : true,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:06\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-15T09:41:34.588895Z\",\n\t\t\t\"attendanceId\" : \"6280caceae6cd208d95bda7a\",\n\t\t\t\"date\" : \"15-05-2022\",\n\t\t\t\"dateString\" : \"20220515\",\n\t\t\t\"day\" : 15,\n\t\t\t\"isApproved\" : true,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"15:11\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-18T04:58:28.541419Z\",\n\t\t\t\"attendanceId\" : \"62847cf4ae6cd20eb1e1ebb0\",\n\t\t\t\"date\" : \"18-05-2022\",\n\t\t\t\"dateString\" : \"20220518\",\n\t\t\t\"day\" : 18,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:28\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-18T05:02:01.584568Z\",\n\t\t\t\"attendanceId\" : \"62847dc9ae6cd20eb1a772be\",\n\t\t\t\"date\" : \"18-05-2022\",\n\t\t\t\"dateString\" : \"20220518\",\n\t\t\t\"day\" : 18,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:32\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-18T05:11:53.673496Z\",\n\t\t\t\"attendanceId\" : \"62848019ae6cd20eb1e54319\",\n\t\t\t\"date\" : \"18-05-2022\",\n\t\t\t\"dateString\" : \"20220518\",\n\t\t\t\"day\" : 18,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:41\"\n\t\t}\n\t]\n}\n{\n\t\"_id\" : ObjectId(\"627e2e35ae6cd2104c58ef3a\"),\n\t\"attendance\" : [ ],\n\t\"groupId\" : ObjectId(\"5f06cca74e51ba15f5167b86\"),\n\t\"isActive\" : true,\n\t\"month\" : 5,\n\t\"teamId\" : ObjectId(\"6069a681a0ccf704e78a720c\"),\n\t\"userId\" : ObjectId(\"6070b1d5b6d3d082e72c0cde\"),\n\t\"year\" : 
2022,\n\t\"offlineAttendance\" : [\n\t\t{\n\t\t\t\"attendance\" : true,\n\t\t\t\"attendanceAt\" : \"2022-05-13T10:08:53.322930Z\",\n\t\t\t\"attendanceId\" : \"627e2e35ae6cd2104c58ef63\",\n\t\t\t\"date\" : \"13-05-2022\",\n\t\t\t\"dateString\" : \"20220513\",\n\t\t\t\"day\" : 13,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"subjectId\" : \"606b6fa1a0ccf7222260a570\",\n\t\t\t\"subjectName\" : \"English\",\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"15:38\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-13T11:30:18.734192Z\",\n\t\t\t\"attendanceId\" : \"627e414aae6cd2104c8cd4d9\",\n\t\t\t\"date\" : \"13-05-2022\",\n\t\t\t\"dateString\" : \"20220513\",\n\t\t\t\"day\" : 13,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"subjectId\" : \"606b6fa1a0ccf7222260a570\",\n\t\t\t\"subjectName\" : \"English\",\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"17:00\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-14T04:36:47.146191Z\",\n\t\t\t\"attendanceId\" : \"627f31dfae6cd20cef0878bb\",\n\t\t\t\"date\" : \"14-05-2022\",\n\t\t\t\"dateString\" : \"20220514\",\n\t\t\t\"day\" : 14,\n\t\t\t\"isApproved\" : true,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:06\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"absent\",\n\t\t\t\"attendanceAt\" : \"2022-05-15T09:41:34.588895Z\",\n\t\t\t\"attendanceId\" : \"6280caceae6cd208d95bda7a\",\n\t\t\t\"date\" : \"15-05-2022\",\n\t\t\t\"dateString\" : \"20220515\",\n\t\t\t\"day\" : 15,\n\t\t\t\"isApproved\" : true,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"15:11\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-18T04:58:28.541419Z\",\n\t\t\t\"attendanceId\" : \"62847cf4ae6cd20eb1e1ebb0\",\n\t\t\t\"date\" : \"18-05-2022\",\n\t\t\t\"dateString\" : \"20220518\",\n\t\t\t\"day\" : 18,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:28\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-18T05:02:01.584568Z\",\n\t\t\t\"attendanceId\" : \"62847dc9ae6cd20eb1a772be\",\n\t\t\t\"date\" : \"18-05-2022\",\n\t\t\t\"dateString\" : \"20220518\",\n\t\t\t\"day\" : 18,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:32\"\n\t\t},\n\t\t{\n\t\t\t\"attendance\" : \"present\",\n\t\t\t\"attendanceAt\" : \"2022-05-18T05:11:53.673496Z\",\n\t\t\t\"attendanceId\" : \"62848019ae6cd20eb1e54319\",\n\t\t\t\"date\" : \"18-05-2022\",\n\t\t\t\"dateString\" : \"20220518\",\n\t\t\t\"day\" : 18,\n\t\t\t\"isApproved\" : false,\n\t\t\t\"teacherId\" : \"6069a5daa0ccf704e7319d16\",\n\t\t\t\"teacherName\" : \"Admin \",\n\t\t\t\"time\" : \"10:41\"\n\t\t}\n\t]\n}\n\n{\n\t\"_id\" : ObjectId(\"627e2e35ae6cd2104c58ef27\"),\n\t\"offlineAttendance\" : [\n\t\t{\n\t\t\t\"day\" : 18\n\t\t}\n\t]\n}\n{\n\t\"_id\" : ObjectId(\"627e2e35ae6cd2104c58ef28\"),\n\t\"offlineAttendance\" : [\n\t\t{\n\t\t\t\"day\" : 18\n\t\t}\n\t]\n}\n{\n\t\"_id\" : ObjectId(\"627e2e35ae6cd2104c58ef29\"),\n\t\"offlineAttendance\" : [\n\t\t{\n\t\t\t\"day\" : 18\n\t\t}\n\t]\n}\n", "text": "MONGO 
QUERY\ndb.offline_class_attendance.find({\"month\":5,\"offlineAttendance.day\":18},{\"offlineAttendance.day.$\":1}).pretty()When I project like this, the positional $ operator returns only the first element with day: 18 in that array, but I want to grab all the elements with day: 18.", "username": "Prathamesh_N" }, { "code": "db.offline_class_attendance.find({\"month\":5,\"offlineAttendance.day\":18},{\"offlineAttendance.day.$\":1}).pretty()\n", "text": "", "username": "Prathamesh_N" }, { "code": "", "text": "Try to do that in an aggregation $set stage using $filter.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
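A hedged sketch of steevej's $set/$filter suggestion, reusing the collection and field names from the thread above. Unlike the positional $ projection, $filter keeps every offlineAttendance element whose day matches:

```javascript
// Minimal mongosh sketch, assuming the document shape shown above.
// $filter returns ALL matching array elements, not just the first one.
db.offline_class_attendance.aggregate([
  { $match: { month: 5, "offlineAttendance.day": 18 } },
  { $set: {
      offlineAttendance: {
        $filter: {
          input: "$offlineAttendance",
          as: "a",
          cond: { $eq: ["$$a.day", 18] }
        }
      }
  } }
])
```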
How to fetch all the values of the specified date in the particular array
2022-05-18T05:40:41.460Z
How to fetch all the values of the specified date in the particular array
3,136
null
[ "queries", "indexes" ]
[ { "code": "student{\n\t\"sample\": false,\n\t\"paid\": true,\n\t\"status\": 1,\n\t\"department\": \"1435334\",\n\t\"ts\" : ISODate(\"2022-04-20T04:51:00.731Z\"),\n}\ndepartment{\n\t\"sample\": true,\n\t\"paid\": true,\n\t\"status\": 4,\n\t\"ts\" : ISODate(\"2022-04-20T04:51:00.731Z\"),\n}\ndb.student.find({\n\t\"paid\": true,\n\t\"sample\": {\"$ne\": true},\n\t\"status\": {\"$ne\": 4},\n}).sort({'ts': -1}).limit(10)\ndb.student.find({\n\t\"paid\": true,\n\t\"sample\": {\"$ne\": true},\n\t\"status\": {\"$ne\": 4},\n\t\"ts\": {\"$lt\": createdAt},\n}).sort({'ts': -1}).limit(10)\ndb.student.find({\n\t\"paid\": true,\n\t\"sample\": {\"$ne\": true},\n\t\"status\": {\"$ne\": 4},\n\t\"ts\": {\"$lt\": createdAt},\n\t\"department\": {\"$in\": [\"1\", \"2\", \"3\"]}\n}).sort({'ts': -1}).limit(10)\n{ \"paid\": true, \"sample\": {\"$ne\": true}, \"status\": {\"$ne\": 4} }\n{\n\t\"key\" : {\n\t\t\"status\": 1,\n\t\t\"department\": 1,\n\t\t\"ts\": 1,\n\t},\n\t\"name\" : \"idx_student_data\",\n\t\"background\" : true,\n\t\"partialFilterExpression\" : {\n\t\t\"paid\" : true,\n\t}\n}\ndepartment", "text": "A document in my student collection looks like this:There are some records, which do not have department field. So another sample:so I have a query which is like:If it is being paginated, then:sometimes I need to find students who belong to departments:Now I want to know an optimal index which can cover these.In my query, three things are always constant:so I want to add these to partial expression, so that my index has only these values.Following ESR rule, I came with following index:", "username": "V_N_A" }, { "code": "{ \"paid\": true, \"sample\": {\"$ne\": true}, \"status\": {\"$ne\": 4} }\n\"partialFilterExpression\" : {\n\t\t\"paid\" : true,\n\t}\n\"sample\": {\"$ne\": true}", "text": "If you want a partial index forthree things are always constant:You could have that as partialFilterExpression: value rather thanAny reason why you have\"sample\": {\"$ne\": true}rather than sample:false.", "username": "steevej" }, { "code": "sample", "text": "hey @steevej thank you for responding.rather than sample:false .this is because, the sample isn’t always present in all documents. So there three states:So, in my query, I want to consider only the documents which don’t have sample field at all or sample field is set to false. How do I represent this in the partial filter expression?", "username": "V_N_A" }, { "code": "\"status\": {\"$ne\": 4},\n$ne0 to 4", "text": "I am also not on how to represent this on the partial filter exp:Since they don’t support $ne operator. My status value ranges from 0 to 4 and I want to avoid all the documents which have status as 4", "username": "V_N_A" }, { "code": "0 to 4", "text": "How do I represent this in the partial filter expression?I do not think you can but you might migrate your data so that sample is always there. This way you could add sample:false to your partialFilterExpression.You are in luck withstatus value ranges from 0 to 4as you could then use status:{$lt:4} to your filter.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help me create a correct index for my data
2022-05-19T12:55:30.531Z
Help me create a correct index for my data
1,880
null
[ "mongoose-odm" ]
[ { "code": "", "text": "{\n“campaignID”: {\n“network”: {\n“primary”: “61c553430c61c7ef2aeb1755”,\n“secondary”: “61c553430c61c7ef2aeb1755”\n},\n“linkURL”: “”,\n“_id”: “61c6a74d0c61c7ef2aeb24d4”,\n“status”: false,\n“published”: false,\n“created”: “Wed Dec 15 2021 12:52:06 GMT+0300 (East Africa Time)”,\n“__v”: 0\n},\n“label”: “How is Web3?”,\n“values”: [\n“good, bad, very bad”\n],\n“_id”: “62878191ed8c11dbbfa5c5b2”,\n“__v”: 0\n}", "username": "kerenke_tepela" }, { "code": "", "text": "Read Formatting code and log snippets in posts and update the published document so that we can cut-n-paste it.You requirement is not clear. Which _id? You have 2 in the publish document and 61c6a74d0c61c7ef2aeb24d4 is already a field of compaignID.", "username": "steevej" }, { "code": "", "text": "I need to get this id 61c6a74d0c61c7ef2aeb24d4", "username": "kerenke_tepela" }, { "code": "\"campaignID._id\"", "text": "The field is referred as \"campaignID._id\".", "username": "steevej" } ]
How do I get The _id inside campaignID?
2022-05-20T12:06:25.819Z
How do I get The _id inside campaignID?
3,496
null
[ "aggregation" ]
[ { "code": "{ \"_id\" : ObjectId(\"6284a6b84b171c659ec86561\"), \"22\" : 423 }\n{ \"_id\" : ObjectId(\"6284a6b84b171c659ec86562\"), \"22\" : 506 }\n...\n{ \"_id\" : ObjectId(\"6284a6b84b171c659ec86561\"), \"23\" : \"AS\" }\n{ \"_id\" : ObjectId(\"6284a6b84b171c659ec86562\"), \"23\" : \"DF\" }\n...\npipeline = [\n {'$lookup': {\n 'from': 'Rec22', \n 'localField': '_id', \n 'foreignField': '_id', \n 'as': 'fromExtra1'}\n }, \n \n {'$lookup': {\n 'from': 'Rec23', \n 'localField': '_id', \n 'foreignField': '_id', \n 'as': 'fromExtra2'}\n }, \n \n {'$replaceRoot': {'newRoot': {'$mergeObjects': [{'$arrayElemAt': ['$fromExtra2', 0]}, '$$ROOT',\n {'$arrayElemAt': ['$fromExtra1', 0]}, '$$ROOT']}}}, \n \n {'$project': {'fromExtra2': 0, 'fromExtra1': 0}}, \n \n {'$addFields': {\n '22_str': {'$toString': '$22'},\n '23_str': {'$toString': '$23'}}\n }, \n \n {'$project': {\n '24': {\n '$concat': [{'$ifNull': ['$22_str', '']}, '-', {'$ifNull': ['$23_str', '']}]}}\n }, \n \n {'$out': 'Rec24'}\n]\n", "text": "I have two collections (Rec22, Rec23) having 120K documents in each.\nRec22Rec23I am making a aggregate to concatenate “22” and “23” as store the result in new collection. The aggregation pipeline is the following:The query execution time is about 10s. However in case if “22” and “23” fields are in the same collection, the execution time of concatenating these fields and storing the results in new collection is less than 0.5s.\nIs there any way to optimize the aggregation pipeline to get better performance?", "username": "Vahe_Sahakyan" }, { "code": "", "text": "Do you have sample documents from the starting collection?", "username": "steevej" }, { "code": "", "text": "The documents in collection on which the aggregation is applied have the following structure:{“id\" : ObjectId(“6284a6b84b171c659ec86561”), “ListingId” : 987546, “State” : “CA”, “Score”: 15 \"row_index” : 0 }\n{“id\" : ObjectId(“6284a6b84b171c659ec86562”), “ListingId” : 986875, “State” : “CA”, “Score”: 23 \"row_index” : 1 }\n…\nWhen I make aggregation to concatenate fields from this collection (for exammple “ListingId” and “State”) and store the result of aggregation in new collection the execution time is about 0.4s. I understand that when lookup is included in aggregation it may take longer time to retrive documents from different collections.", "username": "Vahe_Sahakyan" }, { "code": "", "text": "I do not see anything obvious. May be your hardware setup is insufficient for your workload.", "username": "steevej" } ]
Aggregate slow on $lookup
2022-05-18T15:44:37.854Z
Aggregate slow on $lookup
2,035
null
[ "java", "performance" ]
[ { "code": "2022-05-16T03:24:49.382Z : Connection check out started for server\n2022-05-16T03:24:49.382Z : Connection connectionId{localValue:3902} checked out\n2022-05-16T03:24:49.382Z : Sending command '{\"find\":.......' on connection [connectionId{localValue:3902}]\n2022-05-16T03:24:49.384Z : Execution of command with request id <id> completed successfully in 1.75 ms on connection [connectionId{localValue:3902}] \n\n2022-05-16T03:24:54.392Z : Received batch of 1 documents with cursorId 0 from server\n2022-05-16T03:24:54.392Z : Checked in connection [connectionId{localValue:3902}] to server\nGetMore", "text": "This is connection lifecycle logCommand execution itself was done in 1.75ms which has the single document … so what’s happening after command execution and before connection checkin which is taking 5s and sometimes 10s. … There is no GetMore command executed… first batch itself has the result.driver = Java sync driver 4.6.0\nserver = 4.0.5", "username": "Pradeepp_Sahnii" }, { "code": "", "text": "Hi @Pradeepp_Sahnii I don’t have a hypothesis yet that could explain this, so allow me to ask a few questions:Thanks,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "e.g This is what we are using for find by id ... and sometimes we see issue in this sometimes in other commands\n\n BsonDocument filter = new BsonDocument()\n .append(\"field1\", new BsonString(\"value1))\n .append(\"field2\", new BsonString(\"value2));\n\nFindIterable<CustomPojo> findIterable = myCollection.find(filter);\nreturn findIterable.first();\n\n", "text": "@Jeffrey_YeminI was looking at the flow of find command execution, it looks like after the command execution event … driver releases the response buffers to the PowerOfTwoBufferPool … and there it acquires lock/permits … is it possible that release buffer is causing these intermittent pauses ?", "username": "Pradeepp_Sahnii" }, { "code": "2022-05-16T03:24:49.382Z : Connection check out started for server2022-05-16T03:24:54.392Z : Checked in connection [connectionId{localValue:3902}] to server", "text": "Please share a little bit more code.How your find is called?How your connection is established?How is myCollection initialized?From your lifecycle log, it looks like the command is sent right away when the connection is established. What is happening before2022-05-16T03:24:49.382Z : Connection check out started for serverIs there anything happening after2022-05-16T03:24:54.392Z : Checked in connection [connectionId{localValue:3902}] to serverCould you explain a little bit more about the lifecycle? How is the process doing the find started? What fires the find command?", "username": "steevej" }, { "code": "", "text": "@Pradeepp_Sahnii can you give me a sense of your Java driver upgrade history? Was this application running on previous driver releases, and if so, after what upgrade did you first start seeing this issue? 
I’m trying to narrow down whether this is a regression introduced by a recent (or not so recent) release of the driver.Thanks,\nJeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "@Jeffrey_Yemin A few more details:earlier we were using 3.11.0; now we have upgraded to 4.6.0. However, we were seeing this issue in 3.11.0 as well.we are seeing this issue only for app servers deployed in Azure … it works fine for app servers in AWS.connection pool settingsThe thread pool which executes the mongo commands has max size 150 and core size 100.", "username": "Pradeepp_Sahnii" }, { "code": "", "text": "I don’t currently have a hypothesis for a driver issue that could explain what you’re observing. The 3.11 driver is widely used, and no one else has ever reported this sort of behavior. Given that, it seems likely that the issue is somewhat unique to your particular application and deployment.Please let us know though if you uncover any additional evidence that would point towards a driver issue.Regards,\nJeff", "username": "Jeffrey_Yemin" } ]
Java driver connection lifecycle is taking 5-10 seconds to complete
2022-05-16T09:53:23.514Z
Java driver connection lifecycle is taking 5-10 seconds to complete
3,508
null
[]
[ { "code": "", "text": "We need to design a way to manage and automate deployment of mongodb database scripts.Today we do this adhoc and manually, and someone needs to remember to perform this every time we promote changes across environments. We need some mechanism to manage state of executed scripts and and not require having to rerun them in specific environments as well as we should be able integrate this with our CI/CD process.Need some help and pointers regarding this, like something similar to DACPAC projects for SQL server and DACPAC tasks in Azure DevOps release pipelines for SQL Server. So what is the equivalent thing in MongoDB? By the way, we are using Azure DevOps for CI/CD.", "username": "Ambar_Ray" }, { "code": "", "text": "Is there a way we can leverage SaltStack for automating deployment of database changes. Heard about PyMongo sometime before, is it related to that? Could you provide more pointers and/or documentation regarding how SaltStack could be effectively harnessed to deploy MongoDB database changes across environments?", "username": "Ambar_Ray" }, { "code": "", "text": "I am interested in this too. Can anyone share a nice way they handle their Mongo scripts when releasing code or do most people ask DBA to run the Mongo scripts?", "username": "Claire_Moore1" }, { "code": "", "text": "Hi Ambar, did you get a method for it?", "username": "alex_du" }, { "code": "", "text": "Hi everyone!I believe you are describing a desire to handle database migrations similar to this recent discussion: Merge customer database with upgraded version without data loss - #2 by StennieManaging schema migrations is outside the scope of the core MongoDB server, but there are quite a few libraries and tools that can help with this depending on your requirements and software preferences.I expect you could choose (or create) a schema migration approach independent of the automation tooling that you use for deployment (SaltStack, Puppet, etc) – automation tooling could just invoke schema migration at the relevant step in your deployment process.I suggest creating separate discussion topics with more specific details of your individual requirements and use cases. The original question in this topic was focused on Azure DevOps, which is currently on the more niche side in terms of discussion and expertise in this forumRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "From my perspective I don’t want to do a migration. What I am wondering is how best to handle release scripts. E.g. for a given release we add 2 new tables, ​2 new indexes and have an insert script to populate them.\n​In Oracle the answer would be to create a master script that calls DDL script first, then dml and have a validate statement that ensures everything inserted as expected otherwise rollback whole transaction. All this could be called from Jenkins.", "username": "Claire_Moore1" }, { "code": "", "text": "Hi @Claire_Moore1,What you are describing is the same concept as schema migration: you want to apply DDL changes to add new collections, indexes, and structural data as part of your deployment process so the end outcome is a consistent database schema.Each changeset would have a unique identifier which can be used to identify whether those changes have already been applied and some convention to ensure changes are applied in a predictable order. Changesets could be applied within transactions – MongoDB 4.4+ would be required to Create Collections and Indexes in a Transaction. 
Changesets can also be committed to version control and deployed as part of your release process or continuous integration.There is no strict requirement to use a schema migration tool with MongoDB, but one can certainly help with consistency of your deployments. From a data point of view there are also patterns like Schema Versioning that take advantage of MongoDB’s flexible schema to allow applications to be aware of multiple versions of document schemas co-existing in the same collection.DDL commands are ultimately sent to a MongoDB deployment via the MongoDB Driver API, but there are schema migration/management tools that provide higher-level abstractions like JSON, YAML, or XML. I personally prefer a tool that matches the implementation language for my app so the dev team doesn’t have to learn additional syntax and existing data models can be leveraged to update structural data.If you have more specific requirements, it would be best to start a new discussion topic focused on your environment and use case. There is definitely more than one way to approach this.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thank you for the reply. Yes I am going to give using Transactions a shot for our release process & Schema versioning", "username": "Claire_Moore1" }, { "code": "", "text": "Hi @Claire_Moore1 , were you able to implement your DDL scripts from Azure DevOps pipelines? I’m attempting the same thing and would appreciate any guidance if you were successful.\nRegards,\nMatt", "username": "Matt_L" } ]
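As a rough illustration of the transactional changeset approach described above (MongoDB 4.4+ is required to create collections and indexes inside a transaction), here is a hedged mongosh sketch; the database, collection, and changeset names are hypothetical:

```javascript
// Hypothetical changeset "001-add-audit" run from mongosh.
// Creating the collection and its index in the same transaction is
// allowed on 4.4+ because the collection is new in this transaction.
const session = db.getMongo().startSession();
session.startTransaction();
try {
  const d = session.getDatabase("mydb");          // assumed database name
  d.createCollection("audit_log");
  d.audit_log.createIndex({ createdAt: 1 });
  d.schema_migrations.insertOne({ _id: "001-add-audit", appliedAt: new Date() });
  session.commitTransaction();                    // all-or-nothing changeset
} catch (e) {
  session.abortTransaction();
  throw e;
}
```

The unique _id on the schema_migrations record doubles as the "has this changeset already run" check, which is the state tracking the original question asks about.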
DB scripts as part of deployment
2020-06-24T11:00:27.889Z
DB scripts as part of deployment
6,655
https://www.mongodb.com/…020a326cd82a.png
[ "mdbw22-hackathon" ]
[ { "code": "Lead Developer AdvocateSenior Developer Advocate", "text": "So come, join in and ask questions. We will be sharing details about the submission process that will go live next week!We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer Advocate", "username": "Shane_McAllister" }, { "code": "", "text": "We will be live in 20 minutes on MongoDB Youtube and MongoDB TwitchOr watch below", "username": "Shane_McAllister" }, { "code": "", "text": "If anyone is having problems (like I did) with duplicate key errors, @Manuel_Martin has solved that bug here: Realm Trigger: Duplicate Key Error - #6 by Manuel_Martin", "username": "Mark_Smith" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon Office Hours Thursday - APAC/EMEA
2022-05-12T08:36:55.437Z
Hackathon Office Hours Thursday - APAC/EMEA
3,783
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "I’ve added the Realm functions I wrote in this morning’s stream to the GitHub repoRemember there’s no real error handling there - you may want to add your own!@Crist I know you’re interested in this update Mark", "username": "Mark_Smith" }, { "code": "", "text": "Thank you @Mark_SmithCrist", "username": "Crist" }, { "code": "", "text": "If anyone is having problems with duplicate key errors, @Manuel_Martin has solved the problem in this post: Realm Trigger: Duplicate Key Error - #6 by Manuel_Martin", "username": "Mark_Smith" } ]
Realm functions to update GDELT
2022-05-12T18:30:16.643Z
Realm functions to update GDELT
3,276
https://www.mongodb.com/…0_2_1024x203.png
[ "atlas-triggers", "mdbw22-hackathon", "mdbw-hackhelp" ]
[ { "code": "", "text": "@Mark_SmithI was following through with one of the sessions and after implementing the trigger, I checked the logs to see this error:\nimage1096×218 17.1 KB\nI believe you had a similar error while working on this during the livestream but it seems yours happened because of the dummy data you used to test your updater function which you forgot to cleanup.I didn’t do that, so I would like some help with fixing my trigger which is still showing this error", "username": "Fiewor_John" }, { "code": "", "text": "I had the same error and fixed with this:Line in the repo:\nconst csvLines = csvData.split(\"/n\");\nthe issue is in ‘/n’:\nconst csvLines = csvData.split(\"\\n\");I hope that this can help you.", "username": "Crist" }, { "code": "", "text": "Thank you!\nI have applied this fix.\nFingers crossed hoping it works.", "username": "Fiewor_John" }, { "code": "", "text": "Ok.This hasn’t fixed it, but I think you should create a PR proposing this fix to the mongodb repo that also has this error.I was going to do it, but it looks like I would be taking credit for your observation if I do so.However, if you’re too busy to do this, let me know so I can create a PR in your stead.", "username": "Fiewor_John" }, { "code": "", "text": "Update: The trigger now appears to be working\nimage1593×822 99.4 KB\nI’m not entirely sure what resolved it, but I’m sure your fix helped\nThank you @Crist", "username": "Fiewor_John" }, { "code": "http://data.gdeltproject.org/gdeltv2/20220518214500.export.CSV.zip\nconst csvLines = csvData.split(\"/n\");\nconsole.log(`csvLines.length: ${csvLines.length}`) // IT SHOULD BE 1359 BUT THE RESULT IS csvLines.length: 818\nconst csvLines = csvData.split(/\\r\\n|\\r|\\n/);\nconsole.log(`csvLines.length: ${csvLines.length}`) // IT SHOULD BE 1359 AND THE RESULT IS csvLines.length: 1359\nconst latestCSV = (await http.get({ url: csvURL })).body;\n// VSCode Warning:\n// var Buffer: BufferConstructor new (str: string, encoding?: BufferEncoding) => Buffer (+5 overloads)\n// @deprecated — since v10.0.0 - Use Buffer.from(string[, encoding]) instead.\n//const zip = new AdmZip(new Buffer(latestCSV.toBase64(), 'base64')); // SAMPLE CODE\nconst zip = new AdmZip(new Buffer.from(latestCSV.toBase64(), 'base64')); // Changed by me\nconst csvData = zip.getEntries()[0].getData().toString('utf-8');\n\n//const csvLines = csvData.split(\"/n\");\n//console.log(`csvLines.length: ${csvLines.length}`) // IT SHOULD BE 1359 BUT THE RESULT IS csvLines.length: 818\n\nconst csvLines = csvData.split(/\\r\\n|\\r|\\n/);\nconsole.log(`csvLines.length: ${csvLines.length}`) // IT SHOULD BE 1359 AND THE RESULT IS csvLines.length: 1359\n\nif (csvLines[csvLines.length - 1] === \"\"){ // Remove last line\n csvLines.pop();\n}\nconst rows = csvLines.map((line) => line.split(\"\\t\"));\nconsole.log(`rows.length: ${rows.length}`) // IT SHOULD BE 1358 BUT THE RESULT IS rows.length: 818 (After changing for split(/\\r\\n|\\r|\\n/) is rows.length: 1358)\nconsole.log(`rows: ${rows}`);\n// await context.functions.execute(\"insertCSVRows\", rows, downloadId); // COMENTED TO SEE THE LOGS OF THIS FILE\n", "text": "I also have the same error, and deleted my collections and create again but the issue was the same.The problem is that the Realm Function gdeltUpdater in Sample Code is wrong (from the video Use Realm Functions to Keep Your GDELT Up To Date with @MarK_Smith and @Shane_McAllister)\nWrong Function gdeltUpdater1199×582 260 KB\nTo get an expected result I modified a bit the code to request always the 
same zip file:It contains the file 20220518214500.export.CSV <== 1359 Lines (The last is empty)In the sample code the way to get the lines from this file is:The correct way to do it is:Here is the complete gdeltUpdater I used to test if the code works ok or not:exports = async function(){\nconst AdmZip = require(“adm-zip”);\nconst http = context.http;\nconst csvURL = ‘http://data.gdeltproject.org/gdeltv2/20220518214500.export.CSV.zip’;};", "username": "Manuel_Martin" }, { "code": "", "text": "Thank you so much for solving this. Hopefully other people will find this helpful.Mark", "username": "Mark_Smith" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
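To condense Manuel's fix, the essential change is the split pattern plus dropping empty lines; a hedged one-liner version of the same idea:

```javascript
// Handles \r\n, \r and \n line endings and drops empty lines in one go.
const csvLines = csvData.split(/\r\n|\r|\n/).filter((line) => line !== "");
const rows = csvLines.map((line) => line.split("\t"));
```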
Realm Trigger: Duplicate Key Error
2022-05-17T17:17:24.684Z
Realm Trigger: Duplicate Key Error
4,746
null
[]
[ { "code": "", "text": "i just realised if the user is created without logging in, the customdata user wil not be stored even thou i have auto confirm user is on.\ni find this a biggest bummer as i need to setup user on behalf of the user from admin perspective without going through mongodb dashboard.now without the custom data, i could not add in roles or update the user data", "username": "James_Tan1" }, { "code": "roles", "text": "Hi @James_Tan1,now without the custom data, i could not add in roles or update the user dataCould you provide the following information:Regards,\nWan.", "username": "wan" }, { "code": "", "text": "using email and password provider\nive saved roles inside custom data of users table users.roles as array of name of role, example:\n[‘admin’, ‘hr’]yes iam using users.roles to validate for roles based permission", "username": "James_Tan1" }, { "code": "", "text": "the problem is that admin need to setup account info inside the user .custom data collection, but since the user is not yet logged in, this is not possible. so the only way is to do a work around by saving it by email and use onauthentication to look it up and save the uid for that record", "username": "James_Tan1" } ]
Register user as admin
2022-04-27T15:56:12.284Z
Register user as admin
1,798
null
[ "swift", "transactions" ]
[ { "code": "func writeAsync<T: ThreadConfined>(obj: T, errorHandler: @escaping ((_ error: Swift.Error) -> Void) = { _ in return }, onComplete: ((T?) -> Void)? = nil, withoutNotifying:[NotificationToken] = [], block: @escaping ((Realm, T?) -> Void)) {\n let wrappedObj = ThreadSafeReference(to: obj)\n let config = self.configuration\n DispatchQueue(label: \"background\").async {\n autoreleasepool {\n do {\n let realm = try Realm(configuration: config)\n let obj = realm.resolve(wrappedObj)\n\n try realm.write(withoutNotifying: withoutNotifying) {\n block(realm, obj)\n }\n onComplete?(obj)\n } catch {\n errorHandler(error)\n }\n }\n }\n }\n", "text": "Hi! I as part of migrating to Atlas from a local database I am updating the app to do write transactions off the main thread.One issue I have is that when I do user interface driven writes, I need to be able to do that asynchronously without notifying the the NotificationToken that I am using to listen for changes. Is there a way to do this?The following does not work and will crash as you can’t ignore notification tokens on other threads.What would be the right way to achieve async user interface driven writes without notifying the notification token on the main thread?", "username": "Simon_Persson" }, { "code": "", "text": "Anyone know how to do this?", "username": "Simon_Persson" }, { "code": "", "text": "Just saw the new AsyncWrite API:s. I think the new API:s may make this a non-issue ", "username": "Simon_Persson" } ]
Async user interface driven writes and notification tokens
2022-05-09T10:44:26.696Z
Async user interface driven writes and notification tokens
1,930
null
[ "containers", "storage" ]
[ { "code": "top", "text": "According to docs ( WiredTiger Storage Engine — MongoDB Manual) MongoDB should be using around 50% of available RAM, but on our server (8GB, 2vCPU, Ubuntu 20.04 arm64) the usage stays around 85 - 90% all the time. And this is just from MongoDB, nothing else is running on the server (checked using top command).Is this normal? If not what steps should I take to solve it?The MongoDB instance is running inside a docker container (no resource restrictions)", "username": "Arun_Nair" }, { "code": "", "text": "Lower down on that page:Via the filesystem cache, MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes.", "username": "chris" }, { "code": "", "text": "So, lets say if upgraded to a 64GB RAM instead, it would still try and use up as much as possible? Also, if I have a large enough dataset (100-200 GB+) Is there any chance of it crossing 100% RAM usage and crashing?", "username": "Arun_Nair" }, { "code": "", "text": "With filesystem cache, if the system need ram for anything else the filesystem cached item will be evicted.The utilization of filesystem cache is to have as much ‘hot’ data in ram as possible so you’re getting good performance instead of reading from disk at a much higher latency.If you start seeing lots of page faults you know you need to increase ram on the system.", "username": "chris" }, { "code": "", "text": "@chris suppose we doing a storage based application performance testing so I am supposed to make sure that mongodb consumes all memory and start utilizing the disk and then with sub milisecond latency check the disk performance (throughput i notice) how do i make sure that its reading from disk now like at what point or Pointers to be sure that the IO i am pumping is hitting the disk ?", "username": "Shilpa_Agrawal" } ]
MongoDB RAM usage at 85% on Ubuntu server
2022-05-19T12:27:08.826Z
MongoDB RAM usage at 85% on Ubuntu server
4,405
null
[ "replication" ]
[ { "code": "db.adminCommand({\n \"setDefaultRWConcern\": 1,\n \"defaultWriteConcern\": {\n \"w\": 1\n },\n \"defaultReadConcern\": {\n \"level\": \"local\"\n }\n})\n{\n \"t\": {\n \"$date\": \"2022-05-13T10:21:41.297+02:00\"\n },\n \"s\": \"I\",\n \"c\": \"COMMAND\",\n \"id\": 51803,\n \"ctx\": \"conn149\",\n \"msg\": \"Slow query\",\n \"attr\": {\n \"type\": \"command\",\n \"ns\": \"<db>.<col>\",\n \"command\": {\n \"insert\": \"<col>\",\n \"ordered\": true,\n \"txnNumber\": 4889253,\n \"$db\": \"<db>\",\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1652430100,\n \"i\": 86\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"bEs41U6TJk/EDoSQwfzzerjx2E0=\",\n \"subType\": \"0\"\n }\n },\n \"keyId\": 7096095617276968965\n }\n },\n \"lsid\": {\n \"id\": {\n \"$uuid\": \"25659dc5-a50a-4f9d-a197-73b3c9e6e556\"\n }\n }\n },\n \"ninserted\": 1,\n \"keysInserted\": 3,\n \"numYields\": 0,\n \"reslen\": 230,\n \"locks\": {\n \"ParallelBatchWriterMode\": {\n \"acquireCount\": {\n \"r\": 2\n }\n },\n \"ReplicationStateTransition\": {\n \"acquireCount\": {\n \"w\": 3\n }\n },\n \"Global\": {\n \"acquireCount\": {\n \"w\": 2\n }\n },\n \"Database\": {\n \"acquireCount\": {\n \"w\": 2\n }\n },\n \"Collection\": {\n \"acquireCount\": {\n \"w\": 2\n }\n },\n \"Mutex\": {\n \"acquireCount\": {\n \"r\": 2\n }\n }\n },\n \"flowControl\": {\n \"acquireCount\": 1,\n \"acquireWaitCount\": 1,\n \"timeAcquiringMicros\": 982988\n },\n \"readConcern\": {\n \"level\": \"local\",\n \"provenance\": \"implicitDefault\"\n },\n \"writeConcern\": {\n \"w\": 1,\n \"wtimeout\": 0,\n \"provenance\": \"customDefault\"\n },\n \"storage\": {},\n \"remote\": \"10.10.7.12:34258\",\n \"protocol\": \"op_msg\",\n \"durationMillis\": 983\n }\n", "text": "Hi MongoDBs,I have some write performance struggle with MongoDB 5.0.8 in an PSA (Primary-Secondary-Arbiter) deployment when one data bearing member goes down.I am aware of the “Mitigate Performance Issues with PSA Replica Set” page and the procedure to temporarily work around this issue.However, in my opinion, the manual intervention described here should not be necessary during operation. So what can I do to ensure that the system continues to run efficiently even if a node fails? In other words, as in MongoDB 4.x with the option “enableMajorityReadConcern=false”.As I understand the problem has something to do with the defaultRWConcern. When configuring a PSA Replica Set in MongoDB you are forced to set the DefaultRWConcern. Otherwise the following message will appear when rs.addArb is called:MongoServerError: Reconfig attempted to install a config that would change the implicit default write concern. Use the setDefaultRWConcern command to set a cluster-wide write concern and try the reconfig again.So I didI would expect that this configuration causes no lag when reading/writing to a PSA System with only one data bearing node available.But I observe “slow query” messages in the mongod log like this one:The collection involved here is under proper load with about 1000 reads and 1000 writes per second from different (concurrent) clients.MongoDB 4.x with “enableMajorityReadConcern=false” performed “normal” here and I have not noticed any loss of performance in my application. MongoDB 5.x doesn’t manage that and in my application data is piling up that I can’t get written away in a performant way.So my question is, if I can get the MongoDB 4.x behaviour back in 5.x. 
A write guarantee from the single data-bearing node which is available in the failure scenario would be OK for me. But in a failure scenario, having to manually reconfigure the faulty node should actually be avoided.Thanks for any advice!", "username": "Franz_van_Betteraey" }, { "code": "rs.add()rs.addArb()w:1", "text": "Hi @Franz_van_Betteraey welcome to the community!Arbiters are useful to allow a replica set to have a primary while in a degraded state (i.e.: when one secondary is down); however, they come at the expense of data integrity and more complex operation & maintenance.The safest setup for your data is to have a minimum of a 3-member replica set with no arbiters, and use majority write concern. This way, your writes will propagate to the majority of nodes, ensuring that your data is safe once written. If you have a PSA setup, it is possible for acknowledged writes to be rolled back. As an added bonus, majority write concern will also ensure that your app cannot feed in more data than can be handled by the replica set safely; that is, it can act as backpressure to ensure you don’t inadvertently overload your database.Notably, the default write concern is now “majority” since MongoDB 5.0.MongoDB 4.x with “enableMajorityReadConcern=false” performed “normal” hereThere are major changes in WiredTiger between the MongoDB 4.4 series and 5.0, so they behave slightly differently under a degraded situation (such as when a secondary is down in a PSA set). However, the changes are done to ensure better data integrity.Otherwise the following message will appear when rs.addArb is called:I believe this can be suppressed by initializing the replica set with a configuration document instead of using rs.add() and rs.addArb(), and I think this also sets up a different write concern default, since the default implicit write concern changes depending on the presence of arbiters. See Implicit Default Write Concern. If you have a PSA setup, the implicit write concern should default to w:1, and I think this should be about comparable to the older 4.4 setup you refer to.Having said that, I would encourage you to explore a PSS setup instead of a PSA setup.Best regards\nKevin", "username": "kevinadi" }, { "code": "writeConcern w:1 \"readConcern\": {\n \"level\": \"local\",\n \"provenance\": \"implicitDefault\"\n },\n \"writeConcern\": {\n \"w\": 1,\n \"wtimeout\": 0,\n \"provenance\": \"customDefault\"\n },\n", "text": "Hi @kevinadi,thank you very much for reaching out. The recommendation to use a PSS structure is certainly correct, but things are what they are. It has also worked well for us so far, and it has at least protected us from the failure of one data-bearing node. I can’t change anything about this architecture at the moment.As you can see in the “slow query” messages, we also configured the system to use the default writeConcern w:1, which then allowed us to add the arbiter. Thus no problem here anymore:However, we still observe performance that is 10 times worse in the degraded state than before (measured in insert counts per second). That was not the case with the 4.x version. I wonder if I should address this as an issue? But I almost can’t believe that the system behaviour has changed so much, and I’m still looking for a cause that I can fix on my side.Best regards,\nFranz", "username": "Franz_van_Betteraey" }, { "code": "enableMajorityReadConcern", "text": "Hi @Franz_van_Betteraey, yes, unfortunately this is a side effect of architectural changes in WiredTiger. 
As of MongoDB 5.0 going forward, enableMajorityReadConcern is not available as an option anymore, and thus it is always on. Setting w:1 as default does not achieve the same effect as disabling majority read in pre-5.0, as the majority commit point of a PSA replica set will still fall behind as long as the replica set is in a degraded state.There are various technical reasons why this change is necessary, but the main benefits are:However it also comes with some drawbacks when a PSA set is in a degraded state for long periods. The set will still be functional, but please note that no one should be running any replica set in a degraded state for an extended period of time. Without the majority of data bearing node available and the majority commit point reflecting the latest state of your database, your data isn’t safe.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
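For reference, the manual mitigation referred to above (from the "Mitigate Performance Issues with PSA Replica Set" page) boils down to removing the unavailable member's vote so the majority commit point can advance; a hedged mongosh sketch, where the member index is an assumption:

```javascript
// Run on the primary while one data-bearing member is down.
// members[1] is ASSUMED to be the unavailable secondary; check rs.conf().
// Revert votes/priority once the member recovers.
cfg = rs.conf();
cfg.members[1].votes = 0;
cfg.members[1].priority = 0;
rs.reconfig(cfg);
```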
Poor write performance with MongoDB 5.0.8 in a PSA (Primary-Secondary-Arbiter) setup
2022-05-13T11:45:49.262Z
Poor write performance with MongoDB 5.0.8 in a PSA (Primary-Secondary-Arbiter) setup
3,555
https://www.mongodb.com/…c54527b9ba97.png
[ "compass", "mongodb-shell" ]
[ { "code": "", "text": "Hi team.I’m running mongo in private network. It’s hidden from public internet.\nI’m trying to proxy traffic through Cloudflare tcp tunnel.But I see that Compass is trying to access IP of a public endpoint. Of course it fails because our atlas mongo cluster is hidden from the internet:\n\nimage666×781 49.1 KB\nWhy is Compass doing this? Is there a way to force it to use only my proxied localhost ports?mongosh is having the same behavior.\nIf I open atlas to the internet the proxied connection string above works correctly. But I want to avoid public access.", "username": "Ivan_Sabelnikov" }, { "code": "replicaSetcloudflared0.0.0.0/0", "text": "Hi @Ivan_Sabelnikov - Welcome to the community I’m running mongo in private network. It’s hidden from public internet.Based off the details of the post and the replicaSet value, I presume you are referring to an Atlas cluster. When you state that it is hidden from the public internet, do you mean that you have configured Network Peering Connection?I’m trying to proxy traffic through Cloudflare tcp tunnel .I’m not too familiar with the Cloudflare TCP tunnel you’ve linked but based off the same documentation page, more specifically the requirements:The third requirement is for the cloudflared daemon to be installed on both the host and client machines. If the host machine (or client machine) is to be the Atlas nodes then this won’t be possible.If I open atlas to the internet the proxied connection string above works correctly. But I want to avoid public access.Can you clarify what you mean by opening atlas to the internet? Do you mean adding the CIDR 0.0.0.0/0 to your Network Access List so that it allows access from anywhere?Perhaps setting either of the following may suit your use case:Set Up a Network Peering Connection\nNote: Atlas supports Network Peering connections for AWS, Google Cloud, and Azure-backed and multi-cloud dedicated clusters.Set Up a Private Endpoint for a Dedicated Cluster\nNote: MongoDB Atlas supports private endpoints on:Atlas is secure by default as communications are encrypted using TLS and has IP access list capabilities which limits exposure of the Atlas endpoints to certain IP’s which user’s control. You may find the Atlas Security page useful as it includes much more detailed information regarding Atlas Security. On the page, you’ll also be able to download the Atlas Security Controls white paper.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Cannot connect to atlas mongodb through cloudflared proxy
2022-05-10T15:56:10.265Z
Cannot connect to atlas mongodb through cloudflared proxy
2,967
null
[ "queries", "atlas" ]
[ { "code": "", "text": "db.companies.find({“relationships.0.person.first_name”:“Mark”, “relationships.0.title”:{\"$regex\":“CEO”}}).count()I can use the above MQL using Shell. But how to execute the above statement using Atlas IDE. Thanks.", "username": "m_o_n_g_o_db" }, { "code": "", "text": "In MongoDB atlas there isn’t a built in clean way to count the documents. Here is a previous post that talks about it, and some work arounds.https://www.mongodb.com/community/forums/t/not-able-to-view-count-in-mongodb-atlas-1-20-of-many/113771/3", "username": "tapiocaPENGUIN" }, { "code": "", "text": "Hi Bhuppal,You can run this aggregation using the visual pipeline builder under the “Aggregation” tab in the Atlas IDE. The final output in the preview will show the final count, as long as the aggregation executes in less than 45 seconds; otherwise, the operation will timeout as to not strain your cluster.Please let me know if this works, and feel free to message me directly if you have any feedback or further questions. We’re always looking to improve our aggregation experience.Best,\nJulia Oppenheim, Product Manager", "username": "Julia_Oppenheim" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Count operation using Atlas IDE
2022-05-11T13:13:34.607Z
Count operation using Atlas IDE
5,133
null
[ "node-js", "serverless" ]
[ { "code": "", "text": "Hello, I am having issues with my serverless cluster having writes taking over 2-5 minutes. This is causing an h12 error in my Heroku node js app.2022-05-12T03:45:13.627451+00:00 heroku[router]: at=error code=H12 desc=“Request timeout” method=POST path=“redactedbyme” host=redactedbyme request_id=238dc2a1-ef82-45ee-8a01-5d66bbee2f38 fwd=“99.57.64.104” dyno=web.1 connect=0ms service=30000ms status=503 bytes=0 protocol=httpsThe serverless instance is using GCP /us-central1\nthe average document size is 211 bytesWhat can I do to reliably get the write speed below 30 seconds. Should I upgrade to a dedicated server? This is a production app but the writes are very infrequent at the moment (Less than once a week).", "username": "Asim_Williams" }, { "code": "", "text": "Hey Asim,Thank you for providing this feedback. Your experience with serverless instances is definitely not one we want, nor something that I would expect. I will reach out privately so that we can dig deeper.Best,\nChris\n-Atlas product team", "username": "Christopher_Shum" } ]
Serverless cluster slow write causing h12 error in Heroku
2022-05-17T18:44:55.486Z
Serverless cluster slow write causing h12 error in Heroku
2,806
null
[ "aggregation" ]
[ { "code": "{\n _id: \"...\",\n name: \"...\",\n email: \"...\",\n}\n{\n _id: \"...\",\n name: \"...\",\n reviews: [\n {\n _id: \"..\",\n rating: 1, // number\n content: \"...\",\n customerId: \"...\" // ObjectId ref to customer\n },\n {\n _id: \"...\",\n rating: 1, // number\n content: \"...\",\n customerId: \"...\" // ObjectId ref to customer\n }\n ]\n}\nHotel.aggregate(\n[\n {\n $lookup: {\n from: \"customers\",\n localField: \"reviews.customerId\",\n foreignField: \"_id\",\n as: \"customer\"\n }\n }\n]\n)\n", "text": "I have the following schema(s):\nCustomerHotel:For simplicity I cut off most of the properties from both modelsI am trying to use $lookup to fetch customer data using customerId which is present in the review object within the reviews array, I tried:But it is not working.How do I lookup customers using the customerId present in the review object within the reviews array?", "username": "Ahmed_Abdellatif" }, { "code": "customerId: \"...\" // ObjectId ref to customer _id: \"...\",", "text": "Can you provide real sample data?We cannot really experiment easily withcustomerId: \"...\" // ObjectId ref to customerand _id: \"...\",Filling the tree dots with real values in order to experiment is time consuming.", "username": "steevej" }, { "code": "{\n _id: \"6259f7dc78e6ee2c49611c04\",\n name: \"Customer 1\",\n email: \"[email protected]\",\n}\n{\n _id: \"6259f7dc78e6ee2c49611c00\",\n name: \"Hotel 1\",\n reviews: [\n {\n _id: \"6259f7dc78e6ee2c49611c00\",\n rating: 1, \n content: \"Great Hotel\",\n customerId: \"6259f7dc78e6ee2c49611c04\" \n },\n {\n _id: \"6259f7dc78e6ee2c49611c01\",\n rating: 3.5, \n content: \"Great Hotel\",\n customerId: \"6259f7dc78e6ee2c49611c04\"\n }\n ]\n}\n", "text": "Sorry for that, and for the late response because the website was on maintenanceCustomer:Hotel:", "username": "Ahmed_Abdellatif" }, { "code": "", "text": "I would try something like:", "username": "steevej" } ]
Using lookup for ref in array of objects
2022-05-18T00:14:17.461Z
Using lookup for ref in array of objects
2,659
null
[ "compass", "indexes" ]
[ { "code": "", "text": "Hello. I am currently trying to prevent duplicate data in one particular field from being entered into my database. My data is a comment feed from a web scraping application and I need to prevent duplicate entries from being added. I tried creating a unique index in MongoDB compass, but it seems that the data input stops upon finding the duplicate data. I know that ordered:false will prevent this, but is there a way to set ordered:false in MongoDB Compass or Atlas? Thanks!", "username": "HD_Roofers" }, { "code": "", "text": "In Compass, if you Insert Data ▼, then Import File and you DON’T select the option Stop on errors you will be doing something similar to an unordered insert. Duplicated documents, according to _id and unique indexes, should not be inserted and all other documents will be despite the errors.", "username": "steevej" } ]
Using Ordered: False in MongoDB Compass/Atlas
2022-05-19T13:11:31.216Z
Using Ordered: False in MongoDB Compass/Atlas
1,494
https://www.mongodb.com/…2e9d80d1cc8.jpeg
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "If you missed this live session - you can rewatch it belowAs well as the usual chat & answers, we were also joined by John & Avik who gave us a Demo of their Good News App. If you want to demo your project, and earn some FREE SWAG, let us know for the next livestreams and we’ll send you and invite!", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Office hours & "Good News" Demo - US/EMEA
2022-05-19T16:38:57.249Z
Office hours &amp; &ldquo;Good News&rdquo; Demo - US/EMEA
2,656
https://www.mongodb.com/…b6_2_1024x79.png
[ "python", "mdbw22-hackathon" ]
[ { "code": "$ python -m venv venv\n$ python -m venv --upgrade-deps venv\n", "text": "Hi @Mark_Smith ,I have seen you in the videos creating the virtual environment using this code:I think you could avoid the following warning creating the virtual environment using this code instead:\nAvoid warning creating the virtual environment1547×120 173 KB\n", "username": "Manuel_Martin" }, { "code": "", "text": "Thanks for sharing @Manuel_Martin", "username": "Shane_McAllister" }, { "code": "", "text": "I didn’t know this existed - and I consider myself a Python expert!Thanks so much for this, @Manuel_Martin!", "username": "Mark_Smith" }, { "code": "", "text": "I don’t consider myself a Python expert!, just have the lucky to learnt it somewhere, I am glad you find it interested for you.", "username": "Manuel_Martin" } ]
Tip: Avoid warning creating the virtual environment
2022-05-17T15:15:46.082Z
Tip: Avoid warning creating the virtual environment
2,954
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Hello fellow hackathon-ees!Good news! We are going to extend the deadline for Hackathon submissions - from May 20th to May 27th. This is to give you more time to gather your submissions and put the finishing touches on your projects. So, the deadline is moving, and submissions will be open until EOD (wherever you are in the world) on May 27th. But don’t rest up, or relax, use this extra time to iron out your bugs, practice your video demo walkthroughs and tidy up your Repos!!The Hackathon Team", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Extended closing date for submissions! Now May 27th!
2022-05-19T15:53:22.447Z
Extended closing date for submissions! Now May 27th!
2,520
null
[ "data-modeling", "mdbw22-hackathon" ]
[ { "code": "", "text": "Hi Team,One of the replication server 2 secondary 1 primary one secondary server host is down“stateStr” : “(not reachable/healthy)” so when i checked mongod.logs below error was encounter2022-05-05T00:35:2995+0000 F - [conn1841] Failed to mlock: Unknown error\n2022-05-05T00:35:29.295+0000 F - [conn1841] Fatal Assertion 28832 at src/mongo/base/secure_allocator.cpp 255\n2022-05-05T00:35:29.295+0000 F - [conn1841]***aborting after fassert() failure2022-05-05T00:35:29.312+0000 I NETWORK [listener] connection accepted from 10.244.133.0:50974 #2114 (1134 connections now open)\n2022-05-05T00:35:29.316+0000 F - [conn1841] Got signal: 6 (Aborted).", "username": "hari_dba" }, { "code": "", "text": "Hi,Please try the recommendation here:\nhttps://jira.mongodb.org/browse/SERVER-36600Jess", "username": "jbalint" }, { "code": "max locked memory settingunlimited@jbalint<relevant ulimit command to confirm what that setting should be>", "text": "Hi @hari_dba, did you try adjusting the max locked memory setting to the recommended value of unlimited per the SERVER issue @jbalint shared?If you are still having issues, please confirm your versions of MongoDB server, O/S version, and output of\n<relevant ulimit command to confirm what that setting should be>In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "Hi Sourabh,Thank you very much for reply.I will be reach you another issue\nBut previous issue is fixed.Thanks,\nSrihari", "username": "hari_dba" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Replication server one of the secondary unreachable
2022-05-05T10:39:37.271Z
Replication server one of the secondary unreachable
4,099
null
[ "licensing" ]
[ { "code": "", "text": "I understand that, if MongoDB is simply used as a backend database (not directly exposed to the customer), then one can use the MongoDB Community Server for free even for commercial applications.What if the MongoDB server is also deployed on customer premises, but still used as a backend database for other software: Is this still fine with the SSPL license of MongoDB?Many thanks!", "username": "PAPPAPPERO" }, { "code": "", "text": "I have interpreted SSPL to mean if you are running MongoDB as a service (making your own mongodb atlas for example to sell DBaaS) then you are obligated to release the modified source along with that of tools, interfaces, management software etc…If you are deploying mongodb as part of the application stack you are providing customers then, no, it does not apply.disclaimer: Not a lawyer or licensing specialist", "username": "chris" }, { "code": "", "text": "Thanks for your reply ", "username": "PAPPAPPERO" } ]
SSPL license of MongoDB when deployed on customer premises
2022-05-19T12:26:47.038Z
SSPL license of MongoDB when deployed on customer premises
3,052
null
[ "100daysofcode" ]
[ { "code": "", "text": "The #100DaysOfCode challenge is intended to help you develop a learning habit by being publicly accountable: Code a minimum of an hour a day for the next 100 days\n Share your progress every dayIt is a journey of discipline, commitment, and learning. I have always been inspired by working together when we help each other grow. Working alone is great but working together yields amazing and impossible results. We all help and encourage each other with new learnings a nd problem solutions. I originally suggested this as a team-building exercise with some of my colleagues, and @Kushagra_Kesav enthusiastically joined me in the challenge. We both are very inspired and challenge each other every day. We are on Day 08 and all our learnings have been penned down on our Twitter and Medium blog. I am currently learning front-end development and Kushagra is learning React JS. We have been enjoying the learning so much that we are now excited to take the challenge to the next level: inviting our wider MongoDB Community to join us.#TogetherWeBuildBetterThingsWe all would like to have you all join us on this habit-forming, relationship-building, and technology learning journey of 100 Days.The 100Days is not restricted to any one technology, Pick any topic you are interested in learningWe would follow the guidelines as stated on the official #100DaysOfCode website, along with some MongoDB Community fun Code every day for 100 days, make a streak\n Tweet about what you learned using #100DaysOfCode, #MongoDBCommunity hashtags\n Create your topic on the forum and post the tweet series of 100 days on the same topic starting with Day1If you have already started and are on any Day X, please feel free to create a topic and all references of your tweets starting from Day 01. Feel free to refer to mine or Kushagra’s post as exampleNow let’s talk about some Recognition I promise this will all be a lot of fun. We definitely have badges and surprises for consecutive days of sharing your progress You will earn the badge and the fun naming on your profile 1 Day of Code Using MongoDB (Database Dabbler) 10 Days of Code (Code Wrangler) 25 Days of Code (Committed Coder) 50 Days of Code (Code Ranger) 75 Days of Code (Coding Legend) 100 Days of Code (Coding Centurion)I hope you are as excited as I am to #buildtogether Cheers, ", "username": "henna.s" }, { "code": "", "text": "", "username": "Stennie_X" }, { "code": "", "text": "Thanks to @Allison_Mui & @henna.s, we have a shiny new set of badges to recognise your progress in this challenge.If you earn one of these special badges, they can also be used as a custom title in the Community Forums. To earn a badge you will have to share your daily progress posts in your own topic in The Treehouse category, similar to our first adventurers:Badge designs will be revealed as soon as they are earned by someone in the MongoDB Community. Click on a badge design or title to see who has earned this badge so far. 
Will you be next?\n\nRegards,\nStennieParticipate in #100DaysOfCode challenge and share at least one day of learning MongoDB or Realm.\n\nParticipate in #100DaysOfCode challenge and share your daily learnings for 10 consecutive days.\n\n", "username": "Stennie_X" }, { "code": "", "text": "A post was split to a new topic: The Journey of #100DaysOfCode (@JasonNutt14)", "username": "Stennie_X" }, { "code": "", "text": "G'day folks, @henna.s has just unlocked the Committed Coder badge:Participate in #100DaysOfCode challenge and share your daily learnings for 25 consecutive days.\n\nIf you haven't been following The Journey of #100DaysOfCode (@henna_dev) updates so far, her daily updates include some learnings shared in Henna's Medium blog. If there's something you'd like her to write more about, let her know, or comment on her updates for encouragement ;-).You can also follow updates on the Community Forums or other channels like Medium or Twitter if you have a preference and the author is cross-posting there. If there is a channel you'd find more convenient for updates, that would also be great feedback for your favourite posters.Henna is also sharing weekly Realm Bytes updates in the forums with some more insight into common questions about MongoDB Realm & Realm SDKs. These are driven by community discussion and suggestions, so if there's something you'd like covered in Realm Bytes please comment on one of the Realm Bytes topics or start a new discussion in a MongoDB Realm forum category.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "G'day folks, @Kushagra_Kesav and @henna.s have both completed an amaaaaazing 50 days of coding and sharing and are now more than halfway through their 100 Days of Code journey!!!They have just unlocked the Code Ranger badge (which can also be used as a title in the forum):Participate in #100DaysOfCode challenge and share your daily learnings for 50 consecutive days.\n\nPlease check out their journeys (and all of our intrepid challengers) via the 100daysofcode tag. If you find any particular posts of interest, please give our challengers some encouragement with a or reply.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "G'day folks,@Kushagra_Kesav and @henna.s have reached 75 days of coding and sharing and are now three quarters of the way through their 100 Days of Code journey!!!They have just unlocked the Coding Legend badge (which can also be used as a title in the forum):Participate in #100DaysOfCode challenge and share your daily learnings for 75 consecutive days.\n\nImpressive commitment to the challenge and some great daily updates Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "I already completed the 100daysofcoding, and now I am conquering the 150daysofcoding. I am sharing my LinkedIn - "https://www.linkedin.com/in/ankit-gupta-7a8038a5" & GitHub - "CCAnkit (Ankit Gupta) · GitHub" links for reference. #100daysofcode #mongodb #JavaScript #coding", "username": "ankit_gupta8" }, { "code": "", "text": "G'day folks,I have inspiring news to share! Many brave adventurers have started the 100 Days of Code Challenge, but few have the tenacity to complete the journey.Please join me in congratulating @henna.s and @Kushagra_Kesav, the first challengers to complete a 100 Days of Code challenge in the MongoDB Community Forums. 
They have unlocked the legendary Coding Centurion badge!Participate in #100DaysOfCode challenge and share your daily learnings for 100 consecutive days!\n\nRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "", "username": "system" } ]
Join us in the adventure of #100DaysOfCode
2022-02-15T05:39:54.834Z
Join us in the adventure of #100DaysOfCode
9,806
https://www.mongodb.com/…f9_2_1023x91.png
[ "mongoose-odm", "connecting", "typescript" ]
[ { "code": "import mongoose from 'mongoose'\ntry {\n await mongoose.connect(process.env.MONGO_URI)\n console.log('Connected to MongoDB')\n} catch (err) {\n console.error(err)\n}\nprocess.env.MONGO_URI = mongodb+srv://<username>:<password>@<cluster>.ig6hm.mongodb.net/<dbname>?retryWrites=true&w=majority\n\n- Error\n ```shell\n MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's \n IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\n at NativeConnection.Connection.openUri (/home/node/app/node_modules/mongoose/lib/connection.js:796:32)\n at /home/node/app/node_modules/mongoose/lib/index.js:328:10\n at /home/node/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5\n at new Promise (<anonymous>)\n at promiseOrCallback (/home/node/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)\n at Mongoose._promiseOrCallback (/home/node/app/node_modules/mongoose/lib/index.js:1149:10)\n at Mongoose.connect (/home/node/app/node_modules/mongoose/lib/index.js:327:20)\n at /home/node/app/src/index.ts:33:20\n at step (/home/node/app/src/index.ts:33:23)\n at Object.next (/home/node/app/src/index.ts:14:53) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'devcluster-shard-00-01.asdf1234.mongodb.net:27017' => [ServerDescription],\n 'devcluster-shard-00-00.asdf1234.mongodb.net:27017' => [ServerDescription],\n 'devcluster-shard-00-02.asdf1234.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-11mhge-shard-0',\n logicalSessionTimeoutMinutes: undefined\n }\n ```\n\nHere is the content of the `err.reason.servers`:\n\n```shell\nMap(3) {\n'devcluster-shard-00-00.ig6hm.mongodb.net:27017' => ServerDescription {\n _hostAddress: HostAddress {\n isIPv6: false,\n host: 'devcluster-shard-00-00.ig6hm.mongodb.net',\n port: 27017\n },\n address: 'devcluster-shard-00-00.ig6hm.mongodb.net:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 1164506954,\n lastWriteDate: 0,\n error: MongoNetworkError: certificate is not yet valid\n at connectionFailureError (/home/node/app/node_modules/mongoose/node_modules/mongodb/src/cmap/connect.ts:390:14) \n at TLSSocket.<anonymous> (/home/node/app/node_modules/mongoose/node_modules/mongodb/src/cmap/connect.ts:358:16) \n at Object.onceWrapper (node:events:514:26)\n at TLSSocket.emit (node:events:394:28)\n at emitErrorNT (node:internal/streams/destroy:157:8)\n at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n at processTicksAndRejections (node:internal/process/task_queues:83:21)\n},\n'devcluster-shard-00-01.ig6hm.mongodb.net:27017' => ServerDescription {\n _hostAddress: HostAddress {\n isIPv6: false,\n host: 'devcluster-shard-00-01.ig6hm.mongodb.net',\n port: 27017\n },\n address: 'devcluster-shard-00-01.ig6hm.mongodb.net:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 1164506989,\n lastWriteDate: 0,\n error: MongoNetworkError: certificate is not yet valid\n at connectionFailureError (/home/node/app/node_modules/mongoose/node_modules/mongodb/src/cmap/connect.ts:390:14) \n at 
TLSSocket.<anonymous> (/home/node/app/node_modules/mongoose/node_modules/mongodb/src/cmap/connect.ts:358:16) \n at Object.onceWrapper (node:events:514:26)\n at TLSSocket.emit (node:events:394:28)\n at emitErrorNT (node:internal/streams/destroy:157:8)\n at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n at processTicksAndRejections (node:internal/process/task_queues:83:21)\n},\n'devcluster-shard-00-02.ig6hm.mongodb.net:27017' => ServerDescription {\n _hostAddress: HostAddress {\n isIPv6: false,\n host: 'devcluster-shard-00-02.ig6hm.mongodb.net',\n port: 27017\n },\n address: 'devcluster-shard-00-02.ig6hm.mongodb.net:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 1164507365,\n lastWriteDate: 0,\n error: MongoNetworkError: certificate is not yet valid\n at connectionFailureError (/home/node/app/node_modules/mongoose/node_modules/mongodb/src/cmap/connect.ts:390:14) \n at TLSSocket.<anonymous> (/home/node/app/node_modules/mongoose/node_modules/mongodb/src/cmap/connect.ts:358:16) \n at Object.onceWrapper (node:events:514:26)\n at TLSSocket.emit (node:events:394:28)\n at emitErrorNT (node:internal/streams/destroy:157:8)\n at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n at processTicksAndRejections (node:internal/process/task_queues:83:21)\n}\n}\n>mongosh \"mongodb+srv://devcluster.ig6hm.mongodb.net/auth\" --username devdb-admin\nEnter password: ****************\nCurrent Mongosh Log ID: 6159072f46e048ec94c0c683\nConnecting to: mongodb+srv://devcluster.ig6hm.mongodb.net/auth\nError: querySrv EREFUSED _mongodb._tcp.devcluster.ig6hm.mongodb.net\nMongoDBAtlasMongoDB Compassusernamepasswordmongodb://mongodb+srv://Compass1.28.4mongodb+srv://DB Clusterusers and network accessAtlas", "text": "What is the current behavior?\nThe connection to a MongoDB hosted on Mongo Atlas suddenly throws out an error. I have been searching on the Internet (Stackoverflow, …) for a day about this bug, but haven’t been able to figure out why it happens. Here are the detailed configs and error:Connections from any IPs are allowed\n\nimage1999×178 12.5 KB\nConnection codeConnection string’s formatmongodb+srv://devdb-admin:<password_is_hide_here>@devcluster.ig6hm.mongodb.net/auth?retryWrites=true&w=majorityAre you able to connect to this Atlas cluster using the mongo shell from the command line?No, I tried but wasn’t able to connect to the DB on Atlas using MongoDB Shell.But I was able to connect to the MongoDB on Atlas using MongoDB Compass with the same username and password (however, I need to downgrade the connection to a form of mongodb:// instead of mongodb+srv:// even though my Compass's version is 1.28.4 which should be able to connect to the DB using a connection string with the format of mongodb+srv://). Another strange thing is that the connection was successful for the last 2 months and suddenly it becomes unsuccessful while no changes were added to the source code and no changes were made to the configuration of the DB Cluster or users and network access on the Atlas as well.What are the versions of Node.js, Mongoose, MongoDB and TypeScript you are using?", "username": "Zigse_Dev" }, { "code": "", "text": "Please ensure that you’ve opened up a support case so that we can get you the assistance you need here (you can open up the chat in the lower right in the Atlas UI)", "username": "Andrew_Davidson" }, { "code": "", "text": "@Andrew_Davidson Thanks for your reply. 
The issue was fixed after I waited for a couple of days - without changing any configurations of MongoDB on Atlas or source code. I still don’t get the reasons why the connection error occurred, but at least now it works again.", "username": "Zigse_Dev" }, { "code": "", "text": "I had the same problem after resetting my router. You have to set your IP again.", "username": "Josephine_Geoghegan" }, { "code": "", "text": "", "username": "Stennie_X" } ]
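For anyone hitting a similar MongooseServerSelectionError, a hedged sketch of a connection helper that fails fast and surfaces the per-server errors (such as the "certificate is not yet valid" messages above) can make the root cause visible sooner. serverSelectionTimeoutMS is a standard driver option; the MONGO_URI environment variable mirrors the original post.

```javascript
const mongoose = require('mongoose');

// Sketch only (not from the thread): shorten server selection so the
// underlying per-host errors surface quickly instead of after 30s.
async function connectWithDiagnostics() {
  try {
    await mongoose.connect(process.env.MONGO_URI, {
      serverSelectionTimeoutMS: 5000,
    });
    console.log('Connected to MongoDB');
  } catch (err) {
    // err.reason.servers maps each host to its individual error,
    // e.g. "MongoNetworkError: certificate is not yet valid" above
    if (err.reason && err.reason.servers) {
      for (const [host, desc] of err.reason.servers) {
        console.error(host, desc.error && desc.error.message);
      }
    }
    throw err;
  }
}
```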
Cannot connect to MongoDB on Atlas
2021-10-03T03:45:43.492Z
Cannot connect to MongoDB on Atlas
9,603
https://www.mongodb.com/…c1_2_1024x74.png
[ "replication", "python", "database-tools", "mdbw22-hackathon", "mdbw-hackhelp" ]
[ { "code": "mongo --version:\nBuild Info: {\n \"version\": \"5.0.8\",\n \"gitVersion\": \"c87e1c23421bf79614baf500fda6622bd90f674e\",\n \"openSSLVersion\": \"OpenSSL 1.1.1f 31 Mar 2020\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"ubuntu2004\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n", "text": "As part of the MongoDB World Hackathon, I’m trying to import more data into my cluster and I’m now experiencing this error which I didn’t have the first time I ran this import script\nimage1355×99 8.19 KB\nThis issue is very similar to an earlier one but I’m having difficulty applying the solution there.Perhaps more newbie-friendly instructions as to what I should do will help because I saw instructions concerning changing host name or port to replica set something, but I didn’t quite what exactly to do.@Mark_Smith @nraboyTagging the people from the similar issue incase they can help out: @Kushagra_Kesav @Ramachandra_Tummala @Joshua_CadavezSystem details are:\nUbuntu 20.04.4 LTS\nPython 3.10.4\nMongoDB shell version v5.0.8", "username": "Fiewor_John" }, { "code": "", "text": "May be your network not supporting srv type string\nTry longform of string(old style)\nIf you have Atlas account you can get full mongoimport command from command line tools\nYou may get sample command from our forum threads too", "username": "Ramachandra_Tummala" }, { "code": "", "text": "@Mark_Smith this might be a problem with the & in your script?", "username": "Joe_Drumgoole" }, { "code": "", "text": "Just looking into this - hopefully I’ll have a fix shortly.", "username": "Mark_Smith" }, { "code": "$*\"$*\"", "text": "Hi @Fiewor_John - have you modified mongoimport.sh at all? I can only reproduce your error if I put double quotes around the $* at the end of that script, so it looks like \"$*\".", "username": "Mark_Smith" }, { "code": "--host--port", "text": "I previously modified it in an attempt to add --host and --port but I have removed those flags.Here’s what it looks like when I open it up in vim\nimage1600×900 72.1 KB\n", "username": "Fiewor_John" }, { "code": " \"mongodb://john:[email protected]:27017,gdelt-shard-00-01.n1mbb.mongodb.net:27017,gdelt-shard-00-02.n1mbb.mongodb.net:27017?authSource=admin&replicaSet=atlas-7o9d3y-shard-0\"", "text": "Thanks @Fiewor_John - I’ve just realised that I confused your error message with another I agree with @Ramachandra_Tummala that this may be a limitation of your DNS server. Have you changed network since the last time things worked?If the mongodb+srv URI doesn’t work, there is an alternative. 
I think the following should work for you (remember to put your password in the correct place ) \"mongodb://john:[email protected]:27017,gdelt-shard-00-01.n1mbb.mongodb.net:27017,gdelt-shard-00-02.n1mbb.mongodb.net:27017?authSource=admin&replicaSet=atlas-7o9d3y-shard-0\"", "username": "Mark_Smith" }, { "code": "", "text": "Incidentally, if the connection string works for you, then you have @Joe_Drumgoole to thank for the following blog post that taught me how to do this : MongoDB 3.6: Here to SRV you with easier replica set connections | MongoDB Blog", "username": "Mark_Smith" }, { "code": "", "text": "“mongodb://john:[email protected]:27017,gdelt-shard-00-01.n1mbb.mongodb.net:27017,gdelt-shard-00-02.n1mbb.mongodb.net:27017?authSource=admin&replicaSet=atlas-7o9d3y-shard-0”I’m getting a\n\nimage1180×40 2.89 KB\n", "username": "Fiewor_John" }, { "code": "", "text": "After last host:port in your connect string add / before ?", "username": "Ramachandra_Tummala" }, { "code": "/\"mongodb://john:[email protected]:27017,gdelt-shard-00-01.n1mbb.mongodb.net:27017,gdelt-shard-00-02.n1mbb.mongodb.net:27017/YOURDATABASE?authSource=admin&replicaSet=atlas-7o9d3y-shard-0\"", "text": "I think you’re right. I missed a / \"mongodb://john:[email protected]:27017,gdelt-shard-00-01.n1mbb.mongodb.net:27017,gdelt-shard-00-02.n1mbb.mongodb.net:27017/YOURDATABASE?authSource=admin&replicaSet=atlas-7o9d3y-shard-0\"Don’t forget to put your database name in, either - mongoimport.sh requires it.", "username": "Mark_Smith" }, { "code": "/test", "text": "Thanks Mark and @Ramachandra_Tummala . I actually tried this already after I observed that @Joe_Drumgoole had a /test in his blog post but it gave another weird error which I frankly didn’t want to bother anyone with again But I’ll share it here and try again later cause I fear it has something to do with my network\nimage1363×178 13.8 KB\n", "username": "Fiewor_John" }, { "code": "ping gdelt-shard-00-00.n1mbb.mongodb.net64 bytes from 40.67.234.22: icmp_seq=0 ttl=50 time=29.413 ms\n64 bytes from 40.67.234.22: icmp_seq=1 ttl=50 time=29.855 ms\n64 bytes from 40.67.234.22: icmp_seq=2 ttl=50 time=29.782 ms\n64 bytes from 40.67.234.22: icmp_seq=3 ttl=50 time=29.692 ms\nmongosh", "text": "I’m starting to think the network you’re on is quite restricted. Can you try pinging one of the nodes, like thisping gdelt-shard-00-00.n1mbb.mongodb.netIf you get something like the following then you might be able to access the replica-set:If you can, then try using mongosh directly to connect to the replicaset using your SRV connection string, and then the comma-separated connection string, and see what the results are.If the ping times out then there’s a good chance you just can’t connect to MongoDB Atlas over the network you’re on.", "username": "Mark_Smith" }, { "code": "", "text": "I just tried these and I’m able to both ping one of the nodes and connect to the SRV string using mongosh\nimage1350×474 26.5 KB\nHowever, connecting to the comma-separated connection string gives this:\nimage1364×141 12.4 KB\n", "username": "Fiewor_John" }, { "code": "cat *.export.CSV | mongoimport \\\n --collection=eventscsv \\\n --mode=upsert \\\n --writeConcern '{w:1}' \\\n --type=tsv \\\n --columnsHaveTypes \\\n --fieldFile=\"${fieldfile}\" \\\n --uri 'PUT_YOUR_URI_HERE'\n--uri", "text": " Okay! I think it’s starting to look like @Joe_Drumgoole was right when we were talking about this issue this morning - I think there’s a bug in the script, but I can’t work out how to fix it exactly. 
Please change the last block of the script so it looks like the following, and paste in your connection string:You can now run it without the --uri flag - don't forget to add your database name at the end - I can see you're not in the first screenshot above Let me know how this goes! ", "username": "Mark_Smith" }, { "code": "ssl=true\"mongodb://john:[email protected]:27017,gdelt-shard-00-01.n1mbb.mongodb.net:27017,gdelt-shard-00-02.n1mbb.mongodb.net:27017/DATABASE?ssl=true&authSource=admin&replicaSet=atlas-7o9d3y-shard-0\"", "text": "I've fixed it!After taking a closer look at the blog post you shared, I added ssl=true to the comma-separated connection string and it finally worked!\nimage1360×567 15.6 KB\nSo, here's the final connection string that worked:\n\"mongodb://john:[email protected]:27017,gdelt-shard-00-01.n1mbb.mongodb.net:27017,gdelt-shard-00-02.n1mbb.mongodb.net:27017/DATABASE?ssl=true&authSource=admin&replicaSet=atlas-7o9d3y-shard-0\"And the link to the doc explaining the ssl part of the MongoDB URI for anyone that might come across this later:Thanks @Ramachandra_Tummala @Mark_Smith @Joe_Drumgoole", "username": "Fiewor_John" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
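The same long-form (seed list) fallback works from driver code too, not just mongoimport and mongosh. This is a hypothetical Node.js sketch with placeholder hosts and credentials, not the cluster from the thread:

```javascript
const { MongoClient } = require('mongodb');

// Placeholder hosts/credentials; mirror the ssl=true, authSource and
// replicaSet options that made the long-form URI work above.
const uri =
  'mongodb://user:[email protected]:27017,' +
  'host-01.example.mongodb.net:27017,host-02.example.mongodb.net:27017/mydb' +
  '?ssl=true&authSource=admin&replicaSet=myReplicaSet';

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  console.log(await client.db('mydb').command({ ping: 1 }));
  await client.close();
}

main().catch(console.error);
```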
MongoImport Error: cannot unmarshal DNS message
2022-05-19T08:06:54.103Z
MongoImport Error: cannot unmarshal DNS message
7,773
null
[]
[ { "code": "", "text": "I have data in TSV format. The data has a multiple rows preceding the header row.\nHow to start my import on a particular row number or preferably when a row contains the header info for variation in the preceding row info?", "username": "jamie_humphries" }, { "code": "", "text": "Have you tried headerfile and addFields?\nI don’t think you can import partial data skipping some rows\nCheck mongo documentation for exact syntax", "username": "Ramachandra_Tummala" }, { "code": "mongoimport--headlerlinetailtailtail +<lines to skip +1> ... mongoimport", "text": "Welcome to the MongoDB Community @jamie_humphries !mongoimport's --headlerline option uses the first line in the input source as a header as the field list.If you have lines preceding the header to skip, you could edit your tsv file to remove them, or use another command-line utility to filter as required.For example, on macOS or Linux the tail utility should be available by default. The syntax for tail to skip lines is tail +<lines to skip +1> ... , so to skip the first 3 lines and pass the filtered tsv to mongoimport I would use a command line like:tail +4 sample-data.tsv | mongoimport -d sample -c data --type=tsv --headerlineRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks.\nI appreciate it. In SQL you can ignore x number of rows. but you have to specify. When you need to import several hundred text files a day editing becomes a huge task and I can’t change the output of the machines I’m getting the data from.", "username": "jamie_humphries" } ]
Import a CSV, TSV, etc. beginning at a particular row number or a row beginning with a known header row?
2022-05-18T17:06:43.011Z
Import a CSV, TSV, etc. beginning at a particular row number or a row beginning with a known header row?
2,159
null
[ "server" ]
[ { "code": "/ brew services start mongodb-community\n==> Successfully started `mongodb-community` (label: homebrew.mxcl.mongodb-community)\n➜ / brew services list\nName Status User File\narangodb started laly ~/Library/LaunchAgents/homebrew.mxcl.arangodb.plist\ncassandra none\nemacs none\nhbase none\nmongodb-community error 25600 root ~/Library/LaunchAgents/homebrew.mxcl.mongodb-community.plist\nnginx none\nredis started laly ~/Library/LaunchAgents/homebrew.mxcl.redis.plist\nunbound none\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.125+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"19.6.0\"}}}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.126+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/usr/local/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\"},\"storage\":{\"dbPath\":\"/tmp/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/usr/local/var/log/mongodb/mongo.log\"}}}}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.129+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.130+03:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"IllegalOperation: Attempted to create a lock file on a read-only directory: /tmp/mongodb\"}}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.130+03:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the 
MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.132+03:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.133+03:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.133+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-05-19T13:22:55.133+03:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n", "text": "Hi! I am trying to install Mongodb Community 5.0 in an Intel Mac through homebrew.I am encountirng an error, 25600.Looking through the logs, It appears that something related to Asio socket is failing:Any idea how to fix it?Thanks.", "username": "Jonatan_Kruszewski1" }, { "code": "", "text": "Your dbpath is /tmp/mongodb\nCan mongod write to this?\nDoes it have permissions\nTry to use another dirpath /tmp is not recommended", "username": "Ramachandra_Tummala" }, { "code": "brew services stop mongodb-community\ncd ~\ncd Desktop\nmkdir mongodb\n/usr/local/etc/mongod.conf/Users/laly/Desktop/mongodb\nbrew services start mongodb-community\n", "text": "Thank you very much! I was so focused on the first error that didn’t see that the /tmp/mongodb was read only.Changed to another path (where writing is allowed) and got fixed.For anyone encountering this problem:in /usr/local/etc/mongod.conf change the dbpath to (where laly = my user):Run again:", "username": "Jonatan_Kruszewski1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB 5.0 Fails to run in Mac OS with Homebrew Error 25600 (Asio Socket failing)
2022-05-19T10:36:17.512Z
MongoDB 5.0 Fails to run in Mac OS with Homebrew Error 25600 (Asio Socket failing)
7,316
null
[ "node-js", "mongoose-odm" ]
[ { "code": "let mongoGridFsBucket = new mongodb.GridFSBucket(Mongoose.connection.db, {\n chunkSizeBytes: 1024,\n bucketName\n})\n\nlet gridFsDownloadStream = mongoGridFsBucket.openDownloadStreamByName(filename)\n\ngridFsDownloadStream.on('error', console.error)\n \ngridFsDownloadStream.on('end', function() {\n console.info('downloaded')\n})\n\ngridFsDownloadStream.pipe(fs.createWriteStream('/local/path/to/downloaded.zip'))\nMongoServerError: Executor error during find command :: caused by :: Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting.\nallowDiskUseallowDiskUsetrue", "text": "I have uploaded a 400Mb zip file to MongoDB using GridFS. I then try to donwnload it using the following code:and get this Error:The above code works fine for smaller files (e.g. 9Mb files), I’ve already tested that successfully.However in this case the file is too big. I looked for a solution online and apparently there is some allowDiskUse flag that I need to set somewhere but I don’t know where and how .There is no place in the above code where I could set this allowDiskUse to true so I don’t know what else to do to make this work.", "username": "Ni_Ma" }, { "code": "", "text": "So did you find a solution to this problem? I’m still looking Please share", "username": "Nikitas" }, { "code": " // Create an index for the 'n' field to sort the chunks collection.\n db.collection('media.chunks').createIndex({n: 1});\n", "text": "I did some research and I’ve found a solution.Apparently, when you download a file by streaming it with GridFS, the documents that it is comprised of are first sorted. According to this blog post, when doing a sort, MongoDb first attempts to retrieve the documents using the order specified in an index. When no index is available it will try to load the documents into memory and sort them there.The catch is that Mongo is configured by default to abort the operation when exceeding usage of 32 MB. In that case, we run into the “Sort exceeded memory limit” error described above. In order to solve this problem then, you’ll have to create an index for the ‘n’ field of the chunks collection that contains the file you want to download:", "username": "Nikitas" }, { "code": "nfiles_id", "text": "Welcome to the MongoDB Community @Nikitas!The blog post you have shared is relevant in terms of limitations of an in-memory sort, however drivers that conform to the GridFS specification should automatically create required GridFS Indexes for API retrieval.If these indexes do not exist, you can manually create them.In order to solve this problem then, you’ll have to create an index for the ‘n’ field of the chunks collection that contains the file you want to download:An index on n alone is missing files_id for efficiently retrieving all chunks related to a specific uploaded file.The expected GridFS indexes are actually:db.fs.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } );\ndb.fs.files.createIndex( { filename: 1, uploadDate: 1 } );Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Node.js | GridFS: Executor error during find command :: caused by :: Sort exceeded memory limit
2022-04-07T16:54:32.680Z
Node.js | GridFS: Executor error during find command :: caused by :: Sort exceeded memory limit
3,455
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Hi there,Here is a document from my collection.{\noptionId: ‘18270562735’,\nsizes: [\n{ sku: ‘1827056273534’ },\n{ sku: ‘1827056273536’ },\n{ sku: ‘1827056273538’ },\n{ sku: ‘1827056273540’ },\n{ sku: ‘1827056273542’ },\n{ sku: ‘1827056273544’ },\n{ sku: ‘1827056273546’ },\n{ sku: ‘1827056273548’ },\n{ sku: ‘1126083251150’ },\n{ sku: ‘1126083251152’ }\n]\n}I’d like to find all documents where the substring of sku (length 11) in the “sizes” array is not equal to “optionId” value.I succed with aggregate() operation.\nIs there any way to do so with a find() operation ?Such a query is wrongfind({‘sizes.sku’:{$regex:/’$optionId.*’/}})", "username": "emmanuel_bernard" }, { "code": "", "text": "I tried something that make an error{$expr:{$not:{$in:[{$substr:[’$sizes.sku’,0,11]},[’$optionId’]]}}}", "username": "emmanuel_bernard" }, { "code": "$map{$map: { input: \"$sizes.sku\", in: {$substrCP:[\"$$this\", 0, 11] } } }\n{$map: { input: \"$sizes\", in: {$substrCP:[\"$$this.sku\", 0, 11] } } }\n[ \"18270562735\", \"18270562735\", \"18270562735\", \"18270562735\", \"18270562735\", \"18270562735\", \"18270562735\", \"18270562735\", \"11260832511\", \"11260832511\" ]$in$optionIddb.foo.find( {$expr: {$in: [ \n \"$optionId\", \n {$map: { input: \"$sizes\", in: {$substrCP:[\"$$this.sku\", 0, 11] } } } \n] } } )\n", "text": "There are a number of ways to do this, all of them require somehow trimming an array of sku’s to just first 11 characters, which you can do with $map:orNow that you have an array [ \"18270562735\", \"18270562735\", \"18270562735\", \"18270562735\", \"18270562735\", \"18270562735\", \"18270562735\", \"18270562735\", \"11260832511\", \"11260832511\" ] you can use $in to check if $optionId is there:Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "Hi Asya,Thanks a lot for your answer.I didn’t know it was possible to use $map in find() operator.\nIs there a full documentation to find all behaviors ?", "username": "emmanuel_bernard" }, { "code": "$expr", "text": "$expr allows use of all aggregation expressions.", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Find() method to query substring of a field value in array not equal to a field value
2022-05-13T12:32:18.002Z
Find() method to query substring of a field value in array not equal to a field value
4,591
https://www.mongodb.com/…020a326cd82a.png
[ "mdbw22-hackathon" ]
[ { "code": "Lead Developer AdvocateSenior Developer Advocate", "text": "So come, join in and ask questions. We will be sharing details and guidelines about the submission process and also the hackathon Prizes! We’d love for these sessions to be very participatory this week - so, if you have a demo to share, please reply here and we’ll send you an invite link. All participants get SWAG!!We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer Advocate", "username": "Shane_McAllister" }, { "code": "", "text": "We will be live in 30 minutes - do join!!You can watch on MongoDB Youtube and MongoDB Twitchor below here -", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon Office hours - APAC/EMEA Session
2022-05-18T21:44:13.105Z
Hackathon Office hours - APAC/EMEA Session
2,837
null
[]
[ { "code": "", "text": "sudo yum install -y mongodb-org\nMongoDB Repository 869 B/s | 392 B 00:00\nErrors during downloading metadata for repository ‘mongodb-org-5.0’:", "username": "Ashish_Wanjare" }, { "code": "$releaseverhttps://repo.mongodb.org/yum/redhat/8Server/mongodb-org/5.0/x86_64/repodata/repomd.xml8Server$releasever8.2", "text": "https://repo.mongodb.org/yum/redhat/8.2/mongodb-org/5.0/x86_64/repodata/repomd.xmlHa, it appears the docs may be wrong (@Stennie_X ??) because the repo doesn’t match on the $releasever expansion.In your repo file, change the URL to https://repo.mongodb.org/yum/redhat/8Server/mongodb-org/5.0/x86_64/repodata/repomd.xml … note the 8Server in place of the $releasever expansion to 8.2.", "username": "Jack_Woehr" }, { "code": "$releasever", "text": "Welcome to the MongoDB Community @Ashish_Wanjare !Can you confirm the specific distro & version of Red Hat you are running?The docs should be correct for Red Hat Enterprise Linux Server, but $releasever may vary if you happen to be using a variant like RedHat Enterprise Linux Workstation.A workaround would be to hardcode the version as @Jack_Woehr suggested, but if you can share more details there may be a documentation improvement needed as well.Thanks,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Yes thanks both of you @Jack_Woehr @Stennie_X\nIt’s work & now able to install MongoDB community edition on RedHat 8.2", "username": "Ashish_Wanjare" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Not able to install MongoDB on redhat
2022-05-18T06:56:12.105Z
Not able to install MongoDB on redhat
6,635
null
[]
[ { "code": " text: {\n query: \"radio tower\",\n path:{'wildcard': '*'},\n synonyms: 'synonymCollection'\n }\n", "text": "Hi,\nIf I search for a phrase, when I have synonym enabled, it searches for the whole phrase, rather than an or search for each individual word.So the above will only bring anything containing the whole phrase.", "username": "B_A" }, { "code": " compound: {\n should: [\n {\n text: {\n query: \"radio tower\",\n path: {\n \"wildcard\": \"*\"\n },\n synonyms: \"synonymCollection\"\n }\n },\n {\n text: {\n query: \"radio tower\",\n path: {\n \"wildcard\": \"*\"\n }\n }\n }\n ]\n }\n}", "text": "Hey there! You spotted a tricky one. This is something we are aware of and hoping to add solutions to improve, but what you are experiencing is the expected behavior.A short term fix for it is to use compound with two should clauses - one with the text query that uses synonyms, and another that doesn’t.", "username": "Elle_Shwer" }, { "code": "", "text": "Hi Elle,\nThank you for the reply. The issue with this is I wont get back any of the synonym results. So if search for transmission tower, with transmission being a synonym for radio ,I will only get back results for radio tower, and not where radio is by itself in a field. Then on the second text query, I will get results back for tower as normal. So the synonym part will be useless. With standard search if have radio in one field and tower in another field, this results have a higher score. But with synonyms I cant do this.", "username": "B_A" }, { "code": "", "text": "This is still the case, right? So if I want to search for each word individually, while using synonyms, should I handle the synonyms in JavaScript code instead of directly Mongo Alas?", "username": "Florian_Walther" } ]
Synonym search not working when searching for phrase
2022-01-26T16:58:39.157Z
Synonym search not working when searching for phrase
3,156
https://www.mongodb.com/…9364f388ba8.jpeg
[ "charts", "delhi-mug" ]
[ { "code": "Product Engineer, LooppanelThoughtFocus, Lead Database, and Cloud OperationsCo-Founder, The Coding CultureSoftware Engineer, LinkedIn", "text": "Delhi-NCR MongoDB User Group is excited to announce a meetup on the coming weekend to introduce you all to MongoDB Charts. MongoDB Charts is the best way to create stunning visualizations for your data. Join us for a demonstration of visualizing MongoDB data using this amazing tool.It will start with a quick no-code demo followed by a collaborative exercise with some amazing swag to win. Not to forget we have some exciting fun games as well.If you are a beginner or have some experience with MongoDB already, there is something for all you of you!Event Type: Online\nLink(s):\nLocation\nVideo Conferencing URLTo RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Product Engineer, LooppanelFull-Stack JavaScript Developer, passionate about technology and problem-solving. Possess experience in working with web technologies like ReactJS, AngularJS and Angular2, HTML, and CSS, NodeJS, Express, MongoDB, Firebase, and SQL.ThoughtFocus, Lead Database, and Cloud Operations–\nCo-Founder, The Coding Culture–\nSoftware Engineer, LinkedInJoin the Delhi-NCR group to stay updated with upcoming meetups and discussions.", "username": "Rohit_Kumar" }, { "code": "", "text": "Hi,Will that be an online webinar? Will it be recorded? Is it publicly accessible?", "username": "NeNaD" }, { "code": "", "text": "Hi,It will be an online event.", "username": "GeniusLearner" }, { "code": "", "text": "Hey @NeNaD,\nAs Sanchit mentioned, this would be an online meetup and thus, open for anyone located anywhere to join.Also, this being a user group meetup would be interactive and collaborative in nature, with the focus being introducing and getting everyone attending up to speed on MongoDB Charts.Hope to have you join and be a part of it. ", "username": "Harshit" }, { "code": "", "text": "Hi @Harshit,Thanks for additional details. Where can we find the link to join? I will definitely interested! Kind regards,\nNenad", "username": "NeNaD" }, { "code": "", "text": "That’s great to know The link is collapsed under the “Where” section in the description.\nScreenshot 2022-03-25 at 10.59.27 AM1282×308 16.4 KB\nI will post here as well for easy access: Launch Meeting - Zoom", "username": "Harshit" }, { "code": "", "text": "Hey Everyone,\nGentle Reminder, that the event starts in less than 24 hours. Yay!Here’s how you can be better prepared for the demo and exercise the speaker has for you:Please feel free to reply to this thread and ask if you get stuck anywhere. We are looking forward to seeing you all tomorrow at the event ", "username": "Harshit" }, { "code": "", "text": "The event begins in less than 25 mins! \nJoin here: Launch Meeting - Zoom", "username": "Harshit" }, { "code": "", "text": "Thanks all for kind information", "username": "SONU_VERMA" }, { "code": "", "text": "Breakout rooms: We will soon be breaking into rooms with a timer at the top.Exercise Details: Inside the breakout rooms you will see questions you to solve by plotting charts.Submission Form: Once done, or once the time is up, make sure you submit to the form. 
NOTE: Any team member can submit with their email address used for Zoom.Completeness (40%) - Correct use of data to answer all important questions.Visualisation (30%) - Creative and effective use of visual analytics to provide relevant and attractive charts and/or graphs that depict the sameStory Telling (30%) - Capability to tell a compelling and engaging story that is logical and criticalWhat are the total number of restaurants in Brooklyn?Amongst a certain set of restaurants, which have the maximum number of outlets?Which are the most popular restaurants by scores?Which restaurants have the maximum cuisines?Compare different cuisine types served by the restaurants.You need to submit your details, screenshot, and dashboard link to this form.", "username": "Harshit" }, { "code": "", "text": "Hey All, Jayesh Choudhary here,\nI was in 3rd place in the trivia.\nBelow are my contact details, in case you don't find my account details:email - [email protected] was very fun and I got to learn more about MongoDB ", "username": "jayesh_choudhary" }, { "code": "", "text": "Hi @Harshit,Is there a recording of the webinar available somewhere?", "username": "NeNaD" }, { "code": "", "text": "Hey @NeNaD\nThere is a recording and we will be publishing it soon. ", "username": "Harshit" }, { "code": "", "text": "Hey Everyone,\nThanks for joining the event last Saturday.Here's the recording of the event in case you missed it: MUG Delhi-NCR: Introduction to MongoDB Charts | March 26, 2022 - YouTubeSome important updates:We hope to see you in person at the next meetup!", "username": "Harshit" }, { "code": "", "text": "We have the results of the Charts Challenge!Congratulations to all the winners!@Ansh_Ganatra @jayesh_choudhary @Anurag_Gupta1@Preeti_Sharma @Sai_Charan3@Coder_Chirag @Gokul_Ks @i_koala @Parundeep_SinghWe will reach out to you this week to collect your details for sending you swag.", "username": "Harshit" }, { "code": "", "text": "yay\nthanks for organizing the event y'all!", "username": "Ansh_Ganatra" }, { "code": "", "text": "Congratulations to all the winners!! Let's keep on learning, growing and improving our skills with some fun and games.\nFor any roadblocks you may face in this journey, your questions are most welcome at the MongoDB Developer Community.\nAlso, please feel free to explore our free MongoDB University courses.Happy learning \nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hello\nI'd won the Kahoot Trivia and was supposed to be contacted last week.\nIs there any update on the same?", "username": "Ansh_Rathod1" }, { "code": "", "text": "Hey @Ansh_Rathod1,\nSorry for the delay, we were figuring out some operational things. Expect to hear from us in a couple of days. ", "username": "Harshit" }, { "code": "", "text": "Hello Organizers!\nI didn't receive the swag mail yet. Can you please let me know?", "username": "Prakhyat_Singhal" } ]
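For anyone revisiting the exercise, the first question can be answered with a plain aggregation before charting it. A sketch assuming the sample_restaurants dataset, which these questions appear to be based on (field names as in that sample data):

```javascript
// Sketch: total number of restaurants in Brooklyn, ready to chart.
db.restaurants.aggregate([
  { $match: { borough: 'Brooklyn' } },
  { $count: 'totalRestaurants' },
]);
```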
Delhi-NCR MUG: Introduction to MongoDB Charts
2022-03-21T04:05:12.924Z
Delhi-NCR MUG: Introduction to MongoDB Charts
11,859
null
[ "data-modeling", "database-tools", "backup" ]
[ { "code": "", "text": "We need to clone/ copy a collection from one db (Let’s call it main) to another (Let’s call it analytics) every 24 hours.Currently the best idea I’ve come up with is to do a mongodump to an s3 bucket and then use mongo restore to copy it to the analytics db.Is there any built in tool/ db sync, or is there an established best practice for doing this in mongo? All the docs seem to point at mongodump → restore or older versions of those.", "username": "Kai_N_A" }, { "code": "", "text": "Take a look at", "username": "steevej" }, { "code": "", "text": "Hello Kai, to go a bit deeper, is this other Database in another cluster or is it all within the same cluster?You could use a delayed node as steevej recommended or you could use a scheduled trigger with $out.\nEvery 24 hours run a $out to the new collection.Furthermore, if the other database is in another cluster, you could use Atlas Data Lake along with “$out to Atlas” and again a realm trigger to schedule it.This tutorial covers that use case, except instead of $out to S3 you would use $out to Atlas.Learn how to set up a continuous copy from MongoDB into an AWS S3 bucket in Parquet.Lastly, we actually have some new functionality coming at MongoDB world that I think might actually solve your problem better than these others, please reach out at [email protected] if you’d like to discuss it further and I can get you early access.Best,\nBen", "username": "Benjamin_Flast" }, { "code": "", "text": "Looking forward to this:some new functionality coming at MongoDB world that I think might actually solve your problem better than these others", "username": "steevej" }, { "code": "", "text": "Hi @steevej & @Benjamin_Flast, thanks for the responses and links.The other database will be within a seperate cluster; the idea being to create some anonymised data in the collection based on various collections in the main database and the copy this to a seperate db which will be used for analytics through Tableau (Assuming we’ll be using the mongo Tableau connector) without any connection or reference back to the main db.I’ll send you an email, wouldn’t mind having a look at the new functionality if it’s going to be a better solution.Cheers\nKai", "username": "Kai_N_A" }, { "code": "exports = async function() {\n\n const movies = context.services\n .get(\"DataLake0\")\n .db(\"Database0\")\n .collection(\"Collection0\");\n \n const pipeline = [\n {\n $match: {}\n }, {\n \"$out\": {\n \"atlas\": {\n \"projectId\": \"111111111111111111111111\",\n \"clusterName\": \"mflix\",\n \"db\": \"analytics\",\n \"coll\": \"test\"\n }\n }\n}\n ];\n return movies.aggregate(pipeline);\n};\n", "text": "Hi @Benjamin_Flast thanks again.\nLooks like we’ll take the DataLake route using a trigger with $merge or $out.I have one last question around DataLake & Atlas triggers.Is there a way to setup DataLake or Atlas triggers locally or in a Docker container?\nWe currently have a mongodb-memory-server that gets spun up for our integration tests. I’m hoping to do something similar for this scheduled task that will copy / update the analytics db. My thinking isn’t to test the triggers or mongo functionality; I assume you have that covered.\nI mainly want to have a canary test to indicate if something may be broken and ensure our queries are correct so we can catch things early. 
For example, if we have a schema change that may affect the aggregate query or something like that.In case someone else stumbles across this thread searching for the same thing, our approach, as a rough pseudocode-ish overview on the same db using the mflix movies example collection, is as follows:", "username": "Kai_N_A" }, { "code": "", "text": "Hello Kai,Unfortunately, no, there is no way to deploy Data Lake or Triggers locally; they are only available in Atlas.Do you have a Dev or QA project in Atlas where you could run these integration tests on a low tier cluster?-Ben", "username": "Benjamin_Flast" }, { "code": "", "text": "Hi Ben,We do; the problem is that if we use those, the data could potentially change when someone uses those environments, leading to inconsistent automated tests.\nI'll just test the query in isolation for now using the in-memory db. Ideally it would be a little closer to production, but at least it will cover the most likely potential cause of issues.Cheers\nKai", "username": "Kai_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
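Since the thread mentions "$merge or $out", here is what the $merge variant of the same trigger pipeline might look like; it upserts into the target instead of replacing the collection wholesale. This is a sketch using the same hypothetical project/cluster IDs as above, and the exact $merge options supported for an Atlas target should be checked against the current docs.

```javascript
// Sketch of the $merge variant: upsert into analytics.test rather than
// overwrite it (IDs/names are the same placeholders as above).
const pipeline = [
  { $match: {} },
  {
    $merge: {
      into: {
        atlas: {
          projectId: '111111111111111111111111',
          clusterName: 'mflix',
          db: 'analytics',
          coll: 'test',
        },
      },
      on: '_id',
      whenMatched: 'replace',
      whenNotMatched: 'insert',
    },
  },
];
```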
Copy atlas collection to another database
2022-05-03T06:06:22.520Z
Copy atlas collection to another database
13,594
https://www.mongodb.com/…0_2_1024x154.png
[ "dot-net" ]
[ { "code": "", "text": "Trying convert entity type in the conditional expression, unfortunately it doesn’t work. Is there other solution can help this.My information", "username": "chock_chen" }, { "code": "$convertOfType<T>()var coll = database.GetCollection<Asset>(\"Asset\");\nvar query = coll.AsQueryable()\n .OfType<Equipment>()\n .Where(x => x.Supplier == \"Supplier\");\n\nConsole.WriteLine(query);\ntest.asset.Aggregate([{ \"$match\" : { \"_t\" : \"Equipment\" } }, { \"$match\" : { \"Supplier\" : \"Supplier\" } }])\n_t", "text": "Hi, @chock_chen,Welcome to the MongoDB Community Forums. I understand that you’re trying to perform a cast in a where predicate using LINQ3, but it is failing. Cast operations map to the $convert function on the server and it is only able to convert to certain well known types such as numbers, strings, dates, and ObjectIds. Notably it doesn’t understand any user-defined types specified in C# class definitions or JSON schema definitions.If you’re trying to query for properties on a derived type, you can use OfType<T>(), which eliminates the need for a cast:Output:Note the query on the type discriminator _t. The exact predicate will depend on how you’ve configured your type hierarchy mapping. See Polymorphism in the documentation for more information.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Hi, @James_Kovacs ,\nThank you for your feedback! If the retrieval field comes from one derived class, it works just fine by your way. But from my case there are two derived classes (Equipment and Busbar) in the Linq expression, There seems no way to get results from a single query. ", "username": "chock_chen" }, { "code": "AppendStage<TResult>var client = new MongoClient();\nvar db = client.GetDatabase(\"test\");\nvar coll = db.GetCollection<Asset>(\"coll\");\n\nvar matcher = new BsonDocument {\n { nameof(Equipment.Supplier), \"Supplier\" },\n { nameof(Busbar), \"Number\" }\n};\n\nvar query = coll.Aggregate().Match(matcher);\n\nforeach (var result in query.ToList())\n{\n Console.WriteLine(result);\n}\nAsset[BsonIgnoreExtraElements]SupplierNumberEquipmentBusbarSupplierNumber", "text": "Hi, @chock_chen,Given how you’ve defined your class hierarchy, there is no way to express the query in C#. You can however express it in MQL using AppendStage<TResult> and the matcher expressed using BSON.Depending on how you persist your Asset class hierarchy, you will need to add [BsonIgnoreExtraElements] to certain classes or project out only the valid fields for a particular class. For example, if the returned BSON contains both Supplier and Number fields, then it cannot be deserialized into either an Equipment or Busbar object because there is an extra field.I would strongly encourage you to reconsider class models and database schema so that they are logically consistent. If you need to query on both Supplier and Number, you should have a C# base class that contains both those properties rather than relying on hand-rolled MQL.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Hi, @James_Kovacs ,\nI’m going to consider your suggestion to update class models. Thanks a lot for your help!!!", "username": "chock_chen" } ]
How to support entity type converting in LinqProvider.V3
2022-05-17T08:06:35.473Z
How to support entity type converting in LinqProvider.V3
4,159
null
[ "atlas-triggers" ]
[ { "code": "console.log(\"Change event is: \", JSON.stringify(changeEvent));", "text": "I’m really puzzled by something that feels like it must be super basic or an oversight on my part. I’ve created triggers on a collection, no match filter, and set to fire on insert, update, delete, and replace. They are getting fired, but only on changes to a last updated timestamp field we periodically update. They are not getting fired when I change a document in the collection in other ways, including manually in the atlas collection viewer/editor or via our other existing code paths besides the timestamp update.When I first came across the issue, I tried to simplify and recreate the problem and think I have a minimal example. I made a function trigger that does nothing but log the change event ( console.log(\"Change event is: \", JSON.stringify(changeEvent));). All I get are update type changes on the single timestamp field I mentioned.I expect unrelated, but I also noticed that I can’t create or edit a function to set “Skip Events on Re-Enable” to on. It won’t save - gives a “json: cannot unmarshal object into Go struct field dbConfigData.skip_catchup_events of type bool” error when saving.Any thoughts? I’m happy to share more, but there’s just not much in the trigger. Feels like something else in the collection is getting in the way of trigger firing, but I don’t know where to go from here.", "username": "Rob_Arnold" }, { "code": "https://realm.mongodb.com/groups/610932e76ef44e5b35860fd3/apps/615184b60d69430515e10ebc", "text": "Hi, can you send the URL for your realm application? It will look something like this:\nhttps://realm.mongodb.com/groups/610932e76ef44e5b35860fd3/apps/615184b60d69430515e10ebc. This is safe to send as employees have access to look at it.", "username": "Tyler_Kaye" }, { "code": "", "text": "https://realm.mongodb.com/groups/61f3ff816709e7673f97eaae/apps/622e8d0f8fd89c14fdd8b46cThe trigger named, appropriately enough, “trigger_test” should be fine to look at.", "username": "Rob_Arnold" }, { "code": "", "text": "I am not seeing any database or collection set. Does something show up there for you?\n\nScreen Shot 2022-05-18 at 4.05.28 PM2298×742 59.4 KB\n", "username": "Tyler_Kaye" }, { "code": "", "text": "Yes, it’s filled in appropriately with the db name and collection. Not sure why it didn’t populate for you, but I don’t understand how the permissions would work for Mongo employees. Can DM if you wish - wouldn’t think it’d be good to show our specific details. Same behavior with both triggers in the app btw.Thanks so much for looking at this. I appreciate how fast you responded and your help.", "username": "Rob_Arnold" }, { "code": "", "text": "For sure. Feel free to DM me or email me at [email protected]. Triggers are pretty well tested and I cant reproduce this on my own so I suspect that its just something odd with the setup. How are you changing the document such that events are not generated? Note that if the document doesnt actually change, then an event will not fire", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks, email sent. I promise to update here with whatever is figured out in order to help the next person.", "username": "Rob_Arnold" }, { "code": "", "text": "I am now getting the triggers. I believe the issue was pilot error. I think when I first saw the issue, it was due to missing many triggers due to a long execution time and timeout of the function blocking many of the calls. 
Once that was fixed, I think I missed the relatively few test changes I made among the backlog of backed-up timestamp changes.", "username": "Rob_Arnold" } ]
Triggers on a collection not firing except on one field change
2022-05-18T19:56:14.526Z
Triggers on a collection not firing except on one field change
4,283
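A quick way to verify which events a database trigger actually receives is a bare logging function. This is a minimal sketch assuming a standard Atlas database trigger wired to insert, update, delete, and replace with no match expression; the fields read here are the standard change event fields:

```javascript
// Minimal Atlas database trigger function: log every change event received.
// Configure the trigger for insert, update, delete, and replace,
// with no match expression, so nothing gets filtered out.
exports = function (changeEvent) {
  // Which kind of change fired the trigger
  console.log("operationType:", changeEvent.operationType);
  // The _id of the document that changed
  console.log("documentKey:", JSON.stringify(changeEvent.documentKey));
  // For updates, show exactly which fields changed
  if (changeEvent.operationType === "update") {
    console.log(
      "updatedFields:",
      JSON.stringify(changeEvent.updateDescription.updatedFields)
    );
  }
};
```

If only the timestamp field ever shows up in updatedFields, the other writes are either not actually changing the documents or are hitting a different namespace than the one the trigger watches.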
null
[ "dot-net" ]
[ { "code": "", "text": "In our application we need to sign the drivers with our snk file. Downloaded the source code and updated and signed the projects but we got this error when I complie MongoDB.Driver project:|Error|CS0122|‘IClock.UtcNow’ is inaccessible due to its protection levelPreviously, we can sign v2.13.1 from the source code without any problems.Do you have any ideas? Thanks!", "username": "Helena_Reyes" }, { "code": "MongoDB.DriverIClockIClockMongoDB.Driver.Core.MiscinternalMongoDB.Driver.Core[assembly: InternalsVisibleTo(\"MongoDB.Driver\")]MongoDB.DriverMongoDB.Driver.Coresrc/MongoDB.Driver.Core/Properties/AssemblyInfo.csAssemblyInfo.cs", "text": "Hi, @Helena_Reyes,Welcome to the MongoDB Community Forums. I understand that you’re having trouble compiling and signing the MongoDB.Driver (and related) assemblies from source.You mentioned the CS0122 compiler error related to the inaccessibility of the IClock interface. IClock is defined in the MongoDB.Driver.Core.Misc namespace and is marked internal. It is part of the MongoDB.Driver.Core assembly. That assembly is marked with [assembly: InternalsVisibleTo(\"MongoDB.Driver\")] to allow the MongoDB.Driver assembly access to the internal members of the MongoDB.Driver.Core assembly.When you strong name assemblies, the public key of the assembly becomes part of its strong name. Thus you will have to update src/MongoDB.Driver.Core/Properties/AssemblyInfo.cs to include the public key of your SNK. You will have to do the same for the other AssemblyInfo.cs files for other projects in the solution. You should then be able to compile the solution to produce strong-named assemblies.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Hi James! Thank you so much for your response.Yes, I also updated the AssemblyInfo.cs with the public key. Previously, we can apply SNK to the code without any errors. We just simply define snk in project properties, update Assemblyinfo.cs then build the code.This issue happens since v2.14.x onwards. Since then, we cannot strong name the assemblies from the source code.", "username": "Helena_Reyes" }, { "code": "", "text": "Hi, @Helena_Reyes,I am not aware of any changes in v2.14.x and later that would prevent you from manually strong naming the assemblies as you’ve been doing. If you are able to strong name v2.13.1 using your procedure, I don’t see a reason why v2.14.x or v2.15.x would pose a problem. Does the build output provide any more clues regarding why strong naming is failing other than the CS0122 compiler warning?I want to note that we don’t currently ship nor support strong named versions of our assemblies due to dependency conflicts and binding redirect problems that our users have encountered. We do realize that there are certain environments that require strong named assemblies. Please vote and/or comment on CSHARP-1276 with your use case as that will be helpful in understanding users’ needs for strong naming.Sincerely,\nJames", "username": "James_Kovacs" } ]
Strong name key
2022-05-04T12:59:41.285Z
Strong name key
2,341
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Hello fellow HackersWe DM’d the group, and there’s a new category too, BUT, on the off chance that you missed it - Read about the submission process HERE and then CLICK HERE to directly access the submission wizzard.So get those projects finished, and submitted!!Any questions? Just reply belowThe Hackathon Team", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
ICYMI - The Project Submissions form is open!
2022-05-18T21:36:17.860Z
ICYMI - The Project Submissions form is open!
2,523
https://www.mongodb.com/…_2_899x1024.jpeg
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Hello Hackers…We hope you are all deep into finalising your projects now that the finish line is in sight!We are livestreaming again tomorrow (Thursday 19th) and we’d love for some brave souls to join us to share their progress so far? Anyone up for it? If so, you will earn this fine item of clothing -\nScreenshot 2022-04-25 at 17.49.52957×1089 117 KB\nand of course, our eternal kudos and gratitude for your bravery and sense of community!C’mon - don’t be shy! We already have some teams on-board, so the more the merrier. Just reply to this post and we’ll swing an invite your way…and in time (postage/shipping delays notwithstanding), you’ll be wearing exclusive hacakthon swag!", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Demo your project! Get Swag!
2022-05-18T21:30:43.115Z
Demo your project! Get Swag!
3,741
null
[ "queries", "node-js", "data-modeling" ]
[ { "code": "const posts = await Post.find({ \n $or: [\n // only if req.body.category.general is TRUE\n {category: \"general\"}, \n // only if req.body.category.jobs is TRUE\n {category: \"jobs\"}, \n // only if req.body.category.events is TRUE\n {category: \"events\"}\n ]\n });\n", "text": "Hi everybody,i created a webapp, which works like a normal blog with postings. The posts can be filtered by the user.\nOne way to filter should be by category. There are three categories. My server gets an associative array with the name of the category as key and a boolean as value. If the value is true, the posts should by filtered by the key. It should be also possible to choose more than one category. So what i need is an OR-operation, with conditional queries. I’m struggeling to get it work… any tipps?Thank you! ", "username": "Felicia_Debye" }, { "code": " var categories = req.body.category;\n var selectedCategory = Object.keys(categories).filter(function(key){\n return categories[key];\n });\n var filter = [];\n for(let i of selectedCategory) {\n filter.push({category: {$regex: i, $options: \"i\"}});\n }\n\n \n try {\n if(filter.length > 0) {\n const posts = await Post.find({$or: filter}); \n res.status(200).json(posts);\n } else {\n const posts = await Post.find(); \n res.status(200).json(posts);\n } \n } catch (err) {\n res.status(404).json({message: err.message});\n }\n", "text": "I got a solution (for sure not the best way):", "username": "Felicia_Debye" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Conditional Queries with nodeJS
2022-05-18T12:26:06.202Z
Conditional Queries with nodeJS
7,644
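Since the category values here are exact strings, an $in query is a simpler alternative to building an $or of case-insensitive regexes. This is a sketch assuming the same Mongoose Post model and request shape as in the thread above; note that $in matches exactly, so it assumes the stored category values match the request keys’ casing:

```javascript
// Sketch: the same conditional filter using $in instead of $or + $regex.
// Assumes req.body.category looks like { general: true, jobs: false, events: true }.
const selected = Object.keys(req.body.category).filter(
  (key) => req.body.category[key]
);

try {
  // Only apply the filter when at least one category was selected;
  // an empty $in array would match no documents at all.
  const query = selected.length > 0 ? { category: { $in: selected } } : {};
  const posts = await Post.find(query);
  res.status(200).json(posts);
} catch (err) {
  res.status(404).json({ message: err.message });
}
```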
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 5.0.9-rc1 is out and is ready for testing. This is a release candidate containing only fixes since 5.0.8. The next stable release 5.0.9 will be a recommended upgrade for all 5.0 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 5.0.9-rc1 is released
2022-05-18T19:19:19.564Z
MongoDB 5.0.9-rc1 is released
2,720
null
[]
[ { "code": " mongoc_client_t *client;\n\n client = mongoc_client_pool_pop (pool);\n collection = mongoc_client_get_collection(client, \"database\", \"collection\");\n mongoc_client_pool_push (pool, client);\n\n // Use collection to insert, get or update documents after pushing the mongo client\n", "text": "Hi,\nI’m working on a multi-threading C server where I use mongo as database.\nFor that, I’m using the mongo connection pooling. When my server process multiple connections, I have noticed very high CPU usage by my application due to multiple mongo connections.My questions are :Is it possible to avoid opening multiple connections each time we pop a mongo client ?Is it possible to execute mongo operations after pushing a mongo client to the pool ? (as it shown below) :", "username": "KamelA" }, { "code": "", "text": "", "username": "Jack_Woehr" }, { "code": " mongoc_client_t *client;\n\n client = mongoc_client_pool_pop (pool);\n collection = mongoc_client_get_collection(client, \"database\", \"collection\");\n mongoc_client_pool_push (pool, client);\n\n // Use collection to insert, get or update documents after pushing the mongo client\n", "text": "Thanks for the reply, I solved the performance issue by removing minpoolsize config (it was set to 1). however after reading all the doc I couldn’t find an answer for my 2nd question :", "username": "KamelA" }, { "code": "mongoc_client_t", "text": "From this example and this api description it appears to me that after a client pool push you can no longer use the connection. You must obtain the mongoc_client_t via a client pool pop and push it back when done.", "username": "Jack_Woehr" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo c driver uses so much CPU
2022-05-17T13:48:51.377Z
Mongo c driver uses so much CPU
1,484
null
[]
[ { "code": "", "text": "Can Atlas Search perform country-specific faceting? For example, one product can have different distribution/classification characteristics in different countries.", "username": "Harshad_Dhavale" }, { "code": "productscountryoperator$searchMetafacetitem> db.products1234.find()\n{ \"_id\" : 1, \"item\" : \"Tea\", \"country\" : \"United States\" }\n{ \"_id\" : 2, \"item\" : \"Tea\", \"country\" : \"United States\" }\n{ \"_id\" : 3, \"item\" : \"Coffee\", \"country\" : \"United States\" }\n{ \"_id\" : 4, \"item\" : \"Coffee\", \"country\" : \"United States\" }\n{ \"_id\" : 5, \"item\" : \"Coffee\", \"country\" : \"United States\" }\n{ \"_id\" : 6, \"item\" : \"Coffee\", \"country\" : \"United States\" }\n{ \"_id\" : 7, \"item\" : \"Tea\", \"country\" : \"United Kingdom\" }\n{ \"_id\" : 8, \"item\" : \"Tea\", \"country\" : \"United Kingdom\" }\n{ \"_id\" : 9, \"item\" : \"Tea\", \"country\" : \"United Kingdom\" }\n{ \"_id\" : 10, \"item\" : \"Tea\", \"country\" : \"United Kingdom\" }\n{ \"_id\" : 11, \"item\" : \"Coffee\", \"country\" : \"United Kingdom\" }\n{ \"_id\" : 12, \"item\" : \"Coffee\", \"country\" : \"United Kingdom\" }\nstringFacetstring\"Tea\"> db.products1234.aggregate([\n... {\n... \"$searchMeta\": {\n... \"facet\": {\n... \"operator\": {\n... \"text\": {\n... \"path\": \"item\",\n... \"query\": \"Tea\"\n... }\n... },\n... \"facets\": {\n... \"countryFacet\": {\n... \"type\": \"string\",\n... \"path\": \"country\"\n... }\n... }\n... }\n... }\n... }\n... ]).pretty()\n{\n \"count\" : {\n \"lowerBound\" : NumberLong(6)\n },\n \"facet\" : {\n \"countryFacet\" : {\n \"buckets\" : [\n {\n \"_id\" : \"United Kingdom\",\n \"count\" : NumberLong(4)\n },\n {\n \"_id\" : \"United States\",\n \"count\" : NumberLong(2)\n }\n ]\n }\n }\n}\n", "text": "Inherently, Atlas Search or the Facets functionality are not “country aware”. However, Atlas Search can perform faceting based on the classification specified, to produce country specific faceting. For example, if there is a products collection having a country field in each document, then the operator part of the $searchMeta and facet query can filter on the item and produce the facet results based on that, which can be country specific.Here’s a quick example - given this products collection:Using an Atlas Search index that uses the stringFacet and string datatype mappings, we can run a query like this to perform country-specific faceting where the product is \"Tea\":The results returned show that for the product “Tea”, the count is 4 for country “United Kingdom” and the count is 2 for country “United States”.", "username": "Harshad_Dhavale" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can Atlas Search perform country-specific faceting?
2022-05-18T18:02:14.100Z
Can Atlas Search perform country-specific faceting?
1,328
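If the goal is facet counts for a single country rather than counts bucketed per country, the facet operator can also be a compound that filters on both fields. This is a sketch against the same products1234 collection, assuming the country field also carries a string mapping in the search index:

```javascript
// Sketch: restrict the "Tea" facet counts to one country using a
// compound operator with a filter clause inside $searchMeta.
db.products1234.aggregate([
  {
    "$searchMeta": {
      "facet": {
        "operator": {
          "compound": {
            "must": [{ "text": { "path": "item", "query": "Tea" } }],
            "filter": [{ "text": { "path": "country", "query": "United Kingdom" } }]
          }
        },
        "facets": {
          "countryFacet": { "type": "string", "path": "country" }
        }
      }
    }
  }
])
```

With the sample data above, this would return a single “United Kingdom” bucket with a count of 4.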
null
[ "replication", "python", "compass", "connecting", "sharding" ]
[ { "code": "", "text": "Hello,I am doing a personal project to acquire knowledge about different technologies, and I have reached a point with MongoDB that I do not know if what I want to do, can be done somehow, I explain:I have a replica set with a primary and two secondaries, and the project I am developing is with Python, and when I want to connect, the connection I do it against localhost:27017. The problem is when 27017 is down, because even if the replica set localhost:27018 and localhost:27019 are up, it doesn’t connect.\nSo, I wanted to know if there is a way for me to always connect to localhost:27017 (either from Python or Compass), even though in the backend it is actually connecting to localhost:27018.I’ve read a lot about it, but I can’t figure it out. I don’t know if with the Sharded Cluster and Router it could be done? I’m a bit lost on this.Thank you very much in advance, any help that allows me to continue researching and learning is appreciated. If you need more details, let me know and I will add them to the OP.", "username": "Jaime_Martin" }, { "code": "mongodb://localhost:27017,localhost27018,localhost:27019/myDatabase?replicaSet=myReplicaSetName\nreplicaSetmongos", "text": "Hi @Jaime_Martin and welcome in the MongoDB Community !First of all, if you want to learn more about MongoDB, you should check out the MongoDB University. It’s free and full of courses that will help you get up to speed with MongoDB. Given the context of your question, the M103 one should be just right for you.Now to answer your question. MongoDB works with a Replica Set (RS) of, usually, 3 nodes. If you want to connect to the full replica set rather than just a single node, you have to connect with the full connection string. In your case it’s something like:More about connection strings in the doc: https://www.mongodb.com/docs/manual/reference/connection-string/If you connect with the replicaSet option, then the driver will retrieve the RS config and identify the Primary in the list of servers. We say that the drivers are “replica set aware”. It knows the entire topology so it can adapt in case another nodes becomes Primary.Sharded clusters are an entire different story. You use a sharded cluster when you want to split the workload across multiple RS working together (=shards). All the shards are then reached through mongos nodes (=routers) which are usually hosted near the drivers. Usually you use these when you have more than 2 TB of data.Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hello Maxime,Thank you very much for your reply.Yes, so far that’s how I had been working, with that connection string, but I wanted to know if it was possible to do what I was commenting, so I understand no from your answer.Thank you very much for the links, I will take a look at them.Best regards,Jaime.", "username": "Jaime_Martin" }, { "code": "w=majority", "text": "Try to kill the primary during writes operations. The write operations will automatically move to another node once the election of the new primary is done.\nIf you are using a recent version of MongoDB and the driver (v4.2 and up), retryable writes are enabled by default. If you write with the write concern w=majority, you shouldn’t be missing any write operation at the end.", "username": "MaBeuLux88" }, { "code": "", "text": "I understand.Let’s see if you can clarify the last doubt I have, which I think I know the answer to, but just to make sure. 
If I have a replica set with 3 nodes (without an arbiter), and one goes down, how is the new primary automatically chosen? Couldn’t there be a case of a 1-1 tie in votes?I have read this on stackexchange and this is how I think it works, is this correct?“If you have 3 voting nodes in the replica set configuration and any single node is unavailable, the remaining 2 nodes still represent a strict majority of the replica set configuration (i.e. 2/3 nodes) and can elect a primary. The primary election requires a strict majority of voting nodes, so either 2 or 3 votes will elect a primary. With an even number of voting nodes (for example, 4) a strict majority will require n/2+1 votes (so 3 votes). With all members healthy, a 4 node replica set with an even number of votes could result in a 2/2 split and take longer to reach consensus.”Thank you very much for the time you dedicate to clarifying doubts and helping. Thank you very much.", "username": "Jaime_Martin" }, { "code": "", "text": "Hi @Jaime_Martin,Sorry for the delay, I had a baby since last time so the amount of mess in my life is definitely increasing.In my comment RS = Replica Set, P = Primary, S = Secondary, PSS = state of the RS with P, S & S in this case.If you have a 3-node RS:Let’s take note of one thing: with 3 nodes, I have a majority at 2 nodes. Each day a server has a certain probability of failing, and I start to have problems if 2 nodes fail.\nThe more nodes I have, the greater the chance of losing 2 of those nodes and having a problem. I’ll come back to this logic in a second.Now let’s try a RS with 4 nodes. Majority is at 3 now.\nThis means that I can only afford to lose one node.\nIt would be more interesting to have 5 nodes, because the majority is still 3 and now I can afford to lose 2 nodes instead of one.With a 3- or 4-node RS, I can only afford to lose 1 node. But with 4 nodes instead of 3, I now have a greater probability of losing 2 nodes than when I had only 3 nodes (cf. my comment above). So basically, with 4 nodes, I made my RS less resilient and less highly available (HA) than when I had 3 nodes.\nThat’s why we don’t recommend 4- or 6-node RSs and prefer 3-, 5- or 7-node RSs, which provide better high availability.Let’s take a final example to finish. Let’s say you only have 2 Data Centers (DC) available to deploy your MongoDB RS. It’s not optimal but that’s what you have.You follow the recommendations and go with a 3-node RS => DC1 takes 2 nodes and DC2 takes 1 node.\nThat’s the best possible option here. If DC1 goes down entirely => you lose 2 nodes. DC2 is in read-only. If DC2 goes down, you only lose 1 node. The 2 other nodes can perform an election if necessary.If you decided that you wanted some symmetry in there and decided that a 4-node RS was a better idea => DC1 takes 2 nodes. DC2 takes 2 nodes. Majority is still at 3 nodes… I guess you understand the rest now. If DC1 or DC2 goes down, you lose 2 nodes at once => no more majority. You are less resilient to a DC-level failure than with 3 nodes.3 nodes in DC1 and 2 nodes in DC2 would provide the same level of resilience to a DC-level failure but better resilience to server-level failure.The optimal solution here would be to bring a 3rd DC into the game and move one node from DC1 to DC3. Then you could afford to lose any DC entirely and still have 3 nodes in total with the 2 others.\nThe same logic applies if you have 3 DCs and a 3-node RS (1 in each DC).I hope it’s clearer. Ties aren’t the problem. 
If a tie ever occurs (not even sure it’s actually possible), then it’ll be resolved within the next second. The real problem is the majority and the probability of losing nodes.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Same connection string to connect to any node of a replica set
2022-04-29T06:47:06.417Z
Same connection string to connect to any node of a replica set
4,778
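To make the seed-list connection concrete, here is a sketch using the Node.js driver (the same URI works unchanged in PyMongo or Compass); the host list, replica set name, database, and collection are illustrative:

```javascript
// Sketch: connect to the whole replica set rather than one node.
// The driver discovers the current primary from any reachable seed,
// so the client keeps working when localhost:27017 is down.
const { MongoClient } = require("mongodb");

const uri =
  "mongodb://localhost:27017,localhost:27018,localhost:27019/myDatabase" +
  "?replicaSet=myReplicaSetName&w=majority&retryWrites=true";

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  const coll = client.db("myDatabase").collection("test");
  // Retryable writes re-run this once on the new primary if a
  // failover happens mid-operation.
  await coll.insertOne({ ping: new Date() });
  console.log("documents:", await coll.countDocuments());
  await client.close();
}

main().catch(console.error);
```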