Dataset columns (from the source dataset viewer):
image_url — string, lengths 113–131 (null when the thread has no preview image)
tags — sequence of strings
discussion — list of post objects ({ code, text, username })
title — string, lengths 8–254
created_at — string, length 24 (ISO 8601 timestamp)
fancy_title — string, lengths 8–396
views — int64, range 73–422k
null
[ "atlas-functions", "atlas-triggers" ]
[ { "code": "uncaught promise rejection: mongo: no documents in result", "text": "Randomly from time to time I’m getting this error when running Realm Functions manually or from scheduled trigger:\nuncaught promise rejection: mongo: no documents in result\nThis doesn’t seem to be a problem with the code, as I will run it again with the same parameters it finishes without a problem. Is that a problem with the size of my cluster, memory, CPUs? How can I investigate it closer, there is barely any information in logs.", "username": "Marcin_Pilarczyk" }, { "code": "uncaught promise rejection: mongo: no documents in result", "text": "I’m getting this same error sporadically from a triggered function in App Services/Realm. The function runs to its end, logs the result of a collection update, all without throwing an error, yet we get an error in the App Services logs, uncaught promise rejection: mongo: no documents in result. Can this error be safely overlooked? Or better still, explained and prevented?", "username": "Orion_Layton" } ]
Random "uncaught promise rejection" on Realm Functions and Triggers
2021-10-27T03:18:37.708Z
Random “uncaught promise rejection” on Realm Functions and Triggers
3,320
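The Go-style message ("mongo: no documents in result") surfacing as an *uncaught promise rejection* usually points at a driver call inside the function that was neither awaited nor caught, so the rejection escapes after the function has already logged its result. A defensive sketch of an Atlas Function — database, collection, filter and field names below are placeholders, not taken from the original posts:

```javascript
exports = async function () {
  const coll = context.services
    .get("mongodb-atlas")        // default Atlas service name; adjust if yours differs
    .db("mydb")                  // placeholder database name
    .collection("mycoll");       // placeholder collection name

  try {
    // Await every driver call so a late rejection cannot escape the function.
    const doc = await coll.findOneAndUpdate(
      { status: "pending" },                   // placeholder filter
      { $set: { processedAt: new Date() } },
      { returnNewDocument: true }
    );

    // findOneAndUpdate resolves to null when nothing matches; treat that as a
    // normal outcome instead of letting downstream code reject.
    if (doc === null) {
      console.log("No matching document; nothing to do this run.");
      return { matched: false };
    }
    return { matched: true, id: doc._id };
  } catch (err) {
    // Catching here keeps the failure out of the "uncaught promise rejection" path.
    console.log("Function failed:", err.message);
    throw err;
  }
};
```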
null
[ "queries", "next-js" ]
[ { "code": " typeerror SyntaxError: Unexpected token T in JSON at position 0 \n at GET (webpack-internal:///(sc_server)/./app/api/loadteachers/route.js:19:33)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async eval (webpack-internal:///(sc_server)/./node_modules/next/dist/server/future/route-modules/app-route/module.js:242:37)\n//Fetches the teachers from database.\nimport {MongoClient} from 'mongodb';\n\nexport async function GET()\n{\n \n console.log(\"😃 dbConnect executed.......\");\n const client = await MongoClient.connect(process.env.MONGODB_CONNECTION_STRING);\n const db = await client.db(\"WDN2\"); \n const teachers = await db.collection('Teachers')\n .find()\n .sort({_id: -1})\n .toArray(); \n \n //res.status(200).json({text: teachers});\n //console.log(\"From api route: \", teachers);\n //const data = await res.json();\n\n const data = await teachers.json();\n client.close();\n \n //return new Response(teachers)\n return data\n}\ntype//load teachers into the table.\n\"use client\";\n//import dbConnect from \"../../../utils/dbConnect\";\n\nasync function getStudents(){\n console.log(\"Get students working ........\");\n const res = await fetch(\"http://localhost:3000//api/loadteachers\");\n return res.json();\n}\n\nexport default async function Table()\n//const Table = ()=>\n{\n //Call api here.\n //const dbConnectfunc = await dbConnect();\n //console.log(\"hello this is the table component.\");\n //console.log(dbConnectfunc); \n const data = await getStudents();\n console.log(data);\n return(\n <div >\n <table className=\"table \">\n <thead>\n <tr>\n <th scope=\"col\">#</th>\n <th scope=\"col\">Date</th>\n <th scope=\"col\"> First Name</th>\n <th scope=\"col\"> Last Name</th>\n <th scope = \"col\"> Email </th>\n <th scope = \"col\"> Classes</th>\n <th scope = \"col\"> Students </th>\n <th scope = \"col\"> Remove</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th scope=\"row\">1</th>\n <td>5/17/22</td>\n <td> megan</td>\n <td> jones</td>\n <td>[email protected] </td>\n <td> 4</td>\n <td> 120 </td>\n <td> Remove</td>\n </tr>\n </tbody>\n </table>\n </div> \n ) \n} \n", "text": "I am using Nextjs 13.4 and mongo. I get the following error when I attempt to access the database with a query.:Here are my files:\nroute.jsTable.jsxHere is the folder structure:\napp\n— components\n---- Table.jsx\n— api\n---- loadteachers\n---- route.jsNot sure what else to do here.", "username": "david_hollaway" }, { "code": "", "text": "Is your collection name Teachers or teachers?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "It’s ‘Teachers’.\n\n0909×366 16.2 KB\n", "username": "david_hollaway" } ]
Error SyntaxError: Unexpected token T in JSON at position 0
2023-07-19T05:41:50.505Z
Error SyntaxError: Unexpected token T in JSON at position 0
1,128
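For readers landing here with the same stack trace: line 19 of the posted route is `const data = await teachers.json()`, but `toArray()` already resolves to a plain array that has no `.json()` method, so the handler never returns a proper JSON response for the client to parse. A sketch of the route handler rewritten for the App Router, keeping the database and collection names from the original post:

```javascript
// app/api/loadteachers/route.js — fetches the teachers from the database.
import { MongoClient } from "mongodb";

export async function GET() {
  const client = await MongoClient.connect(process.env.MONGODB_CONNECTION_STRING);
  try {
    const teachers = await client
      .db("WDN2")
      .collection("Teachers")
      .find()
      .sort({ _id: -1 })
      .toArray();

    // toArray() already gives a plain array; serialize it into the response
    // instead of calling .json() on it.
    return Response.json(teachers);
  } finally {
    await client.close();
  }
}
```

On the client, the double slash in `fetch("http://localhost:3000//api/loadteachers")` is also worth tidying; with the route above, `res.json()` then receives real JSON.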
null
[ "queries" ]
[ { "code": "{\n \"_id\" : ObjectId(\"60054d3e20b5c978eb93bb7f\"),\n\t\"items\": {\n\t\t\"o1n2rpon12prn\": {\n\t\t\t\"LastEvent\": \"On\"\n\t\t},\n\t\t\"sdbqw12f12f\": {\n\t\t\t\"LastEvent\": \"Off\"\n\t\t},\n\t\t\"yxvcaqeg23g23g\": {\n\t\t\t\"LastEvent\": \"Error\"\n\t\t}\n\t}\n}\n", "text": "Hi,I have a collection with many documents that do have a “item” property where the value is an object with key-value-pairs:Now I try to query all objects that have an item with “LastEvent” being “On” but I can’t find a way how to do that.I can find many examples to query nested arrays but not a dictionary like structure with key value pairs.\nOnly thing I found was aggregating and using objectToArray before matching but that sounds like a huge performance impact and it’s pretty hard to get done with C#. Is there no way to do something like “item.*.LastEvent”: “On”?", "username": "Wolfspirit" }, { "code": "{\n \"_id\" : ObjectId(\"60054d3e20b5c978eb93bb7f\"),\n\t\"items\": [\n\t\t{ \"k\" : \"o1n2rpon12prn\" ,\n\t\t\t\"LastEvent\": \"On\"\n\t\t},\n\t\t{ \"k\" : \"sdbqw12f12f\" ,\n\t\t \"LastEvent\": \"Off\"\n\t\t},\n\t\t{ \"k\" : \"yxvcaqeg23g23g\",\n\t\t \"LastEvent\": \"Error\"\n\t\t}\n\t]\n}\n", "text": "It will be hard to achieve something with the schema. I would consider using Building with Patterns: The Attribute Pattern | MongoDB Blog.You can then have a multi-keys index on items.k and with $elemMatch (or $unwind and $match) you can find the k for which LastEvent:On.Just the names o1n2rpon12prn, sdbqw12f12f and yxvcaqeg23g23g gives me the feeling that they should be values rather than keys.", "username": "steevej" }, { "code": "[\n {$addFields:{u:{$objectToArray:\"$items\"}}},\n {$match:{\"u\":{$elemMatch: { \"v.LastEvent\": \"On\" }}}},\n {$project:{u:0}}\n]\n", "text": "Thank you for the answer The keys are unique and basicly the id of the item (in my case they were just examples but they are based on UUIDs in reality).I was thinking about using an array and adding the key as an id field, but I found it hard to update the correct item later. I have multiple services that access the same database and add different values to the same “key” (sometimes even simultaneously). For example one adds/updates the “LastEvent” field and another service adds/updates a “CurrentState” field. Currently it works very easy by using an upsert and a simple set on “item.id.LastEvent” and the different services don’t interfere. If the key isn’t there yet it will just be created.\nWith arrays it seems to be much harder to upsert data into the array depending on a key or field inside the array.\nI think I could do something similar for existing values by filtering for the item id and then using “items.$.LastEvent” to update that object, but the filter won’t work if there is no such object yet and with upsert it will just create a whole new document then.On stackoverflow I found that question and the answers there is to use objects…but then I’m back here.\nnode.js - Can mongo upsert array data? 
- Stack OverflowAs multiple services might work on the same document I also can’t use two calls (pull and update).I also thought about using separate documents for the items but then my problem is that I’ve a retention policy set for this collection and I then need to figure out a way to remove the other documents aswell.As a workaround I’m now using an aggregate pipeline similar to that:But I’m a bit worried about performance as I might have a huge count of documents", "username": "Wolfspirit" }, { "code": "{ 'items.$**' : \"text\" } db.stores.find( { $text: { $search: \"On\" } } )\n", "text": "Hi @Wolfspirit,Have you considered using a text index where you will index all items like { 'items.$**' : \"text\" }Than query :Or aggregate with text search and later on filter with $map.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "{ 'items.$**' : \"text\" }mongo localhost> db.search.createIndex( { 'items.$**' : \"text\" } )\n{\n\t\"ok\" : 0,\n\t\"errmsg\" : \"Index key contains an illegal field name: field name starts with '$'.\",\n\t\"code\" : 67,\n\t\"codeName\" : \"CannotCreateIndex\"\n}\nmongo localhost> db.search.createIndex( { '$**' : \"text\" } )\n2021-01-19T08:24:52.091-0500 I INDEX [conn8] build index on: test.search properties: { v: 2, key: { _fts: \"text\", _ftsx: 1 }, name: \"$**_text\", ns: \"test.search\", weights: { $**: 1 }, default_language: \"english\", language_override: \"language\", textIndexVersion: 3 }\n2021-01-19T08:24:52.091-0500 I INDEX [conn8] \t building index using bulk method; build may temporarily use up to 500 megabytes of RAM\n2021-01-19T08:24:52.094-0500 I INDEX [conn8] build index done. scanned 1 total records. 0 secs\n{\n\t\"createdCollectionAutomatically\" : false,\n\t\"numIndexesBefore\" : 1,\n\t\"numIndexesAfter\" : 2,\n\t\"ok\" : 1\n}\ndb.search.find( { $text: { $search: \"On\" } } )mongo localhost> db.search.find().pretty()\n{\n\t\"_id\" : ObjectId(\"60054d3e20b5c978eb93bb7f\"),\n\t\"items\" : {\n\t\t\"o1n2rpon12prn\" : {\n\t\t\t\"LastEvent\" : \"On\"\n\t\t},\n\t\t\"sdbqw12f12f\" : {\n\t\t\t\"LastEvent\" : \"Off\"\n\t\t},\n\t\t\"yxvcaqeg23g23g\" : {\n\t\t\t\"LastEvent\" : \"Error\"\n\t\t}\n\t}\n}\n", "text": "Since I am not familiar with such indexes, I go ahead and try it.I reach an hurdle at creating the index:{ 'items.$**' : \"text\" }I got:Creating the index as given at https://docs.mongodb.com/manual/core/index-text/ worked:However, db.search.find( { $text: { $search: \"On\" } } ) gives no result despite:Both mongo and mongod are 4.0.5 on Arch Linux.I guess I miss something.", "username": "steevej" }, { "code": "items.$**db.search.createIndex( { '$**' : \"text\" },{ default_language: \"none\" , language_override : \"none\"} )\n db.search.aggregate([{$match : { $text: { \"$search\": \"on\" } }}])\n{ \"_id\" : ObjectId(\"60054d3e20b5c978eb93bb7f\"), \"items\" : { \"o1n2rpon12prn\" : { \"LastEvent\" : \"On\" }, \"sdbqw12f12f\" : { \"LastEvent\" : \"Off\" }, \"yxvcaqeg23g23g\" : { \"LastEvent\" : \"Error\" } } }\n", "text": "Hi @steevej and @Wolfspirit,Sorry I might confused Wild Card index syntax and text index and thought items.$** is acceptable.Now @steevej, you have hit an edge case with your search for the word “on” . 
The problem is that this word is considered a stop word like “and”,“or” and “the” etc.Therefore it is not indexed If you remove the default english and specifiy “none” they will:Now it returns a result:Please let me know if you have any additional questions.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks, I learned a lot.", "username": "steevej" }, { "code": "", "text": "Hello Steevej, is there any way to search the key in your reference if the key is already know ?any suggestion will save a day. ", "username": "Kaushal_Sengar" }, { "code": "", "text": "I do not understand your question.It is an old thread. Please start a new one.Provide sample documents, the queries and expected results.", "username": "steevej" }, { "code": "", "text": "I am asking if we provide the text value for searching it will search and return the document . is there any way to find the key for ex below document :- is there any way to search o1n2rpon12prn value and return all the document which contains this value:- I am reading your last year previous post so thought I can ask here only. sorry for the statement, I am not a good writer.\n{\n“_id” : ObjectId(“60054d3e20b5c978eb93bb7f”),\n“items” : {\n“o1n2rpon12prn” : {\n“LastEvent” : “On”\n},\n“sdbqw12f12f” : {\n“LastEvent” : “Off”\n},\n“yxvcaqeg23g23g” : {\n“LastEvent” : “Error”\n}\n}\n}", "username": "Kaushal_Sengar" }, { "code": "db.collection.find( { \"items.o1n2rpon12prn.LastEvent\" : \"On\" } )\n", "text": "see MongoDB Courses and Trainings | MongoDB University", "username": "steevej" }, { "code": " // Continue to iterate over array elements\n for (var i = 0; i < doc[key].length; i++) {\n if (typeof doc[key][i] === 'object') {\n this.Availablefields(doc[key][i], keys, prefix + key + '.' + i + '.');\n }\n }\n } else if (typeof doc[key] === \"object\") {\n this.Availablefields(doc[key], keys, prefix + key + \".\");\n } else {\n keys[prefix + key] = prefix + key;\n }\n }\n}\nreturn keys;\n", "text": "Availablefields(doc: any, keys: {}, prefix: string): any {\nfor (var key in doc) {\nif (!(key in keys) && key !== “_id”) {\nif (Array.isArray(doc[key])) {\n// Include array key name\nkeys[prefix + key] = prefix + key;}} you can get unknown nested key object and nested array by this logic", "username": "Deepak_Tak" } ]
Query nested objects in a document with unknown key
2021-01-18T19:44:59.733Z
Query nested objects in a document with unknown key
44,353
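A note on the upsert concern raised mid-thread: if the attribute pattern is adopted, `arrayFilters` lets independent services update different fields of the same keyed element without clobbering each other, and a multikey index then serves the read without `$objectToArray`. A mongosh sketch using the field names from steevej's reshaped document, with `db.collection` standing in for the real collection (creating a not-yet-existing element still needs a separate insert step or an update pipeline):

```javascript
// Targeted update of one field on the element whose k matches.
db.collection.updateOne(
  { _id: ObjectId("60054d3e20b5c978eb93bb7f") },
  { $set: { "items.$[elem].LastEvent": "On" } },
  { arrayFilters: [ { "elem.k": "o1n2rpon12prn" } ] }
);

// A multikey index plus $elemMatch replaces the $objectToArray workaround for reads.
db.collection.createIndex({ "items.LastEvent": 1 });
db.collection.find({ items: { $elemMatch: { LastEvent: "On" } } });
```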
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "I have just been working with Realm but I have hit a dead end when trying to save user data like names and phone numbers when they sign upso the idea is when a username signs up their full name and phone number gets saved on user data and they will receive an email for the confirmation as usualPlease help with a solution or idea on how I can go about achieving this using Realm", "username": "Jeremie_D_Negie" }, { "code": "", "text": "@Jeremie_D_Negie : Have thought of setting up a Realm function which get triggered as soon as sign up is complete.In that function you can save all the custom information you want in the User Model.", "username": "Mohit_Sharma" }, { "code": "exports = async function (authEvent, firstname, lastname, phoneNumber) {\n const user = authEvent.user;\n const collection = context.services\n .get(\"mongodb-atlas\")\n .db(\"educators\")\n .collection(\"user\");\n\n const updateDoc = { firstname, lastname, phoneNumber, userID: user.id };\n\n const result = await collection.insertOne(updateDoc);\n\n};\n", "text": "Hi @Mohit_Sharma , thank you for your answer.Yes, I have set up a function already but what I do not know what to do is pass data to that function.for example, my function looks like thisThe trigger will call this function on user sign up but how do I pass the required arguments so firstname, lastname, and phoneNumber", "username": "Jeremie_D_Negie" }, { "code": "", "text": "Can someone please help with this, I have bee sitting with this problem for a week now", "username": "Jeremie_D_Negie" }, { "code": "", "text": "@Jeremie_D_Negie : Once you get the sign-up success callback on the client you can call this function to save the custom information.", "username": "Mohit_Sharma" }, { "code": "", "text": "@Jeremie_D_Negie did u solve the problem ? if so tell me how please", "username": "Bouraoui_Ben_abdalla" }, { "code": "", "text": "@Mohit_Sharma i am using a trigger function so how can i pass the arguments from the activity on android studio to that function ?", "username": "Bouraoui_Ben_abdalla" }, { "code": "", "text": "I wasn’t able to solve the problem ey… I ended up rebuilding the authentication with Nodejs and Postgresql so I can have more control however I was building a project with NextAuth and what I did was show a user a form on the first login to set up their profile. 
I guess you could set up a way to check whether a user has, for example, a username or something similar if they do you take them to whatever page you otherwise you show them a profile form where they need to fill in their information… Please let me know if you do it otherway", "username": "Jeremie_D_Negie" }, { "code": "", "text": "in the same signup activity i logged in and get the user id upload data then i logged out and moved to the login activity", "username": "Bouraoui_Ben_abdalla" }, { "code": "", "text": "Okay, but MongoDB realm sends a verify email, why not get all those data when after the user verify their email>", "username": "Jeremie_D_Negie" }, { "code": "", "text": "@Bouraoui_Ben_abdalla : You can consider checking out this repo, where I can calling a simple SUM function to add two numbers, similarly you call function and pass arguments to it.", "username": "Mohit_Sharma" }, { "code": "", "text": "@Jeremie_D_Negie : Did you got a chance to validate the approach I suggested you earlier ?", "username": "Mohit_Sharma" }, { "code": "", "text": "I did, and first I was using Realm with Nextjs, and with the Realm client in javascript you need to be authenticated to call a function so the problem I had was a new user need to verify their account first before performing an action with Realm like calling a function. so that did not work well", "username": "Jeremie_D_Negie" }, { "code": "", "text": "G’Day @Jeremie_D_Negie, @Bouraoui_Ben_abdalla ,I acknowledge it has been a while since you asked these questions. I hope you have been able to find answers to it.For anyone else, who has the same questions.so the idea is when a username signs up their full name and phone number gets saved on user data and they will receive an email for the confirmation as usualThis is user-meta data. This depends on the auth provider you are using in your application. Refer user-meta data on how to enable these fields.Once enabled, you should be able to access these fields from your user object.You can enable Authentication Trigger which will run the function when the user sign-up and save those details in your user collection. You can look at Task Tracker Tutorial in your preferred Tech Stack for the same functionality.I hope the provided information helps.Cheers, ", "username": "henna.s" }, { "code": "const mongo = user.mongoClient(\"<atlas service name>\");\n const collection = mongo.db(\"<database name>\").collection(\"<collection name>\");\n const filter = {\n userID: user.id, // Query for the user object of the logged in user\n };\n const updateDoc = {\n $set: {\n favoriteColor: \"pink\", // Set the logged in user's favorite color to pink\n },\n };\n const result = await collection.updateOne(filter, updateDoc);\n", "text": "Hi… i’m having the same issue. I want to store extra information during registration(email/password provider) and reading the documentation that doesn’t seems possible? This sounds like a terrible omission IMHO.The documentation says you have to use some pure settings like this:Sorry but this is just… you don’t need to know the Atlas service name, the database name nor the collection for all your normal operations… but for this little feature your client need to know those values? It’s even worse considering that you have “refreshCustomData” right there… so the process/server knows where that data is stored. Why not add an user.updateCustomData() ?This feature should be a priority IMHO. There are a lot of triggers during user creation that could need this information. 
For example registering a customer information on stripe. With customer name as a needed value.I’m not seeing how the Authentication Trigger could be a solution considering that the information is not there yet. Unless the solution is to store the “registration” data somewhere… and then reading from there?", "username": "Mariano_Cano" }, { "code": "", "text": "I was in the same boat, I thought it was strange there would be no option to pass custom metadata on sign up. Turns out, there is! This links to the documentation section you’re looking for.https://www.mongodb.com/docs/atlas/app-services/users/custom-metadata/#user-creation-function", "username": "Peter_Rauscher" }, { "code": "", "text": "the issue is that function won’t get any information you pass in a signup form. You can add custom data you already know but for example, a form with “name” using email/password credential cannot be passed. You have to do some shenanigans to get this data through", "username": "Mariano_Cano" }, { "code": "\n \n app.currentUser.functions.logUserActive();\n setUser(app.currentUser);\n setUserLoading(false);\n return { success: true };\n } catch (error) {\n setUserLoading(false);\n return { success: false, error: error.message };\n }\n };\n \n \nconst emailPasswordSignup = async (\n name,\n email,\n password,\n confirmPassword\n ) => {\n try {\n if (password !== confirmPassword)\n throw new Error(\"Password and confirmation did not match.\");\n setUserLoading(true);\n await app.emailPasswordAuth.registerUser({ email, password });\n \n ", "text": "Yep, realized this myself on implementing. Spent a lot of time trying to pass custom data, but ultimately I just added a second Atlas Function that takes the custom data and adds it to the custom user data collection right after sign up.It feels a little hacky to separate the functions, but here’s how I achieved it in my React UserContext:", "username": "Peter_Rauscher" }, { "code": "updateCustomUserData", "text": "G’Day @Peter_Rauscher , and othersMany thanks for sharing your solution with the wider community . I am sure this can help people if they are having the same issue.This is the correct solution at this time. There is no updateCustomUserData method available in the SDKs to achieve what you are looking for.If there are any updates planned for the custom data feature, I would keep you all informed.If there is anything else, I can help you with. Please feel free to reach out.Cheers, \nhenna,\nCommunity Manager, MongoDB", "username": "henna.s" }, { "code": "", "text": "Henna,Glad to know we’ve found the best solution for the time being. I appreciate you getting back to us and confirming the accuracy of our findings!", "username": "Peter_Rauscher" } ]
How do I save user customData on Signup
2021-07-05T17:34:22.462Z
How do I save user customData on Signup
8,029
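For completeness, the flow Peter_Rauscher describes looks roughly like this in realm-web — register, log in (immediately only if email confirmation is set to automatic), then hand the extra form fields to an Atlas Function that writes the custom user data document. The App ID and the `saveProfile` function below are placeholders:

```javascript
import * as Realm from "realm-web";

const app = new Realm.App({ id: "<your-app-id>" }); // placeholder App ID

// registerUser() only accepts email and password, so the extra sign-up fields
// are persisted by a function call made right after the first login.
async function signUpWithProfile(email, password, firstname, lastname, phoneNumber) {
  await app.emailPasswordAuth.registerUser({ email, password });

  // Works immediately only when the app uses automatic user confirmation;
  // otherwise the login has to wait until the user confirms their address.
  const user = await app.logIn(Realm.Credentials.emailPassword(email, password));

  // "saveProfile" is a hypothetical Atlas Function that inserts
  // { firstname, lastname, phoneNumber, userID: context.user.id }
  // into the custom user data collection.
  await user.functions.saveProfile(firstname, lastname, phoneNumber);
  return user;
}
```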
null
[ "aggregation", "node-js" ]
[ { "code": "find", "text": "I am learning the aggregation framework and I have some questions. I would be very grateful for your answers.When should I use aggregation pipelines, and when should I use simple find queries and then process data in code?Does the aggregation pipeline execute in the database and only the result transfers via the network?", "username": "empty2" }, { "code": "", "text": "It can save you a few round trip of requests. And the db code may be optimized sometimes for special tasks, who knows.Aggregation can run i db shard or mongos , secondary or primary.", "username": "Kobe_W" }, { "code": "", "text": "Aggregation is the mainstream of MongoDB programming.\nUse it all the time because that’s the way to get familiar.", "username": "Jack_Woehr" }, { "code": "", "text": "But I guess we can not use just one option every time. Aggregation pipelines have drawbacks and minuses. Sometimes writing the same functionality but in a programming language like JS would be easier.And I doubt when to use aggregation pipelines and when to omit them.", "username": "empty2" }, { "code": "find()", "text": "Well, if you were doing this in a relational database, when would you use:It’s the same thing. But it’s hard to do much in MongoDB without aggregation beyond a simple find(),", "username": "Jack_Woehr" }, { "code": "find()find()update() const x = await dbFindX({\n _id: 'someId',\n });\nconst y = await dbFindY({\n _id: x.yId,\n });\n", "text": "But it’s hard to do much in MongoDB without aggregation beyond a simple find(),But I can do everything with a simple find() update() and so on. When I’m combining them on level of application.For example, instead of lookup I can write something like this:And mostly everything can be written in the same way in application level instead of writing it in database level.", "username": "empty2" }, { "code": "", "text": "You can so that but running server side you get access to indexes, data and server runtime in it’s most pure form. Say you wanted to group 5M records, by fields A and B, summing field C and then outputting the results to a new collection, or using that to update another collection based on that data for monthly sales figure aggregation or similar.\nYou can pull all that data back to the client, limiting fields, but then you need to process all that data client side, form the updates and send back to the server.\nYou can do that in one aggregation call to the server, which runs in the server process with covered indexes so is running physically in memory, even over a cluster of sharded collections.Personally I use it constantly in the day, to explore and analyse data. I can sit down with a BA who needs something and in the space of minutes we can quickly drill into the data, slicing and dicing it and then collating and outputting, all from one window.Of course it has its place, you shouldn’t use it to reformat everything that flows into or out of your application, you should model the data as it’s used and cope with the edge cases or special situations. If 99% of the application is presenting data in a certain way, then keeping it together makes sense and you can then add indexes etc to help cope with the special things, or build to cope with that.As you can probably tell, I like it, you can gradually build up aggregation pipelines and debug them easily by adding them in and removing them, even debugging performance with explains. 
It’s probably what I find most powerful about Mongo.Of course, if you’re using a purely relation model…then use a relational database, but don’t use Oracle…friends don’t let friends use Oracle…", "username": "John_Sewell" }, { "code": "", "text": "I guess the most important reasons why we should not use aggregation pipelines are:@Jack_Woehr @John_Sewell\nWhat do you think about that?", "username": "empty2" }, { "code": "", "text": "While vendor lock-in is an issue, there is a balance to be found, for example if I want to limit data from a SQL based server…SQL Server, Oracle and others have different options for Limit/Top so I could have my client pull all data back and only take the first X records, or if the grouping styles were different I could ignore all the server functions available and implement grouping and rollup etc in client code.For the second point, I can’t comment that much as I don’t use TypeScript (much…I’m learning Angular at the moment).As I said before, for your bread and butter calls to Mongo for your app, you probably don’t want to be doing complex aggregations as the data will be in a usable format.\nFor other scenarios, say reporting that are run on occasion or statistic generation it comes down to do you want to harness the power of the server side processing that aggregation gives or re-write the server code in an interpreted language. With large datasets this could be a huge performance hit, plus you are now maintaining code that’s an implementation of something that already on the server.If you really want to abstract things out, then put an extra data access abstraction layer in there so your application can talk to Mongo, Oracle or csv files, but you definitely are creating more complex code now!", "username": "John_Sewell" }, { "code": "", "text": "easy to understand vs MongoDBIf you want to use an RDB, you use SQL. If you want to use MongoDB, you use aggregation. It’s just part of the game. MongoDB aggregation pipelining is the most efficient way to run complex aggregation over a MongoDB database.Node.js driver typed very badly.Perhaps, I’m not an expert. It’s open source. Contribute code to the project to fix that deficiency.In the end it’s your decision. What works for you, works for you. You don’t need anyone’s approval to do what you are doing!", "username": "Jack_Woehr" } ]
When to use aggregation pipelines instead of processing data at the app level?
2023-07-17T18:45:41.841Z
When to use aggregation pipelines instead of processing data at the app level?
618
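John_Sewell's 5M-document example reads like this as the single server-side call he is describing — only the grouped totals ever leave the server, and `$merge` writes them straight into another collection (collection and field names here are illustrative):

```javascript
db.sales.aggregate(
  [
    // Group by A and B and sum C entirely inside the server process.
    { $group: { _id: { a: "$A", b: "$B" }, totalC: { $sum: "$C" } } },
    // Write the rollup to another collection without a client round trip.
    { $merge: { into: "monthlySalesTotals" } }
  ],
  { allowDiskUse: true } // lets a very large grouping spill to disk instead of failing
);
```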
null
[ "kotlin" ]
[ { "code": "query<EntityClass>(\"_id IN $0\", ids)\n", "text": "Hello. I’m encountering a problem where I need to query entities with ids in a list. So I do:where ids is the list of ids, but my app crashes, saying this type is not supported. What worked for me was to manually join the list into a static query, or into comma-separated templates.Documentation here shows that it’s possible to provide a list as a single argument (third example), but it doesn’t seem to be in practice when using Kotlin SDK.Am I missing something? Or is manually building the query the correct way? What if I have, say, 100 elements in the list?", "username": "TheHiddenDuck" }, { "code": "1.10.0", "text": "What version of the SDK are you using? We added support for passing lists as arguments in version 1.10.0.", "username": "Clemente_Tort_Barbero" }, { "code": "", "text": "Hello. I’m using 1.9.0. I will try updating to 1.10.0 today! Thanks!", "username": "TheHiddenDuck" } ]
Using IN in Kotlin SDK queries
2023-07-19T09:11:13.687Z
Using IN in Kotlin SDK queries
499
null
[ "golang" ]
[ { "code": "(BSONObjectTooLarge) Executor error during getMore :: caused by :: BSONObj size: 19665104 (0x12C10D0) is invalid. Size must be between 0 and 16793600(16MB) First element: _id: { _data: \"8262EB3576000002BB2B022C0100296E5A1004333EBB2C7745456497CD595172557DDC46645F696400645F20B948F93E9425F70F739F0004\" }\n_id._data", "text": "We have implemented a cluster-level change stream watcher in the golang driver to help migrate schema and migrate clusters. We’ve been touching each document’s updated date to get the stream to pick it up. Been working great. While watching, my cursor broke out with:I can’t re-open the change stream as the resume token is stuck on this document. The hope would be to perform the fix on the old document and re-open the change stream at that resume token. The problem is, I can’t locate the document - we have thousands of collections (one of the reasons why we’re doing this data migration project), and I can’t figure out which document, in particular, is the problem child. In production, I’d like to be able to automatically handle the error - perform a known fix of trimming an overly long array, but I can’t figure out how to get from the error message to the document that needs fixing. We need access to the full document each time for our use case, so we can’t project the large field away altogether. The calltime on the fix can be complex, so if it’s something like…using the resume token and directly querying the oplog and performing the update on the old document, that’s doable.Any help is appreciated.", "username": "TopherGopher" }, { "code": "", "text": "Hi, I just ran into the same problem. Wondering if you managed to solve it somehow.\nCheers", "username": "Jannik_Schmiedl" } ]
Automating fix for BSONObjectTooLarge from cluster level change stream
2022-08-08T16:45:04.307Z
Automating fix for BSONObjectTooLarge from cluster level change stream
2,216
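For anyone hitting this today: newer servers (6.0.9 and 7.0+) can split oversized change events into ordered fragments with a dedicated pipeline stage, which avoids the getMore failure entirely; on older servers the usual workaround is a change-stream pipeline stage that trims the field known to blow past the limit. A mongosh sketch — the namespace and the `bigArray` field are placeholders:

```javascript
// MongoDB 6.0.9 / 7.0 or later: oversized events arrive as fragments
// (splitEvent.fragment of splitEvent.of) instead of erroring the cursor.
const split = db.getSiblingDB("mydb").mycoll.watch(
  [{ $changeStreamSplitLargeEvent: {} }],
  { fullDocument: "updateLookup" }
);

// Older servers: keep the event under 16MB by trimming the offending array
// in fullDocument while preserving the rest of the document.
const trimmed = db.getSiblingDB("mydb").mycoll.watch(
  [{ $addFields: { "fullDocument.bigArray": { $slice: ["$fullDocument.bigArray", 1000] } } }],
  { fullDocument: "updateLookup" }
);
```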
null
[ "java", "android", "kotlin", "flexible-sync" ]
[ { "code": "io.realm.kotlin.mongodb.exceptions.ServiceException: [Http][HttpError(4309)] http error code considered fatal. Server Error: 504.\n at io.realm.kotlin.mongodb.internal.RealmSyncUtilsKt.convertAppError(SourceFile:171)\n at io.realm.kotlin.mongodb.internal.RealmSyncUtilsKt$channelResultCallback$1.onError(SourceFile:66)\n at io.realm.kotlin.internal.interop.realmcJNI.complete_http_request(SourceFile)\n at io.realm.kotlin.internal.interop.realmc.complete_http_request\n at io.realm.kotlin.internal.interop.sync.ResponseCallbackImpl.response(SourceFile:26)\n at io.realm.kotlin.mongodb.internal.KtorNetworkTransport$sendRequest$1.invokeSuspend(SourceFile:123)\n at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(SourceFile:33)\n at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SourceFile:138)\n at io.ktor.util.pipeline.SuspendFunctionGun.loop(SourceFile:112)\n at io.ktor.util.pipeline.SuspendFunctionGun.access$loop\n at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SourceFile:62)\n at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(SourceFile:46)\n at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SourceFile:138)\n at io.ktor.util.pipeline.SuspendFunctionGun.loop(SourceFile:112)\n at io.ktor.util.pipeline.SuspendFunctionGun.access$loop\n at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SourceFile:62)\n at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(SourceFile:46)\n at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SourceFile:138)\n at io.ktor.util.pipeline.SuspendFunctionGun.loop(SourceFile:112)\n at io.ktor.util.pipeline.SuspendFunctionGun.access$loop\n at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SourceFile:62)\n at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(SourceFile:46)\n at io.ktor.util.pipeline.SuspendFunctionGun.resumeRootWith(SourceFile:138)\n at io.ktor.util.pipeline.SuspendFunctionGun.loop(SourceFile:112)\n at io.ktor.util.pipeline.SuspendFunctionGun.access$loop\n at io.ktor.util.pipeline.SuspendFunctionGun$continuation$1.resumeWith(SourceFile:62)\n at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(SourceFile:46)\n at kotlinx.coroutines.DispatchedTask.run(SourceFile:106)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)\n at java.lang.Thread.run(Thread.java:923)\n", "text": "I got a crash in sentry with the bellow stacktrace. What’s the meaning of this and how to avoid?", "username": "Gaurav_Bordoloi" }, { "code": "", "text": "Hello @Gaurav_Bordoloi ,Thanks for raising your concerns. The server Error 504 appears to be coming from the sync client. It is possible server was under a heavy load.Could you please share if you are still getting the same crash? I will get feedback if this is still happening.I look forward to your reply.Cheers, \nhenna,\nCommunity Manager, MongoDB", "username": "henna.s" }, { "code": "", "text": "Hi,\nNo, I am not facing the issue anymore but I am using a M10 paid instance with very less data and app clients, so the server shouldn’t be under a heavy load. I am using M10 for both prod and dev and if it is happening in dev, I doubt my prod server is stable. Any comment on this?", "username": "Gaurav_Bordoloi" }, { "code": "", "text": "Hi @Gaurav_Bordoloi,Thanks for your response back. 
M10 and M20 cluster tiers support development environments and low-traffic applications.One of our engineers in the Cloud team has listed some helpful steps on how you can do performance testing on your cluster tier.You can also refer to Cluster configuration details in MongoDB’s official documentation.Please let me know if I can help you with anything else.Cheers,\nhenna,\nCommunity Manager, MongoDB", "username": "henna.s" } ]
ServiceException crash in Android when using kotlin sdk
2023-07-08T04:27:01.592Z
ServiceException crash in Android when using kotlin sdk
705
https://www.mongodb.com/…c_2_1024x393.png
[]
[ { "code": "", "text": "There seems to be issue in one question. Please advise.\n\nScreenshot 2023-07-19 at 11.55.50 AM2360×908 182 KB\n\nOption B and C are exactly the same. Please let me know what am i missing in it.", "username": "Debabrata_Patnaik" }, { "code": "", "text": "Option B and C are exactly the same.Exactly. And this why C cannot be added.A cannot be inserted because _id:1 exists.\nB can be inserted because _id:5 does not exists.\nC cannot be inserted because once B is inserted _id:5 exists.\nD can be added because _id:6 does not exists.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Incorrect Answer in practice test
2023-07-19T06:57:35.869Z
Incorrect Answer in practice test
627
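steevej's reasoning can be checked directly in mongosh — an insert whose _id already exists is rejected with a duplicate key (E11000) error, so once _id values 1 and 5 are present, A and C fail while B and D succeed (`db.quiz` is a placeholder collection):

```javascript
// Assume the collection from the practice question already contains { _id: 1 }.
db.quiz.insertOne({ _id: 1 });   // option A — fails: E11000 duplicate key on _id 1
db.quiz.insertOne({ _id: 5 });   // option B — succeeds: _id 5 did not exist yet
db.quiz.insertOne({ _id: 5 });   // option C — now fails: _id 5 was just inserted by B
db.quiz.insertOne({ _id: 6 });   // option D — succeeds: _id 6 does not exist
```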
https://www.mongodb.com/…b7e3201ac995.png
[ "cxx", "field-encryption" ]
[ { "code": "HMAC data authentication failedCustomer Master Key Data Encryption KeyMaster Keymongo --eval \"var LOCAL_KEY = '$MASTER_KEY'\" --shell -u Adminvar autoEncryptionOpts = \n {\n \"keyVaultNamespace\" : \"keyvault.datakeys\",\n \"bypassAutoEncryption\" : true,\n \"kmsProviders\" : { \"local\" : { \"key\" : BinData(0, LOCAL_KEY) } }\n }\n", "text": "Hai,We are currently testing Mongo CSFLE using C++ for our project. We’re using MongoDB 4.4 version. Previously, we faced challenges while encrypting data with CSFLE in C++ Code. Now, we want to decrypt the latest encrypted data through Mongo Shell. However, we are facing the error HMAC data authentication failed. We’re using the right Customer Master Key & Data Encryption Key as we’ve cross checked.We’re storing the Master Key locally and importing it into Mongo Shell Environment using command mongo --eval \"var LOCAL_KEY = '$MASTER_KEY'\" --shell -u AdminDefined autoEncryptionOpts asWhen we query the encrypted Collection using the MongoClient with CSFLE options only. We’re getting thisAnd, we’re able to decrypt the documents from Code. Unable to do it via Mongo Shell.\nPlease look into it and help us to solve this issue.", "username": "Bhargav_Sai" }, { "code": "", "text": "@Pavel_Duchovny Could you please look into this and help us out.?", "username": "Bhargav_Sai" }, { "code": "", "text": "What was the solution for this ?", "username": "Navaneethakumar_Balasubramanian" } ]
Unable to Decrypt Data using CSFLE through MongoShell - HMAC Failed
2022-12-26T09:43:10.972Z
Unable to Decrypt Data using CSFLE through MongoShell - HMAC Failed
1,760
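Not a resolution of the original report, but for reference this is the general shape of a decrypting connection in the legacy mongo shell. An HMAC failure typically means the key material the shell ends up with is not byte-for-byte the key the writer used, so it is worth confirming that the base64 in $MASTER_KEY decodes to the same 96 bytes the C++ application loads. Host and namespaces below are placeholders:

```javascript
// Run inside the session started with --eval "var LOCAL_KEY = '$MASTER_KEY'".
var autoEncryptionOpts = {
  keyVaultNamespace: "keyvault.datakeys",
  bypassAutoEncryption: true, // decrypt on read only, as in the original post
  kmsProviders: { local: { key: BinData(0, LOCAL_KEY) } }
};

// Build a second, encryption-aware connection from within the shell.
var encryptedClient = Mongo("localhost:27017", autoEncryptionOpts); // placeholder host

// Reads through this client return plaintext if the key vault and master key match.
encryptedClient.getDB("medicalRecords").getCollection("patients").findOne();
```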
null
[ "compass", "mongodb-shell", "php" ]
[ { "code": "", "text": "Hello,i have a server php with mongodb\nAll is working :But : i can’t access to db via compassI use this :\nmongodb://user:mdp@hostname ( exactly like my php )And compas return me a timeout error.I don’t understand why…\nI have check net.bindip and i have 2 ip set ( my localhost and ip remote server )Thanks to advance", "username": "Adrien_Didier" }, { "code": "", "text": "You probably don’t have mongod configured to accept connections via the external TCP/IP interface.", "username": "Jack_Woehr" }, { "code": "iptables -A INPUT -s <ip-address> -p tcp --destination-port 27017 -m state --state NEW,ESTABLISHED -j ACCEPT\n", "text": "Ok thanks Jack to give me the road to solutionSo, the probleme it was that i need to add my ip local machine to ip list authorized to listening :27017The doc exact for me is : https://www.mongodb.com/docs/v6.0/tutorial/configure-linux-iptables-firewall/And i had just plat this line :I hope it was not a mistake but anyway : now compass connection is ok", "username": "Adrien_Didier" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Access remote server mongo with compass
2023-07-18T14:01:47.531Z
Access remote server mongo with compass
513
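Alongside the firewall rule, mongod itself must listen on the external interface Jack mentions. A minimal net section for /etc/mongod.conf — the second address is a placeholder for the server's own LAN or public IP, and mongod needs a restart after the change:

```yaml
net:
  port: 27017
  bindIp: 127.0.0.1,<server-ip>   # keep localhost and add the interface Compass connects to
```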
null
[ "java" ]
[ { "code": "", "text": "Can there be native support to ZonedDateTime in MongoDB java Driver sync. Seems it only have LocalDateTime implemented as part of JSR310\nSupported classes are The high level API contains the following key classes:", "username": "Debabrata_Patnaik" }, { "code": "", "text": "Thanks for letting us know about your feature request for native support of ZonedDateTime.While we don’t have this on our roadmap currently, I’ve added it to the JAVA project so that we can gauge popularity and interest. Feel free to upvote it here, and to add any additional information about your specific use case.", "username": "Ashni_Mehta" }, { "code": "serverSelectionTryOnce", "text": "Fatal error : Uncaught MongoDB\\Driver\\Exception\\ConnectionTimeoutException: No suitable servers found (serverSelectionTryOnce set): [Failed to receive length header from server. calling hello on ‘cluster0-shard-00-02.sw31d.mongodb.net:27017’] [Failed to receive length header from server. calling hello on ‘cluster0-shard-00-00.sw31d.mongodb.net:27017’] [Failed to receive length header from server. calling hello on ‘cluster0-shard-00-01.sw31d.mongodb.net:27017’] in C:\\xampp\\htdocs\\php_mongdb\\vendor\\mongodb\\mongodb\\src\\functions.php:520 Stack trace: #0 C:\\xampp\\htdocs\\php_mongdb\\vendor\\mongodb\\mongodb\\src\\functions.php(520): MongoDB\\Driver\\Manager->selectServer(Object(MongoDB\\Driver\\ReadPreference)) #1 C:\\xampp\\htdocs\\php_mongdb\\vendor\\mongodb\\mongodb\\src\\Collection.php(932): MongoDB\\select_server(Object(MongoDB\\Driver\\Manager), Array) #2 C:\\xampp\\htdocs\\php_mongdb\\index.php(14): MongoDB\\Collection->insertOne(Array) #3 {main} thrown in C:\\xampp\\htdocs\\php_mongdb\\vendor\\mongodb\\mongodb\\src\\functions.php on line 520how can i fix this", "username": "Gimhan_De_Silva" }, { "code": "abstract class DateTimeBasedCodec<T> implements Codec<T> {\n\n long validateAndReadDateTime(final BsonReader reader) {\n BsonType currentType = reader.getCurrentBsonType();\n if (!currentType.equals(BsonType.DATE_TIME)) {\n throw new CodecConfigurationException(format(\"Could not decode into %s, expected '%s' BsonType but got '%s'.\",\n getEncoderClass().getSimpleName(), BsonType.DATE_TIME, currentType));\n }\n return reader.readDateTime();\n }\n\n}\npublic class ZonedDateTimeCodec extends DateTimeBasedCodec<ZonedDateTime> {\n\n @Override\n public ZonedDateTime decode(final BsonReader reader, final DecoderContext decoderContext) {\n return Instant.ofEpochMilli(validateAndReadDateTime(reader)).atZone(ZoneOffset.UTC);\n }\n\n @Override\n public void encode(final BsonWriter writer, final ZonedDateTime value, final EncoderContext encoderContext) {\n try {\n writer.writeDateTime(value.toInstant().toEpochMilli());\n } catch (ArithmeticException e) {\n throw new CodecConfigurationException(format(\"Unsupported ZonedDateTime value '%s' could not be converted to milliseconds: %s\",\n value, e.getMessage()), e);\n }\n }\n\n @Override\n public Class<ZonedDateTime> getEncoderClass() {\n return ZonedDateTime.class;\n }\n}\npublic class Jsr310CodecProvider implements CodecProvider {\n private static final Map<Class<?>, Codec<?>> JSR310_CODEC_MAP = new HashMap<>();\n\n static {\n try {\n putCodec(new ZonedDateTimeCodec());\n } catch (ClassNotFoundException classNotFoundException) {\n // empty catch block\n }\n }\n\n private static void putCodec(Codec<?> codec) {\n JSR310_CODEC_MAP.put(codec.getEncoderClass(), codec);\n }\n\n @SuppressWarnings(\"unchecked\")\n @Override\n public <T> Codec<T> get(Class<T> 
clazz, CodecRegistry registry) {\n return (Codec<T>) JSR310_CODEC_MAP.get(clazz);\n }\n\n @Override\n public String toString() {\n return \"Jsr310CodecProvider{}\";\n }\n}\n", "text": "I am doing the following in java please confirm if i am on the right path\nThis is based on implementation of mongodb jsr310DateTimeBasedCodecZonedDateTimeCodecJsr310CodecProvider.java", "username": "Debabrata_Patnaik" } ]
Support for ZonedDateTime with MongoDB using Codec for Java
2023-01-05T04:57:44.405Z
Support for ZonedDateTime with MongoDB using Codec for Java
3,155
null
[]
[ { "code": "{\n \"_id\": \"64b03ed794d87927a3066e13\",\n \"startDateTime\": \"2023-07-07T18:00:00.000Z\",\n \"endDateTime\": \"2023-07-12T15:00:00.000Z\",\n \"availabilityType\": \"blackout\"\n}\n{\n \"_id\": \"64b03eb094d87927a3066ddb\",\n \"startDateTime\": \"2023-07-03T18:00:00.000Z\",\n \"endDateTime\": \"2023-07-06T15:00:00.000Z\",\n \"availabilityType\": \"blackout\"\n} \n[\n {\n \"date\": \"2023-07-01\",\n \"availabilityType\": \"available\"\n },\n {\n \"date\": \"2023-07-02\",\n \"availabilityType\": \"available\"\n },\n {\n \"date\": \"2023-07-03\",\n \"availabilityType\": \"booked\" // because there was an availability entry in the collection that is marked as booked\n },\n {\n \"date\": \"2023-07-04\",\n \"availabilityType\": \"booked\" // because there was an availability entry in the collection that is marked as booked\n },\n {\n \"date\": \"2023-07-05\",\n \"availabilityType\": \"booked\" // because there was an availability entry in the collection that is marked as booked\n },\n {\n \"date\": \"2023-07-06\",\n \"availabilityType\": \"booked\" // because there was an availability entry in the collection that is marked as booked\n },\n ... etc up to July 31\n]\n", "text": "I have an “availabilities” collection. Inside this collection could be a few entries that are date based for example:I am trying to figure out the best way to create an array of dates for the month and if there is an “availability” for some of the days, then return that within the date array. I thought about creating an array of days using javascript but then I’d have to iterate for each day to see there are any “availabilities” on the day; this seems very inefficient.Ideally what I am looking to build is something similar to the following:Any suggestions on how this might be done?", "username": "JeffCi" }, { "code": "{\n \"_id\": \"64b03eb094d87927a3066ddb\",\n \"startDateTime\": \"2023-07-03T18:00:00.000Z\",\n \"endDateTime\": \"2023-07-06T15:00:00.000Z\",\n \"availabilityType\": \"blackout\"\n}\n{\n \"date\": \"2023-07-03\",\n \"availabilityType\": \"booked\" // because there was an availability entry in the collection that is marked as booked\n },\n {\n \"date\": \"2023-07-04\",\n \"availabilityType\": \"booked\" // because there was an availability entry in the collection that is marked as booked\n },\n {\n \"date\": \"2023-07-05\",\n \"availabilityType\": \"booked\" // because there was an availability entry in the collection that is marked as booked\n },\n {\n \"date\": \"2023-07-06\",\n \"availabilityType\": \"booked\" // because there was an availability entry in the collection that is marked as booked\n },\nstartDateTimeendDateTimeavailabilityTypeblackoutbooked", "text": "Hi @JeffCi and welcome to MongoDB community forums!!In order to assist you better, it would be helpful if you could clarify a few of my understanding about the sample document and the expected output shared.Firstly, for the below documentandcan you confirm if my understanding is correct in that you are wanting to create an array (the output), for example, for the month July using the startDateTime and endDateTime of each document in the availabilities collection?\nIf my understanding is right, could you also confirm, if the two values for availabilityType as blackout and booked have any inter-dependency or do they imply the same meaning?\nFinally, can you let us know the MongoDB version you are on?Regards\nAasawari", "username": "Aasawari" } ]
Creating an array of dates?
2023-07-18T01:16:53.437Z
Creating an array of dates?
240
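Pending the clarifications asked for above, one low-cost way to avoid a query per day is to fetch every availability overlapping the month in a single indexed range query and overlay it in application code. A sketch with the Node.js driver — it simply copies each matching entry's availabilityType onto the day, leaving open how "blackout" maps onto the "booked" label in the expected output:

```javascript
// Build the calendar for one month: a single query fetches every availability
// overlapping the month, then each day is matched against those ranges in memory.
async function monthAvailability(db, year, month /* 1-12 */) {
  const monthStart = new Date(Date.UTC(year, month - 1, 1));
  const monthEnd = new Date(Date.UTC(year, month, 1));

  // One range query (indexable on startDateTime/endDateTime) instead of 31 queries.
  const entries = await db.collection("availabilities").find({
    startDateTime: { $lt: monthEnd },
    endDateTime: { $gte: monthStart }
  }).toArray();

  const days = [];
  for (let d = new Date(monthStart); d < monthEnd; d.setUTCDate(d.getUTCDate() + 1)) {
    const dayEnd = new Date(d);
    dayEnd.setUTCDate(dayEnd.getUTCDate() + 1);

    // An entry covers this day if it starts before the day ends and ends on/after its start.
    const hit = entries.find(e =>
      new Date(e.startDateTime) < dayEnd && new Date(e.endDateTime) >= d);

    days.push({
      date: d.toISOString().slice(0, 10),
      availabilityType: hit ? hit.availabilityType : "available"
    });
  }
  return days;
}
```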
null
[ "aggregation", "mongodb-shell" ]
[ { "code": "MongoServerError: PlanExecutor error during aggregation :: caused by :: Out of memory\nreplication:\n oplogSizeMB: 2048\n replSetName: rs0\n\tdb.adminCommand(\n\t\t{\n\t\t\tsetParameter: 1,\n\t\t\tallowDiskUseByDefault: true\n\t\t}\n\t)\ndb.runCommand({\n\t\"aggregate\":sourceCollection,\n\t\"pipeline\":pipeline,\n\tallowDiskUse: true,\n\tcursor:{},\n});\n{\n\t_id: ObjectId(\"646942e05b40688d6a004b60\"),\n\tsite: 416,\n\tts: ISODate(\"2023-05-20T22:00:00.579Z\"),\n\tadblock: 0,\n\ts: 'g',\n\tc: 'm',\n\ta_b: 1,\n\ta_m: 2,\n\ta_t: 3,\n\tau: 14495,\n\tc_t: 'a',\n\tc_ap: 1,\n\tc_t_a: 'h',\n\tc_ai: 1420172,\n\tc_t_a_pdt: ISODate(\"2023-05-20T10:00:00.000Z\"),\n\tc_gi: 216337,\n\tc_ei: 16678,\n\tc_et: 217\n}\nMongoServerError: PlanExecutor error during aggregation :: caused by :: Out of memory\n{\n \"_id\": {\n \"$oid\": \"646942e021893395200d8872\"\n },\n \"emits\": {\n \"key\": {\n \"site\": 1,\n \"t\": {\n \"$date\": \"2023-05-20T22:00:00.000Z\"\n }\n },\n \"values\": {\n \"pi\": 1,\n \"pi_ab\": 0,\n \"pi_c_m\": 1,\n \"pi_c_m_ab\": 0,\n \"pi_c_d\": 0,\n \"pi_c_d_ab\": 0,\n \"pi_c_t\": 0,\n \"pi_c_t_ab\": 0,\n \"s_d\": 0,\n \"s_d_ab\": 0,\n \"s_d_m\": 0,\n \"s_d_d\": 0,\n [... more simple increment fields]\n \"art\": [\n {\n \"id\": 1408831,\n \"pi\": 1,\n \"pi_ab\": 0,\n \"a_t\": 3,\n \"a_b\": 1,\n \"a_s\": 0,\n \"a_m\": 2,\n \"a_l\": 0,\n \"a_o\": 0,\n \"a_v\": 0,\n \"a_p\": 0,\n \"c_a_pi\": 1,\n \"c_a_px\": 1,\n \"c_a_p1\": 1,\n \"c_h\": 0,\n \"c_l\": 0,\n \"c_a\": 1,\n \"c_g\": 0,\n \"c_f\": 0,\n \"c_b\": 0,\n \"c_c\": 0,\n \"c_s\": 0,\n \"c_o\": 0,\n \"t\": \"t\",\n \"eid\": \"278622\",\n \"etid\": \"217\",\n \"gid\": 157,\n \"auid\": [\n 3037\n ],\n \"pubdt\": {\n \"$date\": \"2022-12-06T10:00:00.000Z\"\n }\n }\n ],\n [... more similar array of object fields]\n }\n }\n}\nconst reduceFunction = function(state, newVal){\n\t/* \n\t\twe want to use the same reduce function for accumulate and merge\n\t\tthe latter does not support passing of outside variables/constants\n\t\twe therefore need to hardcode the nest configuration unfortunately\n\t\t*/\n\tconst nestKeys = ['aut','gen','ett','ent','art','lp'];\n\t\n\t/* \n\t\tthese are keys inside nested documents that contain values that must not be summed up, such as ids or dates\n\t*/\n\tconst noSumDocKeys = ['etid','eid','gid','t','auid','pubdt','lt'];\n\n\tconst newKeys = Object.keys(newVal);\n\tconst sk = newKeys.filter(key => !nestKeys.includes(key));\n\tconst nk = newKeys.filter(key => nestKeys.includes(key));\n\t// crashes even with nk = []\n\t\n\t/* \n\t\tsk -> sumKeys contain the simple values\n\t\t*/\n\tsk.forEach(key =>{\n\t\tif (typeof newVal[key] == 'number'){\n\t\t\tif (typeof state[key] === 'undefined'){\n\t\t\t\tstate[key] = newVal[key];\n\t\t\t} else {\n\t\t\t\tstate[key] += newVal[key];\n\t\t\t}\n\t\t}\n\t});\n\n\t/* \n\t\tnk -> nestKeys contain arrays of documents that need to be merged\n\t\t*/\n\tnk.forEach(key =>{\n\t\tif (typeof state[key] === 'undefined' || !Array.isArray(state[key]) || state[key].length == 0 ){\n\t\t\t/* \n\t\t\t\tif state is still empty, we can just assign \n\t\t\t\tthe whole document as the only element\n\t\t\t\t*/\n\t\t\tstate[key] = newVal[key];\n\t\t} else {\n\t\t\t/* \n\t\t\t\twe need to merge the two arrays\n\t\t\t\t*/\n\t\t\tnewVal[key].forEach(newDoc =>{\n\t\t\t\t// check if the old state already contains a document with that id\n\t\t\t\tconst docIndex = state[key].findIndex(obj => obj.id === newDoc.id);\n\t\t\t\t\n\t\t\t\tif (docIndex < 0){\n\t\t\t\t\t/* \n\t\t\t\t\t\tnew doc is not 
contained in the old state, so we simply\n\t\t\t\t\t\tadd it to the array\n\t\t\t\t\t\t*/\n\t\t\t\t\tstate[key].push(newDoc);\n\t\t\t\t} else {\n\t\t\t\t\t/* \n\t\t\t\t\t\twe iterate over the keys of the newDoc\n\t\t\t\t\t\tanything not id and not in noSumDocKeys is summed\n\t\t\t\t\t\tup into the state\n\t\t\t\t\t\t*/\n\t\t\t\t\tObject.keys(newDoc).forEach(docKey => {\n\n\t\t\t\t\t\tconst newDocVal = newDoc[docKey];\n\n\t\t\t\t\t\tif (docKey !== 'id'){\n\t\t\t\t\t\t\tif (typeof state[key][docIndex][docKey] === 'undefined'){\n\t\t\t\t\t\t\t\t// if current state doesn't contain that key, simply set it\n\t\t\t\t\t\t\t\tstate[key][docIndex][docKey] = newDocVal;\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tswitch(docKey){\n\t\t\t\t\t\t\t\t\tcase 'auid':\n\t\t\t\t\t\t\t\t\t\t// special case: merge two arrays of author ids to a unique array\n\t\t\t\t\t\t\t\t\t\tstate[key][docIndex][docKey] = [...new Set([...state[key][docIndex][docKey], ...newDocVal])];\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\t\t\t\tif (noSumDocKeys.includes(docKey)){\n\t\t\t\t\t\t\t\t\t\t\t// simply set the value\n\t\t\t\t\t\t\t\t\t\t\tstate[key][docIndex][docKey] = newDocVal;\n\t\t\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\t\t\t// increment the value\n\t\t\t\t\t\t\t\t\t\t\tstate[key][docIndex][docKey] += newDocVal;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t});\n\n\t\t\t\t}\n\t\t\t});\n\n\t\t}\n\t});\n\n\treturn state;\n\n};\n{\n \"_id\": {\n \"site\": 1,\n \"t\": {\n \"$date\": \"2023-05-20T22:00:00.000Z\"\n }\n },\n \"value\": {\n \"pi\": 11050,\n \"pi_ab\": 460,\n \"pi_c_m\": 9738,\n \"pi_c_m_ab\": 110,\n \"pi_c_d\": 1181,\n \"pi_c_d_ab\": 348,\n \"pi_c_t\": 131,\n \"pi_c_t_ab\": 2,\n \"s_d\": 1200,\n \"s_d_ab\": 98,\n \"s_d_m\": 733,\n \"s_d_d\": 457,\n\t[... more simple increment fields]\n \"aut\": [],\n \"gen\": [],\n \"ett\": [],\n \"ent\": [],\n \"art\": [\n {\n \"id\": 1419484,\n \"pi\": 196,\n \"pi_ab\": 14,\n \"a_t\": 617,\n \"a_b\": 182,\n \"a_s\": 13,\n \"a_m\": 383,\n \"a_l\": 13,\n \"a_o\": 26,\n \"a_v\": 0,\n \"a_p\": 0,\n \"c_a_pi\": 196,\n \"c_a_px\": 195,\n \"c_a_p1\": 55,\n \"c_h\": 0,\n \"c_l\": 0,\n \"c_a\": 182,\n \"c_g\": 14,\n \"c_f\": 0,\n \"c_b\": 0,\n \"c_c\": 0,\n \"c_s\": 0,\n \"c_o\": 0,\n \"t\": \"h\",\n \"eid\": \"274783\",\n \"etid\": \"5115\",\n \"gid\": 142,\n \"auid\": [\n 3037\n ],\n \"pubdt\": {\n \"$date\": \"2023-05-12T11:27:00.000Z\"\n }\n },\n {\n \"id\": 1420177,\n \"pi\": 589,\n \"pi_ab\": 3,\n \"a_t\": 1828,\n \"a_b\": 586,\n \"a_s\": 14,\n \"a_m\": 1186,\n \"a_l\": 14,\n \"a_o\": 28,\n \"a_v\": 0,\n \"a_p\": 0,\n \"c_a_pi\": 589,\n \"c_a_px\": 581,\n \"c_a_p1\": 579,\n \"c_h\": 0,\n \"c_l\": 0,\n \"c_a\": 587,\n \"c_g\": 2,\n \"c_f\": 0,\n \"c_b\": 0,\n \"c_c\": 0,\n \"c_s\": 0,\n \"c_o\": 0,\n \"t\": \"n\",\n \"eid\": \"25101\",\n \"etid\": \"821\",\n \"auid\": [\n 16722\n ],\n \"pubdt\": {\n \"$date\": \"2023-05-20T08:52:00.000Z\"\n }\n },\n\t [... 
more article documents]\n ],\n \"lp\": []\n }\n}\nconst initFunction = function(sk, nk){\n\tconst initState = {};\n\n\t// sumKeys\n\tsk.forEach(key =>{\n\t\tinitState[key] = 0;\n\t});\n\t// nestKeys\n\tnk.forEach(key =>{\n\t\tinitState[key] = [];\n\t});\n\t\n\treturn initState;\n};\npipeline.push( { '$group': {\n\t_id: \"$emits.key\",\n\tvalue: {\n\t\t$accumulator: {\n\t\t\tinit: initFunction,\n\t\t\tinitArgs: [ sumKeys, nestKeys ],\n\t\t\taccumulate: reduceFunction,\n\t\t\taccumulateArgs: [ \"$emits.values\" ],\n\t\t\tmerge: reduceFunction,\n\t\t\tlang: \"js\"\n\t\t}\n\t}\n}});\n", "text": "I am currently working on a rewrite of MapReduce operations to the aggregation pipeline. The task requires that the result structure should not change at all, as there is quite a lot of code relying on the data structure and it would not be practical to rewrite all of that in a sensible amount of time. My issue with the new job after rewrite to the aggregation pipeline is, that it’s crashing when too many incoming documents are being processed in one run. The error message isI don’t quite understand what exactly causes this error, neither how to avoid it. I am very sorry for the length of the description, I have already tried to trim down the code examples as much as I can. Here are the details:The old MapReduce jobs are running fine in production and have done so for years now, though at the moment we’re still on Mongo 3.6; I am running development on MongoDB 6.0.8 however, using Mongosh 1.10.1. The development server has 32GB of RAM and doesn’t do anything but run that one aggregation. The production server (the one still running the original MapReduce job) had 32GB of RAM as well until a fairly recent upgrade, so I know the MR-aggregation can cope with that amount of RAM in the ancient Mongo 3.6.The jobs are run using mongosh directly on the server.I haven’t touched the server config much, server is the only member of the replication set:I runimmediately before the job and the aggregation command itself is executed with explicit allowDiskUse set to true:The source collection is holding information on page impressions; one such impression document may look like this:To give some context: This is an impression for site id 416 with a Google-referrer (s:g) from a mobile devive (c:m) with no adblocker active for a content of type article (c_t: a) on an article with the id 1420172, written by an author with the id 14495 and published on c_t_a_pdt; article type was ‘h’, the topic had the id 16678 and a genre-id of 216337 and a topic type of 217; this article would potentially yield one banner ad impression, two medium-rectangle ad impressions, three ad impressions in total.There are other impression documents which are a bit less complex for non-articles (home page, index pages, forum etc.); there usually are a few thousand up to some 100k such documents per hour; the resulting aggregated collection holds multi-dimensional breakdowns inside one document per hour and site (example below), so next to total impressions and ad impressions there are fields for totals by device, adblocked totals by device, totals by source and content-type, totals by source and article type etc; in addition, each such document holds a couple of arrays containing aggregations for individual articles, topic-ids, topic-type-ids, genre-ids, authors and indexpage-ids; each of these arrays usually holds a few hundred up to a few thousand such documents per hour.The aggregation pipeline consists of a match-stage, limiting the time 
span, a project stage, which maps the simple impression documents to the target aggregation form, and a group stage, which processes the mapped documents, simply incrementing most keys, only the nested documents are upserted into their array fields. This group stage is using a custom $accumulator, as I need to deal with the nested arrays and I need to update the current hour with incoming data every few minutes using a subsequent merge phase. The accumulator function is assembled dynamically from a configuration of fields, and while the resulting function is long, it is far from complicated.While the new aggregation job is running fine as long as only some one or two thousand documents are processed, I get an error from the group stage if it is fed with more than about 10,000 documents. The error, as stated above, isIf I simply “$out” the project stage to a test collection without grouping it, everything is working fine. If I put the out-stage after the group however, I get the error, so it’s not in the merge stage.A projected document may look like this (abbreviated, the full document is about 500 lines long):The reduce-function for the $accumulator looks like this; it makes no difference if I omit the processing of the nested keys:A result document may look like this - I have kept only two nested document in one of the structures, there may be well over 1,000 sub-documents in the “art” array in particular; the total size of one such result document is a little over 1,000 kilobytes.… and this is the init-function:The group-stage looks like this:Anything that may shed some light onto the issue would help tremendously. I’d have a somewhat bad feeling if the aggregation should some day crash because we may have collected a couple too many impressions in a few minutes. It would also be helpful if the aggregation could simply catch up if it hadn’t been running for a few hours. So far, I have implemented a hard limit of processing just five minutes worth of data at a time. This workaround seems far from elegant though.Kind regardsMarkus", "username": "Markus_Wollny" }, { "code": "$group$project$$REMOVE$groupallowDiskUse: true$group$group$groupallowDiskUsetrueallowDiskUse", "text": "I have managed some optimization towards increasing the number of documents processed in the $group stage without a crash. Before, I couldn’t get it to process even 5,000 documents at a time, now I can manage about 23,000 documents without a crash (though it still crashes somewhere between 25,000 and 30,000 documents).All I did was replace the setting of a 0 in the $project stage with $$REMOVE, so only fields with a non-zero value would end up in the projection, so I end up with sparse documents streaming into the $group stage, as in the original code most fields would have been assigned a value of 0.What puzzles me though: I was under the impression that setting allowDiskUse: true would in fact write a temporary result and flush the documents already processed, if memory of the $group ran too close to the limit, and then continue processing the next batch, merging the next result with the temporary on disk until all incoming documents where processed.This seems to not be the case. 
If I feed too many documents in the $group stage, it just bombs and there seems to be no surefire way to avoid that, neither some server setting that would allow assigning more memory to certain aggregations nor any way to force the $group to actually use the disk if allowDiskUse is true.I didn’t use to see this issue with mapReduce in MongoDB 3.6, so having to worry about aggregations failing is definitely not an improvement in my book. Sure, mapReduce took a long time, but I could count on it not failing, even when processing a much larger number of documents, as long as allowDiskUse was enabled.Could somebody please enlighten me about what’s going wrong here?", "username": "Markus_Wollny" } ]
PlanExecutor Out of memory in aggregation group-stage with $accumulator
2023-07-17T10:09:47.715Z
PlanExecutor Out of memory in aggregation group-stage with $accumulator
593
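A minimal sketch of the sparse-projection idea discussed in this thread, assuming hypothetical field names (`d`, `s`, `p_v`) rather than the poster's real mapping: resolving zero-valued counters to `$$REMOVE` in the `$project` stage keeps the documents fed into `$group` small, and native accumulators generally give the group stage more room than a JavaScript `$accumulator`.

```js
// Sketch only — field names are placeholders; $dateTrunc needs MongoDB 5.0+.
db.impressions.aggregate(
  [
    { $match: { d: { $gte: ISODate("2023-07-17T10:00:00Z"),
                     $lt:  ISODate("2023-07-17T11:00:00Z") } } },
    // Emit a counter only when it is non-zero; $$REMOVE keeps the projected docs sparse.
    { $project: {
        hour: { $dateTrunc: { date: "$d", unit: "hour" } },
        site: "$s",
        ad_imp: { $cond: [ { $gt: [ "$p_v", 0 ] }, "$p_v", "$$REMOVE" ] }
    } },
    { $group: {
        _id: { hour: "$hour", site: "$site" },
        imp: { $sum: 1 },                                // total impressions
        ad_imp: { $sum: { $ifNull: [ "$ad_imp", 0 ] } }  // absent fields count as 0
    } }
  ],
  { allowDiskUse: true }
)
```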
null
[ "aggregation", "atlas-search", "realm-web", "react-js" ]
[ { "code": "Error retrieving facets: Error: Request failed (POST https://us-west-2.aws.realm.mongodb.com/api/client/v2.0/app/xxxxx/functions/call): Location40324 Unrecognized pipeline stage name: ‘$searchMeta’ (status 500) at MongoDBRealmError.fromRequestAndResponse. \n const collection = await connectToMongoDB(connetionData);\n const pipeline = [];\n pipeline.push({\n $searchMeta: {\n index: “hc-appointments”,\n facet: {\n facets: {\n stringFacet: {\n type: “string”,\n path: patient.display\n }\n }\n }\n }\n });\n const facets = await collection.aggregate(pipeline);\n", "text": "Hello, I’m trying to use $searchMeta in React using Realm, and it produces this error:However, I do the same for other $match, $search, and other stages and it works properly.This is the code:Does anyone know what am I doing wrong?\nThanks a lot", "username": "Paco_Mateu" }, { "code": "Unrecognized pipeline stage name: ‘$searchMeta’ (status 500) at MongoDBRealmError.fromRequestAndResponse.$searchMeta$searchMeta", "text": "Hey @Paco_Mateu,Welcome to the MongoDB Community!Unrecognized pipeline stage name: ‘$searchMeta’ (status 500) at MongoDBRealmError.fromRequestAndResponse.In the current implementation, the $searchMeta aggregation pipeline stage is only available for collections hosted on MongoDB Atlas cluster tiers running MongoDB version 4.4.9 or later. To learn more, please refer to the Atlas Search documentation.Hello, I’m trying to use $searchMeta in React using Realm, and it produces this error: Error retrieving facets:Based on the details, it seems that you are trying to implement the $searchMeta aggregation pipeline in your React codebase while using MongoDB Realm as a mobile database.May I ask what you are trying to achieve with this implementation? Could you please provide more context about your use case? This will help us assist you more effectively.Look forward to hearing from you.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Unrecognized pipeline stage name: ‘$searchMeta’
2023-07-17T19:18:35.252Z
Unrecognized pipeline stage name: ‘$searchMeta’
594
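For reference, a minimal shape of the facet query from this thread, runnable directly in mongosh against an Atlas cluster on 4.4.9 or later (the collection name is a placeholder, and `patient.display` must be indexed as a `stringFacet` field in the `hc-appointments` index):

```js
db.appointments.aggregate([
  {
    $searchMeta: {
      index: "hc-appointments",
      facet: {
        facets: {
          patientFacet: { type: "string", path: "patient.display", numBuckets: 10 }
        }
      }
    }
  }
])
```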
null
[ "aggregation", "atlas-search" ]
[ { "code": "$search$match$search$incompoundmustequalsshouldequalsminimumShouldMatch: 1 { '$match':\n { '$and':\n [ { customer:\n { '$in': [ 5f3d6c51f5b2ed74fe93dd60, 5c221b4e4d17f1734806b4a5 ] } },\n { carrier: { '$in': [ 'carrier1', 'carrier2', 'carrier3' ] } },\n { status: { '$in': [ 'late', 'idle', 'delivered-late' ] } },\n { createDate: { '$gte': 2023-06-01T04:00:00.000Z } },\n { createDate: { '$lte': 2023-07-01T03:59:59.999Z } } ] } \n}\n", "text": "During a recent training, I learned about the benefits of using the $searchoperator.\nI would like to modify my $match query to use $search instead.\nHowever, I couldn’t find an equivalent for the $in operator.When the fields need to match a single ObjectId, there is no problem, I can use the compound operator with must + equals and it works.The problem arises when I have multiple ObjectId values.\nAgain, if it’s only for one field, I could use should + equals while specifying minimumShouldMatch: 1.But I’m not sure how to handle it if I have multiple fields with multiple ObjectId values…\nHow should I approach transforming a query like this using the $search operator?", "username": "MLR" }, { "code": "$search", "text": "Hi @MLR,Do you have some sample document(s) you could provide and advise which ones you expect being returned?I could do some testing based off those but i’m not entirely sure off the top of my head if there is an equivalent to this in $search but will see what I can find out. Just want to see some sample documents + expected output to verify if it’s possible.Regards,\nJason", "username": "Jason_Tran" } ]
Use $search instead of $match when using $in conditions
2023-07-06T19:13:47.572Z
Use $search instead of $match when using $in conditions
576
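One possible equivalent, sketched under the assumption that the Atlas Search `in` and `range` operators are available on the cluster and that `customer`, `carrier`, `status` and `createDate` are mapped in the index (the collection and index names are placeholders):

```js
db.shipments.aggregate([
  {
    $search: {
      index: "default",
      compound: {
        filter: [
          { in: { path: "customer",
                  value: [ ObjectId("5f3d6c51f5b2ed74fe93dd60"),
                           ObjectId("5c221b4e4d17f1734806b4a5") ] } },
          { in: { path: "carrier", value: [ "carrier1", "carrier2", "carrier3" ] } },
          { in: { path: "status", value: [ "late", "idle", "delivered-late" ] } },
          { range: { path: "createDate",
                     gte: ISODate("2023-06-01T04:00:00Z"),
                     lte: ISODate("2023-07-01T03:59:59.999Z") } }
        ]
      }
    }
  }
])
```

Because all clauses sit in `filter`, they constrain the result set without affecting relevance scores, which mirrors the behaviour of the original `$match`.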
null
[ "dot-net", "atlas-search" ]
[ { "code": "", "text": "Hello,Am I correct in assuming that it is not possible yet to use the (relatively) newly released “scoreDetails” feature with the C# driver?And secondly when it will be available, I assume that setting the bool would work the same as the implemented “returnStoredSource” feature and the projectionBuilder would get a “metaSearchScoreDetails” ?Thanks in advance,Samir", "username": "sboulema" }, { "code": "static void Main(string[] args)\n {\n var uri = \"<REDACTED>\";\n var mongoURL = new MongoUrl(uri);\n var client = new MongoClient(mongoURL);\n var database = client.GetDatabase(\"lamp\"); \n var collection = database.GetCollection<BsonDocument>(\"collection\");\n\n \tvar pipeline = new BsonDocument[]\n {\n new BsonDocument(\"$search\", \n new BsonDocument\n {\n { \"index\", \"default\" }, \n { \"autocomplete\", \n new BsonDocument\n {\n { \"query\", \"lamp\" }, \n { \"path\", \"name\" }\n } }, \n { \"scoreDetails\", true }\n }),\n new BsonDocument(\"$project\", \n new BsonDocument\n {\n { \"_id\", 0 }, \n { \"name\", 1 }, \n { \"scoreDetails\", \n new BsonDocument(\"$meta\", \"searchScoreDetails\") }\n })\n };\n \n \tvar result = collection.Aggregate<BsonDocument>(pipeline).ToList();\n\n foreach (var document in result)\n {\n Console.WriteLine(document);\n }\n\n Console.WriteLine(\"Finished!\");\n }\nscoreDetails{ \"name\" : \"floor lamp\", \"scoreDetails\" : { \"value\" : 2.0917689800262451, \"description\" : \"sum of:\", \"details\" : [{ \"value\" : 1.0077003240585327, \"description\" : \"$type:autocomplete/name:lamp [BM25Similarity], result of:\" ...\n\"returnStoredSource\"", "text": "Hi @sboulema - Welcome to the community Am I correct in assuming that it is not possible yet to use the (relatively) newly released “scoreDetails” feature with the C# driver?I have the following code snippet which was run in my test environment (using C# driver version 2.19.0):Which provides the following output (cut short for brevity but you can see the scoreDetails field in the output):And secondly when it will be available, I assume that setting the bool would work the same as the implemented “returnStoredSource” feature and the projectionBuilder would get a “metaSearchScoreDetails” ?Hoping my above example helps but regarding this question, do you have an example of how you’re performing the search with \"returnStoredSource\" so I can assist better here.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": " var results = await _carsCollection\n .Aggregate()\n .Search(\n indexName: \"searchFacets\",\n searchDefinition: Builders<CarModel>.Search\n .Compound()\n .Filter(...)\n .MustNot(...)\n .Must(...),\n returnStoredSource: true\n )\n", "text": "Thanks for the reply!My current code looks like this:I am using the AggregateFluent interface and that has a nice parameter to easily give the returnStoredSource boolean. I was hoping for a similar way of specifying the scoreDetails boolean.Found this related ticket: MongoDB Jira. Which gives a way to do the projection part.", "username": "sboulema" }, { "code": "", "text": "Created a Jira ticket for this: https://jira.mongodb.org/browse/CSHARP-4703I am currently working a PR that would implement this ", "username": "sboulema" }, { "code": "", "text": "Added PR! 
CSHARP-4703: Initial implementation by sboulema · Pull Request #1126 · mongodb/mongo-csharp-driver · GitHub", "username": "sboulema" }, { "code": "", "text": "Thanks Samir - I believe the appropriate drivers team will need to check into the PR / ticket raised. I would recommend following the tickets in the mean time for any updates.Cheers,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "PR has been merged, so this should be available in an upcoming release ", "username": "sboulema" }, { "code": "", "text": "Thanks Samir! I’ll mark your reply with the CSHARP ticket link as the solution so that it can be monitored Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
C# driver and scoreDetails
2023-06-09T09:34:13.341Z
C# driver and scoreDetails
922
null
[]
[ { "code": "", "text": "Lately I’ve noticed the order of my documents in collections being changed automatically. I haven’t been able to spot any pattern in it - they change seemingly randomly and then eventually revert back to normal. These are documents that aren’t even being modified.Any idea what’s causing this behaviour? Thanks!", "username": "TemeS" }, { "code": "sort_id", "text": "Hi @TemeS,Why is it a problem if you don’t expect them to be in a certain order?\nIn theory, docs are supposed to stay in the same natural order if the collection is read only.\nI’m not a pro of the WiredTiger internals but I don’t see a reason why the engine would reorder docs randomly without any write operation.If you expect the docs in a non-random order, I would advice to create an index and add a sort to your query to “sort” this problem. If you just always want them in the same order and you don’t care which one it is, I would sort on the _id field (and the index already exists so it’s free).Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hey,thanks for the reply! The order they were originally implemented in is the order that they’re meant to be presented in the app. But an even bigger problem is that as my server and clients both read from those same documents, them getting “randomly” rearranged means that the clients and server will have different views on the data.The app in question is in closed development so nothings urgently on the line here but I’m quite confused by the behaviour. There’s definitely no write actions happening on those documents.I could just sort the data when querying it but it’s strange that the initial order of documents isn’t preserved.", "username": "TemeS" }, { "code": "", "text": "I could just sort the data when queryingIt is the only way to present data in a consistent order.it’s strange that the initial order of documents isn’t preservedIt is not strange for some. For example, I can easily imagine that documents already in the server cache are sent to the client first while the server reads other documents from disk to prepare the next batch. Otherwise, you flush the current documents from the cache to read the documents of the consistent order, and then you reread the documents that were already in the cache because they are next in the consistent order.If you do not sort, the server assume rightfully that you do not care about the order, and make sure it does the least work possible to handle your request so that it has more cpu cycle to handle requests where order is important and specified.", "username": "steevej" }, { "code": "", "text": "my server and clients both read from those same documentsWhat do you mean by “document order”? Do you mean that the documents are returned from collection in different order or do you perhaps mean that inside the document the fields are in different order than they were in when you created the document?Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "I think I have the exact issue today when the documents within my collection rearranged themselves to a different order for some reason. It would be good to see what caused this, as for me, this is the first that I see documents rearrange themselves.At first, I was suspecting that there is a security issue, where a hacker was rearranging the order. =X", "username": "Michael_Xie1" }, { "code": "", "text": "Documents have no specific order. If you want to see the documents in a specific order you need to sort them.", "username": "steevej" } ]
Documents being rearranged on their own?
2022-04-20T06:39:14.209Z
Documents being rearranged on their own?
3,281
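A short illustration of the advice above (collection and field names are hypothetical):

```js
// Pin a deterministic order instead of relying on natural order; _id is always indexed.
db.items.find().sort({ _id: 1 })

// Or, if display order is part of the data model, store and index it explicitly:
db.items.createIndex({ displayOrder: 1 })
db.items.find().sort({ displayOrder: 1 })
```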
https://www.mongodb.com/…_2_1024x576.jpeg
[]
[ { "code": "", "text": "\nPXL_20230717_1757046731920×1080 111 KB\nCan someone please help .\nI am trying to connect using the connection string\nBut tha data base name or test in not written in the string", "username": "Mitali_Bhattad" }, { "code": "mongoshtest", "text": "Hi @Mitali_Bhattad,Have you tried connecting with that string? It should work. From my mongosh testing without specifying a database in the connection string it connects to the test database by default as of the time of this message.Let us know if you have any errors connecting and please be sure to include all relevant information regarding your connection attempt(s).Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Json_TranI have tried connecting with this string\n\nPXL_20230718_0715472811920×1080 98.2 KB\nBut then my database is not visible", "username": "Mitali_Bhattad" }, { "code": "atlasAdmin", "text": "What is the role of the database user connecting? I would try with test user with atlasAdmin to ensure this isn’t related to any permissions.Regards,\nJason", "username": "Jason_Tran" } ]
/test is not written in the connection string
2023-07-17T17:59:14.907Z
/test is not written in the connection string
373
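For reference, a quick way to check this from mongosh (host, user and database names below are placeholders):

```js
// Appending a database name after the host sets the default database for the session:
//   mongosh "mongodb+srv://cluster0.example.mongodb.net/myDatabase" --username appUser

// Once connected, confirm where you are and what this user is allowed to see:
db.getName()   // prints the current database ("test" if none was given in the string)
show dbs       // databases visible to this user; an under-privileged user may see fewer
```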
null
[ "flutter" ]
[ { "code": "", "text": "Hello, I’m new to using Realm and I’m creating an app that will be used by multiple clients. In terms of the database, I’ve seen that there are several strategies, such as:Single database with schema: In this strategy, a field called tenant_id is added, but I would like each user to only be able to view and synchronize the information of the client they belong to.Database per tenant: In this approach, I believe I should have a Flexible Sync for each client, and I would need a mechanism to determine which client each user belongs to.Has anyone had any experience with these strategies? Are there any limitations in terms of service for either strategy? Is there any other strategy that would allow me to achieve this?", "username": "Fabian_Eduardo_Diaz_Lizcano" }, { "code": "", "text": "Hi @Fabian_Eduardo_Diaz_Lizcano!\nUsing Atlas MongoDB gives you a huge flexibility to design your app schema on a way it will be convenient for your goals. Before to have a decision you may take in consideration the following questions:In case you will need some cross client data access and the schema is not so complex I would recommend you to use the same app service and configuring read/write rules per client.But if you are going to have more than two-three collections per client configuring the rules may appear to be complex. If you don’t need any cross client data access then it is better to create app services per client/company. In this case you may need another app service which can allow anonymous access, where the user name could be associated to the company and it’s app service id. These mappings could be updated during the sign up process, where the users select the company name that they belong to. Once you get the app service id, you will be able to login to a dedicated app service for a client.Feel free to write if I’ve missed something or please share if it helped to you finding the better approach.", "username": "Desislava_St_Stefanova" }, { "code": "", "text": "Hi @Desislava_St_Stefanova,Thank you for all of your questions, as they help me determine which option to use. In response to some of your questions:Thank you very much for your help.", "username": "Fabian_Eduardo_Diaz_Lizcano" }, { "code": "", "text": "", "username": "henna.s" } ]
Multitenant App In Flutter
2023-07-18T02:07:55.833Z
Multitenant App In Flutter
602
null
[]
[ { "code": "", "text": "I am encountering an issue while attempting to delete a daily query limit in MongoDB Data Federation. I have followed the provided link to access the data federation settings page: (Cloud: MongoDB Cloud). However, I am receiving an error during the process.I would appreciate any assistance or guidance regarding this matter. I have verified my permissions and ensured that I am logged in with the appropriate account credentials. The error message is not clear, and I am unable to proceed with deleting the daily query limit.If anyone has encountered a similar issue or has knowledge of resolving this problem, please kindly provide your insights and suggestions. I would greatly appreciate your help in optimizing the process of deleting the daily query limit in MongoDB Data Federation.Thank you in advance for your support.", "username": "Vishnu_Pathak" }, { "code": "", "text": "Hi @Vishnu_Pathak,I have followed the provided link to access the data federation settings page:Can you provide this to the Atlas in-app chat support team? They will have further insight into your Atlas account and may be able to assist from there.Regards,\nJason", "username": "Jason_Tran" } ]
Failed to delete Project level per day limit, please try again later
2023-07-18T12:22:26.561Z
Failed to delete Project level per day limit, please try again later
340
null
[ "aggregation", "queries" ]
[ { "code": "db.collection.find({\n feature: { $in: [ \"buy\", \"sell\", \"ship\", \"contact\" ] },\n date_time: { $gt: new ISODate(~~) },\n _id: { $gt: ObjectId(~~) }\n})\n.sort({ date_time: 1 })\n.limit(limit size)\nlimit_sizelimit_size", "text": "Hello there.I’m using MongoDB 4.4 version and i have some questions with executionStats result.QueryIndexStages\nIXSCAN (multiple stages) → SORT_MERGE → FETCH (nReturned limit_size) → LIMIT (nReturned limit_size)\n(not real plan)First - I expect my query scans always same number of index keys but with limits, there are difference with scanned index keys in IXSCAN stage. Are limit aggregation affects to ixscan stage?Second - In executionStats result, FETCH stage filters _id field so i expect filtering works in this stage. But when i change comparision value of _id field in query, IXSCAN and SORT_MERGE stages has different nReturned value with previous _id value.\nIt doesn’t occurs when i don’t use limit operation. Why?Third - FETCH nReturned and LIMIT nReturned is same. I expected that FETCH returns more that limit_size and LIMIT returns exactly limit_size docs but it seems that it’s already applied in FETCH stage.\nDid i missed something about how index works?Thanks for reading.", "username": "DongYoung_Lee" }, { "code": "> db.test.find()\n[\n { _id: 0, a: 0 },\n { _id: 1, a: 0 },\n { _id: 2, a: 1 },\n { _id: 3, a: 1 },\n { _id: 4, a: 2 },\n { _id: 5, a: 2 }\n]\n$group$limit> db.test.aggregate([ {$group:{_id:'$a', sum:{$sum:1}}}, {$limit:2} ])\n[ { _id: 0, sum: 2 }, { _id: 1, sum: 2 } ]\n$limit$group> db.test.aggregate([ {$limit:2}, {$group:{_id:'$a', sum:{$sum:1}}} ])\n[ { _id: 0, sum: 2 } ]\n$group$groupnReturnedlimit_sizelimit_size", "text": "Hi @DongYoung_Lee welcome to the community!First - I expect my query scans always same number of index keys but with limits, there are difference with scanned index keys in IXSCAN stage. Are limit aggregation affects to ixscan stage?As I understand it, you observe that the limit was enforced in the last part of the explain output, instead of the early part (during the IXSCAN phase). Is this correct?If yes, then this is expected. When you execute a query with limit, it process the whole result set first, then limit the returned output. It doesn’t limit it at the start, since they will return a different result set.For example, if I have this collection:Illustrating using aggregation, if I set $group → $limit:Now if I do $limit → $group:The results are not the same. This is because in the first case, it process the required data (using $group), then limit the output to only 2 documents. In the second case, it limits the input to the $group stage.Second - In executionStats result, FETCH stage filters _id field so i expect filtering works in this stage. But when i change comparision value of _id field in query, IXSCAN and SORT_MERGE stages has different nReturned value with previous _id value.\nIt doesn’t occurs when i don’t use limit operation. Why?Could you post the the different explain plain output you observed here? I suspect the different nReturned value is due to the actual documents in the collection.Third - FETCH nReturned and LIMIT nReturned is same. I expected that FETCH returns more that limit_size and LIMIT returns exactly limit_size docs but it seems that it’s already applied in FETCH stage.I believe this is deliberate. The whole query including the limit is visible to the query engine. If the server FETCH more data and throw them away due to the presence of LIMIT, it does wasted work. 
It’ll be much better to FETCH what’s only required, especially since FETCH is frequently an expensive process.If you need more information, could you post some example data, and the explain output of the queries you’re trying to do as well?Best regards\nKevin", "username": "kevinadi" }, { "code": "nReturned", "text": "Hi @kevinadi . Thanks for your replying.In short, i totally misunderstood how mongo works with index and meaning of nReturned explain results.Answering to myself,\nFirst and Third\nWhole stages does work with documents one by one so, ‘limit’ makes not to do wasted work as you said.Second\n@kevinadi was right. It was because of the actual documents’ distribution and misusage of the query not fits with indexes.Thank you ", "username": "DongYoung_Lee" } ]
IXSCAN with LIMIT returns unexpected result
2023-05-24T07:58:57.914Z
IXSCAN with LIMIT returns unexpected result
568
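For anyone reproducing this, the stage-level counters discussed above come from an executionStats explain of the same query (the filter values here are placeholders):

```js
db.collection.find({
  feature: { $in: [ "buy", "sell", "ship", "contact" ] },
  date_time: { $gt: ISODate("2023-05-01T00:00:00Z") },
  _id: { $gt: ObjectId("645000000000000000000000") }
})
.sort({ date_time: 1 })
.limit(100)
.explain("executionStats")
// Inspect executionStats.executionStages: IXSCAN -> SORT_MERGE -> FETCH -> LIMIT,
// comparing keysExamined and nReturned per stage.
```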
null
[ "sharding", "kubernetes-operator" ]
[ { "code": "", "text": "Hello everyone, i have a problem. I used MongoDB Enterprise Kubernetes Operator to deploy mongodb Sharded Cluster. So, can i used In Memory Storage Engine with MongoDB Enterprise Kubernetes Operator?\nIf can, how to used it, please give me a example or link to documentation to refer.\nThank you so much!", "username": "Nguy_n_Xuan_D_ng" }, { "code": "storage.engine: inMemorykubectl edit mdbspec:\n additionalMongodConfig:\n storage:\n engine: inMemory\n", "text": "Hi @Nguy_n_Xuan_D_ngIn the spec file under additionalMongodConfig you can add storage.engine: inMemory or you can change this in Ops Manager.kubectl edit mdb", "username": "chris" }, { "code": "", "text": "Thank you so much! I will try this.", "username": "Nguy_n_Xuan_D_ng" }, { "code": "kubectl edit mdbspec:\n additionalMongodConfig:\n storage:\n engine: inMemory\n", "text": "Hi @chris\nWhen i update storage engine in ops manager, I got this error: “Field storage.engine in deployment args2_6 can only be modified by the mongodb-enterprise-operator, Version: 1.20.1”.\n\nScreenshot from 2023-07-18 08-56-451920×1080 67.1 KB\nSo i’am using kubectl edit mdb to change storage engine:Unfortunately, i am facing this error:\n\nScreenshot from 2023-07-18 08-59-391920×1080 97.6 KB\n\nScreenshot from 2023-07-18 09-05-481920×1080 78.6 KB\nPlease tell me where I’m wrong.\nThank you so much!", "username": "Nguy_n_Xuan_D_ng" }, { "code": "", "text": "I want to explain more detail about my problem. I deploy mongodb Sharded Cluster with MongoDB Enterprise Kubernetes Operator. I have 2 shard with 3 mongodsPerShardCount. I want in 3 mongodsPerShardCount have 1 memory (primary) and 2 wiredtiger (secondary) storage engine. So i deploy all in one storage engine (inmemory) and then change manually in ops manager. But I’am facing above issue.\nCan you give me a solution to solve this problem.\nThank you so much!", "username": "Nguy_n_Xuan_D_ng" }, { "code": "", "text": "One of the errors is due to the permissions. It seems you may have selected /data as the dbpath rather than a directory under it.The second error could have a few causes and full output from the agent would be needed to diagnose more.I would suggest you open a support ticket with MongoDB for them to assist you further.", "username": "chris" } ]
How can I use the In-Memory Storage Engine with the MongoDB Enterprise Kubernetes Operator?
2023-07-14T10:03:38.568Z
How can I use the In-Memory Storage Engine with the MongoDB Enterprise Kubernetes Operator?
626
null
[ "aggregation" ]
[ { "code": "db.getCollection(\"test\").aggregate( [\n { $planCacheStats: { } }\n] )\n[\n{\n \"queryHash\" : \"60879Z27\",\n \"planCacheKey\" : \"E72AZZA6\",\n \"isActive\" : true,\n \"works\" : 12,\n \"timeOfCreation\" : ISODate(\"2023-04-21T15:27:07.473+0000\"),\n \"indexFilterSet\" : false,\n \"estimatedSizeBytes\" : 70964\n}\n{\n \"queryHash\" : \"F70XYDE4\",\n \"planCacheKey\" : \"C98ZZ425\",\n \"isActive\" : true,\n \"works\" : 2,\n \"timeOfCreation\" : ISODate(\"2023-04-21T14:32:39.608+0000\"),\n \"indexFilterSet\" : false,\n \"estimatedSizeBytes\" : 60312\n}\n{\n \"queryHash\" : \"0XY01C75\",\n \"planCacheKey\" : \"7232ZZ56\",\n \"isActive\" : false,\n \"works\" : 14,\n \"timeOfCreation\" : ISODate(\"2023-05-17T07:13:12.752+0000\"),\n \"indexFilterSet\" : false,\n \"estimatedSizeBytes\" : 61997\n}\n...\n]\n", "text": "Hi all,\nI am trying to analyze the query execution plan to drill down into the use of some indexes but using the $planCacheStats command I am not getting all the information I would expect.I am using a server with MongoDb 4.2 and running the following commandwith the database root user, I get this information:The values createdFromQuery, cachedPlan, … are not returned as shown in documentation https://www.mongodb.com/docs/manual/reference/operator/aggregation/planCacheStats/#outputDoes anyone know the reason for this behavior?Thank you in advanceA", "username": "Andrea_Cervellin" }, { "code": "db.serverStatus().metrics.query.planCacheTotalSizeEstimateBytes\n", "text": "Hi @Andrea_Cervellin - great question. My name is Chris, I’m a Senior Product Manager here at MongoDB and I think I can help out.The additional fields that you mention can sometimes get pretty verbose. Therefore the database will stop capturing them after a certain point. You can check on the memory usage of the plan cache by using the following command in the shell:While you can increase the memory limit for the plan cache in order to capture more information, that wouldn’t necessarily be the first action I would advise taking. Instead I’d like to find out what your purpose for inspecting the plan cache is. Are you trying to diagnose a performance issue? Any additional context that you can provide may allow us to give some recommendations on how to achieve the desired outcome.Best,\nChris", "username": "Christopher_Harris" } ]
$planCacheStats does not return all the information expected
2023-05-17T07:17:42.300Z
$planCacheStats does not return all the information expected
791
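As a small follow-up, the plan cache output can be narrowed to a single query shape, and the cache's current memory footprint checked, with standard commands (collection and hash values taken from the thread):

```js
// Keep only the entry for one query shape and project the summary fields.
db.getCollection("test").aggregate([
  { $planCacheStats: {} },
  { $match: { queryHash: "60879Z27" } },
  { $project: { queryHash: 1, planCacheKey: 1, isActive: 1, works: 1, timeOfCreation: 1 } }
])

// Estimated memory currently used by the plan cache (per mongod):
db.serverStatus().metrics.query.planCacheTotalSizeEstimateBytes
```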
https://www.mongodb.com/…4_2_1024x512.png
[ "sharding" ]
[ { "code": "", "text": "Hello,I’m going to set up MongoDB servers: maybe a couple of replica sets or a sharded cluster, I don’t have the final decision yet, I’m reading docs at the moment.I’m concerned with security and want to know which authentication is the most reliable and which one is the most frequently used? I don’t have any special requirements so I just want to follow trends I’m talking about this:\n(SCRAM/KERBEROS/LDAP/X.509)Just curious who uses what and why, what is your experience, recommendations, thoughts Any thoughts will be appreciated!", "username": "Petr_Makarov" }, { "code": "", "text": "As you don’t have specific requirements, you should go with user name and password. Simple.So go for SCRAM. (e.g. kerberos requires additional ticket management, cert will expire,…)", "username": "Kobe_W" }, { "code": "", "text": "Hi Kobe_W,Thank you!", "username": "Petr_Makarov" } ]
Which authentication to choose?
2023-03-17T21:49:20.561Z
Which authentication to choose?
908
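A minimal SCRAM setup sketch for completeness (user name, database and role are illustrative; run as a user with userAdmin privileges):

```js
db.getSiblingDB("admin").createUser({
  user: "appUser",
  pwd: passwordPrompt(),                             // prompt instead of embedding the password
  roles: [ { role: "readWrite", db: "appDb" } ],
  mechanisms: [ "SCRAM-SHA-256" ]
})
```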
null
[ "replication", "sharding" ]
[ { "code": "", "text": "Hello!I’m planning to deploy a sharded cluster and I’m looking for the best practices for sizing. Let’s say I want my workloads to be available to many clients and I think about creating a database for each single client + test database for Dev purposes. I don’t know yet how many DBs will I have, maybe more than 1 K. So my questions are:Many thanks in advance!", "username": "Petr_Makarov" }, { "code": "", "text": "Hi @Petr_Makarov and welcome to MongoDB community forums!!I’m looking for the best practices for sizing.The documentation on Operational Restrictions in Sharded Clusters would be a good starting point to learn about the sizing of a sharded cluster.What’s the maximal number of DBs you have ever seen and is there a number of DBs that must not be exceeded if I want to keep a reliable performance?Since the sharding is performed on a single collection and it is broken down into multiple chunks based on the shard key selected. In the above statement, by DBs, are you referring to the number of chunks being formed? If yes, the chunks are approximately equal sized bytes divided between various shards based on the selected shard key. You can read more about shards, chunks and shard keys in the attached documentations.\nAlso, regarding the shards size, the official documentation says:Sharded collections can grow to any size after successfully enabling sharding.How many collections did you see in the “biggest” sharded cluster ?When a collection is sharded between the shards, the documents are distributed between the shards based on the shard key. Does the collection implies to documents in the collection?\nIf MongoDB cannot split a chunk that exceeds the specified range size, MongoDB labels the chunk as jumbo.. You can read more about Data partitioning in the official documentation.Also, I would recommend you taking the Basic Cluster Administration Course which explains about the basics of the sharding concepts.I know that 10 K per replica set is supported officially but my cluster can consist of many replica sets, did you see more than 10 K?Generally, we do not have a hardcoded limit on the number of collections we should have on the replica set. However, the performance would always depend on various factors of the applications.Let us know if you have further questions.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi Aasawari,Thank you!", "username": "Petr_Makarov" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Sharded cluster sizing recommendations
2023-05-29T22:20:32.180Z
Sharded cluster sizing recommendations
750
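As an illustrative sketch only (names are hypothetical): rather than one database per client, a single sharded collection can carry the client id in its shard key, which keeps the number of namespaces manageable while still distributing each client's data:

```js
sh.enableSharding("app")
sh.shardCollection("app.events", { clientId: 1, createdAt: 1 })

// Inspect how documents and chunks are spread across shards afterwards:
db.getSiblingDB("app").events.getShardDistribution()
sh.status()
```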
null
[ "aggregation", "mongodb-shell" ]
[ { "code": "gov_data.businessesaddress.citynameaddress.citynamenameTypeError: Cannot read properties of undefined (reading 'toUpperCase')use gov_data\nfunction titleCase(str) {\n return str && str.toLowerCase().split(/\\s/).map(function(word) {\n return word.replace(word[0], word[0].toUpperCase());\n }).join(' ');\n}\n\n console.log(titleCase(undefined));\n\n console.log(titleCase(\"\"));\n\n console.log(titleCase(null));\n\n console.log(titleCase(\"NAMAR\"));\n\ndb.businesses.aggregate().forEach(function(doc){\n db.businesses.updateMany(\n { \"_id\": doc._id },\n { \"$set\": { \"name\": titleCase(doc.name) } }\n );\n});\n", "text": "I end up having anywhere from 1.5Million to 3Million documents in the collection. Ingest is from public Government CSV data aggregated to gov_data.businesses collection. Everything is ALLCAPS. I aggregated the data to a new collection with the address.city and name fields $toLower. Now I need to titleCase those fields. using address.city instead of name in the following code takes a while (28 minutes), but succeeds. name however fails with TypeError: Cannot read properties of undefined (reading 'toUpperCase') after some 400,000 documents at about 8 (minutes). Feels like a data size issue, but I’ve no idea. I’m relatively new to aggregations and coding in mongo/mongosh.I borrowed the script from here: how to update field value to TittleCase in MongoDb?", "username": "Bill_Fetters" }, { "code": "var batchSize = 1000;\nvar currentUpdates = 0 ;\nvar bulk = db.Test.initializeUnorderedBulkOp();\nvar results;\n\n//db.getCollection('Test').find()\n\nprint(new Date());\ndb.getCollection('Test').find({}, {\"name3\":1}).forEach(theDocument =>{\n currentUpdates++;\n var filter = { _id: theDocument._id };\n var update = { $set: { name3: titleCase(theDocument.name3) } };\n \n bulk.find(filter).update(update);\n \n if((currentUpdates % batchSize) == 0){\n print(`Progress: ${currentUpdates}, commiting`)\n results = bulk.execute(); \n bulk = db.Test.initializeUnorderedBulkOp();\n } \n})\n\nif((currentUpdates % batchSize) > 0){\n print(`${currentUpdates} remaining in bulk, commiting`)\n results = bulk.execute(); \n}\n\nprint(`Done`)\nprint(new Date());\nprint(titleCase(' '))TypeError: Cannot read properties of undefined (reading 'toUpperCase')\nfunction titleCase(str) {\n return str && str.toLowerCase().split(/\\s/).map(function(word) {\n if(word.length == 0){\n return ''\n } else{\n return word.replace(word[0], word[0].toUpperCase());\n }\n }).join(' ');\n}\n", "text": "Updating one by one will be not great, at the least batch up the operations in a bulk operator, it’ll be much faster:You can send updates to the server in batches of X updates, you can tune the batch volume for best performance.Something similar to this:Running tests on a local collection with 600, 000 records the following were timings:\nSingle Updates: 452s\nBatch of 1,000: 104s\nBatch of 10,000: 96sThe error you’re getting looks like it’s due to a whitespace in the input, i.e.print(titleCase(' '))Gives:So one of your documents has a field with a space in it that’s causing the issue, you need a base case in the function to catch this before trying to access the character at the first location [0]:", "username": "John_Sewell" }, { "code": "results = bulk.execute();\n{\n acknowledged: true,\n insertedCount: 0,\n insertedIds: [],\n matchedCount: 1,\n modifiedCount: 1,\n deletedCount: 0,\n upsertedCount: 0,\n upsertedIds: []\n}\n\n", "text": "To add…we capture the results above:…you can look at the return object 
and collate the updates to see how many matches and upates were performed as a sanity check, the return object looks like this:", "username": "John_Sewell" }, { "code": "consolemongosh", "text": "Thanks. You are da man… that worked perfectly. I just didn’t know the allowable code. I’m still thinking database…\nI had issues with the console in DataGrip (Arity Error), but this worked in mongosh.\nThanks again.", "username": "Bill_Fetters" }, { "code": "", "text": "Glad to help! Weird that the SO post didn’t take into account the base case!Hopefully running in a bulk operation should make the update faster for you.John", "username": "John_Sewell" }, { "code": "", "text": "I refactored it complete with an initial aggregation to create the collection from the source, then run 2 separate batches; one for the name field, and one for the city field (this is really just data beautification).\nThe entire scripts runs in less than 19 minutes, where seeding it all through the API took upwards of 70 hours. I’d say that’s an improvement…\nThanks for your help again.", "username": "Bill_Fetters" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB v5... ingest of public business data. I need to do a titleCase() of the $name field. Works great to about 400,000 records, then errors
2023-07-17T20:24:18.316Z
MongoDB v5… ingest of public business data. I need to do a titleCase() of the $name field. Works great to about 400,000 records, then errors
393
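For readers on current driver/shell versions, the same batching idea can be expressed with the newer `bulkWrite` API (this is a sketch; `titleCase()` is the helper defined earlier in the thread and 1000 is an arbitrary batch size):

```js
const batchSize = 1000;
let ops = [];

db.businesses.find({}, { name: 1 }).forEach(doc => {
  ops.push({
    updateOne: {
      filter: { _id: doc._id },
      update: { $set: { name: titleCase(doc.name) } }
    }
  });
  if (ops.length === batchSize) {
    db.businesses.bulkWrite(ops, { ordered: false });
    ops = [];
  }
});

if (ops.length > 0) {
  db.businesses.bulkWrite(ops, { ordered: false });   // flush the remainder
}
```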
null
[]
[ { "code": "", "text": "Hello everyone I am Faizan Akhtar from Pune, excited to onboard as one of the MUG Pune leader. I tell about myself using few phrases:Fun Fact: I give too many Imaginary Ted Talks to my imaginary audience I have been part of many tech communities here in India and can’t wait to experience this new community MUG and meet all of you! You can know more about me and my socials: https://www.faizanakhtar.com/. Feel free to drop in and say a \"Hi \"\n(P.S. If you google Faizan Akhtar, my portfolio will probably be the first result that shows up )", "username": "Faizan_Akhtar" }, { "code": "", "text": "Welcome Fazian - looking forward to hearing more about the Pune MUG from you!", "username": "Veronica_Cooley-Perry" }, { "code": "", "text": "Hey @Faizan_Akhtar,\nWelcome to MongoDB Community!We’re thrilled to have you lead the Pune MongoDB Community. Your passion for learning and experience with communities would be very helpful for the community.As a Mixed Reality enthusiast, I am excited to hear your insights and experiences in this field.++ It’s always refreshing to see someone with a sense of humor and creativity, that will definitely help make our community events in Pune more engaging and fun.", "username": "Harshit" }, { "code": "", "text": "That’s great @Faizan_Akhtar , Congratulation \nMongoDB in Pune! Woohoo!\" ", "username": "Shekhar_Chaugule" } ]
Faizan Akhtar - MUG Leader Intro
2023-04-21T01:48:59.885Z
Faizan Akhtar - MUG Leader Intro
1,034
null
[ "ops-manager", "upgrading" ]
[ { "code": "", "text": "hello,I open this to ask you for some information, I have some doubts about Upgrade OpsManager.I need to update the Ops Manager, it is currently at version 5.0.17 and its AppDB at 4.2.13.When I update opsmanager to version 5.0.21 and then to 6.0.17, is its system database (AppDB) automatically updated?Or must I first update the mongodb verion of the AppDB database and then proceed to update opsmanager?Thank you,\nAlessia", "username": "Alessia_Papagno" }, { "code": "", "text": "You should open a support case with MongoDB and have them review the Ops Manager deployment and guide you through the upgrade.When I update opsmanager to version 5.0.21 and then to 6.0.17, is its system database (AppDB) automatically updated?AppDB is something you will have to upgrade manually. You can monitor it with OpsManager but it should not be managed by it.Or must I first update the mongodb verion of the AppDB database and then proceed to update opsmanager?Ops Manager 6.0 requires MongoDB 4.4 or later for the backing databases, 4.4 is deprecated. But yes updating your backing db should occur first.Roughly you should.", "username": "chris" } ]
OpsManager upgrade 5.0.17 -- 5.0.21 -- 6.0.17
2023-07-18T16:09:58.457Z
OpsManager upgrade 5.0.17 – 5.0.21 – 6.0.17
557
null
[]
[ { "code": "", "text": "Hi Team,\ni have set up aws endpoint connection between Mongodb to access our db through the endpoint privately. now i am able to access Mongodb through the ec2 instance. but we are running one glue job and we have created one network connection by using same VPC and subnet but we are not able to connect mongodb . kindly help us on that .", "username": "Shehzad_Ali" }, { "code": "", "text": "Hi @Shehzad_Ali,Welcome to the MongoDB Community!we are running one glue job and we have created one network connection by using same VPC and subnet but we are not able to connect Mongodb. kindly help us with that.It seems like there may be a configuration issue on the AWS end - may be related to security groups. However, I found a blog that provides an overview of how to utilize the AWS Glue crawler with MongoDB Atlas, (fully-managed cloud database). Please refer to the blog for more details: Introducing MongoDB Atlas Metadata Collection with AWS Glue Crawlers.Additionally, I suggest referring to Troubleshooting connection issues in AWS Glue or opening a similar thread with relevant details at the AWS community (https://repost.aws/), as they have expertise in AWS Glue Crawler and may be able to provide you with a relevant solution.Please let us know how it goes, and feel free to reach out if you have any other questions or feedback.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
Connecting AWS Glue to MongoDB using an AWS endpoint
2023-07-18T01:03:33.727Z
Connecting AWS Glue to MongoDB using an AWS endpoint
719
null
[]
[ { "code": "", "text": "Hello, I have a problem in mongoDB when paying for an order via Stripe. Everything is fine in the application stripe, it shows that the order was paid and everything, but in mongoDB it doesn’t want to change the “paid” status to true when the customer pays, also an order is created when he hasn’t paid. I asked stripe developers and they said that the problem will be in mongoDB . Someone want to see code ? (no errors in console or browser)", "username": "JaXo_N_A" }, { "code": "", "text": "Hey @JaXo_N_A,Welcome to the MongoDB Community!Hello, I have a problem in MongoDB when paying for an order via Stripe. Everything is fine in the application stripe, it shows that the order was paid and everything, but in MongoDB it doesn’t want to change the “paid” status to true when the customer pays, also an order is created when he hasn’t paid. I asked stripe developers and they said that the problem will be in MongoDB. Someone want to see the code?Could you please share more details of the issue you are encountering such as relevant code snippets related to the payment process and updating the MongoDB document when the order is paid?Additionally, let us know the framework or programming language you are using for your application (e.g., Node.js, Python, etc.) as it can be relevant to understanding the code and providing better assistance.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Paid status not updating in MongoDB
2023-07-18T05:07:22.424Z
Paid status not updating in MongoDB
222
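Since no code was shared in this thread, the following is purely a hypothetical sketch of where the update normally belongs: inside the Stripe webhook handler for `checkout.session.completed`, keyed on an order id stored in the session's metadata. `app` (Express), `stripe`, `endpointSecret` and the connected `db` handle are assumed to exist, and the metadata field name is an assumption.

```js
const { ObjectId } = require("mongodb");

app.post("/stripe-webhook", express.raw({ type: "application/json" }), async (req, res) => {
  const event = stripe.webhooks.constructEvent(
    req.body,
    req.headers["stripe-signature"],
    endpointSecret
  );

  if (event.type === "checkout.session.completed") {
    const orderId = event.data.object.metadata.orderId;   // assumes you set this when creating the session
    const result = await db.collection("orders").updateOne(
      { _id: new ObjectId(orderId) },
      { $set: { paid: true, paidAt: new Date() } }
    );
    // matchedCount of 0 usually means the filter (for example the _id type) matched no order
    console.log("matched:", result.matchedCount, "modified:", result.modifiedCount);
  }

  res.sendStatus(200);
});
```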
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "context.httpcontext.services.get('http')context.http", "text": "Hi,I just noticed this response on another thread from @John_Page which states that context.http is also going away on August 1st. I understand that the context.services.get('http') will go away. But not the built in context.httpIs this definitely the case? As if it is, it’s not been highlighted very well. I was under the assumption that this was a core built in modeul. If it is being replaced then myself and I believe a lot of others will need to know about this asap. As from the docs below, there is nothing about it being deprecated.Thanks", "username": "Adam_Holt" }, { "code": "", "text": "I don’t think the context.http client is being removed for the 1st August, that’s would cause many many people issues myself included.But the advice from the team is to use fetch or axios as they plan to ultimately not duplicate functions that exist in well maintained node modules.The other thread shows an example of a bug they have declined to fix.", "username": "John_Page" }, { "code": "", "text": "Thanks! Is there any chance you can check with the atlas app/realm team to make sure this is the case?", "username": "Adam_Holt" }, { "code": "", "text": "Hi Adam,Thanks for your question. The team has decided to extend the deprecation timeline for 3rd party services from Aug 1st, 2023 to November 1, 2024. There will be no impact to applications using 3rd party services or context.http on Aug 1st, so no immediate changes need to be made. Although as John mentioned, our recommend approach is to use fetch or axios.We will be updating the banners in product as well as in documentation this week to reflect these changes.", "username": "Laura_Zhukas1" }, { "code": "", "text": "Thanks @Laura_Zhukas1 for the extended timeline.A question, are we looking in performance benefits for those who do not use 3rd party services?For example, with 3rd party services disabled the function runs in a newer, upgraded environment, light, faster?", "username": "andrefelipe" }, { "code": "", "text": "Hi @andrefelipe, yes we have on-going work to continually enhance functions, as well as, a larger project to target this upcoming on our near-term roadmap.", "username": "Laura_Zhukas1" }, { "code": "", "text": "Thanks @Laura_Zhukas1 that’s exciting to here. It’s great to rely on MongoDB Atlas platform.", "username": "andrefelipe" } ]
Context.http - Going to be deprecated..?
2023-07-16T03:12:07.281Z
Context.http - Going to be deprecated..?
647
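A small sketch of the recommended approach, assuming `axios` has been added as a dependency of the App Services app (the URL is a placeholder):

```js
exports = async function () {
  const axios = require("axios");

  const response = await axios.get("https://api.example.com/status", {
    headers: { Accept: "application/json" },
    timeout: 5000
  });

  return response.data;   // returned to the caller of the Atlas Function
};
```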
null
[ "queries", "atlas-functions" ]
[ { "code": "exports =async function() {\n\tlet collections = context.services.get(\"XYZZZ\").db(\"XXXXX\");\n\tlet orderhistoryCollection = collections.collection(\"XXXXXX\");\n\tconst pipeline =[\n\t{\n $match: {\n creationDate: {\n $ne: \"\",\n }\n }\n },\n {\n $addFields: {\n created_at: {\n $toDate: \"$creationDate\",\n },\n },\n },\n {\n $match:{\n \"created_at\":{$lt:new Date(getThreshholdDate())}\n},\n },\n {\n $project: {\n _id: \"$_id\"\n\n },\n },\n];\nconst idList = await orderhistoryCollection.aggregate(pipeline).toArray();\n\nconsole.log(\"No of documents deleted: \"+idList.length);\n//console.log(idList);\nif(idList.length>0){\norderhistoryCollection.remove({ _id: { $in: idList } });\n}\nreturn idList;\n \n}\n\nfunction getThreshholdDate() {\n let date = new Date();\n date.setMonth( date.getMonth() - 7 );\n console.log(date.getFullYear()+\"-\"+date.getMonth()+\"-\"+date.getDate());\n return date;\n}\n\n", "text": "Hello, Query we are firing from Atalas Trigger.\nAtlas version: \t4.4.18Trigger:Error:{“message”:\"‘remove’ is not a function\",“name”:“TypeError”}Logs:[ “2022-11-6”, “26200” ]", "username": "Anil_Prasad1" }, { "code": "deleteManyremove", "text": "Perhaps you want deleteMany rather than remove?", "username": "Cast_Away" }, { "code": "console.log(\"No of documents to be deleted: \"+idList.length);\nif(idList.length>0){\n const deleteResult = await orderhistoryCollection.deleteMany({ _id: { $in: idList } });\n console.log(\"Deleted \" + deleteResult.deletedCount + \" documents\");\n}\n\n", "text": "Changed the code to deleteMany but it is not deleting records. Am I missing something?ran on Fri Jan 06 2023 12:56:03 GMT-0600 (Central Standard Time)\ntook 4.528895883s\nlogs:\n2022-5-6\nNo of documents to be deleted: 50000\nDeleted 0 documents", "username": "Anil_Prasad1" }, { "code": "console.log(orderhistoryCollection.find({ _id: { $in: idList } }).count());\nawait", "text": "What is the output of:N.B.: Depending on how/where you call this, you may need await, etc.", "username": "Cast_Away" }, { "code": "const idList = await orderhistoryCollection.aggregate(pipeline).toArray();\nconsole.log(\"No of documents to be deleted: \"+idList.length);\n\nif(idList.length>0){\n try{\n let ids=[];\n idList.forEach(doc=>ids.push(doc._id));\n const deleteResult = await orderhistoryCollection.deleteMany({ _id: { $in: ids } });\n console.log(\"Deleted \" + deleteResult.deletedCount + \" documents\");\n }\n catch(e)\n {\n console.log(e);\n }\n}\n", "text": "Fixed issue with this code change", "username": "Anil_Prasad1" }, { "code": "", "text": "Please Use deleteOne() Instead remove() function. If you want to delete single item deleteOne( ) & if there ary many documents there please use other delete Options.", "username": "ALI_SHER1" }, { "code": "", "text": "remove() function is deprecated, now you may use deleteOne() or deleteMany() functions instead of that.", "username": "034_Bharat_N_A" } ]
Error: {"message":"'remove' is not a function","name":"TypeError"}
2023-01-06T17:21:02.658Z
Error: {"message":"'remove' is not a function","name":"TypeError"}
13,996
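An alternative sketch, assuming `$expr` is accepted in the delete filter of the service (it is a standard query operator): the date comparison runs inside `deleteMany` itself, so there is no need to pull `_id`s back into the trigger first. Documents whose `creationDate` is missing or unparsable fall back to a far-future sentinel and are left untouched.

```js
const threshold = new Date();
threshold.setMonth(threshold.getMonth() - 7);

const keepSentinel = new Date("9999-12-31");   // bad/missing dates compare greater and are kept

const result = await orderhistoryCollection.deleteMany({
  $expr: {
    $lt: [
      { $convert: { input: "$creationDate", to: "date",
                    onError: keepSentinel, onNull: keepSentinel } },
      threshold
    ]
  }
});
console.log("Deleted " + result.deletedCount + " documents");
```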
null
[ "kafka-connector" ]
[ { "code": "true", "text": "Hello,We are using the Kafka source connector and we have noticed an unexpected issue recently.We set the copy.existing option to true to perform an initial load of the whole MongoDB collection before switching to the change stream. This works fine most of the time. But recently we had an issue causing our connectors to restart frequently (1-2 times per hour). We noticed that after the restart, if the copy was not finished yet, it started copying everything again. In the end our Kafka topic was filled with hundreds of copies of the same data while the copy process never ended.While we can try to avoid restarts as much as possible, it would be great if instead the copy would restart from where it left off. For huge and long initial loads, a single restart can have a big impact.Do you know if there is anything we can do to prevent that problem? Is that supposed to happen? Is there a plan to improve that behaviour?Thanks for your help", "username": "Colin_Smetz" }, { "code": "", "text": "Hi Colin, This is by design as the connector has no idea where within the copy process it currently is. Thus, the connector itself can’t really resume upon failure. Is your scenario to just replicate MongoDB data between clusters? How large are these collections that are causing it to take a long time to do the initial copy?", "username": "Robert_Walters" }, { "code": "", "text": "Hi Robert,Thanks for your answer.Is your scenario to just replicate MongoDB data between clusters?We do not really need to replicate the data to another MongoDB cluster. We need to ingest the data on Kafka to be used by other tools on our side that work specifically with Kafka. So we can’t bypass Kafka if that’s what you were suggesting.How large are these collections that are causing it to take a long time to do the initial copy?Around 50M records and several hundred GBs of data. We could have more in the future.This is by design as the connector has no idea where within the copy process it currently is. Thus, the connector itself can’t really resume upon failure.Would it make sense to consider handling this failure scenario? I mean, is it an explicit design choice to not handle it, or is it simply too complex / impossible to handle it? Would it be hard to keep track of where the copy process currently is?Note that my knowledge of MongoDB is pretty limited so I’m only trying to understand the limitations here so that we can evaluate our options on our side. I understand of course that is probably not a trivial problem.", "username": "Colin_Smetz" }, { "code": "curl -s \"http://localhost:8083/connectors?expand=info&expand=status\" | \\\n jq '. | to_entries[] | [ .value.info.type, .key, .value.status.connector.state,.value.status.tasks[].state,.value.info.config.\"connector.class\"]|join(\":|:\")' | \\\n column -s : -t| sed 's/\\\"//g'| sort\n", "text": "In terms of supporting failures for copy.existing, as the code is today, it would be hard to keep track of where the copy process currently is. Perhaps you could watch for a failed connector (in curl terms…and restart the process - all outside the connector. Perhaps through an automation tool.Feel free to file a JIRA ticket on this feature request.", "username": "Robert_Walters" }, { "code": "", "text": "Thanks for the suggestions. 
I have opened a JIRA ticket: KAFKA-315, and we will look for other workarounds in the meantime.", "username": "Colin_Smetz" }, { "code": "", "text": "Hi,Curious how you ended up dealing with this problem.", "username": "Kay_Khan" } ]
Kafka Source Connector: Copy Existing restarts from scratch if interrupted
2022-05-06T09:54:50.438Z
Kafka Source Connector: Copy Existing restarts from scratch if interrupted
3,359
null
[ "aggregation" ]
[ { "code": "\n{\n \"_id\": ObjectId(\"64a67f2fdbe7c36e2e6c15c6\"),\n \"name\": {\n \"type\": \"permenant\"\n },\n \"address\": \"permenant address 1\",\n \"parent\":{\n \"_id\": ObjectId(\"64a67f32dbe7c36e2e6c15c8\"),\n \"name\": {\n \"user\": \"xyz\"\n },\n \"data\": {\n \"age\": \"55\"\n }\n }\n\n }\n", "text": "I have created mongo playground and I’m expecting below output.Mongo playground: a simple sandbox to test and share MongoDB queries online\nNote: Relationship between parent and address uses DBRef.\nExpected Output:", "username": "Janak_Rana" }, { "code": "[\n {\n \"_id\": ObjectId(\"64a67f2fdbe7c36e2e6c15c6\"),\n \"address\": \"permenant address 1\",\n \"name\": {\n \"type\": \"permenant\"\n },\n \"parent\": {\n \"_id\": ObjectId(\"64a67f32dbe7c36e2e6c15c8\"),\n \"data\": {\n \"age\": \"55\"\n },\n \"name\": {\n \"user\": \"xyz\"\n }\n }\n },\n {\n \"_id\": ObjectId(\"64a67f2fdbe7c36e2e6c15c7\"),\n \"address\": \"permenant address 2\",\n \"name\": {\n \"type\": \"secondary\"\n },\n \"parent\": {\n \"_id\": ObjectId(\"64a67f32dbe7c36e2e6c15c8\"),\n \"data\": {\n \"age\": \"55\"\n },\n \"name\": {\n \"user\": \"xyz\"\n }\n }\n },\n {\n \"_id\": ObjectId(\"64a67f2fdbe7c36e2e6c15c3\"),\n \"address\": \"other address 2\",\n \"name\": {\n \"type\": \"other\"\n },\n \"parent\": {\n \"_id\": ObjectId(\"64a67f32dbe7c36e2e6c15c8\"),\n \"data\": {\n \"age\": \"55\"\n },\n \"name\": {\n \"user\": \"xyz\"\n }\n }\n }\n]\ndb.Address.aggregate([\n {\n $lookup: {\n from: \"Parent\",\n localField: \"parent.id\",\n foreignField: \"_id\",\n as: \"parent\"\n }\n },\n {\n $unwind: \"$parent\"\n },\n \n])\n", "text": "This one should be simpler than the last one, you just need to have a lookup stage and then $unwind to convert the array to a single entry.Mongo playground: a simple sandbox to test and share MongoDB queries onlineI’d recommend running through the Mongo University courses on these stages to get a good handle of them:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.If from a relational background:Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.", "username": "John_Sewell" }, { "code": "", "text": "Yes, that works but struggling with DBRef(‘Parent’, ‘64a67f32dbe7c36e2e6c15c8’)\nWhich is actual in mongodb.", "username": "Janak_Rana" }, { "code": "", "text": "It worked after adding $ sign\nlocalField: “parent.$id”,", "username": "Janak_Rana" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Add Parent DBRef data in Child records as a Single object
2023-07-18T10:34:44.497Z
Add Parent DBRef data in Child records as a Single object
352
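For readers landing here, the complete pipeline the thread settled on: when `parent` is stored as a DBRef, the referenced ObjectId lives under the `$id` key, so the `localField` must be written as `parent.$id`.

```js
db.Address.aggregate([
  {
    $lookup: {
      from: "Parent",
      localField: "parent.$id",   // the ObjectId inside the DBRef
      foreignField: "_id",
      as: "parent"
    }
  },
  { $unwind: "$parent" }          // turn the one-element array into a single embedded object
])
```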
https://www.mongodb.com/…1_2_1024x342.png
[ "node-js", "mongoose-odm" ]
[ { "code": "", "text": "Hi, I am able to connect my node.js app via MongoDB url provided by mongodb.com. But in the server(ubuntu 20.04), it,s not connecting.\nI am sure the database URL is accurate on the server. Also the server ip is whitelisted in Mongodb.com.\nimage1917×642 40.2 KB", "username": "Mohammod_Shuvo" }, { "code": "(includes your current IP address)", "text": "Hi @Mohammod_Shuvo,Welcome to the MongoDB Community!Hi, I am able to connect my node.js app via MongoDB url provided by mongodb.com. But in the server(ubuntu 20.04), it,s not connecting.Could you please clarify how you were able to connect earlier (e.g., were you using a different operating system or a Virtual Machine), and if anything changed after which the issue started occurring?Have you tested your Atlas connection string using mongo shell? If so, was it successful?However, the error message is referring to the IP whitelisting issue. Could you please ensure that your current IP address is whitelisted on your MongoDB Atlas dashboard’s network access tab? Also, make sure it says (includes your current IP address).\nimage2676×320 33.9 KB\nIf the issue still persists after verifying the above, feel free to reach out to us.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Node.js connection via Mongoose fails on server (Ubuntu 20.04) but works locally
2023-07-14T13:49:05.968Z
Node.js connection via Mongoose fails on server (Ubuntu 20.04) but works locally
462
null
[ "atlas", "change-streams" ]
[ { "code": "FullDocumentBeforeChangeFullDocumentFullDocumentBeforeChange", "text": "Hi,Recently as part of a project by our team to implement Mongo Change Streams on one of our collections, i’ve noticed that our Project has been autoscaling from M20 → M30 more and more frequently. When implementing Mongo CS, we ran a query to add in the FullDocumentBeforeChange for each MongoCS event (so it contains now the FullDocument and the FullDocumentBeforeChange for each update).Is it possible that including this additional feature is leading to the increase in autoscaling between M20 → M30? We used to sit more regularly at M10 and occasionally the autoscaling would increase to M20.Are there other things I should investigate? if so if you had any advice that would be appreciated.We are on Atlast and version 6.0.6", "username": "Paul_Chynoweth" }, { "code": "FullDocumentBeforeChangeFullDocumentFullDocumentBeforeChangeFullDocumentBeforeChangeconfig.system.preimagesconfig.system.preimagesexpireAfterSeconds", "text": "Hi @Paul_Chynoweth,When implementing Mongo CS, we ran a query to add in the FullDocumentBeforeChange for each MongoCS event (so it contains now the FullDocument and the FullDocumentBeforeChange for each update).Is it possible that including this additional feature is leading to the increase in autoscaling between M20 → M30? We used to sit more regularly at M10 and occasionally the autoscaling would increase to M20.I suspect that you might be currently operating close to the limit of what M20 provides, which may be causing you to reach the M30 tier due to further resource usage. To learn more, please refer to the Auto-Scaling documentation.May I ask if you notice any patterns when the autoscale happens, such as at a certain time of day or under specific workloads (e.g., analytics)?As you mentioned that you experienced the same issue with M10 → M20, are you observing any similar usage patterns when that happened as well?However, in the current implementation as per MongoDB 6.0.8, the FullDocumentBeforeChange feature, allows you to store pre-images, which are written to the config.system.preimages collection. This could consume additional resources and may impact your autoscaling in some cases, depending on your workload, especially if you are on the edge of your cluster capacity and experience an influx of additional workload that triggers autoscaling.Also as per the Change Stream documentation, you can limit the size of config.system.preimages collection and set an expireAfterSeconds time for the pre-images to prevent it from becoming too large.Hope the above helps! In case you need further assistance, I suggest reaching out to MongoDB Atlas support as they have better insights into your cluster.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Tier increase via Autoscaling after implementing FullDocumentBeforeChange
2023-07-10T08:47:50.640Z
Tier increase via Autoscaling after implementing FullDocumentBeforeChange
662
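For reference, the pre-image setup discussed in the thread above can be sketched in mongosh roughly as follows; the collection name and the retention window are placeholders, and the cluster-parameter form assumes MongoDB 6.0 or newer, as in the thread.

```javascript
// Enable pre-images on one collection (required before
// fullDocumentBeforeChange can return data for it).
db.runCommand({
  collMod: "orders",                                  // placeholder collection name
  changeStreamPreAndPostImages: { enabled: true }
});

// Bound how long pre-images are kept in config.system.preimages
// so the extra storage and I/O stay predictable (6.0+).
db.adminCommand({
  setClusterParameter: {
    changeStreamOptions: {
      preAndPostImages: { expireAfterSeconds: 3600 }  // placeholder: 1 hour
    }
  }
});

// Open the change stream asking for both images.
const cursor = db.orders.watch([], {
  fullDocument: "updateLookup",
  fullDocumentBeforeChange: "whenAvailable"
});
```

Keeping the expiry window short bounds how large config.system.preimages can grow, which is the extra load the reply attributes the autoscaling to.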
null
[ "java", "spring-data-odm" ]
[ { "code": "", "text": "Hii currently I am having mongo-java-driver 3.12.1 , spring-data-mongodb 2.2.5.RELEASE configured, Spring boot version 2.2.5.RELEASEI tried upgrade spring boot to 2.5.8 with spring-data-mongodb 3.2.3 and mongodb-java-driver-core mongodb-java-driver-sync 4.2.3But getting various errors of class not found/ method not found on deployment what is right way to upgrade mongo java driver and spring data mongodb with proper dependencies.", "username": "Sanket_Chauhan" }, { "code": "", "text": "Hi\nI’m currently migrating from spring2.3.6 to spring2.5.8 for mongo-java-driver upgradation. During this process of changing the spring version and building the application via gradle and starting the app, various errors were faced due to deprecation, dependency path change, adding dependencies on applying those kinds of changes we were able to run the application. Hope this process we followed may help.", "username": "Thulasi_Ram" } ]
Mongo java driver upgrade from 3.12.1 to 4.2.3
2022-01-06T12:46:13.346Z
Mongo java driver upgrade from 3.12.1 to 4.2.3
5,077
null
[ "connecting" ]
[ { "code": "const connectToDatabase = async () => {\n try {\n const uri = process.env.MONGODB_URI;\n const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });\n await client.connect();\n console.log('Connected to MongoDB successfully');\n const db = client.db();\n const sessions = new session(db, {\n collectionName: 'session',\n sessionName: 'telegrafSessions'\n });\n return { db, sessions };\n } catch (error) {\n console.error('Error connecting to MongoDB:', error);\n throw error;\n }\n};\nconst { MongoClient } = require('mongodb');\nconst { session } = require('telegraf-session-mongodb');\nconst { Telegraf, Markup } = require('telegraf');\nconst { connectToDatabase, saveUserSession, saveUserLog } = require('./src/database');\n// Connect database\nconnectToDatabase()\n.then(({ db, sessions }) => {\n // Middleware session user\n bot.use(sessions.middleware());\n\n bot.use(async (ctx, next) => {\n ctx.session.db = db; // insert object db session\n await next();\n });\n\n // Middleware save user telegram\n bot.use((ctx, next) => {\n saveUserSession(db, ctx.session)\n .then(() => next())\n .catch((error) => {\n console.error('Error saving user session:', error);\n ctx.reply('Oops! Something went wrong.');\n });\n });\n // Middleware save log user telegram\n bot.use((ctx, next) => {\n const log = {\n session_id: ctx.session.session_id,\n user_id: ctx.from.id,\n timestamp: new Date(),\n message: ctx.message.text\n };\n saveUserLog(db, log);\n return next();\n });\n", "text": "Hello,\nI want to ask, I’m getting an error while running mongodb for telegram bot. the error is Error connecting to MongoDB: TypeError: session is not a constructor. Here I show the code file database.jsThe modules that I installed on database.jsHere is the index.js file for connecting to the database.js file", "username": "Ekacitta_Wibisono" }, { "code": " const sessions = new session(db, {\n collectionName: 'session',\n sessionName: 'telegrafSessions'\n });\nMongoDB: TypeError: session is not a constructorconnectToDatabase()// Connect to the database and return the database object\nconst connectToDatabase = async () => {\n try {\n const uri = process.env.MONGODB_URI || \"mongodb://localhost:27017\"; // Use the environment variable if available, otherwise use the local URL\n const client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });\n await client.connect();\n\n console.log('Connected to MongoDB successfully');\n\n return client.db(); \n } catch (error) {\n console.error('Error connecting to MongoDB:', error);\n throw error;\n }\n};\n\n// Connect database and start the bot\nconnectToDatabase()\n .then((db) => {\n const bot = new Telegraf('YOUR_TELEGRAM_BOT_TOKEN'); // Replace 'YOUR_TELEGRAM_BOT_TOKEN' with your actual bot token\n\n // Middleware to manage user sessions\n bot.use(session(db, { sessionName: 'test', collectionName: 'sessions' })); // Replace 'test' with your actual database name\n\n bot.use(async (ctx, next) => {\n ctx.db = db; \n await next();\n });\n...\n", "text": "Hi @Ekacitta_Wibisono,Welcome to the MongoDB Community!MongoDB: TypeError: session is not a constructorBased on the shared details, the error doesn’t seems to be related to MongoDB rather it is occurring from the above code snippet because the session is not a constructor so adding a new keyword in-front of it make it throw an error.So, to avoid this error message first connect to the MongoDB database and then use the connectToDatabase() function to further start a bot. 
Sharing the code snippet for your reference:I’ve only briefly tested this out so I recommend evaluating the code in your test environment to see if it suits your use case and requirements.To learn more, please refer to telegraf-session-mongodb - npm documentation.Hope the above helps!Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
MongoDB error connect to DB (Bot telegram) session instalation
2023-07-14T13:39:10.739Z
MongoDB error connect to DB (Bot telegram) session instalation
763
null
[ "react-native" ]
[ { "code": " const credentials = Credentials.anonymous();\n await app.logIn(credentials);\nrealm.syncSession.pause();class StatisticsClass extends Realm.Object<StatisticsClass> {\n name!: string;\n _id!: BSON.ObjectId;\n owner_id!: string;\n questionId!: string;\n numberOfCorrectAnswers!: number;\n catalogueId!: string;\n timestamp!: number;\n primaryKey!: string;\n\n static schema: ObjectSchema = {\n name: STATISTICS_SCHEMA_NAME,\n primaryKey: '_id',\n properties: {\n _id: {type: 'objectId', default: () => new BSON.ObjectId()},\n owner_id: 'string',\n questionId: 'string',\n numberOfCorrectAnswers: 'int',\n timestamp: 'int',\n catalogueId: 'string',\n },\n };\n}\nclass BookmarkClass extends Realm.Object<BookmarkClass> {\n name!: string;\n _id!: BSON.ObjectId;\n owner_id!: string;\n id!: string;\n questionid!: string;\n questionText!: string;\n\n static schema: ObjectSchema = {\n name: BOOKMARK_SCHEMA_NAME,\n primaryKey: '_id',\n properties: {\n _id: {type: 'objectId', default: () => new BSON.ObjectId()},\n owner_id: 'string',\n id: 'string',\n questionid: 'string',\n questionText: 'string',\n },\n };\n}\n", "text": "Hello! I am currently working on an application that can be opened without the need to login at first.On fresh install the user navigates from the OnboardingScreen to AppScreen where he can start using it.\nThe main functionality is that the app fetches some questions from server and the user can answer them.My need is to save the output of the questions results (if he answered correct/wrong, questionId etc.) and some bookmarks so he can have a list of the Bookmarked questions.Now when the user logs in to his account (Facebook/Apple/Google) he can sync with Atlas and push the results to mongoDB.Then if he logout again he will be able to continue using the application locally on his device and on re-login he will sync local data with cloud.The first problem that I’ve encounter was <UserProvider fallback={}> which I cannot bypass if I don’t have a user.To bypass fallback I first create an anonymous user:and then I redirect to AppScreen.Because I do not want to sync anonymous user to Atlas I disable sync by\nrealm.syncSession.pause();\nMy question is how I can now login with Google (for the sake of this example) and migrate anonymous user data to the authenticated and sync with Atlas.My schemas for Statistics and Bookmarks are those:", "username": "Dimitris_Tzimikas" }, { "code": "", "text": "This is the error that I get when I try to sync realm from anonymous user to authenticate user e.g. Google:SyncError: Client attempted a write that is outside of permissions or query filters; it has been reverted\n\nLogs___App_Services1366×1426 81.1 KB\n", "username": "Dimitris_Tzimikas" }, { "code": "", "text": "For anyone who might have the same problem the fix that solved this problem is to link user identities so all the users can bundle together\n", "username": "Dimitris_Tzimikas" } ]
Application without login required to enter app
2023-07-13T11:41:25.434Z
Application without login required to enter app
702
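A rough sketch of the identity-linking fix described above, using the Realm JS / React Native SDK; the Google token variable is a placeholder, and the exact shape of the Google credential argument can vary between SDK versions.

```javascript
import Realm from "realm";

// Assumes `app` is the Realm.App instance and the device is currently
// signed in with the anonymous user created on first launch.
async function upgradeAnonymousUser(app, googleIdToken) {
  const anonymousUser = app.currentUser;

  // Link the Google identity onto the existing anonymous user so the
  // data written before login stays owned by the same user id.
  const googleCredentials = Realm.Credentials.google({ idToken: googleIdToken });
  await anonymousUser.linkCredentials(googleCredentials);

  // Sync can now be resumed; uploads are attributed to the same user,
  // which avoids the permissions error from writing as a different owner.
  return anonymousUser;
}
```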
null
[]
[ { "code": "", "text": "Hi Team ,Thinking if we can manually flush “dirty bytes” from cache to disk .If yes , how can we do it ?\nHelp me in sharing documents link / commands.Regards\nPrince.", "username": "Prince_Das" }, { "code": "", "text": "There’s a lock side effect if you use fsync. Not sure if you want that.This is the very first step before a disk snapshot based backup can be made.", "username": "Kobe_W" }, { "code": "fsync", "text": "Hi @Prince_Das,As @Kobe_W mentioned there is a way to do it using fsync but may I ask why you want to do that. Are you facing any issues related to it.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "@Kushagra_Kesav\nNo , i am not facing any issue , was thinking like in oracle if we want we can flush “dirty buffer” , without any issue ,… thought Mongodb might be having similar type of feature where we can flush those dirty bytes.but not in regularly , Only if it reaches near to “bytes currently in the cache” . We can us those command …For temp. purpose.\n@Kobe_W fsync would help but it can acquire locks .Thanks both for the help.Regards\nPrince", "username": "Prince_Das" }, { "code": "", "text": "Hi @Prince_Das,thought Mongodb might be having similar type of feature where we can flush those dirty bytes.but not in regularly , Only if it reaches near to “bytes currently in the cache” . We can us those command …For temp. purpose.In MongoDB, you don’t need to manually flush dirty bytes. The system takes care of automatically flushing them to disk once dirty contents in the WiredTiger cache reach 5%. It will flush them more aggressively once this number reaches 20% of the configured cache size. You can customize these numbers if you need more control, but unless you’re experiencing issues that can be conclusively traced to this setting, I would not recommend changing these values.To read more, please refer to the WiredTiger: Cache and eviction tuning documentation.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Flush Dirty bytes
2023-07-13T03:07:43.544Z
Flush Dirty bytes
573
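For completeness, the fsync-with-lock behaviour mentioned in the thread looks like this in mongosh; it is only sensible around snapshot-style backups, not as a routine way to flush dirty cache, since writes are blocked while the lock is held.

```javascript
// Flush in-memory data to disk and block further writes until unlocked.
db.fsyncLock();

// ... take the filesystem / disk snapshot here ...

// Release the lock so writes can resume.
db.fsyncUnlock();

// A flush can also be requested without keeping the lock, but under
// normal load WiredTiger eviction and checkpoints make this unnecessary.
db.adminCommand({ fsync: 1 });
```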
null
[ "dot-net", "android", "unity" ]
[ { "code": "Could not analyze the user's assembly. Object reference not set to an instance of an object UnityEngine.Debug:LogError (object) RealmWeaver.UnityWeaver/UnityLogger:Error (string,Mono.Cecil.Cil.SequencePoint) RealmWeaver.Analytics:AnalyzeUserAssembly (Mono.Cecil.ModuleDefinition) RealmWeaver.Analytics/<>c__DisplayClass9_0:<.ctor>b__2 () System.Threading._ThreadPoolWaitCallback:PerformWaitCallback ()", "text": "Just implemented Realm on Unity, and it works like charm on Editor.\nHowever, upon build on Android, it fails within few seconds, Error messageCould not analyze the user's assembly. Object reference not set to an instance of an object UnityEngine.Debug:LogError (object) RealmWeaver.UnityWeaver/UnityLogger:Error (string,Mono.Cecil.Cil.SequencePoint) RealmWeaver.Analytics:AnalyzeUserAssembly (Mono.Cecil.ModuleDefinition) RealmWeaver.Analytics/<>c__DisplayClass9_0:<.ctor>b__2 () System.Threading._ThreadPoolWaitCallback:PerformWaitCallback ()I’ve searched all over the place, i found nothing related", "username": "Slim_Hidri" }, { "code": "", "text": "What version of the initial SDK are you using?", "username": "nirinchev" }, { "code": "", "text": "11.2.0 , supposed to be latest one !", "username": "Slim_Hidri" }, { "code": "", "text": "So there are no suggestions how to tackle this ?\nHere’s a shot of my settings, dunno if it’s helpful\n\nUntitled-2-Recovered1280×720 145 KB\n", "username": "Slim_Hidri" }, { "code": "", "text": "I’m traveling today but will try and investigate when I get more stable internet. In the meantime I asked the team to take a look, but between vacations and other tasks, it may take a day or two to find a workaround.", "username": "nirinchev" }, { "code": "", "text": "Sure man, I’m extremely grateful for your help ", "username": "Slim_Hidri" }, { "code": "", "text": "Hi @Slim_Hidri, can you try to use the build at this link?\nHere you can find a guide on how to install Realm from a tarball if needed.This build will raise the error you encountered as a warning instead of an error, so you should be able to use Realm. It will also log the exception, so we’d be really grateful if you could create an issue in our repo with the error logs.Let us know how it goes!", "username": "papafe" }, { "code": "", "text": "Wonderful work Fernandino !\nHappy to say it works perfectly !There were no warning messages though in the logcat, no exceptions were logged either !\nJust let me know how I can help you guys with this issue !", "username": "Slim_Hidri" }, { "code": "", "text": "Hi @Slim_Hidri ,Glad it works \nRegarding the error message, are you sure there is nothing in the build log either? I suppose that’s where the original error message showed up.", "username": "papafe" }, { "code": "", "text": "I couldn’t share the whole log file “New users cant send an attachment”, but i assume this is the warning log !\n\nEditorLog1482×1080 91.8 KB\nLet me know if that’s the right part !", "username": "Slim_Hidri" }, { "code": "", "text": "@Slim_Hidri thanks a lot! That’s exactly the error. Now we can investigate further, thank you ", "username": "papafe" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unity Android build fail, "Could not analyze the user's assembly"!
2023-07-16T14:43:31.318Z
Unity Android build fail, “Could not analyze the user’s assembly”!
767
null
[]
[ { "code": "website_seeds.doc.status = new\nstart batch run\nwebsite_seeds.doc.status = processing\nfinished batch run\nwebsite_seeds.doc.status = completed\nwebsite_seeds.doc.run_log = [{'run_start': datetime, 'status': 'completed', ...},\n{'run_start': datetime, 'status': 'processing'}]\n", "text": "Hello,I’m builidng some sort of website monitor with around 10k websites to monitor, roughly 2 times per month.\nThe monitor should start, run some tests on a batch of websites, maybe 100 per batch run, wait a min or so and run the next 100 batch tests.In the collection: website_seeds I’ve got the list of all websites with some meta info.\nThe test results should be stored in website_results.How would I data model / structure the data of the progress?If I do sth like:I’d need to “reset” all values to ‘new’, when I need to start the 2nd run.So would that be reasonable to nest this info?In that way, I can check the most recent entry of all run_log.run_end fields and start a new batch after x days for the whole collection.Is there a better aproach?", "username": "Chris_Haus" }, { "code": "website_seedsurlstatuswebsite_resultswebsite_idwebsite_seedstimestampresultwebsite_seedsrun_logrun_startrun_endstatusrun_logstatuswebsite_seeds", "text": "Based on your requirements, here’s a suggestion for structuring the data for your website monitor:To track the progress and history of each website’s monitoring runs, you can add a nested field within the website_seeds document:“run_log”: [\n{\n“run_start”: “2023-07-18T09:00:00”,\n“run_end”: “2023-07-18T09:30:00”,\n“status”: “completed”\n},\n{\n“run_start”: “2023-07-19T10:00:00”,\n“status”: “processing”\n}\n]In this example, each entry in the run_log array represents a monitoring run. It includes the run_start timestamp, run_end timestamp (if available), and the status of the run. This allows you to track the history of each run and determine the most recent run.To initiate a new batch run, you can check the latest entry in the run_log array for each website. If the latest run is completed or if a certain time threshold has passed, you can update the status field of all documents in the website_seeds collection to “new” to indicate that they need to be processed again.By following this data model, you can easily track the progress of each website, store the test results separately, and have a history of the monitoring runs for reference.Remember to adapt this model to your specific requirements and consider any additional fields or information that might be relevant to your monitoring process.", "username": "new_mail" } ]
Data structure for regular website monitor
2023-05-25T05:37:02.097Z
Data structure for regular website monitor
324
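With the nested run_log modelled as in the reply above, the seeds that are due for another batch can be selected without resetting a status flag. A rough mongosh sketch, where the 14-day threshold, batch size, and field names are assumptions taken from the example:

```javascript
// Websites whose most recent run started more than ~14 days ago
// (or that have never been run) are due for the next batch.
const cutoff = new Date(Date.now() - 14 * 24 * 60 * 60 * 1000);

db.website_seeds.find({
  $or: [
    { run_log: { $exists: false } },                      // never monitored yet
    { run_log: { $size: 0 } },
    { "run_log.run_start": { $not: { $gte: cutoff } } }   // no run newer than cutoff
  ]
}).limit(100);                                            // one batch of ~100 sites
```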
null
[ "aggregation" ]
[ { "code": "claimiddocumentidcomplaintiddb.document.aggregate([ { $lookup: { from: \"complaint\", localField: \"documentid \", foreignField: \"documentid\", as: \"ComplaintDetail\" } }, { $lookup:{ from: \"Claim\", localField: \"claimId\", foreignField: \"claimId\", as: \"Claims\" } } ])\n", "text": "I have 5 collections viz. claim, document, complaint, acknowledge, responsesrelationship are as followsI need data from document collection and all its claims, complaint, responses, and acknowledgment dataquery used so far, pls help in making a query to fetch data from acknowledge & responses collections.", "username": "bharat_phartyal" }, { "code": "$lookupdb.document.aggregate([\n {\n $lookup: {\n from: \"complaint\",\n localField: \"documentid\",\n foreignField: \"documentid\",\n as: \"ComplaintDetail\"\n }\n },\n {\n $lookup: {\n from: \"claim\",\n localField: \"claimid\",\n foreignField: \"claimid\",\n as: \"Claims\"\n }\n },\n {\n $lookup: {\n from: \"acknowledge\",\n localField: \"ComplaintDetail.complaintid\",\n foreignField: \"complaintid\",\n as: \"Acknowledgements\"\n }\n },\n {\n $lookup: {\n from: \"responses\",\n localField: \"ComplaintDetail.complaintid\",\n foreignField: \"complaintid\",\n as: \"Responses\"\n }\n }\n]);\n", "text": "Hey @bharat_phartyal,Welcome to the MongoDB Community Based on your schema design, you can use the following aggregation pipeline consisting of 4 $lookup stages:However, I’d recommend reconsidering your schema design to make the query more efficient and optimize it for better performance. Depending on your use case, you may need to use appropriate indexes and possibly denormalize some data to reduce the need for multiple lookups in the aggregation pipeline.Note: I recommend evaluating the above aggregation pipeline with different data loads to ensure it meets your use case and requirements.Hope it helps!Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Multiple collections joins
2023-07-13T07:19:43.654Z
Multiple collections joins
617
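The reply above recommends supporting indexes for the joins; a minimal sketch of what that could look like, reusing the collection and field names from the pipeline:

```javascript
// Each $lookup above probes the foreign collection by these keys,
// so an index on each foreignField keeps the joins from scanning.
db.complaint.createIndex({ documentid: 1 });
db.claim.createIndex({ claimid: 1 });
db.acknowledge.createIndex({ complaintid: 1 });
db.responses.createIndex({ complaintid: 1 });
```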
null
[ "aggregation", "queries", "graphql" ]
[ { "code": "{\n \"title\": \"Task\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"label\": {\n \"bsonType\": \"string\"\n },\n \"taskSet_id\": {\n \"bsonType\": \"objectId\"\n },\n}\n{\n \"title\": \"TaskSet\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"owner_id\": {\n \"bsonType\": \"objectId\"\n },\n \"editor_ids\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"objectId\"\n }\n },\n \"viewer_ids\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"objectId\"\n }\n }\n }\n}\n{\n \"taskSet_id\": {\n \"ref\": \"#/relationship/mongodb-atlas/ungant/TaskSet\",\n \"foreignKey\": \"_id\",\n \"isList\": false\n }\n}\n{\"taskSet_id.owner_id\": \"%%user.id\"}\n", "text": "I’ve got a tasks collectionwith Schema:and a TaskSet collection with schema:I then have a relationship:I want to create an access Rule on Task with an Apply When:From the docs it says “App Services automatically replaces source values with the referenced objects or a null value in resolved GraphQL types and SDK models”, but my Rule does not appear to be working, and I get no results from JavaScript code when running collection.find({}).I am admittedly confused by the whole Relationships topic, and how it relates to $lookup. I thought with the Relationship defined I might get the related document properties in results from a find(), but I do not. Can the Relationship be used for Rules permissions, and where else does it apply?", "username": "Daniel_Bernasconi" }, { "code": "", "text": "Hi, a relationship cannot be used within a rule expression. I will file a Documentation ticket to make this clear in the docs. The reason is simply that we can only use the data present within the document to evaluate rules, and pulling in all linked documents could result in significant performance degradation. Our recommendation would be to modify your data model to potentially duplicate data in order to achieve your permission goals if needed.Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks Tyler, that makes sense. I will have potentially thousands of Tasks in a TaskSet, and access control is by TaskSet. With hundreds of users with access to a particular TaskSet, that would mean every one of the thousands of Tasks in a TaskSet contains the same list of hundreds of users with access permission. Is that a reasonable solution? I’m thinking it wouldn’t be great for performance or storage.I also thought of making the TaskSet a document containing all the Tasks, which would solve the problem, but then the TaskSet contains an unbounded array.A couple of options:Very grateful for any advice on this!Thanks,\nDan", "username": "Daniel_Bernasconi" }, { "code": "type task {\n _id ObjectId\n name string \n taskSet ObjectId\n}\n\ntype taskSet {\n _id ObjectId\n ... 
blah \n}\n\ntype user {\n _id ObjectId\n taskSets: []ObjectId\n}\n{taskSet: {$in: \"user.custom_data.taskSets\"}}\n", "text": "Hi, one other approach is to use custom user data and have each user document keep track of the TaskSet’s.Therefore, you could structure your data like this:And then structure permissions for the task collection to be:Although it is not a one-to-one comparison, I think that reading through the Restricted News Feed example here might be helpful for you: https://www.mongodb.com/docs/atlas/app-services/sync/app-builder/device-sync-permissions-guide/#restricted-news-feedLet me know if this solves your use case!Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi Tyler,Thanks very much for the suggested structure and rule - that’s a neat proposal. I hadn’t considered writing the Rule that way around but that should work so I will go with that.Cheers,Dan", "username": "Daniel_Bernasconi" } ]
How to specify a Rule based on a Relationship
2023-07-14T06:11:25.061Z
How to specify a Rule based on a Relationship
616
null
[ "replication", "compass", "containers" ]
[ { "code": "", "text": "How can i setup the mongo db cluster in cloudI have deployed 3 separate instances in oracle cloud and created the docker containers with the replica set. I have initiated the cluster and in rs.status() , i can able to see the primary and 2 secondary instances. In R53 i have created the records with TXT and SRV. While im connecting in mongo compass its getting error.Server selection timed out after 30000 mscan anyone help to sort this out.", "username": "siva_kumar7" }, { "code": "", "text": "Your connection string?", "username": "Kobe_W" }, { "code": "", "text": "mongodb+srv://user:pass@mongouatrs.$domain.com/admin?ssl=false", "username": "siva_kumar7" } ]
MongoDB cluster setup
2023-07-17T06:53:54.896Z
MongoDB cluster setup
559
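One way to narrow down the timeout in the thread above is to bypass the Route 53 SRV/TXT lookup and connect with a standard seed-list string; the host names, port, and replica-set name below are placeholders.

```javascript
// Standard (non-SRV) connection string listing every member directly.
// If this connects while the mongodb+srv form times out, the DNS
// SRV/TXT records are the likely culprit rather than the containers.
const { MongoClient } = require("mongodb");

const uri =
  "mongodb://user:pass@host1.example.com:27017," +
  "host2.example.com:27017,host3.example.com:27017" +
  "/admin?replicaSet=rs0&ssl=false";

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  // A successful ping proves server selection worked against the replica set.
  console.log(await client.db("admin").command({ ping: 1 }));
  await client.close();
}

main().catch(console.error);
```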
null
[ "data-modeling", "flutter" ]
[ { "code": "", "text": "Hello, I’m new to mongodb and I was hoping to get some help on my new project. I’m building a chat application(both one to one and group functionalities) with flutter, socket_io and mongodb as the database, any suggestions on how the schema should be like.", "username": "OPOKU_KELVIN_AGYEMANG" }, { "code": "", "text": "Hi @OPOKU_KELVIN_AGYEMANG,Welcome to the MongoDB Community forums!I’m building a chat application(both one to one and group functionalities) with flutter, socket_io and mongodb as the database, any suggestions on how the schema should be like.You can refer to this Building a Mobile Chat App Using Realm – The New and Easier Way | MongoDB article to gain insights into the schema design of the chat application.Also if you’re starting your MongoDB journey, I would recommend the following resources:Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Creating a schema for chat application in flutter
2023-07-15T07:05:17.388Z
Creating a schema for chat application in flutter
668
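As a concrete starting point for the schema question above, one possible document shape for conversations and messages is sketched below in mongosh; every field name here is illustrative rather than prescriptive.

```javascript
// One document per conversation (1:1 or group); membership drives access.
const conv = db.conversations.insertOne({
  type: "group",                        // or "direct"
  participants: ["user_1", "user_2", "user_3"],
  createdAt: new Date(),
  lastMessageAt: new Date()
});

// Messages live in their own collection so a busy chat never grows a
// single document without bound; index by conversation + time for paging.
db.messages.insertOne({
  conversationId: conv.insertedId,
  senderId: "user_1",
  body: "Hello!",
  sentAt: new Date(),
  readBy: ["user_1"]
});

db.messages.createIndex({ conversationId: 1, sentAt: -1 });
```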
https://www.mongodb.com/…ae80cec0a01c.gif
[ "aggregation", "queries", "atlas-search" ]
[ { "code": "[\n {\n $search: {\n index: \"Town_And_Suburb\",\n compound: {\n should: [\n {\n autocomplete: {\n query: searchterm,\n path: \"properties.SUBURB\",\n },\n },\n {\n autocomplete: {\n query: searchterm,\n path: \"properties.TOWN\",\n tokenOrder: \"sequential\",\n },\n },\n ],\n minimumShouldMatch: 1,\n },\n },\n },\n {\n $project: {\n _id: 1,\n Town: \"$properties.TOWN\",\n Suburb: \"$properties.SUBURB\",\n score: { $meta: \"searchScore\" },\n },\n },\n {\n $group: {\n _id: \"$Town\",\n Areas: {\n $addToSet: {\n value: \"$Suburb\",\n label: \"$Suburb\",\n _id: \"$_id\",\n },\n },\n },\n },\n {\n $project: {\n label: \"$_id\",\n value: \"$_id\",\n options: \"$Areas\",\n },\n },\n ]\nCape TownCape TownCape PointCape TownTownSuburbs{\n \"_id\": {\n \"$oid\": \"123bnbm1231n23nmb\"\n },\n \"type\": \"Feature\",\n \"geometry\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n ... removed for brevity\n ]\n },\n \"properties\": {\n \"SUBURB\": \"Seawinds\",\n \"TOWN\": \"Cape Town\",\n \"PROVINCE\": \"Western Cape\"\n }\n}\n{\n \"_id\": {\n \"$oid\": \"64b0b73456345nb3453g\"\n },\n \"type\": \"Feature\",\n \"geometry\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n ... removed for brevity\n ]\n },\n \"properties\": {\n \"SUBURB\": \"Camps Bay\",\n \"TOWN\": \"Cape Town\",\n \"PROVINCE\": \"Western Cape\"\n }\n}\n", "text": "Hi Guys,I am busy implementing a Autocomplete utility using Atlas Search.I have a collection of documents for Suburbs. Each Suburb is part of some Town (City). In other words, one Town can have many suburbs.I created an aggregation pipeline as below.The results returned for the most part are correct, except for the order of relevance.If I were to search for a Town: Cape Town, I expect the first results in the array to be for Cape Town however, the first result ended up being for Cape Point with Cape Town only appearing way later in the list.\nThe returned list is displayed in a dropdown menu with the Town as the heading and all the Suburbs listed under it. See screen recording showing the issue below.I have mostly made a mistake in the aggregation pipeline, I am just not sure where.PS, this is what a typical document looks like:What am I doing wrong? 
Any assistance is greatly appreciated.", "username": "Stephan_06935" }, { "code": "Cape Point", "text": "Hi @Stephan_06935 , can you provide the example document for the entry which includes Cape Point?", "username": "amyjian" }, { "code": "[{\n \"_id\": {\n \"$oid\": \"64b0bf7f22a5418f0edc6755\"\n },\n \"type\": \"Feature\",\n \"geometry\": {\n \"type\": \"Polygon\"\n },\n \"properties\": {\n \"SUBURB\": \"Redhill\",\n \"STRCODE\": \"\",\n \"TOWN\": \"Cape Point\",\n \"PROVINCE\": \"Western Cape\"\n }\n},\n{\n \"_id\": {\n \"$oid\": \"64b0bf7f22a5418f0edc6854\"\n },\n \"type\": \"Feature\",\n \"geometry\": {\n \"type\": \"Polygon\"\n },\n \"properties\": {\n \"SUBURB\": \"Cape of Good Hope Nature Reserve\",\n \"STRCODE\": \"7975\",\n \"TOWN\": \"Cape Point\",\n \"PROVINCE\": \"Western Cape\"\n }\n},\n{\n \"_id\": {\n \"$oid\": \"64b0bf8122a5418f0edc6f4c\"\n },\n \"type\": \"Feature\",\n \"geometry\": {\n \"type\": \"Polygon\"\n },\n \"properties\": {\n \"SUBURB\": \"Smitswinkel Bay\",\n \"STRCODE\": \"\",\n \"TOWN\": \"Cape Point\",\n \"PROVINCE\": \"Western Cape\"\n }\n},\n{\n \"_id\": {\n \"$oid\": \"64b0bf9e22a5418f0edc9f97\"\n },\n \"type\": \"Feature\",\n \"geometry\": {\n \"type\": \"Polygon\"\n },\n \"properties\": {\n \"SUBURB\": \"Castle Rock\",\n \"STRCODE\": \"7975\",\n \"TOWN\": \"Cape Point\",\n \"PROVINCE\": \"Western Cape\"\n }\n}]\n", "text": "Hi @amyjian ,Sure thing, see below 4 results you can correlate to results in the video (I just removed the coordinates field):", "username": "Stephan_06935" }, { "code": "$push$addToSet[\n {\n $search:\n {\n index: \"Town_And_Suburb\",\n compound: {\n should: [\n {\n autocomplete: {\n query: searchterm,\n path: \"properties.SUBURB\",\n },\n },\n {\n autocomplete: {\n query: searchterm,\n path: \"properties.TOWN\",\n tokenOrder: \"sequential\",\n },\n },\n ],\n minimumShouldMatch: 1,\n },\n },\n },\n {\n $project:\n /**\n * specifications: The fields to\n * include or exclude.\n */\n {\n _id: 1,\n Town: \"$properties.TOWN\",\n Suburb: \"$properties.SUBURB\",\n score: {\n $meta: \"searchScore\",\n },\n },\n },\n {\n $sort:\n /**\n * Provide any number of field/order pairs.\n */\n {\n score: -1,\n },\n },\n {\n $group: {\n _id: \"$Town\",\n Areas: {\n $push: {\n // Use $push to maintain the order within each group\n value: \"$Suburb\",\n label: \"$Suburb\",\n _id: \"$_id\",\n },\n },\n maxScore: {\n $max: \"$score\",\n },\n },\n },\n {\n $sort:\n /**\n * Provide any number of field/order pairs.\n */\n {\n maxScore: -1,\n },\n },\n {\n $project:\n /**\n * specifications: The fields to\n * include or exclude.\n */\n {\n label: \"$_id\",\n value: \"$_id\",\n options: \"$Areas\",\n },\n },\n ]\n", "text": "I have modified the group stage to use the $push method to preserve the order instead of the $addToSet.\nThen added another sort stage to order by score.\nHerewith the updated aggregation pipeline. This seems to return more accurate results. Will continue to test.", "username": "Stephan_06935" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Search Autocomplete query results are not in order of relevance
2023-07-17T14:56:44.523Z
Atlas Search Autocomplete query results are not in order of relevance
508
null
[ "atlas-cluster", "atlas", "api" ]
[ { "code": "", "text": "Is it possible to gain access to a MongoDB cluster using the Atlas Administrator API?\nI would like to programmatically access all the clusters inside an organization. For that I would have to manually create a user for each of the projects and store the credentials in order to access the MongoDB clusters. Is there a simpler way? Ideally I would like to be able to use the Atlas API (given sufficient permissions) to get credentials for accessing the clusters.", "username": "Aleksandar_Jelic" }, { "code": "Database User", "text": "Hi @Aleksandar_Jelic - Welcome to the community.Is it possible to gain access to a MongoDB cluster using the Atlas Administrator API?\nI would like to programmatically access all the clusters inside an organization.Just to clarify, what do you mean by gain access to the MongoDB cluster? Do you mean configuring the cluster itself or modifying / reading the data within the cluster?For that I would have to manually create a user for each of the projects and store the credentials in order to access the MongoDB clusters. Is there a simpler way?Is “user” in this context a Database User?I would like to be able to use the Atlas API (given sufficient permissions) to get credentials for accessing the clusters.You could possibly use the Return All Database Users from One Project request.The requested information is just so that I can get a better understanding of the context since there are multiple API’s.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi @Jason_Tran, thank you for your reply.Let me add some context.\nI’m using GitHub - mongodb/mongo-go-driver: The Official Golang driver for MongoDB to extract some data (database names, roles,…) from a MongoDB cluster. I’m connecting to the cluster via username and password. My goal is to extract data from all the clusters in the organization (in all the projects).Connecting to all the clusters in projects of all the organizations can be a bit cumbersome as I’d need to create a database user for each of the projects and store the credentials. Is there an easier way to connect to the clusters? As mentioned in the previous post, ideally I would like to be able to use the Atlas API to somehow connect to the clusters inside a project.You mentioned the Return All Database Users from One Project endpoint, but it doesn’t return the user’s password (which is perfectly reasonable, I would never expect it to).I’d really appreciate your input.\nThank you and best regards,\nAleksandar", "username": "Aleksandar_Jelic" }, { "code": "", "text": "Let me add some context.\nI’m using GitHub - mongodb/mongo-go-driver: The Official Golang driver for MongoDB to extract some data (database names, roles,…) from a MongoDB cluster. I’m connecting to the cluster via username and password. My goal is to extract data from all the clusters in the organization (in all the projects).Connecting to all the clusters in projects of all the organizations can be a bit cumbersome as I’d need to create a database user for each of the projects and store the credentials. Is there an easier way to connect to the clusters?Thanks for providing those details / context. Just for clarity, it sounds like you are after accessing the data within each of the clusters inside each project for an organization rather than information about the cluster(s) but correct me if I am wrong here. 
In saying so, the Atlas Administration API’s available allow you to perform administrative tasks for the Atlas organization rather than accessing the data inside each of the clusters.As mentioned in the previous post, ideally I would like to be able to use the Atlas API to somehow connect to the clusters inside a project.You can read and write data using the Data API if it suits your use case but as per the documentation:The Data API uses project-level API keys to manage access and prevent unauthorized requests. Every request must include a valid Data API key.A Data API key grants full read and write access every collection in a cluster and can access any enabled cluster in the project.Is this something you’ve looked into to see if it suits your use case?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Access to MongoDB cluster via Atlas Administrator API
2023-07-07T05:28:52.459Z
Access to MongoDB cluster via Atlas Administrator API
740
null
[ "node-js" ]
[ { "code": " public async foo()\n {\n let testCollection = Database.db.collection(\"test\");\n\n await testCollection.updateMany({ name:\"foo\" }, {\n $pull: { ids: \"bar\" }\n }).catch((error)=>{})\n }\nType 'string' is not assignable to type 'never'.\n", "text": "Why do i get this typescript error when i create a typeless collection when i try to use $pull on the testCollection? When i use non-array functions like $find etc. it works fine. Do i have to type every Collection? How would it work with dynamic Collections-Documents?I use [email protected] and [email protected]", "username": "Fabian_Kuhn" }, { "code": "collection<{}>(\"test\").updateMany({ name:\"foo\" }, {\n $pull: { ids: \"bar\" }\n })\n", "text": "Can confirm, I’m having the same issue.\nIt turns your you need to pass a generic type to the collection, like so:It just looks silly to me, because to my understanding, if I’m not passing a schema to the collection, it should just allow anything. Also because it works with $set.", "username": "Eduardo_Araujo" }, { "code": "", "text": "Thanks for the reply, it helped solve my problem.", "username": "Hertz_Ray" } ]
Type 'ObjectId' is not assignable to type 'never'
2022-01-02T16:09:26.513Z
Type ‘ObjectId’ is not assignable to type ‘never’
7,918
null
[ "replication" ]
[ { "code": "", "text": "we have a 3 nodes mongo4.2 replicaset running on Linux7.3, because the data was small, all of the 3 nodes are running on one server, the data grows a lot recently, so we’re planning to move 2 copies to new servers, but the new server is Linux 7.6(Server compatibility issue, we’re unable to install Linux 7.3), is it okay to run a replicaset on different linux versions?\n(1 member on Linux 7.3 +2 members on Linux 7.6, mongodb version is the same.)", "username": "Eddy_Li" }, { "code": "", "text": "As long as same db version is supported, mongodb doesnt care", "username": "Kobe_W" }, { "code": "", "text": "Thanks for the quick reply, Kobe.", "username": "Eddy_Li" } ]
Is it possible to setup replicaset on different linux versions
2023-07-17T07:52:04.727Z
Is it possible to setup replicaset on different linux versions
525
null
[ "connector-for-bi" ]
[ { "code": "", "text": "Hello,I have a question. Is it possible to have a multicloud connection from PowerBI (part of the Microsoft Power Platform) to MongoDB Atlas? Or we have to ask Microsoft if they can provide us a BI Connector to connect Power BI Power Platform to Atlas?\nWe know the functionality exists with PowerBI Desktop, but now we have a requirement to link our PowerBI cloud to Atlas.Best Regards\nVeronika", "username": "vvpe" }, { "code": "", "text": "Hi @Veronika_Petrova My name is Alexi Antonino and I am the Product Manager for the MongoDB BI Connector as well as a new feature (in Preview soon) called Atlas SQL Interface. The Atlas SQL Interface, connects Atlas Data to SQL based BI tools. The difference from Atlas BIC to Atlas SQL is that the underlying dialect is not MySQL it is our own MongoSQL dialect that is SQL-92 compatible and affords us the ability to add in MongoDB specific functions to perform operations necessary for our document data (e.g. flattening & unwinding). Not only does Atlas SQL have this new dialect to translate SQL to MQL, we have also built a JDBC Driver and we are in the process of building an ODBC Driver. Another difference between Atlas BI Connector and Atlas SQL, is that we plan to build custom connectors to some of the most used tools. We are in the process of building a MongoDB Atlas Tableau Connector and will then build one for Power BI/Power Query.\nI wanted to give you this context, because your timing is good so that I can review our existing plan for the Power BI Connector and see if this includes PowerBI cloud. For reference, we are planning to have this new MongoDB Atlas SQL Power BI Connector later this year. I will keep you posted as to whether this first version will support PowerBI Cloud. And feel free to email me directly [email protected],\nAlexi", "username": "Alexi_Antonino" }, { "code": "", "text": "Thank you, Alexi!\nThat’s a good news.Best Regards\nVeronika", "username": "vvpe" }, { "code": "", "text": "@Alexi_Antonino Thank you. Imagine you have a multi-tenant SaaS where data for customers are logically separated in the collections, so identifiable/filterable by a property present in each collection.\nDo you know any best practices now or in future with Atlas SQL to offer each customer of ours Tableau/PowerBI? This would somehow require to enforce that the query always includes this tenant/customer property.", "username": "Manuel_Reil1" }, { "code": "", "text": "Hello,@Alexi_Antonino I saw ur answer, i am facing same issue than vvpe and want to connect PowerBI WebService to my Atlas instance to always get real-time data in dashboards and reports. Do you know approximately when we will be able to do that ?Thanks in advance,\nNans", "username": "Nans_KARAYANNIDIS" }, { "code": "", "text": "Hello @Nans_KARAYANNIDIS - We are planning for a Private Preview of the Power BI Atlas SQL connector early next year. Like January/Feb. time frame. This will be an opportunity for customers to try it out, verify the connector meets their needs and I can collect feedback for my team to iterate on the tool. We are shooting for Atlas SQL to be GA in June 2023. If you’d like to participate in the PPP (Private Preview Program), email me and I will give you the details. We are planning to have support for Direct Query, though I am not sure if this will be in the first version release for the Private Preview Program, but will come soon after. 
Here is my email: [email protected]", "username": "Alexi_Antonino" }, { "code": "", "text": "@Manuel_Reil1 I am so sorry I am just seeing your post now. Your use case for multi-tenant access to data is not unique. It have come up from a few other customers. I need to research how this would work, and any best practices for configuration and get back to you and a few others. Let me do some testing/playing with Atlas SQL + Tableau (since that connector is ready for use now) and get back to you.Best,\nAlexi", "username": "Alexi_Antonino" }, { "code": "", "text": "Hey, @Alexi_Antonino. I’m trying to connect our MongoDB Atlas cluster on PowerBI, and I successfully connected using the MongoDB BI Connector. The downside is that we must import all the data to work with it. You mentioned the possibility that the Atlas SQL Interface could work with DirectQuery. Is it supported as of now? We would have to migrate from Mongo version 4.4 to 5 to test it; that is why I first need to know if a direct way of accessing live data is possible so I can present it to the team before we carry out the migration.", "username": "Thomas_Marcelo" }, { "code": "", "text": "Greetings @Thomas_Marcelo Thanks for asking about our new Atlas SQL Power BI Connector. It is currently in Private Preview , but the first version is only supporting Import Mode. We are planning to add support for Direct Query in a future release this year. Atlas SQL does require your Atlas Cluster to be version 5.0 and higher, so I can appreciate that being on your radar.\nIf you would like access to the Atlas SQL Power BI connector now to see how it works and check out how it is different from the BI Connector, please email me and I will get you access. While this version only supports import mode, seeing how the data is displayed and how Atlas SQL leverages Data Federation might be helpful to understand now. You can also use Atlas SQL with a free tier cluster loaded with sample data, so it is easy to test out.Best,\nAlexi\[email protected]", "username": "Alexi_Antonino" } ]
Connect Power BI Microsoft Power Platform to MongoDB Atlas
2022-04-21T12:38:04.240Z
Connect Power BI Microsoft Power Platform to MongoDB Atlas
5,364
null
[ "python" ]
[ { "code": "", "text": "We are pleased to announce the 1.0.2 release of PyMongoArrow - a PyMongo extension containing tools for loading MongoDB query result sets as Apache Arrow tables, Pandas and NumPy arrays.The release includes bug fixes for projection on nested fields, and nested\nextension objects in auto schemas.See the changelog for a high-level summary of what is in this release or see the PyMongoArrow 1.0.2 release notes in JIRA for the complete list of resolved issues.Documentation: PyMongo 1.0.2 documentation\nSource: GitHubThank you to everyone who contributed to this release!", "username": "Steve_Silvester" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
PyMongoArrow 1.0.2 Released
2023-07-17T21:43:28.685Z
PyMongoArrow 1.0.2 Released
528
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Hi Team,I want to understand the difference between mongodb-mongosh and mongodb-mongosh-shared-openssl3.Is mongodb-mongosh-shared-openssl3 an improved and better than mongodb-mongosh.what is the major difference .{“Thanks and Regards”, “Satya”}", "username": "Satya_Oradba" }, { "code": "apt", "text": "mongodb-mongosh-shared-openssl3 is a package e.g., for Ubuntu to install via apt.There are two different openssl libraries floating around, the earlier 1.x version disappearing in favor of openssl3 which is the only openssl lib installed by default on most Linux distros these days.The package name denotes that it’s the cut of mongosh compiled against openssl3.", "username": "Jack_Woehr" }, { "code": "", "text": "Also see https://jira.mongodb.org/browse/MONGOSH-1231", "username": "Jack_Woehr" }, { "code": "", "text": "Hi @Jack_WoehrThanks for the update.Could you please elaborate.What are the major pros and cons with shared-openssl3 and why we have to use it and why not mongosh.{“Thanks and Regards”, “Satya”}", "username": "Satya_Oradba" }, { "code": "", "text": "A discussion of OpenSSL versions is beyond the scope of this forum, but you can look at OpenSSL 3.0 Has Been Released! - OpenSSL Blog and decide which versions of OpenSSL and mongosh you wish to use.", "username": "Jack_Woehr" }, { "code": "", "text": "Also, I think this is the case:", "username": "Jack_Woehr" } ]
What is the difference between mongodb-mongosh and mongodb-mongosh-shared-openssl3
2023-07-17T12:37:06.578Z
What is the difference between mongodb-mongosh and mongodb-mongosh-shared-openssl3
580
null
[ "transactions", "storage" ]
[ { "code": "", "text": "Hello,For simplicity I ask about a single node. When Mongo iterates a large portion of documents (to answer some query that requires scanning the collection), and requires that all iterated data will be coherent to the same point in time - Does this cause all reads in the iteration to be done under the same storage engine transaction?if not, what mechanism in Wiredtiger Mongo uses to ensure that:Thanks,Roey.", "username": "Roey_Maor" }, { "code": "", "text": "I recall by default its snapshot read, like repeatable read in mysql case.A big read on my guess will use it with a tramsaction.However its not same as serializable, so look it up on wikipedia.", "username": "Kobe_W" } ]
Question - iterating large amount of documents
2023-07-17T08:01:18.989Z
Question - iterating large amount of documents
512
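The point-in-time behaviour discussed above can also be requested explicitly from the application side with read concern "snapshot" inside a session/transaction; a minimal mongosh sketch, with the database and collection names as placeholders:

```javascript
// All reads inside the transaction see one WiredTiger snapshot,
// so a long scan stays coherent to a single point in time.
const session = db.getMongo().startSession();
session.startTransaction({ readConcern: { level: "snapshot" } });

const coll = session.getDatabase("test").getCollection("bigCollection");
let count = 0;
coll.find().forEach(() => { count += 1; });   // iterate the whole collection
print(count);

session.commitTransaction();
session.endSession();

// Note: transactionLifetimeLimitSeconds (60s by default) bounds how long
// such a scan may run, so very large collections need a different approach.
```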
null
[ "dot-net", "android" ]
[ { "code": "var client = new MongoClient(MongoDbConnectionString);\nvar db = client.GetDatabase(MongoDbName);\n", "text": "Hi,\nI coded a MAUI Blazor App which connects to MongoDB Atlas and loads some Data in the App. Everything worlks fine if I run the App on a Windows PC. But if I compile/run the same code on an Android device I get this error in the development tools:blazor.webview.js:1 List of configured name servers must not be empty. (Parameter ‘servers’)\nat DnsClient.LookupClient.QueryInternal(DnsQuestion question, DnsQuerySettings queryOptions, IReadOnlyCollection`1 servers)\nat DnsClient.LookupClient.Query(DnsQuestion question)\nat DnsClient.LookupClient.Query(String query, QueryType queryType, QueryClass queryClass)\nat MongoDB.Driver.Core.Misc.DnsClientWrapper.ResolveTxtRecords(String domainName, CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Configuration.ConnectionString.Resolve(Boolean resolveHosts, CancellationToken cancellationToken)\nat MongoDB.Driver.MongoUrl.Resolve(Boolean resolveHosts, CancellationToken cancellationToken)\nat MongoDB.Driver.MongoClientSettings.FromUrl(MongoUrl url)\nat MongoDB.Driver.MongoClientSettings.FromConnectionString(String connectionString)\nat MongoDB.Driver.MongoClient…ctor(String connectionString)\nat MBz_MauiAndMongoDb.Pages.Index.OnInitializedAsync() in C:\\Users\\Are\\source\\repos\\MBz_MauiAndMongoDb\\MBz_MauiAndMongoDb\\Pages\\Index.razor:line 23\nat Microsoft.AspNetCore.Components.ComponentBase.RunInitAndSetParametersAsync()The error occures at is moment:The ConnectionString works perfect if run on a Windows maschine. The ConnectionString looks like this:“mongodb+srv://AtlasUser: myUserName+myCluster/test?retryWrites=true&w=majority&ssl=true”I use the MongoDB.Bson and the C# MongoDB.Driver.In the Documentations I only find the connectionString Syntax I use. But that seems to NOT work with MAUI App on a Android-Device! If I just use “mongodb://AtlasUser:…” without the ‘srv’ I get a timeout error. Any Ideas?Thx for your Help!", "username": "Henning" }, { "code": "LookupClientLookupClientmongodb://mongodb+srv://+srvConnectConnect your applicationC# / .NET2.4 or latermongodb://<username>:<password>@cluster0-shard-00-00.CLUSTER_NAME.mongodb.net:27017,cluster0-shard-00-01.CLUSTER_NAME.mongodb.net:27017,cluster0-shard-00-02.CLUSTER_NAME.mongodb.net:27017/?ssl=true&replicaSet=REPLSET_NAME&authSource=admin&retryWrites=true&w=majority\n", "text": "Hi, @Henning,Welcome back to the MongoDB Community Forums. I see that you’re having problems running a MAUI Blazor app using the MongoDB .NET/C# Driver. The error you’ve encountered is coming from one of our third-party dependencies, DnsClient.NET, which we use for SRV and TXT DNS lookups. The error indicates that autodiscovery of nameservers wasn’t successful. You can read more about it in DnsClient.NET issue #143.While in theory you could manually configure DNS nameservers when instantiating the LookupClient instance, our driver does not expose the ability to supply arguments to the LookupClient constructor.Two options:The latter option works because we only use DnsClient.NET for SRV and TXT lookups. A, AAAA, CNAME, and other DNS records are performed automatically by the operating system. Note that simply removing +srv from the connection string is not sufficient. The hostnames are different. 
If you’re using MongoDB Atlas, go to your cluster page, select Connect, Connect your application, C# / .NET 2.4 or later, and you should see a connection string that looks like:Please let us know if you have any additional questions.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "+srvConnectConnect your applicationC# / .NET2.4 or latermongodb://<username>:<password>@cluster0-shard-00-00.CLUSTER_NAME.mongodb.net:27017,cluster0-shard-00-01.CLUSTER_NAME.mongodb.net:27017,cluster0-shard-00-02.CLUSTER_NAME.mongodb.net:27017/?ssl=true&replicaSet=REPLSET_NAME&authSource=admin&retryWrites=true&w=majority\n", "text": "Thanky you very much… yes this solved the problem:Note that simply removing +srv from the connection string is not sufficient. The hostnames are different. If you’re using MongoDB Atlas, go to your cluster page, select Connect , Connect your application , C# / .NET 2.4 or later , and you should see a connection string that looks like:", "username": "Henning" }, { "code": "", "text": "Hello, I recently ran into this same issue and am worried about using the MongoDB without the srv for a production environment. Does anyone know if there has been any sort of development on this issue? This seems like it could turn into a serious issue for adoption of MongoDB on MAUI Blazor applications.", "username": "Chas_Phyle" }, { "code": "mongodb://mongodb+srv://srvMaxHostsmongosmongodb+srv://mongodb://mongodb+srv://mongodb://", "text": "Hi, @Chas_Phyle,Thank you for your additional question. MongoDB supports two types of connection strings, standard (mongodb://) and DNS seed list (mongodb+srv://). Certain features - like srvMaxHosts and dynamic mongos discovery - are only possible with the latter. Otherwise the two connection string formats are equivalent.For the .NET/C# Driver in particular, we use a third-party library, DnsClient.NET, for looking up SRV and TXT DNS records because the BCL does not support this out of the box. By default, DnsClient.NET uses the default DNS servers configured at the operating system level. Depending on your execution environment (e.g. mobile apps on some platforms), the default DNS servers configured by the operating system are not accessible due to the security sandbox. This results in DnsClient.NET failing to initialize correctly. We resolved this recently in 2.19.1 (see CSHARP-4436) by avoiding initialization of DnsClient.NET unless it is required (e.g. mongodb+srv:// is used).In summary mongodb:// and mongodb+srv:// are two different connection string formats and both are supported. If your execution environment only supports mongodb:// connection strings, you can still connect successfully to your MongoDB Atlas and self-hosted clusters.Sincerely,\nJames", "username": "James_Kovacs" } ]
MAUI Blazor App on Android Device how to connect to MongoDB Atlas (ConnectionString works on Windows PC but not on Android Device)?
2022-08-31T15:01:16.338Z
MAUI Blazor App on Android Device how to connect to MongoDB Atlas (ConnectionString works on Windows PC but not on Android Device)?
3,968
null
[ "atlas-device-sync", "flexible-sync" ]
[ { "code": "", "text": "If I synchronize based on user_id, should I create an index for it? Locally, every document will have a user_id, so it doesn’t really help. However, I am not sure if the server is effectively querying user_id when syncing, in which case an index will improve performance.Are queryable fields for flexible sync already indexed?", "username": "BPDev" }, { "code": "", "text": "Hi @BPDev,Sync does not query your Atlas data directly, but rather it keeps its own copy of your data. The data copy will have an index on each of your queryable fields, so you should not need to tweak those manually.", "username": "Kiro_Morkos" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Should synced property be indexed?
2023-07-17T20:02:14.393Z
Should synced property be indexed?
505
null
[ "aggregation" ]
[ { "code": "", "text": "Given is a collection with following fields:I’d like to group the data by taker and maker separately.\nEvery group has a added field amount.\nIn the taker group the field is a sum of token_bought_amount and in the maker group a sum of token_sold_amount.\nIn the last step the two groups are merged where amount of taker group is called bought and amount of maker group is called sold.I can do this in SQL with ease, but have my problems with MongoDB.\nAppreciate your help!", "username": "Florian_Baumann" }, { "code": "", "text": "How many groups are you looking at in the dataset? One option is facets but be wary that this will then impose a document size limit which is 16mb.", "username": "John_Sewell" }, { "code": "[\n {\n \"token_bought_address\": \"0xAA\",\n \"token_sold_address\": \"0xBB\",\n \"token_bought_amount\": 10,\n \"token_sold_amount\": 5,\n \"maker\": \"0x11\",\n \"taker\": \"0x22\"\n },\n {\n \"token_bought_address\": \"0xBB\",\n \"token_sold_address\": \"0xAA\",\n \"token_bought_amount\": 3,\n \"token_sold_amount\": 6,\n \"maker\": \"0x22\",\n \"taker\": \"0x11\"\n }\n]\n[\n {\n \"wallet\": \"0x11\",\n \"amount\": 3,\n },\n {\n \"wallet\": \"0x22\",\n \"amount\": 2,\n }\n]\n", "text": "Let’s take this dataset with just two entries as an exampleWhere taker is the one who bought the tokens and maker the one who sold and I’m only interested in the token 0xBBThe result I’m looking for would be this", "username": "Florian_Baumann" }, { "code": "", "text": "I don’t see how the sample data above gives the result you posted, unless I’m being stupid (which isn’t the first time and won’t be the last…)Can you post your SQL that you normally do this with as that’ll be a concrete definition.", "username": "John_Sewell" }, { "code": "with token_balances as (\n select -- tokens sold\n -sum(token_sold_amount) as amount\n , maker as wallet\n from trades\n where token_sold_address = '0xBB'\n group by 2\n \n union all\n \n select -- tokens bought\n sum(token_bought_amount) as amount\n , taker as wallet\n from trades\n where token_bought_address = '0xBB'\n group by 2\n)\n\nselect\n\twallet\n , sum(amount) as amount\n from token_balances\n group by 1\n", "text": "Sure", "username": "Florian_Baumann" }, { "code": "db.getCollection(\"token_balances\").aggregate([\n{\n $match:{\n $or:[\n {\"token_bought_address\":\"0xBB\"},\n {\"token_sold_address\":\"0xBB\"}\n ]\n }\n},\n{\n $facet:{\n sold:[\n {\n $match:{\n \"token_sold_address\":\"0xBB\"\n }\n },\n {\n $group:{\n _id:'$maker',\n total:{$sum:{$multiply:[-1, '$token_sold_amount']}}\n }\n }\n ],\n bought:[\n {\n $match:{\n \"token_bought_address\":\"0xBB\"\n }\n },\n {\n $group:{\n _id:'$taker',\n total:{$sum:'$token_bought_amount'}\n }\n }\n ],\n }\n},\n{\n $project:{\n allItem:{\n $setUnion:[\"$sold\",\"$bought\"]\n }\n }\n},\n{\n $unwind:\"$allItem\"\n},\n{\n $replaceRoot:{\n newRoot:\"$allItem\"\n }\n},\n{\n $group:{\n _id:'$_id',\n total:{$sum:\"$total\"}\n }\n},\n{\n $project:{\n _id:0,\n \"wallet\":\"$_id\",\n \"amount\":\"$total\"\n }\n}\n])\nwith token_balances as (\n select -- tokens sold\n -sum(token_sold_amount) as amount\n , maker as wallet\n from trades\n where token_sold_address = '0xBB'\n group by 2\n \n union all\n \n select -- tokens bought\n sum(token_bought_amount) as amount\n , taker as wallet\n from trades\n where token_bought_address = '0xBB'\n group by 2\n)\n\nselect\n\twallet\n , sum(amount) as amount\n from token_balances\n group by 1\n", "text": "This is the kind of thing I was thinking of, 
obviously will need a supporting index for the initial match stage and be wary of data volumes that hit the $facet as they cannot make use of indexes:With your data, I pushed it into a local MariaDB instance:\n\nand ran your query on it:Which gave this:\nThe Mongo equivalent gave:\nYou could eliminate the $replaceRoot stage from the aggregation pipeline as it’s just to simplify the next stage.I obviously don’t have your dataset, but have a play with volumes and see how it performs on your data.", "username": "John_Sewell" }, { "code": "", "text": "Thank you very much", "username": "Florian_Baumann" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
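On MongoDB 4.4 or newer, a $unionWith pipeline is another way to mirror the SQL UNION ALL without pushing everything through $facet (and its 16 MB output limit). The following is only a mongosh sketch against the sample trades data and field names quoted in the thread, not a tested drop-in:

db.trades.aggregate([
  // tokens bought for 0xBB, credited to the taker
  { $match: { token_bought_address: "0xBB" } },
  { $project: { _id: 0, wallet: "$taker", amount: "$token_bought_amount" } },
  // append tokens sold for 0xBB, debited from the maker (the UNION ALL part)
  { $unionWith: {
      coll: "trades",
      pipeline: [
        { $match: { token_sold_address: "0xBB" } },
        { $project: { _id: 0, wallet: "$maker", amount: { $multiply: [ -1, "$token_sold_amount" ] } } }
      ]
  } },
  // net amount per wallet, like the outer GROUP BY in the SQL
  { $group: { _id: "$wallet", amount: { $sum: "$amount" } } },
  { $project: { _id: 0, wallet: "$_id", amount: 1 } }
])

Like the SQL version, this nets bought amounts against sold amounts per wallet; it will not reproduce the literal numbers in the opening post, which, as noted above, do not follow from the sample documents.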
Aggregate two groups on same collection
2023-07-15T19:22:28.330Z
Aggregate two groups on same collection
546
null
[ "aggregation" ]
[ { "code": "{\n \"$and\" : [\n {\n \"metaInfos\" : {\n \"$elemMatch\" : {\n name : 'nameOfField1',\n value : 'valueOfField1'\n }\n }\n },\n {\n \"metaInfos\" : {\n \"$elemMatch\" : {\n name : 'nameOfField2',\n value : 'valueOfField2'\n }\n }\n }\n ]\n}\ndb.getCollection(\"test\").aggregate(\n [\n {\n \"$group\" : {\n \"_id\" : \"$state\",\n \"count\" : {\n \"$sum\" : NumberInt(1)\n }\n }\n }, \n ], \n {\n \"allowDiskUse\" : false\n });\ndb.getCollection(\"test\").aggregate(\n [\n {\n \"$unwind\" : {\n \"path\" : \"$metaInfos\"\n }\n }, \n {\n \"$match\" : {\n \"metaInfos.name\" : \"state\"\n }\n }, \n {\n \"$group\" : {\n \"_id\" : \"$metaInfos.value\",\n \"count\" : {\n \"$sum\" : NumberInt(1)\n }\n }\n }\n ], \n {\n \"allowDiskUse\" : false\n }\n);\n", "text": "Hi all!I have a collection which has rather large documents (265.000 objects occupy 78GB space).\nUsers are able to query documents by creating any combination and order of search fields.\nCreating an index takes a long time, because server has to move 78GB objects through its memory.To not restrict the user to be able to only use certain combinations of search fields, I have come up with the following concept:Create an object metaInfos as array in each object.\nCreate objects in { name: “”, value: “” } form for each field that can occur in a search.\nCreate a SINGLE index on metaInfos.name + metaInfos.value.Query any combination of search fields like this :Explain shows me this uses the index for each of the queries I tested.Now I also want to use aggregation on it.\nE.g. I have a state field containing the current state of the document in string format.I would like to aggregate by group like this:But since the state is only available inside the array I have toExplain however shows this to NOT use the index, but rather make a collection scan.\nIs there any other way to use aggregation on a specific item in an array that is not location based like .0 ?Thanks for your help\nAndreas", "username": "Andreas_Kroll" }, { "code": "$arrayElemAt", "text": "Hi @Andreas_Kroll,Welcome to the MongoDB Community!Based on the shared information, I guess you can use the $arrayElemAt operator in your aggregation pipeline. This will allow you to access an element at a specific index within an array.To assist you more effectively, could you please provide the following details: the index you have created on your collection, a sample document, and the version of MongoDB you are using? 
Additionally, let us know if you are using MongoDB on-prem or MongoDB Atlas.Look forward to hearing from you.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "db.getCollection(\"Test\").explain().aggregate([\n{\n $match:{\n \"metaInfos.value\":{\n $in:[\"Thing D\", \"Thing F\"]\n }\n } \n},\n{\n $unwind:'$metaInfos'\n},\n{\n $match:{\n $or:[\n {$and:[{'metaInfos.value':'Thing D'},{'metaInfos.name':'Field 3'}]},\n {$and:[{'metaInfos.value':'Thing F'},{'metaInfos.name':'Field 3'}]}\n ]\n }\n},\n{\n $group:{\n \"_id\" : \"$metaInfos.value\",\n \"count\" : {\"$sum\" : 1} \n }\n}\n])\n \"inputStage\" : {\n \"stage\" : \"FETCH\",\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"metaInfos.value\" : 1.0\n },\n \"indexName\" : \"metaInfos.value_1\",\n \"isMultiKey\" : true,\n \"multiKeyPaths\" : {\n \"metaInfos.value\" : [\n \"metaInfos\"\n ]\n },\n{\n name:'Thing 1',\n metaInfos:[\n {\n value:'Thing A',\n name:'Field 1'\n },\n {\n value:'Thing B',\n name:'Field 2'\n },\n ]\n},\n{\n name:'Thing 2',\n metaInfos:[\n {\n value:'Thing C',\n name:'Field 1'\n },\n {\n value:'Thing D',\n name:'Field 3'\n },\n ]\n},\n", "text": "Once you unwind an index will not be hit, can you not just have a multi-key index on metaInfos.value and then have that as the first stage, you can then unwind and match again to filter out just what you want but that initial filter will hit the index and hopefully reduce the dataset before the aggregation has to filter on un-indexed data.Do you have some sample data if I’ve misunderstood this?I was thinking of something like this:So, show how many records matched “Thing D” or “Thing F” in the metadata within the “Field 3” metadata tag. Assuming that each document only has a metadata defined once within it…The explain for the above shows an IDXSCAN:/Edit - this was the data I used:", "username": "John_Sewell" }, { "code": "", "text": "Hi Kushagra,I cannot use $arrayElemAt as the tuples { name: “xxx”, value: “yyy” } are not in the same order and each document can have a different set of { name: “xxx”, value: “yyy” } tuples associated.Is there any function that could seach for contents of one property and deliver back the contents of another property?Like $arrayElemFind called with name: “xxx” and delivers contents of value back?", "username": "Andreas_Kroll" }, { "code": "", "text": "Hi @Andreas_Kroll,It’s a little difficult to assist without the sample document structure. However, I think the $arrayToObject aggregation pipeline operator might be beneficial in this scenario.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "{\n \"name1\": \"value1\", \n \"name2\": \"value2\"\n}\n", "text": "Hi!is there any way I can transform the array with { name: “”, value: “”} pairs into an object likeI tried using $arrayToObject, but this demands the properties to be named k and v and does not work otherwise.Any hints?", "username": "Andreas_Kroll" }, { "code": "{\n _id: ObjectId(\"64b531930b74e330f809d559\"),\n arrayField: [\n { name: 'name1', value: 'value1' },\n { name: 'name2', value: 'value2' }\n ]\n}\ndb.test.aggregate([{\n $replaceRoot: {\n newRoot: {\n $arrayToObject: {\n $map: {\n input: \"$arrayField\",\n in: {\n k: \"$$this.name\",\n v: \"$$this.value\"\n }\n }\n }\n }\n }\n}\n])\n{ name1: 'value1', name2: 'value2' }\n", "text": "Hi @Andreas_Kroll,I tried using $arrayToObject, but this demands the properties to be named k and v and does not work otherwise. 
Any hints?Considering the following sample document structure:I have tested the following aggregation pipeline using $replaceRoot and $arrayToObject operator, which may give you the required results.And it is returning the following output:I’ve only briefly tested this out, so I recommend either performing the same in the test environment to see if it suits your use case and requirements.Hope it helps!Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi again,thanks for the quick reply.\nI think it was a step in the right direction.However - I checked with explain, and it still tells collection scan \nEven if I reduce the fields strictly to the metaInfos element.", "username": "Andreas_Kroll" } ]
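One more option for the collection scan reported at the end of the thread: keep a $match on the indexed metaInfos path as the very first stage and skip $unwind entirely by extracting the state entry with $filter. This is a rough mongosh sketch using the field names from the thread and assuming MongoDB 4.4+ for the $first array operator; the index on metaInfos.name + metaInfos.value can only help the initial $match, because $group itself still has to read every matching document:

db.test.aggregate([
  // index-eligible first stage on the multikey index over metaInfos.name / metaInfos.value
  { $match: { metaInfos: { $elemMatch: { name: "state" } } } },
  // pull the { name: "state", value: ... } entry out of the array without $unwind
  { $set: { stateEntry: { $first: { $filter: { input: "$metaInfos", cond: { $eq: [ "$$this.name", "state" ] } } } } } },
  { $group: { _id: "$stateEntry.value", count: { $sum: 1 } } }
])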
Using Indices in dynamic queries
2023-07-12T10:04:45.211Z
Using Indices in dynamic queries
383
null
[ "queries", "atlas-functions", "atlas-online-archive" ]
[ { "code": "", "text": "Hi,I have a primary collection “collection1” and an archived Collection “collection2”. Both contain similar data. “collection2” contains data that is older than 3 years. Latest 3 years of data is available in “collection1”. The application queries “collection1” alone. Is there a way where we can internally search “collection2” incase data does not exist in “collection1” within MongoDB rather than modifying the application code.Scenario - The application queries for certain data from collection1. If data does not exist in that collection, there has to be an internal mechanism to query collection2 and fetch it to the application.\nAny suggestions would be appreciated. Thank you.", "username": "Satyanarayana_Ettamsetty1" }, { "code": "collection1", "text": "Hi @Satyanarayana_Ettamsetty1,What you describe doesn’t exist.You could implement this logic in the back end without too much effort if that’s an option.Another solution that comes to mind would be to use the $unionWith but this would mean that you always look for matching docs in both collections even if there are results in collection1.If you are using Atlas Online Archive then Data Federation could also help with this use case.Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Fetch Data from either of the collections
2023-07-17T12:41:15.646Z
Fetch Data from either of the collections
547
null
[ "aggregation", "queries", "data-modeling" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"64a67f32dbe7c36e2e6c15c8\"\n },\n \"name\": {\n \"user\": \"xyz\"\n },\n \"data\": {\n \"age\": \"55\",\n \"address\": [{\n \"$ref\": \"Addresses\",\n \"$id\": {\n \"$oid\": \"64a67f2fdbe7c36e2e6c15c6\"\n }\n }, {\n \"$ref\": \"Addresses\",\n \"$id\": {\n \"$oid\": \"64a67f2fdbe7c36e2e6c15c7\"\n }\n },\n ]\n }\n}\n{\n \"_id\": {\n \"$oid\": \"64a67f2fdbe7c36e2e6c15c6\"\n },\n \"name\": {\n \"type\": \"permenant\"\n },\n \"address\":\"permenant address 1\"\n}\n{\n \"_id\": {\n \"$oid\": \"64a67f2fdbe7c36e2e6c15c7\"\n },\n \"name\": {\n \"type\": \"secondary\"\n },\n \"address\":\"permenant address 2\"\n}\n{\n\t\"_id\": {\n\t\t\"$oid\": \"64a67f32dbe7c36e2e6c15c8\"\n\t},\n\t\"name\": {\n\t\t\"user\": \"xyz\"\n\t},\n\t\"data\": {\n\t\t\"age\": \"55\",\n\t\t\"address\": [\n\t\t\t{\n\t\t\t\t\"_id\": {\n\t\t\t\t\t\"$oid\": \"64a67f2fdbe7c36e2e6c15c6\"\n\t\t\t\t},\n\t\t\t\t\"name\": {\n\t\t\t\t\t\"type\": \"permenant\"\n\t\t\t\t},\n\t\t\t\t\"address\": \"permenant address 1\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"_id\": {\n\t\t\t\t\t\"$oid\": \"64a67f2fdbe7c36e2e6c15c7\"\n\t\t\t\t},\n\t\t\t\t\"name\": {\n\t\t\t\t\t\"type\": \"secondary\"\n\t\t\t\t},\n\t\t\t\t\"address\": \"permenant address 2\"\n\t\t\t}\n\t\t]\n\t}\n}\n", "text": "HiI have below structure data as child and parent\nParentChild records==========================I’m trying to build query but not able to find suitable output as per my requirement as below", "username": "Janak_Rana" }, { "code": "db.getCollection(\"Parent\").aggregate([\n{\n $unwind:'$data.address'\n},\n{\n $lookup:{\n from:'Child',\n localField:'data.address.id',\n foreignField:'_id',\n as:'lookupAddress'\n }\n},\n{\n $group:{\n _id:{\n _id:'$_id',\n name:'$name',\n age:'$data.age'\n },\n addressInfo:{$push:'$lookupAddress'}\n \n }\n},\n{\n $project:{\n _id:'$_id._id',\n data:{\n age:'$_id.age',\n address:'$addressInfo'\n }\n }\n}\n])\n", "text": "I think you’ll need to unwind the address to be able to lookup each element of the array and then put it back together again, something like this:Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "John_Sewell" }, { "code": "", "text": "address:‘$addressInfo’ → returns array of array and also not returning respective foreign records. 
Its fetching all address which are not belongs to parent key.\n[\n[ ],[ ]\n]", "username": "Janak_Rana" }, { "code": "", "text": "I don’t see that:Mongo playground: a simple sandbox to test and share MongoDB queries onlineThis just does the unwind and lookup, you can see that each row just has the matching lookup and the unmatched data is not included in the output.Can you create a mongo playground with the data you’re using as well the query to show it not working?", "username": "John_Sewell" }, { "code": "", "text": "I have created mongo playground at below url:Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "Janak_Rana" }, { "code": "", "text": "Child should be lowercase “c”:Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "John_Sewell" }, { "code": "\"address\": [\n [\n {\n \"_id\": ObjectId(\"64a67f2fdbe7c36e2e6c15c6\"),\n \"address\": \"permenant address 1\",\n \"name\": {\n \"type\": \"permenant\"\n }\n }\n ],\n [\n {\n \"_id\": ObjectId(\"64a67f2fdbe7c36e2e6c15c7\"),\n \"address\": \"permenant address 2\",\n \"name\": {\n \"type\": \"secondary\"\n }\n }\n ]\n ],\n\"address\": [\n \n {\n \"_id\": ObjectId(\"64a67f2fdbe7c36e2e6c15c6\"),\n \"address\": \"permenant address 1\",\n \"name\": {\n \"type\": \"permenant\"\n }\n },\n \n {\n \"_id\": ObjectId(\"64a67f2fdbe7c36e2e6c15c7\"),\n \"address\": \"permenant address 2\",\n \"name\": {\n \"type\": \"secondary\"\n }\n }\n \n ],\n", "text": "I have updated few things in below urlMongo playground: a simple sandbox to test and share MongoDB queries onlineExisting Query ouptput:Expected output:", "username": "Janak_Rana" }, { "code": "db.parent.aggregate([\n {\n $unwind: \"$data.address\"\n },\n {\n $lookup: {\n from: \"Address\",\n localField: \"data.address.id\",\n foreignField: \"_id\",\n as: \"lookupAddress\"\n }\n },\n {\n $addFields: {\n \"lookupAddress\": {\n \"$arrayElemAt\": [\n \"$lookupAddress\",\n 0\n ]\n }\n }\n },\n {\n $group: {\n _id: {\n _id: \"$_id\",\n name: \"$name\",\n age: \"$data.age\"\n },\n addressInfo: {\n $push: \"$lookupAddress\"\n }\n }\n },\n {\n $project: {\n _id: \"$_id._id\",\n data: {\n age: \"$_id.age\",\n address: \"$addressInfo\"\n }\n }\n }\n])\n", "text": "Ok, this was as the lookup stage returns an array so we have arrays of arrays when we recombine them, you can strip it down to one item with an arrayElemAt operator:Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "John_Sewell" }, { "code": "", "text": "Or $unwind…Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "John_Sewell" }, { "code": "\"address\": [\n {\n \"_id\": ObjectId(\"64a67f2fdbe7c36e2e6c15c6\"),\n \"address\": \"permenant address 1\",\n \"name\": {\n \"type\": \"permenant\"\n }\n },\n {\n \"_id\": ObjectId(\"64a67f2fdbe7c36e2e6c15c6\"),\n \"address\": \"permenant address 1\",\n \"name\": {\n \"type\": \"permenant\"\n }\n }\n ],\n", "text": "It gives me perfect result in Mongo Playground but strange result I’m getting in MongoDB.\nUnwind didn’t helped me but using $arrayElemAt command it gives below output. 
Same object in both element.\nActual in MongoDB as below:", "username": "Janak_Rana" }, { "code": "", "text": "That’s strange, if you remove the stages after the $lookup, what’s the output you get?", "username": "John_Sewell" }, { "code": "\n\"address\" : [\n DBRef(\"Address\", ObjectId(\"64b5325a8795d410fdf06151\")),\n DBRef(\"Address\", ObjectId(\"64b5328d8795d410fdf06152\"))\n ]\n", "text": "I might need to reframe question. I have use dbref in my actual database and that is causing strange result. I have validated both way and If I’m using below DBRef then it gives me first Address object as Element we are selecting 0.", "username": "Janak_Rana" }, { "code": "db.getCollection(\"Parent2\").aggregate([\n{\n $unwind:'$data.address'\n},\n{\n $addFields:{\n 'refid':{$arrayElemAt:[{$objectToArray:'$data.address'}, 1]}\n }\n},\n{\n $lookup:{\n from:'child',\n localField:'refid.v',\n foreignField:'_id',\n as:'lookupAddress'\n }\n},\n{\n $unwind:'$lookupAddress'\n},\n{\n $group:{\n _id:{\n _id:'$_id',\n name:'$name',\n age:'$data.age'\n },\n addressInfo:{$push:'$lookupAddress'}\n \n }\n},\n{\n $project:{\n _id:'$_id._id',\n data:{\n age:'$_id.age',\n address:'$addressInfo'\n }\n }\n},\n])\n\n", "text": "Aha, sorry I missed that element of the issue, I’ve not used DBRef before, this seems to show a workaround using them in a pipeline:Problem Statement: While using aggregation pipeline on collection having DBRef to other co...So adding that in:", "username": "John_Sewell" }, { "code": "", "text": "Thanks, last solution works. ", "username": "Janak_Rana" }, { "code": "", "text": "Excellent, sorry I missed the DBRef at the start…not like you didn’t put it in the title!", "username": "John_Sewell" }, { "code": "", "text": "Thanks, I’m newbie for mongo and not sure what it should call.\nCheers and have a nice day.", "username": "Janak_Rana" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
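A side note on the accepted workaround: pulling the DBRef id out with $objectToArray and element index 1 relies on the field order inside the DBRef. On MongoDB 5.0 or newer the same thing can be written with $getField, which names the "$id" field explicitly ($literal is needed because the field name starts with a dollar sign). This is only a sketch against the collection names used above and has not been run on the poster's data:

db.Parent2.aggregate([
  { $unwind: "$data.address" },
  // extract the DBRef's "$id" field by name instead of by position
  { $set: { refId: { $getField: { field: { $literal: "$id" }, input: "$data.address" } } } },
  { $lookup: { from: "child", localField: "refId", foreignField: "_id", as: "lookupAddress" } },
  { $unwind: "$lookupAddress" },
  { $group: {
      _id: { _id: "$_id", name: "$name", age: "$data.age" },
      addressInfo: { $push: "$lookupAddress" }
  } },
  { $project: { _id: "$_id._id", name: "$_id.name", data: { age: "$_id.age", address: "$addressInfo" } } }
])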
Need dbref data to be appended in parent
2023-07-17T09:09:30.565Z
Need dbref data to be appended in parent
682
null
[ "replication", "java", "atlas-cluster", "spring-data-odm" ]
[ { "code": "<dependency>\n <groupId>org.mongodb</groupId>\n <artifactId>mongo-java-driver</artifactId>\n <version>3.12.11</version> <!-- Replace with the version compatible with your project -->\n</dependency>\n", "text": "Dear MongoDB Community,I am encountering an error while attempting to sync two databases residing on separate servers using Spring Boot. The MongoDB versions for both databases are 5.0.1, and I am using OpenJDK version “1.8.0_362” with the following details:OpenJDK Runtime Environment OpenJDK 64-Bit Server VM (build 25.362-b09, mixed mode)To handle the database synchronization, I have included the following dependency in my project’s pom.xml:However, when attempting to establish a connection, I encounter the following error message:\nError: No server chosen by com.mongodb.client.internal.MongoClientDelegate$1@3e120c13 from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address:27017=ac-rl8ettt-shard-00-02.4qgdyua.mongodb.net, type=UNKNOWN, state=CONNECTING}, ServerDescription{address:27017=ac-rl8ettt-shard-00-01.4qgdyua.mongodb.net, type=UNKNOWN, state=CONNECTING}, ServerDescription{address:27017=ac-rl8ettt-shard-00-00.4qgdyua.mongodb.net, type=UNKNOWN, state=CONNECTING}]}. Waiting for 30000 ms before timing out.\nI understand that the error is related to the MongoDB driver attempting to connect to the replica set. However, it seems unable to choose an appropriate server to connect to.Can you please provide guidance on how to resolve this issue? Any insights into the root cause or potential solutions would be greatly appreciated. If you require any additional information to assist with troubleshooting, please let me know, and I’ll be happy to provide it.Thank you for your time and assistance.", "username": "test_COLAB" }, { "code": "", "text": "Hi there,Here are some possible causes:Members from two different replsets in the same connection stringTwo replsets with the same nameHope this helps!", "username": "Carl_Champain" } ]
MongoDB Connection Error while Syncing Two Databases in Spring Boot
2023-07-17T13:13:46.079Z
MongoDB Connection Error while Syncing Two Databases in Spring Boot
565
null
[ "swift" ]
[ { "code": "Login failed: token contains an invalid number of segmentsimport SwiftUI\nimport AuthenticationServices\nimport RealmSwift\n\nstruct LoginWithApple: View {\n \n @EnvironmentObject var app: RealmSwift.App\n \n @Environment(\\.colorScheme) var colorScheme\n \n var body: some View {\n SignInWithAppleButton(.signIn) { request in\n request.requestedScopes = [.email]\n } onCompletion: { result in\n switch result {\n case .success(let authResults):\n print(\"Authorisation successful\")\n switch authResults.credential {\n case let credential as ASAuthorizationAppleIDCredential:\n if let idToken = credential.identityToken {\n loginWithApple(idToken: idToken.base64EncodedString().utf8EncodedString())\n print(\"TOKEN\", idToken.base64EncodedString().utf8EncodedString())\n }\n default:\n break\n }\n case .failure(let error):\n print(\"Authorisation failed: \\(error.localizedDescription)\")\n }\n }\n .frame(height: 50)\n .padding()\n .signInWithAppleButtonStyle(colorScheme == .dark ? .white : .black)\n \n }\n \n func loginWithApple(idToken: String){\n // Fetch IDToken via the Apple SDK\n let credentials = Credentials.apple(idToken: idToken)\n app.login(credentials: credentials) { (result) in\n switch result {\n case .failure(let error):\n print(\"Login failed: \\(error.localizedDescription)\")\n case .success(let user):\n print(\"Successfully logged in as user \\(user)\")\n // Now logged in, do something with user\n // Remember to dispatch to main if you are doing anything on the UI thread\n }\n }\n }\n}\n\nstruct LoginWithApple_Previews: PreviewProvider {\n static var previews: some View {\n LoginWithApple()\n }\n}\n\nextension String {\n func utf8DecodedString()-> String {\n let data = self.data(using: .utf8)\n let message = String(data: data!, encoding: .nonLossyASCII) ?? \"\"\n return message\n }\n \n func utf8EncodedString()-> String {\n let messageData = self.data(using: .nonLossyASCII)\n let text = String(data: messageData!, encoding: .utf8) ?? \"\"\n return text\n }\n}\nbase64EncodedString", "text": "I am trying to login with apple, but I get a Login failed: token contains an invalid number of segments.Here is the code I have so far:I don’t fully understand the code for getting UTF-8-encoded strings. Also, the only string method I found on the token is base64EncodedString. Is there another method I should be calling?", "username": "BPDev" }, { "code": "base64EncodedString().nonLossyASCIIcase let credential as ASAuthorizationAppleIDCredential:\n if let idToken = credential.identityToken {\n guard let idTokenString = String(data: idToken, encoding: .utf8) else { \n print(\"Unable to serialize token string from data: \\(idToken.debugDescription)\")\n return\n }\n loginWithApple(idToken: idTokenString)\n print(\"TOKEN\", idTokenString)\n }\n", "text": "I had a similar message with some encoding issues. I wonder if your base64EncodedString() call and .nonLossyASCII encoding are introducing issues.My login code is slightly different, but you could try something like:This removes the base64encoding and nonLossyASCII encoding from your call and just uses .utf8.", "username": "Dachary_Carey" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to get idToken to login with Apple?
2023-07-17T02:07:49.036Z
How to get idToken to login with Apple?
486
null
[ "java" ]
[ { "code": "DBConnection.mdb.getCollection(\"Product List\").insertOne(document);Exception in thread \"AWT-EventQueue-0\" java.lang.NullPointerException: Cannot invoke \"com.mongodb.client.MongoDatabase.getCollection(String)\" because \"com.sanrios.productmaster.DBConnection.mdb\" is null", "text": "Hello,\nI was trying to insert some data into the database using java, for this, I’ve created a separate class file for the database connection. When inserting the data, I run this query:DBConnection.mdb.getCollection(\"Product List\").insertOne(document);(DBconnection is my class file)It gives me an error that states:\nException in thread \"AWT-EventQueue-0\" java.lang.NullPointerException: Cannot invoke \"com.mongodb.client.MongoDatabase.getCollection(String)\" because \"com.sanrios.productmaster.DBConnection.mdb\" is nullI checked permissions for the database but its fine, I don’t know why its returning a null, can anyone please help?", "username": "Sneha_Patel1" }, { "code": "mdb", "text": "java.lang.NullPointerException: Cannot invoke “com.mongodb.client.MongoDatabase.getCollection(String)”Hello there!This could happen due to various reasons, such as a failed database connection, an issue with initializing the mdb variable, or a method invocation issue. Can you please share the class file?The following link, java - What is a NullPointerException, and how do I fix it? - Stack Overflow, is a valuable resource that can assist you in comprehending and troubleshooting this specific type of issue.", "username": "Carl_Champain" }, { "code": "package com.sanrios.productmaster;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport java.net.UnknownHostException;\n\npublic class DBConnection {\n \n public static MongoClient mongoClient;\n public static MongoDatabase mdb;\n public static MongoCollection mdc;\n public static String databaseName;\n public static String collectionName;\n \n public static void main(String[]args) throws UnknownHostException {\n String uri = \"mongodb+srv://myusername:[email protected]/?retryWrites=true&w=majority\";\n \n try (MongoClient mongoClient = MongoClients.create(uri)) {\n databaseName = \"ProductMaster\";\n collectionName = \"Product List\";\n mdb = mongoClient.getDatabase(databaseName);\n mdc = mdb.getCollection(collectionName);\n mdc.find().first();\n System.out.println(\"\\n Connected Successfully \\n\");\n }\n }\n}\n", "text": "Hello!\nThank you for responding :))When I run the class file individually, there is no problem with the connection (The message connected successfully shows up). However, the problem occurs when I call a variable from it into another file. I tried following the link earlier as well but could not find a solution.To resolve my error, I tried multiple methods such as:and so on… but each time the issue ends up being either this exception or an exception that states “AWT-event queue 0, illegal state exception, state should be open”.the variable mdb that I have keeps returning a null value when I call it into my file of data insertion.Here is the class file:Thank you for your time, please review the code and help me if possible.", "username": "Sneha_Patel1" }, { "code": "MongoClient.closeillegal state exception, state should be open", "text": "No worries - happy to help!Do you call MongoClient.close anywhere in your application? 
You must ensure that a MongoClient is not used after it’s been closed, that’s what this error “illegal state exception, state should be open” generally means.", "username": "Carl_Champain" }, { "code": "", "text": "No, I haven’t closed the connection anywhere in the code", "username": "Sneha_Patel1" }, { "code": "", "text": "Ok… Can you please share your entire code so that I can run it locally on my machine?", "username": "Carl_Champain" } ]
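The rule Carl describes (create one client at startup, share it everywhere, close it only on shutdown) is language-agnostic. Purely as an illustration, here it is in Node.js rather than the poster's Java project; in the Java code above the client is created inside a try-with-resources block in main(), so it is closed as soon as that block exits, and the static mdb/mdc fields are only ever populated if that main() actually runs:

const { MongoClient } = require("mongodb");

// Create the client once, at startup, and reuse it from every module.
const client = new MongoClient(process.env.MONGODB_URI);   // placeholder URI

async function getProductList() {
  // Reuse the shared client; do NOT close it after each query.
  return client.db("ProductMaster").collection("Product List").find().toArray();
}

// Close only when the whole application shuts down.
process.on("SIGINT", async () => { await client.close(); process.exit(0); });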
mongoClient.getDatabase is returning a null when I call it in another file
2023-07-13T05:30:45.208Z
mongoClient.getDatabase is returning a null when I call it in another file
644
null
[ "node-js", "python", "mongoose-odm", "typescript" ]
[ { "code": "", "text": "I’m having trouble connecting to my MongoDB (6.0.6) instance, which is running on an EC2 machine with TLS, through an SSH Tunnel in TypeScript. Previously, I was able to establish the connection successfully using Python. However, when I try to establish the same connection using ssh2 and mongoose in TypeScript, it fails. Can anyone provide guidance or a solution to resolve this issue?", "username": "Tom_Cohen" }, { "code": "from sshtunnel import SSHTunnelForwarder\nfrom pymongo import MongoClient\n\n# VM IP/DNS - Will depend on your VM\nEC2_URL = '''<ec2 address>'''\n\n# Mongo URI Will depende on your DocDB instances\nDB_URI = '''<DB address>'''\n\n# DB user and password\nDB_USER = '<user>'\nDB_PASS = '<pass'\n\n# Create the tunnel\nserver = SSHTunnelForwarder(\n (EC2_URL, 22),\n ssh_username='<ec2 username>', \n ssh_pkey='AWS_User_Keys.pem', \n remote_bind_address=(DB_URI, 27017)\n)\n# Start the tunnel\nserver.start()\nserver.check_tunnels()\nprint(server.local_bind_port)\n# Connect to Database\nclient = MongoClient(\n host='localhost',\n port=server.local_bind_port,\n username=DB_USER,\n password=DB_PASS,\n tls=True,\n tlsCAFile='AWS_DocDB_Auth.pem',\n retryWrites=False,\n directConnection=True,\n tlsAllowInvalidHostnames=True\n)\nprint(\"list is:\")\nprint(client.list_database_names())\n\n# Close the tunnel once you are done\nserver.stop()\nimport { Client, ConnectConfig } from 'ssh2';\nimport * as mongoose from 'mongoose';\nimport * as fs from 'fs';\n\nconst localPort = 3000; // Local port to listen on\nconst remoteHost = '<ec2 address>'; // Remote MongoDB server\nconst remotePort = 27017; // Remote MongoDB port\n\nconst privateKeyPath = './AWS_User_Keys.pem'; // Path to your private key\n\nconst sshConfig: ConnectConfig = {\n host: '<ec2 address>', // SSH server\n port: 22, // SSH port\n username: '<ec2 username>', // SSH username\n privateKey: fs.readFileSync(privateKeyPath), // Read the private key from file\n};\n\nconst startSshTunnel = async (): Promise<Client> => {\n return new Promise((resolve, reject) => {\n const conn = new Client();\n conn.on('ready', () => {\n conn.forwardOut(\n 'localhost',\n localPort,\n remoteHost,\n remotePort,\n (err, stream) => {\n if (err) {\n reject(err);\n }\n resolve(conn);\n }\n );\n });\n conn.connect(sshConfig);\n });\n};\n\nstartSshTunnel()\n .then((sshClient) => {\n console.log('SSH tunnel started');\n\n // Set up Mongoose connection using the SSH tunnel with username and password\n mongoose.connect(`mongodb://<ec2 address>:27017/<DB name>`, {\n auth: {\n username: '<user>>',\n password: '<pass>',\n },\n tls: true,\n tlsCAFile: './Mongo_server_certificate_key.pem', // Path to the CA certificate file\n });\n\n // Perform Mongoose operations as needed\n const db = mongoose.connection;\n db.collection('test').insertOne({ name: 'John Doe' }); \n db.once('open', () => {\n console.log('Connected to MongoDB via SSH tunnel with TLS');\n db.collection('test').insertOne({ name: 'John Doe' }); \n });\n db.on('error', console.error.bind(console, 'MongoDB connection error:'));\n })\n .catch((error) => {\n console.error('Error starting SSH tunnel:', error);\n });\n....TS_SSH_Test\\node_modules\\mongoose\\lib\\drivers\\node-mongodb-native\\collection.js:185\n const err = new MongooseError(message);\n ^\n\nMongooseError: Operation `test.insertOne()` buffering timed out after 10000ms\n at Timeout.<anonymous> (....TS_SSH_Test\\node_modules\\mongoose\\lib\\drivers\\node-mongodb-native\\collection.js:185:23)\n at listOnTimeout 
(node:internal/timers:559:17)\n at processTimers (node:internal/timers:502:7)\n", "text": "The Python code that works:The TypeScript code that doesnt:The error message:", "username": "Tom_Cohen" } ]
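One difference that stands out against the working Python version: sshtunnel binds a local port and PyMongo connects to localhost, while here mongoose.connect() points straight at the EC2 address, so the forwardOut stream is never actually used and the insert just buffers until it times out. Below is a sketch of one way to mirror the Python setup, wrapping ssh2's forwardOut in a local TCP listener and pointing Mongoose at 127.0.0.1. All hostnames, file paths and the remote bind address are placeholders, and the TLS options simply mirror the PyMongo ones:

const net = require("net");
const fs = require("fs");
const { Client } = require("ssh2");
const mongoose = require("mongoose");

const LOCAL_PORT = 27018;                  // local end of the tunnel
const REMOTE_DB_HOST = "127.0.0.1";        // MongoDB address as seen from the EC2 host (placeholder)

const ssh = new Client();
ssh.on("ready", () => {
  const server = net.createServer((socket) => {
    ssh.forwardOut(socket.remoteAddress, socket.remotePort, REMOTE_DB_HOST, 27017, (err, stream) => {
      if (err) return socket.destroy();
      socket.pipe(stream).pipe(socket);    // local socket <-> SSH channel
    });
  });
  server.listen(LOCAL_PORT, "127.0.0.1", async () => {
    await mongoose.connect(`mongodb://<user>:<pass>@127.0.0.1:${LOCAL_PORT}/<DB name>`, {
      directConnection: true,              // same idea as directConnection=True in PyMongo
      tls: true,
      tlsCAFile: "./Mongo_server_certificate_key.pem",
      tlsAllowInvalidHostnames: true,      // the certificate is for the remote host, not 127.0.0.1
    });
    console.log("Connected through the SSH tunnel");
  });
});
ssh.connect({
  host: "<ec2 address>",
  port: 22,
  username: "<ec2 username>",
  privateKey: fs.readFileSync("./AWS_User_Keys.pem"),
});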
Support Connecting to MongoDB on EC2 with SSH in TypeScript
2023-07-17T11:25:58.930Z
Support Connecting to MongoDB on EC2 with SSH in TypeScript
715
null
[ "node-js", "mongodb-shell" ]
[ { "code": "var MongoClient = require('mongodb').MongoClient;\nvar url = \"mongodb://localhost:27017/mydb\";\n\nMongoClient.connect(url, function(err, db) {\n if (err) throw err;\n console.log(\"Database created!\");\n db.close();\n});\nconst { MongoClient, ServerApiVersion } = require(\"mongodb\");\n// Replace the placeholder with your Atlas connection string\nconst uri = \"mongodb://localhost:27017\";\n// Create a MongoClient with a MongoClientOptions object to set the Stable API version\nconst client = new MongoClient(uri, {\n serverApi: {\n version: ServerApiVersion.v1,\n strict: true,\n deprecationErrors: true,\n }\n }\n);\nasync function run() {\n try {\n // Connect the client to the server (optional starting in v4.7)\n await client.connect();\n // Send a ping to confirm a successful connection\n await client.db(\"admin\").command({ ping: 1 });\n console.log(\"Pinged your deployment. You successfully connected to MongoDB!\");\n } finally {\n // Ensures that the client will close when you finish/error\n await client.close();\n }\n}\nrun().catch(console.dir);\nsudo systemctl start mongodsudo systemctl status mongod", "text": "Hello! I am a beginner. I recently started node.js tutorial on w3schools on node.js. I stumbled on the place where I have to connect to mongodb. So, I created my project folder, changed to it, ran npm init, installed mongodb. Then I had to create a .js file with the following content:But when I run it, the black cursor moves on a new line in the terminal and blinks for several seconds. It just hangs there and I get no error messages nor command prompt. After a few minutes I choose to Ctrl+C to stop it. I tried to use 127.0.0.1 instead of localhost, but that didn’t work either.\nSo, I started to read the doc on mongodb.com, which suggests using this code:I am not aware of what all of this code does but at least it prints that it successfully connected and there is command prompt after running it.\nI start my server by running: sudo systemctl start mongod\nI can check that it is running by: sudo systemctl status mongod\nI use Debian 11\nI have mongosh installed, it says:\n`Connecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1\nUsing MongoDB:\t\t6.0.8\nUsing Mongosh:\t\t1.10.1For mongosh info see: https://docs.mongodb.com/mongodb-shell/\n`\nMy question is firstly what is wrong with the w3schools tutorial code? I could take the latter from mongodb docs but it looks bulky. I wanted to start small and I just can’t get why it doesn’t work. Thanks.", "username": "Mykola_Petrovych" }, { "code": " await client.close();\n\n", "text": "Is this not as your code is not closing the connection, but the longer sample code is?", "username": "John_Sewell" }, { "code": "", "text": "Sounds reasonable. How do I check it. 
The line is present though.", "username": "Mykola_Petrovych" }, { "code": "", "text": "Actually…testing locally it seems the callback is not hit…the w3schools tutorial does not seem to work, which is weird, as their examples are normally good.", "username": "John_Sewell" }, { "code": "", "text": "Yes, I like their tutorials.", "username": "Mykola_Petrovych" }, { "code": "", "text": "I'm sure someone with more experience of this can explain why the callback is not working, as lots of examples I can see online do the same.Looking in the release notes, 4.7 of the driver made it optional to call the connect method:I also saw mentions of deprecating the callbacks in favour of the promise returns, which explains the style in the second example.", "username": "John_Sewell" } ]
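For reference, a minimal promise-based rewrite of the w3schools snippet that matches the current driver API. The callback form was deprecated in the 4.x drivers and removed in 5.0, which would explain the callback never firing if a recent driver was installed:

const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient("mongodb://127.0.0.1:27017");
  try {
    await client.connect();                 // optional since driver 4.7, kept here for clarity
    console.log("Database created!");       // same message the tutorial prints
    await client.db("mydb").collection("users").insertOne({ hello: "world" });   // placeholder write
  } finally {
    await client.close();                   // without this the process keeps running
  }
}

main().catch(console.error);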
Connecting to local mongodb server using nodejs
2023-07-17T11:23:32.941Z
Connecting to local mongodb server using nodejs
1,020
null
[]
[ { "code": "", "text": "HI, I installed MongoDB for the first time. Tried to create users and connect to jdbc. Didn’t connect so I thought I had made some errors when making users. I wanted to start all over. I uninstalled, deleted the data and the config file.\nI reinstalled with apt-get according to the manual. It complained that there was no mongod.conf file, so I created it myself according to the manual. The service still wouldn’t start or it started and stopped immediately. I hope you can see what the problem is.oaj@pop-os:~$ sudo systemctl start mongod\noaj@pop-os:~$ sudo systemctl status mongod\n○ mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\nActive: inactive (dead) since Tue 2022-05-24 18:05:09 AST; 3s ago\nDocs: https://docs.mongodb.org/manual\nProcess: 109548 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=0/SUCCESS)\nMain PID: 109548 (code=exited, status=0/SUCCESS)\nCPU: 359msjournalctl -xe\nMay 24 18:09:47 pop-os sudo[109874]: oaj : TTY=pts/0 ; PWD=/home/oaj ; USER=root ; COMMAND=/usr/bin/systemctl start mongod\nMay 24 18:09:47 pop-os sudo[109874]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)\nMay 24 18:09:47 pop-os sudo[109874]: pam_unix(sudo:session): session closed for user root", "username": "Ole_J" }, { "code": "", "text": "oaj@pop-os:~$ lsb_release -dc\nDescription:\tPop!_OS 22.04 LTS\nCodename:\tjammy", "username": "Ole_J" }, { "code": "", "text": "Share the content of the mongod log file.", "username": "steevej" }, { "code": "", "text": "{“t”:{“$date”:“2022-05-24T18:05:09.344-04:00”},“s”:“I”, “c”:“REPL”, “id”:6015317, “ctx”:“initandlisten”,“msg”:“Setting new configuration state”,“attr”:{“newState”:“ConfigReplicationDisabled”,“oldState”:“ConfigPreStart”}}\n{“t”:{“$date”:“2022-05-24T18:05:09.345-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“/tmp/mongodb-27017.sock”}}\n{“t”:{“$date”:“2022-05-24T18:05:09.345-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“127.0.0.1”}}\n{“t”:{“$date”:“2022-05-24T18:05:09.345-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23016, “ctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23377, “ctx”:“SignalHandler”,“msg”:“Received signal”,“attr”:{“signal”:15,“error”:“Terminated”}}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23378, “ctx”:“SignalHandler”,“msg”:“Signal was sent by kill(2)”,“attr”:{“pid”:1,“uid”:0}}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23381, “ctx”:“SignalHandler”,“msg”:“will terminate after current cmd ends”}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784900, “ctx”:“SignalHandler”,“msg”:“Stepping down the ReplicationCoordinator for shutdown”,“attr”:{“waitTimeMillis”:15000}}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“REPL”, “id”:4794602, “ctx”:“SignalHandler”,“msg”:“Attempting to enter quiesce mode”}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784901, “ctx”:“SignalHandler”,“msg”:“Shutting down the MirrorMaestro”}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784902, “ctx”:“SignalHandler”,“msg”:“Shutting down the WaitForMajorityService”}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“CONTROL”, 
“id”:4784903, “ctx”:“SignalHandler”,“msg”:“Shutting down the LogicalSessionCache”}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:20562, “ctx”:“SignalHandler”,“msg”:“Shutdown: going to close listening sockets”}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23017, “ctx”:“listener”,“msg”:“removing socket file”,“attr”:{“path”:“/tmp/mongodb-27017.sock”}}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784905, “ctx”:“SignalHandler”,“msg”:“Shutting down the global connection pool”}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784906, “ctx”:“SignalHandler”,“msg”:“Shutting down the FlowControlTicketholder”}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“-”, “id”:20520, “ctx”:“SignalHandler”,“msg”:“Stopping further Flow Control ticket acquisitions.”}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784908, “ctx”:“SignalHandler”,“msg”:“Shutting down the PeriodicThreadToAbortExpiredTransactions”}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784909, “ctx”:“SignalHandler”,“msg”:“Shutting down the ReplicationCoordinator”}\n{“t”:{“$date”:“2022-05-24T18:05:09.346-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784910, “ctx”:“SignalHandler”,“msg”:“Shutting down the ShardingInitializationMongoD”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784911, “ctx”:“SignalHandler”,“msg”:“Enqueuing the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“-”, “id”:4784912, “ctx”:“SignalHandler”,“msg”:“Killing all operations for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“-”, “id”:4695300, “ctx”:“SignalHandler”,“msg”:“Interrupted all currently running operations”,“attr”:{“opsKilled”:3}}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“TENANT_M”, “id”:5093807, “ctx”:“SignalHandler”,“msg”:“Shutting down all TenantMigrationAccessBlockers on global shutdown”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784913, “ctx”:“SignalHandler”,“msg”:“Shutting down all open transactions”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784914, “ctx”:“SignalHandler”,“msg”:“Acquiring the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“INDEX”, “id”:4784915, “ctx”:“SignalHandler”,“msg”:“Shutting down the IndexBuildsCoordinator”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784916, “ctx”:“SignalHandler”,“msg”:“Reacquiring the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784917, “ctx”:“SignalHandler”,“msg”:“Attempting to mark clean shutdown”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784918, “ctx”:“SignalHandler”,“msg”:“Shutting down the ReplicaSetMonitor”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784921, “ctx”:“SignalHandler”,“msg”:“Shutting down the MigrationUtilExecutor”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“MigrationUtil-TaskExecutor”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784923, “ctx”:“SignalHandler”,“msg”:“Shutting down the 
ServiceEntryPoint”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784925, “ctx”:“SignalHandler”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20609, “ctx”:“SignalHandler”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784927, “ctx”:“SignalHandler”,“msg”:“Shutting down the HealthLog”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784928, “ctx”:“SignalHandler”,“msg”:“Shutting down the TTL monitor”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“INDEX”, “id”:3684100, “ctx”:“SignalHandler”,“msg”:“Shutting down TTL collection monitor thread”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“INDEX”, “id”:3684101, “ctx”:“SignalHandler”,“msg”:“Finished shutting down TTL collection monitor thread”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784929, “ctx”:“SignalHandler”,“msg”:“Acquiring the global lock for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784930, “ctx”:“SignalHandler”,“msg”:“Shutting down the storage engine”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22320, “ctx”:“SignalHandler”,“msg”:“Shutting down journal flusher thread”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22321, “ctx”:“SignalHandler”,“msg”:“Finished shutting down journal flusher thread”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22322, “ctx”:“SignalHandler”,“msg”:“Shutting down checkpoint thread”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22323, “ctx”:“SignalHandler”,“msg”:“Finished shutting down checkpoint thread”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:20282, “ctx”:“SignalHandler”,“msg”:“Deregistering all the collections”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22261, “ctx”:“SignalHandler”,“msg”:“Timestamp monitor shutting down”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22317, “ctx”:“SignalHandler”,“msg”:“WiredTigerKVEngine shutting down”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22318, “ctx”:“SignalHandler”,“msg”:“Shutting down session sweeper thread”}\n{“t”:{“$date”:“2022-05-24T18:05:09.347-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22319, “ctx”:“SignalHandler”,“msg”:“Finished shutting down session sweeper thread”}\n{“t”:{“$date”:“2022-05-24T18:05:09.349-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795902, “ctx”:“SignalHandler”,“msg”:“Closing WiredTiger”,“attr”:{“closeConfig”:“leak_memory=true,”}}\n{“t”:{“$date”:“2022-05-24T18:05:09.350-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“SignalHandler”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653429909:350200][109550:0x7fd563387640], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 4, snapshot max: 4 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 31”}}\n{“t”:{“$date”:“2022-05-24T18:05:09.402-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795901, “ctx”:“SignalHandler”,“msg”:“WiredTiger closed”,“attr”:{“durationMillis”:53}}\n{“t”:{“$date”:“2022-05-24T18:05:09.402-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22279, “ctx”:“SignalHandler”,“msg”:“shutdown: removing fs lock…”}\n{“t”:{“$date”:“2022-05-24T18:05:09.402-04:00”},“s”:“I”, “c”:“-”, 
“id”:4784931, “ctx”:“SignalHandler”,“msg”:“Dropping the scope cache for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:05:09.402-04:00”},“s”:“I”, “c”:“FTDC”, “id”:4784926, “ctx”:“SignalHandler”,“msg”:“Shutting down full-time data capture”}\n{“t”:{“$date”:“2022-05-24T18:05:09.402-04:00”},“s”:“I”, “c”:“FTDC”, “id”:20626, “ctx”:“SignalHandler”,“msg”:“Shutting down full-time diagnostic data capture”}\n{“t”:{“$date”:“2022-05-24T18:05:09.402-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20565, “ctx”:“SignalHandler”,“msg”:“Now exiting”}\n{“t”:{“$date”:“2022-05-24T18:05:09.402-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“SignalHandler”,“msg”:“Shutting down”,“attr”:{“exitCode”:0}}{“t”:{“$date”:“2022-05-24T18:09:47.091-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20698, “ctx”:“-”,“msg”:“***** SERVER RESTARTED *****”}\n{“t”:{“$date”:“2022-05-24T18:09:47.092-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“-”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{“$date”:“2022-05-24T18:09:47.092-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“-”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{“$date”:“2022-05-24T18:09:47.097-04:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-05-24T18:09:47.097-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{“$date”:“2022-05-24T18:09:47.098-04:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-05-24T18:09:47.098-04:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“ns”:“config.tenantMigrationDonors”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.099-04:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“ns”:“config.tenantMigrationRecipients”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.099-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{“$date”:“2022-05-24T18:09:47.099-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:109880,“port”:27017,“dbPath”:“/var/lib/mongodb”,“architecture”:“64-bit”,“host”:“pop-os”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.099-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“5.0.8”,“gitVersion”:“c87e1c23421bf79614baf500fda6622bd90f674e”,“openSSLVersion”:“OpenSSL 1.1.1l 24 Aug 2021”,“modules”:,“allocator”:“tcmalloc”,“environment”:{“distmod”:“ubuntu2004”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{“$date”:“2022-05-24T18:09:47.099-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Pop”,“version”:“22.04”}}}\n{“t”:{“$date”:“2022-05-24T18:09:47.099-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command 
line”,“attr”:{“options”:{“config”:“/etc/mongod.conf”,“net”:{“bindIp”:“localhost”,“port”:27017},“processManagement”:{“fork”:true},“storage”:{“dbPath”:“/var/lib/mongodb”,“journal”:{“enabled”:true}},“systemLog”:{“destination”:“file”,“logAppend”:true,“path”:“/var/log/mongodb/mongod.log”}}}}\n{“t”:{“$date”:“2022-05-24T18:09:47.101-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22270, “ctx”:“initandlisten”,“msg”:“Storage engine to use detected by data files”,“attr”:{“dbpath”:“/var/lib/mongodb”,“storageEngine”:“wiredTiger”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.101-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22297, “ctx”:“initandlisten”,“msg”:“Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\",“tags”:[\"startupWarnings”]}\n{“t”:{“$date”:“2022-05-24T18:09:47.101-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22315, “ctx”:“initandlisten”,“msg”:“Opening WiredTiger”,“attr”:{“config”:“create,cache_size=15464M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.296-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653430187:296382][109880:0x7f8045f63200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 5 through 6”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.330-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653430187:330560][109880:0x7f8045f63200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 6 through 6”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.379-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653430187:379630][109880:0x7f8045f63200], txn-recover: [WT_VERB_RECOVERY_ALL] Main recovery loop: starting at 5/5376 to 6/256”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.448-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653430187:448391][109880:0x7f8045f63200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 5 through 6”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.502-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653430187:502367][109880:0x7f8045f63200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 6 through 6”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.532-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653430187:532711][109880:0x7f8045f63200], txn-recover: [WT_VERB_RECOVERY_ALL] Set global recovery timestamp: (0, 0)”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.532-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653430187:532756][109880:0x7f8045f63200], txn-recover: [WT_VERB_RECOVERY_ALL] Set global oldest timestamp: (0, 0)”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.539-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653430187:539308][109880:0x7f8045f63200], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 
1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 39”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.562-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795906, “ctx”:“initandlisten”,“msg”:“WiredTiger opened”,“attr”:{“durationMillis”:461}}\n{“t”:{“$date”:“2022-05-24T18:09:47.562-04:00”},“s”:“I”, “c”:“RECOVERY”, “id”:23987, “ctx”:“initandlisten”,“msg”:“WiredTiger recoveryTimestamp”,“attr”:{“recoveryTimestamp”:{“$timestamp”:{“t”:0,“i”:0}}}}\n{“t”:{“$date”:“2022-05-24T18:09:47.562-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4366408, “ctx”:“initandlisten”,“msg”:“No table logging settings modifications are required for existing WiredTiger tables”,“attr”:{“loggingEnabled”:true}}\n{“t”:{“$date”:“2022-05-24T18:09:47.563-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22262, “ctx”:“initandlisten”,“msg”:“Timestamp monitor starting”}\n{“t”:{“$date”:“2022-05-24T18:09:47.580-04:00”},“s”:“W”, “c”:“CONTROL”, “id”:22120, “ctx”:“initandlisten”,“msg”:“Access control is not enabled for the database. Read and write access to data and configuration is unrestricted”,“tags”:[“startupWarnings”]}\n{“t”:{“$date”:“2022-05-24T18:09:47.584-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915702, “ctx”:“initandlisten”,“msg”:“Updated wire specification”,“attr”:{“oldSpec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true},“newSpec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:13,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:13,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{“$date”:“2022-05-24T18:09:47.584-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:5071100, “ctx”:“initandlisten”,“msg”:“Clearing temp directory”}\n{“t”:{“$date”:“2022-05-24T18:09:47.584-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20536, “ctx”:“initandlisten”,“msg”:“Flow Control is enabled on this deployment”}\n{“t”:{“$date”:“2022-05-24T18:09:47.585-04:00”},“s”:“I”, “c”:“FTDC”, “id”:20625, “ctx”:“initandlisten”,“msg”:“Initializing full-time diagnostic data capture”,“attr”:{“dataDirectory”:“/var/lib/mongodb/diagnostic.data”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.588-04:00”},“s”:“I”, “c”:“REPL”, “id”:6015317, “ctx”:“initandlisten”,“msg”:“Setting new configuration state”,“attr”:{“newState”:“ConfigReplicationDisabled”,“oldState”:“ConfigPreStart”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.590-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“/tmp/mongodb-27017.sock”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.590-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“127.0.0.1”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.590-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23016, “ctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.593-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23377, “ctx”:“SignalHandler”,“msg”:“Received signal”,“attr”:{“signal”:15,“error”:“Terminated”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.593-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23378, “ctx”:“SignalHandler”,“msg”:“Signal was sent by kill(2)”,“attr”:{“pid”:1,“uid”:0}}\n{“t”:{“$date”:“2022-05-24T18:09:47.593-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23381, “ctx”:“SignalHandler”,“msg”:“will terminate after current cmd ends”}\n{“t”:{“$date”:“2022-05-24T18:09:47.593-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784900, “ctx”:“SignalHandler”,“msg”:“Stepping down the 
ReplicationCoordinator for shutdown”,“attr”:{“waitTimeMillis”:15000}}\n{“t”:{“$date”:“2022-05-24T18:09:47.593-04:00”},“s”:“I”, “c”:“REPL”, “id”:4794602, “ctx”:“SignalHandler”,“msg”:“Attempting to enter quiesce mode”}\n{“t”:{“$date”:“2022-05-24T18:09:47.593-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784901, “ctx”:“SignalHandler”,“msg”:“Shutting down the MirrorMaestro”}\n{“t”:{“$date”:“2022-05-24T18:09:47.593-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784902, “ctx”:“SignalHandler”,“msg”:“Shutting down the WaitForMajorityService”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784903, “ctx”:“SignalHandler”,“msg”:“Shutting down the LogicalSessionCache”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:20562, “ctx”:“SignalHandler”,“msg”:“Shutdown: going to close listening sockets”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23017, “ctx”:“listener”,“msg”:“removing socket file”,“attr”:{“path”:“/tmp/mongodb-27017.sock”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784905, “ctx”:“SignalHandler”,“msg”:“Shutting down the global connection pool”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784906, “ctx”:“SignalHandler”,“msg”:“Shutting down the FlowControlTicketholder”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“-”, “id”:20520, “ctx”:“SignalHandler”,“msg”:“Stopping further Flow Control ticket acquisitions.”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784908, “ctx”:“SignalHandler”,“msg”:“Shutting down the PeriodicThreadToAbortExpiredTransactions”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784909, “ctx”:“SignalHandler”,“msg”:“Shutting down the ReplicationCoordinator”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784910, “ctx”:“SignalHandler”,“msg”:“Shutting down the ShardingInitializationMongoD”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784911, “ctx”:“SignalHandler”,“msg”:“Enqueuing the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“-”, “id”:4784912, “ctx”:“SignalHandler”,“msg”:“Killing all operations for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“-”, “id”:4695300, “ctx”:“SignalHandler”,“msg”:“Interrupted all currently running operations”,“attr”:{“opsKilled”:3}}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“TENANT_M”, “id”:5093807, “ctx”:“SignalHandler”,“msg”:“Shutting down all TenantMigrationAccessBlockers on global shutdown”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784913, “ctx”:“SignalHandler”,“msg”:“Shutting down all open transactions”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784914, “ctx”:“SignalHandler”,“msg”:“Acquiring the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“INDEX”, “id”:4784915, “ctx”:“SignalHandler”,“msg”:“Shutting down the IndexBuildsCoordinator”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784916, “ctx”:“SignalHandler”,“msg”:“Reacquiring the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784917, “ctx”:“SignalHandler”,“msg”:“Attempting to mark clean shutdown”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“NETWORK”, 
“id”:4784918, “ctx”:“SignalHandler”,“msg”:“Shutting down the ReplicaSetMonitor”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784921, “ctx”:“SignalHandler”,“msg”:“Shutting down the MigrationUtilExecutor”}\n{“t”:{“$date”:“2022-05-24T18:09:47.594-04:00”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“MigrationUtil-TaskExecutor”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784923, “ctx”:“SignalHandler”,“msg”:“Shutting down the ServiceEntryPoint”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784925, “ctx”:“SignalHandler”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20609, “ctx”:“SignalHandler”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784927, “ctx”:“SignalHandler”,“msg”:“Shutting down the HealthLog”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784928, “ctx”:“SignalHandler”,“msg”:“Shutting down the TTL monitor”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“INDEX”, “id”:3684100, “ctx”:“SignalHandler”,“msg”:“Shutting down TTL collection monitor thread”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“INDEX”, “id”:3684101, “ctx”:“SignalHandler”,“msg”:“Finished shutting down TTL collection monitor thread”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784929, “ctx”:“SignalHandler”,“msg”:“Acquiring the global lock for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784930, “ctx”:“SignalHandler”,“msg”:“Shutting down the storage engine”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22320, “ctx”:“SignalHandler”,“msg”:“Shutting down journal flusher thread”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22321, “ctx”:“SignalHandler”,“msg”:“Finished shutting down journal flusher thread”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22322, “ctx”:“SignalHandler”,“msg”:“Shutting down checkpoint thread”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22323, “ctx”:“SignalHandler”,“msg”:“Finished shutting down checkpoint thread”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:20282, “ctx”:“SignalHandler”,“msg”:“Deregistering all the collections”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22261, “ctx”:“SignalHandler”,“msg”:“Timestamp monitor shutting down”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22317, “ctx”:“SignalHandler”,“msg”:“WiredTigerKVEngine shutting down”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22318, “ctx”:“SignalHandler”,“msg”:“Shutting down session sweeper thread”}\n{“t”:{“$date”:“2022-05-24T18:09:47.595-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22319, “ctx”:“SignalHandler”,“msg”:“Finished shutting down session sweeper thread”}\n{“t”:{“$date”:“2022-05-24T18:09:47.597-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795902, “ctx”:“SignalHandler”,“msg”:“Closing WiredTiger”,“attr”:{“closeConfig”:“leak_memory=true,”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.598-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“SignalHandler”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653430187:598563][109880:0x7f8045f5f640], close_ckpt: 
[WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 4, snapshot max: 4 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 39”}}\n{“t”:{“$date”:“2022-05-24T18:09:47.662-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795901, “ctx”:“SignalHandler”,“msg”:“WiredTiger closed”,“attr”:{“durationMillis”:65}}\n{“t”:{“$date”:“2022-05-24T18:09:47.662-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22279, “ctx”:“SignalHandler”,“msg”:“shutdown: removing fs lock…”}\n{“t”:{“$date”:“2022-05-24T18:09:47.662-04:00”},“s”:“I”, “c”:“-”, “id”:4784931, “ctx”:“SignalHandler”,“msg”:“Dropping the scope cache for shutdown”}\n{“t”:{“$date”:“2022-05-24T18:09:47.662-04:00”},“s”:“I”, “c”:“FTDC”, “id”:4784926, “ctx”:“SignalHandler”,“msg”:“Shutting down full-time data capture”}\n{“t”:{“$date”:“2022-05-24T18:09:47.662-04:00”},“s”:“I”, “c”:“FTDC”, “id”:20626, “ctx”:“SignalHandler”,“msg”:“Shutting down full-time diagnostic data capture”}\n{“t”:{“$date”:“2022-05-24T18:09:47.662-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20565, “ctx”:“SignalHandler”,“msg”:“Now exiting”}\n{“t”:{“$date”:“2022-05-24T18:09:47.663-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“SignalHandler”,“msg”:“Shutting down”,“attr”:{“exitCode”:0}}", "username": "Ole_J" }, { "code": "", "text": "I do not really know this distribution but it may be related to SELinux config being too restrictive.", "username": "steevej" }, { "code": "", "text": "It worked perfectly until I removed and reinstalled it.\nI deleted the conf file and after installation, with apt-get, there was no config file. I made a conf file and copied the suggested settings in the manual.", "username": "Ole_J" }, { "code": "", "text": "Hi, do you know what happens here ?{“t”:{\"$date\":“2022-05-24T18:09:47.590-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23016, “ctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}\n{“t”:{\"$date\":“2022-05-24T18:09:47.593-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23377, “ctx”:“SignalHandler”,“msg”:“Received signal”,“attr”:{“signal”:15,“error”:“Terminated”}}\n{“t”:{\"$date\":“2022-05-24T18:09:47.593-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23378, “ctx”:“SignalHandler”,“msg”:“Signal was sent by kill(2)”,“attr”:{“pid”:1,“uid”:0}}\n{“t”:{\"$date\":“2022-05-24T18:09:47.593-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23381, “ctx”:“SignalHandler”,“msg”:“will terminate after current cmd ends”}", "username": "Ole_J" }, { "code": "", "text": "POP_OS is in UBUNTU family.", "username": "Ole_J" }, { "code": "{“t”:{\"$date\":“2022-05-24T18:09:47.098-04:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}sudo mkdir /data/dbsudo chown -R $USER /data/dbmongod --port 27017 --dbpath /data/db --auth{“t”:{\"$date\":“2022-05-24T18:09:47.580-04:00”},“s”:“W”, “c”:“CONTROL”, “id”:22120, “ctx”:“initandlisten”,“msg”:“Access control is not enabled for the database. 
Read and write access to data and configuration is unrestricted”,“tags”:[“startupWarnings”]}", "text": "Hello, a junior and a Mongo Newbie/Enthusiast here,I have noticed in your log files the following line:\n{“t”:{\"$date\":“2022-05-24T18:09:47.098-04:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}You should check permissions in your /data/db folder, try applying these steps :\nsudo mkdir /data/db\nsudo chown -R $USER /data/db\nnow run mongo shell with access control\nmongod --port 27017 --dbpath /data/db --authAs can be seen:\n{“t”:{\"$date\":“2022-05-24T18:09:47.580-04:00”},“s”:“W”, “c”:“CONTROL”, “id”:22120, “ctx”:“initandlisten”,“msg”:“Access control is not enabled for the database. Read and write access to data and configuration is unrestricted”,“tags”:[“startupWarnings”]}I hope my review atleast gives you insight in what to do next.", "username": "Tin_Cvitkovic" }, { "code": "", "text": "Hi\n, it worked and the mongod started in terminal with my user… But it still can’t run as a service.------------------------------------- start as service ---------------------------------------\noaj@pop-os:/var/log/mongodb$ sudo systemctl start mongod\noaj@pop-os:/var/log/mongodb$ sudo systemctl status mongod\n× mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\nActive: failed (Result: exit-code) since Wed 2022-05-25 12:46:32 AST; 6s ago\nDocs: https://docs.mongodb.org/manual\nProcess: 11364 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=100)\nMain PID: 11364 (code=exited, status=100)\nCPU: 42msSo it could run in terminal with me as user but not as service.\nI found this fix:\nsudo chown -R mongodb: /var/lib/mongodbNow I didn’t get : Attempted to create a lock file on a read-only directory: /var/lib/mongodb\n, but it still didn’t run as a service.I’m stuck\n-------------------------------------- log ------------------------\n{“t”:{“$date”:“2022-05-25T12:53:35.962-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20698, “ctx”:“-”,“msg”:“***** SERVER RESTARTED *****”}\n{“t”:{“$date”:“2022-05-25T12:53:35.967-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“main”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{“$date”:“2022-05-25T12:53:35.967-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“main”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{“$date”:“2022-05-25T12:53:35.968-04:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-05-25T12:53:35.968-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{“$date”:“2022-05-25T12:53:35.970-04:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-05-25T12:53:35.970-04:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“ns”:“config.tenantMigrationDonors”}}\n{“t”:{“$date”:“2022-05-25T12:53:35.970-04:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“ns”:“config.tenantMigrationRecipients”}}\n{“t”:{“$date”:“2022-05-25T12:53:35.970-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{“$date”:“2022-05-25T12:53:35.970-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:11844,“port”:27017,“dbPath”:“/var/lib/mongodb”,“architecture”:“64-bit”,“host”:“pop-os”}}\n{“t”:{“$date”:“2022-05-25T12:53:35.971-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“5.0.8”,“gitVersion”:“c87e1c23421bf79614baf500fda6622bd90f674e”,“openSSLVersion”:“OpenSSL 1.1.1l 24 Aug 2021”,“modules”:,“allocator”:“tcmalloc”,“environment”:{“distmod”:“ubuntu2004”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{“$date”:“2022-05-25T12:53:35.971-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Pop”,“version”:“22.04”}}}\n{“t”:{“$date”:“2022-05-25T12:53:35.971-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“config”:“/etc/mongod.conf”,“net”:{“bindIp”:“localhost”,“port”:27017},“processManagement”:{“fork”:true},“storage”:{“dbPath”:“/var/lib/mongodb”,“journal”:{“enabled”:true}},“systemLog”:{“destination”:“file”,“logAppend”:true,“path”:“/var/log/mongodb/mongod.log”}}}}\n{“t”:{“$date”:“2022-05-25T12:53:35.973-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22270, “ctx”:“initandlisten”,“msg”:“Storage engine to use detected by data files”,“attr”:{“dbpath”:“/var/lib/mongodb”,“storageEngine”:“wiredTiger”}}\n{“t”:{“$date”:“2022-05-25T12:53:35.973-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22297, “ctx”:“initandlisten”,“msg”:“Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem\",“tags”:[\"startupWarnings”]}\n{“t”:{“$date”:“2022-05-25T12:53:35.973-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22315, “ctx”:“initandlisten”,“msg”:“Opening WiredTiger”,“attr”:{“config”:“create,cache_size=15464M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.091-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653497616:91152][11844:0x7f0d8ac00200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 5 through 6”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.120-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653497616:120643][11844:0x7f0d8ac00200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 6 through 6”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.163-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653497616:163086][11844:0x7f0d8ac00200], txn-recover: [WT_VERB_RECOVERY_ALL] Main recovery loop: starting at 5/5248 to 6/256”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.219-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653497616:219459][11844:0x7f0d8ac00200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 5 through 6”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.265-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653497616:265710][11844:0x7f0d8ac00200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 6 through 6”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.298-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653497616:298416][11844:0x7f0d8ac00200], txn-recover: [WT_VERB_RECOVERY_ALL] Set global recovery timestamp: (0, 0)”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.298-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653497616:298596][11844:0x7f0d8ac00200], txn-recover: [WT_VERB_RECOVERY_ALL] Set global oldest timestamp: (0, 0)”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.305-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653497616:305064][11844:0x7f0d8ac00200], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 45”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.327-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795906, “ctx”:“initandlisten”,“msg”:“WiredTiger opened”,“attr”:{“durationMillis”:354}}\n{“t”:{“$date”:“2022-05-25T12:53:36.327-04:00”},“s”:“I”, “c”:“RECOVERY”, “id”:23987, “ctx”:“initandlisten”,“msg”:“WiredTiger recoveryTimestamp”,“attr”:{“recoveryTimestamp”:{“$timestamp”:{“t”:0,“i”:0}}}}\n{“t”:{“$date”:“2022-05-25T12:53:36.327-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4366408, “ctx”:“initandlisten”,“msg”:“No table logging settings modifications are required for existing WiredTiger 
tables”,“attr”:{“loggingEnabled”:true}}\n{“t”:{“$date”:“2022-05-25T12:53:36.328-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22262, “ctx”:“initandlisten”,“msg”:“Timestamp monitor starting”}\n{“t”:{“$date”:“2022-05-25T12:53:36.344-04:00”},“s”:“W”, “c”:“CONTROL”, “id”:22120, “ctx”:“initandlisten”,“msg”:“Access control is not enabled for the database. Read and write access to data and configuration is unrestricted”,“tags”:[“startupWarnings”]}\n{“t”:{“$date”:“2022-05-25T12:53:36.348-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915702, “ctx”:“initandlisten”,“msg”:“Updated wire specification”,“attr”:{“oldSpec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true},“newSpec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:13,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:13,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{“$date”:“2022-05-25T12:53:36.348-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:5071100, “ctx”:“initandlisten”,“msg”:“Clearing temp directory”}\n{“t”:{“$date”:“2022-05-25T12:53:36.349-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20536, “ctx”:“initandlisten”,“msg”:“Flow Control is enabled on this deployment”}\n{“t”:{“$date”:“2022-05-25T12:53:36.350-04:00”},“s”:“I”, “c”:“FTDC”, “id”:20625, “ctx”:“initandlisten”,“msg”:“Initializing full-time diagnostic data capture”,“attr”:{“dataDirectory”:“/var/lib/mongodb/diagnostic.data”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.353-04:00”},“s”:“I”, “c”:“REPL”, “id”:6015317, “ctx”:“initandlisten”,“msg”:“Setting new configuration state”,“attr”:{“newState”:“ConfigReplicationDisabled”,“oldState”:“ConfigPreStart”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.354-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“/tmp/mongodb-27017.sock”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.354-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“127.0.0.1”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.354-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23016, “ctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.357-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23377, “ctx”:“SignalHandler”,“msg”:“Received signal”,“attr”:{“signal”:15,“error”:“Terminated”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.357-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23378, “ctx”:“SignalHandler”,“msg”:“Signal was sent by kill(2)”,“attr”:{“pid”:1,“uid”:0}}\n{“t”:{“$date”:“2022-05-25T12:53:36.357-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23381, “ctx”:“SignalHandler”,“msg”:“will terminate after current cmd ends”}\n{“t”:{“$date”:“2022-05-25T12:53:36.357-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784900, “ctx”:“SignalHandler”,“msg”:“Stepping down the ReplicationCoordinator for shutdown”,“attr”:{“waitTimeMillis”:15000}}\n{“t”:{“$date”:“2022-05-25T12:53:36.357-04:00”},“s”:“I”, “c”:“REPL”, “id”:4794602, “ctx”:“SignalHandler”,“msg”:“Attempting to enter quiesce mode”}\n{“t”:{“$date”:“2022-05-25T12:53:36.357-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784901, “ctx”:“SignalHandler”,“msg”:“Shutting down the MirrorMaestro”}\n{“t”:{“$date”:“2022-05-25T12:53:36.357-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784902, “ctx”:“SignalHandler”,“msg”:“Shutting down the WaitForMajorityService”}\n{“t”:{“$date”:“2022-05-25T12:53:36.357-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784903, “ctx”:“SignalHandler”,“msg”:“Shutting down the 
LogicalSessionCache”}\n{“t”:{“$date”:“2022-05-25T12:53:36.357-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:20562, “ctx”:“SignalHandler”,“msg”:“Shutdown: going to close listening sockets”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23017, “ctx”:“listener”,“msg”:“removing socket file”,“attr”:{“path”:“/tmp/mongodb-27017.sock”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784905, “ctx”:“SignalHandler”,“msg”:“Shutting down the global connection pool”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784906, “ctx”:“SignalHandler”,“msg”:“Shutting down the FlowControlTicketholder”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“-”, “id”:20520, “ctx”:“SignalHandler”,“msg”:“Stopping further Flow Control ticket acquisitions.”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784908, “ctx”:“SignalHandler”,“msg”:“Shutting down the PeriodicThreadToAbortExpiredTransactions”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784909, “ctx”:“SignalHandler”,“msg”:“Shutting down the ReplicationCoordinator”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784910, “ctx”:“SignalHandler”,“msg”:“Shutting down the ShardingInitializationMongoD”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784911, “ctx”:“SignalHandler”,“msg”:“Enqueuing the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“-”, “id”:4784912, “ctx”:“SignalHandler”,“msg”:“Killing all operations for shutdown”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“-”, “id”:4695300, “ctx”:“SignalHandler”,“msg”:“Interrupted all currently running operations”,“attr”:{“opsKilled”:3}}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“TENANT_M”, “id”:5093807, “ctx”:“SignalHandler”,“msg”:“Shutting down all TenantMigrationAccessBlockers on global shutdown”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784913, “ctx”:“SignalHandler”,“msg”:“Shutting down all open transactions”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784914, “ctx”:“SignalHandler”,“msg”:“Acquiring the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“INDEX”, “id”:4784915, “ctx”:“SignalHandler”,“msg”:“Shutting down the IndexBuildsCoordinator”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784916, “ctx”:“SignalHandler”,“msg”:“Reacquiring the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784917, “ctx”:“SignalHandler”,“msg”:“Attempting to mark clean shutdown”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784918, “ctx”:“SignalHandler”,“msg”:“Shutting down the ReplicaSetMonitor”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784921, “ctx”:“SignalHandler”,“msg”:“Shutting down the MigrationUtilExecutor”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“MigrationUtil-TaskExecutor”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784923, “ctx”:“SignalHandler”,“msg”:“Shutting down the ServiceEntryPoint”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784925, 
“ctx”:“SignalHandler”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20609, “ctx”:“SignalHandler”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2022-05-25T12:53:36.358-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784927, “ctx”:“SignalHandler”,“msg”:“Shutting down the HealthLog”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784928, “ctx”:“SignalHandler”,“msg”:“Shutting down the TTL monitor”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“INDEX”, “id”:3684100, “ctx”:“SignalHandler”,“msg”:“Shutting down TTL collection monitor thread”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“INDEX”, “id”:3684101, “ctx”:“SignalHandler”,“msg”:“Finished shutting down TTL collection monitor thread”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784929, “ctx”:“SignalHandler”,“msg”:“Acquiring the global lock for shutdown”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784930, “ctx”:“SignalHandler”,“msg”:“Shutting down the storage engine”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22320, “ctx”:“SignalHandler”,“msg”:“Shutting down journal flusher thread”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22321, “ctx”:“SignalHandler”,“msg”:“Finished shutting down journal flusher thread”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22322, “ctx”:“SignalHandler”,“msg”:“Shutting down checkpoint thread”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22323, “ctx”:“SignalHandler”,“msg”:“Finished shutting down checkpoint thread”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:20282, “ctx”:“SignalHandler”,“msg”:“Deregistering all the collections”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22261, “ctx”:“SignalHandler”,“msg”:“Timestamp monitor shutting down”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22317, “ctx”:“SignalHandler”,“msg”:“WiredTigerKVEngine shutting down”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22318, “ctx”:“SignalHandler”,“msg”:“Shutting down session sweeper thread”}\n{“t”:{“$date”:“2022-05-25T12:53:36.359-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22319, “ctx”:“SignalHandler”,“msg”:“Finished shutting down session sweeper thread”}\n{“t”:{“$date”:“2022-05-25T12:53:36.361-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795902, “ctx”:“SignalHandler”,“msg”:“Closing WiredTiger”,“attr”:{“closeConfig”:“leak_memory=true,”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.362-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“SignalHandler”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653497616:362959][11844:0x7f0d8abfc640], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 4, snapshot max: 4 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 45”}}\n{“t”:{“$date”:“2022-05-25T12:53:36.430-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795901, “ctx”:“SignalHandler”,“msg”:“WiredTiger closed”,“attr”:{“durationMillis”:69}}\n{“t”:{“$date”:“2022-05-25T12:53:36.430-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22279, “ctx”:“SignalHandler”,“msg”:“shutdown: removing fs lock…”}\n{“t”:{“$date”:“2022-05-25T12:53:36.430-04:00”},“s”:“I”, “c”:“-”, “id”:4784931, “ctx”:“SignalHandler”,“msg”:“Dropping the scope cache for 
shutdown”}\n{“t”:{“$date”:“2022-05-25T12:53:36.430-04:00”},“s”:“I”, “c”:“FTDC”, “id”:4784926, “ctx”:“SignalHandler”,“msg”:“Shutting down full-time data capture”}\n{“t”:{“$date”:“2022-05-25T12:53:36.430-04:00”},“s”:“I”, “c”:“FTDC”, “id”:20626, “ctx”:“SignalHandler”,“msg”:“Shutting down full-time diagnostic data capture”}\n{“t”:{“$date”:“2022-05-25T12:53:36.430-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20565, “ctx”:“SignalHandler”,“msg”:“Now exiting”}\n{“t”:{“$date”:“2022-05-25T12:53:36.431-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“SignalHandler”,“msg”:“Shutting down”,“attr”:{“exitCode”:0}}", "username": "Ole_J" }, { "code": "", "text": "I uninstalled and removed the data lib and the config file, and then reinstalled with apt-get.\nWhat I don’t understand is, that the installation didn’t make a new config file and the service suddenly didn’t work. I would expect a clean installation, that works ?", "username": "Ole_J" }, { "code": "", "text": "ERROR: child process failed, exited with 100That is not the same error as before. The most likely reason for this is that you did not terminated the mongod you started manually.", "username": "steevej" }, { "code": "", "text": "I did terminate that process. And I have seen that error before.\nI would like to find out why that child process gave an error.", "username": "Ole_J" }, { "code": "ls -ld /var/lib/mongodb\n", "text": "Share the output of", "username": "steevej" }, { "code": "", "text": "oaj@pop-os:/$ ls -ld /var/lib/mongodb\ndrwxr-xr-x 4 mongodb nogroup 4096 May 25 14:56 /var/lib/mongodb", "username": "Ole_J" }, { "code": "", "text": "Okay that looks good.Share your mongodb service file.", "username": "steevej" }, { "code": "", "text": "[Unit]\nDescription=MongoDB Database Server\nDocumentation=https://docs.mongodb.org/manual\nAfter=network-online.target\nWants=network-online.target[Service]\nUser=mongodb\nGroup=mongodb\nEnvironmentFile=-/etc/default/mongod\nExecStart=/usr/bin/mongod --config /etc/mongod.conf\nPIDFile=/var/run/mongodb/mongod.pidLimitFSIZE=infinityLimitCPU=infinityLimitAS=infinityLimitNOFILE=64000LimitNPROC=64000LimitMEMLOCK=infinityTasksMax=infinity\nTasksAccounting=false[Install]\nWantedBy=multi-user.target", "username": "Ole_J" }, { "code": "ls -ld /var/lib/mongodb/\n", "text": "Because ofGroup=mongodbin service file, try to chgrp mongodb /var/lib/mongodb/It is nogroup now.Output of", "username": "steevej" }, { "code": "", "text": "( I previously sent this by email. Forum said too many posts by a newbie. 
)oaj@pop-os:/$ sudo chgrp mongodb /var/lib/mongodb/\n[sudo] password for oaj:\noaj@pop-os:/$ ls -ld /var/lib/mongodb/\ndrwxr-xr-x 4 mongodb mongodb 4096 May 25 14:56 /var/lib/mongodb/oaj@pop-os:/$ sudo systemctl start mongod\noaj@pop-os:/$ sudo systemctl status mongod\n○ mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor prese>\nActive: inactive (dead)\nDocs: https://docs.mongodb.org/manualMay 25 14:56:10 pop-os systemd[1]: Started MongoDB Database Server.\nMay 25 14:56:10 pop-os mongod[16481]: about to fork child process, waiting unti>\nMay 25 14:56:10 pop-os mongod[16483]: forked process: 16483\nMay 25 14:56:10 pop-os mongod[16481]: child process started successfully, paren>\nMay 25 14:56:10 pop-os systemd[1]: mongod.service: Deactivated successfully.\nMay 25 17:22:21 pop-os systemd[1]: Started MongoDB Database Server.\nMay 25 17:22:21 pop-os mongod[23189]: about to fork child process, waiting unti>\nMay 25 17:22:21 pop-os mongod[23191]: forked process: 23191\nMay 25 17:22:21 pop-os mongod[23189]: child process started successfully, paren>\nMay 25 17:22:22 pop-os systemd[1]: mongod.service: Deactivated successfully.------------------------- log ------------------------------{“t”:{“$date”:“2022-05-25T17:22:21.526-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20698, “ctx”:“-”,“msg”:“***** SERVER RESTARTED *****”}\n{“t”:{“$date”:“2022-05-25T17:22:21.527-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“-”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{“$date”:“2022-05-25T17:22:21.529-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“main”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{“$date”:“2022-05-25T17:22:21.531-04:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-05-25T17:22:21.531-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{“$date”:“2022-05-25T17:22:21.533-04:00”},“s”:“W”, “c”:“ASIO”, “id”:22601, “ctx”:“main”,“msg”:“No TransportLayer configured during NetworkInterface startup”}\n{“t”:{“$date”:“2022-05-25T17:22:21.533-04:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“ns”:“config.tenantMigrationDonors”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.533-04:00”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“ns”:“config.tenantMigrationRecipients”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.533-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{“$date”:“2022-05-25T17:22:21.533-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:23191,“port”:27017,“dbPath”:“/var/lib/mongodb”,“architecture”:“64-bit”,“host”:“pop-os”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.533-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“5.0.8”,“gitVersion”:“c87e1c23421bf79614baf500fda6622bd90f674e”,“openSSLVersion”:“OpenSSL 1.1.1l 24 Aug 2021”,“modules”:,“allocator”:“tcmalloc”,“environment”:{“distmod”:“ubuntu2004”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{“$date”:“2022-05-25T17:22:21.533-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Pop”,“version”:“22.04”}}}\n{“t”:{“$date”:“2022-05-25T17:22:21.533-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“config”:“/etc/mongod.conf”,“net”:{“bindIp”:“localhost”,“port”:27017},“processManagement”:{“fork”:true},“storage”:{“dbPath”:“/var/lib/mongodb”,“journal”:{“enabled”:true}},“systemLog”:{“destination”:“file”,“logAppend”:true,“path”:“/var/log/mongodb/mongod.log”}}}}\n{“t”:{“$date”:“2022-05-25T17:22:21.535-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22270, “ctx”:“initandlisten”,“msg”:“Storage engine to use detected by data files”,“attr”:{“dbpath”:“/var/lib/mongodb”,“storageEngine”:“wiredTiger”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.535-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22297, “ctx”:“initandlisten”,“msg”:“Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. 
See http://dochub.mongodb.org/core/prodnotes-filesystem\",“tags”:[\"startupWarnings”]}\n{“t”:{“$date”:“2022-05-25T17:22:21.535-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22315, “ctx”:“initandlisten”,“msg”:“Opening WiredTiger”,“attr”:{“config”:“create,cache_size=15464M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.661-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653513741:661461][23191:0x7f95c71df200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 7 through 8”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.689-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653513741:689189][23191:0x7f95c71df200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 8 through 8”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.733-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653513741:733557][23191:0x7f95c71df200], txn-recover: [WT_VERB_RECOVERY_ALL] Main recovery loop: starting at 7/5376 to 8/256”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.791-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653513741:791973][23191:0x7f95c71df200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 7 through 8”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.842-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653513741:842006][23191:0x7f95c71df200], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 8 through 8”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.873-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653513741:873777][23191:0x7f95c71df200], txn-recover: [WT_VERB_RECOVERY_ALL] Set global recovery timestamp: (0, 0)”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.873-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653513741:873815][23191:0x7f95c71df200], txn-recover: [WT_VERB_RECOVERY_ALL] Set global oldest timestamp: (0, 0)”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.882-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“initandlisten”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653513741:882774][23191:0x7f95c71df200], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 1, snapshot max: 1 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 61”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.914-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795906, “ctx”:“initandlisten”,“msg”:“WiredTiger opened”,“attr”:{“durationMillis”:379}}\n{“t”:{“$date”:“2022-05-25T17:22:21.914-04:00”},“s”:“I”, “c”:“RECOVERY”, “id”:23987, “ctx”:“initandlisten”,“msg”:“WiredTiger recoveryTimestamp”,“attr”:{“recoveryTimestamp”:{“$timestamp”:{“t”:0,“i”:0}}}}\n{“t”:{“$date”:“2022-05-25T17:22:21.915-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4366408, “ctx”:“initandlisten”,“msg”:“No table logging settings modifications are required for existing WiredTiger 
tables”,“attr”:{“loggingEnabled”:true}}\n{“t”:{“$date”:“2022-05-25T17:22:21.916-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22262, “ctx”:“initandlisten”,“msg”:“Timestamp monitor starting”}\n{“t”:{“$date”:“2022-05-25T17:22:21.941-04:00”},“s”:“W”, “c”:“CONTROL”, “id”:22120, “ctx”:“initandlisten”,“msg”:“Access control is not enabled for the database. Read and write access to data and configuration is unrestricted”,“tags”:[“startupWarnings”]}\n{“t”:{“$date”:“2022-05-25T17:22:21.943-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4915702, “ctx”:“initandlisten”,“msg”:“Updated wire specification”,“attr”:{“oldSpec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:0,“maxWireVersion”:13},“isInternalClient”:true},“newSpec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:13},“incomingInternalClient”:{“minWireVersion”:13,“maxWireVersion”:13},“outgoing”:{“minWireVersion”:13,“maxWireVersion”:13},“isInternalClient”:true}}}\n{“t”:{“$date”:“2022-05-25T17:22:21.944-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:5071100, “ctx”:“initandlisten”,“msg”:“Clearing temp directory”}\n{“t”:{“$date”:“2022-05-25T17:22:21.944-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20536, “ctx”:“initandlisten”,“msg”:“Flow Control is enabled on this deployment”}\n{“t”:{“$date”:“2022-05-25T17:22:21.945-04:00”},“s”:“I”, “c”:“FTDC”, “id”:20625, “ctx”:“initandlisten”,“msg”:“Initializing full-time diagnostic data capture”,“attr”:{“dataDirectory”:“/var/lib/mongodb/diagnostic.data”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.948-04:00”},“s”:“I”, “c”:“REPL”, “id”:6015317, “ctx”:“initandlisten”,“msg”:“Setting new configuration state”,“attr”:{“newState”:“ConfigReplicationDisabled”,“oldState”:“ConfigPreStart”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.950-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“/tmp/mongodb-27017.sock”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.950-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23015, “ctx”:“listener”,“msg”:“Listening on”,“attr”:{“address”:“127.0.0.1”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.950-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23016, “ctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.953-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23377, “ctx”:“SignalHandler”,“msg”:“Received signal”,“attr”:{“signal”:15,“error”:“Terminated”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.953-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23378, “ctx”:“SignalHandler”,“msg”:“Signal was sent by kill(2)”,“attr”:{“pid”:1,“uid”:0}}\n{“t”:{“$date”:“2022-05-25T17:22:21.953-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23381, “ctx”:“SignalHandler”,“msg”:“will terminate after current cmd ends”}\n{“t”:{“$date”:“2022-05-25T17:22:21.953-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784900, “ctx”:“SignalHandler”,“msg”:“Stepping down the ReplicationCoordinator for shutdown”,“attr”:{“waitTimeMillis”:15000}}\n{“t”:{“$date”:“2022-05-25T17:22:21.953-04:00”},“s”:“I”, “c”:“REPL”, “id”:4794602, “ctx”:“SignalHandler”,“msg”:“Attempting to enter quiesce mode”}\n{“t”:{“$date”:“2022-05-25T17:22:21.953-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784901, “ctx”:“SignalHandler”,“msg”:“Shutting down the MirrorMaestro”}\n{“t”:{“$date”:“2022-05-25T17:22:21.953-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784902, “ctx”:“SignalHandler”,“msg”:“Shutting down the WaitForMajorityService”}\n{“t”:{“$date”:“2022-05-25T17:22:21.953-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784903, “ctx”:“SignalHandler”,“msg”:“Shutting down the 
LogicalSessionCache”}\n{“t”:{“$date”:“2022-05-25T17:22:21.953-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:20562, “ctx”:“SignalHandler”,“msg”:“Shutdown: going to close listening sockets”}\n{“t”:{“$date”:“2022-05-25T17:22:21.953-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:23017, “ctx”:“listener”,“msg”:“removing socket file”,“attr”:{“path”:“/tmp/mongodb-27017.sock”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784905, “ctx”:“SignalHandler”,“msg”:“Shutting down the global connection pool”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784906, “ctx”:“SignalHandler”,“msg”:“Shutting down the FlowControlTicketholder”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“-”, “id”:20520, “ctx”:“SignalHandler”,“msg”:“Stopping further Flow Control ticket acquisitions.”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784908, “ctx”:“SignalHandler”,“msg”:“Shutting down the PeriodicThreadToAbortExpiredTransactions”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784909, “ctx”:“SignalHandler”,“msg”:“Shutting down the ReplicationCoordinator”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784910, “ctx”:“SignalHandler”,“msg”:“Shutting down the ShardingInitializationMongoD”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784911, “ctx”:“SignalHandler”,“msg”:“Enqueuing the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“-”, “id”:4784912, “ctx”:“SignalHandler”,“msg”:“Killing all operations for shutdown”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“-”, “id”:4695300, “ctx”:“SignalHandler”,“msg”:“Interrupted all currently running operations”,“attr”:{“opsKilled”:3}}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“TENANT_M”, “id”:5093807, “ctx”:“SignalHandler”,“msg”:“Shutting down all TenantMigrationAccessBlockers on global shutdown”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784913, “ctx”:“SignalHandler”,“msg”:“Shutting down all open transactions”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784914, “ctx”:“SignalHandler”,“msg”:“Acquiring the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“INDEX”, “id”:4784915, “ctx”:“SignalHandler”,“msg”:“Shutting down the IndexBuildsCoordinator”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784916, “ctx”:“SignalHandler”,“msg”:“Reacquiring the ReplicationStateTransitionLock for shutdown”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“REPL”, “id”:4784917, “ctx”:“SignalHandler”,“msg”:“Attempting to mark clean shutdown”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“NETWORK”, “id”:4784918, “ctx”:“SignalHandler”,“msg”:“Shutting down the ReplicaSetMonitor”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“SHARDING”, “id”:4784921, “ctx”:“SignalHandler”,“msg”:“Shutting down the MigrationUtilExecutor”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“ASIO”, “id”:22582, “ctx”:“MigrationUtil-TaskExecutor”,“msg”:“Killing all outstanding egress activity.”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“COMMAND”, “id”:4784923, “ctx”:“SignalHandler”,“msg”:“Shutting down the ServiceEntryPoint”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784925, 
“ctx”:“SignalHandler”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20609, “ctx”:“SignalHandler”,“msg”:“Shutting down free monitoring”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784927, “ctx”:“SignalHandler”,“msg”:“Shutting down the HealthLog”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784928, “ctx”:“SignalHandler”,“msg”:“Shutting down the TTL monitor”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“INDEX”, “id”:3684100, “ctx”:“SignalHandler”,“msg”:“Shutting down TTL collection monitor thread”}\n{“t”:{“$date”:“2022-05-25T17:22:21.954-04:00”},“s”:“I”, “c”:“INDEX”, “id”:3684101, “ctx”:“SignalHandler”,“msg”:“Finished shutting down TTL collection monitor thread”}\n{“t”:{“$date”:“2022-05-25T17:22:21.955-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784929, “ctx”:“SignalHandler”,“msg”:“Acquiring the global lock for shutdown”}\n{“t”:{“$date”:“2022-05-25T17:22:21.955-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:4784930, “ctx”:“SignalHandler”,“msg”:“Shutting down the storage engine”}\n{“t”:{“$date”:“2022-05-25T17:22:21.955-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22320, “ctx”:“SignalHandler”,“msg”:“Shutting down journal flusher thread”}\n{“t”:{“$date”:“2022-05-25T17:22:21.955-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22321, “ctx”:“SignalHandler”,“msg”:“Finished shutting down journal flusher thread”}\n{“t”:{“$date”:“2022-05-25T17:22:21.955-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22322, “ctx”:“SignalHandler”,“msg”:“Shutting down checkpoint thread”}\n{“t”:{“$date”:“2022-05-25T17:22:21.955-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22323, “ctx”:“SignalHandler”,“msg”:“Finished shutting down checkpoint thread”}\n{“t”:{“$date”:“2022-05-25T17:22:21.955-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:20282, “ctx”:“SignalHandler”,“msg”:“Deregistering all the collections”}\n{“t”:{“$date”:“2022-05-25T17:22:21.955-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22261, “ctx”:“SignalHandler”,“msg”:“Timestamp monitor shutting down”}\n{“t”:{“$date”:“2022-05-25T17:22:21.955-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22317, “ctx”:“SignalHandler”,“msg”:“WiredTigerKVEngine shutting down”}\n{“t”:{“$date”:“2022-05-25T17:22:21.955-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22318, “ctx”:“SignalHandler”,“msg”:“Shutting down session sweeper thread”}\n{“t”:{“$date”:“2022-05-25T17:22:21.955-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22319, “ctx”:“SignalHandler”,“msg”:“Finished shutting down session sweeper thread”}\n{“t”:{“$date”:“2022-05-25T17:22:21.959-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795902, “ctx”:“SignalHandler”,“msg”:“Closing WiredTiger”,“attr”:{“closeConfig”:“leak_memory=true,”}}\n{“t”:{“$date”:“2022-05-25T17:22:21.960-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22430, “ctx”:“SignalHandler”,“msg”:“WiredTiger message”,“attr”:{“message”:“[1653513741:960964][23191:0x7f95c71db640], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 4, snapshot max: 4 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 61”}}\n{“t”:{“$date”:“2022-05-25T17:22:22.054-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:4795901, “ctx”:“SignalHandler”,“msg”:“WiredTiger closed”,“attr”:{“durationMillis”:95}}\n{“t”:{“$date”:“2022-05-25T17:22:22.054-04:00”},“s”:“I”, “c”:“STORAGE”, “id”:22279, “ctx”:“SignalHandler”,“msg”:“shutdown: removing fs lock…”}\n{“t”:{“$date”:“2022-05-25T17:22:22.054-04:00”},“s”:“I”, “c”:“-”, “id”:4784931, “ctx”:“SignalHandler”,“msg”:“Dropping the scope cache for 
shutdown”}\n{“t”:{“$date”:“2022-05-25T17:22:22.054-04:00”},“s”:“I”, “c”:“FTDC”, “id”:4784926, “ctx”:“SignalHandler”,“msg”:“Shutting down full-time data capture”}\n{“t”:{“$date”:“2022-05-25T17:22:22.054-04:00”},“s”:“I”, “c”:“FTDC”, “id”:20626, “ctx”:“SignalHandler”,“msg”:“Shutting down full-time diagnostic data capture”}\n{“t”:{“$date”:“2022-05-25T17:22:22.057-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20565, “ctx”:“SignalHandler”,“msg”:“Now exiting”}\n{“t”:{“$date”:“2022-05-25T17:22:22.057-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“SignalHandler”,“msg”:“Shutting down”,“attr”:{“exitCode”:0}}", "username": "Ole_J" }, { "code": "ss -tlnp\nps -aef | grep [m]ongod\nls -l /var/run/mongodb/mongod.pid\nls -l /tmp/mongodb-*\ndate\n sudo systemctl start mongod\n", "text": "Please show the output ofAnd have you looked at:I do not really know this distribution but it may be related to SELinux config being too restrictive.I usually see a click count but I don’t see any right now.", "username": "steevej" } ]
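Editor's note: in the logs above mongod starts, reaches "Waiting for connections", and is then sent SIGTERM by pid 1 (systemd), while the posted /etc/mongod.conf sets processManagement.fork: true and the posted unit file has no Type= directive. A plausible reading, not confirmed in the thread, is that systemd treats the exit of the forking parent as the service finishing and then kills the forked child. A minimal shell sketch of how to check for and resolve that mismatch; paths follow the stock Ubuntu/Pop!_OS package layout and the Type=forking alternative is illustrative, not taken from the thread:

# Check whether the config asks mongod to daemonize itself
grep -B1 -A2 'fork' /etc/mongod.conf

# Option A (simplest): remove or comment out "fork: true" in /etc/mongod.conf
# so that systemd supervises mongod directly, then reload and restart
sudo systemctl daemon-reload
sudo systemctl restart mongod

# Option B: keep fork: true, but add to the [Service] section of the unit
#   Type=forking
#   PIDFile=/var/run/mongodb/mongod.pid
# and point processManagement.pidFilePath in mongod.conf at the same file,
# then daemon-reload and restart as above.

# Verify the service stays active and the port is listening
systemctl status mongod --no-pager
ss -tlnp | grep 27017

The ownership fix already applied in the thread (mongodb:mongodb on /var/lib/mongodb) is still needed in either case.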
Mongod.service won't start after reinstall
2022-05-24T22:15:00.627Z
Mongod.service won't start after reinstall
7,395
null
[]
[ { "code": "const timeoutError = new error_1.MongoServerSelectionError(`Server selection timed out after ${serverSelectionTimeoutMS} ms`, this.description);\n ^\n\nMongoServerSelectionError: connect ECONNREFUSED ::1:27017\n at Timeout._onTimeout (C:\\Users\\TUF\\OneDrive\\Desktop\\Albin\\tech-trinkets\\server\\node_modules\\mongodb\\lib\\sdam\\topology.js:278:38)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n address: 'localhost:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 231993300,\n lastWriteDate: 0,\n error: MongoNetworkError: connect ECONNREFUSED ::1:27017\n at connectionFailureError (C:\\Users\\TUF\\OneDrive\\Desktop\\Albin\\tech-trinkets\\server\\node_modules\\mongodb\\lib\\cmap\\connect.js:367:20)\n at Socket.<anonymous> (C:\\Users\\TUF\\OneDrive\\Desktop\\Albin\\tech-trinkets\\server\\node_modules\\mongodb\\lib\\cmap\\connect.js:290:22)\n at Object.onceWrapper (node:events:628:26)\n at Socket.emit (node:events:513:28)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {\n cause: Error: connect ECONNREFUSED ::1:27017\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1487:16) {\n errno: -4078,\n code: 'ECONNREFUSED',\n syscall: 'connect',\n address: '::1',\n port: 27017\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\nimport { MongoClient } from \"mongodb\"\n\n\nconst state={\n db:null\n}\nexport const connectToDb = async (cb) => {\n const url='mongodb://localhost:27017'\n const dbName='techtrinkets'\n\n MongoClient.connect(url,(err,client)=>{\n if(err) return cb(err)\n state.db=client.db(dbName)\n console.log(state.db);\n })\n cb();\n}\n\nexport const getDB = () => {\n return state.db\n}\nimport express from \"express\";\nimport dotenv from \"dotenv\"\nimport { connectToDb } from './db/connection.js'\n\nconst app = express()\ndotenv.config()\n\n//Database Connection\nconnectToDb((err)=>{\n if(err) return console.log(\"Database Error\");\n console.log(\"Database Connected\");\n})\n\napp.listen(process.env.PORT, () => {\n console.log(\"Server Started port:\"+process.env.PORT);\n})\n", "text": "ERROR is MY CODE :-\nconnection.jsinde.js", "username": "Albin_N_J" }, { "code": "", "text": "Try with 127.0.0.1 in your uri instead of localhost", "username": "Ramachandra_Tummala" } ]
I am getting an error when I connect to MongoDB using the Node.js driver
2023-07-17T08:21:46.254Z
I am getting an error when I connect to MongoDB using the Node.js driver
785
null
[]
[ { "code": "\"_accountId\": {\"$in\": \"%%user.custom_data.accountIds\" }\naccounts: [\n{accountId:\"aaa\", role:\"admin\"},\n{accountId:\"bbb\", role:\"user\", otherPermission:true}\n]\n", "text": "In setting up the rules, I am trying to configure access to a collection when a single user belongs to multiple accounts. Examples often point to adding an array to user_data and check for the existence of a value…Ideally, my custom_data object would be setup up as such…I can’t figure our how to write the document permission that validates if the accountId in the document exists in the array of account values on the user custom_data object.Am I going at this all wrong and is there a different way to handle this scenario? I am concerned that a user with an email address is added to Business A and then later starts working for Business B who also uses my app.Thanks", "username": "Robert_Charest" }, { "code": "", "text": "Hi @Robert_Charest,a user with an email address is added to Business A and then later starts working for Business BCan you please clarify whether you expect the same user to work for multiple accounts at the same time? Your example looks a bit ambiguous, in that respect.Ideally, my custom_data object would be setup up as suchThat type of structure wouldn’t really fit for a match in a role: where and how are you looking to apply the rule? As there are special rule restrictions for Device Sync, you may have to limit the expressions if you plan to use Device Sync.", "username": "Paolo_Manna" }, { "code": "", "text": "@Paolo_Manna,I am looking at building a time tracking system. Ideally I was looking for logins to be users personal email address. Company A and Company B both use my system (unrelated to each other). If an employee who works for Company A also gets a part time job with Company B, that one user would be attached to 2 different companies. I agree that Roles may not support this scenario.What would be the correct way to setup logins to ensure there are no conflicts between the 2 companies?", "username": "Robert_Charest" }, { "code": "accountId", "text": "Hi @Robert_Charest,What would be the correct way to setup logins to ensure there are no conflicts between the 2 companies?There isn’t a single solution to such setup: for example, there’s a lot of difference in setting up things if you use Device Sync (and which type of it - Partition-based or Flexible), and where the accountId is stored in your data model (to filter records properly, you should have it almost everywhere, except in EmbeddedObjects).Overall, I’d suggest to experiment with multiple paths, trying to adapt to the Permission system, and, for example, the different subscriptions of Flexible Sync, to clarify what would be the easiest integration with your app: with this latest possibility, again as an example, the user would be identified as pertaining to multiple accounts at login, and it would have the possibility to filter the subscriptions related only to the account he’s willing to work on, as mixing the data from two accounts likely isn’t what you want.", "username": "Paolo_Manna" } ]
Single User, Multiple Accounts
2023-07-03T19:24:29.297Z
Single User, Multiple Accounts
708
null
[ "atlas-cluster", "php", "field-encryption" ]
[ { "code": "", "text": "Hello AllThis is frustrating. I keep getting No suitable servers foundI had used the official driver (composer require mongodb/mongodb )A shared deployment on cloud.mongodb.com . I had also allowed access from ALL IPs and it didn’t work neither.phpinfo shows Mongo installedMongoDB support\tenabled\nMongoDB extension version\t1.14.0\nMongoDB extension stability\tstable\nlibbson bundled version\t1.22.0\nlibmongoc bundled version\t1.22.0\nlibmongoc SSL\tenabled\nlibmongoc SSL library\tOpenSSL\nlibmongoc crypto\tenabled\nlibmongoc crypto library\tlibcrypto\nlibmongoc crypto system profile\tdisabled\nlibmongoc SASL\tdisabled\nlibmongoc ICU\tdisabled\nlibmongoc compression\tenabled\nlibmongoc compression snappy\tdisabled\nlibmongoc compression zlib\tenabled\nlibmongoc compression zstd\tdisabled\nlibmongocrypt bundled version\t1.5.0\nlibmongocrypt crypto\tenabled\nlibmongocrypt crypto library\tlibcryptoI saw /community/forums/t/php-error-no-suitable-servers-found/2863/8 and as am on CentOs, gave the following command as root:setsebool httpd_can_network_connect=1\nsetsebool: SELinux is disabled.\nRestarted httpd after the above, but no luckSet debug=1 in the php ini file. Following is the debug log. I can’t figure out whats wrong. (username, password changed in the log. )[2022-09-02T08:33:18.417101+00:00] mongoc: TRACE > ENTRY: _mongoc_linux_distro_scanner_get_distro():389\n[2022-09-02T08:33:18.417182+00:00] mongoc: TRACE > ENTRY: _mongoc_linux_distro_scanner_read_key_value_file():154\n[2022-09-02T08:33:18.417236+00:00] mongoc: TRACE > ENTRY: _process_line():93\n[2022-09-02T08:33:18.417254+00:00] mongoc: TRACE > TRACE: _process_line():121 Found name: CentOS Linux\n[2022-09-02T08:33:18.417265+00:00] mongoc: TRACE > EXIT: _process_line():128\n[2022-09-02T08:33:18.417272+00:00] mongoc: TRACE > ENTRY: _process_line():93\n[2022-09-02T08:33:18.417278+00:00] mongoc: TRACE > EXIT: _process_line():128\n[2022-09-02T08:33:18.417285+00:00] mongoc: TRACE > ENTRY: _process_line():93\n[2022-09-02T08:33:18.417291+00:00] mongoc: TRACE > EXIT: _process_line():128\n[2022-09-02T08:33:18.417298+00:00] mongoc: TRACE > ENTRY: _process_line():93\n[2022-09-02T08:33:18.417305+00:00] mongoc: TRACE > EXIT: _process_line():128\n[2022-09-02T08:33:18.417311+00:00] mongoc: TRACE > ENTRY: _process_line():93\n[2022-09-02T08:33:18.417318+00:00] mongoc: TRACE > TRACE: _process_line():125 Found version: 7\n[2022-09-02T08:33:18.417324+00:00] mongoc: TRACE > EXIT: _process_line():128\n[2022-09-02T08:33:18.417343+00:00] mongoc: TRACE > EXIT: _mongoc_linux_distro_scanner_read_key_value_file():205\n[2022-09-02T08:33:18.417352+00:00] mongoc: TRACE > EXIT: _mongoc_linux_distro_scanner_get_distro():398\n[2022-09-02T08:33:18.438255+00:00] PHONGO: DEBUG > Connection string: ‘mongodb+srv://userNameChanged:[email protected]/?retryWrites=true&w=majority’\n[2022-09-02T08:33:18.438305+00:00] PHONGO: DEBUG > Creating Manager, phongo-1.14.0[stable] - mongoc-1.22.0(bundled), libbson-1.22.0(bundled), php-7.4.30\n[2022-09-02T08:33:18.438319+00:00] PHONGO: DEBUG > Setting driver handshake data: { name: 'ext-mongodb:PHP / PHPLIB ', version: '1.14.0 / 1.13.0 ', platform: 'PHP 7.4.30 ’ }\n[2022-09-02T08:33:18.438337+00:00] client: TRACE > ENTRY: mongoc_client_new_from_uri_with_error():1088\n[2022-09-02T08:33:18.438371+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_init():78\n[2022-09-02T08:33:18.438397+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_init():97\n[2022-09-02T08:33:18.438473+00:00] client: TRACE > 
ENTRY: _mongoc_get_rr_search():453\n[2022-09-02T08:33:18.578278+00:00] client: TRACE > EXIT: _mongoc_get_rr_search():568\n[2022-09-02T08:33:18.578319+00:00] client: TRACE > ENTRY: _mongoc_get_rr_search():453\n[2022-09-02T08:33:18.825900+00:00] client: TRACE > EXIT: _mongoc_get_rr_search():568\n[2022-09-02T08:33:18.826016+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_init():124\n[2022-09-02T08:33:18.826040+00:00] mongoc: TRACE > EXIT: mongoc_server_description_init():152\n[2022-09-02T08:33:18.826073+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_init():124\n[2022-09-02T08:33:18.826088+00:00] mongoc: TRACE > EXIT: mongoc_server_description_init():152\n[2022-09-02T08:33:18.826110+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_init():124\n[2022-09-02T08:33:18.826127+00:00] mongoc: TRACE > EXIT: mongoc_server_description_init():152\n[2022-09-02T08:33:18.826182+00:00] cluster: TRACE > ENTRY: mongoc_cluster_init():2667\n[2022-09-02T08:33:18.826205+00:00] cluster: TRACE > EXIT: mongoc_cluster_init():2694\n[2022-09-02T08:33:18.826233+00:00] client: TRACE > EXIT: mongoc_client_new_from_uri_with_error():1118\n[2022-09-02T08:33:18.826265+00:00] PHONGO: DEBUG > Created client with hash: a:4:{s:3:“pid”;i:31771;s:3:“uri”;s:99:“mongodb+srv://userNameChanged:[email protected]/?retryWrites=true&w=majority”;s:7:“options”;a:0:{}s:13:“driverOptions”;a:1:{s:6:“driver”;a:2:{s:4:“name”;s:6:“PHPLIB”;s:7:“version”;s:6:“1.13.0”;}}}\n[2022-09-02T08:33:18.826279+00:00] PHONGO: DEBUG > Stored persistent client with hash: a:4:{s:3:“pid”;i:31771;s:3:“uri”;s:99:“mongodb+srv://userNameChanged:[email protected]/?retryWrites=true&w=majority”;s:7:“options”;a:0:{}s:13:“driverOptions”;a:1:{s:6:“driver”;a:2:{s:4:“name”;s:6:“PHPLIB”;s:7:“version”;s:6:“1.13.0”;}}}\n[2022-09-02T08:33:18.829095+00:00] mongoc: TRACE > ENTRY: _mongoc_topology_description_copy_to():126\n[2022-09-02T08:33:18.829136+00:00] mongoc: TRACE > EXIT: _mongoc_topology_description_copy_to():165\n[2022-09-02T08:33:18.829152+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_init():78\n[2022-09-02T08:33:18.829161+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_init():97\n[2022-09-02T08:33:18.829171+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_cleanup():220\n[2022-09-02T08:33:18.829179+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_cleanup():234\n[2022-09-02T08:33:18.829198+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_destroy():256\n[2022-09-02T08:33:18.829207+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_cleanup():220\n[2022-09-02T08:33:18.829214+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:18.829221+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:18.829228+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:18.829235+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:18.829242+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:18.829249+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:18.829255+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_cleanup():234\n[2022-09-02T08:33:18.829262+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_destroy():265\n[2022-09-02T08:33:18.829272+00:00] mongoc: TRACE > ENTRY: _mongoc_topology_description_copy_to():126\n[2022-09-02T08:33:18.829282+00:00] mongoc: TRACE > EXIT: 
_mongoc_topology_description_copy_to():165\n[2022-09-02T08:33:18.829293+00:00] topology_scanner: TRACE > ENTRY: mongoc_topology_scanner_node_setup_tcp():926\n[2022-09-02T08:33:19.142142+00:00] topology_scanner: TRACE > EXIT: mongoc_topology_scanner_node_setup_tcp():984\n[2022-09-02T08:33:19.142196+00:00] topology_scanner: TRACE > ENTRY: mongoc_topology_scanner_node_setup_tcp():926\n[2022-09-02T08:33:19.280348+00:00] topology_scanner: TRACE > EXIT: mongoc_topology_scanner_node_setup_tcp():984\n[2022-09-02T08:33:19.280389+00:00] topology_scanner: TRACE > ENTRY: mongoc_topology_scanner_node_setup_tcp():926\n[2022-09-02T08:33:19.575304+00:00] topology_scanner: TRACE > EXIT: mongoc_topology_scanner_node_setup_tcp():984\n[2022-09-02T08:33:19.575355+00:00] socket: TRACE > ENTRY: mongoc_socket_new():1000\n[2022-09-02T08:33:19.575379+00:00] socket: TRACE > ENTRY: _mongoc_socket_setnodelay():570\n[2022-09-02T08:33:19.575390+00:00] socket: TRACE > EXIT: _mongoc_socket_setnodelay():582\n[2022-09-02T08:33:19.575398+00:00] socket: TRACE > ENTRY: _mongoc_socket_setkeepalive():535\n[2022-09-02T08:33:19.575431+00:00] socket: TRACE > TRACE: _mongoc_socket_setkeepalive():539 Setting SO_KEEPALIVE\n[2022-09-02T08:33:19.575446+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():482 ‘TCP_KEEPIDLE’ is 7200, target value is 120\n[2022-09-02T08:33:19.575456+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():493 ‘TCP_KEEPIDLE’ value changed to 120\n[2022-09-02T08:33:19.575465+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():482 ‘TCP_KEEPINTVL’ is 75, target value is 10\n[2022-09-02T08:33:19.575474+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():493 ‘TCP_KEEPINTVL’ value changed to 10\n[2022-09-02T08:33:19.575483+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():482 ‘TCP_KEEPCNT’ is 9, target value is 9\n[2022-09-02T08:33:19.575491+00:00] socket: TRACE > EXIT: _mongoc_socket_setkeepalive():551\n[2022-09-02T08:33:19.575498+00:00] socket: TRACE > EXIT: mongoc_socket_new():1038\n[2022-09-02T08:33:19.575516+00:00] socket: TRACE > ENTRY: mongoc_socket_connect():860\n[2022-09-02T08:33:19.578517+00:00] socket: TRACE > TRACE: _mongoc_socket_capture_errno():68 setting errno: 115 Operation now in progress\n[2022-09-02T08:33:19.578537+00:00] socket: TRACE > TRACE: _mongoc_socket_errno_is_again():631 errno is: 115\n[2022-09-02T08:33:19.578552+00:00] socket: TRACE > ENTRY: _mongoc_socket_wait():161\n[2022-09-02T08:33:19.578569+00:00] socket: TRACE > EXIT: _mongoc_socket_wait():254\n[2022-09-02T08:33:19.578583+00:00] socket: TRACE > EXIT: mongoc_socket_connect():890\n[2022-09-02T08:33:19.578610+00:00] stream-tls-openssl: TRACE > ENTRY: mongoc_stream_tls_openssl_new():771\n[2022-09-02T08:33:19.585038+00:00] stream-tls-openssl: TRACE > EXIT: mongoc_stream_tls_openssl_new():895\n[2022-09-02T08:33:19.585067+00:00] socket: TRACE > ENTRY: mongoc_socket_new():1000\n[2022-09-02T08:33:19.585090+00:00] socket: TRACE > ENTRY: _mongoc_socket_setnodelay():570\n[2022-09-02T08:33:19.585101+00:00] socket: TRACE > EXIT: _mongoc_socket_setnodelay():582\n[2022-09-02T08:33:19.585108+00:00] socket: TRACE > ENTRY: _mongoc_socket_setkeepalive():535\n[2022-09-02T08:33:19.585117+00:00] socket: TRACE > TRACE: _mongoc_socket_setkeepalive():539 Setting SO_KEEPALIVE\n[2022-09-02T08:33:19.585128+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():482 ‘TCP_KEEPIDLE’ is 7200, target value is 120\n[2022-09-02T08:33:19.585137+00:00] socket: TRACE > TRACE: 
_mongoc_socket_set_sockopt_if_less():493 ‘TCP_KEEPIDLE’ value changed to 120\n[2022-09-02T08:33:19.585146+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():482 ‘TCP_KEEPINTVL’ is 75, target value is 10\n[2022-09-02T08:33:19.585155+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():493 ‘TCP_KEEPINTVL’ value changed to 10\n[2022-09-02T08:33:19.585163+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():482 ‘TCP_KEEPCNT’ is 9, target value is 9\n[2022-09-02T08:33:19.585170+00:00] socket: TRACE > EXIT: _mongoc_socket_setkeepalive():551\n[2022-09-02T08:33:19.585177+00:00] socket: TRACE > EXIT: mongoc_socket_new():1038\n[2022-09-02T08:33:19.585184+00:00] socket: TRACE > ENTRY: mongoc_socket_connect():860\n[2022-09-02T08:33:19.589606+00:00] socket: TRACE > TRACE: _mongoc_socket_capture_errno():68 setting errno: 115 Operation now in progress\n[2022-09-02T08:33:19.589625+00:00] socket: TRACE > TRACE: _mongoc_socket_errno_is_again():631 errno is: 115\n[2022-09-02T08:33:19.589633+00:00] socket: TRACE > ENTRY: _mongoc_socket_wait():161\n[2022-09-02T08:33:19.589643+00:00] socket: TRACE > EXIT: _mongoc_socket_wait():254\n[2022-09-02T08:33:19.589650+00:00] socket: TRACE > EXIT: mongoc_socket_connect():890\n[2022-09-02T08:33:19.589658+00:00] stream-tls-openssl: TRACE > ENTRY: mongoc_stream_tls_openssl_new():771\n[2022-09-02T08:33:19.595296+00:00] stream-tls-openssl: TRACE > EXIT: mongoc_stream_tls_openssl_new():895\n[2022-09-02T08:33:19.595319+00:00] socket: TRACE > ENTRY: mongoc_socket_new():1000\n[2022-09-02T08:33:19.595341+00:00] socket: TRACE > ENTRY: _mongoc_socket_setnodelay():570\n[2022-09-02T08:33:19.595352+00:00] socket: TRACE > EXIT: _mongoc_socket_setnodelay():582\n[2022-09-02T08:33:19.595359+00:00] socket: TRACE > ENTRY: _mongoc_socket_setkeepalive():535\n[2022-09-02T08:33:19.595368+00:00] socket: TRACE > TRACE: _mongoc_socket_setkeepalive():539 Setting SO_KEEPALIVE\n[2022-09-02T08:33:19.595379+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():482 ‘TCP_KEEPIDLE’ is 7200, target value is 120\n[2022-09-02T08:33:19.595388+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():493 ‘TCP_KEEPIDLE’ value changed to 120\n[2022-09-02T08:33:19.595397+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():482 ‘TCP_KEEPINTVL’ is 75, target value is 10\n[2022-09-02T08:33:19.595429+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():493 ‘TCP_KEEPINTVL’ value changed to 10\n[2022-09-02T08:33:19.595442+00:00] socket: TRACE > TRACE: _mongoc_socket_set_sockopt_if_less():482 ‘TCP_KEEPCNT’ is 9, target value is 9\n[2022-09-02T08:33:19.595449+00:00] socket: TRACE > EXIT: _mongoc_socket_setkeepalive():551\n[2022-09-02T08:33:19.595455+00:00] socket: TRACE > EXIT: mongoc_socket_new():1038\n[2022-09-02T08:33:19.595462+00:00] socket: TRACE > ENTRY: mongoc_socket_connect():860\n[2022-09-02T08:33:19.598354+00:00] socket: TRACE > TRACE: _mongoc_socket_capture_errno():68 setting errno: 115 Operation now in progress\n[2022-09-02T08:33:19.598370+00:00] socket: TRACE > TRACE: _mongoc_socket_errno_is_again():631 errno is: 115\n[2022-09-02T08:33:19.598377+00:00] socket: TRACE > ENTRY: _mongoc_socket_wait():161\n[2022-09-02T08:33:19.598387+00:00] socket: TRACE > EXIT: _mongoc_socket_wait():254\n[2022-09-02T08:33:19.598394+00:00] socket: TRACE > EXIT: mongoc_socket_connect():890\n[2022-09-02T08:33:19.598402+00:00] stream-tls-openssl: TRACE > ENTRY: mongoc_stream_tls_openssl_new():771\n[2022-09-02T08:33:19.605141+00:00] 
stream-tls-openssl: TRACE > EXIT: mongoc_stream_tls_openssl_new():895\n[2022-09-02T08:33:19.605191+00:00] stream: TRACE > ENTRY: _mongoc_stream_socket_poll():227\n[2022-09-02T08:33:19.605202+00:00] socket: TRACE > ENTRY: mongoc_socket_poll():297\n[2022-09-02T08:33:20.583428+00:00] stream: TRACE > EXIT: _mongoc_stream_socket_poll():253\n[2022-09-02T08:33:20.583490+00:00] stream: TRACE > ENTRY: mongoc_stream_failed():80\n[2022-09-02T08:33:20.583565+00:00] stream: TRACE > ENTRY: mongoc_stream_destroy():104\n[2022-09-02T08:33:20.583588+00:00] stream: TRACE > ENTRY: _mongoc_stream_socket_destroy():72\n[2022-09-02T08:33:20.583601+00:00] socket: TRACE > ENTRY: mongoc_socket_close():790\n[2022-09-02T08:33:20.583626+00:00] socket: TRACE > EXIT: mongoc_socket_close():825\n[2022-09-02T08:33:20.583634+00:00] stream: TRACE > EXIT: _mongoc_stream_socket_destroy():86\n[2022-09-02T08:33:20.583641+00:00] stream: TRACE > EXIT: mongoc_stream_destroy():114\n[2022-09-02T08:33:20.584374+00:00] stream: TRACE > EXIT: mongoc_stream_failed():90\n[2022-09-02T08:33:20.584407+00:00] mongoc: TRACE > ENTRY: _mongoc_topology_description_copy_to():126\n[2022-09-02T08:33:20.584635+00:00] mongoc: TRACE > EXIT: _mongoc_topology_description_copy_to():165\n[2022-09-02T08:33:20.584653+00:00] mongoc: TRACE > TRACE: _mongoc_topology_description_clear_connection_pool():2553 clearing pool for server: ac-bsvxmfd-shard-00-00.xnbzkj9.mongodb.net:27017\n[2022-09-02T08:33:20.584669+00:00] mongoc: TRACE > ENTRY: _mongoc_topology_description_copy_to():126\n[2022-09-02T08:33:20.584681+00:00] mongoc: TRACE > EXIT: _mongoc_topology_description_copy_to():165\n[2022-09-02T08:33:20.584698+00:00] mongoc: TRACE > TRACE: mongoc_topology_description_handle_hello():2178 hello_response = \n[2022-09-02T08:33:20.584717+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_handle_hello():564\n[2022-09-02T08:33:20.584726+00:00] mongoc: TRACE > EXIT: mongoc_server_description_handle_hello():571\n[2022-09-02T08:33:20.584737+00:00] mongoc: TRACE > TRACE: mongoc_topology_description_handle_hello():2233 Topology description Unknown ignoring server description Unknown\n[2022-09-02T08:33:20.584746+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_cleanup():220\n[2022-09-02T08:33:20.584757+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.584769+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.584777+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.584784+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.584791+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.584819+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.584827+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_cleanup():234\n[2022-09-02T08:33:20.584834+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.584842+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.584851+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_destroy():256\n[2022-09-02T08:33:20.584858+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_cleanup():220\n[2022-09-02T08:33:20.584865+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.584872+00:00] mongoc: TRACE > EXIT: 
mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.584883+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.584891+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.584898+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.584905+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.584912+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_cleanup():234\n[2022-09-02T08:33:20.584923+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_destroy():265\n[2022-09-02T08:33:20.584931+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.584938+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():177\n[2022-09-02T08:33:20.584949+00:00] stream: TRACE > ENTRY: _mongoc_stream_socket_poll():227\n[2022-09-02T08:33:20.584956+00:00] socket: TRACE > ENTRY: mongoc_socket_poll():297\n[2022-09-02T08:33:20.593722+00:00] stream: TRACE > EXIT: _mongoc_stream_socket_poll():253\n[2022-09-02T08:33:20.593739+00:00] stream: TRACE > ENTRY: mongoc_stream_failed():80\n[2022-09-02T08:33:20.593766+00:00] stream: TRACE > ENTRY: mongoc_stream_destroy():104\n[2022-09-02T08:33:20.593782+00:00] stream: TRACE > ENTRY: _mongoc_stream_socket_destroy():72\n[2022-09-02T08:33:20.593793+00:00] socket: TRACE > ENTRY: mongoc_socket_close():790\n[2022-09-02T08:33:20.593813+00:00] socket: TRACE > EXIT: mongoc_socket_close():825\n[2022-09-02T08:33:20.593821+00:00] stream: TRACE > EXIT: _mongoc_stream_socket_destroy():86\n[2022-09-02T08:33:20.593828+00:00] stream: TRACE > EXIT: mongoc_stream_destroy():114\n[2022-09-02T08:33:20.594577+00:00] stream: TRACE > EXIT: mongoc_stream_failed():90\n[2022-09-02T08:33:20.594594+00:00] mongoc: TRACE > ENTRY: _mongoc_topology_description_copy_to():126\n[2022-09-02T08:33:20.594819+00:00] mongoc: TRACE > EXIT: _mongoc_topology_description_copy_to():165\n[2022-09-02T08:33:20.594836+00:00] mongoc: TRACE > TRACE: _mongoc_topology_description_clear_connection_pool():2553 clearing pool for server: ac-bsvxmfd-shard-00-01.xnbzkj9.mongodb.net:27017\n[2022-09-02T08:33:20.594845+00:00] mongoc: TRACE > ENTRY: _mongoc_topology_description_copy_to():126\n[2022-09-02T08:33:20.594862+00:00] mongoc: TRACE > EXIT: _mongoc_topology_description_copy_to():165\n[2022-09-02T08:33:20.594877+00:00] mongoc: TRACE > TRACE: mongoc_topology_description_handle_hello():2178 hello_response = \n[2022-09-02T08:33:20.594889+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_handle_hello():564\n[2022-09-02T08:33:20.594897+00:00] mongoc: TRACE > EXIT: mongoc_server_description_handle_hello():571\n[2022-09-02T08:33:20.594907+00:00] mongoc: TRACE > TRACE: mongoc_topology_description_handle_hello():2233 Topology description Unknown ignoring server description Unknown\n[2022-09-02T08:33:20.594916+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_cleanup():220\n[2022-09-02T08:33:20.594923+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.594930+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.594943+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.594951+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.594958+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.594965+00:00] mongoc: TRACE > EXIT: 
mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.594972+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_cleanup():234\n[2022-09-02T08:33:20.594979+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.594986+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.594994+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_destroy():256\n[2022-09-02T08:33:20.595002+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_cleanup():220\n[2022-09-02T08:33:20.595008+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.595016+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.595022+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.595031+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.595038+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.595045+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.595052+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_cleanup():234\n[2022-09-02T08:33:20.595060+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_destroy():265\n[2022-09-02T08:33:20.595066+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.595073+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():177\n[2022-09-02T08:33:20.595082+00:00] stream: TRACE > ENTRY: _mongoc_stream_socket_poll():227\n[2022-09-02T08:33:20.595090+00:00] socket: TRACE > ENTRY: mongoc_socket_poll():297\n[2022-09-02T08:33:20.598514+00:00] stream: TRACE > EXIT: _mongoc_stream_socket_poll():253\n[2022-09-02T08:33:20.598534+00:00] stream: TRACE > ENTRY: mongoc_stream_failed():80\n[2022-09-02T08:33:20.598556+00:00] stream: TRACE > ENTRY: mongoc_stream_destroy():104\n[2022-09-02T08:33:20.598566+00:00] stream: TRACE > ENTRY: _mongoc_stream_socket_destroy():72\n[2022-09-02T08:33:20.598573+00:00] socket: TRACE > ENTRY: mongoc_socket_close():790\n[2022-09-02T08:33:20.598590+00:00] socket: TRACE > EXIT: mongoc_socket_close():825\n[2022-09-02T08:33:20.598598+00:00] stream: TRACE > EXIT: _mongoc_stream_socket_destroy():86\n[2022-09-02T08:33:20.598605+00:00] stream: TRACE > EXIT: mongoc_stream_destroy():114\n[2022-09-02T08:33:20.599268+00:00] stream: TRACE > EXIT: mongoc_stream_failed():90\n[2022-09-02T08:33:20.599285+00:00] mongoc: TRACE > ENTRY: _mongoc_topology_description_copy_to():126\n[2022-09-02T08:33:20.599514+00:00] mongoc: TRACE > EXIT: _mongoc_topology_description_copy_to():165\n[2022-09-02T08:33:20.599529+00:00] mongoc: TRACE > TRACE: _mongoc_topology_description_clear_connection_pool():2553 clearing pool for server: ac-bsvxmfd-shard-00-02.xnbzkj9.mongodb.net:27017\n[2022-09-02T08:33:20.599538+00:00] mongoc: TRACE > ENTRY: _mongoc_topology_description_copy_to():126\n[2022-09-02T08:33:20.599548+00:00] mongoc: TRACE > EXIT: _mongoc_topology_description_copy_to():165\n[2022-09-02T08:33:20.599559+00:00] mongoc: TRACE > TRACE: mongoc_topology_description_handle_hello():2178 hello_response = \n[2022-09-02T08:33:20.599566+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_handle_hello():564\n[2022-09-02T08:33:20.599573+00:00] mongoc: TRACE > EXIT: mongoc_server_description_handle_hello():571\n[2022-09-02T08:33:20.599584+00:00] mongoc: TRACE > TRACE: mongoc_topology_description_handle_hello():2233 Topology description 
Unknown ignoring server description Unknown\n[2022-09-02T08:33:20.599599+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_cleanup():220\n[2022-09-02T08:33:20.599607+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.599615+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.599622+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.599629+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.599636+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.599642+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.599650+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_cleanup():234\n[2022-09-02T08:33:20.599657+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.599664+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.599672+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_destroy():256\n[2022-09-02T08:33:20.599679+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_cleanup():220\n[2022-09-02T08:33:20.599686+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.599693+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.599699+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.599706+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.599713+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.599720+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.599810+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_cleanup():234\n[2022-09-02T08:33:20.599823+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_destroy():265\n[2022-09-02T08:33:20.599830+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.599837+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():177\n[2022-09-02T08:33:20.599909+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_destroy():256\n[2022-09-02T08:33:20.599924+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_cleanup():220\n[2022-09-02T08:33:20.599935+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.599946+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.599953+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.599960+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.599967+00:00] mongoc: TRACE > ENTRY: mongoc_server_description_destroy():174\n[2022-09-02T08:33:20.599974+00:00] mongoc: TRACE > EXIT: mongoc_server_description_destroy():184\n[2022-09-02T08:33:20.599981+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_cleanup():234\n[2022-09-02T08:33:20.599987+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_destroy():265\n[2022-09-02T08:33:20.599995+00:00] mongoc: TRACE > ENTRY: mongoc_topology_description_select():976\n[2022-09-02T08:33:20.600004+00:00] mongoc: TRACE > EXIT: mongoc_topology_description_select():1024\n[2022-09-02T08:33:20.600452+00:00] PHONGO: DEBUG > Not destroying persistent client for ManagerWould very much appreciate any pointers/ 
help, etc.\nThanks!", "username": "VenkatMSN" }, { "code": "", "text": "Looks like you can't reach the server.\nPossibly:", "username": "Jack_Woehr" }, { "code": "mongo \"mongodb+srv://cluster0.xnbzkj9.mongodb.net\"\n", "text": "Download/install the mongo shell and invoke the following command (use \"mongosh\" if \"mongo\" won't work).\nYour server is accessible from outside, so possibly either your username/password is not correct or a firewall prevents the connection. Check for these possibilities first and report back.", "username": "Yilmaz_Durmaz" } ]
Cannot connect from PHP to cloud.mongodb.com
2022-09-02T16:03:49.178Z
Cannot connect from PHP to cloud.mongodb.com
2,407
null
[ "sharding", "ops-manager" ]
[ { "code": "", "text": "Hello there,\nI have this situation. 10 Mongos where the release is 4.2.14-ent, I need to update to 6.0 release. I already know that I can’t do the upgrade to the 6.0 in one shot but prior to that release I have to install some intermediate versions. Other than that the type of the installation of the binary files, as I can see on the Ops Manager is local.\nEven the Ops Manager has to be updated. So which one is better to update first? Ops Manager or Mongo?\nFor you experience what I have to do order to run the update smoothly? Is that correct that I have to upgrade to 4.4 release first, then I can do the update to 6.0 directly?", "username": "Enrico_Bevilacqua1" }, { "code": "", "text": "Hi @Enrico_Bevilacqua1To manage 6.0 deployments Ops Manager will need to be at 6.0.Step though the upgrades of the Ops Manger to Ops Manager 6.0. Remember to upgrade the AppDB backing the Ops Manager installation and the OplogDB if you have one.Other than that the type of the installation of the binary files, as I can see on the Ops Manager is local.I assume you mean this is running with the download mode set to local. Make sure you update the Version Manifest and have the latest versions for 4.2,4.4, 4.4, 5.0 and 6.0 populated in Ops Managers Versions directory.Then use OM to upgrade the deployment to the latest 4.2.Then you can start upgrading the major versions. 4.2 → 4.4 → 5.0 → 6.0In fact I’d recommend you to open a support case and have MongoDB Support guide you through the entire process.", "username": "chris" }, { "code": "", "text": "To manage 6.0 deployments Ops Manager will need to be at 6.0.Step though the upgrades of the Ops Manger to Ops Manager 6.0. Remember to upgrade the AppDB backing the Ops Manager installation and the OplogDB if you have one.Other than that the type of the installation of the binary files, as I can see on the Ops Manager is local.I assume you mean this is running with the download mode set to local. Make sure you update the Version Manifest and have the latest versions for 4.2,4.4, 4.4, 5.0 and 6.0 populated in Ops Managers Versions directory.Then use OM to upgrade the deployment to the latest 4.2.Then you can start upgrading the major versions. 4.2 → 4.4 → 5.0 → 6.0In fact I’d recommend you to open a support case and have MongoDB Support guide you through the entire process.Thank you for your reply.", "username": "Enrico_Bevilacqua1" } ]
How do I update MongoDB from 4.2.14-ent to 6.0
2023-07-14T15:02:05.665Z
How do I update MongoDB from 4.2.14-ent to 6.0
571
null
[]
[ { "code": "", "text": "Hello.I am trying to install Mongodb in Ubuntu22.04 but it is not working. Can anyone from the community tell me that, is it possible to install mongodb in Ubuntu 22.04? As , I was trying to solve this problem, I hadlooked a lot of online documentations and support where people mentioned that it is not possible to use the db with ubuntu 22.04.\nLike first some kind of impish security file Release was giving error e.g., file format is not valid and\n‘404: IP not found.’ Then I removed that whole file as I thought it might be of different previous versions. But in the end, got other errors and a lot of complications…Are there other methods?\nLike using Atlas or some kind of containerization using Dockers?As one of the other methods I tried using MongoDB Atlas and trying to connect the atlas server through VS-Code, but the connection never occurs despite of using the right connection string from the hosted server…Any suggestions or help will be welcomed:)\nThanks.", "username": "Anirudh_Suri" }, { "code": "", "text": "Atlas with Vscode should work.What error are you getting\nCheck this link\nAlso check our forum threads for alternative solutions for Mongodb on Ubuntu 20.04", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks for replying.The problem is with IP address. It was not whitelisted before. I changed the setting. And now it is working.\nThanks.", "username": "Anirudh_Suri" }, { "code": "", "text": "Hi!It will soon be a year since the stable version of ubuntu 22.04 came out and still mongo db does not support it. Please tell me if the developers plan to support mongo db for ubuntu at all? It seems that the developers of mongo db do not care about ubuntu support", "username": "chaosmos" }, { "code": "focaljammy", "text": "Related Installing mongodb over Ubuntu 22.04 - #7 by StennieThe JIRA tickets are completed so it should be possible to install following instructions in that thread.I don’t know the reasons why it’s not in the docs yet.I tried and replacing the last MongoDB installation commands for ubuntu 20.04 focal with with jammy installed the program.Edit: as said above, and by @chris (below) here is the link to instructions. @chaosmos.", "username": "santimir" }, { "code": "", "text": "Could you point out here exactly how i can now install mongo db on ubuntu 22.04? There are so many different options in the thread you mentioned that it is not clear which one is working and optimalAlso, could you tell me when we should expect an official release with support for ubuntu 22.04? And what is the reason for the fact that for almost a year it is impossible to install mongo db on the most popular Linux distribution in a regular way?", "username": "chaosmos" }, { "code": "", "text": "@chaosmos please check this recent shorter discussion we had:\nHow to install mongodb 6.0 on Ubuntu 22.04 - Ops and Admin / Installation & Upgrades - MongoDB Developer Community Forums“Fast release cycles” of operating systems favor the latest versions of libraries (ubuntu comes every 6 months). unfortunately, “latest” does not always mean proven to be “stable” or easy to migrate. Stability is important for server programs like MongoDB so they have to use battle-tested libraries (even if they may have bugs) hence the delay of support to newer OS versions.The fortunate thing is older libraries can still be installed and used mostly without any problems if all their dependencies can also be installed. you just need to dive into some settings manually. 
that is until developers manage to migrate to new libraries, which MongoDB also did but that is not just reflected in documents yet.", "username": "Yilmaz_Durmaz" }, { "code": "focaljammy", "text": "Follow the existing instructions. Replace focal with jammy.Done.", "username": "chris" }, { "code": "", "text": "Thank you. This worked perfectly for me.", "username": "Roy_McClanahan" }, { "code": "", "text": "Hi Chris please I am completely a newbie to MongoDB but I am using ubuntu 22.04, could you help me with the clear steps to install the latest MongoDB", "username": "Emeka_Chukwudozie" }, { "code": "focaljammyecho \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list\n", "text": "As per @santimir:The instructions are hereStep 2: Is where focal needs to be replaced withjammy. When you copy and paste that line\nimage1113×480 73 KB\nSo that would become:", "username": "chris" }, { "code": "", "text": "Just switch back to ubuntu 20.04 or some fedora or arch. Or learn and use docker instead. because this is only one package(mongoDB). God knows how many more softwares are not supported in 22.04 (They have done many critical upgrades like wayland, libssl1 → libssl3, etc. Wait a year or so for everything get supported in ubuntu 22.04 and based distros( like popos, mint, distros).\nHappy coding fella:)", "username": "mkbhru" }, { "code": "echo \"deb http://security.ubuntu.com/ubuntu focal-security main\" | sudo tee /etc/apt/sources.list.d/focal-security.listecho \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list", "text": "actually, it is not that easy (or complicated, depending on where you look at it) as you describe.you can use any version of any library (even 30 years old ones) as long as their dependencies do not contradict what the current OS uses, provided they work with the architecture.Ubuntu 22.x is called “jammy” and installs packages with that name. but you can still install packages from older (bionic, focal, trusty etc.) repositories if you need them. You just need to add their repository url to the apt source list file. the same also goes for independent packages like MongoDB: they have their own repository urls you need to add to apt source list.Here in this topic, installing MongoDB 6.0 on Ubuntu 22.x, we have two options:add one of older ubuntu repositories to your apt list (then follow remaining steps):wait for a new build (which is already here) and use it (@chris mentioned this many times)The first method uses MongoDB built for “ubuntu/focal” and the second one uses “ubuntu/jammy” build. The only thing missing is the official installation page does not yet have this on it.One can easily see which versions have been built for which ubuntu here: MongoDB Repositories. If you follow “Parent Directory” link, you can even find builds for other Linux distros there.", "username": "Yilmaz_Durmaz" }, { "code": "", "text": "As you said the mongoDb server uses dependency libss1 which is upgraded to libssl3 in ubuntu 22.04. So I have to install this dependency externally which opens several velnerabilities and result in breaking system. 
Since there is some point in Canonical's decision to deprecate libssl1.\n\nimage714×89 52 KB\n", "username": "mkbhru" }, { "code": "", "text": "@mkbhru I am also facing the same issue installing MongoDB on a Raspberry Pi 4B. Please let me know how to resolve it.", "username": "Siva_kumar8" } ]
Installation of MongoDB on Ubuntu 22.04
2022-11-11T13:42:56.918Z
Installation of MongoDB on Ubuntu 22.04
10,453
null
[ "queries", "node-js", "data-modeling" ]
[ { "code": "searches.datethirtysearches.datetodaysearches.numbersearches.numbersearches.datesearches.datetodaysearches.numberlet today = new Date()\nlet thirty = new Date()\nthirty.setDate(thirty.getDate() - 30);\n\n const possible = await UserInfo.findOneAndUpdate({_id:user._id}, \n [\n {\n $set: {\n searches: {\n $cond: {\n if: {\"searches.date\":{$lte: +thirty}},\n then: {{\"searches.date\":+today, \"searches.number\": 0}]},\n else: {$ifNull: [{ $inc:{\"searches.number\":1}}, \n {\"searches.date\":+today,\"searches.number\":1}]},\n }\n },\n }\n },\n ],\n {upsert: true})\nfunction timestamp() {\n return +new Date();\n }\n\nconst SearchesSchema = new mongoose.Schema({\n number: Number,\n date: {type:Number, default: timestamp}\n})\nconst UserInfoSchema = new mongoose.Schema({\n _id: {\n type:mongoose.Schema.Types.ObjectId, ref:'User'\n },\n searches: SearchesSchema,\n info: [InfoSchema]\n})\n", "text": "Hi,\nI’m working on a project where I can only allow so many “searches” per a month (30 days) per user. Specifically, what I was trying to do was have the document conditionally update where if the searches.date was less than 30 days from the thirty value searches.date became today’s value and searches.number went back to 0, else searches.number increased by 1, but if searches.date was null then searches.date became today and searches.number became equal to 1. I was able to do something similar with arrays, but I can’t figure out how to get this to work with objects. I would really appreciate any help or advice. Thank you!Schema", "username": "Geenzie" }, { "code": "db.UserInfo.findOneAndUpdate(\n{\n _id:\"john\"\n},\n[\n {\n $set:{\n searches:{\n $cond: {\n if: {$lte:[\"$searches.date\", thirty]},\n then: {\"date\":today, \"number\": 0},\n else: {\n \"date\":today, \n \"number\": {$add:[{$ifNull:[\"$searches.number\", 0]},1]}\n }\n } \n }\n }\n },\n],\n{upsert: true}\n)\ndb.UserInfo.aggregate([\n{\n $addFields:{\n newData:{\n $cond: {\n if: {$lte:[\"$searches.date\", thirty]},\n then: {\"date\":today, \"number\": 0},\n else: {\n \"date\":today, \n \"number\": {$add:[{$ifNull:[\"$searches.number\", 0]},1]}\n }\n } \n }\n }\n}\n])\n", "text": "Playing just in the shell, something like this?Note adding square brackets to enable aggregation statements in the update section.When doing this kind of thing I find it useful to start with an aggregation in the shell and build the update from there:", "username": "John_Sewell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to conditionally findOneAndUpdate with objects?
2023-07-17T05:21:24.023Z
How to conditionally findOneAndUpdate with objects?
597
null
[ "dot-net" ]
[ { "code": "{\"_id\" : ObjectId(\"60054d3e20b5c978eb93bb7f\"), \"Image\": {\"en-us\" : {\"Id\":o1n2rpon12prn\" , \"IsPublished\": \"false\" }, \"en-ca\" : {\"Id\":o1n2rpon12prn\" , \"IsPublished\": \"true\" }, \"fr-ca\" : {\"Id\":o1n2rpon12prn\" , \"IsPublished\": \"false\" }}}IsPublished= true", "text": "Hello there, need some help on building Mongodb filter. I have a below data structure\n{\n\"_id\" : ObjectId(\"60054d3e20b5c978eb93bb7f\"),\n \"Image\":\n {\n\"en-us\" : {\"Id\":o1n2rpon12prn\" , \"IsPublished\": \"false\" },\n \"en-ca\" : {\"Id\":o1n2rpon12prn\" , \"IsPublished\": \"true\" },\n \"fr-ca\" : {\"Id\":o1n2rpon12prn\" , \"IsPublished\": \"false\" }\n}\n} I would like to build a filter query which will bring all the data where IsPublished= true . A data might have one locale ( en-us) or it can have multiple ( en-us, en-ca) . So, key is dynamic( en-us). Anyone has face similar use case in past( filtering data from dictionary with dynamic keys)? appreciate any help/pointers. TIA", "username": "Rajesh_Patel1" }, { "code": "\"Image\":\n{\n\"en-us\" : {\"Id\": \"o1n2rpon12prn\", \"IsPublished\": \"false\" },\n\"en-ca\" : {\"Id\": \"o1n2rpon12prn\", \"IsPublished\": \"true\" },\n\"fr-ca\" : {\"Id\": \"o1n2rpon12prn\", \"IsPublished\": \"false\" }\n}\nmyTestDb> db.test.find()\n[\n {\n _id: ObjectId(\"64b4e7bd0b74e330f809d554\"),\n Image: {\n 'en-us': { Id: 'o1n2rpon12prn', IsPublished: 'false' },\n 'en-ca': { Id: 'o1n2rpon12prn', IsPublished: 'true' },\n 'fr-ca': { Id: 'o1n2rpon12prn', IsPublished: 'false' }\n }\n },\n {\n _id: ObjectId(\"64b4e7bd0b74e330f809d555\"),\n Image: {\n 'en-ca': { Id: 'o1n2rpon12prn', IsPublished: 'true' },\n 'fr-ca': { Id: 'o1n2rpon12prn', IsPublished: 'false' }\n }\n },\n {\n _id: ObjectId(\"64b4e7bd0b74e330f809d556\"),\n Image: { 'en-ca': { Id: 'o1n2rpon12prn', IsPublished: 'true' } }\n },\n {\n _id: ObjectId(\"64b4e7bd0b74e330f809d557\"),\n Image: { 'en-us': { Id: 'o1n2rpon12prn', IsPublished: 'true' } }\n },\n {\n _id: ObjectId(\"64b4e7bd0b74e330f809d558\"),\n Image: {\n 'en-us': { Id: 'o1n2rpon12prn', IsPublished: 'true' },\n 'fr-ca': { Id: 'o1n2rpon12prn', IsPublished: 'true' }\n }\n }\n]\ndb.test.aggregate([\n {\n $addFields: {\n filteredArray: {\n $filter: {\n input: {\n $objectToArray: \"$Image\",\n },\n cond: {\n $eq: [\"$$this.v.IsPublished\", \"true\"],\n },\n },\n },\n },\n },\n {\n $addFields: {\n Imagev2: {\n $arrayToObject: {\n $map: {\n input: \"$filteredArray\",\n in: {\n k: \"$$this.k\",\n v: \"$$this.v\",\n },\n },\n },\n },\n },\n },\n {\n $project: {\n _id: 0,\n Image: 0,\n filteredArray: 0,\n },\n },\n])\n[\n {\n Imagev2: { 'en-ca': { Id: 'o1n2rpon12prn', IsPublished: 'true' } }\n },\n {\n Imagev2: { 'en-ca': { Id: 'o1n2rpon12prn', IsPublished: 'true' } }\n },\n {\n Imagev2: { 'en-ca': { Id: 'o1n2rpon12prn', IsPublished: 'true' } }\n },\n {\n Imagev2: { 'en-us': { Id: 'o1n2rpon12prn', IsPublished: 'true' } }\n },\n {\n Imagev2: {\n 'en-us': { Id: 'o1n2rpon12prn', IsPublished: 'true' },\n 'fr-ca': { Id: 'o1n2rpon12prn', IsPublished: 'true' }\n }\n }\n]\n", "text": "Hey @Rajesh_Patel1,Welcome to the MongoDB Community!Based on the sample document you shared, it seems that the image data is stored as a single document in MongoDB, and the language key is dynamic in nature. 
Therefore, to perform further filtration based on the specific requirements you mentioned, I think using application-side code would be beneficial in this scenario.However, I have tested the pipeline below, which may give you the required results within ImageV2, but with the language being in the “k” field. Here, I used $objectToArray and then $arrayToObject to transform it back to the original sample document structure.Here are the sample documents from my test environment:The pipeline is as follows:It will give the following output when using the aforementioned pipeline:I’ve only briefly tested this out, so I recommend either performing the same in the test environment to see if it suits your use case and requirements or doing it application-side as suggested earlier in my response.Hope the above helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
C# MongoDB filter a value from the Dictionary like object
2023-07-14T21:37:11.501Z
C# MongoDB filter a value from the Dictionary like object
501
null
[ "atlas-online-archive" ]
[ { "code": "", "text": "Created a custom query online archiving rule.But the archiving gets stuck once it reaches 500MB.Even after pausing and resuming the process multiple times its showing no progress.Trying to archive about 4GB of data", "username": "Varun_Sumesh" }, { "code": "", "text": "Hey @Varun_Sumesh,Welcome to the MongoDB Community Forums! Please contact the Atlas in-app chat support regarding this. They will be better able to help you out in this case.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "Hi @Varun_Sumesh ,Can you please let us know if this issue is still existing. If so, can you please clarify by “archiving gets stuck”, does the archive display an “error” status or does it show the “archiving” status.Thanks !", "username": "Prem_PK_Krishna" }, { "code": "", "text": "The issue is still existing.The status is shown as archiving and the cpu utilisation goes up but there is no decrease in documents count in the collection we are archiving.Its stuck at 2.03 GB.", "username": "Varun_Sumesh" }, { "code": "", "text": "We’d like to help you further, but have a few clarifying questions to help diagnose:", "username": "Prem_PK_Krishna" }, { "code": "", "text": "The time taken to archive 2.03GB is approximately 45 minutes.\nWe have two partition fields in the archive.One is a timestamp value in seconds which is an integer.It has high cardinality.The other is a random number user id which is stored as string,it also has high cardinality", "username": "Varun_Sumesh" }, { "code": "", "text": "Thanks for your response. We do not recommend providing high cardinality string fields as partition keys for Online Archive. This is mentioned in the documentation and we also request you to refer this blog: Optimizing your Online Archive for Query Performance | MongoDBAs part of our roadmap, we are changing the backend storage service of Online Archive in our next version and as part of this new release, we will allow the option to choose high cardinality string fields.If you’d like to test out without the Private Preview mode in your non-production environment, then you can delete the archive and create a new archive without high cardinality string field. Also you can “schedule” the archive to run during non-peak hours, so that way you are ensuring the CPU utilization doesn’t peak.", "username": "Prem_PK_Krishna" } ]
MongoDB Atlas online archive getting stuck
2023-05-03T07:33:25.435Z
MongoDB Atlas online archive getting stuck
1,108
null
[ "queries", "atlas-data-lake", "atlas-online-archive" ]
[ { "code": "", "text": "Hi there,I have a scenario where the collection has the online archive enabled and is partitioned by the “timestamp” (ISODate) and “contactId” (UUID) field. The limit for data to go to archive is 90 days.If I run the query below, it returns almost instantly if I connect directly to the cluster, but connecting to the archive it takes about 5 seconds.db.messages.find({timestamp:{$gte: ISODate(‘2023-02-19T00:00:00Z’), $lt: ISODate(‘2023-02-21T00:00:00Z’)}, contactId: UUID(‘3ac88cfc-3ac9-46da-9106-087c53058de5’) })For the archive this time of 5 seconds is expected and ok, but with unarchived data, as the data is partitioned by timestamp, shouldn’t it understand that it is not archived and query only the online data?Regards!", "username": "Bruno_Feldman" }, { "code": "", "text": "Hi Bruno,Regarding your question of the time taken to query against the archive vs the cluster, it is expected to take that long when querying the archive. With the existing version of Online Archive, you can expect some latency to list down the partitions in the archive in object storage and the querying time will increase with more TBs of data in the archive.With the new version of Online Archive, we are improving the querying performance and we are going to optimize storage and incorporate rebalancing techniques. This will improve query performance and decrease costs when querying against the archives. This will also understand which partition to query against as data is sorted/rebalanced.The new version will be much faster when running similar queries such as yours to find data in the archive, but a general thumb rule is that querying against the archive (in object storage) will be a notch slower than querying against the cluster. However, with the new version of Online archive, it will show good performance improvements compared to the previous version of the archive. If you are interested in testing out the new feature in your non-production environment, I can sign you up for the Private Preview program.Details of the announcement of the new feature and the “Private Preview program” are mentioned here: https://www.mongodb.com/community/forums/t/invitation-to-participate-in-the-early-access-program-of-online-archives-query-performance-improvements/204188Thanks,\nPrem", "username": "Prem_PK_Krishna" } ]
Slow queries when using online archive connection
2023-03-08T17:19:07.798Z
Slow queries when using online archive connection
1,430
null
[ "connecting", "atlas-cluster", "serverless", "api" ]
[ { "code": "", "text": "I am using terraform to set up a serverless instance with two private endpoints. I want to then store the private endpoint aware connection strings in a key vault. The issue I am having is that the order of which private endpoint gets created first may not always be the same, and there does not look like there is any way to tell which connection string is for which private endpoint so I can store it accordingly.For example, if I get the connection strings for my serverless instance it will return something like:\n[“mongodb+srv:/my-serverless-instance-pe-0.asdfasdf.mongodb.net”, “mongodb+srv:/my-serverless-instance-pe-1.asdfasdf.mongodb.net”,]How do I know which is pe-0 and which is pe-1?", "username": "Kate_Lewis" }, { "code": ".tfmongodbatlas_serverless_instance.<resource_name>.connection_strings_private_endpoint_srv", "text": "Hi @Kate_Lewis - Welcome to the community.I am using terraform to set up a serverless instance with two private endpoints.Would you be able to provide the sample .tf files you’re using to create the infrastructure specified in your post? Please redact any credentials and sensitive information before posting here.For example, if I get the connection strings for my serverless instance it will return something like:\n[“mongodb+srv:/my-serverless-instance-pe-0.asdfasdf.mongodb.net”, “mongodb+srv:/my-serverless-instance-pe-1.asdfasdf.mongodb.net”,]Could you also confirm how you’re returning the connection strings in this format? I assume the contents is the value(s) from mongodbatlas_serverless_instance.<resource_name>.connection_strings_private_endpoint_srv but please correct me if I am wrong here.Lastly, could you help describe the use case / more details on the reason for distinguishing between the two endpoints?Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" } ]
How to identify private endpoint connection strings?
2023-07-13T08:45:52.062Z
How to identify private endpoint connection strings?
687
null
[ "flutter" ]
[ { "code": "", "text": "‘’‘select * from tbl_outlets_master_185 where country = “UAE” and city=“Abu Dhabi” and active=‘yes’ and outlet_code NOT in (SELECT outlet_code from tbl_asngd_templates_185) and (strftime(’%Y-%m-%d’, scheduled_date) <= DATE(‘now’)) and (strftime(‘%Y-%m-%d’, scheduled_to_date) >= DATE(‘now’))\n‘’’\ni need to convert flutter realm query language. Just give your idea to convert this one\nmy tried query\n‘’’\noutletTable.query(\n‘’‘clientId == “$clientId” AND country == “${event.data[“country”]}” AND\ncity == “${event.data[“city”]}” AND active == “yes” AND ‘outlet_code’ IN $outletCodeList AND\nscheduled_date <= $formatDate AND\nscheduled_to_date >= $formatDate’‘’);\n‘’’", "username": "subash_sethuraman" }, { "code": "", "text": "If you wanna connect 2 collections into one app then open to synced realm or you can use disconnect d sync", "username": "33_ANSHDEEP_Singh" } ]
How to connect two collections in Flutter Realm flexible sync
2023-06-23T10:14:06.827Z
How to connect two collections in Flutter Realm flexible sync
718
null
[ "swift" ]
[ { "code": "email address --> appID", "text": "I have a Swift Mac app built with Realm 10.41.0. I’m using Device Sync. This is an enterprise app with, say, 20 different customers. For security, each customer has a completely separate database in MongoDB—a separate cluster and separate Realm App.Each Realm App has email/password authentication turned on and the customer will create user accounts for their employees. These user accounts will obviously have access to only the database associated with their company.Given a user’s email address, the Mac app needs to look up the correct Atlas App ID to use so that it opens a Realm to the correct MongoDB App (and therefore the correct database). To accomplish this, I have an additional Atlas App with a separate cluster/database/collection that keeps a universal mapping of email address --> appID. This database gets updated via a server-side function anytime a customer adds/edits/removes a user to their separate Atlas App.My Mac app needs to access this “master database”. I’ve chosen to do that via API key, but since the API key can be easily dumped from the app binary, I want to make sure this API Key has read-only access to the database. No malicious user should ever be able to dump the API key and use it to edit the “master list” of users.How can I specify that the API key is read-only?I did find this: Data API: Restrict access to read/write to collection, but it’s using the “Data API” rather than Swift Framework. Thanks!", "username": "Bryan_Jones" }, { "code": "%%user{ \"%%user.identities.providerType\": api-key}\n", "text": "Hi, the best way to do this is to lean on the permissions system for Atlas App Services. When you define a rule, you can define a set of “roles” that have a waterfall like evaluation where they use the apply_when expression to choose the first role that applies to a user. Therefore, you can detect an API Key user with the %%user expansion to detect the provider type like:See here for more details: https://www.mongodb.com/docs/atlas/app-services/rules/expressions/#mongodb-json-expansion---userThen, for that role, you can define { read: true, write: false } and it should give you read-only permissions.Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "{ \"%%user.identities.providerType\": api-key}apply_when", "text": "{ \"%%user.identities.providerType\": api-key}Mmm, the editor didn’t like that. I can’t figure out if this textfield is supposed to be just the expression of the apply_when field, or if I need to provide a more complete JSON object for each document in my collections, as shown in the docs.But it does not seem to approve of “api-key” as a raw value. Did you mean to make that a string?\nScreenshot 2023-07-14 at 21.56.221934×1266 179 KB\n", "username": "Bryan_Jones" }, { "code": "", "text": "Yes, that should be a string. My apologies", "username": "Tyler_Kaye" } ]
Limit an API Key to Read-Only Access?
2023-07-14T05:27:54.081Z
Limit an API Key to Read-Only Access?
694
https://www.mongodb.com/…e_2_1024x189.png
[ "atlas-device-sync" ]
[ { "code": "{\n \"rules\": {},\n \"defaultRoles\": [\n {\n \"name\": \"owner-read-write\",\n \"applyWhen\": {},\n \"read\": {\n \"ownerId\": \"%%user.id\"\n },\n \"write\": {\n \"ownerId\": \"%%user.id\"\n }\n }\n ]\n}\nownerId\"ownerId\": \"%%user.id\"", "text": "In this tutorial, it says to paste the following permissions:I selected the option to create a new database, but it isn’t created yet.\n\nimage1338×248 22.2 KB\n\n\nimage798×550 19.1 KB\nTo clarify, when creating a (default) role, does it apply to all collections? (If a document doesn’t have ownerId, the rule is simply ignored?)With the new UI, do I just need to check the boxes for “Document Permissions” and add \"ownerId\": \"%%user.id\"?\n\nimage798×550 34.4 KB\n", "username": "BPDev" }, { "code": "", "text": "Hi, I just filed a ticket to update the docs to remove that section and make it clearer since we recently changes where permissions are viewed and edited.Permissions should be defined in the rules tab. See here for a good overview: https://www.mongodb.com/docs/atlas/app-services/sync/app-builder/device-sync-permissions-guide/Default rules apply to any collection that does not have a specific collection rule defined. And for sync, document filters are indeed required.Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "\"write\": {\n \"$or\": [\n {\n \"owner_id\": \"%%user.id\"\n },\n {\n \"collaborators\": \"%%user.id\"\n }\n ]\n }\n\"apply_when\": { \"owner_id\": \"%%user.id\"}", "text": "I didn’t find an explanation for Document Permissions (Insert, Delete, Search). (My guess is that Insert is for adding new items to a list and Delete is to prevent some users from deleting a document. I don’t know how Search can be disallowed when queries are performed locally (?).)Can I conditionally give Delete permissions? For example, I want to let collaborators edit a document, but not delete it.Also, can \"apply_when\": { \"owner_id\": \"%%user.id\"} be used instead of document filters? (That way I can apply different rules when a user is the owner or a collaborator) It should be equivalent, but maybe it is less efficient with Flexible Sync.", "username": "BPDev" }, { "code": "", "text": "Hi. Yes, you can conditionally give delete permissions. Please see this flow chart which should explain how permissions are valuated: https://www.mongodb.com/docs/atlas/app-services/rules/roles/#write-permissions-flowchartUnfortunately, permissions can be a touch confusing given how open and expressive they try to be.In your case, you would want to give collaborators “write” access but not “delete” access.Best,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Where to paste Realm permissions?
2023-07-13T16:37:46.508Z
Where to paste Realm permissions?
579
https://www.mongodb.com/…1_2_1024x692.png
[ "compass", "php" ]
[ { "code": "adminuserAdminAnyDatabaserootmongod.cfgsecurity:\n authorization: enabled\nadmin.temprolesadmin.tempusers", "text": "When I use MongoDB Compass, I always have and wonder, why there is no content in the admin table, I did create an admin account, and switched the role from userAdminAnyDatabase to root, and also set the mongod.cfg file :In MongoDB Compass, admin table showed some information, admin.temproles and admin.tempusers, but I don’t know what is this. when I uses the PHP-driven MongoDB-PHP-GUI, which has always displayed the complete content of the admin table : ↓\nimage1083×732 24.9 KB\nI think it’s important to show the contents of the admin table, if you don’t know what’s in there, everything gets blurred, this problem is so strange. Can someone explain why and how to make the admin table display all the complete content ?", "username": "XJ.Chen_N_A" }, { "code": "mongoshmongosh", "text": "Can someone explain why and how to make the admin table display all the complete content ?See the following also:", "username": "Jack_Woehr" }, { "code": "rootuserAdminAnyDatabaseadminadmin", "text": "Oh, I understand that this is based on security considerations, but in fact it also brings inconvenience. According to normal logical thinking, accounts with root or userAdminAnyDatabase should be able to see the admin database Content.The user experience of MongoDB Compass is very poor, and it is not even comparable to MongoDB-PHP-GUI in this respect, because the degree of freedom is very important, even if it poses a threat to security, it is the user’s own business, MongoDB Compass manages too much Already…So MongoDB Compass can’t display the contents of the admin database no matter what, or what settings can be used to achieve it ?", "username": "XJ.Chen_N_A" }, { "code": "mongosh", "text": "In my earlier comments, I’m just guessing, I’m not a MongoDB employee, so I can’t be sure of their motivations.I use mongosh in Compass a lot, and I use the facilities provided in the privileges API to adjust things to my liking. I imagine MongoDB-PHP-GUI is doing something like that behind the scenes.So I’m happy with Compass. I may try MongoDB-PHP-GUI as well, though there don’t seem to be any updates to that project since last July.", "username": "Jack_Woehr" } ]
Why in MongoDB Compass the admin table always empty?
2023-07-15T19:49:00.672Z
Why in MongoDB Compass the admin table always empty?
676
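A quick way to see what actually lives in the admin database, regardless of what a GUI chooses to display, is to query it from mongosh (Compass ships with an embedded shell). The snippet below assumes you are connected as the root account mentioned above; user and role definitions are physically stored in system collections.

```javascript
// Run in mongosh, or in the shell tab at the bottom of Compass.
const adminDb = db.getSiblingDB("admin");

adminDb.getCollectionNames();   // typically [ "system.users", "system.roles", "system.version", ... ]
adminDb.getUsers();             // the accounts stored in admin.system.users
adminDb.system.users.find();    // raw documents; needs a role allowed to read system collections
```

If a GUI hides system.* collections by default, the admin database can look empty even though these documents exist and are readable from the shell.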
null
[ "queries", "dot-net", "performance" ]
[ { "code": "", "text": "Hi everyone,I am quite new to MongoDB and I’ve been assigned to deal with a really large data set (a Collection with over a billion documents) by querying it with a filter of dates and another value. Naturally I created a Compound Index because without it nothing would have worked but still performance time of queries is poor. I don’t know if adding more Indexes would really help (Compund Index includes the date field and the other one is ASC order). I consider migrating the Collection to Time-Series collection. I’d be glad to head everyone’s thoughts and advises + what would be cheapest way to migrate to Time-Series collection?", "username": "Guy_Gontar" }, { "code": "", "text": "Does your query return a lot of fields? If it’s targeted for a few, then it may be worth looking at a covering index, this means that all the data needed for the query to return results is within the index so no document will need to be retrieved.I’m afraid I’ve not used time-series, but I’m sure there are lots of experts about.It may be helpful to post a .explain of your query to look at what is actually taking the time. It’s probably also worth looking at server setup. Is it Atlas or on-prem and how large are the indexes, can they fit into the available memory.", "username": "John_Sewell" }, { "code": "", "text": "Hi Jown_Sewell,My query does return a lot of fields, mainly because of the data structure.\nI am trying to get the output of the .explain() method for my query but so far for today it’s not success in my Robo 3T Studio client as I get a blank response.\nI am using an on-prem server and the Compound Index size is 48.9 GB. The unique _id Index size is 41.4 GB. I would need to ask Sysadmin about the available memory but he never prompted me of memory over usage so far.", "username": "Guy_Gontar" }, { "code": "", "text": "Sounds like a covering index may consume a vast quantity of ram! 
I guess an example document as well as example query may help others assist with optimisations and comments on conversion to a time-series if that’s the best option.", "username": "John_Sewell" }, { "code": "{\n \"_id\" : ObjectId(\"647dd8996bb8634c52e492cd\"),\n \"dateTime\" : ISODate(\"2018-01-01T09:00:00.920+0000\"),\n \"securityId\" : NumberInt(1111),\n \"priceType\" : \" \",\n \"lastVolume\" : NumberInt(0),\n \"isMega\" : false,\n \"lastPrice\" : NumberInt(222),\n \"bidLevel1\" : NumberInt(0),\n \"bidLevel2\" : NumberInt(0),\n \"bidLevel3\" : NumberInt(0),\n \"bidLevel4\" : NumberInt(0),\n \"bidLevel5\" : NumberInt(0),\n \"bidSizeLevel1\" : NumberInt(0),\n \"bidSizeLevel2\" : NumberInt(0),\n \"bidSizeLevel3\" : NumberInt(0),\n \"bidSizeLevel4\" : NumberInt(0),\n \"bidSizeLevel5\" : NumberInt(0),\n \"askLevel1\" : 51,\n \"askLevel2\" : 52,\n \"askLevel3\" : NumberInt(54),\n \"askLevel4\" : NumberInt(0),\n \"askLevel5\" : NumberInt(0),\n \"askSizeLevel1\" : NumberInt(22),\n \"askSizeLevel2\" : NumberInt(23),\n \"askSizeLevel3\" : NumberInt(24),\n \"askSizeLevel4\" : NumberInt(0),\n \"askSizeLevel5\" : NumberInt(0)\n}\nvar filter = Builders<Quote>.Filter.And(\n Builders<Quote>.Filter.Eq(x => x.SecurityId, int.Parse(assetId, System.Globalization.NumberStyles.Integer)),\n Builders<Quote>.Filter.Gte(x => x.DateTime, from),\n Builders<Quote>.Filter.Lte(x => x.DateTime, to));\n", "text": "This is an example document:Example query as written in C# driver:", "username": "Guy_Gontar" }, { "code": "", "text": "Did you check ESR rule?And if your index-ed fields have lots of same values, then index doesn’t help a lot. (e.g. number of match-ed doc is high)", "username": "Kobe_W" }, { "code": "", "text": "I did, Nothing helpful.", "username": "Guy_Gontar" } ]
Improving Performance for a Find query with filters
2023-07-11T07:09:14.600Z
Improving Performance for a Find query with filters
709
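For the quote-history question above, the usual starting point is an equality-then-range compound index plus a projection narrow enough to be covered by the index. The sketch below uses the field names from the sample document in the thread; the collection name `quotes` is a placeholder, and whether a covering index is worth its RAM cost still depends on the working-set size discussed there.

```javascript
// Equality field first, range field second (the ESR rule).
db.quotes.createIndex({ securityId: 1, dateTime: 1 });

// Optional: include the few returned fields so the query can be covered.
db.quotes.createIndex({ securityId: 1, dateTime: 1, lastPrice: 1, lastVolume: 1 });

const cursor = db.quotes.find(
  {
    securityId: 1111,
    dateTime: { $gte: ISODate("2018-01-01"), $lte: ISODate("2018-01-31") }
  },
  { _id: 0, dateTime: 1, lastPrice: 1, lastVolume: 1 } // projection kept inside the index
);

// totalDocsExamined: 0 in executionStats indicates the query was answered from the index alone.
cursor.explain("executionStats");
```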
null
[ "queries", "node-js", "data-modeling", "compass" ]
[ { "code": "", "text": "hi\ni have mongodb compass with collection in it got subdocument ref to other collection i want to update ref subdocument to only id of, and remove the subdocument\nexample:\ncat{id, code}\nproc{id, title, cat{id, code}\ni want to get proc{id, title, catid} for document in proc.\nthanks", "username": "Mhamed_B" }, { "code": "db.collection.update({},\n[\n {\n $set: {\n catid: \"$cat.catid\"\n }\n },\n {\n $unset: \"cat\"\n }\n],\n{\n multi: true\n})\n", "text": "Something like this?Mongo playground: a simple sandbox to test and share MongoDB queries online", "username": "John_Sewell" }, { "code": "", "text": "hi\ni have tested it but not work and gives {\nacknowledged: true,\ninsertedId: null,\nmatchedCount: 0,\nmodifiedCount: 0,\nupsertedCount: 0\n} why ? “$cat.catid” is as object in mongodb compass", "username": "Mhamed_B" }, { "code": "", "text": "I thought you wanted to replace the aub document with just the reference?Put whats not working in mongo playground so we can take a look, just one document shoukd be ok with the update youre trying to do.Or a screenshot of the document and update.", "username": "John_Sewell" }, { "code": "", "text": "matchedCount: 0,If the update is not matching anything in the filter to update then nothing will change, what filter did you apply when running the update?", "username": "John_Sewell" } ]
Update subdocument reference in collection by only id
2023-07-15T08:21:30.575Z
Update subdocument reference in collection by only id
562
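The matchedCount: 0 result reported above usually means the update's filter selected no documents, not that the pipeline itself is wrong. Below is a minimal sketch of the same pipeline-style update (MongoDB 4.2 or newer) with an explicit filter; the collection name and the `cat.id` path are assumptions based on the example in the thread, so adjust them to match how the subdocument is actually stored (for example `cat._id`).

```javascript
db.proc.updateMany(
  { "cat.id": { $exists: true } },      // match only documents that still embed the subdocument
  [
    { $set: { catid: "$cat.id" } },     // keep just the reference value
    { $unset: "cat" }                   // drop the embedded subdocument
  ]
);
// Inspect matchedCount / modifiedCount in the result: 0 matched means the filter found nothing.
```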
null
[]
[ { "code": "", "text": "Hi everyone, Im a computer science student at Western and founded the Women in Computer Science & Developer Student Club as lead this year. I am passionate about technical literacy and community organizing and like to organize or attend local meetups, as well as reading up on tech news/trying new technologies.\nI have a tech blog at Grace Gong - DEV Community\nLooking forward to help organize events this yr!", "username": "Grace_Gong" }, { "code": "", "text": "Welcome to the MongoDB Community @Grace_Gong!It’s great to hear that you’re planning to organize community events soon! I look forward to seeing all the wonderful things you have in store.Hope you’ll find this community to be a valuable resource for all things MongoDB!", "username": "Harshit" }, { "code": "", "text": "Hi Grace! Looking forward to you and @chris combining your super powers for the Toronto MUG!", "username": "Veronica_Cooley-Perry" }, { "code": "", "text": "Welcome to the MUG family @Grace_Gong", "username": "eliehannouch" } ]
Introducing myself as a new MUG Leader
2023-07-14T15:31:14.455Z
Introducing myself as a new MUG Leader
649
null
[ "aggregation" ]
[ { "code": "[\n {\n $addFields: {\n category: {\n $cond: {\n if: {\n $in: [\n \"$name\",\n [\n \"Name8\",\n \"Name7\"\n ]\n ]\n },\n then: \"category1\",\n else: {\n $cond: {\n if: {\n $in: [\n \"$name\",\n [\n \"name3\",\n \"name 4\"\n\n ]\n ]\n },\n then: \"category2\",\n else: {\n $cond: {\n if: {\n $in: [\n \"name\",\n [\n \"Name1\",\n \"Name2\"\n ]\n ]\n },\n then: \"category3\",\n else: \"other\"\n }\n }\n }\n }\n }\n }\n }\n }\n]\n", "text": "Hey, I currently have a bar chart representing some data.\nName on x axis and amount on Y Axis.\nSo a pretty basic bar chart.But I want to have some of the bars be the same color as others. So I tried using the query for a category:But the problem here is that its not really made (I guess?) for this kind of query there because my bars become very thin and missplaced (too much right/left)I added the new field category in Series.Greetings", "username": "Julian_Fuchs" }, { "code": "", "text": "Change your chart from a Grouped Bar to a Stacked Bar. That way the bars will retain their maximum width.Tom", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Change color for single bars?
2023-07-14T22:05:46.378Z
Change color for single bars?
530
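The nested $cond stages above can be collapsed into a single $switch, which is easier to extend when more categories are added. Note that the innermost condition in the original compares the literal string "name" instead of the field path "$name", so Name1 and Name2 would always fall into "other"; the sketch below assumes the field path was intended.

```javascript
[
  {
    $addFields: {
      category: {
        $switch: {
          branches: [
            { case: { $in: ["$name", ["Name8", "Name7"]] }, then: "category1" },
            { case: { $in: ["$name", ["name3", "name 4"]] }, then: "category2" },
            { case: { $in: ["$name", ["Name1", "Name2"]] }, then: "category3" }
          ],
          default: "other"
        }
      }
    }
  }
]
```

With `category` as the series field and the chart switched to a stacked bar as suggested, bars keep their full width while sharing a colour per category.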
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "", "text": "Hi\ni have database mongodb with nodje js mongoose, suddly my data has become in previous state i have lost some columns create via nodejs, my code nodejs is same no change bu data have changed to previous state any way to recover data as my code node js.\nis it a bug or what?\nthanks", "username": "Mhamed_B" }, { "code": "", "text": "That does seem “strange”, could you upload the code somewhere to look over it? Could it be the code was inserting default data?What environment are you running against? Is it a local instance, or a docker instance or an Atlas instance (and if so, what tier)?John", "username": "John_Sewell" }, { "code": "", "text": "Hi and thanks for reply\nyes i very very strange i got this twice or three, i dont know why, i have a programme angular mean stack, mongoose installed in node js express, the programme keep the same but data return to previous state and i lost change i must recreate them in mongodb shell (can i force schema from node to mongodb?)\nthe env is windows 11,all in local machine, mongodb compass community.\nthanks for your help", "username": "Mhamed_B" }, { "code": "", "text": "sorry i have logs can i restore data from logs?\nthanks", "username": "Mhamed_B" }, { "code": "", "text": "thanks for your help i found my problem it was service mongodb off, i shutdown all mongod, mongodb compass and restart service from windows service.msc it work finally with my recent data (bizzard!!!).\nthanks", "username": "Mhamed_B" }, { "code": "", "text": "So you had multiple servers running one of which had the new data and one the old?", "username": "John_Sewell" }, { "code": "", "text": "no i have one only and localy", "username": "Mhamed_B" } ]
My data is returned to previous state
2023-07-05T19:24:24.261Z
My data is returned to previous state
472
null
[]
[ { "code": "{\"message\":\"Value is not an object: undefined\",\"name\":\"TypeError\"}, HTTP Status Code=400}const jwt = require('jsonwebtoken')\nlet token = jwt.sign({ ...});\nexports = function(){\n const jwt = require('jsonwebtoken')\n let token = jwt.sign({ foo: 'bar' }, \"secret\", { algorithm: 'RS256' });\n return token;\n};\n", "text": "I am implementing an update to my IOS app to provide Sign in with Apple. Apples policy also demands the ability to revoke the user token. This requires the construction and presentation of a specific Json Web Token to the Apple REST API and I need to use an Atlas Function to structure a signed JWT (using my private key) for presentation to that API accordingly. However, I have been unable to obtain a token as I expected and simply receive the following message\n{\"message\":\"Value is not an object: undefined\",\"name\":\"TypeError\"}, HTTP Status Code=400}\non the line in my makeJWT function that is meant to get the token. I can confirm that I have (apparently successfully) added the jsonwebtoken package to dependencies.To satisfy Apples requirements this has a reasonably extended and complicated object definition (represented by … in the brackets) and I suspected that I had made an error in constructing that. So to get past first base I created a simpler makeJWTTest function with a simpler payload to see if I can call jwt.sign({…}) successfully at all as follows:…which I would expect to deliver a token (albeit one that would not be suitable for revoking an Apple token!).\nBut this also results in the same error message. I am clearly missing something fundamental. I should be grateful for any help that can be provided for me to make some progress to at least be able to generate a JWT (any JWT!). Thanks. Chris", "username": "Chris_Lindsey" }, { "code": "> ran at 1689288343169\n> took \n> error: \n'crypto' module: error signing message\n", "text": "I noticed another related post from Dec 22 titled:\n[Can’t use JWT (jsonwebtoken) in mongodb realm functions to verify]\nby @Bhushan_Bharat with comment from @Raphael_Eskander identifying the jsonwebtoken version as a potential cause due to incompatibilities between the latest versions of jsonwebtoken and the node version on which the Realm functions run. I have therefore tried changing the version of jsonwebtoken to both 8.0.0 and 8.5.1. Unfortunately that just changed the error.", "username": "Chris_Lindsey" }, { "code": "", "text": "Update. With jsonwebtoken at version 8.5.1 and the algorithm changed to HS512 or HS256 the signed token is created successfully. Requiring the use of the RS256 or ES256 algorithms however produces the crypto module error. Is there perhaps another dependency or environmental setting required to allow these algorithms to work as well?", "username": "Chris_Lindsey" }, { "code": "`-----BEGIN PRIVATE KEY-----`\n\n`-----END PRIVATE KEY-----`\n", "text": "In order to implement Sign In with Apple, I have been working through @Sonisan excellent post titled “Apple sign in: revoke token”, but came across the issues raised in this post, which with his direct help and advice and another useful post titled “Can’t use JWT (jsonwebtoken) in mongodb realm functions to verify” from by @Bhushan_Bharat with comment from @Raphael_Eskander have now been overcome.\nIssue 1: Since his original post jsonwebtoken has been upgraded to version 9 which is incompatible with the node level in which the Mongo DB Realm functions run. 
As a result the jsonwebtoken dependency needs to be set at version 8.5.1.\nIssue 2: you need to include the start and close parts of your private key (easily missed) ie include:", "username": "Chris_Lindsey" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Function to create a JWT
2023-07-13T08:51:28.015Z
Function to create a JWT
680
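Pulling the pieces of the thread together, a signing function for Apple's token-revocation endpoint might look roughly like the sketch below. It assumes the jsonwebtoken dependency is pinned to 8.5.1, that the .p8 key (including its BEGIN/END PRIVATE KEY lines) is stored in an App Services Value or Secret named `appleSignInKey`, and that the team ID, key ID and bundle ID shown are placeholders; the claim set follows Apple's documented requirements for the client secret rather than anything confirmed in the thread.

```javascript
exports = function () {
  const jwt = require("jsonwebtoken");

  // PEM-formatted .p8 key, stored as a Value/Secret rather than hard-coded.
  const privateKey = context.values.get("appleSignInKey");

  const now = Math.floor(Date.now() / 1000);
  return jwt.sign(
    {
      iss: "YOUR_TEAM_ID",            // Apple Developer team ID (placeholder)
      iat: now,
      exp: now + 15 * 60,             // short-lived client secret
      aud: "https://appleid.apple.com",
      sub: "com.example.yourapp"      // the app's bundle/client ID (placeholder)
    },
    privateKey,
    { algorithm: "ES256", keyid: "YOUR_KEY_ID" } // kid of the Sign in with Apple key (placeholder)
  );
};
```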
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "", "text": "My team and I have recently figured out how to create a cumulative graph across multiple categories in Mongodb Atlas Charts. The X axis is time and the Y axis, or response variable, is the cumulative total. What we are looking to do now is to insert something in the aggregation pipeline that allows us to reset the cumulative total for each category to zero after every day. Any ideas or suggestions will be appreciated. Thank you.", "username": "Matthew_Taylor" }, { "code": "$sum", "text": "Hi @Matthew_Taylor -You can’t do this with the “compare values” option in Charts, but you can likely achieve this by using an aggregation pipeline that uses Window Functions. I don’t have the exact query for you, but you should be able use a $sum window operator over the preceding day to achieve this.Tom", "username": "tomhollander" } ]
Periodically resetting cumulative data
2023-07-10T17:45:24.398Z
Periodically resetting cumulative data
548
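A sketch of the window-function idea mentioned above: partition the running total by both the chart's category and the calendar day, so the $sum restarts from zero whenever the day changes. The field names (timestamp, category, value) are assumptions, and $setWindowFields with $dateTrunc requires MongoDB 5.0 or newer.

```javascript
[
  {
    $setWindowFields: {
      partitionBy: {
        category: "$category",
        day: { $dateTrunc: { date: "$timestamp", unit: "day" } } // new day => new partition => total resets
      },
      sortBy: { timestamp: 1 },
      output: {
        cumulativeValue: {
          $sum: "$value",
          window: { documents: ["unbounded", "current"] }
        }
      }
    }
  }
]
```

The chart would then plot `cumulativeValue` on the Y axis directly instead of using the built-in "compare values" cumulative option.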
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "We have a requirement to get the total number of records from web application. We are using the mongo DB count function to get the total records count from collection.Problem: we have millions of records in a collections and the count function is not performant efficient to get the result.What is the quickest way or alternative soultion to get the total number of records in a collection through mongo query ?", "username": "Venkatesh_Bingi" }, { "code": "", "text": "there are a few count method:count and countdocuments and estimatedDocumentCount.you can check which can meet your requirements.But another way is to maintain a total number of docs on your own, which may still be inconsistent sometimes. (can happen without using a transaction).", "username": "Kobe_W" } ]
Mongo Query Total number of records : count
2023-07-14T17:18:27.066Z
Mongo Query Total number of records : count
486
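To illustrate the trade-off mentioned above with the Node.js driver (the collection name is a placeholder): countDocuments gives an exact, filterable answer but has to examine matching index entries, while estimatedDocumentCount reads collection metadata and returns almost instantly at the cost of possible staleness.

```javascript
// Exact count for a filter; can be slow on very large collections.
const open = await db.collection("orders").countDocuments({ status: "open" });

// Approximate total for the whole collection; metadata lookup, effectively instant.
const total = await db.collection("orders").estimatedDocumentCount();
```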
null
[ "node-js", "mongoose-odm", "mongodb-shell" ]
[ { "code": "const dbUrl = 'mongodb://localhost:27017/macro-tickets';\nmongosh Current Mongosh Log ID:\t64b17475ab3453969945af15\n Connecting to:\t\tmongodb://127.0.0.1:27017/? directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.0\n Using MongoDB:\t\t6.0.1\n Using Mongosh:\t\t1.6.0\ndbUrlmongodb://127.0.0.1:27017/macro-ticketsmongodb://0.0.0.0:27017/macro-ticketsmongodb://127.0.0.1/macro-ticketsmongodb://127.0.0.1/macro-ticketssudo mongodMongoServerSelectionError: connect ECONNREFUSED ::1:27017\n at Timeout._onTimeout (/Users/myname/Downloads/mtapp/node_modules/connect-mongo/node_modules/mongodb/lib/core/sdam/topology.js:439:30)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7)`\n MongoServerSelectionError: connect ECONNREFUSED ::1:27017\n at Timeout._onTimeout (/Users/myname/Downloads/mtapp/node_modules/connect-mongo/node_modules/mongodb/lib/core/sdam/topology.js:439:30)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7)\n MongoServerSelectionError: connect ECONNREFUSED ::1:27017\n at Timeout._onTimeout (/Users/myname/Downloads/mtapp/node_modules/connect-mongo/node_modules/mongodb/lib/core/sdam/topology.js:439:30)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7)\n connection error: MongooseServerSelectionError: connect ECONNREFUSED ::1:27017\n at Connection.openUri (/Users/myname/Downloads/mtapp/node_modules/mongoose/lib/connection.js:796:32)\n at /Users/myname/Downloads/mtapp/node_modules/mongoose/lib/index.js:328:10\n at /Users/myname/Downloads/mtapp/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5\n at new Promise (<anonymous>)\n at promiseOrCallback (/Users/myname/Downloads/mtapp/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)\n at Mongoose._promiseOrCallback (/Users/myname/Downloads/mtapp/node_modules/mongoose/lib/index.js:1149:10)\n at Mongoose.connect (/Users/myname/Downloads/mtapp/node_modules/mongoose/lib/index.js:327:20)\n at Object.<anonymous> (/Users/myname/Downloads/mtapp/app.js:38:10)\n at Module._compile (node:internal/modules/cjs/loader:1218:14)\n at Module._extensions..js (node:internal/modules/cjs/loader:1272:10) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) { 'localhost:27017' => [ServerDescription] },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n logicalSessionTimeoutMinutes: undefined\n }\n }\n node:internal/process/promises:289\n triggerUncaughtException(err, true /* fromPromise */);\n ^\n\n MongooseServerSelectionError: connect ECONNREFUSED ::1:27017\n at Connection.openUri (/Users/myname/Downloads/mtapp/node_modules/mongoose/lib/connection.js:796:32)\n at /Users/myname/Downloads/mtapp/node_modules/mongoose/lib/index.js:328:10\n at /Users/myname/Downloads/mtapp/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5\n at new Promise (<anonymous>)\n at promiseOrCallback (/Users/myname/Downloads/mtapp/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)\n at Mongoose._promiseOrCallback (/Users/myname/Downloads/mtapp/node_modules/mongoose/lib/index.js:1149:10)\n at Mongoose.connect (/Users/myname/Downloads/mtapp/node_modules/mongoose/lib/index.js:327:20)\n at Object.<anonymous> (/Users/myname/Downloads/mtapp/app.js:38:10)\n at Module._compile (node:internal/modules/cjs/loader:1218:14)\n at Module._extensions..js (node:internal/modules/cjs/loader:1272:10) {\n reason: 
TopologyDescription {\n type: 'Unknown',\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n _hostAddress: HostAddress { isIPv6: false, host: 'localhost', port: 27017 },\n address: 'localhost:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 166365218,\n lastWriteDate: 0,\n error: MongoNetworkError: connect ECONNREFUSED ::1:27017\n at connectionFailureError (/Users/myname/Downloads/mtapp/node_modules/mongodb/lib/cmap/connect.js:293:20)\n at Socket.<anonymous> (/Users/myname/Downloads/mtapp/node_modules/mongodb/lib/cmap/connect.js:267:22)\n at Object.onceWrapper (node:events:628:26)\n at Socket.emit (node:events:513:28)\n at emitErrorNT (node:internal/streams/destroy:151:8)\n at emitErrorCloseNT (node:internal/streams/destroy:116:3)\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21)\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n logicalSessionTimeoutMinutes: undefined\n }\n }\n\n Node.js v19.3.0\n [nodemon] app crashed - waiting for file changes before starting...\n", "text": "I recently started working on a script unrelated to the app I’ve been building and now I can’t get my app to connect to the database again. It’s been about three weeks since I was in the app and the db connection worked beforehand. I’m not totally sure how long MongoDB stays connected so bringing this up in case it’s relevant.I’ve tried a bunch of things from stack overflow posts and youtube but can’t get my connection to work.Connection stringIf I run mongosh in my terminal, I get a connection (below) and noticed that the db route string is different from what I have above…Attempted fixesAfter noticing the db route is different in the terminal, I changed dbUrl to mongodb://127.0.0.1:27017/macro-tickets, mongodb://0.0.0.0:27017/macro-tickets, mongodb://127.0.0.1/macro-tickets, and mongodb://127.0.0.1/macro-tickets but none work.I also ran sudo mongod in VSCode’s terminal but that didn’t help.I included my full error’s below (sorry I know it’s a lot of text) for context but any ideas why I can’t get this thing to work?ErrorsError from the clientError in my VSCode", "username": "Chase_Schlachter" }, { "code": "\ntail -n 20 $(brew --prefix)/var/log/mongodb/mongo.log\n{\"t\":{\"$date\":\"2023-07-14T11:14:46.106-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn4\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:61881\",\"client\":\"conn4\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"4.10.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"x64\",\"version\":\"21.6.0\"},\"platform\":\"Node.js v16.18.0, LE (unified)\",\"version\":\"4.10.0|1.6.0\",\"application\":{\"name\":\"mongosh 1.6.0\"}}}}\n{\"t\":{\"$date\":\"2023-07-14T11:14:56.941-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:61885\",\"uuid\":\"7c9e0af1-1eaf-49ff-bc48-a2f262e8adaf\",\"connectionId\":5,\"connectionCount\":5}}\n{\"t\":{\"$date\":\"2023-07-14T11:14:56.994-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn5\",\"msg\":\"client 
metadata\",\"attr\":{\"remote\":\"127.0.0.1:61885\",\"client\":\"conn5\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"4.10.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"x64\",\"version\":\"21.6.0\"},\"platform\":\"Node.js v16.18.0, LE (unified)\",\"version\":\"4.10.0|1.6.0\",\"application\":{\"name\":\"mongosh 1.6.0\"}}}}\n{\"t\":{\"$date\":\"2023-07-14T12:10:26.836-05:00\"},\"s\":\"W\", \"c\":\"NETWORK\", \"id\":4615610, \"ctx\":\"conn1\",\"msg\":\"Failed to check socket connectivity\",\"attr\":{\"error\":{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"Connection closed by peer\"}}}\n{\"t\":{\"$date\":\"2023-07-14T12:10:26.837-05:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn1\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":344305}}\n{\"t\":{\"$date\":\"2023-07-14T12:10:27.099-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:61878\",\"uuid\":\"99630538-48de-46ea-abd0-556112ea0df4\",\"connectionId\":1,\"connectionCount\":4}}\n{\"t\":{\"$date\":\"2023-07-14T12:10:35.214-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:62668\",\"uuid\":\"5c7dec68-063a-46bd-ad9a-6425fd459654\",\"connectionId\":6,\"connectionCount\":5}}\n{\"t\":{\"$date\":\"2023-07-14T12:10:35.221-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn6\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:62668\",\"client\":\"conn6\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"4.10.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"x64\",\"version\":\"21.6.0\"},\"platform\":\"Node.js v16.18.0, LE (unified)\",\"version\":\"4.10.0|1.6.0\",\"application\":{\"name\":\"mongosh 1.6.0\"}}}}\n{\"t\":{\"$date\":\"2023-07-14T12:17:44.199-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:63007\",\"uuid\":\"cf6d1cf6-ef70-4004-9ee8-8e111a396924\",\"connectionId\":7,\"connectionCount\":6}}\n{\"t\":{\"$date\":\"2023-07-14T12:17:44.199-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn2\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:61879\",\"uuid\":\"70450628-bfb7-4ea9-8523-29c5623259d9\",\"connectionId\":2,\"connectionCount\":5}}\n{\"t\":{\"$date\":\"2023-07-14T12:17:44.200-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn4\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:61881\",\"uuid\":\"20cb14c6-1e92-4ba8-b54d-b7c5696dc976\",\"connectionId\":4,\"connectionCount\":4}}\n{\"t\":{\"$date\":\"2023-07-14T12:17:44.200-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn3\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:61880\",\"uuid\":\"cdad86ba-69b8-426d-8776-6ad883172a4b\",\"connectionId\":3,\"connectionCount\":3}}\n{\"t\":{\"$date\":\"2023-07-14T12:17:44.203-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn7\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:63007\",\"client\":\"conn7\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"4.10.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"x64\",\"version\":\"21.6.0\"},\"platform\":\"Node.js v16.18.0, LE (unified)\",\"version\":\"4.10.0|1.6.0\",\"application\":{\"name\":\"mongosh 
1.6.0\"}}}}\n{\"t\":{\"$date\":\"2023-07-14T12:17:50.949-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:63008\",\"uuid\":\"efccaf7a-f563-4367-8e39-69397f8036cf\",\"connectionId\":8,\"connectionCount\":4}}\n{\"t\":{\"$date\":\"2023-07-14T12:17:50.949-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:63009\",\"uuid\":\"168c560e-c671-45e9-a257-e5aff01f76ac\",\"connectionId\":9,\"connectionCount\":5}}\n{\"t\":{\"$date\":\"2023-07-14T12:17:50.953-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn8\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:63008\",\"client\":\"conn8\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"4.10.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"x64\",\"version\":\"21.6.0\"},\"platform\":\"Node.js v16.18.0, LE (unified)\",\"version\":\"4.10.0|1.6.0\",\"application\":{\"name\":\"mongosh 1.6.0\"}}}}\n{\"t\":{\"$date\":\"2023-07-14T12:17:50.955-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn9\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:63009\",\"client\":\"conn9\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"4.10.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"x64\",\"version\":\"21.6.0\"},\"platform\":\"Node.js v16.18.0, LE (unified)\",\"version\":\"4.10.0|1.6.0\",\"application\":{\"name\":\"mongosh 1.6.0\"}}}}\n{\"t\":{\"$date\":\"2023-07-14T12:17:50.958-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:63010\",\"uuid\":\"f4f52d36-2ec4-4b7f-a37e-34649cdcf653\",\"connectionId\":10,\"connectionCount\":6}}\n{\"t\":{\"$date\":\"2023-07-14T12:17:50.960-05:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn10\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:63010\",\"client\":\"conn10\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"4.10.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"x64\",\"version\":\"21.6.0\"},\"platform\":\"Node.js v16.18.0, LE (unified)\",\"version\":\"4.10.0|1.6.0\",\"application\":{\"name\":\"mongosh 1.6.0\"}}}}\n{\"t\":{\"$date\":\"2023-07-14T13:17:06.880-05:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":5479200, \"ctx\":\"TTLMonitor\",\"msg\":\"Deleted expired documents using index\",\"attr\":{\"namespace\":\"macro-tickets.sessions\",\"index\":\"expires_1\",\"numDeleted\":0,\"durationMillis\":162}}\n", "text": "I think the issue has to do with mongodb not running but it’s throwing me off that it shows as running from the terminal.Here are my last 20 lines of logs:", "username": "Chase_Schlachter" }, { "code": "", "text": "This answer from stack plus a computer restart worked for me.", "username": "Chase_Schlachter" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoServerSelectionError: connect ECONNREFUSED ::1:27017 - worked fine 3 weeks ago but I worked on another program between
2023-07-14T18:22:50.528Z
MongoServerSelectionError: connect ECONNREFUSED ::1:27017 - worked fine 3 weeks ago but I worked on another program between
569
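The usual cause of ECONNREFUSED ::1:27017 in a setup like this is that "localhost" resolves to the IPv6 loopback while mongod is only listening on 127.0.0.1, which is consistent with the linked fix. Two common workarounds are sketched below (not confirmed as the poster's exact change): point the connection string at 127.0.0.1, or ask the driver to resolve the hostname as IPv4.

```javascript
const mongoose = require("mongoose");

// 1. Use the IPv4 loopback explicitly instead of "localhost":
await mongoose.connect("mongodb://127.0.0.1:27017/macro-tickets");

// 2. Or keep "localhost" but force IPv4 name resolution:
await mongoose.connect("mongodb://localhost:27017/macro-tickets", { family: 4 });
```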
null
[ "aggregation", "queries", "indexes", "atlas-search" ]
[ { "code": "{\n\"_id\": \"<id>\",\n\"name\": \"James Smith\",\n\"aliases\": [\n\"Jimmy Smith\",\n\"Jammy Smith\",\n\"Johnny Smith\"\n]\n}\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"aliases\": {\n \"type\": \"string\"\n },\n \"name\": {\n \"type\": \"string\"\n }\n }\n },\n \"storedSource\": {\n \"include\": [\n \"aliases\",\n \"names\",\n ]\n }\n}\n[\n {\n '$search': {\n 'index' : 'my-index',\n 'text' : {\n 'query': \"<NAME>\",\n 'path' : ['name', 'aliases'],\n },\n 'scoreDetails': true,\n }\n }, {\n \"$project\": {\n \"_id\" : 0,\n \"name\" : \"$name\",\n \"aliases\" : \"$aliases\",\n 'score': {'$meta': 'searchScore'}\n }\n }, {\n \"$sort\": {\n \"score\": -1\n }\n }\n]\nlucene.standardlucene.keyword", "text": "Greetings,I am currently managing a MongoDB 5.0 collection that stores data in the following structure:This collection encompasses more than 100k records. To facilitate text search, I have established an index as follows:My search function executes a query on this index in the following manner:However, I’m facing an issue where a search for “Gunther Lopez Smith” returns results that include “James Smith”. This issue extends to returning results for all instances of “Smith” in the database, resulting in approximately 2k records.While I am currently using lucene.standard, attempts to switch to lucene.keyword yielded no records for a “Smith James” query and also seemed to introduce case-sensitivity.My objective is to refine the search such that a query for “Johnny Smith” returns “James Smith”, and potentially other matches for the phrase “smith James”, but without including all entries with “Smith”. How can I accomplish this level of specificity in my text search results?Thank you for your assistance!", "username": "adhishthite" }, { "code": "", "text": "Hi @adhishthite , welcome to the MongoDB Community! Have you seen this article on Partial Matching in Atlas Search? I think the phrase operator with the slop parameter specifying the allowable distance between words may help achieve the functionality you are looking for.Let me know if this helps!", "username": "amyjian" }, { "code": "slopphrase", "text": "Hi @amyjian,Thanks for your response.I have already tried using slop and phrase operators, but they do not work in cases where order is reversed. At least for me, they didn’t.", "username": "adhishthite" } ]
Refining Text Search Results in MongoDB 5.0 - Specificity Over General Matches
2023-07-14T17:29:02.696Z
Refining Text Search Results in MongoDB 5.0 - Specificity Over General Matches
493
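One way to get order-insensitive matching that still requires every token, rather than any one of them, is to AND one text clause per token inside a compound.must. The sketch below reuses the index name and paths from the thread; the whitespace tokenisation is an assumed client-side step, and scoring will differ from a single text query.

```javascript
const query = "Smith James";

const mustClauses = query
  .trim()
  .split(/\s+/)
  .map((token) => ({ text: { query: token, path: ["name", "aliases"] } }));

const pipeline = [
  { $search: { index: "my-index", compound: { must: mustClauses } } },
  { $project: { _id: 0, name: 1, aliases: 1, score: { $meta: "searchScore" } } }
];
```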
https://www.mongodb.com/…bf9a5065e3b8.png
[ "flutter", "dart" ]
[ { "code": "import 'dart:convert';\nimport 'dart:io';\nimport 'package:flutter/services.dart' show rootBundle;\nimport 'package:flutter/material.dart';\nimport 'package:path_provider/path_provider.dart';\nimport 'package:realm/realm.dart';\nimport 'model.dart';\n\n// part 'app.g.dart';\n\n@RealmModel() // define a data model class named `_Car`.\nclass _Car {\n late String make;\n\n late String model;\n\n int? kilometers = 500;\n}\n\nvoid main() async {\n WidgetsFlutterBinding.ensureInitialized();\n final realmConfig = {\n \"config_version\": 20210101,\n \"app_id\": \"teescanfire-jumnr\",\n \"name\": \"flutter_flexible_sync\",\n \"location\": \"DE-FF\",\n \"deployment_model\": \"LOCAL\"\n };\n String appId = \"teescanfire-jumnr\";\n // String appId = realmConfig['app_id'];\n MyApp.allTasksRealm = await createRealm(appId, CollectionType.allTasks);\n MyApp.importantTasksRealm =\n await createRealm(appId, CollectionType.importantTasks);\n MyApp.normalTasksRealm = await createRealm(appId, CollectionType.normalTasks);\n\n runApp(const MyApp());\n}\n\nenum CollectionType { allTasks, importantTasks, normalTasks }\n\nFuture<Realm> createRealm(String appId, CollectionType collectionType) async {\n final appConfig = AppConfiguration(appId);\n final app = App(appConfig);\n final user = await app.logIn(Credentials.anonymous());\n\n final flxConfig = Configuration.flexibleSync(user, [Task.schema],\n path: await absolutePath(\"db_${collectionType.name}.realm\"));\n var realm = Realm(flxConfig);\n print(\"Created local realm db at: ${realm.config.path}\");\n\n final RealmResults<Task> query;\n\n if (collectionType == CollectionType.allTasks) {\n query = realm.all<Task>();\n } else {\n query = realm.query<Task>(r'isImportant == $0',\n [collectionType == CollectionType.importantTasks]);\n }\n\n realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.add(query);\n });\n\n await realm.subscriptions.waitForSynchronization();\n await realm.syncSession.waitForDownload();\n print(\"Syncronization completed for realm: ${realm.config.path}\");\n\n return realm;\n}\n\nFuture<String> absolutePath(String fileName) async {\n final appDocsDirectory = await getApplicationDocumentsDirectory();\n final realmDirectory = '${appDocsDirectory.path}/mongodb-realm';\n if (!Directory(realmDirectory).existsSync()) {\n await Directory(realmDirectory).create(recursive: true);\n }\n return \"$realmDirectory/$fileName\";\n}\n\nclass MyApp extends StatelessWidget {\n static late Realm allTasksRealm;\n static late Realm importantTasksRealm;\n static late Realm normalTasksRealm;\n\n const MyApp({Key? key}) : super(key: key);\n\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n title: 'Flutter Demo',\n theme: ThemeData(\n primarySwatch: Colors.green,\n ),\n home: const MyHomePage(title: 'Flutter Realm Flexible Sync'),\n );\n }\n}\n\nclass MyHomePage extends StatefulWidget {\n const MyHomePage({Key? 
key, required this.title}) : super(key: key);\n\n final String title;\n\n @override\n State<MyHomePage> createState() => _MyHomePageState();\n}\n\nclass _MyHomePageState extends State<MyHomePage> {\n int _allTasksCount = MyApp.allTasksRealm.all<Task>().length;\n int _importantTasksCount = MyApp.importantTasksRealm.all<Task>().length;\n int _normalTasksCount = MyApp.normalTasksRealm.all<Task>().length;\n\n void _createImportantTasks() async {\n // print(\"going to try insert method I found on GitHub\");\n // MyApp.importantTasksRealm.subscriptions.update((mutableSubscriptions) {\n // mutableSubscriptions.add(MyApp.importantTasksRealm.query<Task>(\n // r'isCompleted == $0 AND isImportant == $1', [true, false]));\n // });\n // await MyApp.importantTasksRealm.subscriptions.waitForSynchronization();\n // MyApp.normalTasksRealm.write(() {\n // MyApp.importantTasksRealm\n // .add(Task(ObjectId(), \"Send an email\", true, false));\n // });\n\n print(\"Starting to create an important task\");\n var importantTasks = MyApp.importantTasksRealm.all<Task>();\n print(\"Starting to count an important task\");\n var allTasksCount = MyApp.allTasksRealm.all<Task>();\n print(\"Starting to write important task\");\n MyApp.allTasksRealm.write(() {\n MyApp.allTasksRealm.add(Task(ObjectId(),\n \"Important task ${importantTasks.length + 1}\", false, true));\n });\n print(\"Waiting for upload\");\n await MyApp.allTasksRealm.syncSession.waitForUpload();\n print(\"Waiting for sync\");\n await MyApp.importantTasksRealm.subscriptions.waitForSynchronization();\n print(\"Setting State\");\n setState(() {\n _importantTasksCount = importantTasks.length;\n _allTasksCount = allTasksCount.length;\n });\n }\n\n void _createNormalTasks() async {\n var normalTasks = MyApp.normalTasksRealm.all<Task>();\n var allTasksCount = MyApp.allTasksRealm.all<Task>();\n MyApp.allTasksRealm.write(() {\n MyApp.allTasksRealm.add(Task(\n ObjectId(), \"Normal task ${normalTasks.length + 1}\", false, false));\n });\n await MyApp.allTasksRealm.syncSession.waitForUpload();\n await MyApp.normalTasksRealm.subscriptions.waitForSynchronization();\n setState(() {\n _normalTasksCount = normalTasks.length;\n _allTasksCount = allTasksCount.length;\n });\n }\n\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n appBar: AppBar(\n title: Text(widget.title),\n ),\n body: Center(\n child: Column(\n mainAxisAlignment: MainAxisAlignment.center,\n children: <Widget>[\n const Text('Important tasks count:',\n style: TextStyle(fontWeight: FontWeight.bold)),\n Text('$_importantTasksCount',\n style: Theme.of(context).textTheme.headline4),\n const Text('Normal tasks count:',\n style: TextStyle(fontWeight: FontWeight.bold)),\n Text('$_normalTasksCount',\n style: Theme.of(context).textTheme.headline4),\n const Text('All tasks count:',\n style: TextStyle(fontWeight: FontWeight.bold)),\n Text('$_allTasksCount',\n style: Theme.of(context).textTheme.headline4),\n ],\n ),\n ),\n floatingActionButton: Stack(\n children: <Widget>[\n Padding(\n padding: const EdgeInsets.only(left: 0.0),\n child: Align(\n alignment: Alignment.bottomLeft,\n child: FloatingActionButton(\n onPressed: _createImportantTasks,\n tooltip: 'High priority task',\n child: const Icon(Icons.add),\n )),\n ),\n Padding(\n padding: const EdgeInsets.only(right: 40.0),\n child: Align(\n alignment: Alignment.bottomRight,\n child: FloatingActionButton(\n onPressed: _createNormalTasks,\n tooltip: 'Normal task',\n child: const Icon(Icons.add),\n )),\n ),\n ],\n ),\n floatingActionButtonLocation: 
FloatingActionButtonLocation.startFloat);\n }\n}\n\n// class _MyHomePageState extends State<MyHomePage> {\n// int _allTasksCount = 0;\n// int _importantTasksCount = 0;\n// int _normalTasksCount = 0;\n\n// @override\n// void initState() {\n// super.initState();\n// _fetchTaskCounts();\n// }\n\n// void _fetchTaskCounts() {\n// _allTasksCount = MyApp.allTasksRealm.all<Task>().length;\n// _importantTasksCount = MyApp.importantTasksRealm.all<Task>().length;\n// _normalTasksCount = MyApp.normalTasksRealm.all<Task>().length;\n// }\n\n// void _createTask(bool isImportant) async {\n// MyApp.allTasksRealm.write(() {\n// MyApp.allTasksRealm.add(\n// Task(ObjectId(), \"New task\", false, isImportant),\n// );\n// });\n\n// await MyApp.allTasksRealm.syncSession.waitForUpload();\n// await MyApp.allTasksRealm.subscriptions.waitForSynchronization();\n\n// setState(() {\n// _fetchTaskCounts();\n// });\n// }\n\n// void writeObjectToDatabase() async {}\n// @override\n// Widget build(BuildContext context) {\n// return Scaffold(\n// appBar: AppBar(\n// title: Text(widget.title),\n// ),\n// body: Center(\n// child: Column(\n// mainAxisAlignment: MainAxisAlignment.center,\n// children: <Widget>[\n// const Text(\n// 'Important tasks count:',\n// style: TextStyle(fontWeight: FontWeight.bold),\n// ),\n// Text(\n// '$_importantTasksCount',\n// style: Theme.of(context).textTheme.headline4,\n// ),\n// const Text(\n// 'Normal tasks count:',\n// style: TextStyle(fontWeight: FontWeight.bold),\n// ),\n// Text(\n// '$_normalTasksCount',\n// style: Theme.of(context).textTheme.headline4,\n// ),\n// const Text(\n// 'All tasks count:',\n// style: TextStyle(fontWeight: FontWeight.bold),\n// ),\n// Text(\n// '$_allTasksCount',\n// style: Theme.of(context).textTheme.headline4,\n// ),\n// ],\n// ),\n// ),\n// floatingActionButton: FloatingActionButton(\n// onPressed: () => _createTask(true),\n// tooltip: 'Create Task',\n// child: const Icon(Icons.add),\n// ),\n// );\n// }\n// }\n\n", "text": "Hi all, what I am trying to do is to read and write from a MongoDB atlas cloud database from my Flutter application. I am trying to use Realm to do so (I was previously using the mongo_dart plugin which was much simpler but this doesn’t work for web apps). To do so, I downloaded and unzipped the Flutter flexible sync demo available here: https://github.com/realm/realm-dart-samples/tree/main/flutter_flexible_sync. After downloading this file, I just replaced my app ID and ran it. After running the app, I realized that I had a defined schema in mongoDB but pressing the buttons at the bottom of the screen did not write anything to the database. The app is successfully reading from the database but it cannot write to it. You’ll realize that I thought this was due to me not subscribing to it so I pasted (now commented out in the code below) some sample code I found on the Flutter realm package website to create a mutable subscription). I added print statements in the _createImportantTasks function to see if it was exiting somewhere in the middle of the function but it seemed to be working just fine and I don’t have errors in my debug console. All I want to do is to be able to read the “linkID” property from a Links Schema I would like to create by querying for it for a user with the specified “username” property (i.e. finding the linkID of user with the email [email protected]). I also want to be able to insert new users into the Link collection (with a name string property, a linkID string property, and an email string property). 
How can I do this? Here’s the code I was trying out:Here’s how my UI looks (I’d like to press on a button on the screen to write a new object to a specific collection):\n\nimage381×839 31.4 KB\n\nNote: I’ve allowed anonymous access to my database, it’s in development mode, and I’ve allowed access from any IP", "username": "Matthew_Gerges" }, { "code": "", "text": "Hi @Matthew_Gerges,If you look at the logs tab in the your app services dashboard, you’ll notice that your app is triggering compensating writes due to the writes violating your write permissions, which is why your writes are being reverted. You should double check your app logic to ensure it’s compatible with your rules and tweak it accordingly.", "username": "Kiro_Morkos" }, { "code": "Realm CLIassets\\atlas_appdefault_rule.jsonApp Services/Rules/Default roles and filters[ERROR] Realm: CompensatingWriteError message: Client attempted a write that is disallowed by permissions, or modifies an object outside the current query, and the server undid the change category: SyncErrorCategory.session code: SyncSessionErrorCode.compensatingWrite.Realm.logger.level =RealmLogLevel.debug;", "text": "Hi @Matthew_Gerges!\nYou could skip doing all the configurations by hand. There is a Readme file in this sample that explains how you can upload all the settings using Realm CLI . The settings are in folder assets\\atlas_app. It seems you have already configured most of them, but you can compare if something is missing in your App Service. The only thing that you haven’t mentioned are the rules. What are the rules in your App Service? They should be as the one defined in default_rule.json. Do you have write permissions? You can check that in App Services/Rules/Default roles and filters.If this was the case you should’ve seen information printed in the console similar to\n[ERROR] Realm: CompensatingWriteError message: Client attempted a write that is disallowed by permissions, or modifies an object outside the current query, and the server undid the change category: SyncErrorCategory.session code: SyncSessionErrorCode.compensatingWrite.Please, let me know if the issue is related to your permissions! 
You can also set Realm.logger.level =RealmLogLevel.debug; and to investigate the details in the log.", "username": "Desislava_St_Stefanova" }, { "code": "import 'package:realm/realm.dart';\npart 'model.g.dart';\n\n@RealmModel()\nclass _Task {\n @PrimaryKey()\n @MapTo('_id')\n late ObjectId id;\n late String title;\n late bool isCompleted;\n late bool isImportant;\n}\n\n@RealmModel()\nclass _Links {\n @PrimaryKey()\n @MapTo('_id')\n late ObjectId id;\n late String name;\n late String email;\n late bool linkID;\n}\n// GENERATED CODE - DO NOT MODIFY BY HAND\n\npart of 'model.dart';\n\n// **************************************************************************\n// RealmObjectGenerator\n// **************************************************************************\n\nclass Task extends _Task with RealmEntity, RealmObjectBase, RealmObject {\n Task(\n ObjectId id,\n String title,\n bool isCompleted,\n bool isImportant,\n ) {\n RealmObjectBase.set(this, '_id', id);\n RealmObjectBase.set(this, 'title', title);\n RealmObjectBase.set(this, 'isCompleted', isCompleted);\n RealmObjectBase.set(this, 'isImportant', isImportant);\n }\n\n Task._();\n\n @override\n ObjectId get id => RealmObjectBase.get<ObjectId>(this, '_id') as ObjectId;\n @override\n set id(ObjectId value) => RealmObjectBase.set(this, '_id', value);\n\n @override\n String get title => RealmObjectBase.get<String>(this, 'title') as String;\n @override\n set title(String value) => RealmObjectBase.set(this, 'title', value);\n\n @override\n bool get isCompleted =>\n RealmObjectBase.get<bool>(this, 'isCompleted') as bool;\n @override\n set isCompleted(bool value) =>\n RealmObjectBase.set(this, 'isCompleted', value);\n\n @override\n bool get isImportant =>\n RealmObjectBase.get<bool>(this, 'isImportant') as bool;\n @override\n set isImportant(bool value) =>\n RealmObjectBase.set(this, 'isImportant', value);\n\n @override\n Stream<RealmObjectChanges<Task>> get changes =>\n RealmObjectBase.getChanges<Task>(this);\n\n @override\n Task freeze() => RealmObjectBase.freezeObject<Task>(this);\n\n static SchemaObject get schema => _schema ??= _initSchema();\n static SchemaObject? 
_schema;\n static SchemaObject _initSchema() {\n RealmObjectBase.registerFactory(Task._);\n return const SchemaObject(ObjectType.realmObject, Task, 'Task', [\n SchemaProperty('id', RealmPropertyType.objectid,\n mapTo: '_id', primaryKey: true),\n SchemaProperty('title', RealmPropertyType.string),\n SchemaProperty('isCompleted', RealmPropertyType.bool),\n SchemaProperty('isImportant', RealmPropertyType.bool),\n ]);\n }\n}\n\nclass Links extends _Links with RealmEntity, RealmObjectBase, RealmObject {\n Links(\n ObjectId id,\n String name,\n String email,\n bool linkID,\n ) {\n RealmObjectBase.set(this, '_id', id);\n RealmObjectBase.set(this, 'name', name);\n RealmObjectBase.set(this, 'email', email);\n RealmObjectBase.set(this, 'linkID', linkID);\n }\n\n Links._();\n\n @override\n ObjectId get id => RealmObjectBase.get<ObjectId>(this, '_id') as ObjectId;\n @override\n set id(ObjectId value) => RealmObjectBase.set(this, '_id', value);\n\n @override\n String get name => RealmObjectBase.get<String>(this, 'name') as String;\n @override\n set name(String value) => RealmObjectBase.set(this, 'name', value);\n\n @override\n String get email => RealmObjectBase.get<String>(this, 'email') as String;\n @override\n set email(String value) => RealmObjectBase.set(this, 'email', value);\n\n @override\n bool get linkID => RealmObjectBase.get<bool>(this, 'linkID') as bool;\n @override\n set linkID(bool value) => RealmObjectBase.set(this, 'linkID', value);\n\n @override\n Stream<RealmObjectChanges<Links>> get changes =>\n RealmObjectBase.getChanges<Links>(this);\n\n @override\n Links freeze() => RealmObjectBase.freezeObject<Links>(this);\n\n static SchemaObject get schema => _schema ??= _initSchema();\n static SchemaObject? _schema;\n static SchemaObject _initSchema() {\n RealmObjectBase.registerFactory(Links._);\n return const SchemaObject(ObjectType.realmObject, Links, 'Links', [\n SchemaProperty('id', RealmPropertyType.objectid,\n mapTo: '_id', primaryKey: true),\n SchemaProperty('name', RealmPropertyType.string),\n SchemaProperty('email', RealmPropertyType.string),\n SchemaProperty('linkID', RealmPropertyType.bool),\n ]);\n }\n}\n\n{\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"isCompleted\": {\n \"bsonType\": \"bool\"\n },\n \"isImportant\": {\n \"bsonType\": \"bool\"\n },\n \"title\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": [\n \"_id\",\n \"title\",\n \"isCompleted\",\n \"isImportant\"\n ],\n \"title\": \"Task\"\n}\nimport 'dart:convert';\nimport 'dart:io';\nimport 'package:flutter/services.dart' show rootBundle;\nimport 'package:flutter/material.dart';\nimport 'package:path_provider/path_provider.dart';\nimport 'package:realm/realm.dart';\nimport 'model.dart';\n\n// part 'app.g.dart';\n\nvoid main() async {\n WidgetsFlutterBinding.ensureInitialized();\n final realmConfig = {\n \"config_version\": 20210101,\n \"app_id\": \"teescanfire-jumnr\",\n \"name\": \"flutter_flexible_sync\",\n \"location\": \"DE-FF\",\n \"deployment_model\": \"LOCAL\"\n };\n String appId = \"teescanfire-jumnr\";\n // String appId = realmConfig['app_id'];\n MyApp.allTasksRealm = await createRealm(appId, CollectionType.allTasks);\n MyApp.importantTasksRealm =\n await createRealm(appId, CollectionType.importantTasks);\n MyApp.normalTasksRealm = await createRealm(appId, CollectionType.normalTasks);\n MyApp.linksRealm = await createRealm2(appId, CollectionType.normalTasks);\n\n runApp(const MyApp());\n}\n\nenum CollectionType { allTasks, importantTasks, normalTasks 
}\n\nFuture<Realm> createRealm(String appId, CollectionType collectionType) async {\n final appConfig = AppConfiguration(appId);\n final app = App(appConfig);\n final user = await app.logIn(Credentials.anonymous());\n\n final flxConfig = Configuration.flexibleSync(user, [Task.schema],\n path: await absolutePath(\"db_${collectionType.name}.realm\"));\n var realm = Realm(flxConfig);\n print(\"Created local realm db at: ${realm.config.path}\");\n\n final RealmResults<Task> query;\n\n if (collectionType == CollectionType.allTasks) {\n query = realm.all<Task>();\n } else {\n query = realm.query<Task>(r'isImportant == $0',\n [collectionType == CollectionType.importantTasks]);\n }\n\n realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.add(query);\n });\n\n await realm.subscriptions.waitForSynchronization();\n await realm.syncSession.waitForDownload();\n print(\"Syncronization completed for realm: ${realm.config.path}\");\n\n return realm;\n}\n\nFuture<Realm> createRealm2(String appId, CollectionType collectionType) async {\n final appConfig = AppConfiguration(appId);\n final app = App(appConfig);\n final user = await app.logIn(Credentials.anonymous());\n\n final flxConfig = Configuration.flexibleSync(user, [Links.schema],\n path: await absolutePath(\"db_${collectionType.name}.realm\"));\n var realm = Realm(flxConfig);\n print(\"Created local realm db at: ${realm.config.path}\");\n\n final RealmResults<Links> query;\n\n // if (collectionType == CollectionType.allTasks) {\n // query = realm.all<Links>();\n // } else {\n // query = realm.query<Links>(r'isImportant == $0',\n // [collectionType == CollectionType.importantTasks]);\n // }\n query = realm.all<Links>();\n\n realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.add(query);\n });\n\n await realm.subscriptions.waitForSynchronization();\n await realm.syncSession.waitForDownload();\n print(\"Syncronization completed for realm: ${realm.config.path}\");\n\n return realm;\n}\n\nFuture<String> absolutePath(String fileName) async {\n final appDocsDirectory = await getApplicationDocumentsDirectory();\n final realmDirectory = '${appDocsDirectory.path}/mongodb-realm';\n if (!Directory(realmDirectory).existsSync()) {\n await Directory(realmDirectory).create(recursive: true);\n }\n return \"$realmDirectory/$fileName\";\n}\n\nclass MyApp extends StatelessWidget {\n static late Realm allTasksRealm;\n static late Realm importantTasksRealm;\n static late Realm normalTasksRealm;\n static late Realm linksRealm;\n\n const MyApp({Key? key}) : super(key: key);\n\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n title: 'Flutter Demo',\n theme: ThemeData(\n primarySwatch: Colors.green,\n ),\n home: const MyHomePage(title: 'Flutter Realm Flexible Sync'),\n );\n }\n}\n\nclass MyHomePage extends StatefulWidget {\n const MyHomePage({Key? 
key, required this.title}) : super(key: key);\n\n final String title;\n\n @override\n State<MyHomePage> createState() => _MyHomePageState();\n}\n\nclass _MyHomePageState extends State<MyHomePage> {\n int _allTasksCount = MyApp.allTasksRealm.all<Task>().length;\n int _importantTasksCount = MyApp.importantTasksRealm.all<Task>().length;\n int _normalTasksCount = MyApp.normalTasksRealm.all<Task>().length;\n int _linksCount = MyApp.normalTasksRealm.all<Links>().length;\n // int _linksCount = MyApp.linksRealm.all<Link>().length;\n // int _LinksCount = MyApp.linksRealm.all<Link>().length;\n\n void _createImportantTasks() async {\n // print(\"going to try insert method I found on GitHub\");\n // MyApp.importantTasksRealm.subscriptions.update((mutableSubscriptions) {\n // mutableSubscriptions.add(MyApp.importantTasksRealm.query<Task>(\n // r'isCompleted == $0 AND isImportant == $1', [true, false]));\n // });\n // await MyApp.importantTasksRealm.subscriptions.waitForSynchronization();\n // MyApp.normalTasksRealm.write(() {\n // MyApp.importantTasksRealm\n // .add(Task(ObjectId(), \"Send an email\", true, false));\n // });\n\n print(\"Starting to create an important task\");\n var importantTasks = MyApp.importantTasksRealm.all<Task>();\n print(\"Starting to count an important task\");\n var allTasksCount = MyApp.allTasksRealm.all<Task>();\n print(\"Starting to write important task\");\n MyApp.allTasksRealm.write(() {\n MyApp.allTasksRealm.add(Task(ObjectId(),\n \"Important task ${importantTasks.length + 1}\", false, true));\n });\n print(\"Waiting for upload\");\n await MyApp.allTasksRealm.syncSession.waitForUpload();\n print(\"Waiting for sync\");\n await MyApp.importantTasksRealm.subscriptions.waitForSynchronization();\n print(\"Setting State\");\n setState(() {\n _importantTasksCount = importantTasks.length;\n _allTasksCount = allTasksCount.length;\n });\n }\n\n void _createNormalTasks() async {\n var normalTasks = MyApp.normalTasksRealm.all<Task>();\n var allTasksCount = MyApp.allTasksRealm.all<Task>();\n MyApp.allTasksRealm.write(() {\n MyApp.allTasksRealm.add(Task(\n ObjectId(), \"Normal task ${normalTasks.length + 1}\", false, false));\n });\n await MyApp.allTasksRealm.syncSession.waitForUpload();\n await MyApp.normalTasksRealm.subscriptions.waitForSynchronization();\n setState(() {\n _normalTasksCount = normalTasks.length;\n _allTasksCount = allTasksCount.length;\n });\n }\n\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n appBar: AppBar(\n title: Text(widget.title),\n ),\n body: Center(\n child: Column(\n mainAxisAlignment: MainAxisAlignment.center,\n children: <Widget>[\n const Text('Important tasks count:',\n style: TextStyle(fontWeight: FontWeight.bold)),\n Text('$_importantTasksCount',\n style: Theme.of(context).textTheme.headline4),\n const Text('Normal tasks count:',\n style: TextStyle(fontWeight: FontWeight.bold)),\n Text('$_normalTasksCount',\n style: Theme.of(context).textTheme.headline4),\n const Text('All tasks count:',\n style: TextStyle(fontWeight: FontWeight.bold)),\n Text('$_allTasksCount',\n style: Theme.of(context).textTheme.headline4),\n Text('$_linksCount',\n style: Theme.of(context).textTheme.headline4),\n ],\n ),\n ),\n floatingActionButton: Stack(\n children: <Widget>[\n Padding(\n padding: const EdgeInsets.only(left: 0.0),\n child: Align(\n alignment: Alignment.bottomLeft,\n child: FloatingActionButton(\n onPressed: _createImportantTasks,\n tooltip: 'High priority task',\n child: const Icon(Icons.add),\n )),\n ),\n Padding(\n padding: const 
EdgeInsets.only(right: 40.0),\n child: Align(\n alignment: Alignment.bottomRight,\n child: FloatingActionButton(\n onPressed: _createNormalTasks,\n tooltip: 'Normal task',\n child: const Icon(Icons.add),\n )),\n ),\n ],\n ),\n floatingActionButtonLocation: FloatingActionButtonLocation.startFloat);\n }\n}\n", "text": "Ok thank you! This actually solved one of my problems - I went into the MongoDB atlas UI and under rules changed both read and write to true. Using the template app from the gitHub I linked above, I can read from and write to “tasks” under my todo but I can’t write to userLinks under my test database in my collections. This is the error I get:\n\nimage386×839 35.5 KB\n\nI thought this was because I hadn’t configured it in my model.dart so I did configure it in my model.dart file like so:And then I also ran the generate command so it can update my model.g.dart file which looks like so:But when I did so I was told that Links wasn’t in the schema and I don’t know how to fix this I tried just going into my C:\\Users\\matth\\Documents\\Flutter App Development\\TeeScanFire_3\\teescan_fire\\assets\\atlas_app\\data_sources\\mongodb-atlas\\flutter_flexible_sync\\Task\\schema.json and I tried adding a new object called Link to this - but it wouldn’t let me add a comma anywhere without showing a red squiggly:Even when I erased this whole file and saved, my app still ran fine so I don’t think this is the issue. All I want to do is be able to read and write to a Links collection under my test database the properties linkID (string), name (string), and email (string).This is how my main.dart file looks by the way (I tried copying all the steps the original coder took to set up the process for reading from and writing to the tasks):Also as a side note, I tried doing the realm-cli push inside my assets/atlas_app directory but it told me push failed: failed to find group", "username": "Matthew_Gerges" }, { "code": "LinksConfigurationConfiguration.flexibleSyncfinal flxConfig = Configuration.flexibleSync(user, [Task.schema, Links.schema], path.......", "text": "Hi @Matthew_Gerges!\nIt is good to see that your project is moving forward. The next error that you have received is because the new class that you have added Links is not added to the Configuration. You should just update Configuration.flexibleSync in the code like this:final flxConfig = Configuration.flexibleSync(user, [Task.schema, Links.schema], path.......Probably, we should consider adding more explanations to this error text.", "username": "Desislava_St_Stefanova" } ]
Cannot Write to Collection Using Realm Sync in Flutter
2023-07-12T13:22:29.257Z
Cannot Write to Collection Using Realm Sync in Flutter
668
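To make the accepted fix concrete: under flexible sync, every object type you read or write has to appear in the Configuration's schema list and be covered by a subscription. The Dart sketch below is illustrative only; the Links constructor arguments and field names are assumptions based on the description in the thread (linkID, name, email as strings), not the poster's actual generated model, and the subscription name is arbitrary.

```dart
Future<void> addFirstLink(User user) async {
  // Open one realm whose configuration knows about both object types.
  final config = Configuration.flexibleSync(user, [Task.schema, Links.schema]);
  final realm = Realm(config);

  // Each type you read or write needs a flexible sync subscription.
  realm.subscriptions.update((mutableSubscriptions) {
    mutableSubscriptions.add(realm.all<Links>(), name: 'allLinks');
  });
  await realm.subscriptions.waitForSynchronization();

  // Hypothetical constructor; adjust to the generated Links class.
  realm.write(() {
    realm.add(Links(ObjectId().toString(), 'My first link', '[email protected]'));
  });
  await realm.syncSession.waitForUpload();
}
```

If the server-side schema does not yet contain a matching Links collection, enabling Development Mode in App Services (or defining that collection's schema there) is also needed before the writes will sync.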
null
[ "queries", "replication", "python", "golang" ]
[ { "code": "indb.collection.find({\"_id\":{\"$in\":[]}}golangpymongo", "text": "Hi,\nI have a 3 Node MongoDB cluster with Replica Set.I am executing a find query using an AWS EC2 instance (which has 2 NIC/network interface cards, for reasons undisclosed). My find query has a list with in operator (i.e., db.collection.find({\"_id\":{\"$in\":[]}}) and I am using golang mongo driver for this to connect to this database.In some cases (when the length of the list exceed some number) my MongoDB find query stuck, even though the connection to MongoDB was created. The length of the list is different for different collection and different mongo drivers (I tried the same with pymongo on the same machine), I am able to reproduce the same repeatedly with those specific numbers, even if I changed the data but keep same schema. The same query is working on some other machine. The number is very low, like for one collection that number is 58.I am not understanding why I am facing this issue, only when I am using a machine with 2 NIC cards and a MongoDB cluster setup.\nAny help would be appreciated.\nThank you<3", "username": "Aman_Gupta5" }, { "code": "explain", "text": "How much data is being returned for those slow queries?\nDo you have an explain output?", "username": "Kobe_W" }, { "code": "_idpymongo", "text": "Hi @Kobe_W, thank you for your quick reply,\nThe problem I am facing isn’t exactly how much data is being returned, When the list is over the length of > 58 in case of Go Mongo Drivers, it is not returning any documents, whereas if it is less than 58, it returns me the expected number of data.The query is executed on _id which is an index for the collection. Even for Python pymongo driver, the number is found to be > 60 and also this was not a problem if I change the machine (one with a single NIC card)\nThank you", "username": "Aman_Gupta5" }, { "code": "findfind", "text": "Something I noticed is that a find command with 58 IDs (I used ObjectId as the IDs) generates an unencrypted message size of about 1200 bytes. The default packet MTU for most EC2 machines is around 1500 bytes, so any messages >1500b must be split into multiple packets.If the routing rules for the machine with 2 NICs are somehow misconfigured, it’s possible that the find command is being split into multiple packets and one of the packets is never reaching the database, causing the operation to appear “stuck”. Are you experiencing any other network communication issues between the EC2 machine and MongoDB host(s)?", "username": "Matt_Dale" } ]
Mongodb query stuck in cluster
2023-05-08T13:54:41.852Z
Mongodb query stuck in cluster
832
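While the network path is investigated, one low-risk workaround for very long $in lists is to split them into smaller batches so no single find message grows large enough to need fragmentation. The sketch below uses the official Go driver; the package name, collection handle, chunk size, and ObjectId ID type are illustrative assumptions rather than details taken from the thread.

```go
package chunkedfind

import (
	"context"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/bson/primitive"
	"go.mongodb.org/mongo-driver/mongo"
)

// FindInChunks runs find({_id: {$in: ...}}) in batches of chunkSize IDs,
// so each request stays small on the wire.
func FindInChunks(ctx context.Context, coll *mongo.Collection, ids []primitive.ObjectID, chunkSize int) ([]bson.M, error) {
	var results []bson.M
	for start := 0; start < len(ids); start += chunkSize {
		end := start + chunkSize
		if end > len(ids) {
			end = len(ids)
		}
		cur, err := coll.Find(ctx, bson.M{"_id": bson.M{"$in": ids[start:end]}})
		if err != nil {
			return nil, err
		}
		var batch []bson.M
		if err := cur.All(ctx, &batch); err != nil {
			return nil, err
		}
		results = append(results, batch...)
	}
	return results, nil
}
```

If the MTU/routing theory from the last reply holds, this only sidesteps the symptom; the proper fix is still to correct the interface routing so fragmented packets reach the cluster.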
null
[ "java", "android", "realm-studio" ]
[ { "code": "", "text": "Hi,\nI’m following this tutorial: https://www.mongodb.com/docs/realm/sdk/java/app-services/mongodb-remote-access/But I keep getting the error: Unable to start activity CoomponentInfo{com.example.querytest3/com.example.querytest3.MainActivity}: java.lang.NullPointerException: Attempt to invoke virtual method ‘io.realm.mongodb.mongo.MongoClient io.realm.mongodb.User.getMongoClient(java.lang.String)’ on a null object referenceI get that it’s something with the anonymous user, from the code:\nUser user = app.currentUser();\nMongoClient mongoClient =\nuser.getMongoClient(“mongodb-atlas”);But how do I fix it? I know very very little about apps, I’m self-taught, and all I need is to learn how the tutorial works. Hopefully someone can help me with an extra line of code here or there? I can’t imagine it’s hard to fix, but I don’t have the experience to fix it, and I’ve literally spent hours searching the web for answers >.<Thanks!", "username": "Evelien_Heyselaar" }, { "code": "", "text": "Okay, I’ve figured out that I’m missing the anonymous user declaration from here: https://www.mongodb.com/docs/realm/sdk/java/users/authenticate-users/#std-label-java-authenticateBut when I put that code above my current code, my next error is the line user.getMongoClient(“mongodb-atlas”): Cannot resolve method “getMongoClient” in ‘AtomicReference’.Again I’m just literally following the tutorial, I’m not adding any extra code or editing it in any way. Why doesn’t it work?", "username": "Evelien_Heyselaar" } ]
N00b needs help: Tutorial doesn't work
2023-07-14T13:01:14.319Z
N00b needs help: Tutorial doesn&rsquo;t work
511
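For readers hitting the same NullPointerException: the sketch below is a hedged example of logging in anonymously before requesting the MongoClient, so app.currentUser() is no longer null. The App ID, database and collection names are placeholders, it assumes anonymous authentication is enabled for the App Services app, and if you follow the docs snippet that stores the user in an AtomicReference, you need to call .get() on it before calling getMongoClient().

```java
import android.util.Log;

import io.realm.mongodb.App;
import io.realm.mongodb.AppConfiguration;
import io.realm.mongodb.Credentials;
import io.realm.mongodb.User;
import io.realm.mongodb.mongo.MongoClient;
import io.realm.mongodb.mongo.MongoCollection;
import io.realm.mongodb.mongo.MongoDatabase;

import org.bson.Document;

// Inside onCreate(), after Realm.init(this):
App app = new App(new AppConfiguration.Builder("your-app-id").build()); // placeholder App ID

app.loginAsync(Credentials.anonymous(), result -> {
    if (result.isSuccess()) {
        // Only now is currentUser() guaranteed to be non-null.
        User user = app.currentUser();
        MongoClient mongoClient = user.getMongoClient("mongodb-atlas");
        MongoDatabase database = mongoClient.getDatabase("myDatabase");               // placeholder
        MongoCollection<Document> collection = database.getCollection("myCollection"); // placeholder
        Log.v("EXAMPLE", "Successfully connected to the collection");
    } else {
        Log.e("EXAMPLE", "Anonymous login failed: " + result.getError());
    }
});
```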
null
[]
[ { "code": "", "text": "please help how to install mongodb on aws amazon linux 2023", "username": "sattu_rocks" }, { "code": "", "text": "Hello @sattu_rocks ,Welcome to The MongoDB Community Forums! Please take a look atRegards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Hi tarun_gaurits not working , this for amazon linux-2my requirement mongodb for amazon linux 2023", "username": "sattu_rocks" }, { "code": "", "text": "Amazon Linux 2023 is a recently released platform and is not on the MongoDB Supported platforms yet.You can keep an eye out for any additions to this list of supported platforms.", "username": "Tarun_Gaur" }, { "code": "", "text": "check the first answer in this question of Stack Overflow works", "username": "Ale_DC" } ]
How to install mongodb in amazon linux 2023
2023-04-05T06:12:54.013Z
How to install mongodb in amazon linux 2023
3,970
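Since Amazon Linux 2023 was not on the supported-platforms list at the time, the Stack Overflow workaround referenced above amounts to reusing the RHEL 9 repository. The sketch below is that unofficial approach; the version (6.0), repo URL, and key URL are assumptions to verify against the current install documentation, and official AL2023 packages should be preferred once they exist.

```bash
# Unofficial workaround: point dnf at the RHEL 9 repo (AL2023 is RHEL-9-like).
sudo tee /etc/yum.repos.d/mongodb-org-6.0.repo > /dev/null <<'EOF'
[mongodb-org-6.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/9/mongodb-org/6.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://pgp.mongodb.com/server-6.0.asc
EOF

sudo dnf install -y mongodb-org
sudo systemctl enable --now mongod
mongosh --eval 'db.runCommand({ ping: 1 })'   # quick smoke test
```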
https://www.mongodb.com/…7_2_1024x123.png
[ "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to accessthe database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://www.mongodb.com/docs/atlas/security-whitelist/\n at NativeConnection.Connection.openUri (/home/cryptojo/nodevenv/marketserver/14/lib/node_modules/mongoose/lib/connection.js:825:32)\n at /home/cryptojo/nodevenv/marketserver/14/lib/node_modules/mongoose/lib/index.js:414:10\n at /home/cryptojo/nodevenv/marketserver/14/lib/node_modules/mongoose/lib/helpers/promiseOrCallback.js:41:5\n at new Promise (<anonymous>)\n at promiseOrCallback (/home/cryptojo/nodevenv/marketserver/14/lib/node_modules/mongoose/lib/helpers/promiseOrCallback.js:40:10)\n at Mongoose._promiseOrCallback (/home/cryptojo/nodevenv/marketserver/14/lib/node_modules/mongoose/lib/index.js:1288:10)\n at Mongoose.connect (/home/cryptojo/nodevenv/marketserver/14/lib/node_modules/mongoose/lib/index.js:413:20)\n at Object.<anonymous> (/home/cryptojo/marketserver/app.js:55:4)\n at Module._compile (internal/modules/cjs/loader.js:1085:14)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'cluster0-shard-00-00.fuaga.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-01.fuaga.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-02.fuaga.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-vffn0h-shard-0',\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined\n}\n .connect(process.env.DB, { useNewUrlParser: true })\n .then(() => {\n initSocket(http, corsOptions); \n http.listen(port);\n })\n .catch((err) => console.log(err));\nDB = 'mongodb+srv://username:[email protected]/workplatform?retryWrites=true'", "text": "Hello, I have problem connecting my new domain to mongoDB on other domain everything working fine.\nCurrently in my accesslist I have added\nNetwork-Access-Cloud-MongoDB-Cloud1685×203 6.88 KBBut even with this im still getting error:and my app.js on end have:\nmongooseand my env file looks like:\nDB = 'mongodb+srv://username:[email protected]/workplatform?retryWrites=true'So localy everything working fine, I have even get compas desktop application and with same string it was succesfull.Best regards", "username": "G_G2" }, { "code": "", "text": "Hello, you solve it? I have almost the same issue with my project, it gives an error with the whitelist and I whitelisted everything yet when it runs locally is just fine even when I have my VSC open and runs the main script “npm run start” from the server and Front runs on my desktop the URL, but when stop de npm, starts to lose the DB connection.", "username": "Leonardo_Jose_Lopez_Escalona" }, { "code": "", "text": "Hello, I have solved this I had to contact a support of my hosting so they can open a port 27017 so the connection to mongoDB is succesfull.", "username": "G_G2" } ]
Connection problem
2023-05-24T21:39:21.517Z
Connection problem
813
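A quick way to tell an Atlas IP-access-list problem apart from a blocked outbound port (the eventual cause in this thread) is to run a minimal connection probe on the affected host. This is an illustrative sketch with the Node.js driver; it reuses the same DB environment variable as the app, and the 5-second timeout is an arbitrary choice.

```js
// probe.js - run on the host that fails to connect
const { MongoClient } = require("mongodb");

async function probe() {
  const client = new MongoClient(process.env.DB, {
    serverSelectionTimeoutMS: 5000, // fail fast instead of hanging
  });
  try {
    await client.connect();
    const pong = await client.db("admin").command({ ping: 1 });
    console.log("Cluster reachable:", pong);
  } catch (err) {
    // ReplicaSetNoPrimary from this host, while the same URI works locally,
    // usually points at outbound port 27017 being blocked rather than the
    // Atlas IP access list.
    console.error("Cluster not reachable from this host:", err.message);
  } finally {
    await client.close();
  }
}

probe();
```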
null
[ "containers", "ops-manager" ]
[ { "code": "", "text": "Hello Team,We are using mongodb single instance in docker container, how I can confirm if I am using MongoDB Ops Manager , my current mongodb version is 4.4 & 5.0I want to confirm if we have that in our instance configured or not, we have not done any additional settings for MongoDB Ops Manager.Please guide,\nSAM", "username": "sameer_khamkar" }, { "code": "pgrep mms", "text": "If your instance is managed by Ops Manager you will have an agent running. Usually I pgrep mms to find a running agent(mongodb-mms-automation-agent). If you’re not using the MongoDB Enterprise Kubernetes Operator you will have deployed a custom container with the agent installed.You will also have one or more Ops Manager servers running, usually behind a load balanacer and a supporting AppDB which is a replica set.You will also be running mongodb enterprise advance and have an account, agreement and account manager with MongoDB as this is licensed software.Typically this is not something an organisation is unaware of.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to check if we are using MongoDB Ops Manager or not
2023-07-13T05:27:55.008Z
How to check if we are using MongoDB Ops Manager or not
463
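To make the check above concrete, here is a hedged sketch of commands that look for the usual Ops Manager footprints on a Docker host. The process and path names are the common defaults and may differ per installation; finding none of them is a strong hint the deployment is not managed by Ops Manager.

```bash
# On the Docker host, look for agent processes inside the mongod container
# (replace <container-name> with your container).
docker exec <container-name> ps -ef | grep -i mms                            # automation/monitoring/backup agents
docker exec <container-name> ls /var/lib/mongodb-mms-automation 2>/dev/null  # agent working dir, if any

# On plain hosts the same idea applies directly
pgrep -af mms
ps -ef | grep -i mongodb-mms-automation-agent
```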
null
[]
[ { "code": "Get:8 http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release.gpg [801 B]\nErr:8 http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/4.0 Release.gpg\n The following signatures were invalid: KEYEXPIRED 1681763820\nuser@host:/var/snap/amazon-ssm-agent/6312$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E52529D4\nExecuting: /tmp/tmp.kQ87jVKCKE/gpg.1.sh --keyserver\nkeyserver.ubuntu.com\n--recv-keys\nE52529D4\ngpg: requesting key E52529D4 from hkp server keyserver.ubuntu.com\ngpg: key E52529D4: \"MongoDB 4.0 Release Signing Key <[email protected]>\" not changed\ngpg: Total number processed: 1\ngpg: unchanged: 1\n", "text": "apt update is showing the key for mongodb-org 4.0 in xenial is expired and the server has not published a new key. I don’t see an issue for this in JIRA.Attempting to re-import a new key shows the server has not yet published a new keyThe support matrix shows MongoDB 4.0 is still supported on Ubuntu 16, is there an ETA for publishing new keys?", "username": "Andrew_Kesterson" }, { "code": "", "text": "Dear all,I have the same error in Ubuntu 22.04\nimage748×93 3.44 KB\nIt’s possible update the key.Can you help me, please??Thanks.", "username": "Carlos_Sierra" }, { "code": "\n", "text": "The support matrix shows MongoDB 4.0 is still supported on Ubuntu 16, is there an ETA for publishing new keys?Hi there, I would like to know if the GPG key will be refreshed anytime soon as we’re still using the 4.0 repository on some of our Ubuntu 18.04 installations.Thank you for your answers !", "username": "Cyril_CANONICO" }, { "code": "", "text": "Any update on this issue ?", "username": "Roland_Bole1" }, { "code": "", "text": "Is anyone even working on that?", "username": "Mateusz_Lemieszko" }, { "code": "", "text": "Same answer as: Mongo db 4.0 GPG key expired for ubuntu 18.04 - #2 by chris", "username": "chris" } ]
Mongo 4.0 Apt key for Ubuntu 16 is expired
2023-04-18T13:19:41.292Z
Mongo 4.0 Apt key for Ubuntu 16 is expired
3,538
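For anyone landing here later: the usual remedy, sketched below, is to re-import the 4.0 server key from MongoDB's static PGP location rather than from the Ubuntu keyserver. This only clears the KEYEXPIRED error once a refreshed key has actually been published at that URL, so treat it as a suggestion to verify rather than a guaranteed fix.

```bash
# Re-fetch the 4.0 release signing key from MongoDB's own key location
wget -qO - https://www.mongodb.org/static/pgp/server-4.0.asc | sudo apt-key add -
sudo apt-get update
```

MongoDB 4.0 reached end of life in April 2022, so upgrading to a supported release is the longer-term fix regardless of the key status.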
null
[ "aggregation", "charts" ]
[ { "code": "{ timestamp: {$gte: {ISODate() - 7*24*60*60*1000}}}\n$dateSubtract", "text": "Hi there,I’d like to create a chart in my dashboard where I show the number of documents created per hour in the last 24 hours (and also last week). Each document has a timestamp field with the date created. I don’t want to update the query every time.So I would imagine a query likebut this is not supported.I know things like that are possible in code. Also I found $dateSubtract, but that’s aggregation only.I don’t seem to be able to edit the raw aggregation of the chart either, although I can view it.Could somebody point me in the right direction? I feel like this could be a common use case.Bests,\nCanwiper", "username": "canwiper" }, { "code": "new Date(...)", "text": "You can use simple JavaScript expressions in Charts filters. You just need to wrap everything in new Date(...) to ensure the calculated value is of the expected type.Alternatively, rather than write an MQL query, you can add a date filter and use the Relative or Period options.", "username": "tomhollander" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query for documents in the last 24h in charts
2023-07-14T09:52:32.466Z
Query for documents in the last 24h in charts
678
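Putting the accepted answer into a concrete form, a query-bar filter like the sketch below keeps a chart pinned to a rolling window without manual updates. The field name timestamp is taken from the question; the millisecond arithmetic is the only assumption.

```js
// Last 24 hours - the computed value must be wrapped in new Date(...)
{ timestamp: { $gte: new Date(Date.now() - 24 * 60 * 60 * 1000) } }

// Last 7 days
{ timestamp: { $gte: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000) } }
```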
https://www.mongodb.com/…e_2_1024x541.png
[ "storage" ]
[ { "code": "", "text": "Hi team, we are using MongoDB server version 5.0.10.In the past few weeks, we have seen a spike in Page evicted by application threads, and coincidentally at the same time a spike in Operational Latencies (Writes and Reads).\nimage2886×1526 222 KB\nThe number of pages evicted during the operational latency spikes - 10k.\nWhen going through the cache eviction documentation of WiredTiger, we came across the trigger values when application threads are supposed to be used in page eviction process.The attached figures below are the cache sizes and dirty pages at the instance when application threads are being used:The values are clearly under the threshold, is there any other reason why the application threads were used? How can we tune the mongoDB to fix this issue?Please note : current cache eviction threads being used are the default set by MongoDB (4).", "username": "Uddeshya_Singh" }, { "code": "", "text": "Application threads used\n\nimage2894×1498 173 KB\n", "username": "Uddeshya_Singh" }, { "code": "", "text": "Dirty pages used\n\nimage2878×1506 264 KB\n", "username": "Uddeshya_Singh" }, { "code": "", "text": "Cache pages used.\n\nimage2894×1524 227 KB\n", "username": "Uddeshya_Singh" }, { "code": "", "text": "Hi @Uddeshya_Singh and welcome to MongoDB community forums!!As mentioned in the Cache and eviction tuning, the eviction would begin when the threshold values are reached.The cache serves as a valuable intermediary between application operations and disk I/O. WiredTiger, the system in question, strives to maintain cache usage at a maximum of 80%. Pushing the cache to its limit of 95% could introduce latency issues within the application.\nHowever, in your case, since you are seeing evection at 80% of the cache utilisation might be a result of other operations which are utilising the cache.\nThere might be more than one reason when you seeing the increase and latency in your application.I would recommend taking a look at the application with the above mentioned scenarios.If, however, you are unable to resolve, you can reach out to MongoDB Support Hub for more detailed troubleshooting and observations.Regards\nAasawari", "username": "Aasawari" } ]
Why is mongodb using application threads?
2023-07-11T06:19:07.406Z
Why is mongodb using application threads?
694
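One tuning avenue the thread stops short of spelling out: the number of dedicated eviction threads can be adjusted at runtime through wiredTigerEngineRuntimeConfig, so application threads are drafted into eviction less often. The sketch below is a hypothetical example only; the thread counts are illustrative, such changes should be tested under production-like load first, and the root cause (for example heavy operations churning the cache) still needs to be identified.

```js
// mongosh - illustrative values, not a recommendation
db.adminCommand({
  setParameter: 1,
  wiredTigerEngineRuntimeConfig: "eviction=(threads_min=8,threads_max=8)"
});

// Watch the eviction-related counters while reproducing the latency spike
const cache = db.serverStatus().wiredTiger.cache;
Object.keys(cache)
  .filter((k) => k.includes("evict"))
  .forEach((k) => print(k, cache[k]));
```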