image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---|
null | [
"atlas-online-archive"
] | [
{
"code": "",
"text": "Hello everyone, our team currently uses atlas as main db and we have a lot of useless data. I am considering a different options and have some questions:",
"username": "Bohdan_Chystiakov"
},
{
"code": "",
"text": "Hi @Bohdan_Chystiakov ,Great questions ! Here are the responses:",
"username": "Prem_PK_Krishna"
},
{
"code": "",
"text": "Here are a few additional questions, if I may:",
"username": "Bohdan_Chystiakov"
},
{
"code": "",
"text": "Hi @Bohdan_Chystiakov",
"username": "Prem_PK_Krishna"
}
] | Atlas archiving | 2023-01-25T15:41:59.898Z | Atlas archiving | 1,495 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 4.9.0 MongoDB Java & JVM Drivers has been released.The documentation hub includes extensive documentation of the 4.9 driver.You can find a full list of bug fixes here.You can find a full list of improvements here.You can find a full list of new features here.",
"username": "Valentin_Kovalenko"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Java Driver 4.9.0 Released | 2023-02-13T15:11:55.142Z | MongoDB Java Driver 4.9.0 Released | 3,256 |
null | [
"api"
] | [
{
"code": "",
"text": "Hi there,I’m trying to delete IP addresses from a project’s IP access list programmatically using the Admin Atlas API called with powershell, but I’m getting a 405 Method Not Allowed response. The docs here MongoDB Atlas Administration API state that the API key used for authentication requires the ‘Project Atlas Admin’ role. This role doesn’t appear as an option when I set up keys via the UI. I’ve tried with API keys which have the highest available permissions available to me (Project Owner, Organization Owner), but still get the same error.We’re using an older version of Atlas, 4.4.18, if that’s likely to be relevant. Not that get and post calls to the API are working fine.Any idea how I get round this?\nMany thanks,\nKiera",
"username": "Kiera_Jones"
},
{
"code": "HTTP code - 405",
"text": "Hello @Kiera_Jones ,Welcome to The MongoDB Community Forums! To understand your use-case better, could you please share below details:405 Method Not Allowed responseNote: Please redact any credentials and sensitive information before sharing any details.Please go through API error document for more information around HTTP code - 405.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hi Tarun,Many thanks for your reply. Oddly, checking again this morning it now works, which is slightly baffling but nonetheless good!Thanks ",
"username": "Kiera_Jones"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Removing entries from IP Access List using the Atlas Admin API returns 405 error | 2023-02-07T11:00:50.299Z | Removing entries from IP Access List using the Atlas Admin API returns 405 error | 1,216 |
null | [
"queries",
"java",
"security"
] | [
{
"code": "",
"text": "Hi folks, wanted to share a new project I’ve been working on called NIVA (NoSQL Injection Vulnerable App)NIVA is a simple web application which is intentionally vulnerable to NoSQL injection. The purpose of this project is to facilitate a better understanding of the NoSQL injection vulnerability among a wide audience of software engineers, security engineers, pentesters, and trainers. This is achieved by giving users both secure and insecure code examples which they can run and inspect on their own, complimented by easy to read documentation.This edition utilizes MongoDB as the NoSQL database and the official Java driver for data access.Github: GitHub - aabashkin/nosql-injection-vulnapp: NIVA is a simple web application which is intentionally vulnerable to NoSQL injection. The purpose of this project is to facilitate a better understanding of the NoSQL injection vulnerability among a wide audience of software engineers, security engineers, pentesters, and trainers.Feedback appreciated! I hope people find this resource useful.",
"username": "Anton_Abashkin"
},
{
"code": "let password = resposne.get('password')",
"text": "Hi - This was an interesting read although it raised a few questions.You are using BasicDBObject which is part of the legacy deprecated driver stack. It would be better if you examples used the modern API.More interestingly, In 10 years of working with customers at MongoDB I’ve never seen anyone concatenating strings OR using $where in production. Both of which as you rightly point out are such major antipatterns my assumption is that no-one does it, have you seen this mistake made ($where is often even disabled at the server side)The cases of NoSQL injection I’ve seen have typically happened when input from a web service isn’t sanitised with a JS backend so somethign as simple aslet password = resposne.get('password')Assuming password will be a string but it could also be any other valid JSON and treated as such.",
"username": "John_Page"
}
] | How to prevent NoSQL injection security vulnerabilities | 2022-08-09T15:16:49.844Z | How to prevent NoSQL injection security vulnerabilities | 3,862 |
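The reply above points at the real-world injection path: a request value the backend assumes is a string but that may be arbitrary JSON. Below is a minimal Node.js/Express sketch of that failure mode and a type-check fix; the route, db handle and "users" collection are illustrative assumptions, not taken from the NIVA project.

```javascript
// Hypothetical Express route; the db handle and "users" collection are assumptions.
app.post("/login", async (req, res) => {
  const { user, password } = req.body;

  // Vulnerable version: if password arrives as {"$ne": null}, the filter can match
  // a document without knowing the password.
  // const account = await db.collection("users").findOne({ user, password });

  // Safer: reject anything that is not a plain string before it reaches the query.
  if (typeof user !== "string" || typeof password !== "string") {
    return res.status(400).send("invalid input");
  }
  const account = await db.collection("users").findOne({ user, password });
  res.send(account ? "ok" : "denied");
});
```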
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "details.insertOne()const { request, response } = require('express')\nconst express = require('express');\nconst hbs = require('hbs');\nconst app = express();\nconst mongoose = require(\"mongoose\");\n\nconst routes = require('./routes/main');\nconst Detail = require('./models/Detail');\n\n// /static/css/style.css\napp.use('/static', express.static(\"public\"))\n// app.use(express.static(\"public\"))\n\n\napp.use('', routes)\n\n// Template Engine HBS \napp.set('view engine', 'hbs')\n// this is the path where our all HTML files are available\napp.set('views', 'views')\nhbs.registerPartials('views/partials')\n\n\n\n// MogoDB Connections\nmongoose.set(\"strictQuery\", false);\nmongoose.connect(\"mongodb://localhost/nodejslearning\", () => {\n console.log(\"database connected\") \n Detail.create(\n {\n brandName:\"Learn NodeJS\",\n brandIconUrl:\"/\",\n links:[\n {\n label:\"Home\",\n url:\"/\",\n },\n {\n label:\"Services\",\n url:\"/\",\n },\n \n ]\n }\n )\n})\n\n \napp.get('/', (request, response) => {\n response.send(\"Wow This is the data from our server\")\n})\n\napp.listen(process.env.PORT | 1111, () => {\n console.log('our website server is running now')\n})\n\n\n",
"text": "Hi All,I just started to learn mongoDB and node js.I am facing a issue during learning the first time setup with nodejs and mongoDB.onst err = new MongooseError(message);\n^MongooseError: Operation details.insertOne() buffering timed out after 10000msthis is my app.js file data with that i am getting this error.",
"username": "Pervez_Alam"
},
{
"code": "",
"text": "after updating 127.0.0.1 in place of localhost this problem is resolved for me.",
"username": "Pervez_Alam"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongooseError: Operation `details.insertOne()` buffering timed out after 10000ms | 2023-02-11T15:16:48.937Z | MongooseError: Operation `details.insertOne()` buffering timed out after 10000ms | 1,439 |
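The fix reported above was connecting to 127.0.0.1 instead of localhost (newer Node.js versions may resolve localhost to the IPv6 address ::1, where mongod is often not listening). A minimal sketch of the adjusted Mongoose connection, reusing the database name from the thread:

```javascript
const mongoose = require("mongoose");

mongoose.set("strictQuery", false);
// Connect over IPv4 explicitly; "nodejslearning" is the database name used in the thread.
mongoose
  .connect("mongodb://127.0.0.1:27017/nodejslearning")
  .then(() => console.log("database connected"))
  .catch((err) => console.error("connection failed", err));
```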
null | [
"java",
"atlas-cluster",
"transactions"
] | [
{
"code": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<persistence version=\"2.1\" xmlns=\"http://xmlns.jcp.org/xml/ns/persistence\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemalocation=\"http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd\">\n <persistence-unit name=\"mongo-test\" transaction-type=\"RESOURCE_LOCAL\">\n <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>\n <properties>\n <property name=\"eclipselink.target-database\" value=\"org.eclipse.persistence.nosql.adapters.mongo.MongoPlatform\"/>\n <property name=\"eclipselink.nosql.connection-spec\" value=\"org.eclipse.persistence.nosql.adapters.mongo.MongoConnectionSpec\"/>\n <property name=\"eclipselink.nosql.property.mongo.port\" value=\"27017\"/>\n <property name=\"eclipselink.nosql.property.mongo.host\" value=\"localhost\"/>\n <property name=\"eclipselink.nosql.property.mongo.db\" value=\"xrn-testing\"/>\n <property name=\"eclipselink.logging.level\" value=\"FINEST\"/>\n </properties>\n </persistence-unit>\n</persistence>\nmongodb+srv://<user>:<password>@test-cluster-0.xxxx.mongodb.net/?retryWrites=true&w=majority\n",
"text": "I would like to connect to my Atlas cluster using eclipselinks nosql feature. I manage to connect to a local mongodb using this persistance.xml:However from Atlas I just get a connection string which looks like this:Question: How do I need to set up my persistence.xml to connect to the cluster. In particular:Or does anybody have an example persistence.xml for an Atlas cluster?Greetings,\nMarcus",
"username": "Marcus_Blumel"
},
{
"code": "<property name=\"eclipselink.nosql.property.mongo.port\" value=\"27017\"/>\n<property name=\"eclipselink.nosql.property.mongo.host\" value=\"localhost\"/>\n<property name=\"eclipselink.nosql.property.mongo.db\" value=\"xrn-testing\"/>\nlocalhostmongoshtest-cluster-0.xxxx.mongodb.net",
"text": "Hi @Marcus_Blumel and welcome to the MongoDB community forum!!Question: How do I need to set up my persistence.xml to connect to the cluster. In particular:The localhost connection you advised was working may be due to the above configuration details (and possibly more). However, this appears to be more of a eclipselink question rather than a MongoDB question. The connection string provided is generally for the official MongoDB drivers, MongoDB Compass or mongosh.You may wish to raise this in the Eclipse forum or possibly Stack Overflow to see if the software is capable of using the DNS Seed List Connection Format connection string example you provided.However, to answer your questions:what’s the port? Is it still 27017? Do I need to provide it at all?In saying so, for the port specific question, I would refer to the following Connecting to a Database Deployment documentation which includes the port details for Atlas.is test-cluster-0.xxxx.mongodb.net the correct host? What about the query string?Based off your connection string, the test-cluster-0.xxxx.mongodb.net value is most likely the SRV (not inclusive of the prefix). More details regarding the SRV record within DNS Seed List Connection Format can be detailed on the following topic reply.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
}
] | Connection with Eclipselink (JPA) to Atlas | 2023-01-29T18:49:50.597Z | Connection with Eclipselink (JPA) to Atlas | 1,259 |
null | [
"node-js"
] | [
{
"code": "",
"text": "I’m trying to add triggers to an object which has an embedded array of documents. How can I check if a new document has been created in the array. I’m making an app in SwiftUI and I’m fairly new to mongoDB. So from the documentation it seems like I can set up a trigger for an update operation and then check for the field updated. But the documents in the array can be edited so the update operation might not always be reliable to only look for documents created and can return an object that exists but has been edited. How can I work around this? Is there something in the documents I’m missing. Any help is appreciated, Thank you in advance.",
"username": "Timothy_Tati"
},
{
"code": "",
"text": "Hi @Timothy_Tati and welcome to the MongoDB community forum!!For better understanding of the requirement, it would be helpful if you confirm if my understanding is correct.But the documents in the array can be edited so the update operation might not always be reliable to only look for documents created and can return an object that exists but has been edited.Are you looking for a trigger operation to insert in an array field for any new element inserted and not enabling the trigger on updating the existing element of the array?Could you also confirm the below understanding for more clarity:I can set up a trigger for an update operationIs the trigger same as Atlas Triggers? If yes, could you also share the documentation link which you are referring for creating the Atlas trigger.Best regards\nAasawari",
"username": "Aasawari"
},
{
"code": " Post: { comments: [Comments] } // Linked Objects\nComment: { replies: [Reply] } // Embedded Objects\n",
"text": "Hello,Thank you for getting back.Sorry, There is no relevance of SwiftUI. Just wanted to establish that I was new to the web dev/backend side of things. Yes it’s an atlas trigger from the official docs https://www.mongodb.com/docs/atlas/app-services/triggers/database-triggers/This is how the schema is set up. Every Post has a an Array of Comment ObjectsAnd Every Comment has an array of embedded replies.The way the trigger is set up is that whenever the comment is updated. It runs a “newReply” function using the last element of the array. But I realised that if somebody edits their earlier reply that would count as an update operation as well, isn’t it? Needlessly running the “newReply” func on the same last element again. Currently I’m using the pre-image to compare the two but it feels too intensive for a simple task and I couldn’t find any docs where I could find such examples.Are you looking for a trigger operation to insert in an array field for any new element inserted and not enabling the trigger on updating the existing element of the array?So yes is there a way to set up triggers which ONLY gets triggered if there’s\na new insert in the array( without using the doc pre-Image, if possible) and not when the array is updated.Also if yes, will that work for all kinds of arrays irrespective of type?",
"username": "Timothy_Tati"
},
{
"code": "",
"text": "Hi @Timothy_Tati and thank you for sharing the above information.As schema design in MongoDB plays an important role and depending on the requirement, the solution may differ depending on the actual document you’re working with.\nTherefore, it would be helpful if you could share a sample document for the above schema which would help me replicate and provide solution if it is possible.Let us know if you have any further questions .Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hello Again. I think I might be complicating the question by providing the Post Schema. I’d like to ask a simple question, Is there a way to set up a trigger only for an array insertion? Whatever the value the array might hold, Documents, String, Int et al. So the trigger would go off whenever a new element is appended into the array and nothing else. Is that possible?",
"username": "Timothy_Tati"
},
{
"code": "",
"text": "Hi @Timothy_Tati and welcome to the MongoDB community forum!!The Triggers in MongoDB use the concept of change streams which works on the per document basis.\nTherefore, for insert operation, it would be triggered when en entire new document is inserted into the collection.Based on the above requirement, I tried to create a sample document with objects and array and tried to create triggers for insert and update operation individually.\nBased on the documentation, the insert trigger is initiated when a new document is inserted whereas the update trigger is initiated when any sort of update is made to the document.\nModifying an array by adding new elements as per your requirement would fall under an update workflow.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
}
] | Insert Trigger for Embedded Objects | 2023-01-14T17:28:27.911Z | Insert Trigger for Embedded Objects | 1,302 |
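Since appending to an embedded array is still just an update at the document level, the usual workaround is the one the poster was already circling: compare the change event against the document pre-image inside the trigger and only act when the array actually grew. A rough sketch of such an Atlas Trigger function, assuming the trigger has both "Full Document" and "Document Preimage" enabled and a comments array as in the thread:

```javascript
// Atlas database trigger function (sketch). Assumes "Full Document" and
// "Document Preimage" are enabled so both snapshots below are populated on updates.
exports = function (changeEvent) {
  const before = changeEvent.fullDocumentBeforeChange || {};
  const after = changeEvent.fullDocument || {};

  const oldLen = Array.isArray(before.comments) ? before.comments.length : 0;
  const newLen = Array.isArray(after.comments) ? after.comments.length : 0;

  // Act only when the array gained elements, i.e. something was appended;
  // edits to existing elements keep the length unchanged and are ignored.
  if (newLen > oldLen) {
    const appended = after.comments.slice(oldLen);
    console.log(`Detected ${appended.length} new element(s)`);
    // ...run the "newReply" logic for `appended` only...
  }
};
```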
null | [] | [
{
"code": "",
"text": "So we are planning to migrate atlas cloud from azure to aws, so after doing that will there be a new connection string created ?",
"username": "buvanesh_j"
},
{
"code": "",
"text": "Hi @buvanesh_j - Welcome to the community I would confirm this with the Atlas in-app chat support team. In saying so, there are some cases where the connection string may change.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Is Modifying the Cloud Provider & Region will result in new connection string? | 2023-02-07T15:38:35.800Z | Is Modifying the Cloud Provider & Region will result in new connection string? | 1,035 |
null | [] | [
{
"code": "",
"text": "Thank you all for suggestions on Upgrade Mongo ver and machine OSI have successfully updated the cluster to 4.4. Now, before replacing CentOS nodes with Ubuntu nodes I would like to test the environment for some daysI have a small issue with backups: with mongodump I backup all my DBs on a separate storage. Normally I take backup also of system DBs (admin, local, config). After the upgrade, however, config db is not backuped anymore (no output, no errors), while local db give me this error “error counting local.replset.initialSyncId: (Unauthorized) not authorized on local to execute command { count: “replset.initialSyncId”, lsid: { id: UUID(“aa0aaa23-b78f-45a6-a20d-9843ba87bfc9”) }, $clusterTime: { clusterTime: Timestamp(1674352920, 1), signature: { hash: BinData(0, 3B344CF96933DD76FF714F55F5502D9FF911ACE1), keyId: 7130904677943607311 } }, $db: “local”, $readPreference: { mode: “primary” }”Anyone had experienced those issues?Thanks in advanceBest Regards",
"username": "jack_c"
},
{
"code": "",
"text": "Hello!\nDoes anyone have a suggestion?Thanks in advance.",
"username": "jack_c"
},
{
"code": "",
"text": "Hello!\nAny suggestion?Thanks guys!",
"username": "jack_c"
},
{
"code": "",
"text": "What are the full command line of the mongodump(s)?",
"username": "chris"
}
] | Issue backing up system databases with mongodump after upgrading to 4.4 | 2023-01-22T10:37:07.781Z | Issue backing up system databases with mongodump after upgrading to 4.4 | 710 |
null | [] | [
{
"code": "",
"text": "Hi, some of the collections are huge and the database is growing causing space issues on the drive. Developers said to delete data older than one year. Not sure how to do that. What is the best way to delete and also how do we compact them?",
"username": "Ana"
},
{
"code": "collectionNamedateFieldISODatecompactcollectionNamecompactcompact",
"text": "Hello @Ana ,delete data older than one year. Not sure how to do that. What is the best way to deleteAssuming you have a “date created” field or similar in your documents, to delete documents older than one year, you can use the following command in MongoDB’s shell:db.collectionName.deleteMany({dateField: {$lt: ISODate(“YYYY-MM-DDTHH:mm:ss.sssZ”)}})In this command, collectionName should be replaced with the name of your collection and dateField should be replaced with the name of the field in the document that holds the date. The ISODate expression specifies the date before which documents should be deleted.If this is not how your document looks like, please provide some example documents, and a method to determine the age of the document in question.how do we compact them?To compact a MongoDB collection and reclaim disk space, you can use the compact command. This will compact the extents of the collection, defragmenting and compacting its underlying data files:db.runCommand({compact: ‘collectionName’})Replace collectionName with the name of the collection you want to compact. Note that the compact command may take a long time to complete for large collections, and it may also cause increased CPU and I/O usage during the process. Note that the compact command is not guaranteed to free up disk space as per the disk space section in the compact documentation page.Note: Please update and test the queries as per your use-case and requirements in your test environment before making any changes in production. Always have an up-to-date backup before performing server maintenance such as the compact operation.Please refer compact command documentation to learn more about it.If these solutions doesn’t work for you, please provide more details, such as your MongoDB version, your deployment topology, your data size and remaining disk space, and any other information that may help.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Thank you so much. But fs.chunks is the collection that is huge but it doesn’t have date column. Now how do I delete? fs.files has upload date. Please advise.\n",
"username": "Ana"
},
{
"code": "// Defines the date range from which you want to remove documents\nconst startDate = new Date(\"2021-01-01\");\nconst endDate = new Date(\"2021-12-31\");\n\n// Finds all files with upload date within specified range\nvar cursor = db.fs.files.find({ \"uploadDate\": { $gte: startDate, $lte: endDate } });\n\n// Loop through each document found and delete it\nwhile (cursor.hasNext()) {\n var file = cursor.next();\n db.fs.chunks.deleteMany({ \"files_id\": file._id });\n db.fs.files.deleteOne({ \"_id\": file._id });\n}\n",
"text": "Hi @Ana, an approach to this would be something like:Removing data from GridFS must be done this way, as GridFS stores the large file data as smaller chunks in the “fs.chunks” collection, and the file metadata in a single entry in the “fs.files” collection.Therefore, to remove an entire file, you must first remove the corresponding record in the “fs.files” collection, and then remove all fragments of that file in the “fs.chunks” collection. This ensures that all data in the file is removed correctly.Also, to ensure that all fragments of a file are removed, it is important to search for the file using “filename” and remove all fragments using “files_id”.After that you can run the: db.runCommand({compact: ‘collectionName’}), considering the points mentioned by @Tarun_Gaur.Best",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "Thank you so much. Will try this.",
"username": "Ana"
},
{
"code": "",
"text": "Feel free to share any problems you have!",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "db.runCommand({compact: ‘collectionName’})Hello, I have ran the delete and then compact. I see difference in the collection sizes but can’t see any difference in the drive size. Please check before and after.\n\n",
"username": "Ana"
},
{
"code": "",
"text": "Oh…storage size is the same but the size is different. Any idea what else I can do now?On test, storage size also has gone down after purge and compact significantly.",
"username": "Ana"
},
{
"code": "",
"text": "Tried to run Repair but got an error TypeError: db.repairDatabase is not a function. Version of mongo is 4.2. Appreciate your help.",
"username": "Ana"
}
] | Database Shrink and data deletion | 2023-02-01T16:48:00.827Z | Database Shrink and data deletion | 2,182 |
null | [
"crud"
] | [
{
"code": "{\n 'asset_name': 'asset1',\n 'asset_unique_name': 'asset1_unique',\n 'connections': [\n {\n 'connection_name': 'conn1',\n 'connection_unique_name': 'conn1_unique',\n 'id': 'conn_id_1',\n 'data': {\n 'value': 7,\n 'id': 'data_id_1'\n }\n },\n {\n 'connection_name': 'conn2',\n 'connection_unique_name': 'conn2_unique',\n 'id': 'conn_id_2',\n 'data': {\n 'value': 1,\n 'id': 'data_id_2'\n }\n },\n {\n 'connection_name': 'conn3',\n 'connection_unique_name': 'conn2_unique',\n 'id': 'conn_id_3',\n 'data': {\n 'value': 12,\n 'id': 'data_id_3'\n }\n }\n ]\n}\ndb.getCollection('assets').update(\n {'connections':\n {'$elemMatch':\n {'id':\n {'$in': ['conn_id_2', 'conn_id_3']}\n }\n }\n },\n {'$pull':\n {'connections':\n {'id': \n {'$in':['conn_id_2', 'conn_id_3']}\n }\n }\n\n }\n)\ndb.getCollection('assets').update(\n {},\n {'$pull':\n {'connections':\n {'quick_id': \n {'$in':['conn_id_2', 'conn_id_3']}\n }\n }\n\n }\n)\n",
"text": "Hi,\nTrying to remove multiple items from multiple documents using $pull. here is the structure of the document in assets collection:I want to remove elements from the connections array by id from all the documents in the collection. here is what I’m trying:I’m getting E11000 duplicate key error collection\nindex: connections.connection_unique_name_1_connections.data.id_1 dup key: { connections.connection_unique_name: null, connections.data.id: null }I assumed I have some keys with null values, I searched for connection_unique_name = null, I get 0 results.\nI tried using updateMany, setting upsert=false, multi=true. didn’t work, does not return an error, but returned 0 results.\nI tried also putting empty find object {} - still 0 results, nothing was removed.What am I missing?\nThanks!",
"username": "aleph-0"
},
{
"code": "{\n 'asset_name': 'asset1',\n 'asset_unique_name': 'asset1_unique',\n 'connections': [\n ]\n}\n{\n 'asset_name': 'asset1',\n 'asset_unique_name': 'asset1_unique',\n 'connections': [\n {\n 'connection_name': 'conn2',\n 'connection_unique_name': 'conn2_unique',\n 'id': 'conn_id_2',\n 'data': {\n 'value': 1,\n 'id': 'data_id_2'\n }\n },\n {\n 'connection_name': 'conn3',\n 'connection_unique_name': 'conn2_unique',\n 'id': 'conn_id_3',\n 'data': {\n 'value': 12,\n 'id': 'data_id_3'\n }\n }\n ]\n}\n",
"text": "Hello @aleph-0 ,Welcome to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet, were you able to find a solution?I’m getting E11000 duplicate key error collection\nindex: connections.connection_unique_name_1_connections.data.id_1 dup key: { connections.connection_unique_name: null, connections.data.id: null }The error message specifically mentions the index “connections.connection_unique_name_1_connections.data.id_1” and the duplicate key values of “{ connections.connection_unique_name: null, connections.data.id: null }”. This means that you are trying to update a document with a “connections.connection_unique_name” field value of “null” and a “connections.data.id” field value of “null” into a collection that has a unique index defined on these two fields, and a document with these exact field values already exists in the collection.When I tried your query with the document you provided, it was working as expected.I believe that in your scenario, you have documents as below:And when you ran your query, it tried to update second example document but gave that respective error as two documents cannot have same values(means both the fields are missing hence it is showing it’s value as null).Could you please confirm if such cases exist in your collection?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $pull giving duplicate key error | 2023-02-03T16:59:32.647Z | $pull giving duplicate key error | 853 |
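The error above is consistent with the unique compound index named in the message: documents whose indexed paths are absent are indexed with null keys, so two such documents (for example, after $pull leaves the paths unpopulated) collide. A small mongosh repro; the index mirrors the one in the error message, the data is invented:

```javascript
// mongosh repro of the {null, null} collision described above.
db.assets.createIndex(
  { "connections.connection_unique_name": 1, "connections.data.id": 1 },
  { unique: true }
);

// Neither document populates the indexed paths, so both are indexed as {null, null}:
db.assets.insertOne({ asset_name: "a1" });   // ok
db.assets.insertOne({ asset_name: "a2" });   // E11000 duplicate key on {null, null}
// A $pull that leaves those paths unpopulated runs into the same constraint.
```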
null | [
"replication"
] | [
{
"code": "",
"text": "Hello community,in my company we have a Replica Set Cluster, with 3 members, using Mongo 4.0.28 and based on Centos 7.We would like to upgrade our systems/softwares, porting the machines from Centos 7 to Ubuntu 20, and mongo from 4.0.28 to 4.4 (passing by 4.2).Is any of this two scenarios i thought of vaiable?Case 1:3 members with centos 7 and mongo 4.0.28\nupdate whole cluster to mongo 4.4, then add a new member with ubuntu 20 and mongo 4.4 and as last remove one member with centos 7\nrepeat for the other 2 members of the clusterCase 2:3 members with centos 7 and mongo 4.0.28\nstay with the cluster as mongo 4.0.28, then add a new member with ubuntu 20 and mongo 4.4 and as last remove one member with centos 7\nrepeat for the other 2 members of the clusterIf you have any different suggestion i’m all ears.Thanks!",
"username": "jack_c"
},
{
"code": "",
"text": "First if you are going to upgrade you can’t skip a major version you must upgrade them in order so for example. 4.0 → 4.2 → 4.4 this is outlined in the MongoDB documentation.Are you trying to do this with 0 downtime or is it acceptable? Also it’s helpful to know how much data is in your clusters because that will affect your options.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Thank you for fast answer.\nYes, I scheduled update to 4.2 first If it’s possibile I’d prefer no downtimeEach node of replicaset host about 500 GB of data",
"username": "jack_c"
},
{
"code": "",
"text": "Hi @jack_c,\nif you want to take a little bit of a risk, you could install a replica set with alredy the version 4.4 and migrate directly data from 4.0 (centos) to the 4.4 (ubuntu).\nDISCLAIMER: is not 100 percent guaranteed to work, but if you have a test environment, you could give it a try to verify.\nOtherwise the best way is that have suggested @tapiocaPENGUIN !!Best Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "I think there would need to be downtime in the application or a code change at some point. Because even if you successfully add all the new hosts to the replica set the application will need to get the new connection strings at some point. Because I’m assuming the hostnames are going to to change.Case 1: I believe this should work with no issues\nCase 2: I wouldn’t do because you have two versions of mongodb and it just feels more awkward to me (if that is a valid reason )\nCase 3: Setup a new cluster on 4.4, upgrade and set fcv on old for 4.4, do a maint period and mongodump/restore to the new cluster, switch application conn string, test. This way if something fails on the new cluster you haven’t touched the old cluster, so you just failback your app to the old connection string and you are back and running.",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "4 posts were split to a new topic: Issue backing up system databases with mongodump after upgrading to 4.4",
"username": "Stennie_X"
}
] | Upgrade Mongo ver and machine OS | 2023-01-13T14:57:48.686Z | Upgrade Mongo ver and machine OS | 1,310 |
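For reference, each binary hop in the 4.0 → 4.2 → 4.4 path is normally followed by raising the feature compatibility version once the replica set is stable on the new binaries (Case 3 above mentions setting the FCV). A mongosh sketch of the check and bump, run against the primary:

```javascript
// Check the current FCV, then raise it only after all members run the new binaries
// and the deployment has been verified.
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });

db.adminCommand({ setFeatureCompatibilityVersion: "4.2" });
// ...later, after the 4.4 binary upgrade:
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" });
```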
null | [
"node-js"
] | [
{
"code": "",
"text": "Hello have any of you guys have any advice in solving a “user is not allowed to do action [find] on [test.postmessages]”}\" error",
"username": "Kang_Lin"
},
{
"code": "",
"text": "Hi @Kang_Lin, welcome to the MongoDB Community forums. Are you authenticated with a user that has permissions to read from the database/collection that is giving you the error?",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hello, @Doug_Duncan thanks for responding I was able to solve the problem thank you.",
"username": "Kang_Lin"
},
{
"code": "",
"text": "Is it possible to explain on how to solve this problem please. I am getting the same error in my project.Thank you",
"username": "fahad_Mohamed"
},
{
"code": "",
"text": "The standard readWriteAnyDatabase@admin worked",
"username": "fahad_Mohamed"
}
] | Getting a "user is not allowed to do action [find]" | 2022-09-18T00:55:12.953Z | Getting a “user is not allowed to do action [find]” | 8,498 |
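The fix mentioned above was granting the readWriteAnyDatabase role, which is defined on the admin database. A minimal mongosh sketch of creating such a user; the username is a placeholder:

```javascript
// Run in mongosh while authenticated as a user administrator.
db.getSiblingDB("admin").createUser({
  user: "appUser",                 // placeholder name
  pwd: passwordPrompt(),           // prompts instead of hard-coding the password
  roles: [{ role: "readWriteAnyDatabase", db: "admin" }]
});

// Or, for an existing user:
// db.getSiblingDB("admin").grantRolesToUser("appUser",
//   [{ role: "readWriteAnyDatabase", db: "admin" }])
```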
null | [
"dot-net",
"field-encryption"
] | [
{
"code": "{\n \"ArrayField\": [\n {\n \"NonEncryptedField\": \"NonEncryptedData\",\n \"EncryptedField\": \"EncryptedData\"\n },\n {\n \"NonEncryptedField\": \"NonEncryptedData\",\n \"EncryptedField\": \"EncryptedData\"\n }\n ]\n}\n",
"text": "Hello, I’m trying to encrypt a specific field in an object, that is inside of an array, using CSFLE.\nFor example:I cannot encrypt the whole data because of heavy write/read operations.\nIs it possible to encrypt that data? If so, Is there any reference for that in C# ?",
"username": "Skr_Official"
},
{
"code": "",
"text": "Hi , Im upping this thread, does anyone has an answer?",
"username": "Skr_Official"
},
{
"code": "",
"text": "Hello Skr_Official,Thanks for the question. In our CSFLE documentation there is a section for what we call Explicit Encryption, where you manually specify the encrypt and decrypt calls. At the top left of the page is a “Select your language” box and you can choose C# to get specific code examples.I hope that helps,Cynthia",
"username": "Cynthia_Braund"
},
{
"code": "",
"text": "Is it possible only with Explicit Encryption?\nI cant do it using Auto Encryption?basically I am migrating from an object field , to an array of objects, so I want to keep the usage as same as possible.\nThank you very much ",
"username": "Skr_Official"
},
{
"code": "",
"text": "Hi Skr_Official,It is not possible via automatic encryption, the only way to do it is via explicit encryption.Thanks,Cynthia",
"username": "Cynthia_Braund"
}
] | CSFLE of specific fields nested in array of objects | 2023-02-06T09:54:30.766Z | CSFLE of specific fields nested in array of objects | 1,252 |
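For reference, explicit encryption means encrypting each sensitive value yourself through a ClientEncryption object before writing (and decrypting after reading), which is what allows targeting one field inside array elements. A rough sketch using the Node.js driver rather than C# for brevity (the C# driver exposes an equivalent ClientEncryption API); the key vault namespace, local demo key, and collection names are assumptions:

```javascript
// Sketch only; not the thread's C# code. Depending on driver version, ClientEncryption
// is exported from "mongodb" itself or from "mongodb-client-encryption".
const { MongoClient } = require("mongodb");
const { ClientEncryption } = require("mongodb-client-encryption");
const crypto = require("crypto");

async function insertOrder(rawLines) {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const encryption = new ClientEncryption(client, {
    keyVaultNamespace: "encryption.__keyVault",                // assumption
    kmsProviders: { local: { key: crypto.randomBytes(96) } },  // demo-only local key
  });
  const keyId = await encryption.createDataKey("local");

  // Encrypt only the sensitive field of each array element before writing.
  const lines = [];
  for (const line of rawLines) {
    lines.push({
      ...line,
      EncryptedField: await encryption.encrypt(line.EncryptedField, {
        keyId,
        algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
      }),
    });
  }
  await client.db("test").collection("orders").insertOne({ ArrayField: lines });
  await client.close();
}
```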
null | [
"queries",
"node-js",
"mongoose-odm",
"indexes"
] | [
{
"code": " const results = await Offer.find({\n validFrom: { $gte: new Date(1675635799000) },\n }).sort({ price_decimal: \"desc\" }).limit(4);\nprice_decimal_-1_validFrom_-1.explain(\"allPlansExecution\")",
"text": "We have a collection of around 60 million docs.When the date in the following query increases and reaches a certain timestamp, the query takes much longer (> 2min compared to <1s).It’s not clear to me what is going on internally.\nEven if some malformed docs were causing this, it doesn’t make sense that the execution takes longer when less docs need to be returned (when the $gte timestamp is increased).Note that if we remove sorting from the query there is no issue and it takes < 1s.The collection has 13 indexes and the one used for this query is price_decimal_-1_validFrom_-1Here is a diff after running .explain(\"allPlansExecution\") for both queries:\nJSONCompare - The Advanced JSON Linting & Comparison ToolAny help would be appreciated ",
"username": "FreedomCoder"
},
{
"code": "validFrom_-1_price_decimal_-1.hint({validFrom: -1, price_decimal: -1})",
"text": "Ok, apparently the indexes that we had were not selective enough for recent data.\nThe solution was to create a new index validFrom_-1_price_decimal_-1\nand force it with .hint({validFrom: -1, price_decimal: -1}) when querying recent data.",
"username": "FreedomCoder"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Changing a date in the query by 1ms increases examined docs by a factor of 300! | 2023-02-09T21:36:53.208Z | Changing a date in the query by 1ms increases examined docs by a factor of 300! | 814 |
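For reference, a mongosh sketch of the fix described above, using the field names from the thread (the collection name is assumed):

```javascript
// Create the more selective index with validFrom leading, then force it for recent-data queries.
db.offers.createIndex({ validFrom: -1, price_decimal: -1 });

db.offers
  .find({ validFrom: { $gte: new Date(1675635799000) } })
  .sort({ price_decimal: -1 })
  .limit(4)
  .hint({ validFrom: -1, price_decimal: -1 });
```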
[
"server",
"installation"
] | [
{
"code": "",
"text": "Hi, everyone.I’m learning mongodb for web designing. As you see in the picture, I installed [email protected] properly and it seems to work well.\n\nEkran Resmi 2023-02-12 13.11.121448×836 62.7 KB\nIn the picture below, when I wrote “mongod” in terminal, it doesn’t work.\n\nEkran Resmi 2023-02-12 13.11.181444×830 35.1 KB\nDo you have any suggestions for my problem?Note : I have mac-Ventura and I installed mongodb for using homebrew.",
"username": "OZGUN_TEKSOY"
},
{
"code": "",
"text": "Hi @OZGUN_TEKSOY,\nTo start the instance, as mentioned from the documentation, you Need to follow this step:“To run MongoDB (i.e. the mongod process) manually as a background process, run:”\nhttps://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-os-x/#:~:text=To%20run%20MongoDB%20(i.e.%20the%20mongod%20process)%20manually%20as%20a%20background%20process%2C%20run%3ARegards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Your service is already up.No need to run mongod again\nJust run mongosh and see if you can connect\nmongod not found could be path issue\nIdeally command should have worked to bring up default mongod",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "thank you @Ramachandra_Tummala . I’m trying to learn for watching a video belong to 2018. I guess, there are important differences between 2018 mongodb and 2023 mongodb",
"username": "OZGUN_TEKSOY"
}
] | Brew install [email protected] | 2023-02-12T10:17:17.252Z | Brew install [email protected] | 1,203 |
null | [
"queries"
] | [
{
"code": "linearizablemajoritylinearizablewrite concern majoritylinearizablemajority commitlinearizablew:majority point",
"text": "I was reading the manual and doing a deep dive into how MongoDB provide consistency.Regarding read concerns linearizable and majority I can only find one scenario where it would be preferable to use linearizable:Assuming that writes are made with write concern majority and read preferences to Primary, I want to:The only reason to use linearizable would be to avoid a double primary situation due to network partition where both primaries would believe to have the latest majority commitIn this scenario, only a linearizable read would be able to return an error if reading from the former primary due to the former primary not being able to reach secondaries to confirm w:majority point.Is this reasoning correct?",
"username": "Adriano_Tirloni"
},
{
"code": "linearizablemajority commit\"linearizable read\"\"majority\"linearizable{ w: \"majority\" }",
"text": "Hey @Adriano_Tirloni,The only reason to use linearizable would be to avoid a double primary situation due to network partition where both primaries would believe to have the latest majority commitYes, in the scenario of a network issue, a \"linearizable read\" can be more useful. A linearizable read ensures that the latest committed write is returned. Unlike \"majority\", a linearizable read concern confirms with secondary members that the read operation is reading from a primary that is capable of confirming writes with { w: \"majority\" } write concern. If a linearizable read is performed on the former primary, it will return an error if the former primary is unable to reach a majority of secondaries to confirm the write concern “majority” point. This helps to ensure that only the latest committed data is returned and that no rolled-back data is seen. Linearizable read concern guarantees only apply if read operations specify a query filter that uniquely identifies a single document.Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "linearizable",
"text": "Thanks for the reply Satyam, I was only figuring out the use case for linearizable, due to the speed impact.",
"username": "Adriano_Tirloni"
},
{
"code": "",
"text": "So that means, suppose network is not an issue (always only one primary exists which is being read from), linearizable and majority read doesn’t have any difference except performance? (e.g. results in these two would always be the same).",
"username": "Kobe_W"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Read Concern: Linearizable vs Majority | 2023-02-08T14:53:00.420Z | Read Concern: Linearizable vs Majority | 782 |
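For reference, a linearizable read must use a filter that uniquely identifies one document, and it is usually paired with maxTimeMS so it fails rather than hangs when the primary cannot reach a majority. A mongosh sketch; the collection and filter are illustrative:

```javascript
// Linearizable read: unique filter plus maxTimeMS, issued against the primary.
db.runCommand({
  find: "accounts",
  filter: { _id: 42 },                      // must uniquely identify a single document
  readConcern: { level: "linearizable" },
  maxTimeMS: 10000
});
```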
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "Hi,\nI’m in the process of migrating to Realm 10.0.\nI used to have realm functionality for reseting password. That looks to be gone. I now need to implement something myself, maybe handle the reset in an app deep link. Is that correct?",
"username": "donut"
},
{
"code": "",
"text": "Hi @donut, is this for iOS or Android?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "I also have the same problem here.",
"username": "Panashe_Makomo"
},
{
"code": "",
"text": "iOS. Simply put I cannot enable Email/Passowrd authentication provider without providing a password reset URL. Which unless I am misunderstanding means I have to handle the reset myself.",
"username": "donut"
},
{
"code": "callResetPasswordFunction",
"text": "Hi @donut & @Panashe_Makomo,in the Realm UI, you can select to use a Realm function for password resets rather than an email + reset URL:\n\nimage1122×328 29.8 KB\nIf you opt to create a new reset function, the template includes comments with the required boilerplate code – which you can use or build upon.You can invoke the reset function from the mobile iOS app by calling callResetPasswordFunction (EmailPasswordAuth Extension Reference)",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "I see this option, but I don’t understand. Without an email sent, how does one reset his own password?",
"username": "donut"
},
{
"code": "",
"text": "I think the function will be invoked from the client sdk, wehn you run the client sdk code. However i also have the same issue that the functions are not working. I am using Android btw.",
"username": "Panashe_Makomo"
},
{
"code": "",
"text": "Your app (e.g. through a web page) is responsible for deciding whether or not the user requesting the reset is really who they claim to be – the assumption being that the user may have lost their password.The simplest option to implement is to have Realm send the confirmation email - in which case, you have to provide the target web page where the user can be reset (or for a mobile app, you can provide a deep/universal link so that it’s your app that does the reset.)If you don’t want the automated email then you can provide a Realm function instead which can handle the confirmation howere you like. The function will indicate the result as either:The docs have more details: https://docs.mongodb.com/realm/authentication/email-password/#run-a-password-reset-function",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Dear Andrew,For the option where we get Realm to send the confirmation email, is there a sample of a simple target web page that we can use for this purpose? I am not very sure what is the role of this web page and how to go about linking the page to the password reset process.Thank you.",
"username": "Hao_Ming"
},
{
"code": "",
"text": "Hi Could anybody give a sample target webpage to reset the password? I got the error “This XML file does not appear to have any style information associated with it. The document tree is shown below.” Also the xml expires date has expired.",
"username": "Marc_Mandis"
},
{
"code": ".onOpenURLawait app.emailPasswordAuth.resetPassword(token, tokenId, \"newPassword\");\n",
"text": "Hao_Ming,There is a lot of work behind the scenes that you will have to do! I also struggled with this for a few days. I am posting here to help anyone else who is searching for a solution to how to implement password reset with the email option. The doc’s are not very clear and the examples provided are sometimes very difficult to follow.I eventually figured out how this all works following Andrew’s suggestion.The simplest option to implement is to have Realm send the confirmation email - in which case, you have to provide the target web page where the user can be reset (or for a mobile app, you can provide a deep/universal link so that it’s your app that does the reset.)In order to do this you should have a basic understanding of the following:Essentially what happens is when your app user request a password reset, an email is sent to the users email account. That email will have a link to website, that you provide. Within the url of that link, there is a token parameter added and a tokenId parameter added to the complete url address.\nWhen you set up your app with universal link, it essentially allows your app to open automatically when the link is clicked on. So when the user requesting a password reset, clicks on the verification email, the universal link will automatically open the iOS app.\nWhen the app opens, you can use .onOpenURL to perform what ever actions you want, including using the token parameter and the tokenId parameter that was in the URL sent from the email. With the token parameter and the tokenId parameter available, you can now use these values and pass them to the function, along with a new password to use.This function will update the users password provided the token and tokenId have not expired, I believe they expire after 30 mins.",
"username": "Chris_Stromberg"
},
{
"code": "",
"text": "Hello Andrew,I am able to successfully reset a password with the confirmation email being sent and a deep link/universal link to my iOS app.How do you normally handle the reset when the universal link does not open the app? For example, the user is getting the confirmation email on a pc, that does not have an iOS app installed.Thanks.",
"username": "Chris_Stromberg"
}
] | Reset password in MongoRealm | 2021-05-17T12:19:25.589Z | Reset password in MongoRealm | 5,611 |
null | [
"java"
] | [
{
"code": "",
"text": "As part of the course \" Connecting to MongoDB in Java\", i have installed IntelliJ and trying to follow the steps given in the guide with MongoDB java drivers. But when i try to run connection command it says \"mvn’ not found or compile not found.\nI have configured pom.xml and Connection.java as mentioned. But in guide not mentioned where i should create Connection.java etc.\nSo it will be great if i have full guide step by step from which driver i have to use and where i have to create COnnection.java and where i have to run mvn etc. Please help.",
"username": "udhaya_kumar_Bagavathiappan"
},
{
"code": "mvn",
"text": "You need to install maven. That’s the mvn command.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "HI Jack,\nMy IntelliJ has Maven already. I have attached the plugin’s already installed in IntelliJ i have\nimage1730×777 84.9 KB\nAlso here i have given the screenshot of the erro i am getting while running command to connect mongodb\nimage1920×1080 316 KB\nA",
"username": "udhaya_kumar_Bagavathiappan"
},
{
"code": "mvnmvn%PATH%",
"text": "I don’t use IntelliJ (nor Windows) but the error makes it clear that, even though you have Maven support in IntelliJ, the mvn executable is not being found at the shell command line.If IntelliJ support for Maven includes a full mvn system, maybe you can find that executable and add it to the shell %PATH%. Otherwise, look on the Maven website for instructions on installing Maven on Windows.",
"username": "Jack_Woehr"
}
] | Connecting to MongoDB in Java | 2023-02-10T16:32:18.345Z | Connecting to MongoDB in Java | 779 |
null | [
"aggregation",
"queries",
"dot-net"
] | [
{
"code": "using System;\nusing System.Linq;\nusing System.Security.Authentication;\nusing MongoDB.Driver;\nusing MongoDB.Driver.Linq;\n\nvar settings = MongoClientSettings.FromUrl(new MongoUrl(\"mongodb://localhost:27017/test\"));\nsettings.SslSettings = new SslSettings {EnabledSslProtocols = SslProtocols.Tls12};\nsettings.LinqProvider = LinqProvider.V3;\nvar mongoClient = new MongoClient(settings);\nvar mongoDatabase = mongoClient.GetDatabase(\"test\");\nvar collection = mongoDatabase.GetCollection<OrderDao>(\"test\");\n\nvar query1 = collection\n .AsQueryable()\n .SelectMany(i => i.Lines)\n .GroupBy(l => l.ItemId)\n .Select(g => new ItemSummary\n {\n Id = g.Key,\n TotalAmount = g.Sum(l => l.TotalAmount)\n });\n\nvar query1txt = query1.ToString();\n\nConsole.WriteLine(query1txt);\nConsole.WriteLine(query1txt.Contains(\"$push\") ? \"Uses $push :(\" : \"No $push here, hurray!\");\n\nvar query2 = collection\n .AsQueryable()\n .GroupBy(l => l.Id)\n .Select(g => new ItemSummary\n {\n Id = g.Key,\n TotalAmount = g.Sum(l => l.TotalAmount)\n });\n\nvar query2txt = query2.ToString();\n\nConsole.WriteLine(query2txt);\nConsole.WriteLine(query2txt.Contains(\"$push\") ? \"Uses $push :(\" : \"No $push here, hurray!\");\n\npublic class OrderDao\n{\n public OrderLineDao[] Lines { get; set; }\n \n public decimal TotalAmount { get; set; }\n public Guid Id { get; set; }\n}\n\npublic class OrderLineDao\n{\n public decimal TotalAmount { get; set; }\n public Guid ItemId { get; set; }\n}\n\npublic class ItemSummary\n{\n public Guid Id { get; set; }\n public decimal TotalAmount { get; set; }\n}\naggregate([{ \"$unwind\" : \"$Lines\" }, { \"$project\" : { \"Lines\" : \"$Lines\", \"_id\" : 0 } }, { \"$group\" : { \"_id\" : \"$Lines.ItemId\", \"__agg0\" : { \"$sum\" : \"$Lines.TotalAmount\" } } }, { \"$project\" : { \"Id\" : \"$_id\", \"TotalAmount\" : \"$__agg0\", \"_id\" : 0 } }])\n\nNo $push here, hurray!\n\naggregate([{ \"$group\" : { \"_id\" : \"$_id\", \"__agg0\" : { \"$sum\" : \"$TotalAmount\" } } }, { \"$project\" : { \"Id\" : \"$_id\", \"TotalAmount\" : \"$__agg0\", \"_id\" : 0 } }])\n\nNo $push here, hurray!\ntest.test.Aggregate([{ \"$project\" : { \"_v\" : \"$Lines\", \"_id\" : 0 } }, { \"$unwind\" : \"$_v\" }, { \"$group\" : { \"_id\" : \"$_v.ItemId\", \"_elements\" : { \"$push\" : \"$_v\" } } }, { \"$project\" : { \"_id\" : \"$_id\", \"TotalAmount\" : { \"$sum\" : \"$_elements.TotalAmount\" } } }])\n\nUses $push :(\n\ntest.test.Aggregate([{ \"$group\" : { \"_id\" : \"$_id\", \"__agg0\" : { \"$sum\" : \"$TotalAmount\" } } }, { \"$project\" : { \"_id\" : \"$_id\", \"TotalAmount\" : \"$__agg0\" } }])\n\nNo $push here, hurray!\n",
"text": "After switching to LinqV3 some reports starts too fail because of Mongo exception: Command aggregate failed: Exceeded memory limit for $group, but didn’t allow external sort. Pass allowDiskUse:true to opt inSeems like the reason is the query generated with v3 which contains unnecessary $push within $group phase, below is sample code to reproduce the issue:Output when executed with LinqV2:Output when executed with LinqV3:",
"username": "Marek_Olszewski"
},
{
"code": "",
"text": "Thank you for reporting this issue and for the very complete information to reproduce it.I am able to reproduce this using your example code.I have created a JIRA ticket for this issue:https://jira.mongodb.org/browse/CSHARP-4468Please follow the JIRA ticket for further information.",
"username": "Robert_Stam"
},
{
"code": "AllowDiskUsage: true",
"text": "@Robert_Stam Is it possible to specify AllowDiskUsage: true in case of such slow performance queries for LINQ V2 or V3 as workaround? Or do I have to rewrite them from LINQ to .Aggregate()?",
"username": "Pavel_Levchuk"
}
] | LINQ V3 SelectMany + GroupBy results with redundant $push within $group | 2022-12-24T14:29:18.014Z | LINQ V3 SelectMany + GroupBy results with redundant $push within $group | 1,723 |
null | [
"react-native"
] | [
{
"code": "export const RegisterUser = async (email, password) => {\n const credentials = Realm.Credentials.emailPassword(email, password);\n try {\n const user = await myapp.emailPasswordAuth.registerUser({email, password});\n } catch (error ) {\n console.log(\"Error is \", error);\n }\n}\nconst saveHandler = async () => {\n\n try {\n const realm = await Realm.open(RealmConfiguration1);\n const ifEmailAlreadyExists = realm.objects(\"User\").filtered(`email == \"${username}\" `);\n if(ifEmailAlreadyExists.length > 0){\n staffAlertHandler();\n } else {\n const newUserId = generateRandomId().toString()\n const staffData = {\n _id: new ObjectId(),\n firstname: firstname,\n lastname: lastname,\n mobile_no: mobileno,\n role_id: roleId, \n email: username,\n user_id: ,\n is_active:\"active\",\n _partition: \"some partition\"\n }\n\n realm.write(() => {\n realm.create(\"User\", staffData)\n })\n\n RegisterUser(username, password)\n props.navigation.goBack();\n }\n } catch (err) {\n console.error(\"Failed to open the realm\", err);\n }\n }\n",
"text": "I building an app who has a admin, admin can registered it’s subordinates as realm app users but subordinates can not registered by themselves. After registering admin team can login in to the app and can take orders from their client. The problem I am facing is when admin is registering a user by this functionthe user is registered in the app users and I also create a trigger function on authentication which update the collection User. I want to relate the app user to the users in the User collection, how can be this acheived because nothing is returned Realm.Credentials.emailPassword so that we take id and save it the collection User.This is my whole function to register a userFirst of all I am creating a staff member in the User collection and then hitting RegisterUser function\nwhich is myapp.emailPasswordAuth.registerUser({email, password}) thisTrigger function only works when any staff authenticates first time then my trigger function takes the user.id\nand update the by matching the email and update user_id to object id string. I think you are understanding what I am writing here, it’s a multi users app.",
"username": "Zubair_Rajput"
},
{
"code": "",
"text": "Hello, did you find any solution? It’s exactly what I need to do right now",
"username": "Victor"
}
] | How admin of application registered subordinates in realm react native | 2022-06-26T06:02:50.916Z | How admin of application registered subordinates in realm react native | 1,940 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "Respected Authority,This is to inform that connection or use of my mongo atlas with this IP Address: 152.58.148.120 is showing error. Edtimeout error.\nIs my ip address blocked for some reasons? which is not giving me access to use my mongo Atlas? If so why and how to resolve it?Thanking you,\nGourav Chaki",
"username": "Gourav_Chaki"
},
{
"code": "",
"text": "Hi @Gourav_Chaki ,\nI think this link can be useful for your problem:Regards",
"username": "Fabio_Ramohitaj"
}
] | IP Address Blocked | 2023-02-10T20:54:33.577Z | IP Address Blocked | 986 |
null | [
"ops-manager"
] | [
{
"code": "",
"text": "Hey folks!Billy Lim, program lead for the Community Advocacy Program (née Champions) which you should totally apply for in our next intake (TBD)!One of our stellar members @Arkadiusz_Borucki has published the third and final piece of his series for the inaugural season of the MongoDB Developer Center’s Guest Author Program.You should most definitely take a read.Also please feel free to do a Arek a solid and support your fellow Community members here and in the future generally by sharing valuable knowledge on your social media. Let knowledge spread from more to more.Part One A: Deploying the MongoDB Enterprise Kubernetes Operator on Google Cloud\nPart One B: Mastering MongoDB Ops Manager on Kubernetes\nPart Two: Deploying MongoDB Across Multiple Kubernetes Clusters With MongoDBMulti\nPart Three: Mastering the Advanced Features of the Data API with Atlas CLIIf you’re interested in getting your own learnings + knowledge published, growing your written advocacy skills, and/or just supporting the community’s technical acumen — participating in the Guest Author’s Program is an excellent strategy.Please reach out to @Joel_Lord if you’re interested. You’re free to talk to me as well in case Joel’s unavailable - not quite as close to the programming as he is, but should be able to talk you through. :o)Hugs,Billy LimCAP Lead\nSenior Community Engagement Manager, Global Developer Community",
"username": "Billy"
},
{
"code": "",
"text": "If you want to submit a topic for the guest author program, fill out the following form, and I’ll get back to you as soon as possible.You've got an idea about an article about MongoDB? Great! We've got a platform that is waiting for your content.\n\nStart by filling in this form. A MongoDB coach will be assigned to you to work through the steps to get your article published on the...",
"username": "Joel_Lord"
}
] | Guest Author Series: MongoDB + Kubernetes Operator on GCloud, Ops Manager on Kubernetes, MDB on Multiple Kubernetes Clusters, and Mastering Data API Features w/ Atlas CLI! | 2023-02-05T18:12:17.989Z | Guest Author Series: MongoDB + Kubernetes Operator on GCloud, Ops Manager on Kubernetes, MDB on Multiple Kubernetes Clusters, and Mastering Data API Features w/ Atlas CLI! | 1,218 |
null | [
"java",
"containers",
"field-encryption"
] | [
{
"code": "",
"text": "Hi, have a java application connecting to Mongo Atlas 6.0. The application is docker containerized and deployed on Kubernetes (cloud platform). Would like to enable automatic client side field level encryption. How to make mongocryptd available to the application running on Kubernetes?I tried initContainer to copy mongocryptd to a specific mount point and refer it. But the application is going into CrashLoop with mongocryptd not available.",
"username": "Pradeep_CS1"
},
{
"code": "",
"text": "Hello Pradeep_CS1 and welcome!There is a nice example here using docker and kubernetes that you may help.Cynthia",
"username": "Cynthia_Braund"
}
] | Auto client side field level encryption by application running on Kubernetes | 2023-02-09T09:21:59.929Z | Auto client side field level encryption by application running on Kubernetes | 1,049 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Say, I’d like to do some type checking in functions. I was wondering how I could create a helper file that I can import on several different functions. Ideally without creating a package and uploading it to NPM.",
"username": "Alexandar_Dimcevski"
},
{
"code": "",
"text": "Hi @Alexandar_DimcevskiDo you mean to include external dependencies to an Atlas Function? If yes, please have a look at External Dependencies to see if this answers your question.If not, please provide more details and examples regarding your requirements.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hey Kevin!I’m trying to avoid using external packages. And just have a one local helper file with code that’s being used in multiple functions.",
"username": "Alexandar_Dimcevski"
}
] | Functions Middleware | 2023-02-02T19:48:29.580Z | Functions Middleware | 866 |
null | [
"atlas-cluster",
"spring-data-odm",
"time-series"
] | [
{
"code": "Error: Bulk write operation error on server ssssss.mongodb.net:27017. Write errors: [BulkWriteError{index=8990, code=11000, message='E11000 duplicate key error collection: usmobile_analytics_test.system.buckets.prr_data_records dup key: { _id: ObjectId('63d0b750abbdf51035943f6c') }', details={}}]. ; nested exception is com.mongodb.MongoBulkWriteException: Bulk write operation error on server cluster0-shard-00-01.iyvqq.mongodb.net:27017. Write errors: [BulkWriteError{index=8990, code=11000, message='E11000 duplicate key error collection: usmobile_analytics_test.system.buckets.prr_data_records dup key: { _id: ObjectId('63d0b750abbdf51035943f6c') }', details={}}].\nprivate void writeToDataRecordsCollection(List<DataRecord> dataRecords, String filename) {\n\n log.info(\"Writing Data Records to collection for file {}.\", filename);\n dataRecordsRepository.saveAll(dataRecords);\n }\n",
"text": "Hi All,I am using MongoDB 6 Time Series collections and using Spring to write data in to the collection. Doing a Bulk Write using the saveAll() method sometimes throws this error:From what I have read, time series collections does not create an index on the _id field and we cannot do updates. The why would this error happen?\nSample code below:",
"username": "Khusro_Siddiqui"
},
{
"code": "_idtimeseriestimeseriestime seriesmongosh",
"text": "Hi @Khusro_Siddiqui,Welcome to the MongoDB Community forums time-series collections do not create an index on the _id field and we cannot do updates. Then why would this error happen?Yes, in a time-series collection, the uniqueness of the _id field is not enforced, and you shouldn’t be seeing this duplicate key error.As a quick first check, timeseries collection was introduced in MongoDB 5.0 and requires a createCollection command to create one. Could you provide your MongoDB version and how did you create the timeseries collection?Also, could you please share the following:Furthermore, can you confirm the following:Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "db.createCollection(\n \"prr_data_records\",\n {\n timeseries: {\n timeField: \"recordStartDate\",\n metaField: \"mdn\",\n granularity: \"hours\"\n },\n expireAfterSeconds: 7776000\n }\n);\n{\n \"recordStartDate\" : ISODate(\"2022-12-08T05:00:00.000+0000\"),\n \"mdn\" : \"**********\",\n \"imsi\" : \"**********\",\n \"secondInEst\" : NumberInt(37),\n \"lastModifiedDate\" : ISODate(\"2023-02-03T03:01:48.059+0000\"),\n \"hourInEst\" : NumberInt(20),\n \"totalKbUnits\" : \"2375.7148437500\",\n \"endDateTime\" : ISODate(\"2022-12-09T03:42:41.000+0000\"),\n \"monthInEst\" : NumberInt(12),\n \"minuteInEst\" : NumberInt(1),\n \"idempotentKey\" : \"1516771233047_2022342\",\n \"yearInEst\" : NumberInt(2022),\n \"_id\" : ObjectId(\"63dc7926a8ba1b3326799703\"),\n \"dayInEst\" : NumberInt(8),\n \"countryCode\" : \"USA\"\n}\n",
"text": "Hello @Kushagra_Kesav. I am using MongoDB Version 6.0.4 hosted in Atlas.The collection is a time series collection and this is how I have created itI am seeing the error when I do bulk inserts around 50000-60000 records. But the error is very random and not consistent at all. Very difficult to reproduce it again.If you look at the error, it says error while inserting into the system.buckets.prr_data_records. Isn’t this the internal collection that mongo uses to store TS data?Sample record:",
"username": "Khusro_Siddiqui"
},
{
"code": "Error: Bulk write operation error on server sssssss.mongodb.net:27017. Write errors: [BulkWriteError{index=1528, code=11000, message='E11000 duplicate key error collection: usmobile_analytics_test.system.buckets.prr_data_records dup key: { _id: ObjectId('63d1c280aadc99d3ff3d8189') }', details={}}]. ; nested exception is com.mongodb.MongoBulkWriteException: Bulk write operation error on server ssssss.mongodb.net:27017. Write errors: [BulkWriteError{index=1528, code=11000, message='E11000 duplicate key error collection: usmobile_analytics_test.system.buckets.prr_data_records dup key: { _id: ObjectId('63d1c280aadc99d3ff3d8189') }', details={}}].\n",
"text": "Encountered the issue again todayDoes mongodb not support bulk write operations well enough when using Spring?",
"username": "Khusro_Siddiqui"
},
{
"code": ".saveAll(batch)bulkOps.insert(batch)mongoTemplate.insertAll()public class Prr_data_records {\n...\n\tprivate Date recordStartDate;\n ...\n\t private static final int BATCH_SIZE = 60000;\n\t\t\tList<Prr_data_records> data = new ArrayList<>();\n\t\t\tRandom random = new Random();\n\t\t\tfor (int i = 0; i < 100_0000; i++) {\n\t\t\t\tdata.add(new Prr_data_records(\"John\", \"Doe\", new Date(System.currentTimeMillis() + random.nextInt(1000 * 60 * 60 * 24))));\n\t\t\t}\n\n\t\t\tfor (int i = 0; i < data.size(); i += BATCH_SIZE) {\n\t\t\t\tint endIndex = Math.min(i + BATCH_SIZE, data.size());\n\t\t\t\tList<Prr_data_records> batch = data.subList(i, endIndex);\n\n\t\t\t\tPrr_data_recordsRepository.saveAll(batch);\n\t\t\t}\n ...\npublic class MongodbexampleApplication {\n\tprivate static final int BATCH_SIZE = 60000;\n\n\tpublic static void main(String[] args) {\n ...\n\t\t\tBulkOperations bulkOps = mongoTemplate.bulkOps(BulkMode.UNORDERED, Prr_data_records.class);\n\t\t\tfor (int i = 0; i < data.size(); i += BATCH_SIZE) {\n\t\t\t\tint endIndex = Math.min(i + BATCH_SIZE, data.size());\n\t\t\t\tList<Prr_data_records> batch = data.subList(i, endIndex);\n\n\t\t\t\tbulkOps.insert(batch);\n\t\t\t\tbulkOps.execute();\n\t\t\t}\n\t \t...\npublic class MongodbexampleApplication {\n\tprivate static final int BATCH_SIZE = 60000;\n\n\tpublic static void main(String[] args) {\n ...\n\t\t\tfor (int i = 0; i < data.size(); i += BATCH_SIZE) {\n\t\t\t\tint endIndex = Math.min(i + BATCH_SIZE, data.size());\n\t\t\t\tList<Prr_data_records> batch = data.subList(i, endIndex);\n\t\t\t\tmongoTemplate.insertAll(batch);\n\t\t\t}\n\t \t...\nfrom pymongo import MongoClient, InsertOne\nfrom datetime import datetime, timedelta\n\nclient = MongoClient(\"mongodb://localhost:27017/\")\ndb = client[\"test\"]\ncollection_name = \"prr_data_records\"\n\ndb.create_collection(name=collection_name, timeseries={\"timeField\": \"recordStartDate\", \"metaField\": \"mdn\", \"granularity\": \"hours\"})\n\ncollection = db[collection_name]\ntotal_docs = 1000000\nops = []\nd = datetime.now()\nfor i in range(total_docs):\n record_start_date = d + timedelta(seconds=i+1)\n ops.append(InsertOne({\n \"recordStartDate\": record_start_date,\n \"lastModifiedDate\" : d,\n ...\n }))\n\ncollection.bulk_write(ops)\nprint(\"Data Inserted Successfully\")\n",
"text": "Hi @Khusro_Siddiqui,Thanks for sharing the requested details.I tried with .saveAll(batch), bulkOps.insert(batch) and mongoTemplate.insertAll(). After running it multiple times, unfortunately, I cannot reproduce what you’re seeing.Sharing all three code snippets for your reference.Also, we tried with a different driver (Pymongo) to bulk insert in the time series collection and it worked for us without throwing any duplicate errors.Can you please execute this code and see if it still throws the error?Very difficult to reproduce it againCan you confirm if you notice the pattern - might be related to the load on the database at certain times, like during the day or week? Have you checked your server logs? That might give us some hints as to the exact cause.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "@Kushagra_Kesav running all the operations only once does not encounter the issue. We process incoming files continuously. Lets say over the span of an hour, if we are processing 100 files, with each file containing around 200,000 records, we see the issue happen for 2-3 files.I am pretty sure you will see the error if you run the above code snippets in a loop continuously for 1-2 hours",
"username": "Khusro_Siddiqui"
}
] | MongoDB Timeseries BulkWriteError code=11000 | 2023-02-02T21:32:11.563Z | MongoDB Timeseries BulkWriteError code=11000 | 1,753 |
null | [] | [
{
"code": "m getting this error when Iimport clientPromise from \"@/lib/mongodb\";\nimport { ObjectId } from \"mongodb\";\n\nexport async function findIsomoById(itemId: string) {\n try {\n // Connect to the MongoDB database\n const client = await clientPromise;\n const db = client.db(\"amasomo_ya_misa\");\n const collection = db.collection(\"amasomo\");\n\n \n\n console.log(new ObjectId(itemId));\n\n // Find the item in the collection by its _id\n const isomo = await collection.findOne({ _id: new ObjectId(itemId) });\n\n return isomo;\n } catch (error) {\n console.error(error);\n throw new Error(\"An error occurred while finding the item.\");\n } finally {\n //ServerClosedEvent.apply;\n }\n}\n\n[= ] info - Generating static pages (3/4)BSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer\n at new BSONTypeError (C:\\Users\\InezaGuy\\reactpr\\amasomo_yamisa\\node_modules\\bson\\lib\\error.js:41:28)\n at new ObjectId (C:\\Users\\InezaGuy\\reactpr\\amasomo_yamisa\\node_modules\\bson\\lib\\objectid.js:67:23)\n at findIsomoById (C:\\Users\\InezaGuy\\reactpr\\amasomo_yamisa\\.next\\server\\app\\isomo\\[id]\\page.js:553:21)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async AmasomoReview (C:\\Users\\InezaGuy\\reactpr\\amasomo_yamisa\\.next\\server\\app\\isomo\\[id]\\page.js:573:19)\nError: An error occurred while finding the item.\n at findIsomoById (C:\\Users\\InezaGuy\\reactpr\\amasomo_yamisa\\.next\\server\\app\\isomo\\[id]\\page.js:561:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async AmasomoReview (C:\\Users\\InezaGuy\\reactpr\\amasomo_yamisa\\.next\\server\\app\\isomo\\[id]\\page.js:573:19)\n\nError occurred prerendering page \"/isomo/[id]\". Read more: https://nextjs.org/docs/messages/prerender-error\nError: An error occurred while finding the item.\n at findIsomoById (C:\\Users\\InezaGuy\\reactpr\\amasomo_yamisa\\.next\\server\\app\\isomo\\[id]\\page.js:561:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async AmasomoReview (C:\\Users\\InezaGuy\\reactpr\\amasomo_yamisa\\.next\\server\\app\\isomo\\[id]\\page.js:573:19)\ninfo - Generating static pages (4/4)\n",
"text": "Im getting this error when Im trying to build my app but when I run it on my pc everything is fine. what is the issue?",
"username": "Ineza_Guy"
},
{
"code": "itemIdObjectId()itemId.find()mongoshDB> ObjectId(\"63e58fb98a67c15905ee306c\") /// <--- Valid ObjectId value\nObjectId(\"63e58fb98a67c15905ee306c\")\n\nDB> ObjectId(\"63e58fb98a67c15905ee306zzzzzzzzz\") /// <--- Invalid ObjectId value\nBSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer\n",
"text": "Hi @Ineza_Guy - Welcome to the community.Generating static pages (3/4)BSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integerWhat’s the value of the itemId variable being passed into ObjectId()? The error indicates that an invalid value is being passed through. Try logging the value of itemId prior to running the .find() for troubleshooting purposes. More information regarding ObjectId that may be of use.mongosh example returning a similar error when using an invalid ObjectId value:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "so why is it working on npm run dev but on npm run build I get the errors?",
"username": "Ineza_Guy"
}
] | Generating static pages (3/4)BSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer | 2023-02-08T17:54:51.760Z | Generating static pages (3/4)BSONTypeError: Argument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer | 1,551 |
null | [
"replication",
"python",
"sharding"
] | [
{
"code": "{\"t\":{\"$date\":\"2023-01-31T08:46:18.246+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4712102, \"ctx\":\"OplogApplier-0\",\"msg\":\"Host failed in replica set\",\"attr\":{\"replicaSet\":\"configReplSet\",\"host\":\"primary_config:27019\",\"error\":{\"code\":202,\"codeName\":\"NetworkInterfaceExceededTimeLimit\",\"errmsg\":\"Couldn't get a connection within the time limit of 104ms\"},\"action\":{\"dropConnections\":false,\"requestImmediateCheck\":false,\"outcome\":{\"host\":\"primary_config:27019\",\"success\":false,\"errorMessage\":\"NetworkInterfaceExceededTimeLimit: Couldn't get a connection within the time limit of 104ms\"}}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.247+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22739, \"ctx\":\"OplogApplier-0\",\"msg\":\"Operation timed out\",\"attr\":{\"error\":\"NetworkInterfaceExceededTimeLimit: Couldn't get a connection within the time limit of 104ms\"}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.247+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22079, \"ctx\":\"OplogApplier-0\",\"msg\":\"Couldn't create config.changelog collection\",\"attr\":{\"error\":{\"code\":202,\"codeName\":\"NetworkInterfaceExceededTimeLimit\",\"errmsg\":\"Couldn't get a connection within the time limit of 104ms\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.247+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23093, \"ctx\":\"OplogApplier-0\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":40107,\"error\":\"NetworkInterfaceExceededTimeLimit: Couldn't get a connection within the time limit of 104ms\",\"file\":\"src/mongo/db/repl/replication_coordinator_external_state_impl.cpp\",\"line\":883}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.247+00:00\"},\"s\":\"F\", \"c\":\"-\", \"id\":23094, \"ctx\":\"OplogApplier-0\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.248+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"OplogApplier-0\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"Got signal: 6 (Aborted).\\n\"}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.441+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31431, \"ctx\":\"OplogApplier-0\",\"msg\":\"BACKTRACE: 
{bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"854096E78A\",\"b\":\"853DB75000\",\"o\":\"2DF978A\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.606\",\"s+\":\"1EA\"},{\"a\":\"8540970219\",\"b\":\"853DB75000\",\"o\":\"2DFB219\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"},{\"a\":\"854096D5A6\",\"b\":\"853DB75000\",\"o\":\"2DF85A6\",\"s\":\"_ZN5mongo12_GLOBAL__N_116abruptQuitActionEiP7siginfoPv\",\"s+\":\"66\"},{\"a\":\"7FCB960197E0\",\"b\":\"7FCB9600A000\",\"o\":\"F7E0\",\"s\":\"_L_unlock_16\",\"s+\":\"2D\"},{\"a\":\"7FCB95CA84F5\",\"b\":\"7FCB95C76000\",\"o\":\"324F5\",\"s\":\"gsignal\",\"s+\":\"35\"},{\"a\":\"7FCB95CA9CD5\",\"b\":\"7FCB95C76000\",\"o\":\"33CD5\",\"s\":\"abort\",\"s+\":\"175\"},{\"a\":\"853EAC1E45\",\"b\":\"853DB75000\",\"o\":\"F4CE45\",\"s\":\"_ZN5mongo35fassertFailedWithStatusWithLocationEiRKNS_6StatusEPKcj\",\"s+\":\"178\"},{\"a\":\"853E7D1817\",\"b\":\"853DB75000\",\"o\":\"C5C817\",\"s\":\"_ZN5mongo4repl39ReplicationCoordinatorExternalStateImpl34_shardingOnTransitionToPrimaryHookEPNS_16OperationContextE.cold.1173\",\"s+\":\"4B\"},{\"a\":\"853EE63CE7\",\"b\":\"853DB75000\",\"o\":\"12EECE7\",\"s\":\"_ZN5mongo4repl39ReplicationCoordinatorExternalStateImpl21onTransitionToPrimaryEPNS_16OperationContextE\",\"s+\":\"2F7\"},{\"a\":\"853EEA5736\",\"b\":\"853DB75000\",\"o\":\"1330736\",\"s\":\"_ZN5mongo4repl26ReplicationCoordinatorImpl19signalDrainCompleteEPNS_16OperationContextEx\",\"s+\":\"556\"},{\"a\":\"853EF36F2E\",\"b\":\"853DB75000\",\"o\":\"13C1F2E\",\"s\":\"_ZN5mongo4repl16OplogApplierImpl4_runEPNS0_11OplogBufferE\",\"s+\":\"8DE\"},{\"a\":\"853EF8D428\",\"b\":\"853DB75000\",\"o\":\"1418428\",\"s\":\"_ZZN5mongo15unique_functionIFvRKNS_8executor12TaskExecutor12CallbackArgsEEE8makeImplIZNS_4repl12OplogApplier7startupEvEUlS5_E_EEDaOT_EN12SpecificImpl4callES5_\",\"s+\":\"F8\"},{\"a\":\"85402E2E73\",\"b\":\"853DB75000\",\"o\":\"276DE73\",\"s\":\"_ZN5mongo8executor22ThreadPoolTaskExecutor11runCallbackESt10shared_ptrINS1_13CallbackStateEE\",\"s+\":\"113\"},{\"a\":\"85402E3282\",\"b\":\"853DB75000\",\"o\":\"276E282\",\"s\":\"_ZZN5mongo15unique_functionIFvNS_6StatusEEE8makeImplIZNS_8executor22ThreadPoolTaskExecutor23scheduleIntoPool_inlockEPNSt7__cxx114listISt10shared_ptrINS6_13CallbackStateEESaISB_EEERKSt14_List_iteratorISB_ESI_St11unique_lockINS_12latch_detail5LatchEEEUlT_E1_EEDaOSN_EN12SpecificImpl4callEOS1_\",\"s+\":\"A2\"},{\"a\":\"854048BFF2\",\"b\":\"853DB75000\",\"o\":\"2916FF2\",\"s\":\"_ZN5mongo10ThreadPool10_doOneTaskEPSt11unique_lockINS_12latch_detail5LatchEE\",\"s+\":\"132\"},{\"a\":\"854048E636\",\"b\":\"853DB75000\",\"o\":\"2919636\",\"s\":\"_ZN5mongo10ThreadPool13_consumeTasksEv\",\"s+\":\"86\"},{\"a\":\"854048F3E1\",\"b\":\"853DB75000\",\"o\":\"291A3E1\",\"s\":\"_ZN5mongo10ThreadPool17_workerThreadBodyEPS0_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE\",\"s+\":\"E1\"},{\"a\":\"854048F710\",\"b\":\"853DB75000\",\"o\":\"291A710\",\"s\":\"_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZN5mongo4stdx6threadC4IZNS3_10ThreadPool25_startWorkerThread_inlockEvEUlvE2_JELi0EEET_DpOT0_EUlvE_EEEEE6_M_runEv\",\"s+\":\"60\"},{\"a\":\"8540B1907F\",\"b\":\"853DB75000\",\"o\":\"2FA407F\",\"s\":\"execute_native_thread_routine\",\"s+\":\"F\"},{\"a\":\"7FCB96011AA1\",\"b\":\"7FCB9600A000\",\"o\":\"7AA1\",\"s\":\"start_thread\",\"s+\":\"D1\"},{\"a\":\"7FCB95D5EC4D\",\"b\":\"7FCB95C76000\",\"o\":\"E8C4D\",\"s\":\"clone\",\"s+\":\"6D\"}],\"processInfo\":{\"mongodbVersion\":\"4.4.13\",\"gitVers
ion\":\"df25c71b8674a78e17468f48bcda5285decb9246\",\"compiledModules\":[],\"uname\":{\"sysname\":\"Linux\",\"release\":\"4.1.12-124.48.6.el6uek.x86_64\",\"version\":\"#2 SMP Tue Mar 16 15:39:03 PDT 2021\",\"machine\":\"x86_64\"},\"somap\":[{\"b\":\"853DB75000\",\"elfType\":3,\"buildId\":\"781A3955310D52A5503CEA4EAC13DEB84CCF5E2C\"}]}}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"854096E78A\",\"b\":\"853DB75000\",\"o\":\"2DF978A\",\"s\":\"_ZN5mongo18stack_trace_detail12_GLOBAL__N_119printStackTraceImplERKNS1_7OptionsEPNS_14StackTraceSinkE.constprop.606\",\"s+\":\"1EA\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"8540970219\",\"b\":\"853DB75000\",\"o\":\"2DFB219\",\"s\":\"_ZN5mongo15printStackTraceEv\",\"s+\":\"29\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"854096D5A6\",\"b\":\"853DB75000\",\"o\":\"2DF85A6\",\"s\":\"_ZN5mongo12_GLOBAL__N_116abruptQuitActionEiP7siginfoPv\",\"s+\":\"66\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FCB960197E0\",\"b\":\"7FCB9600A000\",\"o\":\"F7E0\",\"s\":\"_L_unlock_16\",\"s+\":\"2D\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FCB95CA84F5\",\"b\":\"7FCB95C76000\",\"o\":\"324F5\",\"s\":\"gsignal\",\"s+\":\"35\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FCB95CA9CD5\",\"b\":\"7FCB95C76000\",\"o\":\"33CD5\",\"s\":\"abort\",\"s+\":\"175\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"853EAC1E45\",\"b\":\"853DB75000\",\"o\":\"F4CE45\",\"s\":\"_ZN5mongo35fassertFailedWithStatusWithLocationEiRKNS_6StatusEPKcj\",\"s+\":\"178\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"853E7D1817\",\"b\":\"853DB75000\",\"o\":\"C5C817\",\"s\":\"_ZN5mongo4repl39ReplicationCoordinatorExternalStateImpl34_shardingOnTransitionToPrimaryHookEPNS_16OperationContextE.cold.1173\",\"s+\":\"4B\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"853EE63CE7\",\"b\":\"853DB75000\",\"o\":\"12EECE7\",\"s\":\"_ZN5mongo4repl39ReplicationCoordinatorExternalStateImpl21onTransitionToPrimaryEPNS_16OperationContextE\",\"s+\":\"2F7\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"853EEA5736\",\"b\":\"853DB75000\",\"o\":\"1330736\",\"s\":\"_ZN5mongo4repl26ReplicationCoordinatorImpl19signalDrainCompleteEPNS_16OperationContextEx\",\"s+\":\"556\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"853EF36F2E\",\"b\":\"853DB75000\",\"o\":\"13C1F2E\",\"s\":\"_ZN5mongo4repl16OplogApplierImpl4_runEPNS0_11OplogBufferE\",\"s+\":\"8DE\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"853EF8D428\",\"b\":\"853DB75000\",\"o\":\"1418428\",\"s\":\"_ZZN5mongo15unique_functionIFvRKNS_8executor12TaskExecutor12CallbackArgsEEE8makeImplIZNS_4repl12OplogApplier7startupEvEUlS5_E_EEDaOT_EN12SpecificImpl4callES5_\",\"s+\":\"F8\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"85402E2E73\",\"b\":\"853DB75000\",\"o\":\"276DE73\",\"s\":\"_ZN5mongo8executor22ThreadPoolTaskExecutor11runCallbackESt10shared_ptrINS1_13CallbackStateEE\",\"s+\":\"113\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"85402E3282\",\"b\":\"853DB75000\",\"o\":\"276E282\",\"s\":\"_ZZN5mongo15unique_functionIFvNS_6StatusEEE8makeImplIZNS_8executor22ThreadPoolTaskExecutor23scheduleIntoPool_inlockEPNSt7__cxx114listISt10shared_ptrINS6_13CallbackStateEESaISB_EEERKSt14_List_iteratorISB_ESI_St11unique_lockINS_12latch_detail5LatchEEEUlT_E1_EEDaOSN_EN12SpecificImpl4callEOS1_\",\"s+\":\"A2\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"854048BFF2\",\"b\":\"853DB75000\",\"o\":\"2916FF2\",\"s\":\"_ZN5mongo10ThreadPool10_doOneTaskEPSt11unique_lockINS_12latch_detail5LatchEE\",\"s+\":\"132\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"854048E636\",\"b\":\"853DB75000\",\"o\":\"2919636\",\"s\":\"_ZN5mongo10ThreadPool13_consumeTasksEv\",\"s+\":\"86\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"854048F3E1\",\"b\":\"853DB75000\",\"o\":\"291A3E1\",\"s\":\"_ZN5mongo10ThreadPool17_workerThreadBodyEPS0_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE\",\"s+\":\"E1\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"854048F710\",\"b\":\"853DB75000\",\"o\":\"291A710\",\"s\":\"_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZN5mongo4stdx6threadC4IZNS3_10ThreadPool25_startWorkerThread_inlockEvEUlvE2_JELi0EEET_DpOT0_EUlvE_EEEEE6_M_runEv\",\"s+\":\"60\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"8540B1907F\",\"b\":\"853DB75000\",\"o\":\"2FA407F\",\"s\":\"execute_native_thread_routine\",\"s+\":\"F\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FCB96011AA1\",\"b\":\"7FCB9600A000\",\"o\":\"7AA1\",\"s\":\"start_thread\",\"s+\":\"D1\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:18.442+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31427, \"ctx\":\"OplogApplier-0\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FCB95D5EC4D\",\"b\":\"7FCB95C76000\",\"o\":\"E8C4D\",\"s\":\"clone\",\"s+\":\"6D\"}}}\n\n{\"t\":{\"$date\":\"2023-01-31T09:18:11.591+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"main\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:07.101+00:00\"},\"s\":\"I\", \"c\":\"ELECTION\", \"id\":21450, \"ctx\":\"ReplCoord-5820\",\"msg\":\"Election succeeded, assuming primary role\",\"attr\":{\"term\":56}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:07.102+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21358, \"ctx\":\"ReplCoord-5820\",\"msg\":\"Replica set state transition\",\"attr\":{\"newState\":\"PRIMARY\",\"oldState\":\"SECONDARY\"}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:07.105+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21106, \"ctx\":\"ReplCoord-5820\",\"msg\":\"Resetting sync source to empty\",\"attr\":{\"previousSyncSource\":\":27017\"}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:07.106+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21359, \"ctx\":\"ReplCoord-5820\",\"msg\":\"Entering primary catch-up mode\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:07.960+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"primary_shard:27018\"}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.831+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21364, \"ctx\":\"ReplCoord-5823\",\"msg\":\"Caught up to the latest optime known via heartbeats after becoming primary\",\"attr\":{\"targetOpTime\":{\"ts\":{\"$timestamp\":{\"t\":1675154745,\"i\":6}},\"t\":55},\"myLastApplied\":{\"ts\":{\"$timestamp\":{\"t\":1675154745,\"i\":6}},\"t\":55}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.831+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21363, \"ctx\":\"ReplCoord-5823\",\"msg\":\"Exited primary catch-up mode\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.831+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21107, \"ctx\":\"ReplCoord-5823\",\"msg\":\"Stopping replication producer\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.831+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21239, \"ctx\":\"ReplBatcher\",\"msg\":\"Oplog buffer has been drained\",\"attr\":{\"term\":56}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.832+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21343, \"ctx\":\"RstlKillOpThread\",\"msg\":\"Starting to kill user operations\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.832+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21344, \"ctx\":\"RstlKillOpThread\",\"msg\":\"Stopped killing user operations\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.832+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21340, \"ctx\":\"RstlKillOpThread\",\"msg\":\"State transition ops metrics\",\"attr\":{\"metrics\":{\"lastStateTransition\":\"stepUp\",\"userOpsKilled\":0,\"userOpsRunning\":30}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.832+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4508103, \"ctx\":\"OplogApplier-0\",\"msg\":\"Increment 
the config term via reconfig\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.832+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015313, \"ctx\":\"OplogApplier-0\",\"msg\":\"Replication config state is Steady, starting reconfig\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.832+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"OplogApplier-0\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigReconfiguring\",\"oldState\":\"ConfigSteady\"}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.832+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21353, \"ctx\":\"OplogApplier-0\",\"msg\":\"replSetReconfig config object parses ok\",\"attr\":{\"numMembers\":3}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.832+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":51814, \"ctx\":\"OplogApplier-0\",\"msg\":\"Persisting new config to disk\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.833+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015315, \"ctx\":\"OplogApplier-0\",\"msg\":\"Persisted new config to disk\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.833+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"OplogApplier-0\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigSteady\",\"oldState\":\"ConfigReconfiguring\"}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.834+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21392, \"ctx\":\"OplogApplier-0\",\"msg\":\"New replica set config in use\",\"attr\":{\"config\":{\"_id\":\"configReplSet\",\"version\":119991,\"term\":56,\"configsvr\":true,\"protocolVersion\":1,\"writeConcernMajorityJournalDefault\":true,\"members\":[{\"_id\":0,\"host\":\"primary_config:27019\",\"arbiterOnly\":false,\"buildIndexes\":true,\"hidden\":false,\"priority\":1.0,\"tags\":{},\"slaveDelay\":0,\"votes\":1},{\"_id\":1,\"host\":\"secondary_config::27019\",\"arbiterOnly\":false,\"buildIndexes\":true,\"hidden\":false,\"priority\":0.5,\"tags\":{},\"slaveDelay\":0,\"votes\":1},{\"_id\":3,\"host\":\"hidden_config27029\",\"arbiterOnly\":false,\"buildIndexes\":true,\"hidden\":false,\"priority\":0.0,\"tags\":{},\"slaveDelay\":0,\"votes\":1}],\"settings\":{\"chainingAllowed\":true,\"heartbeatIntervalMillis\":2000,\"heartbeatTimeoutSecs\":10,\"electionTimeoutMillis\":10000,\"catchUpTimeoutMillis\":-1,\"catchUpTakeoverDelayMillis\":30000,\"getLastErrorModes\":{},\"getLastErrorDefaults\":{\"w\":1,\"wtimeout\":0},\"replicaSetId\":{\"$oid\":\"5773d2d1374047c92751c502\"}}}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.834+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21393, \"ctx\":\"OplogApplier-0\",\"msg\":\"Found self in config\",\"attr\":{\"hostAndPort\":\"primary_config:27019\"}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.835+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015310, \"ctx\":\"OplogApplier-0\",\"msg\":\"Starting to transition to primary.\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.838+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015309, \"ctx\":\"OplogApplier-0\",\"msg\":\"Logging transition to primary to oplog on stepup\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.856+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":21856, \"ctx\":\"Balancer\",\"msg\":\"CSRS balancer is starting\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.857+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22049, \"ctx\":\"PeriodicShardedIndexConsistencyChecker\",\"msg\":\"Checking consistency of sharded collection indexes across the cluster\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.858+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20657, 
\"ctx\":\"OplogApplier-0\",\"msg\":\"IndexBuildsCoordinator::onStepUp - this node is stepping up to primary\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.858+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21331, \"ctx\":\"OplogApplier-0\",\"msg\":\"Transition to primary complete; database writes are now permitted\"}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.866+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.178.96.17:43944\",\"connectionId\":1918262,\"connectionCount\":84}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.866+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1918262\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.178.96.17:43944\",\"client\":\"conn1918262\",\"doc\":{\"driver\":{\"name\":\"NetworkInterfaceTL\",\"version\":\"4.4.13\"},\"os\":{\"type\":\"Linux\",\"name\":\"Oracle Linux Server release 6.9\",\"architecture\":\"x86_64\",\"version\":\"Kernel 4.1.12-124.48.6.el6uek.x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.868+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn1918262\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"__system\",\"authenticationDatabase\":\"local\",\"remote\":\"10.178.96.17:43944\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.868+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.80.10.113:64024\",\"connectionId\":1918263,\"connectionCount\":85}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.869+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1918263\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.80.10.113:64024\",\"client\":\"conn1918263\",\"doc\":{\"driver\":{\"name\":\"mongo-go-driver\",\"version\":\"v1.11.1\"},\"os\":{\"type\":\"linux\",\"architecture\":\"amd64\"},\"platform\":\"go1.19\",\"application\":{\"name\":\"pbm-agent\"}}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.869+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"10.80.10.113:64026\",\"connectionId\":1918264,\"connectionCount\":86}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.871+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1918264\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"10.80.10.113:64026\",\"client\":\"conn1918264\",\"doc\":{\"driver\":{\"name\":\"mongo-go-driver\",\"version\":\"v1.11.1\"},\"os\":{\"type\":\"linux\",\"architecture\":\"amd64\"},\"platform\":\"go1.19\",\"application\":{\"name\":\"pbm-agent\"}}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.872+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn1918263\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"pbmuser\",\"authenticationDatabase\":\"admin\",\"remote\":\"10.80.10.113:64024\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:09.874+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn1918264\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":true,\"principalName\":\"pbmuser\",\"authenticationDatabase\":\"admin\",\"remote\":\"10.80.10.113:64026\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:10.115+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection 
accepted\",\"attr\":{\"remote\":\"127.0.0.1:57370\",\"connectionId\":1918265,\"connectionCount\":87}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:10.116+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1918265\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:57370\",\"client\":\"conn1918265\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Red Hat Enterprise Linux Server 6.9 Santiago\",\"architecture\":\"x86_64\",\"version\":\"4.1.12-124.48.6.el6uek.x86_64\"},\"platform\":\"CPython 2.7.16.final.0\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:10.148+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:57371\",\"connectionId\":1918266,\"connectionCount\":88}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:10.203+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1918266\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:57371\",\"client\":\"conn1918266\",\"doc\":{\"driver\":{\"name\":\"PyMongo\",\"version\":\"3.8.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"Red Hat Enterprise Linux Server 6.9 Santiago\",\"architecture\":\"x86_64\",\"version\":\"4.1.12-124.48.6.el6uek.x86_64\"},\"platform\":\"CPython 2.7.16.final.0\"}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:10.435+00:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn1918266\",\"msg\":\"Authentication succeeded\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"speculative\":false,\"principalName\":\"datadog\",\"authenticationDatabase\":\"admin\",\"remote\":\"127.0.0.1:57371\",\"extraInfo\":{}}}\n{\"t\":{\"$date\":\"2023-01-31T08:46:10.827+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:57372\",\"connectionId\":1918267,\"connectionCount\":89}}\n@\n",
"text": "Hi,We are running 3 shards with 3 replica sets each (primary,secondary and hidden secondary) and 1 config replica set (primary secondary and hidden secondary) + 2 routers on a total of 6 nodes.running community version 4.4.13 (latest is 4.4.18)Yesterday 1 node went down which host primary shard2 replica set and primary config replica set.\nThe node which hosts secondary shard2 replica set got aborted (fatal assertion) after being elected to PRIMARY.\nCould the abort related to primary config being down and the secondary config was not elected to primary on time or not known yet by secondary shard2 replica (elected to primary) ?secondary shard2 log (replaced our hostname):secondary config log",
"username": "Kin_Wai_Cheung"
},
{
"code": "mongod",
"text": "Hello @Kin_Wai_Cheung ,Welcome back to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet - were you able to find a reason for the error?\nIf not, can you please share more details for me to understand your situation better?Yesterday 1 node went down which host primary shard2 replica set and primary config replica set.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "That’s correct.output will be shared once I reviewed it",
"username": "Kin_Wai_Cheung"
},
{
"code": "",
"text": "mongod processes on the same hosts use a different port. (there is no hardware constraints atm)My followup questions:Should a mongodb cluster still work if both primary config replica set and any of the primary data bearing shard is unavailable at the same time?I believe the fatal assertion happened on the secondary data bearing shard which is already elected to be primary is unable to connect the primary config replica set ?",
"username": "Kin_Wai_Cheung"
},
{
"code": "",
"text": "cluster_status_redacted.txt (23.7 KB)",
"username": "Kin_Wai_Cheung"
},
{
"code": "",
"text": "Hello @Tarun_Gaur do you need more info?",
"username": "Kin_Wai_Cheung"
}
] | Primary shard replica set & primary config replica set went down, secondary shard aborted afterwards | 2023-02-01T10:38:05.086Z | Primary shard replica set & primary config replica set went down, secondary shard aborted afterwards | 1,166 |
null | [
"graphql"
] | [
{
"code": "",
"text": "A specific object/json/graphql combo in our setup has worked on production for more than a year. I have added some more properties to this object in Mongodb json collection and updated the schema, added data, which i can see in the collection, yet graphql queries only show the new property (which is an array) as null. I have tried the application graphql (nextjs), Postman and the mongodb website tools to query app services, but the new property is always null in the returned json. Am i missing something?",
"username": "James_Houston"
},
{
"code": "query {\n\tnav {\n\t\t_id\n\t\tkey\n\t\tpageId\n\t\tchildren {\n\t\t\talternateurl\n\t\t\tcontentType\n\t\t\tkey\n\t\t\tnaame\n\t\t\tpageId\n\t\t\tpageLevel\n\t\t\tshowonnav\n\t\t\tshowonsitemap\n\t\t\ttitle\n\t\t\turl\n\t\t\tchildren .... etc\n\t\t}\n\t}\n} \nquery {\n\tnav {\n\t\t_id\n\t\tkey\n\t\tpageId\n\t\tchildren {\n\t\t\talternateurl\n\t\t\tcontentType\n\t\t\tkey\n\t\t\tnaame\n\t\t\tpageId\n\t\t\tpageLevel\n\t\t\tshowonnav\n\t\t\tshowonsitemap\n\t\t\ttitle\n\t\t\turl\n\t\t\tinfoItem {\n\t\t\t\titemImageId\n\t\t\t\titemLinkObjectId\n\t\t\t\titemLinkText\n\t\t\t\titemLinkType\n\t\t\t\titemLinkUrl\n\t\t\t\titemText\n\t\t\t\titemTitle\n\t\t\t}\n\t\t\tchildren .... etc\n\t\t}\n\t}\n}\n",
"text": "Here is the original query:Here is the updated query:the infoItem section, which is the new section, is only at one level, and does not appear at the lower nested children",
"username": "James_Houston"
},
{
"code": "",
"text": "Here you can see there there are values for the new properties in the collection. The schema has been updated to suit the new props (by the “generate” tool)",
"username": "James_Houston"
},
{
"code": "",
"text": "Here is a screengrab of the data returned from Postman - with the new property “infoItem” present, but with a null value.",
"username": "James_Houston"
},
{
"code": "",
"text": "Can anyone help with this?",
"username": "James_Houston"
}
] | Additional mongo json properties show as null in graphql query, despite updated schema | 2023-02-08T10:22:48.870Z | Additional mongo json properties show as null in graphql query, despite updated schema | 1,166 |
null | [
"queries",
"data-modeling",
"indexes",
"atlas-search",
"text-search"
] | [
{
"code": " products: [{},{}],\n availableCategories: [\"Phones\", \"Tools\"], \n total_records_found: 1000\ndb.products.aggregate([\n{$match: { $text: { $search: \"Iphone 11 screen protector\" } } },\n{$sort: { score: { $meta: \"textScore\" } }},\n{$skip: 0 },\n{$limit: 10}\n])\n",
"text": "I want to have an efficient query witch will give me all products and it’s categories from 500k docs collection.It should be a search by product title query which return something like thisFor now I have this query and thinking how to extend itI have also text index for product_title and 1 (ask) index for category_nameI am also thinking how to select a particular collection like “Phones” and do search only there if user provides itWould be grateful for any help!",
"username": "Denys_Medvediev"
},
{
"code": "",
"text": "Hi @Denys_Medvediev, could you please share your sample document and index fields?By given information, I can suggest you to please use and specify the index name and path too while searching for data in this way y can get the results much faster.",
"username": "Utsav_Upadhyay2"
}
] | How to get all categories for a product in MongoDB | 2023-02-09T14:25:06.385Z | How to get all categories for a product in MongoDB | 1,363 |
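One pattern that fits the shape of response described in the thread above (a page of products, the distinct categories present in the results, and a total count, all in a single round trip) is adding a $facet stage after the $text match. The sketch below is an illustration only: the field names product_title/category_name are taken from the thread's description, while the connection string, database name and pagination values are assumptions.

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["shop"]  # hypothetical connection/db

page, page_size = 0, 10
search_term = "Iphone 11 screen protector"
selected_category = None  # e.g. "Phones" when the user narrows the search

match = {"$text": {"$search": search_term}}
if selected_category:
    match["category_name"] = selected_category  # field name assumed from the thread

pipeline = [
    {"$match": match},
    {"$sort": {"score": {"$meta": "textScore"}}},
    {"$facet": {
        # one page of products
        "products": [{"$skip": page * page_size}, {"$limit": page_size}],
        # distinct categories present in the matched set
        "availableCategories": [
            {"$group": {"_id": None, "categories": {"$addToSet": "$category_name"}}}
        ],
        # total number of matching documents
        "total_records_found": [{"$count": "count"}],
    }},
]

result = next(db.products.aggregate(pipeline))  # $facet returns exactly one document
```

Each $facet sub-pipeline runs over the same matched (and sorted) input, so the counts and categories always correspond to the current search term.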
null | [] | [
{
"code": " 1: from /opt/vagrant/embedded/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:83:in `require'\n/opt/vagrant/embedded/lib/ruby/2.7.0/rubygems/core_ext/kernel_require.rb:83:in `require': dlopen(/opt/vagrant/embedded/gems/2.3.4/gems/google-protobuf-3.21.11-x86_64-darwin/lib/google/2.7/protobuf_c.bundle, 9): no suitable image found. Did find: (LoadError)\n /opt/vagrant/embedded/gems/2.3.4/gems/google-protobuf-3.21.11-x86_64-darwin/lib/google/2.7/protobuf_c.bundle: cannot load 'protobuf_c.bundle' (load command 0x80000034 is unknown)\n /opt/vagrant/embedded/gems/2.3.4/gems/google-protobuf-3.21.11-x86_64-darwin/lib/google/2.7/protobuf_c.bundle: cannot load 'protobuf_c.bundle' (load command 0x80000034 is unknown) - /opt/vagrant/embedded/gems/2.3.4/gems/google-protobuf-3.21.11-x86_64-darwin/lib/google/2.7/protobuf_c.bundle\n 13: from /opt/vagrant/embedded/gems/2.3.4/gems/vagrant-2.3.4/bin/vagrant:111:in `<main>'\n",
"text": "Cannot get M312 going without resolving these errors. I tried searching the web but did not find a solution.macOS High Sierra V10.13.605.01 $ cd m312-vagrant-env < directory did not exist >\n05.02 $ vagrant plugin install vagrant-vbguest < failed >\n05.03 $ vagrant up < did not run >LOG SNIPPET:Thanks and kind regards,\nBill",
"username": "william_roberts"
},
{
"code": "",
"text": "Hi @william_roberts,Welcome to the MongoDB Community forums We are assuming that you are doing this setup locally. You can skip installing vagrant on your local machine as it is not required for the course completion. All the labs can be attempted from the Instruqt lab without installing anything on your local system.Let us know if you have any questions related to running the lab instructions on instruqt.Thanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | M312 cannot load 'protobuf_c.bundle' | 2023-02-08T21:31:39.041Z | M312 cannot load ‘protobuf_c.bundle’ | 1,176 |
null | [] | [
{
"code": "{\"detail\":\"The policy id 000000000000000000000001 is invalid.\",\"error\":400,\"errorCode\":\"INVALID_POLICY_ID\",\"parameters\":[\"000000000000000000000001\"],\"reason\":\"Bad Request\"}%\ncurl --user \"\\<public-api-key\\>:\\<private-api-key\\><private-api-key>\" --digest --include \\\n --header \"Accept: application/json\" \\\n --header \"Content-Type: application/json\" \\\n > --request PATCH \"https://cloud.mongodb.com/api/atlas/v1.0/groups/<redacted\\>/clusters/\\<redacted>/backup/schedule\" \\\n --data '\n {\n \"referenceHourOfDay\": 12,\n \"referenceMinuteOfHour\": 30,\n \"policies\": [\n {\n \"id\": \"000000000000000000000001\",\n \"policyItems\": [\n {\n \"frequencyType\": \"HOURLY\",\n \"frequencyInterval\": 6,\n \"retentionValue\": 2,\n \"retentionUnit\": \"DAYS\"\n }\n ]\n }\n ],\n \"updateSnapshots\": true,\n \"autoExportEnabled\" : true\n}'\n new CfnResource(this, 'mongodb_backup_schedule', {\n type: 'MongoDB::Atlas::CloudBackupSchedule',\n properties: {\n ProjectId: mongoproject.ref,\n ClusterName: \"<redacted>\",\n UseOrgAndGroupNamesInExportPrefix: true,\n AutoExportEnabled: \"false\",\n ApiKeys: {\n PublicKey: '{{resolve:secretsmanager:atlas/<redacted>/apiKey:SecretString:publicKey}}',\n PrivateKey: '{{resolve:secretsmanager:atlas/<redacted>/apiKey:SecretString:privateKey}}',\n },\n \"Policies\": [\n {\n \"ID\": \"000000000000000000000001\",\n \"PolicyItems\": [\n { \n \"FrequencyInterval\": 6,\n \"FrequencyType\": \"hourly\",\n \"RetentionUnit\": \"days\",\n \"RetentionValue\": 7\n }\n ]\n }\n ],\n ReferenceHourOfDay: \"0\",\n ReferenceMinuteOfHour: \"0\",\n RestoreWindowDays: \"1\",\n },\n });\nResource handler returned message: \"Error updating cloud backup schedule : PATCH https://cloud.mongodb.com/api/atlas/v1.0/groups/<redacted>/clusters/<redacted>/backup/schedule: 400 (request \"INVALID_POLICY_ID\") The policy id 000000000000000000000001 is invalid.\" (RequestToken: 2493f47d-d21d-8ca4-179c-e6874210c26d, HandlerErrorCode: InvalidRequest)\n",
"text": "The api spec states that it requires a 24 char hexadecimal string. When I try this I get the following 400 errorMy curl request looks as followsI get the same error when using the Cloudformation resource as follows:resulting in:Can someone tell me what the correct constraints for the POLICY_ID are?\nI’m basing my code on api spec described here MongoDB Atlas Administration APIKind regards",
"username": "Maarten_Suetens"
},
{
"code": "\"000000000000000000000001\"policies.id",
"text": "Hey @Maarten_Suetens,Welcome to the MongoDB Community Forums! Is \"000000000000000000000001\" the actual value used in the request? If not, have you tried using the policies.id value provided in the response of the Return One Cloud Backup Schedule?Please let us know if this helps or not. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Hey Satyam,‘000000000000000000000001’ is the value I used, but I have tried with several 24-char hexadecimal values.Regarding the return value:\nI am trying to create this via CDK(cloudformation) and the policy.id is only known after creation, but the creation fails because I’m not providing a correct ID.Kind regards\nMaarten",
"username": "Maarten_Suetens"
},
{
"code": "\"policy.id\"\"policy.id\"\"policy.id\"\"policy.id\"",
"text": "Hey @Maarten_Suetens,Regarding the return value:\nI am trying to create this via CDK(cloudformation) and the policy.id is only known after creation, but the creation fails because I’m not providing a correct ID.You have stated the \"policy.id\" is known after creation - Is there a reason you are not using this \"policy.id\"? Additionally, when you refer to creation, do you mean creating/enabling the backups or creating a cluster? When backups are enabled, there is a default backup policy already (I.e. a \"policy.id\" already exists). Using the Return One Cloud Backup Schedule API provides the \"policy.id\" value which you can use in the Update Cloud Backup Schedule for One Cluster API which you had linked previously:I’m basing my code on api spec described here MongoDB Atlas Administration API Additionally, are you thinking to define the default backup policy for when backups are enabled for a cluster (at creation)? If so, there is a feedback that you can upvote. But as of now, this feature is not supported. You can read about this feedback here: Define Default Backup PolicyRegards,\nSatyam",
"username": "Satyam"
},
{
"code": " new CfnResource(this, 'mongo_cluster', {\n type: 'MongoDB::Atlas::Cluster',\n properties: {\n Name: '<redacted>',\n ProjectId: mongoproject.ref,\n ApiKeys: {\n PublicKey: '{{resolve:secretsmanager:atlas/<redacted>/apiKey:SecretString:publicKey}}',\n PrivateKey: '{{resolve:secretsmanager:atlas/<redacted>/apiKey:SecretString:privateKey}}',\n },\n MongoDBMajorVersion: \"6.0\",\n AdvancedSettings: {\n DefaultReadConcern: 'available',\n DefaultWriteConcern: '1',\n JavascriptEnabled: 'true',\n MinimumEnabledTLSProtocol: 'TLS1_2',\n NoTableScan: 'false',\n OplogSizeMB: '4000',\n SampleSizeBIConnector: '110',\n SampleRefreshIntervalBIConnector: '310',\n },\n BiConnector: {\n ReadPreference: \"secondary\",\n Enabled: \"true\"\n },\n BackupEnabled: 'true',\n ClusterType: 'REPLICASET',\n ReplicationSpecs: [\n {\n NumShards: '1',\n AdvancedRegionConfigs: [advancedRegionConfig],\n },\n ],\n },\n });\n",
"text": "Hey @SatyamThank you, this is clearing a lot up for me. I did not know that enabling backups installs a default backup policy whose ID I needed to provide. I was under the impression that Adding a policy would create the ID.We create the cluster using cloudformation resources as described hereWe create the cluster as follows:How would we be able to get the POLICY_ID from this resource? It does not seem to be listed among the return valuesKind regards",
"username": "Maarten_Suetens"
},
{
"code": "policies.id",
"text": "Hey @Maarten_Suetens,Can you run the API described here: Return One Cloud Backup Schedule? The result of this API should contain policies.id.Regards,\nSatyam",
"username": "Satyam"
}
] | POLICY_ID wrongly described in docs? | 2023-02-02T14:04:53.956Z | POLICY_ID wrongly described in docs? | 560 |
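Pulling the thread above together: once cloud backups are enabled, the cluster already has a default policy, and its id can be read back from the Return One Cloud Backup Schedule endpoint and reused in the PATCH, rather than supplying an arbitrary 24-character hex string. A rough Python sketch is below; the project id, cluster name and API keys are placeholders, and the payload values simply mirror the request shown in the thread.

```python
import requests
from requests.auth import HTTPDigestAuth

PROJECT_ID = "<project-id>"
CLUSTER_NAME = "<cluster-name>"
PUBLIC_KEY = "<public-api-key>"
PRIVATE_KEY = "<private-api-key>"

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
URL = f"{BASE}/groups/{PROJECT_ID}/clusters/{CLUSTER_NAME}/backup/schedule"
AUTH = HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY)

# 1. Read the current schedule; a default policy (with its id) already exists
#    once cloud backups are enabled on the cluster.
schedule = requests.get(URL, auth=AUTH, headers={"Accept": "application/json"})
schedule.raise_for_status()
policy_id = schedule.json()["policies"][0]["id"]

# 2. Reuse that id when patching the schedule instead of inventing one.
payload = {
    "referenceHourOfDay": 12,
    "referenceMinuteOfHour": 30,
    "policies": [{
        "id": policy_id,
        "policyItems": [{
            "frequencyType": "hourly",
            "frequencyInterval": 6,
            "retentionUnit": "days",
            "retentionValue": 2,
        }],
    }],
}
resp = requests.patch(URL, json=payload, auth=AUTH)
resp.raise_for_status()
```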
null | [
"replication",
"golang"
] | [
{
"code": "opts = options.Client().ApplyURI(\"mongodb://\" + ipPort + \",\" + secIpPort + \",\" + sec2IpPort + \"/?replicaSet=\" + repSetName).SetConnectTimeout(5 * time.Second).SetAuth(serverAuth).SetWriteConcern(writeconcern.New(writeconcern.WMajority()))\ndb.adminCommand({ \"setDefaultRWConcern\" : 1, \"defaultWriteConcern\" : { \"w\" : \"majority\", \"j\":true, \"wtimeout\" : 5000 }, writeConcern: { \"w\" : \"majority\", \"j\":true, \"wtimeout\" : 5000 }, })\nIP(192.168.1.237)IP(192.168.1.239)",
"text": "I am using 4 nodes for my MongoDB replica set, one as the primary node, two as a secondary node, and one as an arbitrator node. I am using the following connection string in Golang to connect with MongoDB.Ip used to make connection are, ipPort= 192.168.1.237 secipPort= 192.168.1.239, sec2IpPort= 192.168.1.2In My replica Set( PRIMARY-SECONDAY-SECONDAY-ARBITER (PSSA) ), look like the following:\n“majorityVoteCount” : 3,\n“writeMajorityCount” : 3,\n“votingMembersCount” : 4,\n“writableVotingMembersCount” : 3I have setted the Default RW Concern as following,Problem:When the MongoDb service of the first IP(192.168.1.237) of the connection string is shut down, the secondary IP(192.168.1.239) becomes primary and performs all the read operations well. But does not perform the write operations.How can I deal with this problem? If the first IP of the connection string is down, the secondary IP/node should be able to perform both read and write operations.",
"username": "rohit_arora3"
},
{
"code": "writeConcern",
"text": "Hi @rohit_arora3 and welcome to the MongoDB community forum!!Generally, for a replica set configuration, having odd number of voting members is the recommended method.\nTherefore, the addition of an arbiter must be done with the utmost care and planning.An arbiter in MongoDB only contributes to the voting but does not play any role for the write operations.\nAs mentioned in MongoDB documentation, if the voting majority is more than the writeConcern majority, the write concern is not acknowledged.Please visit the documentation on How to Mitigate Performance Issues with PSA Architecture for more details.Therefore, the recommendation here is to remove the arbiter since it does not add value to the replica set deployment.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDb Replica Set write concern issues on Secondary IP, while Primary IP is Shutdown | 2023-02-07T07:12:54.944Z | MongoDb Replica Set write concern issues on Secondary IP, while Primary IP is Shutdown | 995 |
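For reference, removing the arbiter that the reply above recommends can be done from mongosh with rs.remove('<arbiter-host>:<port>'), or programmatically via a replica set reconfig as sketched below. This is a rough illustration only: hostnames, credentials and the replica set name are placeholders, the command must be run against the primary, and dropping a voting member changes the majority calculation, so it should be planned for a maintenance window.

```python
from pymongo import MongoClient

# Connect through the replica set so the driver targets the current primary.
client = MongoClient(
    "mongodb://192.168.1.237:27017,192.168.1.239:27017,192.168.1.2:27017/"
    "?replicaSet=<repSetName>",
    username="<user>",
    password="<password>",
)

# Fetch the current config, drop the arbiter member and bump the version.
cfg = client.admin.command("replSetGetConfig")["config"]
cfg["members"] = [m for m in cfg["members"] if not m.get("arbiterOnly")]
cfg["version"] += 1

# Apply the new configuration (leaves a PSS replica set of data-bearing nodes).
client.admin.command({"replSetReconfig": cfg})
```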
null | [] | [
{
"code": "",
"text": "I’m running a free cluster, M0 Sandbox. I create a search index and get the search response first time. but after that I don’t get response. what could be the causes?",
"username": "Nazmus_Sakib"
},
{
"code": "",
"text": "Hi @Nazmus_Sakib ! Can you share your index definition in JSON and the query you are running?",
"username": "amyjian"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"hotelName\": {\n \"maxGrams\": 7,\n \"minGrams\": 3,\n \"type\": \"autocomplete\"\n }\n }\n }\n}\n[\n {\n $search: {\n index: 'hotel-search',\n text: {\n query: 'Hot',\n path: {\n 'wildcard': '*'\n }\n }\n }\n }\n]\n{\n \"_id\":{\"$oid\":\"63e47ec031acc217f8d9b929\"},\n \"hotelId\":\"H1153858\",\n \"hotelName\":\"Hotel Spa Elia\"\n}\n",
"text": "Index definition:Query (Copied from atlas search tester on web):And my collection:",
"username": "Nazmus_Sakib"
},
{
"code": "[\n {\n $search: {\n index: 'hotel-search',\n text: {\n query: 'Hot',\n path: {\n 'wildcard': '*'\n }\n }\n }\n }\n]\n[\n {\n $search: {\n index: 'hotel-search',\n autocomplete: {\n query: 'Hot',\n path: 'hotelName'\n }\n }\n }\n]\ntextautcompletepath{'wildcard' : '*'}'hotelName'",
"text": "Can you try the following search query to see if you get any documents?:Changes made:",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas search don't return response | 2023-02-08T12:17:49.456Z | Atlas search don’t return response | 657 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "Below are the three collections i am referencing in the topicTags:\n{“id”:909,“name”:“newTag”}contacts\n{“id\":1,“name”,“testcontact”,“email”:\"[email protected]”,“tags”:[]}\n–tags field contaains list of References of linked tags collection documentCampaign:\n{“id”:1,“name”,“testcampaign”,“tags”:[]}\n–tags field contaains list of References of linked tags collection documentObjective is to get list of all tags ,with the Count of Contacts & Campaigns who are referencing that particular tag.\nI want to understand the best practice to do that through aggregation which is scalable and responds quickly as i am only interested in count of other collections.An Example Result would be\n[\n{“id”:909,“name”:“newTag”,“campaignCount”:2,“contactCount”:5},\n{“id”:909,“name”:“newTag”,“campaignCount”:0,“contactCount”:10}\n]",
"username": "Zeeshan_Ali1"
},
{
"code": "",
"text": "Your expected result should match the supplied source documents.We do not know how to generated 2 different counts from the same name:newTag.We do not know how you refer to Tags from your tags: because you do not share what you have in the arrays. It could be id or name. But is it id or _id?Please read Formatting code and log snippets in postsand then update sample documents and expected results.",
"username": "steevej"
},
{
"code": "{\n \"_id\" : NumberLong(98),\n \"name\" : \"testing\"\n}\n{\n \"_id\" : NumberLong(198),\n \"email\" : \"[email protected]\",\n \"firstname\" : \"Beta\",\n \"lastname\" : \"Jones\",\n \"extraFields\" : {},\n \"tags\" : [ \n NumberLong(98),\n NumberLong(75),\n ],\n}\n\n \"_id\" : NumberLong(32),\n \"title\" : \"test campaign\",\n \"description\" : \"test cam desc\",\n \"type\" : \"onetime\",\n \"status\" : \"created\",\n \"statusDesc\" : \"2/4 Steps\",\n \"tags\" : [ \n NumberLong(98), \n NumberLong(123)\n ]\n}\n{\n \"_id\" : NumberLong(98),\n \"name\" : \"testing\",\n \"campaignCount\":1,\n \"contactCount\":1\n}\n",
"text": "Hi Steeve,\nI mistakenly typed same tag in expected results, and tried to edit,but there wasnt any option to edit.\nApologies.\nHere is the Updated Sample Data and expected Result",
"username": "Zeeshan_Ali1"
},
{
"code": "",
"text": "My approach would be1 - $lookup stage from:Contacts localField:_id foreignField:tags as:contacts\n2 - $set stage to replace contacts with its $size\n3 - 1 and 2 for Campaings",
"username": "steevej"
},
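A minimal mongosh sketch of the approach described above, assuming the collections are named Tags, contacts and Campaign, and that the tags arrays hold the plain _id values shown in the updated sample data:

```js
db.Tags.aggregate([
  // 1 - count contacts referencing each tag
  { $lookup: { from: "contacts", localField: "_id", foreignField: "tags", as: "contactDocs" } },
  { $set: { contactCount: { $size: "$contactDocs" } } },
  // 2 - same for campaigns
  { $lookup: { from: "Campaign", localField: "_id", foreignField: "tags", as: "campaignDocs" } },
  { $set: { campaignCount: { $size: "$campaignDocs" } } },
  // 3 - drop the joined arrays, keeping only the counts
  { $unset: ["contactDocs", "campaignDocs"] }
])
```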
{
"code": "",
"text": "1 and 2 for Campaings@steevej i have achieved the Step 1 and Step 2 . However i am not sure how to pass /use the result from the first aggregation in the second. any suggestions?",
"username": "Zeeshan_Ali1"
},
{
"code": "",
"text": "nevermind figured it out. i had to do all steps in single agg query.",
"username": "Zeeshan_Ali1"
},
{
"code": "",
"text": "Hi Zeeshan Ali,\nI’m new to Mongo DB and trying to get count from diff collections. Could you tell me how did you achieve?",
"username": "rajesh_b"
},
{
"code": "",
"text": "If your situation is different enough from the one from this thread that you cannot map the solution presented here it will be better if you start a new thread and post as much details as possible for your use case.",
"username": "steevej"
},
{
"code": "",
"text": "OK Steeve,\nThank you",
"username": "rajesh_b"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Best Way to get count of Referenced Entities across collections | 2022-06-28T16:53:45.190Z | Best Way to get count of Referenced Entities across collections | 4,930 |
null | [
"aggregation"
] | [
{
"code": "db.getCollection(classrooms).aggregate(\n [\n {\n \"$sort\": {\n \"_id\": -1\n }\n },\n {\n \"$match\": {\n \"$and\": [\n {\n \"is_deleted\": 0,\n \"site_id\": \"111\"\n }\n ]\n }\n },\n { \"$addFields\": { \"Class_Id\": { \"$toString\": \"$_id\" }}},\n { \"$lookup\": {\n \"from\":'student_collection',\n \"localField\": \"Class_Id\",\n \"foreignField\": \"Class_Id\",\n \"as\": \"student_data\",\n \"pipeline\":[\n {\"$match\":{\n \"classrooms.site_id\" : \"111\"\n }}\n ], \n }},\n { \"$addFields\": {\"studentCount\": { \"$size\": \"$student_data\"}}},\n {\n \"$limit\": 100\n },\n {\n \"$skip\": 0\n },\n])\n",
"text": "Hi Team,\nI’m new to Mongo DB and using Mongo DB API for Cosmos DB.\nCosmos DB does not fully support the $lookup & Pipeline command so I have an issue with using both in a single Aggregate. So trying to avoid but I need to get the total students to count based on Site id.\nhave 2 collections … Class, Students. need to get the total students to count based on site id without the pipeline.\nneed an alternative to use that.\nhere below my code:any help!!Thank you",
"username": "rajesh_b"
},
{
"code": "ClassStudents",
"text": "Hi @rajesh_b,Welcome to the MongoDB Community forums To better understand your question:Could you share the followingBest,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "{ \"_id\" : ObjectId(\"123A1\"), \"Class_Name\" : \"Elementary R1\", \"Org_Id\" : \"A123\", \"Org_Data\" : { \"label\" : \"Montgomery County School\", \"value\" : \"A123\" }, \"Site_Id\" : \"111\", \"Site_Data\" : { \"label\" : \"College Gardens Elementary\", \"value\" : \"111\" }, \"Periods\" : false, \"Created_By\" : \"100003\", \"Created_Date\" : ISODate(\"2022-10-14T14:41:55.280+0000\"), \"is_active\" : NumberInt(0), \"Class_Id\" : NumberInt(10000) } \"_id\" : ObjectId(\"01A\"),\n \"Student_id\" : \"8968\",\n \"Org_Id\" : \"A123\",\n \"Org_Data\" : {\n \"label\" : \"Montgomery County School\",\n \"value\" : \"A123\"\n },\n \"Site_Id\" : \"111\",\n \"Site_Data\" : {\n \"label\" : \"College Gardens Elementary\",\n \"value\" : \"111\"\n },\n \"Class_Id\" : \"123A1\",\n \"Class_Data\" : {\n \"label\" : \"Elementary R1\",\n \"value\" : \"123A1\"\n },\n \"First_Name\" : \"Eddie111\",\n \"Last_Name\" : \"Bauer111\",\n \"Gender\" : \"male\", \n \"Student_Image_Id\" : \"\",\n \"Created_By\" : NumberInt(100007),\n \"Created_Date\" : (\"2022-10-27T17:50:46.065+0000\"),\n \"is_active\" : NumberInt(0),\n \"Student_Id\" : NumberInt(10000)\n},```\n\n```{\n \"_id\" : ObjectId(\"01A\"),\n \"Student_id\" : \"8968\",\n \"Org_Id\" : \"A123\",\n \"Org_Data\" : {\n \"label\" : \"Montgomery County School\",\n \"value\" : \"A123\"\n },\n \"Site_Id\" : \"111\",\n \"Site_Data\" : {\n \"label\" : \"College Gardens Elementary\",\n \"value\" : \"111\"\n },\n \"Class_Id\" : \"123A1\",\n \"Class_Data\" : {\n \"label\" : \"Elementary R1\",\n \"value\" : \"123A1\"\n },\n \"First_Name\" : \"testname\",\n \"Last_Name\" : \"test lastname\", \n \"Gender\" : \"male\", \n \"Student_Image_Id\" : \"\",\n \"Created_By\" : NumberInt(100007),\n \"Created_Date\" : (\"2022-10-27T17:50:46.065+0000\"),\n \"is_active\" : NumberInt(0),\n \"Student_Id\" : NumberInt(10000)\n}```\n/>\nLet me know if need any!",
"text": "@Kushagra_Kesav : Kesav, Thank you for your response. my data is in Cosmos Db (mongo api)\nPlease see sample data as below.\nand * The MongoDB version you are using : Cosmos Db Mongo 4.2 Version , Node js application.– classroom\n<{ \"_id\" : ObjectId(\"123A1\"), \"Class_Name\" : \"Elementary R1\", \"Org_Id\" : \"A123\", \"Org_Data\" : { \"label\" : \"Montgomery County School\", \"value\" : \"A123\" }, \"Site_Id\" : \"111\", \"Site_Data\" : { \"label\" : \"College Gardens Elementary\", \"value\" : \"111\" }, \"Periods\" : false, \"Created_By\" : \"100003\", \"Created_Date\" : ISODate(\"2022-10-14T14:41:55.280+0000\"), \"is_active\" : NumberInt(0), \"Class_Id\" : NumberInt(10000) }\n/>\nStudent collection\n<",
"username": "rajesh_b"
},
{
"code": "$lookup$lookup",
"text": "Hi @rajesh_b,Thank you for providing the sample collection and version information along with your desired output. I think it’s clear that the workflow you desire is easier to do with $lookup. Unfortunately, as you have discovered, CosmosDB does not fully support this, so you don’t have many options on this front. It’s important to note that CosmosDB is not a MongoDB product so we cannot comment on what it can or cannot do, or its compatibility with a genuine MongoDB product.An alternative approach would be to implement this function within the application code, e.g. by simulating the functionality of $lookup in code. Since this forum specializes in MongoDB, if you require additional assistance, I would suggest visiting the CosmosDB forums or reaching out to their support team.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
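As a rough illustration of the application-side alternative (a sketch, not an official workaround), this Node.js snippet fetches the matching classrooms and then counts students per class with one extra aggregation; collection and field names follow the samples above:

```js
// Assumes `db` is a connected Db instance from the official Node.js driver.
async function classroomsWithStudentCounts(db, siteId) {
  const classrooms = await db.collection("classrooms")
    .find({ Site_Id: siteId })
    .limit(100)
    .toArray();

  // Group students by Class_Id instead of using $lookup with a pipeline.
  const counts = await db.collection("student_collection").aggregate([
    { $match: { Site_Id: siteId, Class_Id: { $in: classrooms.map(c => c._id.toString()) } } },
    { $group: { _id: "$Class_Id", studentCount: { $sum: 1 } } }
  ]).toArray();

  const countByClass = new Map(counts.map(c => [c._id, c.studentCount]));
  return classrooms.map(c => ({ ...c, studentCount: countByClass.get(c._id.toString()) ?? 0 }));
}
```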
{
"code": "",
"text": "Thank you for your reply Kesav.\nCould you help me with one solution based on provided information so that I will test that whether it’s working or not?\nRegards,\nRajesh",
"username": "rajesh_b"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using $lookup & $Unwind without pipeline command | 2023-01-30T20:55:43.934Z | Using $lookup & $Unwind without pipeline command | 767 |
null | [
"connecting",
"atlas-cluster",
"golang"
] | [
{
"code": "ServerSelectionTimeoutHeartbeatIntervalConnectionTimeoutMaxConnIdleTimeSocketTimeout",
"text": "We have a golang server in production that writes logs to a database in mongo atlas. Our mongo atlas configuration automatically increases the server disk when it’s about to get full. When the upgrade happens the server is down, apparently for a few minutes, but when it comes up again, the driver is not able to reconnect and all the following logs fail to be inserted to the database.We tried to reproduce this problem locally by stoping and restarting a docker container running a mongo db database, but the driver is perfectly capable of reconnecting in this situation. It seems to be something related to DNS resolution.We have seen several time-related parameters that we can tune like:But since we don’t know the exact error (and we can’t reproduce the problem locally) it’s not clear which one to use (we’ve tried them all locally).We think it could be something like the driver storing the server ip behind the connection string and storing it to skip the DNS in future requests. Then, after Atlas upgrades the server maybe the IP is no longer the same and that’s why the driver is not able to communicate remotely but it is able to do so locally (since the server IP doesn’t change and there’s no DNS translation).Has anyone gone through something similar? How do you handle reconnecting to a remote mongo db on Atlas?We found this thread but didn’t find the solution to our problem.Thanks in advance!",
"username": "Jairo_Lozano"
},
{
"code": "",
"text": "Hi @Jairo_Lozano,We’re looking at a bug related to this right now, and are planning to get a fix out for the next patch release.",
"username": "Isabella_Siu"
},
{
"code": "",
"text": "Thanks @Isabella_Siu! Please let me know when it’s released!",
"username": "Jairo_Lozano"
},
{
"code": "",
"text": "Hi again @Jairo_Lozano ! It’ll be released in v1.5.4, which is scheduled for July 6th.",
"username": "Isabella_Siu"
},
{
"code": "",
"text": "cool! thanks @Isabella_Siu ",
"username": "Jairo_Lozano"
},
{
"code": "",
"text": "@Jairo_Lozano we just released Go driver v1.5.4, which includes a fix for SRV polling that should resolve the problem you encountered with having to restart your application after scaling an Atlas cluster.Check out the v1.5.4 release on GitHub.",
"username": "Matt_Dale"
},
{
"code": "",
"text": "Thanks @Matt_Dale!! I’ll upgrade the driver and let you know if that solves the problem!",
"username": "Jairo_Lozano"
},
{
"code": "",
"text": "Hi, i know the old post, it’s happening to me in php with laravel, i use “jenssegers/mongodb”: “^3.8.4”, maybe you can help me with that problem?",
"username": "Elvin_Gonzalez"
},
{
"code": "",
"text": "@Elvin_Gonzalez I work on the Go driver, so I’m not as effective at answering questions about the PHP driver. You’re more likely to get someone with PHP driver expertise if you create a new topic describing your issue in the “Drivers & ODMs” section with tag “php”.",
"username": "Matt_Dale"
},
{
"code": "",
"text": "ok, I understand, thank you very much.",
"username": "Elvin_Gonzalez"
}
] | Mongo-golang driver is not reconnecting after mongo atlas server is down due to server size increaese | 2021-06-29T02:11:35.315Z | Mongo-golang driver is not reconnecting after mongo atlas server is down due to server size increaese | 4,358 |
null | [
"sharding"
] | [
{
"code": "Shard zn-stg-dn-sh0 at ...\n{\n data: '23.21GiB',\n docs: 3092671,\n chunks: 10,\n 'estimated data per chunk': '2.32GiB',\n 'estimated docs per chunk': 309267\n}\n---\nShard zn-stg-dn-sh1 at ...\n{\n data: '23.73GiB',\n docs: 3363251,\n chunks: 4,\n 'estimated data per chunk': '5.93GiB',\n 'estimated docs per chunk': 840812\n}\n---\nTotals\n{\n data: '46.95GiB',\n docs: 6455922,\n chunks: 14,\n 'Shard zn-stg-dn-sh0': [\n '49.44 % data',\n '47.9 % docs in cluster',\n '7KiB avg obj size on shard'\n ],\n 'Shard zn-stg-dn-sh1': [\n '50.55 % data',\n '52.09 % docs in cluster',\n '7KiB avg obj size on shard'\n ]\n}\n\n",
"text": "Hi,I just configured a new MongoDB shard with the latest version available (6.0.4). All seems to be working as expected, but when I run the “getShardDistribution” command, the estimated data per chunk is very high, much more than the default chunksize of 128 MB.As seen in the docs, “Starting in MongoDB 6.0.3, automatic chunk splitting is not performed.”. This means that the aforementioned chunk sizes seen in my ensemble are normal?May I do something with this or is it something that we have to worry about?Thanks in advance!",
"username": "Javi_Martin"
},
{
"code": "",
"text": "Hey Javi,Starting in 6.0.3, we balance by data size instead of the number of chunks. So the 128MB is now only the size of data we migrate at-a-time. So large data size per chunk is good now, as long as the data size per shard is even for the collection.Garaudy (Sharding Product Manager)",
"username": "Garaudy_Etienne"
},
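For anyone verifying the same thing, a quick mongosh check: look at data size per shard rather than chunk counts, and ask the balancer whether the collection is considered balanced (the namespace below is a placeholder):

```js
// Run against mongos.
db.getSiblingDB("mydb").mycoll.getShardDistribution()   // data and docs per shard

// MongoDB 4.4+: returns { balancerCompliant: true, ... } when no migrations are needed.
sh.balancerCollectionStatus("mydb.mycoll")
```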
{
"code": "Totals\n{\n data: '121.08GiB',\n docs: 15634531,\n chunks: 27,\n 'Shard zn-stg-dn-sh1': [\n '49.88 % data',\n '51.2 % docs in cluster',\n '7KiB avg obj size on shard'\n ],\n 'Shard zn-stg-dn-sh0': [\n '50.11 % data',\n '48.79 % docs in cluster',\n '8KiB avg obj size on shard'\n ]\n}\n",
"text": "Hi Garaudy,thanks for your answer. I was worried about having large chunks (about 8 GB) but data is well balanced, so it seems to be working as expected.Closing this thread!\nThanks again.",
"username": "Javi_Martin"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Chunk size many times bigger than configure chunksize (128 MB) | 2023-02-09T08:46:57.144Z | Chunk size many times bigger than configure chunksize (128 MB) | 1,173 |
null | [
"queries",
"dot-net"
] | [
{
"code": "",
"text": "I recently tried to update the MongoDB C# driver from 2.18 to 2.19 using the standard Visual Studio NuGet package updating process.While the update itself went smoothly, I had system wide failures every where with the following exception:“(x as ARoot) is not supported” I went through the patch notes but could not find anything directly related to this other than a small blurb about switching from LINQ2 to the LINQ3. I followed the instructions to manually set to LINQ2 but the issue still persisted. I have now rolled back to 2.18 but I would like to figure this out.Essentially all objects in my system which are stored in mongo inherit from “AMongoThing”, which has some basic properties like the Mongo ObjectID, CreatedBy/CreatedDate, etc. The specific properties are not important.There are a number of queries I make in the system, both get and set, where I don’t care what is actually stored in Mongo(Car, Person, Animal) because I am updating one of those root properties so my mongo call looks something like:collection.Find( x => (x as AMongoThing).Created >= DateTime.Now.AddHours(-24))This is obviously a super silly example but I can replicate the issue described above with this one line. That line works in 2.18 and fails in 2.19I’m not sure if this is truly no longer support or I have some serializer or setting as part of my connection process which is causing the issue.",
"username": "Mark_Mann"
},
{
"code": "",
"text": "Hi, @Mark_Mann,Thank you for reporting this issue. This was not an intentional breaking change. Please file a bug in our CSHARP JIRA project with a self-contained repro and we will investigate further.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "We are encountering a large number of ExpressionNotSupportedException errors in the version 2.19. Could you provide some insight into what may be causing this issue?",
"username": "EMD_LAB"
},
{
"code": "ExpressionNotSupportedExceptionsvar connectionString = \"mongodb://localhost\";\nvar clientSettings = MongoClientSettings.FromConnectionString(connectionString);\nclientSettings.LinqProvider = LinqProvider.V2;\nvar client = new MongoClient(clientSettings);\n",
"text": "Hi, @EMD_LAB,In 2.19.0, we changed the default LINQ provider from our older LINQ2 provider to our new implementation LINQ3. We updated our test suite to run all the LINQ2 tests using LINQ3, but there are inevitably gaps in test coverage. It would be helpful and appreciated if you provided examples of ExpressionNotSupportedExceptions with stack traces so that we can investigate further. Please file these bugs in our CSHARP JIRA project.Note that you are still able to switch back to LINQ2:Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "clientSettings.LinqProvider = LinqProvider.V2;",
"text": "clientSettings.LinqProvider = LinqProvider.V2;Great news! We have successfully rolled back to version 2.18 and are looking forward to upgrading to 2.19 in our upcoming product releases. Thank you for your support.",
"username": "EMD_LAB"
},
{
"code": "mcSettings.LinqProvider = LinqProvider.V2;\nusing MongoDB.Bson.Serialization.Serializers;\nusing MongoDB.Bson.Serialization;\nusing MongoDB.Bson;\nusing MongoDB.Driver;\nusing System;\nusing System.Collections;\nusing System.Collections.Generic;\nusing System.IO;\nusing System.Linq;\nusing System.Linq.Expressions;\nusing System.Threading.Tasks;\nusing System.Reflection;\nusing static MyFirstCoreApp.ExpressionCombiner;\nusing System.Xml.Linq;\nusing Mongo219;\nusing MongoDB.Driver.Linq;\n\nnamespace MyFirstCoreApp\n{\n public static class ExpressionCombiner\n {\n public static Expression<Func<T, bool>> And<T>(this Expression<Func<T, bool>> exp, Expression<Func<T, bool>> newExp)\n {\n var visitor = new ParameterUpdateVisitor(newExp.Parameters.First(), exp.Parameters.First());\n newExp = visitor.Visit(newExp) as Expression<Func<T, bool>>;\n var binExp = Expression.And(exp.Body, newExp.Body);\n return Expression.Lambda<Func<T, bool>>(binExp, newExp.Parameters);\n }\n\n public class CsharpLegacyGuidSerializationProvider : IBsonSerializationProvider\n {\n public IBsonSerializer GetSerializer(Type type)\n {\n if (type == typeof(Guid))\n return new GuidSerializer(GuidRepresentation.Standard);\n\n return null;\n }\n }\n\n public class ParameterUpdateVisitor : System.Linq.Expressions.ExpressionVisitor\n {\n private ParameterExpression _oldParameter;\n private ParameterExpression _newParameter;\n\n public ParameterUpdateVisitor(ParameterExpression oldParameter, ParameterExpression newParameter)\n {\n _oldParameter = oldParameter;\n _newParameter = newParameter;\n }\n\n protected override Expression VisitParameter(ParameterExpression node)\n {\n if (object.ReferenceEquals(node, _oldParameter))\n return _newParameter;\n\n return base.VisitParameter(node);\n }\n }\n }\n\n public class Program\n {\n public static void CreateClassMaps()\n {\n var types = Assembly.GetExecutingAssembly().GetTypes();\n\n foreach (var item in types)\n {\n try\n {\n if (!item.IsInterface)\n {\n var classMap = new BsonClassMap(item);\n classMap.AutoMap();\n classMap.SetDiscriminator(item.FullName);\n\n if (!BsonClassMap.IsClassMapRegistered(item))\n {\n BsonClassMap.RegisterClassMap(classMap);\n }\n }\n }\n catch (Exception)\n {\n //unable to create specific class map\n }\n }\n }\n\n public static void Main(string[] args)\n {\n BsonSerializer.RegisterSerializationProvider(new CsharpLegacyGuidSerializationProvider());\n MongoClientSettings mcSettings = new MongoClientSettings();\n mcSettings.Server = new MongoServerAddress(\"localhost\", 27017);\n mcSettings.LinqProvider = LinqProvider.V2;\n MongoClient client = new MongoClient(mcSettings); \n IMongoDatabase mongoDB = client.GetDatabase(\"MongoTest\");\n \n //clean it for fresh test each time\n mongoDB.GetCollection<AMongoThing>(\"Animals\").DeleteMany(x => true);\n\n CreateClassMaps();\n AddSomeData(mongoDB);\n\n //just test we get all animals\n var getAllAnimals = GetThings<AAnimal>(\n mongoDB,\n filter: null);\n\n //should only get 1\n var getAnimalsBasedOnSomething = GetThings<AAnimal>(\n mongoDB,\n filter: x => (x as Pig).WillBeFood);\n }\n\n public static void AddSomeData(IMongoDatabase DB)\n {\n UpsertThing<Cat>(\n DB,\n filter: null,\n new Cat()\n {\n ID = \"63e169c103f81b89b23add99\", // only setting this manually to prevent duplicates when re-running the program\n IsDomesticated = true,\n Age = 1,\n Gender = \"Male\",\n Name = \"Fluffanutter\"\n }\n );\n UpsertThing<Cat>(\n DB,\n filter: null,\n new Cat()\n {\n ID = \"63e169f4b42641ce7c5e85af\", // only 
setting this manually to prevent duplicates when re-running the program\n IsDomesticated = false,\n Age = 2,\n Gender = \"Female\",\n Name = \"Brown Cat\"\n }\n );\n UpsertThing<Horse>(\n DB,\n filter: null,\n new Horse()\n {\n ID = \"63e169f73aad61eaad4a78aa\", // only setting this manually to prevent duplicates when re-running the program\n LivesOnFarm = true,\n Age = 6,\n Gender = \"Male\",\n Name = \"Neigh Neigh\"\n }\n );\n UpsertThing<Horse>(\n DB,\n filter: null,\n new Horse()\n {\n ID = \"63e169fbbfb45bed8c515fe4\", // only setting this manually to prevent duplicates when re-running the program\n LivesOnFarm = false,\n Age = 12,\n Gender = \"Male\",\n Name = \"Mr. Ed\"\n }\n );\n UpsertThing<Pig>(\n DB,\n filter: null,\n new Pig()\n {\n ID = \"63e169ffb57d09e93a93251c\", // only setting this manually to prevent duplicates when re-running the program\n WillBeFood = true,\n Age = 3,\n Gender = \"Male\",\n Name = \"Wilbur\"\n }\n );\n UpsertThing<Pig>(\n DB,\n filter: null,\n new Pig()\n {\n ID = \"63e16a03db8e428dd6240b43\", // only setting this manually to prevent duplicates when re-running the program\n WillBeFood = false,\n Age = 15,\n Gender = \"Female\",\n Name = \"Sir Oinks\"\n }\n );\n }\n\n public static T UpsertThing<T>(\n IMongoDatabase DB,\n Expression<Func<T, bool>> filter,\n T record)\n {\n var collectionName = (record as AMongoThing).StorageGrouping;\n var coll = DB.GetCollection<T>(collectionName);\n\n if ((record as AMongoThing).Created == null)\n {\n (record as AMongoThing).Created = DateTime.UtcNow;\n }\n\n (record as AMongoThing).LastModified = DateTime.UtcNow;\n\n if (string.IsNullOrEmpty((record as AMongoThing).ID))\n {\n coll.InsertOne(record);\n return record;\n }\n else\n {\n if (filter == null)\n {\n filter = x => (x as AMongoThing).ID == (record as AMongoThing).ID;\n }\n else\n {\n filter = filter.And<T>(x => (x as AMongoThing).ID == (record as AMongoThing).ID);\n }\n\n return coll.FindOneAndReplace(\n filter, \n record, \n new FindOneAndReplaceOptions<T, T> { IsUpsert = true, ReturnDocument = ReturnDocument.After });\n }\n }\n\n public static List<T> GetThings<T>(\n IMongoDatabase DB,\n Expression<Func<T, bool>> filter)\n {\n var collectionName = \"Unknown\";\n\n if (typeof(T) == typeof(AMongoThing) || typeof(T).IsSubclassOf(typeof(AMongoThing)))\n {\n var temp = Activator.CreateInstance(typeof(T));\n collectionName = (temp as AMongoThing).StorageGrouping;\n }\n\n var coll = DB.GetCollection<T>(collectionName);\n var myCursor = coll.FindSync<T>(filter ?? FilterDefinition<T>.Empty);\n\n List<T> returnValue = new List<T>();\n while (myCursor.MoveNext())\n {\n returnValue.AddRange(myCursor.Current as List<T>);\n }\n\n return returnValue;\n }\n }\n}\n\nusing MongoDB.Bson.Serialization.Attributes;\nusing MongoDB.Bson;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Text;\nusing System.Threading.Tasks;\n\nnamespace Mongo219\n{\n [BsonIgnoreExtraElements]\n public class AMongoThing\n {\n [BsonId]\n [BsonIgnoreIfDefault]\n [BsonRepresentation(BsonType.ObjectId)]\n public string? ID { get; set; }\n\n public string Name { get; set; } = \"\";\n\n public string StorageGrouping { get; set; }\n\n [BsonDateTimeOptions(Kind = DateTimeKind.Utc)]\n public DateTime? Created { get; set; }\n\n [BsonDateTimeOptions(Kind = DateTimeKind.Utc)]\n public DateTime? 
LastModified { get; set; }\n }\n\n [BsonIgnoreExtraElements]\n public class AAnimal : AMongoThing\n {\n public string Gender { get; set; }\n public int Age { get; set; }\n\n public AAnimal()\n {\n this.StorageGrouping = \"Animals\";\n }\n }\n\n [BsonIgnoreExtraElements]\n public class Cat : AAnimal\n {\n public bool IsDomesticated { get; set; }\n\n public Cat()\n {\n this.StorageGrouping = \"Animals\";\n }\n }\n\n [BsonIgnoreExtraElements]\n public class Horse : AAnimal\n {\n public bool LivesOnFarm { get; set; }\n\n public Horse()\n {\n this.StorageGrouping = \"Animals\";\n }\n }\n\n [BsonIgnoreExtraElements]\n public class Pig : AAnimal\n {\n public bool WillBeFood { get; set; }\n\n public Pig()\n {\n this.StorageGrouping = \"Animals\";\n }\n }\n}\n\n",
"text": "Thank you everyone for the feedback and comments.I ended up creating a simple program using both 2.18 and 2.19, and it seems the LinqProvider.V2 did fix the problem. The problem when I originally tried that was in the way I was setting the linqProvider property/value.I will however provide my sample program if anyone is interested. The failure will/not occur as you comment out the below line:I cannot upload the code so I will try to paste it all here, I hope it works…",
"username": "Mark_Mann"
},
{
"code": "x as AMongoThing$convertAMongoThingpublic static T UpsertThing<T>(\n IMongoDatabase DB,\n Expression<Func<T, bool>> filter,\n T record) where T: AMongoThing\nwhere T: AMongoThingas AMongoThingAMongoThingNullReferenceExceptionAMongoThing",
"text": "Hi, @Mark_Mann,Thank you for your code example. I understand the problem that you’ve encountered.You are creating a filter using x as AMongoThing, which LINQ3 attempts to convert into a server-side $convert operation. The server is not aware of C# class definitions and has no way to know how to cast an arbitrary object to AMongoThing and thus fails.This worked by happenstance in LINQ2 because we blindly discarded cast operations that we didn’t understand. This is dangerous as the cast operation may be important to your logic.Fortunately the fix in your code is straightforward. You can use a generic type constraint on your method, which is much safer than the cast.By annotating the method with where T: AMongoThing, you can safely eliminate all the as AMongoThing casts. Not only is the code clearer, but it is safer as the compiler prevents you from passing in objects that do not derive from AMongoThing. Previously you would have encountered a NullReferenceException at runtime if the object passed did not derive from AMongoThing.I hope this resolves your problem.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "James,Interesting and thank you for that feedback. I think the way I am using Mongo is extremely strange then, but it has been profoundly successful for us from a code management, expansion, and maintenance perspective.\nWe decided, for right or wrong I suppose, to put the responsibility on the developer to know what objects they have and derive from. You are correct that it would throw a null exception and we do catch that(and others) and deal with them accordingly.In your above example it would me to specify a single “where T: ” at the end, but we actually use filters where there are multiple types assumed/used. So I cannot specify a single “AMongoThing” without sacrificing other aspects of my query In my provided example I simplified things just to highlight the error I encountered, but we use mongoDB in some very interesting ways. I would be happy to show you what we’ve done if you were interested.I will be using my sample project to attempt an upgrade to 2.19 as we encountered some other issues as well. Do you happen to know if/when LINQ2 will no longer be supported?",
"username": "Mark_Mann"
},
{
"code": "$convert$convert",
"text": "Hi, @Mark_Mann,Thank you for the additional information and continued discussion. It was a design decision to be more rigorous about only removing casts (aka $convert) with LINQ3 as LINQ2 allows you to do strange things like cast a string to a bool - which will fail with LINQ-to-Objects but magically work server-side because the cast is simply stripped out of the expression.In your use case, you use the casts to make the C# compiler happy, not to express server-side $convert expressions. While unusual, it is not as uncommon as we may have initially thought. I’m going to discuss this with the engineering team to see if and how we can support use cases such as yours.It would be helpful to file a CSHARP ticket in JIRA along with a description of your use case, a repro, and any publicly available code so that we can review and triage it. Thank you in advance.Removing LINQ2 support is a breaking change and will not be done until the next major version, 3.0.0. We do not have a timeline for the 3.0.0 release yet, but the soonest would be later this year or early next.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "James,You are correct, the intention is not server-side conversion operations but rather to facilitate c# code LINQ type statements. I originally tried to do the (x is AThing) but of course MongoDB wouldn’t understand that and I’m not sure the discriminator is meant to work that way, my understanding was that the discriminator was more related to the de/serialization process.I will file the bug accordingly. Thank you for responding to this with meaningful replies!Ticket Made: https://jira.mongodb.org/browse/CSHARP-4522If I did not provide information just let me know",
"username": "Mark_Mann"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Issue with 2.18 to 2.19 NuGet Upgrade of MongoDB C# Driver | 2023-02-03T16:29:51.767Z | Issue with 2.18 to 2.19 NuGet Upgrade of MongoDB C# Driver | 3,627 |
null | [
"aggregation",
"compass"
] | [
{
"code": "",
"text": "Hello.\nI’m trying to build a view using Compass aggregation builder. I want to use $ group. Due to the size of the source collection, I need to add {allowDiskUse: true} to the aggregation pipeline. How and where can I set this parameter using Compass?\nThank you in advance for your help,\nRegards,\nMarek",
"username": "Marek_Bisztyga"
},
{
"code": "{allowDiskUse: true}",
"text": "Hi @Marek_Bisztyga,Welcome to MongoDB community!Compass allows you to use a sample mode in the aggregation builder therefore you should be able to construct your pipeline in compass and use the “export” button in the end.Only in 4.4 you can add the allowDiskUse is allowed in find.The {allowDiskUse: true} should be added to the query/aggregate command used on the already defined view therefore it cannot be done as part of the agg builder.Hope that helps.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "{allowDiskUse: true}",
"text": "The {allowDiskUse: true} should be added to the query/aggregate command used on the already defined view therefore it cannot be done as part of the agg builder.Hi @Pavel_Duchovny\nTo add {allowDiskUse: true} to an already defined (through the aggregate command) view, I have to define the view first, and I cannot do this because to define it I have to add {allowDiskUse: true}.\nIs my reasoning correct?\nBest\nMarek",
"username": "Marek_Bisztyga"
},
{
"code": " db.createView(\"myView\", \"myColl\" , [ { $group : {\"_id\" : \"$id\" , arr : { $push : \"$value\" }} }])\ndb.myView.aggregate([],{allowDiskUse : true });\n db.myView.find({},{},{allowDiskUse : true })\n",
"text": "Hi @Marek_Bisztyga,Not necessarily. You can attempt the following:Create view with pipeline without the allowDiskUse, example:The view only consume the data when you query it. Here you can use the allowDiskUse:\nPrior 4.4 -From 4.4 you can use find:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi Pavel.\nIt works.\nNow I have to figure out how to make the Tableau, which is supposed to see this view, work the same way.\nThanks for the help.\nBest,\nMarek",
"username": "Marek_Bisztyga"
},
{
"code": "",
"text": "Hi @Marek_Bisztyga,How do you connect to tableau? If its through the BI connector it always set allowDiskUse option on . So it should work:Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny\nOh! Great news!\nThank you.",
"username": "Marek_Bisztyga"
},
{
"code": "",
"text": "Hello,I have exactly the same issue, but I’m afraid I still haven’t managed to resolve it. When I try to use find with {},{},{allowDiskUse : true } on my view in Compass, I get the same error.Any chance you could help me out with this please?Thanks a lot,\nEmily",
"username": "jamiesone"
},
{
"code": "",
"text": "Welcome to the MongoDB Community Forums @jamiesone!Please provide some more details on the issue you are encountering:Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you for your reply!I have exactly the same issue as the other post in this thread. I’m using the Compass aggregation builder to run a long pipeline including $group which previously worked, but now gives me the error",
"username": "jamiesone"
},
{
"code": "{},{},{allowDiskUse : true }",
"text": "ah sorry I can’t work out how to edit my previous post…It now gives me the error “PlanExecutor error during aggregation :: caused by :: Exceeded memory limit for $group, but didn’t allow external sort. Pass allowDiskUse:true to opt in.”I tried filtering the view as you suggested,{},{},{allowDiskUse : true }But it gives me the same error.I’m using Compass version 1.20.5 and the deployment is a MongoDB 5.0.2 Community on a kubernetes clusterThanks again,\nEmily",
"username": "jamiesone"
},
{
"code": "",
"text": "Hi @jamiesone ,\nI honestly don’t recall how to pass that in compass.Can you build aggregation and run it with this setting in the shell tab?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
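For anyone else landing here, this is roughly what running it from the embedded shell tab looks like; myView stands in for the actual view name:

```js
// Pass allowDiskUse as an option to aggregate on the view...
db.myView.aggregate([{ $match: {} }], { allowDiskUse: true })

// ...or, on MongoDB 4.4+, enable it on a find() cursor against the view.
db.myView.find({}).allowDiskUse()
```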
{
"code": "",
"text": "Thanks for your reply! Yes, it definitely works in the shell once I’ve added the allowDiskUse setting. But we’d really like to find out how to enable this setting in Compass if possible as my colleague then runs a whole lot of other filters on the view, so it’s really practical to be able to do this in Compass.Thanks again,\nEmily",
"username": "jamiesone"
},
{
"code": "",
"text": "Hi @jamiesone ,Are you working with atlas? If yes what instances?Compass should add this by default…",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hello,No, I’m not working with Atlas, I’m using MongoDB 5.0.2 Community on a kubernetes cluster.It seems like the functionality was added at some point?\nhttps://jira.mongodb.org/browse/COMPASS-2722Thanks again for your help,\nEmily",
"username": "jamiesone"
},
{
"code": "",
"text": "Hi Emily. I am having the exact same problem. I follow the instructions line by line but at the end of it I only get an empty view. How did you solve it?",
"username": "Victor_Hernandez"
},
{
"code": "",
"text": "Hi Marek!Have you found a way to solve this?",
"username": "Fernando_Lumbreras"
},
{
"code": "",
"text": "I’m also facing this issue with Compass Version 1.34.2 (1.34.2).\nCleanShot 2023-02-09 at 10.33.45@2x2342×142 12.6 KB\nHow does one pass in this option to the fields?",
"username": "Dylan_Pierce"
}
] | allowDiskUse setting in Compass | 2020-11-18T17:22:37.967Z | allowDiskUse setting in Compass | 20,174 |
null | [
"kafka-connector"
] | [
{
"code": "sleep 5\n\necho \"\\n\\Configuring MongoDB Connector for Apache Kafka...\\n\\n\"\ncurl --silent -X POST -H \"Content-Type: application/json\" -d @mongodb-sink.json http://localhost:8083/connectors\n{\"name\": \"mongo-ts-sink\",\n \"config\": {\n \"connector.class\":\"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"tasks.max\":\"1\",\n \"topics\":\"activity\",\n \"connection.uri\"mongodb://mongo1\",\n \"database\":\"Stocks\",\n \"collection\":\"StockData\",\n \"key.converter\":\"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n \"value.converter.schemas.enable\":\"false\",\n \"timeseries.timefield\":\"tx_time\",\n \"timeseries.timefield.auto.convert\":\"true\"\n \n }}\n connect:\n image: confluentinc/cp-kafka-connect-base:latest\n build:\n context: .\n dockerfile: Dockerfile-MongoConnect\n depends_on:\n - redpanda\n ports:\n - '8083:8083'\n",
"text": "I am new to MongoDB-kafka - getting the following error —>WARNING: Could not reach configured kafka connect system on http://localhost: 8083\nNote: This script requires curl.If using OSX please try reconfiguring Docker and increasing RAM and CPU. Then restart and try again.Configuring MongoDB Connector —>My Sink-Connector → My connection —>Thank you in advance.",
"username": "Onesmus_Nyakotyo"
},
{
"code": "",
"text": "How is your redpanda configured in your docker-compose file? Can you send the entire compose?",
"username": "Robert_Walters"
},
{
"code": " if [[ \"$OSTYPE\" == \"darwin\"* ]]; then\n MSG+=\"\\nIf using OSX please try reconfiguring Docker and increasing RAM and CPU. Then restart and try again.\\n\\n\"\n fi\n\n echo -e $MSG\n clean_up \"$MSG\"\n exit 1\n fi\n jq '. | to_entries[] | [ .value.info.type, .key, .value.status.connector.state,.value.status.tasks[].state,.value.info.config.\"connector.class\"]|join(\":|:\")' | \\*\n column -s : -t| sed 's/\\\"//g'| sort*\n # env_file: ./server/.env # TODO - uncomment this to auto-load your .env file!\nenvironment:\n - NODE_ENV=development\n # - CHOKIDAR_USEPOLLING=true\n",
"text": "Hi Robert_Walters,I npm installed curl - '“WARNING: Could not reach configured kafka connect system on http://localhost: 8083, Note: This script requires curl.” – > error is gone - see my → docker-compose.yml file below, but when i run ‘sh run.sh’, i am getting nowKafka Connectors status:run.sh: line 54: jq: command not foundVersion of MongoDB Connector for Apache Kafka installed:run.sh: line 58: jq: command not found—> thats the last 2 echos on the run.sh#!/bin/bashset -e\n(\nif lsof -Pi :27017 -sTCP:LISTEN -t >/dev/null ; then\necho “Please terminate the local mongod on 27017, consider running ‘docker-compose down -v’”\nexit 1\nfi\n)echo “Starting docker .”\ndocker-compose up -d --buildsleep 5\necho “\\n\\nWaiting for the systems to be ready…”\nfunction test_systems_available {\nCOUNTER=0\nuntil $(curl --output /dev/null --silent --head --fail http://localhost:$1); do\nprintf ‘.’\nsleep 2\nlet COUNTER+=1\nif [[ $COUNTER -gt 30 ]]; then\nMSG=“\\nWARNING: Could not reach configured kafka connect system on http://localhost:$1 \\nNote: This script requires curl.\\n”done\n}test_systems_available 8083#echo -e “\\nConfiguring the MongoDB ReplicaSet of 1 node…\\n”\n#docker-compose exec mongo1 /usr/bin/mongo --eval ‘’‘rsconf = { _id : “rs0”, members: [ { _id : 0, host : “mongo1:27017”, priority: 1.0 }]};\n#rs.initiate(rsconf);’‘’sleep 5echo “\\n\\Configuring MongoDB Sink Connector for Apache Kafka…\\n\\n”\ncurl --silent -X POST -H “Content-Type: application/json” -d @mongodb-sink.json http://localhost:8083/connectorsecho “\\n\\Configuring MongoDB Source Connector for Apache Kafka…\\n\\n”\ncurl --silent -X POST -H “Content-Type: application/json” -d @mongodb-source.json http://localhost:8083/connectorssleep 5echo “\\n\\nKafka Connectors status:\\n\\n”\n*curl -s “http://localhost:8083/connectors?expand=info&expand=status” | *echo “\\n\\nVersion of MongoDB Connector for Apache Kafka installed:\\n”\ncurl --silent http://localhost:8083/connector-plugins | jq -c ‘. | select( .class == “com.mongodb.kafka.connect.MongoSourceConnector” or .class == “com.mongodb.kafka.connect.MongoSinkConnector” )’echo ‘’’==============================================================================================================The following services are running:MongoDB 1-node replica set on port 27017\nRedpanda on 8082 (Redpanda proxy on 8083)\nKafka Connect on 8083\nNode Server on 4000 is hosting the API and homepageStatus of kafka connectors:\nsh status.sh - last 2 echos are not foundTo tear down the environment and stop these serivces:\ndocker-compose down -vsh status.shecho “Redpanda topics:\\n”curl --silent “http://localhost:8082/topics” | jqecho “\\nThe status of the connectors:\\n”curl -s “http://localhost:8083/connectors?expand=info&expand=status” | \njq ‘. | to_entries | [ .value.info.type, .key, .value.status.connector.state,.value.status.tasks.state,.value.info.config.“connector.class”]|join(“:|:”)’ | \ncolumn -s : -t| sed ‘s/\"//g’| sortecho “\\nCurrently configured connectors\\n”\ncurl --silent -X GET http://localhost:8083/connectors | jqecho “\\n\\nVersion of MongoDB Connector for Apache Kafka installed:\\n”\ncurl --silent http://localhost:8083/connector-plugins | jq -c ‘. 
| select( .class == “com.mongodb.kafka.connect.MongoSourceConnector” or .class == “com.mongodb.kafka.connect.MongoSinkConnector” )’docker-compose.ymlversion: ‘3.7’\nservices:redpanda:\ncommand:\n- redpanda\n- start\n- --smp\n- ‘1’\n- --reserve-memory\n- 0M\n- --overprovisioned\n- --node-id\n- ‘0’\n- --pandaproxy-addr\n- 0.0.0.0:8082\n- --advertise-pandaproxy-addr\n- 127.0.0.1:8082\n- --kafka-addr\n- PLAINTEXT://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092\n- --advertise-kafka-addr\n- PLAINTEXT://redpanda:29092,OUTSIDE://localhost:9092\nimage: docker.vectorized.io/vectorized/redpanda:latest\ncontainer_name: redpanda\nhostname: redpanda\n# networks:\n# - localnet\nports:\n- 9092:9092\n- 29092:29092\n- 8082:8082connect:\nimage: confluentinc/cp-kafka-connect-base:latest\nbuild:\ncontext: .\ndockerfile: Dockerfile-MongoConnect\nhostname: connect\n# container_name: connect\ndepends_on:\n- redpanda\nports:\n- “8083:8083”\n# networks:\n# - localnet\nenvironment:\nCONNECT_BOOTSTRAP_SERVERS: ‘redpanda:29092’\nCONNECT_REST_ADVERTISED_HOST_NAME: connect\nCONNECT_REST_PORT: 8083\nCONNECT_GROUP_ID: connect-cluster-group\nCONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs\nCONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000\nCONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets\nCONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_STATUS_STORAGE_TOPIC: docker-connect-status\nCONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_PLUGIN_PATH: “/usr/share/java,/usr/share/confluent-hub-components”\nCONNECT_AUTO_CREATE_TOPICS_ENABLE: “true”\nCONNECT_KEY_CONVERTER: “org.apache.kafka.connect.json.JsonConverter”\nCONNECT_VALUE_CONVERTER: “org.apache.kafka.connect.json.JsonConverter”mongo1:\nimage: “mongo:latest”\ncontainer_name: mongo1\ncommand: --replSet rs0 --oplogSize 128\nvolumes:\n- /data/db\n# networks:\n# - localnet\nports:\n- “27017:27017”\nrestart: alwaysnodesvr:\nimage: node:16\nbuild:\ncontext: .\ndockerfile: Dockerfile-Nodesvr\ndepends_on:\n- redpanda\n- mongo1\nvolumes:\n- ./backend/:/usr/app\n# - /usr/app/node_modules\nports:\n- “4000:4000”frontendsocket:\nbuild:\ncontext: ./frontendsocket/\ncommand: npm start\nvolumes:\n- ./frontendsocket/:/usr/app\n- /usr/app/node_modules\ndepends_on:\n- nodesvr\nports:\n- “3000:3000”Dockerfile-MongoConnect fileFROM confluentinc/cp-kafka-connect:latestRUN confluent-hub install --no-prompt mongodb/kafka-connect-mongodb:latestENV CONNECT_PLUGIN_PATH=“/usr/share/java,/usr/share/confluent-hub-components”mongodb-sink.json{“name”: “mongo-ts-sink”,\n“config”: {\n“connector.class”:“com.mongodb.kafka.connect.MongoSinkConnector”,\n“tasks.max”:“1”,\n“connection.uri”:“mongodb+srv://<username+password>@cluster0.ebt7p.mongodb.net/?retryWrites=true&w=majority”,\n“topics”:“ChatData”,\n“database”:“socketIo-MongoDb”,\n“collection”:“chatfeed”,\n“poll.max.batch.size”: “1000”,\n“poll.await.time.ms”: “5000”,\n“batch.size”: “1”,\n“change.stream.full.document”: “updateLookup”,\n“key.converter”: “org.apache.kafka.connect.storage.StringConverter”,\n“value.converter”: “org.apache.kafka.connect.storage.StringConverter”,\n“key.converter.schemas.enable”: “false”,\n“value.converter.schemas.enable”: “false”,\n“publish.full.document.only”: “true”,\n“change.data.capture.handler”:“com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler”}}mongodb-source.json{\n“name”: “mongo-source”,\n“config”: {\n“connector.class”: “com.mongodb.kafka.connect.MongoSourceConnector”,\n“tasks.max”: “1”,\n“connection.uri”: 
“mongodb+srv://<username+password>@cluster0.ebt7p.mongodb.net/?retryWrites=true&w=majority”,\n“database”: “socketIo-MongoDb”,\n“collection”:“chat”,\n“copy.existing”: “true”,\n“poll.max.batch.size”: “1000”,\n“poll.await.time.ms”: “5000”,\n“batch.size”: “1”,\n“change.stream.full.document”: “updateLookup”,\n“key.converter”: “org.apache.kafka.connect.storage.StringConverter”,\n“value.converter”: “org.apache.kafka.connect.storage.StringConverter”,\n“key.converter.schemas.enable”: “false”,\n“value.converter.schemas.enable”: “false”,\n“publish.full.document.only”: “true”\n}\n}Even when i run.sh from → mongodb-redpanda-example-main repo in github, i am getting the same error in that repoThank you in advance",
"username": "Onesmus_Nyakotyo"
},
{
"code": "docker psdocker logs xxxx",
"text": "install jqhttps://stedolan.github.io/jq/download/then\ndo a docker ps to see if kafka connect and redpanda are running, if they are read the logs via docker logs xxxx where xxx is the container id, see if there are any errors",
"username": "Robert_Walters"
},
{
"code": "image: docker.vectorized.io/vectorized/redpanda:latest",
"text": "image: docker.vectorized.io/vectorized/redpanda:latestactually the issue is you are using the old name for the image, vectorized change it fromimage: docker.vectorized.io/vectorized/redpanda:latestto‘image: docker.redpanda.com/vectorized/redpanda:latest’change that in the docker-compose file and retry",
"username": "Robert_Walters"
},
{
"code": "ports:\n - \"4000:4000\"\n\n# backend:\n# build:\n# context: ./backend/\n# command: /usr/app/node_modules/.bin/nodemon server.js\n# volumes:\n# - ./backend/:/usr/app\n# - /usr/app/node_modules\n# depends_on:\n# - mongo1\n# ports:\n# - \"4000:4000\"\n",
"text": "hi Robert_Walters,Thank you, that error is gone but i am now unable to connect my react app to port 4000 in —> bundle.js:53895 GET http://localhost:4000/socket.io/?EIO=4&transport=polling&t=O8RKBri net::ERR_CONNECTION_REFUSEDdocker-compose.ymlversion: ‘3.7’\nservices:redpanda:\ncommand:\n- redpanda\n- start\n- --smp\n- ‘1’\n- --reserve-memory\n- 0M\n- --overprovisioned\n- --node-id\n- ‘0’\n- --pandaproxy-addr\n- 0.0.0.0:8082\n- --advertise-pandaproxy-addr\n- 127.0.0.1:8082\n- --kafka-addr\n- PLAINTEXT://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092\n- --advertise-kafka-addr\n- PLAINTEXT://redpanda:29092,OUTSIDE://localhost:9092\n# image: docker.vectorized.io/vectorized/redpanda:latest\nimage: docker.redpanda.com/vectorized/redpanda:latest\ncontainer_name: redpanda\nhostname: redpanda\n# networks:\n# - localnet\nports:\n- 9092:9092\n- 29092:29092\n- 8082:8082connect:\nimage: confluentinc/cp-kafka-connect-base:latest\nbuild:\ncontext: .\ndockerfile: Dockerfile-MongoConnect\nhostname: connect\n# container_name: connect\ndepends_on:\n- redpanda\nports:\n- “8083:8083”\n# networks:\n# - localnet\nenvironment:\nCONNECT_BOOTSTRAP_SERVERS: ‘redpanda:29092’\nCONNECT_REST_ADVERTISED_HOST_NAME: connect\nCONNECT_REST_PORT: 8083\nCONNECT_GROUP_ID: connect-cluster-group\nCONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs\nCONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000\nCONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets\nCONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_STATUS_STORAGE_TOPIC: docker-connect-status\nCONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_PLUGIN_PATH: “/usr/share/java,/usr/share/confluent-hub-components”\nCONNECT_AUTO_CREATE_TOPICS_ENABLE: “true”\nCONNECT_KEY_CONVERTER: “org.apache.kafka.connect.json.JsonConverter”\nCONNECT_VALUE_CONVERTER: “org.apache.kafka.connect.json.JsonConverter”mongo1:\nimage: “mongo:latest”\n# command: --replSet rs0 --oplogSize 128\nvolumes:\n- /data/db\nports:\n- “27017:27017”\nrestart: alwaysnodesvr:\nimage: node:16\nbuild:\ncontext: .\ndockerfile: Dockerfile-Nodesvr\ndepends_on:\n- redpanda\n- mongo1frontendsocket:\nbuild:\ncontext: ./frontendsocket/\ncommand: npm start\nvolumes:\n- ./frontendsocket/:/usr/app\n- /usr/app/node_modules\ndepends_on:\n- nodesvr\nports:\n- “3000:3000”Dockerfile-Nodesvr —>FROM node:16WORKDIR /usr/srcCOPY ./package*.json ./RUN npm installCOPY ./backend .EXPOSE 4000CMD [ “node”, “backend/server.js” ]Thank you again",
"username": "Onesmus_Nyakotyo"
},
{
"code": "# networks:\n# - localnet\n\nnetworks:\n localnet:\n attachable: true\n",
"text": "it looks like you do not have any networks defined for these, you commented outif you want all these components to be able to talk to each other they need to be on the same network so add networks: to each of the components then at the end of the docker compose file",
"username": "Robert_Walters"
},
{
"code": "ports:\n - \"4000:4000\"\n\n# backend:\n# build:\n# context: ./backend/\n# command: /usr/app/node_modules/.bin/nodemon server.js\n# volumes:\n# - ./backend/:/usr/app\n# - /usr/app/node_modules\n# depends_on:\n# - mongo1\n# ports:\n# - \"4000:4000\"\n",
"text": "Thank you Robert_Walters, am still unable to send data to mongodb atlas db - bundle.js:53895 GET http://localhost:4000/socket.io/?EIO=4&transport=polling&t=O8Vho6a net::ERR_CONNECTION_REFUSEDconst io = new Server(httpServer, {\ncors: {\norigin: “http://localhost:3000”\n}\n});my yml file —>version: ‘3.7’\nservices:redpanda:\ncommand:\n- redpanda\n- start\n- --smp\n- ‘1’\n- --reserve-memory\n- 0M\n- --overprovisioned\n- --node-id\n- ‘0’\n- --pandaproxy-addr\n- 0.0.0.0:8082\n- --advertise-pandaproxy-addr\n- 127.0.0.1:8082\n- --kafka-addr\n- PLAINTEXT://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092\n- --advertise-kafka-addr\n- PLAINTEXT://redpanda:29092,OUTSIDE://localhost:9092\n# image: docker.vectorized.io/vectorized/redpanda:latest\nimage: docker.redpanda.com/vectorized/redpanda:latest\ncontainer_name: redpanda\nhostname: redpanda\nnetworks:\n- localnet\nports:\n- 9092:9092\n- 29092:29092\n- 8082:8082connect:\nimage: confluentinc/cp-kafka-connect-base:latest\nbuild:\ncontext: .\ndockerfile: Dockerfile-MongoConnect\nhostname: connect\n# container_name: connect\ndepends_on:\n- redpanda\nports:\n- “8083:8083”\nnetworks:\n- localnet\nenvironment:\nCONNECT_BOOTSTRAP_SERVERS: ‘redpanda:29092’\nCONNECT_REST_ADVERTISED_HOST_NAME: connect\nCONNECT_REST_PORT: 8083\nCONNECT_GROUP_ID: connect-cluster-group\nCONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs\nCONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000\nCONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets\nCONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_STATUS_STORAGE_TOPIC: docker-connect-status\nCONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_PLUGIN_PATH: “/usr/share/java,/usr/share/confluent-hub-components”\nCONNECT_AUTO_CREATE_TOPICS_ENABLE: “true”\nCONNECT_KEY_CONVERTER: “org.apache.kafka.connect.json.JsonConverter”\nCONNECT_VALUE_CONVERTER: “org.apache.kafka.connect.json.JsonConverter”mongo1:\nimage: “mongo:latest”\n# command: --replSet rs0 --oplogSize 128\nvolumes:\n- /data/db\nports:\n- “27017:27017”\nnetworks:\n- localnet\nrestart: alwaysnodesvr:\nimage: node:16\nbuild:\ncontext: .\ndockerfile: Dockerfile-Nodesvr\ndepends_on:\n- redpanda\n- mongo1\nnetworks:\n- localnetfrontendsocket:\nbuild:\ncontext: ./frontendsocket/\ncommand: npm start\nvolumes:\n- ./frontendsocket/:/usr/app\n- /usr/app/node_modules\ndepends_on:\n- nodesvr\nnetworks:\n- localnet\nports:\n- “3000:3000”networks:\nlocalnet:\nattachable: trueAlthough below logs indicate i am connected to via port 4000onesmusnyakotyo@onesmuss-iMac socketIo-MongoDb % docker compose logs nodesvr\nnodesvr_1 | MongoDB connected…\nnodesvr_1 | IO RUNNING on PORT 4000Thank you again",
"username": "Onesmus_Nyakotyo"
},
{
"code": "ports:\n - “3000:3000”\n - \"4000:4000\"\n",
"text": "ports:you opened port 3000 in docker-compose but you are running on port 4000 within your code apparently so add the 4000 port to the docker-compose",
"username": "Robert_Walters"
},
{
"code": "ports:\n - \"4000:4000\"\n",
"text": "Hi Robert_Walters,Port 3000 is for my frontend(react app) ---->frontendsocket:\nbuild:\ncontext: ./frontendsocket/\ncommand: npm start\nvolumes:Port 4000 for the nodeJs backend ---->nodesvr:\nimage: node:16\nbuild:\ncontext: .\ndockerfile: Dockerfile-Nodesvr\ndepends_on:\n- redpanda\n- mongo1\nnetworks:\n- localnetDockerfile-Nodesvr file ------>FROM node:16WORKDIR /backend .COPY ./package*.json ./RUN npm installCOPY . .EXPOSE 4000CMD [ “node”, “backend/server.js” ]Thank you",
"username": "Onesmus_Nyakotyo"
},
{
"code": "",
"text": "is there anything the docker logs for mongodb ? are the containers still running? can you connect to mongodb from the host? Also you said MongoDB Atlas yet it looks like you are running an instnace of MongoDB locally, which one are you trying to connect to?to send data to mongodb atlas db -",
"username": "Robert_Walters"
},
{
"code": "ports:\n - \"4000:4000\"\n",
"text": "I want to connect to mongoDB atlas – uri: mongodb+srv://[username, password ]@cluster0.ebt7p.mongodb.net/?retryWrites=true&w=majoritydocker compose logs mongo1 —>mongo1_1 | {“t”:{\"$date\":“2022-07-21T20:35:58.956+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“5.0.9”,“gitVersion”:“6f7dae919422dcd7f4892c10ff20cdc721ad00e6”,“openSSLVersion”:“OpenSSL 1.1.1f 31 Mar 2020”,“modules”:,“allocator”:“tcmalloc”,“environment”:{“distmod”:“ubuntu2004”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\nmongo1_1 | {“t”:{\"$date\":“2022-07-21T20:35:58.956+00:00”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“Ubuntu”,“version”:“20.04”}}}mongo1:\nimage: “mongo:latest”\n# command: --replSet rs0 --oplogSize 128\nvolumes:\n- /data/db\nports:\n- “27017:27017”\nnetworks:\n- localnet\nrestart: alwaysnodesvr:\nimage: node:16\nbuild:\ncontext: .\ndockerfile: Dockerfile-Nodesvr\ndepends_on:\n- redpanda\n- mongo1\nnetworks:\n- localnetdocker compose logs connect ---->connect_1 | [2022-07-21 19:27:47,703] WARN [Producer clientId=connector-producer-mongo-source-0] Error while fetching metadata with correlation id 20237 : {socketIo-MongoDb.chat=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)\nconnect_1 | [2022-07-21 19:27:47,703] WARN [Consumer clientId=connector-consumer-mongo-ts-sink-0, groupId=connect-mongo-ts-sink] Error while fetching metadata with correlation id 21029 : {ChatData=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)",
"username": "Onesmus_Nyakotyo"
},
{
"code": "",
"text": "you need to set auto topic create on in Kafka or create the topic beforehand look like",
"username": "Robert_Walters"
},
{
"code": "",
"text": "This is my connect environment variables as advised by redpanda team–>environment:\nCONNECT_BOOTSTRAP_SERVERS: ‘redpanda: 9092’\nCONNECT_REST_ADVERTISED_HOST_NAME: connect\nCONNECT_REST_PORT: 8083\nCONNECT_GROUP_ID: connect-cluster-group\nCONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs\nCONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_CONFIG_STORAGE_PARTITIONS: 6\nCONNECT_CONFIG_STORAGE_CLEANUP.POLICY: compact\nCONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000\nCONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets\nCONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_STATUS_STORAGE_TOPIC: docker-connect-status\nCONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1\nCONNECT_PLUGIN_PATH: “/usr/share/java,/usr/share/confluent-hub-components”\nCONNECT_AUTO_CREATE_TOPICS_ENABLE: “true”\nTOPIC_CREATION_ENABLE: “true”\nTOPIC_CREATION_DEFAULT_REPLICATION_FACTOR: 3\nTOPIC_CREATION_DEFAULT_PARTITIONS: 10\nTOPIC_CREATION_DEFAULT__CLEANUP_POLICY: compact\nTOPIC_CREATION_DEFAULT_COMPRESSION_TYPE: “lz4”\nCONNECT_KEY_CONVERTER: “org.apache.kafka.connect.json.JsonConverter”\nCONNECT_VALUE_CONVERTER: “org.apache.kafka.connect.json.JsonConverter”Robert_Walters, can you advise on connecting to mongoDB Atlas in this setup, which is the local instance you mentioned above?, because i want my source connected to a mongoDB Atlas collection and then my sink connected to a different collection in mongoDB.",
"username": "Onesmus_Nyakotyo"
},
{
"code": "",
"text": "Did you enable the IP in MongoDB Atlas? See Accessing a MongoDB Atlas cluster in this blog MongoDB Atlas Tutorial | MongoDB.",
"username": "Robert_Walters"
},
{
"code": "",
"text": "mongodb+srv://[username, password ]@cluster0.ebt7p.mongodb.net/?retryWrites=true&w=majorityYes i did → 0.0.0.0/0 (includes your current IP address) and the database was working well before connecting to docker/kafka .Thats the connection uri → mongodb+srv://[username, password ]@cluster0.ebt7p.mongodb.net/?retryWrites=true&w=majorityFor docker or redpanda kafka is there a specific IP that needs to be enabled?Please can you look at these mongo and node image is this correct config ---->mongo1:\nimage: “mongo:latest”\ncontainer_name: mongodb\nenvironment:\n- MONGODB_CONNSTRING=mongodb+srv:// [username, password] @cluster0.ebt7p.mongodb.net/?retryWrites=true&w=majority\nvolumes:\n- mongodb:/data/socketio-MongoDb\n# - /data/socketio-MongoDb\nnetworks:\n- localnetnodesvr:\nimage: node:16\nbuild:\ncontext: .\ndockerfile: Dockerfile-Nodesvr\ndepends_on:\n- redpanda\n- mongo1\nvolumes:\n- /app/node_modules\n- ./nodesvr:/app\nnetworks:\n- localnet\nports:\n- “4000:4000”volumes:\nmongodb:",
"username": "Onesmus_Nyakotyo"
},
{
"code": "mongo1:\nimage: “mongo:latest”\ncontainer_name: mongodb\nenvironment:\n- MONGODB_CONNSTRING=mongodb+srv:// [username, password] @cluster0.ebt7p.mongodb.net/?retryWrites=true&w=majority\nvolumes:\n- mongodb:/data/socketio-MongoDb\n# - /data/socketio-MongoDb\nnetworks:\n- localnet\n",
"text": "if you are using Atlas you don’t need this at all, this is telling docker to install a MongoDB Cluster locally. (image:mongo:latest)",
"username": "Robert_Walters"
},
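In other words, the compose file only needs the services that actually talk to Atlas, and they can receive the connection string as an environment variable. A minimal sketch (service names reuse the ones from this thread; the URI placeholder is an assumption):

```
services:
  nodesvr:
    image: node:16
    build:
      context: .
      dockerfile: Dockerfile-Nodesvr
    environment:
      MONGODB_CONNSTRING: "mongodb+srv://<user>:<password>@cluster0.ebt7p.mongodb.net/?retryWrites=true&w=majority"
    depends_on:
      - redpanda
    networks:
      - localnet
```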
{
"code": "connect | [2022-07-25 17:08:07,557] INFO Opened connection [connectionId{localValue:30, serverValue:112163}] to cluster0-shard-00-02.ebt7p.mongodb.net:27017 (org.mongodb.driver.connection)```",
"text": "Thank you Robert_Walters for your advise, i removed the mongo1, now my connect logs (docker compose logs -f connect), is this a correct mongodb atlas connection —>",
"username": "Onesmus_Nyakotyo"
},
{
"code": "",
"text": "Looks like it works to me.",
"username": "Robert_Walters"
},
{
"code": " \"topic\": \"ChatData.socketIo-MongoDb.chat\",\n \"key\": \"{\\\"_id\\\": {\\\"_data\\\": \\\"8262DEF196000000122B022C0100296E5A1004FC2D06E10EF64CA2967AEFB29F6E510B46645F6964006462DEF1969F15802D8FA325BB0004\\\"}}\",\n \"value\": \"{\\\"_id\\\": {\\\"$oid\\\": \\\"62def1969f15802d8fa325bb\\\"}, \\\"name\\\": \\\"James Jamerson\\\", \\\"message\\\": \\\"Bonjour\\\"}\",\n \"timestamp\": 1658778010584,\n \"partition\": 2,\n \"offset\": 2\n}```\n\nbut sink is not sending the data to the collection on mongodb atlas ---> \n\n```{\"name\": \"mongo-ts-sink\",\n \"config\": {\n \"connector.class\":\"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"tasks.max\":\"1\",\n \"topics.regex\": \"chatData.*\",\n \"connection.uri\":\" \", \n \"database\":\"socketIo-MongoDb\",\n \"collection\":\"chatfeed\",\n \"auto.create\": \"true\",\n \"auto.evolve\": \"true\",\n \"insert.mode\": \"insert\",\n \"key.converter\": \"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter\": \"org.apache.kafka.connect.storage.StringConverter\",\n \"key.converter.schemas.enable\": \"false\",\n \"value.converter.schemas.enable\": \"false\",\n \"publish.full.document.only\": \"true\"\n }\n}```\n\nwhat am i missing in this sink connector, thats stopping it from sending data to the 'chatfeed' collection\n\nThank you again Robert_Walters",
"text": "Thank you Robert_Walters, i am able to consume data from mongodb on cmd → ",
"username": "Onesmus_Nyakotyo"
}
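One detail worth double-checking in the sink config above (an observation, not a confirmed fix): topics.regex is matched case-sensitively, and the record shown landed on a topic named ChatData.socketIo-MongoDb.chat while the sink subscribes to chatData.*. A sketch of the sink with a pattern that matches the actual topic name:

```
{
  "name": "mongo-ts-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "tasks.max": "1",
    "topics.regex": "ChatData.*",
    "connection.uri": "<your Atlas connection string>",
    "database": "socketIo-MongoDb",
    "collection": "chatfeed",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter"
  }
}
```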
] | MongoDb-Kafka-Connector error | 2022-07-18T07:40:17.128Z | MongoDb-Kafka-Connector error | 9,212 |
null | [
"transactions"
] | [
{
"code": " {\n _id: ObjectId(\"63e359b9199e6d59b4495a8b\"),\n email: '[email protected]',\n firstName: 'Name19',\n lastName: 'Last19',\n sendUpdates: true,\n roles: [ 'ROLE_USER' ],\n groupEntries: [\n {\n group: DBRef(\"Groups\", ObjectId(\"63e359b7199e6d59b4495a77\")),\n experation: ISODate(\"2023-01-19T08:13:44.079Z\"),\n inception: ISODate(\"2023-02-08T08:13:44.080Z\")\n },\n {\n group: DBRef(\"Groups\", ObjectId(\"63e359b7199e6d59b4495a74\")),\n experation: ISODate(\"2024-02-08T08:13:44.069Z\"),\n inception: ISODate(\"2023-02-08T08:13:44.069Z\")\n }\n ],\n applications: [],\n accountStatus: 'active',\n registrationDate: ISODate(\"2019-06-25T05:43:44.088Z\"),\n name: 'User-19',\n _class: 'MongoUser',\n agreement: [\n {\n agreementLevel: 'LEVEL',\n agreementType: 'USER',\n transactionDate: '$$NOW',\n validDate: 'null',\n expiredDate: 'null',\n revokedDate: 'null',\n ownAgreement: 'null'\n }\n ]\n }\n]\n {\n _id: ObjectId(\"63e359b9199e6d59b4495a8b\"),\n email: '[email protected]',\n firstName: 'Name19',\n lastName: 'Last19',\n sendUpdates: true,\n roles: [ 'ROLE_USER' ],\n applications: [],\n accountStatus: 'active',\n registrationDate: ISODate(\"2019-06-25T05:43:44.088Z\"),\n name: 'User-19',\n _class: 'MongoUser',\n agreement: [\n {\n agreementLevel: 'LEVEL1',\n agreementType: 'USER',\n transactionDate: '$$NOW',\n validDate: 'null',\n expiredDate: 'null',\n revokedDate: 'null',\n ownAgreement: 'null',\n groupEntries: [\n {\n group: DBRef(\"Groups\", ObjectId(\"63e359b7199e6d59b4495a77\")),\n experation: ISODate(\"2023-01-19T08:13:44.079Z\"),\n inception: ISODate(\"2023-02-08T08:13:44.080Z\")\n },\n {\n group: DBRef(\"Groups\", ObjectId(\"63e359b7199e6d59b4495a74\")),\n experation: ISODate(\"2024-02-08T08:13:44.069Z\"),\n inception: ISODate(\"2023-02-08T08:13:44.069Z\")\n }\n ]\n }\n ]\n }\n]\n",
"text": "I have this document:I want to move groupEntries inside the first element of agreement. With a result like this:My mongo server version is 4.2.0Thank you!",
"username": "Kristina_Sanchez"
},
{
"code": "/* The first stage extract the first element of agreement into a temp. field\n*/\nfirst_element = { \"$set\" : {\n \"_tmp.first_element\" : { \"$arrayElemAt\" : [ \"$agreement\" , 0 ] }\n} }\n\n/* The second stage extract the rest of the agreement array into a temp. field\n*/\nrest_of_array = { \"$set\" : {\n \"_tmp.rest_of_array\" : { \"$slice\" : [ \"$agreement\" , 1 , { \"$size\" : \"$agreement\" } ] }\n} }\n\n/* The next stage add the groupEntries field to the temp. _first_element \n*/\nset_groupEntries = { \"$set\" : {\n \"_tmp.first_element.groupEntries\" : \"$groupEntries\"\n} }\n\n/* The following stage reconstruct the array from the modified _first_element\n and the _rest_of_array */\nset_agreement = { \"$set\" : {\n \"agreement\" : { \"$concatArrays\" : [ [ \"$_tmp.first_element\" ] , \"$_tmp.rest_of_array\" ] }\n} }\n\n/* A final cleanup to remove the temp. fields */\ncleanup = { \"$unset\" : \"_tmp\" } \n\npipeline = [ first_element , rest_of_array , set_groupEntries , set_agreement , cleanup ]\n",
"text": "With the aggregation framework you can do it.If you want to store back the result in the collection use $merge stage.You could do the above in a single stage without _tmp fields but it is easier to develop, debug and understand this way because we can see the result of each little stages.Divide and Conquer",
"username": "steevej"
},
{
"code": "",
"text": "Thank you very much!I’m very new in mongodb, I am not able to store the result in the collection using $merge stage. Can you help me, please?Thank you again!",
"username": "Kristina_Sanchez"
},
{
"code": "merge = { \"$merge\" : {\n \"into\" : \"The-Name-Of-Your-Collection\" ,\n \"on\" : \"_id\" ,\n \"whenMatched\" : \"replace\"\n} }\n",
"text": "The $merge will look something like",
"username": "steevej"
},
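Putting the two replies together, a small usage sketch (assuming the collection is called Users, as in the follow-up post): append the $merge stage to the pipeline and run the whole thing with aggregate, which writes the reshaped documents back over the originals:

```
// reshaping stages followed by the $merge stage
db.Users.aggregate(pipeline.concat([ merge ]))
```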
{
"code": "db.Users.updateMany({},pipeline)",
"text": "I couldn’t store the result with $merge stage.\nFinally, I was able to save them by doing an updateMany:\ndb.Users.updateMany({},pipeline)Thank you very much!",
"username": "Kristina_Sanchez"
},
{
"code": "",
"text": "I am not too sure but the real solution is the pipeline rather than the fact you used the pipeline in updateOne.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Move object list to an another object inside the same document | 2023-02-08T11:01:19.734Z | Move object list to an another object inside the same document | 1,047 |
null | [
"replication"
] | [
{
"code": "2023-01-18T04:58:12.204-0500 F - [replexec-200] Invariant failure opTime.getTimestamp().getInc() > 0 Impossible optime received: { ts: Timestamp(1674035892, 0), t: 252 } src/mongo/db/repl/replication_coordinator_impl.cpp 1213\n2023-01-18T04:58:12.204-0500 F - [replexec-200] \\n\\n***aborting after invariant() failure\\n\\n\n2023-01-18T04:58:12.375-0500 F - [replexec-200] Got signal: 6 (Aborted).\n2023-01-18T06:20:02.305-0500 I REPL [replexec-0] ** WARNING: This replica set node is running without journaling enabled but the\n2023-01-18T06:20:02.305-0500 I REPL [replexec-0] ** writeConcernMajorityJournalDefault option to the replica set config\n2023-01-18T06:20:02.305-0500 I REPL [replexec-0] ** is set to true. The writeConcernMajorityJournalDefault\n2023-01-18T06:20:02.305-0500 I REPL [replexec-0] ** option to the replica set config must be set to false\n2023-01-18T06:20:02.305-0500 I REPL [replexec-0] ** or w:majority write concerns will never complete.\n2023-01-18T06:20:02.305-0500 I REPL [replexec-0] ** In addition, this node's memory consumption may increase until all\n2023-01-18T06:20:02.305-0500 I REPL [replexec-0] ** available free RAM is exhausted.\n2023-01-18T06:20:02.305-0500 I REPL [replexec-0]\n2023-01-18T06:20:02.305-0500 I REPL [replexec-0] New replica set config in use: { _id: \"<rsName>\", version: 489159, protocolVersion: 1, members: [ { _id: 1, host: \"node2:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: \"node1:27017\", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 2.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 3, host: \"node3:27017\", arbiterOnly: true, buildIndexes: true, hidden: false, priority: 0.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: 60000, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('59c5a4175d7da122417038c8') } }\n2023-01-18T06:20:02.305-0500 I REPL [replexec-0] This node is node3.mfps.com:27017 in the config\n2023-01-18T06:20:02.305-0500 I REPL [replexec-0] transition to ARBITER from STARTUP\n2023-01-18T06:20:02.320-0500 I REPL [replexec-0] Member node2:27017 is now in state SECONDARY\n2023-01-18T06:20:02.332-0500 I REPL [replexec-1] Member node1:27017 is now in state PRIMARY\n",
"text": "Hello,We have a 3 node PSA deployment that is currently running MongoDB 3.6. They are all RHEL 7.9 Virtual Machines.Historically, we have not had any issues with this setup but recently the mongod service on our Arbiter has been crashing and giving us this error:The mongod service is able to be immediately restarted after the crash, but will crash again as early as two days later.I am not experienced with MongoDB, and have been trying to do some research here and on the main MongoDB support. However, I have not seen any mention of this specific Invariant failure.Currently, we are having issues authenticating to the DB on the Arbiter to provide any rs.status() or rs.conf() outputs, but we are able to authenticate to the DB on our test environment and see that information there if it is needed.I have read posts about the concerns of Arbiters and certain benefits of HA that are impeded with the Arbiter implementation, but unfortunately I was not with the company when the decision was made to add the Arbiter. Furthermore, the Primary and Secondary nodes are managed by a Third Party company that maintains their bespoke application running on the nodes. Understandably this adds complexity and leads to why we are still on an older version of MongoDB.Additional information in the Log File after crash/service restart:I also saw in the documentation regarding MajorityReadConcern being enabled in a PSA architecture leading to performance issues, but I am not sure if that applies here. I can confirm that on Node1 and Node2, enableMajorityReadConcern is set to true in mongod.conf while Node3 is set to false. We have also not seen any evidence of the RAM being exhausted leading up to this crash as cautioned in the log above.Any further guidance on where to troubleshoot this issue would be greatly appreciated. Thank you to anyone who takes time to read this.",
"username": "ddonzella"
},
{
"code": "Invariant failure opTime.getTimestamp().getInc() > 0 Impossible optime received: { ts: Timestamp([1674035892](tel:1674035892), 0), t: 252 }\n\nWARNING: This replica set node is running without journaling enabled but the writeConcernMajorityJournalDefault option to the replica set config is set to true\n",
"text": "Hey @ddonzella,Welcome to the MongoDB Community Forums! The error messagesuggests that there is an issue with the replication process between the nodes in the replica set. The fact that the mongod service on the Arbiter node is crashing and giving this error indicates that there may be a problem with the way the Arbiter is configured or communicating with the other nodes in the replica set. It may be necessary to engage with the third-party company that is managing the Primary and Secondary nodes to ensure that they are properly configured and communicating with the Arbiter. They may also have more insight into the specific configuration of the replica set and any recent changes that may have caused the issue.Also, the warning messageindicates that the replica set is not configured correctly, which can cause issues with the ability to complete write concerns with the majority of the nodes in the replica set. It is recommended you enable journaling and set writeConcernMajorityJournalDefault to false for your replica set, as it will prevent the warning that you’re seeing and it’s a best practice in MongoDB to have journaling enabled for replica sets. You can read more about setting up and managing journaling here: JournalingPlease let us know if this helps or not. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
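As a rough sketch of the reconfiguration step (run on the primary via the shell; change this setting only if you intend to keep journaling disabled, otherwise prefer enabling journaling on the data-bearing members):

```
cfg = rs.conf()
// top-level replica set option referenced by the warning above
cfg.writeConcernMajorityJournalDefault = false
rs.reconfig(cfg)
```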
{
"code": "",
"text": "Hello @Satyam,Thank you so much for replying. I will reach out to our third-party about the journaling and see what they say about any communication errors.As of now, our mongod service on the Arbiter has been running for 7 days without crashing or seeing that error.I will do my best to give an update within reasonable time, but that is fully dependent on when I can schedule time with the other company.Thanks again,\nDerek",
"username": "ddonzella"
},
{
"code": "",
"text": "Hello @Satyam,I was able to meet with our partners and discuss the issue. In doing so, I dug deeper beyond timedatectl showing ntp enabled: yes and ntp synchronized: yes and found that the systems are polling our local ntp appliance at very different intervals, 128s 512s and 1024s.Although they are all reporting the same time when entering the timedatectl command, would the interval settings be enough to throw off the opTime timestamps and produce this error?Thanks for your help and patience,\nDerek",
"username": "ddonzella"
},
{
"code": "",
"text": "Hey @ddonzella,The intervals you mentioned should not be a cause for concern since MongoDB can tolerate date differences of up to a year. Is there anything else that your third party mentioned? Any recent changes to any nodes, any maintenance or upgrades? If not, I would suggest you upgrade your MongoDB version. Support for the 3.6 series officially ended on January 2020, and if this is a known issue, it won’t be backported to 3.6 and the solution is to upgrade to a supported version anyway. So it maybe it’s time you discuss upgrading your version if this failure continues and see if that helps.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "@Satyam,Unfortunately we could not find any major changes since this has been happening. I found documentation from the individual who set up the Arbiter, that is no longer with the company, that mentions the mongod service crashing during high I/O. However, we cannot find any instances of abnormal spikes of read writes to the VM or to the Database.I will continue talking with our counterparts from the third party and see how they feel about upgrading.Thanks again for your time, I greatly appreciate your help.If we do find the root cause, I will be sure to come back and post it here.Best,\nDerek",
"username": "ddonzella"
}
] | Receiving Invariant failure for "Impossible Optime Received" Mongo 3.6 Arbiter | 2023-01-19T20:00:02.927Z | Receiving Invariant failure for “Impossible Optime Received” Mongo 3.6 Arbiter | 819 |
null | [
"atlas-search"
] | [
{
"code": "",
"text": "Greetings,Today Atlas Search supports sorting only number and date fields using the ‘near’ operator.\nSorting string fields is not supported.The alternative of using storedSource and sort stage after search stage is working too slow when handling a lot of documents.My questions are:Thanks a lot,\nOfer.",
"username": "Ofer_Chacham"
},
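For readers who land here, a rough sketch of the storedSource-plus-$sort workaround mentioned above (index name, collection, and field names are illustrative assumptions; the string field being sorted on must be included in the index's storedSource definition):

```
db.products.aggregate([
  {
    $search: {
      index: "default",
      text: { query: "shirt", path: "title" },
      returnStoredSource: true
    }
  },
  { $sort: { title: 1 } },
  { $limit: 20 }
])
```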
{
"code": "",
"text": "Hello! I’m facing the same issue. An answer would help.\nThanks a lot!",
"username": "Andrei_Batinas"
}
] | Sorting with Atlas search | 2022-07-25T08:04:59.257Z | Sorting with Atlas search | 1,940 |
null | [
"replication",
"mongodb-shell"
] | [
{
"code": "security:\n authorization: enabled\n keyFile: <path>\\mongo.key\n\nreplication:\n replSetName: rs0\n\nsystemLog:\n destination: file\n verbosity: 0\n quiet: false\n logAppend: true\n logRotate: rename\n path: <path>\\mongod.log\nmongosh.exe -u logsadm -p <password> --host mongo1 --port 27017 --eval \"db.adminCommand({ logRotate: 1 })\"\nmongosh.exe -u logsadm -p <password> --host mongo2 --port 27017 --eval \"db.adminCommand({ logRotate: 1 })\"\n",
"text": "Good day all.We are currently running a single instance of MongoDB within a replicaset that has only one member.I’m in the process of migrating to a PSA architecture. The following will be my setup:mongo1: primary, priority: 10\nmongo2: secondary, priority: 1\nmongoarb: arbiterOnly* All Windows serversThe 3 servers are configured with:We already have a windows task to logRotate with a specific logsadm user account. The task is set to run every day, and we keep only 7 days worth of logs.I found that on primary and secondary, I am able to issue the logRotate command no problem, i.e.:However, because Arbiter doesn’t have a copy of the admin.users collection, it obviously cannot issue the command.I tried connecting to the arbiter directly and create a logsadm user in it, but it of course won’t let me, because it is not the primary server.My question: what is normally done for log rotation in regards of arbiters? I’m currently in a sandbox, testing it all (localhost, running PSA on different ports), so I can always redo steps…For example, before adding arbiter to the replicaset, should I have went in and create the logsadm user, and only after add it as a member to the RS? Could this somehow affect the integrity of the replicaset afterwards?Thanks for your returns.\nPat",
"username": "Patrick_Roy"
},
{
"code": "mongod",
"text": "Hi @Patrick_Roy and welcome to the MongoDB community forum!!As quoted in the MongoDB documentation, since the arbiters do not store data, they do not have access to the user and role mapping as those of the primary and secondary mongod process.\nHence, the log rotation on arbiter is not possible.Unfortunately since you have auth in the replica set, this puts the arbiter in an awkward place where it needs an authorised user to perform some commands like logRotate, however it cannot store any user since by definition arbiters do not store data.Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Thanks for your reply @Aasawari. My workaround is to simply have the log file rotate when the mongod service restarts on the arbiter. Could always force a restart nightly, or not. But that seems to be my workaround.Thanks for your time. Pat.",
"username": "Patrick_Roy"
},
{
"code": "docker exec -it mongo3 mongosh --eval \"db.adminCommand({logRotate:1})\"\n",
"text": "instead of trying to log in to the arbiter instance remotely, try logging into the host of the arbiter to use the local host exception.for example, using docker, this simply works in P-S-A:I haven’t tried this with a key file, so please share with us if it works ",
"username": "Yilmaz_Durmaz"
},
{
"code": "logRotate#security:\n# authorization: enabled\n# keyFile: <path-to-keyfile>\n",
"text": "Hello @Yilmaz_Durmaz. Sorry for the delay in responding.That doesn’t work… I’m getting a:MongoServerError: not authorized on admin to execute command { logRotate: 1, $db: “admin” }The only way I can get the Arbiter to rotate it’s log with the logRotate command is if I completely disable/comment out security from its config file:But I am not sure if it’s right to do so… Plus, since all my servers use the key file authentication, won’t this break my replicaset communications or something (sorry, not that knowledgeable regarding security …)",
"username": "Patrick_Roy"
},
{
"code": "SIGUSR1mongodPID=1mongodkill -SIGUSR1 1\n",
"text": "hey again I was trying to find a solution. For the moment, I don’t have one to do it through the mongo shell, but I have found a single command to use on all members.\nlogRotate — MongoDB Manual (since 4.2, at least)You may also rotate the logs by sending a SIGUSR1 signal to the mongod processThe following uses PID=1 as mongod is the first process when started by “docker compose”.You just need to log in to the host machine, that is all. Also, the same command for all members is easier to maintain. You will need to find the appropriate command to fetch PID if it differs in your case.PS: This command will be valid unless someday MongoDB developers decide to deprecate it for some reason. Yet, logical thinking says it will stay with us ",
"username": "Yilmaz_Durmaz"
},
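On a plain (non-container) Linux host the same idea would look roughly like this; the pgrep lookup is an assumption about how mongod was started:

```
# ask the running mongod to rotate its log file
kill -SIGUSR1 "$(pgrep -x mongod)"
```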
{
"code": "",
"text": "I would LOVE to be able to use the -SIGUSR1 signal to force mongod to do a rotate log, but… we’re unfortunately on Windows servers…I’ve just setup our arbiter’s server not to append in logs: they are rotated whenever we restart the mongod service normally (server restart,etc). Some day, maybe we’ll get on Linux Thanks for your replies and time ",
"username": "Patrick_Roy"
},
{
"code": "",
"text": "I missed that “windows” part. Then let me ask you a few questions about your setup:Physical is not easy to try scrap ideas, but if you use “easy to discard” cloud machines, or containers, then why don’t you consider putting the arbiter in a Linux machine/container. I haven’t tried such a setup myself, but that should be possible as members are communicating over the network already. Then managing arbiter might get easier. I would like to hear the result if you try this setup ",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "They are all physical / separate windows servers: primary, secondary and arbiter. Moving to Linux / Cloud is not an option. We can disregard this post for now as rotating the arbiter’s log is not that big of a deal… But much thanks for the time you’ve placed in this :))",
"username": "Patrick_Roy"
}
] | Log rotation on Arbiter | 2023-01-10T14:12:15.913Z | Log rotation on Arbiter | 1,238 |
null | [] | [
{
"code": "net start MongoDB\nWindows Server 2019 Standard 64 bitMongo 5.0 Community{\"t\":{\"$date\":\"2021-10-21T00:34:58.007+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.007+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): FileRenameFailed: \\ufffds\\ufffd\\ufffd\\ufffdQ\\ufffdڡC\\nActual exception type: class mongo::error_details::ExceptionForImpl<37,class mongo::AssertionException>\\n\"}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"7FF77FDF31B3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":321,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"},{\"a\":\"7FF77FDF62EB\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":226,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"BB\"},{\"a\":\"7FF77FEBFA67\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"},{\"a\":\"7FF77FEBFA49\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"},{\"a\":\"7FFDE8AADE58\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"},{\"a\":\"7FFDD6EE1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"},{\"a\":\"7FFDD6EE232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"},{\"a\":\"7FFDD6EE40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"},{\"a\":\"7FF7800BBB48\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"},{\"a\":\"7FFDEC4B4A2F\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"},{\"a\":\"7FFDEC414CEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"},{\"a\":\"7FFDEC418AE6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"},{\"a\":\"7FFDE8529329\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"},{\"a\":\"7FFDD5676210\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"},{\"a\":\"7FF77FE6C92D\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/e46f1a17206528956f3466ce347a3323/src/build/opt/mongo/base/error_codes.cpp\",\"line\":2270,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"42D\"},{\"a\":\"7FF77FE00604\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":260,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"184\"},{\"a\":\"7FF77E7B76DD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5BD\"},{\"a\":\"7FF77E7B6FBC\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> 
>,0>\",\"s+\":\"2C\"},{\"a\":\"7FFDE8A6268A\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"},{\"a\":\"7FFDE9777974\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}]}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FDF31B3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/stacktrace_windows.cpp\",\"line\":321,\"s\":\"mongo::`anonymous namespace'::printWindowsStackTraceImpl\",\"s+\":\"43\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FDF62EB\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":226,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"BB\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FEBFA67\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FEBFA49\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDE8AADE58\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDD6EE1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDD6EE232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDD6EE40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF7800BBB48\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDEC4B4A2F\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDEC414CEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", 
\"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDEC418AE6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDE8529329\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDD5676210\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FE6C92D\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/e46f1a17206528956f3466ce347a3323/src/build/opt/mongo/base/error_codes.cpp\",\"line\":2270,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"42D\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FE00604\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":260,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"184\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77E7B76DD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5BD\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77E7B6FBC\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> >,0>\",\"s+\":\"2C\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDE8A6268A\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDE9777974\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23134, \"ctx\":\"ftdc\",\"msg\":\"Unhandled exception\",\"attr\":{\"exceptionString\":\"0xE0000001\",\"addressString\":\"0x00007FFDE8529329\"}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.360+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23136, \"ctx\":\"ftdc\",\"msg\":\"*** stack trace for unhandled exception:\"}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"ftdc\",\"msg\":\"BACKTRACE\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"7FFDE8529329\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"},{\"a\":\"7FF77FDF5AC9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":99,\"s\":\"mongo::`anonymous 
namespace'::endProcessWithSignal\",\"s+\":\"19\"},{\"a\":\"7FF77FDF62FA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":227,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"CA\"},{\"a\":\"7FF77FEBFA67\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"},{\"a\":\"7FF77FEBFA49\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"},{\"a\":\"7FFDE8AADE58\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"},{\"a\":\"7FFDD6EE1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"},{\"a\":\"7FFDD6EE232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"},{\"a\":\"7FFDD6EE40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"},{\"a\":\"7FF7800BBB48\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"},{\"a\":\"7FFDEC4B4A2F\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"},{\"a\":\"7FFDEC414CEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"},{\"a\":\"7FFDEC418AE6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"},{\"a\":\"7FFDE8529329\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"},{\"a\":\"7FFDD5676210\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"},{\"a\":\"7FF77FE6C92D\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/e46f1a17206528956f3466ce347a3323/src/build/opt/mongo/base/error_codes.cpp\",\"line\":2270,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"42D\"},{\"a\":\"7FF77FE00604\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":260,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"184\"},{\"a\":\"7FF77E7B76DD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5BD\"},{\"a\":\"7FF77E7B6FBC\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> >,0>\",\"s+\":\"2C\"},{\"a\":\"7FFDE8A6268A\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"},{\"a\":\"7FFDE9777974\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}]}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDE8529329\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FDF5AC9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":99,\"s\":\"mongo::`anonymous namespace'::endProcessWithSignal\",\"s+\":\"19\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, 
\"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FDF62FA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":227,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"CA\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FEBFA67\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FEBFA49\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDE8AADE58\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDD6EE1ABF\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"96F\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDD6EE232B\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11DB\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDD6EE40E9\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF7800BBB48\",\"module\":\"mongod.exe\",\"file\":\"d:/A01/_work/6/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDEC4B4A2F\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDEC414CEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDEC418AE6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDE8529329\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, 
\"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDD5676210\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FE6C92D\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/e46f1a17206528956f3466ce347a3323/src/build/opt/mongo/base/error_codes.cpp\",\"line\":2270,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"42D\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77FE00604\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":260,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"184\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77E7B76DD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"5BD\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF77E7B6FBC\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":44,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> >,0>\",\"s+\":\"2C\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDE8A6268A\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFDE9777974\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.361+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23132, \"ctx\":\"ftdc\",\"msg\":\"Writing minidump diagnostic file\",\"attr\":{\"dumpName\":\"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\5.0\\\\bin\\\\mongod.2021-10-20T16-34-58.mdmp\"}}\n{\"t\":{\"$date\":\"2021-10-21T00:34:58.431+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23137, \"ctx\":\"ftdc\",\"msg\":\"*** immediate exit due to unhandled exception\"}\nDBException::toString(): FileRenameFailedC:\\\\Program Files\\\\MongoDB\\\\Server\\\\5.0\\\\bin\\\\mongod.2021-10-20T16-34-58.mdmpmongod.logMicrosoft (R) Windows Debugger Version 10.0.22000.194 AMD64\nCopyright (c) Microsoft Corporation. 
All rights reserved.\n\n\nLoading Dump File [C:\\Program Files\\MongoDB\\Server\\5.0\\bin\\mongod.2021-10-20T16-34-58.mdmp]\nUser Mini Dump File: Only registers, stack and portions of memory are available\n\nSymbol search path is: srv*\nExecutable search path is: \nWindows 10 Version 17763 MP (16 procs) Free x64\nProduct: Server, suite: TerminalServer\nEdition build lab: 17763.1.amd64fre.rs5_release.180914-1434\nMachine Name:\nDebug session time: Thu Oct 21 00:34:58.000 2021 (UTC + 8:00)\nSystem Uptime: not available\nProcess Uptime: 7 days 10:20:33.000\n............................................\nThis dump file has an exception of interest stored in it.\nThe stored exception information can be accessed via .ecxr.\n(3100.1cb4): Unknown exception - code e0000001 (first/second chance not available)\nFor analysis of this file, run !analyze -v\nntdll!NtGetContextThread+0x14:\n00007ffd`ec4b1764 c3 ret\n0:023> !analyze -v\n*******************************************************************************\n* *\n* Exception Analysis *\n* *\n*******************************************************************************\n\n*** WARNING: Unable to verify checksum for mongod.exe\n\nKEY_VALUES_STRING: 1\n\n Key : Analysis.CPU.mSec\n Value: 1453\n\n Key : Analysis.DebugAnalysisManager\n Value: Create\n\n Key : Analysis.Elapsed.mSec\n Value: 5554\n\n Key : Analysis.Init.CPU.mSec\n Value: 687\n\n Key : Analysis.Init.Elapsed.mSec\n Value: 10747\n\n Key : Analysis.Memory.CommitPeak.Mb\n Value: 167\n\n Key : Timeline.Process.Start.DeltaSec\n Value: 642033\n\n Key : WER.OS.Branch\n Value: rs5_release\n\n Key : WER.OS.Timestamp\n Value: 2018-09-14T14:34:00Z\n\n Key : WER.OS.Version\n Value: 10.0.17763.1\n\n Key : WER.Process.Version\n Value: 5.0.3.0\n\n\nNTGLOBALFLAG: 0\n\nCONTEXT: (.ecxr)\nrax=0000015da9dd7dc8 rbx=0000000000000001 rcx=0000015daa13d9c0\nrdx=0000015da9dd7db0 rsi=0000002009afe830 rdi=00000000ffffffff\nrip=00007ffde8529329 rsp=0000002009afdd30 rbp=0000002009afe040\n r8=0000015da9dd7db0 r9=0000000000000000 r10=0000000000000018\nr11=0000000000000000 r12=0000000000000000 r13=0000002009afe9f0\nr14=0000002009afe1f0 r15=0000002009afe220\niopl=0 nv up ei pl nz na po nc\ncs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000206\nKERNELBASE!RaiseException+0x69:\n00007ffd`e8529329 0f1f440000 nop dword ptr [rax+rax]\nResetting default scope\n\nEXCEPTION_RECORD: (.exr -1)\nExceptionAddress: 00007ffde8529329 (KERNELBASE!RaiseException+0x0000000000000069)\n ExceptionCode: e0000001\n ExceptionFlags: 00000001\nNumberParameters: 0\n\nPROCESS_NAME: mongod.exe\n\nERROR_CODE: (NTSTATUS) 0xe0000001 - <Unable to get error code text>\n\nEXCEPTION_CODE_STR: e0000001\n\nSTACK_TEXT: \n00000020`09afdd30 00007ff7`7fdf5ac9 : 00000000`00000000 00000000`00000000 00000000`00000000 00007ff7`7fddf88a : KERNELBASE!RaiseException+0x69\n00000020`09afde10 00007ff7`7fdf62fa : 00000000`00000001 00007ff7`80d40ec0 00000020`09afe830 00000000`ffffffff : mongod!mozPoisonValueInit+0x3cd3d9\n00000020`09afde60 00007ff7`7febfa67 : 00000000`00000006 00007ffd`e8a4fb00 0000d61f`00000005 00007ff7`80390479 : mongod!mozPoisonValueInit+0x3cdc0a\n00000020`09afdeb0 00007ff7`7febfa49 : 00000020`09afe220 00000020`09afe1f0 00000000`00000000 00000000`ffffffff : mongod!tcmalloc::Sampler::operator=+0xc0e37\n00000020`09afdee0 00007ffd`e8aade58 : 0000015d`a9d42080 00000020`09aff890 00000020`09afe040 0000015d`a9d42080 : mongod!tcmalloc::Sampler::operator=+0xc0e19\n00000020`09afdf10 00007ffd`d6ee1abf : 00000020`09aff890 00000020`09afe830 00000020`09afe1f0 
00000020`09afe1f0 : ucrtbase!terminate+0x18\n00000020`09afdf40 00007ffd`d6ee232b : 00000000`00000005 00007ffd`d569bf1a 00000000`00000001 00007ffd`d5672279 : VCRUNTIME140_1!FindHandler<__FrameHandler4>+0x46f\n00000020`09afe110 00007ffd`d6ee40e9 : 00007ff7`7e390000 00000020`09aff890 00000020`09afe9f0 00000020`09afe830 : VCRUNTIME140_1!__InternalCxxFrameHandler<__FrameHandler4>+0x267\n00000020`09afe1b0 00007ff7`800bbb48 : 00000020`09affb30 00007ff7`80b064ec 00000020`09aff890 00000020`09affb30 : VCRUNTIME140_1!__CxxFrameHandler4+0xa9\n00000020`09afe220 00007ffd`ec4b4a2f : 00000000`00000000 00000020`09afe7c0 00000000`00000001 00007ff7`7e390000 : mongod!tcmalloc::Log+0x1ee608\n00000020`09afe250 00007ffd`ec414cef : 00000020`09afe7c0 00000000`00000001 00007ff7`813092cc 00007ff7`7e390000 : ntdll!RtlpExecuteHandlerForException+0xf\n00000020`09afe280 00007ffd`ec418ae6 : 00000020`09afe9f0 00007ffd`ec4188d1 00000020`09afe9f0 00000000`00000000 : ntdll!RtlDispatchException+0x40f\n00000020`09afe9b0 00007ffd`e8529329 : 00000020`09aff860 00007ff7`80d1e490 00000020`09aff9d0 00000000`19930520 : ntdll!RtlRaiseException+0x316\n00000020`09aff870 00007ffd`d5676210 : 00000020`09aff9d0 00000020`09aff9d0 0000015d`a778bdb0 00000000`00000000 : KERNELBASE!RaiseException+0x69\n00000020`09aff950 00007ff7`7fe6c92d : 0000015d`9bcde2c8 00000020`09affb60 0000015d`9bcde2c8 00000020`09affb60 : VCRUNTIME140!_CxxThrowException+0x90\n00000020`09aff9b0 00007ff7`7fe00604 : 0000015d`a9a3b4d0 00000000`00000005 00000000`00000000 00000000`00000000 : mongod!tcmalloc::Sampler::operator=+0x6dcfd\n00000020`09affa30 00007ff7`7e7b76dd : 0000015d`a9b4b5d0 0000015d`a9a3b4d0 00000020`000000fe 0000015d`a9a3b4d0 : mongod!tcmalloc::Sampler::operator=+0x19d4\n00000020`09affb30 00007ff7`7e7b6fbc : 00000000`00000000 0000015d`a9a3b4d0 00000000`00000000 00000000`00000000 : mongod!tcmalloc::StackTraceTable::~StackTraceTable+0x162aad\n00000020`09affc60 00007ffd`e8a6268a : 00000000`00000000 0000015d`a9b50890 00000000`00000000 00000000`00000000 : mongod!tcmalloc::StackTraceTable::~StackTraceTable+0x16238c\n00000020`09affc90 00007ffd`e9777974 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : ucrtbase!thread_start<unsigned int (__cdecl*)(void * __ptr64)>+0x3a\n00000020`09affcc0 00007ffd`ec46a2f1 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : kernel32!BaseThreadInitThunk+0x14\n00000020`09affcf0 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : ntdll!RtlUserThreadStart+0x21\n\n\nSYMBOL_NAME: KERNELBASE!RaiseException+69\n\nMODULE_NAME: KERNELBASE\n\nIMAGE_NAME: KERNELBASE.dll\n\nSTACK_COMMAND: ~23s ; .ecxr ; kb\n\nFAILURE_BUCKET_ID: APPLICATION_FAULT_e0000001_KERNELBASE.dll!RaiseException\n\nOS_VERSION: 10.0.17763.1\n\nBUILDLAB_STR: rs5_release\n\nOSPLATFORM_TYPE: x64\n\nOSNAME: Windows 10\n\nIMAGE_VERSION: 10.0.17763.2028\n\nFAILURE_ID_HASH: {15f4714e-4497-bb0c-6557-4f288161ec4c}\n\nFollowup: MachineOwner\n---------\n\n0:023> .exr -1\nExceptionAddress: 00007ffde8529329 (KERNELBASE!RaiseException+0x0000000000000069)\n ExceptionCode: e0000001\n ExceptionFlags: 00000001\nNumberParameters: 0\n",
"text": "Hi there,I have used MongoDB on Linux several times, and this is the first time I used it on Windows. Since there are other similar applications running on Linux and all of them looks well, I suspect this problem would happen only on Windows. Here is the command I executed:The mongoDB service stopped after the following error displayed in log file.The environment is Windows Server 2019 Standard 64 bit and Mongo 5.0 CommunityI’ve surveyed about the DBException::toString(): FileRenameFailed, but most of the resources online are about file permissions, and this bug has already been fixed here. According to the error log, it seems more like an encoding problem.I also opened the C:\\\\Program Files\\\\MongoDB\\\\Server\\\\5.0\\\\bin\\\\mongod.2021-10-20T16-34-58.mdmp mentioned in mongod.log by WinDbg. And here’s what I get:Thanks in advance.",
"username": "Yoge_Chou"
},
{
"code": "",
"text": "Hi,\nWe’ve same issue here, did you find the root cause ?",
"username": "zhu_Mr"
},
{
"code": "",
"text": "Hi all,\nSame issue for us. We have Kasperky antivirus on pc but we already added an exception rule for mongoDB root folder and his subfolders. Did you find any solution about that ?Thanks by advance",
"username": "Ob_developer"
},
{
"code": "",
"text": "Hi @Yoge_ChouHow did you install MongoDB? Did you follow the procedure listed in Install MongoDB Community Edition on Windows? Could you describe your deployment, e.g. if it’s on a cloud server (AWS, Azure, etc.), what’s the specific version of Windows, and the description of the hardware you’re using (CPU, RAM, etc.)Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "\"Writing fatal message\"",
"text": "Hi,\nIs the investigation ongoing?\nWe’ve same issue here.\"Writing fatal message\" seems to be contained in only one place in the source code.\nDue to encoding issues, detailed reporting is not possible.",
"username": "dot.station"
},
{
"code": "\n \n // Now that the temp interim file is closed, rename the temp interim file to the real one.\n boost::system::error_code ec;\n boost::filesystem::rename(_interimTempFile, _interimFile, ec);\n if (ec) {\n return Status(ErrorCodes::FileRenameFailed, ec.message());\n }\n \n ",
"text": "Only one of the FileRenameFailed sources has encoding problems.\nCan you incorporate an error message in English?",
"username": "dot.station"
},
{
"code": "",
"text": "Can you clarify what you mean by an encoding problem here?",
"username": "Chris_Kelly"
},
{
"code": "ec.message()\n \n return Status(ErrorCodes::FileRenameFailed,\n str::stream() << \"Unexpected error while renaming temporary metadata file \"\n << metadataTempPath.string() << \" to \" << metadataPath.string()\n << \": \" << ex.what());\n \n ",
"text": "ec.message() outputs a localized message.\nIn ja-jp, it is cp932 (Shift-JIS).\nc++ - What is the encoding used by boost-asio error messages? - Stack Overflow\nUTF-8 is expected.The code below returns the information in format.",
"username": "dot.station"
},
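To make the suggestion concrete, a purely hypothetical sketch of what an English, locale-independent message could look like at that call site, mirroring the style of the other FileRenameFailed site quoted above (this is not the actual upstream fix):

```
// sketch only: build the message ourselves instead of using the locale-dependent
// ec.message(), so the log line stays valid UTF-8 on ja-JP Windows systems
boost::system::error_code ec;
boost::filesystem::rename(_interimTempFile, _interimFile, ec);
if (ec) {
    return Status(ErrorCodes::FileRenameFailed,
                  str::stream() << "Failed to rename interim file "
                                << _interimTempFile.generic_string() << " to "
                                << _interimFile.generic_string()
                                << ": error code " << ec.value());
}
```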
{
"code": "ec.message()\n \n << \"Failed to write to interim file buffer for full-time diagnostic data capture: \"\n << _interimTempFile.generic_string()};\n }\n \n interimStream.close();\n \n // Now that the temp interim file is closed, rename the temp interim file to the real one.\n boost::system::error_code ec;\n boost::filesystem::rename(_interimTempFile, _interimFile, ec);\n if (ec) {\n return Status(ErrorCodes::FileRenameFailed, ec.message());\n }\n \n _sizeInterim = buf.length();\n \n return Status::OK();\n }\n \n Status FTDCFileWriter::writeArchiveFileBuffer(ConstDataRange buf) {\n _archiveStream.write(buf.data(), buf.length());\n \n \n ",
"text": "I’m encountered same issue and differently not comming from permition or anti-virus software.\nyou see “\\ufffd” in message that is unicode replacement character for unknown or unrepresentable in Unicode.Get the complete details on Unicode character U+FFFD on FileFormat.Infodifferently, encording issue and ec.message() is the one.you can see “ctx:ftdc” from log line, so, it’s from ftdc component.",
"username": "Hidemi_Ta"
},
{
"code": "",
"text": "Hi, please check my post, I have customer waiting for this one fix.",
"username": "Hidemi_Ta"
},
{
"code": "",
"text": "Good day! We’re getting the same random crash every X days, without noticing some pattern.I haven’t yet found an explanation/fix for this. Has anyone? Thanks.",
"username": "Patrick_Roy"
},
{
"code": "",
"text": "Please any one tell the step by step procedure to solve this problem because i donot understand that problem and i am doing intership training so i need to solve this problem as fast as possible please anyone can help me i am requesting all of you tell me the step by step procedure to solve the problem ",
"username": "Jatin_Tanwar"
},
{
"code": "",
"text": "i am facing the same problem of mdmp file when i executing the mongod command",
"username": "Jatin_Tanwar"
},
{
"code": "",
"text": "Hi @Jatin_Tanwar,SERVER-71274 was opened regarding this issue which speculates one of two possibilities:I would review that ticket to see which of these may apply in your case, or if you are observing something different.",
"username": "Chris_Kelly"
},
{
"code": "",
"text": "sorry for inconvience kindly you can tell me problem solution in a easy way i donot understand the soution you give to me",
"username": "Jatin_Tanwar"
},
{
"code": "",
"text": "@Jatin_Tanwar for us, the random crash seemed to have been caused by the anti-virus / Windows Defender on the server! It was locking files or something while Mongo tried to manipulate them.We simply added an exclude rule to our Windows defender on the Mongo’s /data/ and /log/ folder.Seems to have solved our problem.p.s.: but no, I will not give you “step by step” solution to resolve this ",
"username": "Patrick_Roy"
},
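For anyone wanting to reproduce that workaround, a hedged example of excluding the MongoDB data and log folders from Windows Defender via PowerShell (the paths are assumptions; use your actual dbPath and log path, and note that third-party antivirus products have their own exclusion settings):

```
# run in an elevated PowerShell session
Add-MpPreference -ExclusionPath "C:\data\db"
Add-MpPreference -ExclusionPath "C:\Program Files\MongoDB\Server\5.0\log"
```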
{
"code": "",
"text": "@Patrick_Roy thanks for the solution buddy.\nBrother please understand my problem please tell me the step by step procedure to solve the problem i am beggind for you …\ni am doing intership training and use MongoDB software but software not running which cause the problem in my practicao studies ,I am begging for help @Patrick_Roy please buddy help me i don’t want to discontinue my training",
"username": "Jatin_Tanwar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo process crashed every few days on windows | 2021-10-26T03:05:54.821Z | Mongo process crashed every few days on windows | 6,209 |
null | [
"replication",
"sharding",
"containers",
"installation"
] | [
{
"code": "BadValue: Cannot start a shardsvr as a standalone server. Please use the option --replSet to start the node as a replica set.\n\nBadValue: Cannot start a configsvr as a standalone server. Please use the option --replSet to start the node as a replica set.\nenvironment:\n MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}\n MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}\nversion: '3.5'\nservices:\n # Router\n mongo-router-01:\n command: mongos --port 27017 --configdb ${MONGO_RS_CONFIG_NAME}/mongo-config-01:27017,mongo-config-02:27017,mongo-config-03:27017 --bind_ip_all --keyFile /etc/mongo-cluster.key\n container_name: ${MONGO_ROUTER_SERVER}-01-${ENVIRONMENT_NAME}\n environment:\n MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}\n MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}\n image: mongo:${MONGO_VERSION}\n networks:\n - mongo-network\n restart: always\n volumes:\n - ./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ROUTER_SERVER}-01/db:/data/db\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ROUTER_SERVER}-01/configdb:/data/configdb\n mongo-router-02:\n command: mongos --port 27017 --configdb ${MONGO_RS_CONFIG_NAME}/mongo-config-01:27017,mongo-config-02:27017,mongo-config-03:27017 --bind_ip_all --keyFile /etc/mongo-cluster.key\n container_name: ${MONGO_ROUTER_SERVER}-02-${ENVIRONMENT_NAME}\n environment:\n MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}\n MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}\n image: mongo:${MONGO_VERSION}\n networks:\n - mongo-network\n restart: always\n volumes:\n - ./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ROUTER_SERVER}-02/db:/data/db\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ROUTER_SERVER}-02/configdb:/data/configdb\n \n # Config Servers\n mongo-config-01:\n command: mongod --port 27017 --configsvr --replSet ${MONGO_RS_CONFIG_NAME} --keyFile /etc/mongo-cluster.key\n container_name: ${MONGO_CONFIG_SERVER}-01-${ENVIRONMENT_NAME}\n environment:\n MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}\n MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}\n image: mongo:${MONGO_VERSION}\n networks:\n - mongo-network\n restart: always\n volumes:\n - ./keys/preprod/mongo-cluster.key:/etc/mongo-cluster.key\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-01/db:/data/db\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-01/configdb:/data/configdb\n mongo-config-02:\n command: mongod --port 27017 --configsvr --replSet ${MONGO_RS_CONFIG_NAME} --keyFile /etc/mongo-cluster.key\n container_name: ${MONGO_CONFIG_SERVER}-02-${ENVIRONMENT_NAME}\n environment:\n MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}\n MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}\n image: mongo:${MONGO_VERSION}\n networks:\n - mongo-network\n restart: always\n volumes:\n - ./keys/preprod/mongo-cluster.key:/etc/mongo-cluster.key\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-02/db:/data/db\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-02/configdb:/data/configdb\n mongo-config-03:\n command: mongod --port 27017 --configsvr --replSet ${MONGO_RS_CONFIG_NAME} --keyFile /etc/mongo-cluster.key\n container_name: ${MONGO_CONFIG_SERVER}-03-${ENVIRONMENT_NAME}\n environment:\n MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}\n MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}\n image: mongo:${MONGO_VERSION}\n networks:\n - mongo-network\n restart: always\n volumes:\n - 
./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-03/db:/data/db\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-03/configdb:/data/configdb\n \n # Data Servers \n mongo-arbiter-01:\n command: mongod --port 27017 --shardsvr --replSet ${MONGO_RS_DATA_NAME} --keyFile /etc/mongo-cluster.key\n container_name: ${MONGO_ARBITER_SERVER}-01-${ENVIRONMENT_NAME}\n environment:\n MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}\n MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}\n image: mongo:${MONGO_VERSION}\n networks:\n - mongo-network\n restart: always\n volumes:\n - ./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ARBITER_SERVER}-01/db:/data/db\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ARBITER_SERVER}-01/configdb:/data/configdb\n mongo-data-01:\n command: mongod --port 27017 --shardsvr --replSet ${MONGO_RS_DATA_NAME} --keyFile /etc/mongo-cluster.key\n container_name: ${MONGO_DATA_SERVER}-01-${ENVIRONMENT_NAME}\n environment:\n MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}\n MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}\n image: mongo:${MONGO_VERSION}\n networks:\n - mongo-network\n restart: always\n volumes:\n - ./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_DATA_SERVER}-01/db:/data/db\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_DATA_SERVER}-01/configdb:/data/configdb\n mongo-data-02:\n command: mongod --port 27017 --shardsvr --replSet ${MONGO_RS_DATA_NAME} --keyFile /etc/mongo-cluster.key\n container_name: ${MONGO_DATA_SERVER}-02-${ENVIRONMENT_NAME}\n environment:\n MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}\n MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}\n image: mongo:${MONGO_VERSION} \n networks:\n - mongo-network\n restart: always\n volumes:\n - ./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_DATA_SERVER}-02/db:/data/db\n - ./volumes/${ENVIRONMENT_NAME}/${MONGO_DATA_SERVER}-02/configdb:/data/configdb\nnetworks:\n mongo-network:\n external:\n name: _preprod\n\n",
"text": "Hello,I am currently trying to install a mongo cluster on docker.\nWe already have such cluster with mongo 4.2 but for the new installation we wanted to use latest version of docker image.\nI used the same docker-compose file but the data and config servers don’t want to start.\nWhen looking at the docker logs, the error is:But I have the replSet in my commands.After some try and errors, the error occurs when I add the init db environment variables to initialize the admin user.I did the test also with mongo image version 5 and I have same behavior.\nI works fine with mongo image 4.4.18Here is my docker compose fileThanks in advance",
"username": "Ankou"
},
{
"code": "",
"text": "I have the same issue when i tried to deploy config server cluster with kubernetes on azure. I setup environment variables: MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD. And the args list is: args: [“–keyFile”, “/mongodb/replicaset.key”, “–configsvr”, “–replSet”, “rs0”, “–dbpath”, “/data/configdb”]. But deployment failed with such error:\nBadValue: Cannot start a configsvr as a standalone server. Please use the option --replSet to start the node as a replica set.\ntry ‘mongod --help’ for more information",
"username": "jim_fu"
},
{
"code": "",
"text": "I finally may have found something.**Step to reproduce**\n`docker run --rm -e MONGO_INITDB_ROOT_USERNAME=root -e MO…NGO_INITDB_ROOT_PASSWORD=root mongo:5.0.3 mongod --shardsvr --replSet a`\nor\n`docker run --rm -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=root mongo:5.0.3 mongod --configsvr --replSet a`\n\n**Actual result**\n```\nBadValue: Cannot start a shardsvr as a standalone server. Please use the option --replset to start the node as a replica set.\ntry 'mongod --help' for more information\n```\n\nStarts from 5.0.3 mongod requires replica set for `shardsvr` or `configsvr` cluster role (issue - https://jira.mongodb.org/browse/SERVER-27383)\n\nIt is caused by this code: https://github.com/docker-library/mongo/blob/master/5.0/docker-entrypoint.sh#L286Seems it is normal that it fails on shard server.\nFor config server, there is a PR: Fix configsvr with user/pass env vars on 5.0 & 6.0 by yosifkit · Pull Request #600 · docker-library/mongo · GitHub\nBut it has not been merged yet.\nSO I guess until the PR is merged and new version of the image is published, there is no way to use the environment variables at all.So the root user insertion should be done via script after the replica sets and routers are initialized",
"username": "Ankou"
},
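A minimal, untested mongosh sketch of the flow Ankou describes above (initiate the replica sets first, then create the admin user by hand, since the image's MONGO_INITDB_* variables cannot be combined with --shardsvr/--configsvr). The replica-set names, host names, and credentials below are placeholders modelled on the compose file in this thread, not values from the original posts:

```javascript
// 1) Initiate the config server replica set
//    (e.g. docker exec -it mongo-config-01 mongosh, so the localhost exception applies)
rs.initiate({
  _id: "rs-config",               // must match the --replSet / --configdb name in the compose file
  configsvr: true,
  members: [
    { _id: 0, host: "mongo-config-01:27017" },
    { _id: 1, host: "mongo-config-02:27017" },
    { _id: 2, host: "mongo-config-03:27017" }
  ]
});

// 2) Initiate the data (shard) replica set the same way on one data node, then,
//    inside a mongos router container (docker exec -it mongo-router-01 mongosh),
//    add the shard and create the root user while the localhost exception is still available:
sh.addShard("rs-data/mongo-data-01:27017,mongo-data-02:27017");
db.getSiblingDB("admin").createUser({
  user: "admin",
  pwd: "changeMe",                // placeholder password
  roles: [ { role: "root", db: "admin" } ]
});
```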
{
"code": "",
"text": "So currently is it impossible for me to deploy sharded cluster on azure container with mongodb5/6 community? Starting each member of the config server replica set always fails with such error: “BadValue: Cannot start a configsvr as a standalone server” even though I have provided “–replSet” option when start it.",
"username": "jim_fu"
},
{
"code": "",
"text": "I found this repository: GitHub - minhhungit/mongodb-cluster-docker-compose: Demo a simple sharded Mongo Cluster with a replication using docker compose\nI am not done testing it but it seems to work for cluster and user initialization.",
"username": "Ankou"
},
{
"code": "",
"text": "Hi Ankou,\nThank you for your reply!\nI am deploying sharded cluster with aks not docker. I tried to pull the latest mongodb image. But it doesn’t work yet.",
"username": "jim_fu"
}
] | MongoDB 6.0.4 cluster on docker installation issue | 2023-02-07T09:44:06.930Z | MongoDB 6.0.4 cluster on docker installation issue | 2,087 |
null | [] | [
{
"code": "maxChunkSizeInBytes: Long(\"67108864\")",
"text": "Hi, I thought that the MongoDB chunk size was increased to 128MB in the latest mongo. I’ve upgraded from 5 to 6 recent. When I check db.serverStatus() I see this entry:\nmaxChunkSizeInBytes: Long(\"67108864\")Do I need to do something to change the default chunk size?",
"username": "AmitG"
},
{
"code": "maxChunkSizeInBytes: Long(\"134217728\")",
"text": "I just noticed that I was running the command against a mongos router. When I run db.serverStatus() on a db server, it looks like it shows the correct number: maxChunkSizeInBytes: Long(\"134217728\")Is there any incompatibility that may be caused by different values in the mongos client and the mongod server?",
"username": "AmitG"
}
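The question above was left unanswered in the thread. For reference, the chunk size a cluster actually uses can be inspected and overridden through the config.settings collection; a small, hedged mongosh sketch (run against a mongos; 128 is only shown because it is the documented 6.0 default, in megabytes):

```javascript
// Check whether an explicit chunk size override exists
use config
db.settings.find({ _id: "chunksize" });

// Explicitly set the default chunk size to 128 MB
db.settings.updateOne(
  { _id: "chunksize" },
  { $set: { value: 128 } },
  { upsert: true }
);
```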
] | maxChunkSizeInBytes in MongoDB 6.0.4 | 2023-02-09T03:53:22.224Z | maxChunkSizeInBytes in MongoDB 6.0.4 | 398 |
null | [
"containers"
] | [
{
"code": "",
"text": "Hi All,I have a SpringBoot application running in the docker and I am using PostgreSQL as a database for this project and the database also running in the docker.Now, I want to use MongoDb along with PostgreSQL as database to my SpringBoot application.I created the docker-compose.yml file and Dockerfile and ran the application. After that MongoDb got installed and running in the docker successfully.I created a api for insertion of a document into the collection. When I hit the api I am getting the error. I am not able to connect to the MongoDb which is running in the docker.error:- com.mongodb.MongoSocketOpenException: Exception opening socketI think I have to do configuration in the MongoDb before doing any CRUD operations.Can anyone please share a detailed configuration of MongoDb with some examples.Thanks.mongodb:\nbuild:\ncontext: mongodb\nargs:\nDOCKER_ARTIFACTORY: ${DOCKER_ARTIFACTORY}\ncontainer_name: “mongodb”\nimage: mongo:6.0.4\nrestart: always\nenvironment:\n- MONGODB_USER=${SPRING_DATASOURCE_USERNAME:-username}\n- MONGODB_PASSWORD=${SPRING_DATASOURCE_PASSWORD:-password}\nports:\n- “27017:27017”\nvolumes:\n- “/mongodata:/data/mongodb”\nnetworks:\n- somenetworkARG DOCKER_ARTIFACTORY\nFROM ${DOCKER_ARTIFACTORY}mongo:6.0.4\nCOPY init/mongodbsetup.sh /docker-entrypoint-initdb.d/\nRUN chmod +x /docker-entrypoint-initdb.d/mongodbsetup.sh\nCMD [“mongod”]",
"username": "Aslam_Shaik"
},
{
"code": "",
"text": "Hi @Aslam_Shaik and welcome to the MongoDB community forum!!The following error is typically observed when the database is not running in the same container as the application.\nCan you help me with the following information which would help me to provide the solution further:Regards\nAasawari",
"username": "Aasawari"
}
] | SpringBoot and MongoDb connection in Docker | 2023-02-02T13:55:57.332Z | SpringBoot and MongoDb connection in Docker | 2,389 |
[
"queries",
"node-js",
"mongoose-odm",
"compass",
"react-js"
] | [
{
"code": "$ npm start\n\n> [email protected] start\n> node server.js\n\nServer is running on port: 5000\nTypeError: Cannot read properties of undefined (reading 'collection')\n at D:\\MERN\\mine\\server\\routes\\record.js:19:6\n at Layer.handle [as handle_request] (D:\\MERN\\mine\\server\\node_modules\\express\\lib\\router\\layer.js:95:5)\n at next (D:\\MERN\\mine\\server\\node_modules\\express\\lib\\router\\route.js:144:13)\n at Route.dispatch (D:\\MERN\\mine\\server\\node_modules\\express\\lib\\router\\route.js:114:3)\n at Layer.handle [as handle_request] (D:\\MERN\\mine\\server\\node_modules\\express\\lib\\router\\layer.js:95:5)\n at D:\\MERN\\mine\\server\\node_modules\\express\\lib\\router\\index.js:284:15\n at Function.process_params (D:\\MERN\\mine\\server\\node_modules\\express\\lib\\router\\index.js:346:12)\n at next (D:\\MERN\\mine\\server\\node_modules\\express\\lib\\router\\index.js:280:10)\n at Function.handle (D:\\MERN\\mine\\server\\node_modules\\express\\lib\\router\\index.js:175:3)\n at router (D:\\MERN\\mine\\server\\node_modules\\express\\lib\\router\\index.js:47:12)\n\n// This section will help you get a list of all the records.\nrecordRoutes.route(\"/record\").get(function (req, res) {\n let db_connect = dbo.getDb(\"employees\");\n db_connect\n .collection(\"records\")\n .find({})\n .toArray(function (err, result) {\n if (err) throw err;\n res.json(result);\n });\n});\nconst express = require(\"express\");\nconst app = express();\nconst cors = require(\"cors\");\nrequire(\"dotenv\").config({ path: \"./config.env\" });\nconst port = process.env.PORT || 5000;\napp.use(cors());\napp.use(express.json());\napp.use(require(\"./routes/record\"));\n// get driver connection\nconst dbo = require(\"./db/conn\");\n\napp.listen(port, () => {\n // perform a database connection when server starts\n dbo.connectToServer(function (err) {\n if (err) console.error(err);\n\n });\n console.log(`Server is running on port: ${port}`);\n});\nconst express = require(\"express\");\n\n// recordRoutes is an instance of the express router.\n// We use it to define our routes.\n// The router will be added as a middleware and will take control of requests starting with path /record.\nconst recordRoutes = express.Router();\n\n// This will help us connect to the database\nconst dbo = require(\"../db/conn\");\n\n// This help convert the id from string to ObjectId for the _id.\nconst ObjectId = require(\"mongodb\").ObjectId;\n\n\n// This section will help you get a list of all the records.\nrecordRoutes.route(\"/record\").get(function (req, res) {\n let db_connect = dbo.getDb(\"employees\");\n db_connect\n .collection(\"records\")\n .find({})\n .toArray(function (err, result) {\n if (err) throw err;\n res.json(result);\n });\n});\nnpm install mongoose\nconst { MongoClient } = require(\"mongodb\");\nconst Db = process.env.ATLAS_URI;\nconst client = new MongoClient(Db, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nlet _db;\n\nmodule.exports = {\n connectToServer: async function (callback) {\n console.log(\"test\");\n\n try {\n await client.connect();\n } catch (e) {\n console.error(e);\n }\n\n _db = client.db(\"employees\");\n\n try {\n var count = await _db.collection(\"records\").countDocuments();\n console.log(count);\n } catch (e) {\n console.error(e);\n }\n\n if(_db !== undefined){\n return true;\n }\n },\n getDb: function () {\n return _db;\n },\n};\n",
"text": "I am following the MongoDB MERN tutorial and when my front-end tries to connect to the DB to pull documents it errors out. I have pulled the official version of their GitHub repo and added my connection information and it works properly with theirs. The only differences I can find is theirs uses mongoose, which the tutorial doesn’t reference, and the versions of the packages are older.Tutorial: How To Use MERN Stack: A Complete Guide | MongoDB\nnpm version: 9.4.1ErrorSee attached below image and code for line 19 of record.js.\nImage showing WinMerge comparison of my repo vs GitHub repo.\n\n1017×389 66.4 KB\n\nI know that my connection credentials are fine as I have used them with MongoDB Compass and their GitHub repo.\nI have added numerous console.log commands in places to try and determine what is being set when the server runs.\nAdding console.logs within the connectToServer anonymous function never triggers even though it should occur within server.js on line 14.server.jsrecord.js - partialNote: I did try installing mongoose npm install mongoose on the server and it didn’t change the results.If I modify the conn.js file to use async and await I can get details from the db such as a count of records from employees collection. However, none of the routes work properly for the React frontend, though they don’t throw errors either.Revamped conn.js",
"username": "Frank_Troglauer"
},
{
"code": "const express = require(\"express\");\nconst app = express();\nconst cors = require(\"cors\");\nrequire(\"dotenv\").config({ path: \"./config.env\" });\nconst port = process.env.PORT || 5000;\napp.use(cors());\napp.use(express.json());\napp.use(require(\"./routes/record\"));\n// get driver connection\nconst dbo = require(\"./db/conn\");\n\napp.listen(port, async () => {\n // perform a database connection when server starts\n await dbo.connectToServer(function (err) {\n if (err) console.error(err);\n });\n console.log(`Server is running on port: ${port}`);\n});\n\nconst { MongoClient } = require(\"mongodb\");\nconst Db = process.env.ATLAS_URI;\nconst client = new MongoClient(Db, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nlet _db;\n\nmodule.exports = {\n connectToServer: async function (callback) {\n\n try {\n await client.connect();\n } catch (e) {\n console.error(e);\n }\n\n _db = client.db(\"employees\");\n\n return (_db === undefined ? false : true);\n },\n getDb: function () {\n return _db;\n },\n};\n\n// This section will help you get a list of all the records.\nrecordRoutes.route(\"/record\").get(async function (req, response) {\n let db_connect = dbo.getDb();\n\n db_connect\n .collection(\"records\")\n .find({})\n .toArray()\n .then((data) => {\n console.log(data);\n response.json(data);\n });\n\n});\n// This section will help you get a list of all the records.\nrecordRoutes.route(\"/record\").get(async function (req, response) {\n let db_connect = dbo.getDb();\n\n try {\n var records = await db_connect\n .collection(\"records\")\n .find({})\n .toArray();\n response.json(records);\n } catch (e) {\n console.log(\"An error occurred pulling the records. \" + e);\n }\n\n});\n",
"text": "Thanks to Jake Haller-Roby on stackoverflow I was led down the right path. This had to do with async and await.However, the GitHub repo from the tutorial doesn’t rely upon async and await and works fine. I am going to assume that some newer versions of mongodb or express with nodejs changes how things work.Here is the code I ended up using.server.jsconn.jsWithin the recordRoutes route for getting the list of records I ran into an issue with toArray where it was never returning its promise. After googling for a bit I found there are multiple ways of handling this. Using .then after toArray works as well as storing the results from the toArray in a variable and using an await on its call. Below are the two examples..thentry and await",
"username": "Frank_Troglauer"
},
{
"code": "",
"text": "Thank you so much! I’ve been looking for answers to this problem but was struggling to come up with anything. This is the first time I’ve seen the same problem documented anywhere.",
"username": "Andres_Fung"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Solved: MERN Tutorial Issue: Cannot read properties of undefined (reading 'collection') | 2023-02-08T17:57:13.657Z | Solved: MERN Tutorial Issue: Cannot read properties of undefined (reading ‘collection’) | 6,103 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hello everyone, a newbie in mongodb and document schema modeling. I have a question about techniques to keep extended references up to date. Let’s say I have two collections: an aliment collection and a meal collection. Each aliment is created and update by the user. Each aliment has a name and a list of nutritional values. Each meal is created and updated by the user. Each meal has a name, a list of extended reference to Aliments and a computed nutritional values based on the nutritional values of each aliment of the meal.Assume now that the user changes the name from PASTA to PASTA Barilla. What are the best techinques to propagate that change to all meals that reference the old PASTA aliment?For the moment I can think of the following:When the user updates the aliment, I check each Meal and update the extended reference.In this case, I will have one transaction updating the Aliment document and a second bulk operation to update all the meals.But what happen if for some reason one meal update fails because the user updated the meal in the meantime?Thank you",
"username": "Green"
},
{
"code": "",
"text": "Hi @Green,Thats an interesting design considerations.For some design the extended reference could be very minimal due to update consideration.The idea is to keep user specific data only on the users documents and updates that might be cross user/meals changes to another reference collection. Keeping data in both places makes sense only if it benefits the queries and not changed often.Now I have a few questions to better help you.In general I think that to guarantee consistency across collection you will need to:Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
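A minimal illustrative mongosh sketch of the cross-collection consistency mentioned above, using a multi-document transaction. The database name, the "aliments"/"meals" collections, and the embedded aliments array with a duplicated name follow the example discussed in this thread; the ids and the new name are placeholders, not the poster's actual code:

```javascript
const alimentId = ObjectId();            // placeholder id for the sketch
const session = db.getMongo().startSession();
try {
  session.withTransaction(() => {
    const aliments = session.getDatabase("food").getCollection("aliments");
    const meals = session.getDatabase("food").getCollection("meals");

    // Update the aliment itself
    aliments.updateOne({ _id: alimentId }, { $set: { name: "PASTA Barilla" } });

    // Propagate the duplicated name into every meal that references it
    meals.updateMany(
      { "aliments._id": alimentId },
      { $set: { "aliments.$[a].name": "PASTA Barilla" } },
      { arrayFilters: [ { "a._id": alimentId } ] }
    );
  });
} finally {
  session.endSession();
}
```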
{
"code": "",
"text": "Hi @Pavel_Duchovny, thank you again for your answer! To be honest, I suppose that Aliment’s name and nutritional values are not going to change often. Probably, it would be better to keep the created aliment read only and let the user create a new aliment out of it. Thinking about it, since the Aliment is not only the name, but also its nutritional values, if you update it, you are probably creating a new aliment. Also, wouldn’t it be weird that if you have 10 meals with the same aliment and when you change it all the meals gets updated without your knowledge?For your first general consisentcy concern, does that mean that I need to implement compensating transactions in case subsequent ones fail?Thank you",
"username": "Green"
},
{
"code": "db.meals.findAndUpdate({userid : ..., Aliments: previousAliment},{$set: {status : \"inDraft}})\n...\ndb.meals.update({mealid : ...},{$set : { Aliments : [ {...}] , status : \"approved\"}\n",
"text": "Hi @Green,The transactional behaviour make sense if you want users to see or update only commited data and if one changes something outside of a transaction it won’t succed until transaction is committed or rollback.If your flows only update one independent document I would use retrayable writes as a retry mechanism rather than transactions.Regarding the data model, it sounds like you can potentially create draft meals for approval to all the users ones there is a change.Potentially, you can run the update by creating new versions of the user meals:So application only count on approved meals and mark those in change as inDraft. This way even if you fail the version will still be in draft and not yet approved.Let me know if that makes sense.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi Pavel, I did not think about drafts, thank you!\nFor transactions, some of the flows will touch in fact multiple documents, mostly to notify that the data of a related document changed, but in general these changes are not going to trigger business logic on other document rather than simple updates of the view data.\nSince Im new to mongo and I dont really like retriable stuff, I would like to try follow the DDD style and use a single transaction per document. If an action that touch a document trigger something on another document, I will trigger a transaction on that document and compensate on the first transaction if necessary (most of the failures will be due to versioning and Im expecting much more reads than writes) (looks like a saga). Everything is for now synchronous, if it works well I will move to async communication between documents.Thank youGreen",
"username": "Green"
},
{
"code": "",
"text": "@Pavel_Duchovny I had to change my model. I realized, that the users may have personal aliments, and putting all inside the same collection may lead to performance problem when querying that data. I was thinking to create a AlimentCollection document and have inside a nested array of aliments. Correct me if Im wrong, this way I can:Now, I have some doubts:Thank you very much!Green",
"username": "Green"
},
{
"code": "",
"text": "Hi @Green,If each aliment collection will endup in a document per user you can store those in an array.Now controling the size of the array is possible and recommended as we don’t want large unbound arrays. Those are bad antipatterns.One option is to keep an array size field to maintain the amount of elements. Then we can inc it by +/-1 if we pull or push. A nice trick is to use $each with sort and slice :If array grows beyond what we need we can extend with a new document or inform the user on possible takeons.Text searches are possible on nested fields as you can define the text index on nested fields and even with wildcard expression. Having said that the best text search is when used with Atlas Search which is one of the many benefits using Atlas managed services.Updating can be made to an array directly including pulling and pushing or adding to set operations.We have array filters for more complex array updating:.Not certain about the version question, do you mean you want to used findAndUpdate to only change documents of a specific version? Will you use a version field?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
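A small illustrative sketch of the "$each with sort and slice" trick and the element-count field mentioned above. The collection and field names are hypothetical and the 200-element cap is only an example:

```javascript
const userId = ObjectId();          // placeholder for the user's _id
// Keep at most 200 embedded aliments per user (newest first) and track the count.
db.users.updateOne(
  { _id: userId },
  {
    $push: {
      aliments: {
        $each: [ { name: "PASTA", kcal: 350, createdAt: new Date() } ],
        $sort: { createdAt: -1 },   // newest first
        $slice: 200                 // drop anything beyond the cap
      }
    },
    $inc: { alimentsCount: 1 }      // simple counter; adjust separately if $slice trims elements
  }
);
```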
{
"code": "1. Match on userId\n2. Project the needed values (if filter by tags, then tags id and name of aliment) (if filter by name, it will be a text search)\n3. Match the values with the filter\n4. Use the skip and limit for pagination\n",
"text": "Hi @Pavel_Duchovny thank you again. I realized I posted on the wrong question… Btw, after analyzing the one-to-many relation using embedded array, I realized that Im very limited in sorting. As you suggested, I can extend the document of aliments with a new document for that user (is the the outlier?). But what happen if the user wants to sort by lets say, amount of kcal?The user will access his Aliment “portfolio” in the aliment section and it is presented with the list of aliment in alphabetical order (paged). The records shown are just a projection of the name and the id of the aliment and when the user clicks on it, the details are shown (nutritional values) (maybe I dont even need to project the name). Now, the user may want to sort according to lets say the amount of proteins or filter based on tags the user can add to the aliment.\nAt this point, isnt better to go back to a Aliment per document? (I would just move the count of the aliments n a separate document and use that to regulate the number) If that is the case, I was thinking to use the aggregation framework and do:This aggregation should first filter using the userId and then text search or tags.contains will be used right? (Im trying to avoid scanning documents of other users)\nI know that skip is not efficient… and I know it can be done with last key, but this should work only on indexed fields that are ordered?About the version, im referring to concurrency version, but, again, since Aliments are private to a single user, I should not need this.To be honest, embedding and extending the document makes sense, given the privacy of the aliment. But Im concerned with read performance. Also, Aliment will be used by other entities in the application, but only copied.At first I was also concerned with write performance, because with embedded I need to extract all everytime I made a change, but extending the array of aliment to multiple document looks very nice.Thank you!Green",
"username": "Green"
},
{
"code": "",
"text": "Hi @Green,Again, If we are talking about 1-to-many of 20-50 elements they can be still embedded , but if its 1-to-1000 than seperating each one/group to its own documents/collection makes more sense. Also you need to consider the concurrency that docs will have. Lock on a document level is the lowest so if your arrays are concurrently updated frequently thats not good.If the amount of aliments are in 10s per user you can use aggraregation to page and sort them by using an unwind stage and then sort stage + skip/limit or any other transformation.Although this is done in memory but it should be ok for low number of array elements.Now if it starts exceeding use outlier pattern to have the documents as seperate.The beauty of MongoDB is you van store meta data in documents to describe them and maybe have different types for very large lists Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny, to be honest, we are talking about max 100 200 elements for most users, but for some users it can reach 1000 elements (which is the hard limit). Probably, using the outlier patter will cost me more code, so Im not thinking to go back to one aliment per document. The question is, would it be a difference in terms of read peformance in the cases:For the second option, I was wondering if the aggregation I described above would be performant. As far as I understood it, the first match on the userId would be efficient since I have an index on it. At this point I project the needed data, so that I dont touch other details that are not needed now. Next, I do text search on the name (@TextIndexed from Spring data mongo) only on the documents that match the user id from the first match stage. Finally, skip and paging will occur only on the subset of data that matched the first two match stages.(Sorry for all these questions, but since Im coming from sql, Im afraid to be biased towards it, and not fully embrace the nosql world)Thank you!",
"username": "Green"
},
{
"code": "",
"text": "Hi @Green,Ok in that case there might be a dilemma …Having a compound index on userid and name multikey text index sounds suboptimal also unwinding large arrays is not my favourite.The numbers of 200-300 are upper limits of embedding leaning for me towards not embed.I would say it sounds like separation sounds like the better way to go.Regarding skip that sounds right.I hope that covers your questions. If you want to update us with your test you are welcome.Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "PersonUserNamePersonPerson",
"text": "Hello,\nI know this is an older thread, but google brought me here, and this was the closest thing I could come to regarding my current dilemma.I’m creating an application where a Person object stores the bulk of a users information. I use an Extended Reference pattern today in cases where that Person is associated with a bunch of other things using just their Id and Name fields, and it’s worked well so far. However, what happens if that user decides to change say their UserName and I do need to propagate that change through all of the objects that reference the Person object via an Extended Reference? Likewise, what happens when I delete that Person? How do I ensure that person is removed from all of those referenced objects?This may be trivial but I’m new to MongoDb and I’m not sure the best way to handle this with MongoDB. I’d like to use the Extended Reference pattern, but in this case, I need maintain fields that change like this. I have this with several other object types across the app as well. Am I just using the wrong pattern in this case? Should I be using a traditional join to ensure this stays up to date, or is there a better way?",
"username": "Jeremy_Regnerus"
}
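The propagation question above was not answered in the thread. The usual companion to the extended reference pattern is a follow-up updateMany against each collection that embeds the duplicated fields whenever one of them changes, plus a corresponding cleanup on delete. A hedged sketch with hypothetical collection and field names, not the poster's actual schema:

```javascript
const personId = ObjectId();        // placeholders for the sketch
const newUserName = "newName";

// After updating the Person document itself, refresh the duplicated fields:
db.orders.updateMany(
  { "person._id": personId },
  { $set: { "person.userName": newUserName } }
);

// On delete, either strip the embedded reference from referencing documents...
db.orders.updateMany(
  { "person._id": personId },
  { $unset: { person: "" } }
);
// ...or delete the referencing documents, depending on the domain rules.
```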
] | Keeping Extended Reference Pattern up to date | 2021-02-17T18:38:43.184Z | Keeping Extended Reference Pattern up to date | 4,712 |
null | [] | [
{
"code": "",
"text": "I can’t find it anywhere in settings.\nThe UI for mongodb is the most confusing of anything I’ve ever seen.",
"username": "Alan_c"
},
{
"code": "",
"text": "Hi @Alan_c, welcome to the community.\nYou can manage Atlas Users by clicking on the Database Access link in the sidebar under the security section.\n|625.1067961165048x2521022×412 45.4 KB\nAfter clicking on the link, you would see an interface like this:\n|624x187, 100%.00267276495391600×479 88.6 KB\n\nAlso, we have already discussed this question in our FAQ post, please take a look at question no. 6 for more information.In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nCurriculum Services Engineer",
"username": "SourabhBagrecha"
},
{
"code": "",
"text": "The password tho? Where exactly could it be retrieved?Regards,\nMarcelino",
"username": "Marcelino_Ndewaiyo"
},
{
"code": "",
"text": "You cannot retrieve the password\nIf you forgot the dbuser password edit the user and change the password as shown in the above snapshots\nor create a new user with add new database user link",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Where is the password for the cluster? | 2021-06-25T13:41:27.984Z | Where is the password for the cluster? | 54,351 |
null | [
"replication",
"database-tools",
"backup"
] | [
{
"code": "",
"text": "I have a version 2.4.4 MongoDB replication set that isn’t behaving well. I’d like to upgrade to the latest version supported by my app (3.0).I’ve done a backup using mongodump of the 2.4 database. Can I restore that to 3.0 using mongorestore? I was thinking that I would restore the data to a new 3.0 instance, and then create the replica set. Would that work?I understand that I can do a binary file replacement to upgrade from 2.4 to 2.6, and then to 3.0. The reason I’d like to reload the data because many of the databases are obscenely large. For example, there’s a database that has 0 records but consumes 92GB of disk storage.If mongodump/mongorestore aren’t cross version compatible, could I go through the upgrade process to 2.6, followed by 3.0, and then do a backup/restore?Thanks for your help.",
"username": "George_Sexton"
},
{
"code": "",
"text": "Hey @George_SextonI’ve done a backup using mongodump of the 2.4 database. Can I restore that to 3.0 using mongorestore?Technically it’s not supported, but generally, newer versions of MongoDB add features and thus should be a superset of an older version (in most cases anyway), but of course this would probably different on a case-by-case basis. I would suggest you try this with your backup to a test environment and see if there’s any error and the restore is usable.I would restore the data to a new 3.0 instance, and then create the replica set. Would that work?It should be possible. For recent MongoDB versions, there’s a page for this: Restore a Replica Set from MongoDB Backups. I don’t think the procedure would be very different on old MongoDB versions, but I would encourage you to test them to be sure.If mongodump/mongorestore aren’t cross version compatible, could I go through the upgrade process to 2.6, followed by 3.0, and then do a backup/restore?Yes I think so. Either that, or a rolling initial sync should do it.I’d still note that all the versions we’re talking about are unsupported, and thus if any issue arises due to a bug in the server, it won’t get fixed anymore.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodump/mongorestore cross version compatibility | 2023-02-07T22:34:38.928Z | Mongodump/mongorestore cross version compatibility | 1,191 |
null | [] | [
{
"code": "**hjk@ops**:**/ops/mongodb-backup/db**$ ls -lh\n\ntotal 80M\n\ndrwxr-xr-x 2 hjk hjk 10 Nov 29 15:30 **_tmp**\n\ndrwxr-xr-x 2 hjk hjk 10 Nov 29 15:30 **journal**\n\n-rw-r--r-- 1 hjk root 64M Nov 29 15:31 local.0\n\n-rw-r--r-- 1 hjk root 16M Nov 29 15:31 local.ns\n\n-rwxr-xr-x 1 hjk hjk 0 Nov 29 15:31 **mongod.lock**\n",
"text": "Hi,Does anyone know if restoring from the filesystem snapshot that looks like below is possible?\nI copied the contents of the dbpath (/data/db) to a different directory. As the size of the database was not big, I made a copy of the directory. The mongodb running it was an older version, and I’m not sure which one.Now I only have these files and hoping to recover what is inside. The journal directory looks empty.When I connect it and check the databases, I see only ‘admin’, ‘config’, and ‘local’.\nThe files locall.0 and local.ns has size 64M and 16M respectively and it looks like the data is in there. Do you think I can recover the data?Thank you for your help in advance!",
"username": "trinity_hj"
},
{
"code": "❯ ls -lh\ntotal 32856\ndrwxr-xr-x 2 xxx yyy 64B 9 Feb 12:35 _tmp\ndrwxr-xr-x 3 xxx yyy 96B 9 Feb 12:35 journal\n-rw------- 1 xxx yyy 64M 9 Feb 12:35 local.0\n-rw------- 1 xxx yyy 16M 9 Feb 12:35 local.ns\n-rwxr-xr-x 1 xxx yyy 6B 9 Feb 12:35 mongod.lock\n",
"text": "Hi @trinity_hj welcome to the community!That directory content curiously look similar to a fresh MongoDB 2.6 installation:As you can see the sizes are exactly the same as well. This is a fresh totally empty MongoDB 2.6 deployment using the discontinued MMAPv1 storage engine that’s not supported anymore and has been removed from MongoDB. If there are data in the database, you should see a lot more files than this. Thus, I’m not sure if there’s anything to recover.Best regards\nKevin",
"username": "kevinadi"
}
] | Restoring Mongodb from a filesystem snapshot | 2023-02-08T09:47:14.506Z | Restoring Mongodb from a filesystem snapshot | 454 |
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "mongorestore --authenticationDatabase admin --port 27017 -u admin --oplogReplay /tmp/new/logs_0/20230208015236_20230208015524/local/oplog.rs.bson --oplogLimit 1675821494:0 -p\n2023-02-08T07:29:32.814+0000\tchecking for collection data in /tmp/new/logs_0/20230208015236_20230208015524/local/oplog.rs.bson\n2023-02-08T07:29:32.814+0000\treplaying oplog\n2023-02-08T07:29:32.815+0000\tFailed: restore error: error applying oplog: applyOps: (Unauthorized) not authorized on admin to execute command { applyOps: [ { ts: Timestamp(1675821194, 1), t: 1, h: null, v: 2, op: “c”, ns: “config.$cmd”, o: { create: “system.sessions”, idIndex: { v: 2, key: { _id: 1 }, name: “id” } } } ], lsid: { id: UUID(“1da48df1-7268-4bf2-a1bb-6455005d68f3”) }, $clusterTime: { clusterTime: Timestamp(1675841367, 2), signature: { hash: BinData(0, 23BDCA774453874BA680500BE149D0764CC5811C), keyId: 7197629778025775108 } }, $db: “admin”, $readPreference: { mode: “primaryPreferred” } }",
"username": "Balram_Parmar"
},
{
"code": "",
"text": "I am running using ‘root’ role user.855WB43P44:PRIMARY> show users\n{\n“_id” : “admin.admin”,\n“userId” : UUID(“cc00c914-0020-47f4-8ca4-d1e0fe11f227”),\n“user” : “admin”,\n“db” : “admin”,\n“roles” : [\n{\n“role” : “root”,\n“db” : “admin”\n}\n],",
"username": "Balram_Parmar"
},
{
"code": "",
"text": "Hi @Balram_ParmarI’m guessing this is the same question you posted in Mongorestore using OplogReplay - #7 by Balram_Parmar so I’ll close this one and continue on that topic instead.Thanks\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "",
"username": "kevinadi"
}
] | Mongodbrestore failing with latest toosl version 100.6.1 | 2023-02-08T07:47:15.439Z | Mongodbrestore failing with latest toosl version 100.6.1 | 1,167 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "const mongoose = require('mongoose');\nconst Schema = mongoose.Schema;\n\nconst whatIsTruthSchema = new Schema({\n\n \"1-1-5-A\": String,\n \"1-1-6-King\": String,\n \"1-2-1-this\": String,\n \"1-2-1-1-end\": String,\n \"1-2-2-witness\": String,\n \"1-2-3-truth\": String,\n \"1-3-2-Herod-s\": String,\n \"1-3-3-miracle\": String,\n \"1-3-3-miracle-2\": String,\n \"1-3-4-sent\": String,\n \"1-3-5-Pilate\": String,\n \"1-4-1-word\": String,\n \"1-4-2-truth\": String,\n \"1-5-1-fear\": String,\n \"1-5-2-Peace\": String,\n \"1-5-3-you\": String,\n \"1-5-4-terrified\": String,\n \"1-5-5-affrighted\": String,\n \"1-5-6-spirit\": String,\n \"1-6-1-handle\": String,\n \"1-6-3-Word\": String,\n \"1-5-4-of\": String,\n \"1-6-5-Life\": String,\n \"1-6-1-see\": String,\n \"1-7-1-revealer\": String,\n \"1-7-2-revealeth\": String,\n \"1-7-3-revealeth\": String,\n \"1-7-4-manifest\": String,\n \"1-7-5-God\": String,\n \"1-7-6-sure\": String,\n \"1-7-7-light\": String,\n \"1-7-8-dawn\": String,\n \"1-7-9-arise\": String,\n 'Christ': String,\n 'tribute': String,\n 'forbidding': String,\n 'perverting': String,\n isComplete: { type: Boolean, default: false }\n\n},\n {\n timestamps: true,\n }\n);\n\nconst AnAncientDreamSchema = new Schema({\n\n '2-1-2-beginning': String,\n '2-1-3-convinced': String,\n '2-1-4-God': String,\n '2-2-3-in': String,\n '2-1-6-you': String,\n '2-1-7-sure': String,\n '2-1-8-light': String,\n '2-2': String,\n '2-1-11-third': String,\n '2-1-12-Judah': String,\n '2-1-13-Babylon': String,\n '2-1-14-Daniel': String,\n '2-1-15-purposed': String,\n '2-1-16-heart': String,\n '2-1-17-meat': String,\n '2-1-18-wine': String,\n '2-1-19-wisdom': String,\n '2-2-20-ten': String,\n '2-1-10-hearts-2': String,\n '2-2-1-magicians': String,\n '2-2-2-astrologers': String,\n '2-2-3-sorcerers': String,\n '2-2-4-tell': String,\n '2-2-5-man': String,\n '2-2-6-none': String,\n '2-2-7-not': String,\n '2-2-8-desire': String,\n '2-2-9-mercies': String,\n '2-2-10-thank': String,\n '2-2-11-praise': String,\n '2-2-13-God': String,\n '2-2-13-heaven': String,\n '2-2-14-image': String,\n '2-2-15-gold': String,\n '2-2-16-silver': String,\n '2-2-17-brass': String,\n '2-2-18-iron': String,\n '2-2-19-clay': String,\n '2-2-20-stone': String,\n '2-2-21-mountain': String,\n '2-3-1-Thou': String,\n '2-3-2-art': String,\n '2-3-3-kingdom': String,\n '2-2-4-Medes': String,\n '2-2-5-Persians': String,\n '2-3-6-Grecia': String,\n '2-3-7-strong': String,\n '2-3-9-world': String,\n '2-3-8-Ceasar': String,\n '2-3-10-taxed': String,\n '2-3-11-divided': String,\n '2-3-12-mingle': String,\n '2-3-13-not': String,\n '2-3-14-God': String,\n '2-3-15-heaven': String,\n '2-3-16-kingdom': String,\n '2-3-17-consume': String,\n '2-3-18-for': String,\n '2-3-19-ever': String,\n '2-3-20-become': String,\n '2-3-21-Lord': String,\n '2-3-22-reign': String,\n '2-3-22-reign-3': String,\n '2-3-23-ever': String,\n '2-3-24-God': String,\n '2-3-25-gods': String,\n '2-3-26-Lord': String,\n '2-3-27-kings': String,\n '2-4-1-will': String,\n '2-4-2-done': String,\n '2-4-3-seek': String,\n '2-4-4-kingdom': String,\n '2-4-5-righteousness': String,\n '2-4-6-you': String,\n '2-4-7-Christ': String,\n '2-4-8-heaven': String,\n '2-4-9-saints': String,\n '2-4-10-household': String,\n '2-1-1-end': String,\n isComplete: { type: Boolean, default: false }\n\n},\n {\n timestamps: true,\n }\n);\n\nconst userSchema = new Schema({\n\n name: {\n type: String,\n required: true\n },\n\n email: {\n type: String,\n unique: true,\n trim: true,\n lowercase: true,\n required: 
true\n },\n webflow_user_id: {\n type: String,\n unique: true,\n },\n what_is_truth_free: [whatIsTruthSchema],\n an_ancient_dream: [AnAncientDreamSchema],\n validLessons: Number\n\n\n},\n {\n timestamps: true,\n }\n);\n\n\nmodule.exports = mongoose.model('User', userSchema);\n\n",
"text": "Hello! This is my first time within the community and I was wondering if I could get some assistance on the best route to take.I have an application where a client has some input-formatted quizzes. They want to allow logged in users to return to where they left on the quizzes if they happened to log out. Right now I have the basic functionality down but I was wondering if the way that I am organizing my schemas and documents could be better.Currently they are about 21 different quizzes (individual pages) on the website and about 20-30 inputs that the user has to complete. I created a user schema and plan to embed all the 21 quizzes.Here is examples of of my userSchema and two of the 21 quizzesThe thing is, if i go down this route I feel like the document will be an overkill and it just feels like there is a better way that it can be handled I thought about reference documents that were placed as a “portfolio” schema , but I am having a hard time implementing it (BSONTypeErrors). Would anyone have any suggestions?",
"username": "Joba_A"
},
{
"code": "",
"text": "if you embed quizzes into users’ data, you will lose the ability to change quizzes in one place. So you should have a separate collection for them. you cannot maintain a steady list of quizzes if the number of users over-grows.But a user still needs to track their progress, so, instead of embedding quizzes to user data, you need to first decide a schema/model for this “saved process”, and save/embed crucial information of a quiz question while keeping a reference to it to fetch full question info.so, in short, you need to use both embedding and referencing at the same time, or else the maintenance will be a nightmare.",
"username": "Yilmaz_Durmaz"
},
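One way to express the "embed the crucial bits, keep a reference" idea above is a per-user progress document that stores only the answers plus a reference to the shared quiz definition. A hypothetical document shape, not the poster's schema; field names such as quizTitle and the answer values are invented for illustration, while the answer keys follow the posted quiz schema:

```javascript
const someUserId = ObjectId();   // placeholder ids for the sketch
const someQuizId = ObjectId();

db.quizProgress.insertOne({
  userId: someUserId,             // reference to the user document
  quizId: someQuizId,             // reference to the quiz definition in its own collection
  quizTitle: "What Is Truth",     // small duplicated field, handy for display lists
  answers: { "1-1-5-A": "King", "1-2-3-truth": "truth" },  // only the user's input
  isComplete: false,
  updatedAt: new Date()
});
```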
{
"code": "",
"text": "Thank you for the response!Currently I had the idea were I would place all the quizzers in a folder in my models directory which would have 21 different schemas. The plan was to reference them some how, but that I a stuck on that process. Right now, I am pulling the inputs from the current page a user is on, so the schema is guarantee to match when a user saves their responses. Luckily the answers are all just one worded answers so i am saving that in another js file as a cdnDoes that make any sense?",
"username": "Joba_A"
}
] | Should I reference or embed these documents? | 2023-02-08T23:08:14.248Z | Should I reference or embed these documents? | 490 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "Start up terminal window",
"username": "Marcelino_Ndewaiyo"
},
{
"code": "",
"text": "the following command;\n$ mongoshdoes not start the MongoDB server as the title of the post indicates. It starts the mongosh client. And without any parameter it will try to connect to the server using the default address:port localhost:27017.",
"username": "steevej"
},
{
"code": "",
"text": "Understood.\nThanks a lot, I’ll make sure to change the title of the Topic.Regards,\nMarcelino.",
"username": "Marcelino_Ndewaiyo"
}
] | Starting the Mongosh Client on a Macbook Pro M1 | 2023-02-08T21:32:13.982Z | Starting the Mongosh Client on a Macbook Pro M1 | 719 |
[
"swift"
] | [
{
"code": "",
"text": "",
"username": "ACFancy"
},
{
"code": "",
"text": "I am facing same issue with Cocoapods as well, any work aroud? Thanks.",
"username": "perlasivakrishna_N_A"
},
{
"code": "",
"text": "I just now created two brand new projects (macOS) using SPM and Cocoapods and it work correctly with both.I followed the instructions here Install RealmI tried it on both Monterey (12.6.2) and Ventura.@perlasivakrishna_N_A Can you include your podfile, version of Cocoapods and OS? Current version is 1.11.3 (pod --version in terminal)OP, i noted the question title say RealmSwift 0.10.34 - the current realm version is actually 10.34.0 so perhaps that’s an issue?Also, the current version of XCode is 14.2 - I don’t think that is part of it but just adding it for clarity.",
"username": "Jay"
},
{
"code": "",
"text": "I’m facing the same issue with Realm 10.34.0. My pod version is 1.11.2 on Monterey (12.6.2) .",
"username": "Negar_Haghbin"
},
{
"code": "",
"text": "@Negar_Haghbin can you please verify your version of Xcode and also include your podfile? Also check your build settings for your projects to ensure it meets the minimum OS level as well.v10.34.0 Swift 5.5 is no longer supported. Swift 5.6 (Xcode 13.3) is now the minimum supported version",
"username": "Jay"
},
{
"code": "",
"text": "Hey Jay,\nThank you for your response. My Xcode version was 14.0. I upgraded it to 14.1 and now it works! ",
"username": "Negar_Haghbin"
}
] | RealmSwift 0.10.34 install with SPM, building compiling error: Converting non-sendable function value to '@Sendable (UInt, UInt) -> Bool' may introduce data races | 2023-01-19T07:51:58.718Z | RealmSwift 0.10.34 install with SPM, building compiling error: Converting non-sendable function value to ‘@Sendable (UInt, UInt) -> Bool’ may introduce data races | 1,344 |
null | [
"queries",
"compass"
] | [
{
"code": "lifeCycleinfopayment.completedpayment.completedpayment.created{\n \"lifeCycleInfo\": [\n {\n \"eventId\": \"9b8b6adfae\",\n \"eventSubType\": \"SendTransfer_Receipt\",\n \"eventType\": \"SendTransfer\",\n \"odsTimestamp\": {\n \"$date\": \"2023-02-06T14:33:42.308Z\"\n },\n \"payload\": \"{}\",\n \"timestamp\": {\n \"$date\": \"2023-02-06T14:33:42.271Z\"\n }\n },\n {\n \"eventId\": \"06e8d144-531b02\",\n \"eventSubType\": \"payment.created\",\n \"eventType\": \"Notification\",\n \"odsTimestamp\": {\n \"$date\": \"2023-02-06T14:33:45.488Z\"\n },\n \"payload\": \"{}\",\n \"timestamp\": {\n \"$date\": \"2023-02-06T14:33:45.479Z\"\n }\n },\n {\n \"eventId\": \"9da54454d6\",\n \"eventSubType\": \"payment.completed\",\n \"eventType\": \"Notification\",\n \"odsTimestamp\": {\n \"$date\": \"2023-02-06T14:33:46.698Z\"\n },\n \"payload\": \"{}\",\n \"timestamp\": {\n \"$date\": \"2023-02-06T14:33:46.689Z\"\n }\n }\n ]\n}\n{\"lifeCycleInfo[1].eventtype\":\"payment.completed\"}\n",
"text": "Need help with query to filter the records in mongoDB. I am using compass to run the que We have thousands of records/documents where each record/document contains the following array. For few documents, the events in lifeCycleinfo are out of order i.e. payment.completed event comes before 1payment.completed1 event.I need to filter those records where payment.completed event comes before payment.created event.Sample Object:I tried to find it based on array index but not working.",
"username": "Suraj_Jaldu"
},
{
"code": "lifeCycleinfopayment.completedpayment.completedpayment.createddb.collection.find({\n \"lifeCycleInfo\": {\n $elemMatch: {\n \"eventSubType\": \"payment.completed\",\n \"timestamp\": { $lt: \"$lifeCycleInfo.timestamp\" }\n }\n }\n})\n$elemMatchlifeCycleInfoeventSubTypepayment.completedtimestamptimestampeventSubTypepayment.createdlifeCycleInfoeventSubTypepayment.completedeventSubTypepayment.created",
"text": "Need help with query to filter the records in mongoDB. I am using compass to run the que We have thousands of records/documents where each record/document contains the following array. For few documents, the events in lifeCycleinfo are out of order i.e. payment.completed event comes before 1payment.completed1 event.I need to filter those records where payment.completed event comes before payment.created event.@Suraj_Jaldu Here’s a MongoDB query that can help filter the records based on the requirement:In this query, the $elemMatch operator is used to match the first element in the lifeCycleInfo array that satisfies both the conditions:This query will return the documents whose lifeCycleInfo array contains an event with eventSubType equal to payment.completed and the timestamp of this event is before the timestamp of the event with eventSubType equal to payment.created.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "payment.createdpayment.completed_id: 1_id: 0ISODate()> db.test.find()\n[\n {\n _id: 0,\n lifeCycleInfo: [\n {\n eventSubType: 'SendTransfer_Receipt',\n timestamp: ISODate(\"2023-02-06T14:33:42.271Z\")\n },\n {\n eventSubType: 'payment.created',\n timestamp: ISODate(\"2023-02-06T14:33:45.479Z\")\n },\n {\n eventSubType: 'payment.completed',\n timestamp: ISODate(\"2023-02-06T14:33:46.689Z\")\n }\n ]\n },\n {\n _id: 1,\n lifeCycleInfo: [\n {\n eventSubType: 'SendTransfer_Receipt',\n timestamp: ISODate(\"2023-02-06T14:33:42.271Z\")\n },\n {\n eventSubType: 'payment.created',\n timestamp: ISODate(\"2023-02-06T14:33:45.479Z\")\n },\n {\n eventSubType: 'payment.completed',\n timestamp: ISODate(\"2023-02-05T14:33:46.689Z\")\n }\n ]\n }\n]\ndb.test.aggregate([\n // Filter for only payment.created and payment.completed events\n {$addFields: {\n lifeCycleInfo: {\n $filter: {\n input: '$lifeCycleInfo',\n cond: {$or: [\n {$eq: ['$$this.eventSubType', 'payment.created']},\n {$eq: ['$$this.eventSubType', 'payment.completed']}\n ]}\n }\n }\n }},\n // Sort the lifeCycleInfo array based on timestamp\n {$addFields: {\n lifeCycleInfo: {\n $sortArray: {\n input: '$lifeCycleInfo',\n sortBy: {timestamp: 1}\n }\n }\n }},\n // Match documents where payment.completed event comes before payment.created event\n {$match: {\n 'lifeCycleInfo.0.eventSubType': 'payment.completed',\n 'lifeCycleInfo.1.eventSubType': 'payment.created'\n }}\n])\n_id: 1payment.createdpayment.completedpayment.completedpayment.created",
"text": "Hi @Suraj_Jaldu welcome to the community!I don’t think the query that @Sumanta_Mukhopadhyay provided is correct. I tried it using some example document but it returns nothing.Here’s the example documents I used. Based on your example, I created two documents, where one has payment.created is before payment.completed, and one has the order reversed. I removed other fields for testing purposes, and based on your problem description, the query should match the document with _id: 1 but not _id: 0. I also took the liberty of using ISODate() type for the timestamps.I managed to do this using this aggregation:which outputs only the document with _id: 1.The pipeline uses:Note:Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thank you for responding. I am getting \"Unrecognized expression ‘$sortArray’ error.\nNote that I am trying this in compass version 1.21.2.",
"username": "Suraj_Jaldu"
},
{
"code": "$sortArray$sortArraydb.test.aggregate([\n // Filter for only payment.created and payment.completed events\n {$addFields: {\n lifeCycleInfo: {\n $filter: {\n input: '$lifeCycleInfo',\n cond: {$or: [\n {$eq: ['$$this.eventSubType', 'payment.created']},\n {$eq: ['$$this.eventSubType', 'payment.completed']}\n ]}\n }\n }\n }},\n // Sort the lifeCycleInfo array based on timestamp\n {$unwind: '$lifeCycleInfo'},\n {$sort: {_id: 1, 'lifeCycleInfo.timestamp': 1}},\n {$group: {\n _id: '$_id',\n lifeCycleInfo: {$push: '$lifeCycleInfo'}\n }},\n // Match documents where payment.completed event comes before payment.created event\n {$match: {\n 'lifeCycleInfo.0.eventSubType': 'payment.completed',\n 'lifeCycleInfo.1.eventSubType': 'payment.created'\n }}\n])\n$sortArray",
"text": "Hi @Suraj_Jaldu$sortArray was added in MongoDB 5.2, so I would recommend you to upgrade to MongoDB 6 series (6.0.4 is the latest) to get this feature. Note that this is a server feature, so it’s not related to Compass.Without $sortArray, you can still do this although the workflow will be more complex, by using:so the single $sortArray stage becomes 3 stages. Note that this is not very tested and is just a general idea on how the aggregation would look like. Please test with your data and modify accordingly to suit your actual documents.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thanks Kevin. It was helpful.",
"username": "Suraj_Jaldu"
}
] | Need help filtering documents with criteria inside array - mongoDB | 2023-02-06T18:17:24.474Z | Need help filtering documents with criteria inside array - mongoDB | 1,061 |
null | [] | [
{
"code": "",
"text": "We have a use case which seems relatively simple but I’m not sure if we’re configuring the system for best performance.We have noticed that performance (specifically startup time) has been very affected since we have our frequent changes. Based on the assumption that this degradation was due to the large amount of history build up and attempts to recover client state, we have:As far as I can see this still implies that the history is kept so that client and that we have backend compaction occurring once a day. Is there a way we can disable history completely, as we do not need it ? Is there a better way to configure App Services for this use case ?Thanks!",
"username": "Jonathan_Thorpe"
},
{
"code": "",
"text": "@Jonathan_Thorpe the maximum client offline time is for flexible sync only. Are you able to migrate to flexible sync? That would allow you to trim down the history quite a bit and get better performance",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Flexible sync has the added benefit of making bootstrapping much faster. I think you will be best off using flexible sync and setting a low max offline time.",
"username": "Tyler_Kaye"
},
{
"code": "{_partition:\"Something\"}",
"text": "Thanks for your answer. I’m guessing as a quick test we can setup flexible sync with a query which mimics the partition sync we’re using ({_partition:\"Something\"}). Then progressively use flexible sync in a more appropriate way…",
"username": "Jonathan_Thorpe"
},
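The "query which mimics the partition" idea above would look roughly like the following with the Realm JS SDK's flexible sync subscriptions. This is a hedged sketch only: the object type name "Item" is invented, and it assumes `_partition` has been added as a queryable field in the App Services sync configuration:

```javascript
// Subscribe to the same set of documents the old partition contained
await realm.subscriptions.update((mutableSubs) => {
  mutableSubs.add(
    realm.objects("Item").filtered('_partition == "Something"'),
    { name: "legacy-partition-Something" }
  );
});
```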
{
"code": "",
"text": "I might be missing something but looking at the docsIt’s not clear that it’s flexible sync only given the UI for setting Client Maximum Offline Time is available for a partition sync configuration.",
"username": "Jonathan_Thorpe"
},
{
"code": "",
"text": "@Jonathan_Thorpe exactly correct. I think you will enjoy your experience with it. We will have a product to help migrate people from Partition Sync to Flexible Sync for applications in production, but if you are just in development or want to start a new version of the backend then I would recommend doing so with Flexible Sync.Thanks or pointing out the docs issue. Interestingly enough I pointed this out last week and it was fixed a few days ago and will be pushed out the next time the docs are released I believe.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Ignore history with no need to recover offline client activity | 2023-02-07T15:27:07.064Z | Ignore history with no need to recover offline client activity | 868 |
[
"mongodb-global-community",
"lebanon-mug"
] | [
{
"code": "And now we reached the end of our first article, feel free to read it, \nshare it with your fellows so everything stay from the community to the community. \n\nStay tuned our second article will published soon, diving in MongoDB Atlas.\n",
"text": "\nImage1920×1080 157 KB\nRelational databases: A relational database, is a way of structuring information’s in tables, rows, and columns with the ability to establish relations between them by joining these tables, making it more understandable, where you can get insights about the relationship between various data points. Some of the most well-known Relational Database Management Systems include MySQL, PostgreSQL, Microsoft SQL Server, and Oracle Database.No SQL databases: It’s a non-tabular database that store data differently than relational tables. NoSQL databases come in a variety of types based on their data model, including key-value, document, column-based and graph databases. Providing a schema flexibility, easy to scale techniques with large amounts of data and high user loads. Some of the most well-known non-relational databases includes MongoDB, Apache Cassandra, Redis, Couchbase and Apache HBase.MongoDB is a scalable, flexible NoSQL document database used for storing, retrieving, and managing large amounts of unstructured and semi-structured data, such as text, images, and videos.MongoDB stores data in flexible, JSON-like documents, meaning fields can vary from document to document and data structure can be changed over time, allowing developers to map the objects in their application code making data easy to work with.MongoDB is a distributed database at its core, so high availability, horizontal scaling, and geographic distribution are built in and easy to useDocument Model:MongoDB as a document-oriented DB, has been designed with developer productivity and flexibility in mind, where data is stored in documents and documents are grouped in collections, giving developers a natural friendly environment, where they can focus on the data they need to store and process, rather than worrying about how to split the data across different rigid tables.\nimage936×410 67 KB\nDocuments in MongoDB are stored in the BSON format, which is a binary-encoded JSON format. This also allows for the storage of binary data, which is useful for storing images, videos, and other binary data in flexible schema way where the documents in a single collection don’t necessarily need to have exactly the same set of fields giving developers the ability to iterate faster and migrate data between different schemas without any downtime.Sharding: For horizontal scalability. \nsharding934×258 30.7 KB\nWhen your business data only resides in a single server, it acts as a single point of failure with the multiple potential errors that may happen, such as a server crash, hardware failure, or even a service interruption, making the access to your data nearly impossible.Here the replication technique in MongoDB come to action, where it refers to the process of synchronizing data across multiple servers.A MongoDB replication set is a group of MongoDB servers that maintain identical data sets. 
The primary purpose of replication is to ensure high availability of data by providing redundancy and failover capability.In the event of a primary server failure which accepts all write operations and applies those same operations across secondary servers, replicating the data, any one of the secondary servers can be elected to become the new primary node, ensuring that the data remains available to clients.Replication also helps to increase read performance by allowing clients to read from secondary nodes and can also improve write performance by spreading write operations across multiple servers.Authentication MongoDB authentication is a process of verifying the identity of a user who is trying to access the MongoDB database. This is used to control access to the data stored in the database and ensure that only authorized users are able to perform operations like reading, writing, and updating the data.MongoDB supports several authentication methods, including Salted Challenge Response Authentication Mechanism (SCRAM), which is the default, LDAP authentication, Kerberos authentication, and X.509 certificate authentication, when SCRAM is used, the user is required to provide an authentication database, username, and password.Database Triggers Ad-Hoc QueriesIn MongoDB an Ad-hoc query is a one time or infrequent query that get executed against the database with a purpose to retrieve specific piece of data.The term “ad-hoc” refers to the fact that these queries are not pre-defined or part of a regularly scheduled process. Ad-hoc queries in MongoDB are often used by end-users or analysts to perform ad-hoc analysis or retrieve specific data for a one-time use case.These queries can be created and executed dynamically, using the MongoDB query language (MQL), and can search for data based on specific criteria such as values in specific fields or conditions within the documents.Load balancing",
"username": "eliehannouch"
},
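To make the document-model description above concrete, here is a minimal mongosh sketch (collection and field names are invented for illustration, not taken from the article) showing two documents in the same collection with different fields:

```js
// Two documents share the "products" collection but not an identical set of fields.
db.products.insertMany([
  { name: "pen", price: 1.5, tags: ["stationery"] },
  { name: "notebook", price: 3.0, pages: 200 }
]);

// They can still be queried together on any field they happen to have.
db.products.find({ price: { $lt: 2 } });
```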
{
"code": "",
"text": "This is a great quick read for anyone planning to get a quick 101 about MongoDB! Thanks @eliehannouch.",
"username": "Harshit"
}
] | Elie Hannouch: Article - MongoDB 101, Your first steps to build the next big thing - PART 1 | 2023-02-05T19:00:24.434Z | Elie Hannouch: Article - MongoDB 101, Your first steps to build the next big thing - PART 1 | 1,494 |
null | [
"replication",
"mongodb-shell",
"containers"
] | [
{
"code": "version: '3.8'\n\nservices:\n mongo1:\n container_name: mongo1\n hostname: mongo1\n image: mongo:6.0.3\n expose:\n - 27017\n ports:\n - 30001:27017\n restart: always\n networks:\n - my-db\n command: mongod --replSet myrs\n mongo2:\n container_name: mongo2\n hostname: mongo2\n image: mongo:6.0.3\n expose:\n - 27017\n ports:\n - 30002:27017\n restart: always\n networks:\n - my-db\n command: mongod --replSet myrs\n mongo3:\n container_name: mongo3\n hostname: mongo3\n image: mongo:6.0.3\n expose:\n - 27017\n ports:\n - 30003:27017\n restart: always\n networks:\n - my-db\n command: mongod --replSet myrs\n\n mongoinit:\n container_name: mongoinit\n image: mongo:6.0.3\n restart: \"no\"\n networks:\n - my-db\n depends_on:\n - mongo1\n - mongo2\n - mongo3\n command: >\n mongosh --host mongo1:27017 --eval ' db = (new Mongo(\"localhost:27017\")).getDB(\"myDb\"); config = { \"_id\" : \"myrs\", \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"mongo1:27017\"\n },\n {\n \"_id\" : 1,\n \"host\" : \"mongo2:27017\"\n },\n {\n \"_id\" : 2,\n \"host\" : \"mongo3:27017\"\n }\n ] }; rs.initiate(config); ' \n\n\nnetworks:\n my-db:\n driver: bridge\n",
"text": "Hi, i am struggling to get my replica set running by using docker.\nI am just not able to get a connection but my replica set seems to be setup correctly.Did i miss something? Please see my compose.",
"username": "Stefan_Wimmer"
},
{
"code": "mongoinitmongoinit | MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\nmongoinit exited with code 1\nmongo/mongoshnew Mongo(\"localhost:27017\")localhostmongoinitnew Mongo(\"mongo1:27017\")mongosh --host mongo1:27017dbcluster.yamlcluster-init.yaml",
"text": "mongoinit does not part of the cluster, so you should have it in another file which also would immediately show you the problem:When you run mongo/mongosh, everything you do will basically belong to the host you run it, and functions are responsible to direct command to the db server.Here in this new Mongo(\"localhost:27017\") command, localhost is the mongoinit machine. just as you give it as a parameter to the mongo shell, you need to write, for example, new Mongo(\"mongo1:27017\")or remove that part completely. you just need to log in to one of the members, which you are already doing by mongosh --host mongo1:27017. besides, you do not even use db if you look closely.PS: I suggest to have cluster.yaml for 3 members, and cluster-init.yaml for initializing and other works if needed.",
"username": "Yilmaz_Durmaz"
},
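Based on the advice above, a minimal sketch of what the init step could run instead; the replica set name and hostnames come from the compose file in the question, and the rest is illustrative:

```js
// Invoked as: mongosh --host mongo1:27017 --eval '<the script below>'
// The extra new Mongo("localhost:27017") call is simply dropped, since mongosh
// is already connected to mongo1.
rs.initiate({
  _id: "myrs",
  members: [
    { _id: 0, host: "mongo1:27017" },
    { _id: 1, host: "mongo2:27017" },
    { _id: 2, host: "mongo3:27017" }
  ]
});
```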
{
"code": "",
"text": "Thanks Sir, this made my day.",
"username": "Stefan_Wimmer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Configure replica set with docker | 2023-02-08T14:31:35.035Z | Configure replica set with docker | 2,826 |
null | [
"aggregation",
"crud"
] | [
{
"code": "await single_sku_db.collection(\"test\").updateOne(\n { _id: ObjectId(id) },\n {\n $push: {\n report: {\n $each: [\n {\n name: name,\n views,\n },\n ],\n $position: 0,\n },\n },\n $set: {\n report_status: \"Completed\",\n total_views: { $sum: \"$report.views\"},\n },\n }\n",
"text": "I was trying to do update and sum up the column value.I cant sum the report.views like this, will get this error.the dollar ($) prefixed field ‘’ is not valid for storage.Is there anyway to do this without using aggregate?",
"username": "elss"
},
{
"code": "$inc : { total_views : views }, name: name,views,",
"text": "Is there anyway to do this without using aggregate?You cannot refer to other fields of the document without the aggregation syntax.But since you already have views as a variable, why don’t you simply do$inc : { total_views : views },Is there any reason why you use inconsistent syntax for name: name,andviews,",
"username": "steevej"
}
] | Updating data with $set and $sum | 2023-02-08T09:45:52.599Z | Updating data with $set and $sum | 879 |
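If the computed total is still wanted in the same call, the aggregation-pipeline form of updateOne (MongoDB 4.2+) can reference the document's own fields. A rough, untested sketch based on the fields in this thread; note that $push is not available in pipeline updates, so $concatArrays prepends the new entry instead:

```js
await single_sku_db.collection("test").updateOne(
  { _id: ObjectId(id) },
  [
    // Prepend the new report entry (the pipeline equivalent of $push with $position: 0).
    {
      $set: {
        report: {
          $concatArrays: [[{ name: name, views: views }], { $ifNull: ["$report", []] }]
        }
      }
    },
    // A later stage can now sum the freshly updated array.
    {
      $set: {
        report_status: "Completed",
        total_views: { $sum: "$report.views" }
      }
    }
  ]
);
```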
null | [
"swift"
] | [
{
"code": "\nclass RealmManager: ObservableObject {\n\n let app: App\n\n @Published var realmError: Error?\n @Published var realm: Realm?\n @Published var realmUser: RealmSwift.User?\n @Published var configuration: Realm.Configuration?\n\n init() {\n self.app = App(id: APP_ID)\n \n self.app.syncManager.errorHandler = { error, _ in\n guard let syncError = error as? SyncError else {\n return\n }\n switch syncError.code {\n case .clientResetError:\n if let (_, clientResetToken) = syncError.clientResetInfo() {\n self.cleanUpRealm()\n \n SyncSession.immediatelyHandleError(clientResetToken, syncManager: self.app.syncManager)\n }\n default:\n break\n }\n }\n }\n \n @MainActor\n func initialize() async throws {\n realm?.invalidate()\n realm = nil\n realmUser = nil\n \n realmUser = try await login()\n \n self.configuration = realmUser?.flexibleSyncConfiguration(initialSubscriptions: { subs in\n if subs.first(named: \"user-connections\") != nil {\n subs.first(named: \"user-connections\")?.updateQuery(toType: UserConnections.self) {\n $0.user_id == userId\n }\n } else {\n subs.append(QuerySubscription<UserConnections>(name: \"user-connections\") {\n $0.user_id == userId\n })\n }\n }, rerunOnOpen: true)\n \n do {\n self.realm = try await Realm(configuration: self.configuration!, downloadBeforeOpen: .always)\n } catch {\n self.cleanUpRealm()\n self.realmError = OutsideRealmError.failedToConfigureRealm\n }\n }\n \n func cleanUpRealm() {\n realm?.invalidate()\n realm = nil\n realmUser = nil\n \n // Deleting immediately doesn't work, introduce a small wait\n DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) { [weak self] in\n do {\n if let configuration = self?.configuration {\n _ = try Realm.deleteFiles(for: configuration)\n }\n } catch {\n \n }\n }\n }\n}\n\n",
"text": "We’re looking to release Device Sync to prod very shortly for the first time but are struggling/want to clarify how the client reset process should work when using Flexible Device Sync - not partition based.\nThe documentation here states you can set .discardLocal for a partition based sync which is what we want to do, however for a flexible sync configuration that doesn’t seem to be possible.How can you set the equivalent to .discardLocal for Flexible device sync, or is this not possible?If it’s not possible, is the below the expected correct setup to handle a Client Reset?",
"username": "Ryan_Lindsey"
},
{
"code": " self.configuration = realmUser?.self.configuration = realmUser?.flexibleSyncConfiguration(clientResetMode: .discardUnsyncedChanges(beforeReset: { before in\n debugPrint(\"##### beforeReset Master Realms affected with Client reset - beforeReset :: \\(String(describing: before.configuration.fileURL))\")\n }, afterReset: { before, after in\n debugPrint(\"##### afterReset Master Realms affected with Client reset - before :: \\(String(describing: before.configuration.fileURL))\")\n debugPrint(\"##### afterReset Master Realms affected with Client reset - after \\(String(describing: after.configuration.fileURL))\")\n }), initialSubscriptions: { subs in\n if subs.first(named: \"user-connections\") != nil {\n subs.first(named: \"user-connections\")?.updateQuery(toType: UserConnections.self) {\n $0.user_id == userId\n }\n } else {\n subs.append(QuerySubscription<UserConnections>(name: \"user-connections\") {\n $0.user_id == userId\n })\n }\n }, rerunOnOpen: true)\n",
"text": " self.configuration = realmUser?.Hi Ryan,In Latest Realm swift SDK v 10.35.0 it is supported.\nPlease refactor the Initialisation like below. It will work fine.Thanks,\nSeshu",
"username": "Udatha_VenkataSeshai"
}
] | Handling Client Reset with Flexible Device Sync | 2022-10-03T22:48:49.466Z | Handling Client Reset with Flexible Device Sync | 1,843 |
null | [
"compass",
"mongodb-shell"
] | [
{
"code": "ReferenceError: app is not defined\n if (_fs === \"returned\") return _srv;else if (_fs === \"threw\") throw _srv;\n ^\n\nReferenceError: app is not defined\n at evalmachine.<anonymous>:36:162\n at evalmachine.<anonymous>:208:5\n at evalmachine.<anonymous>:213:3\n at Script.runInContext (node:vm:139:12)\n at Object.runInContext (node:vm:289:6)\n at ElectronInterpreterEnvironment.sloppyEval (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:2160437)\n at ShellEvaluator.innerEval (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:3905059)\n at ShellEvaluator.customEval (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:3905203)\n at OpenContextRuntime.evaluate (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:2159555)\n at ElectronRuntime.evaluate (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:2160971)\n",
"text": "Hello everyone, we are encountering this issue after running even simple query (db.coll.findOne({}) in mongo shell inside mongo compass. Has anyone encountered similar problem? Are there any suggestions to solve it?Current mongo compass version: 1.35.0\nOperation system: MacOS",
"username": "Wiktor_Janiszewski"
},
{
"code": "",
"text": "@Wiktor_Janiszewski would you be able to share a log file?",
"username": "Massimiliano_Marcon"
},
{
"code": "{\"t\":{\"$date\":\"2023-02-06T12:07:24.938Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000007,\"ctx\":\"repl\",\"msg\":\"Evaluating input\",\"attr\":{\"input\":\"db.cross-app-test.findOne({})\"}}\n{\"t\":{\"$date\":\"2023-02-06T12:07:24.956Z\"},\"s\":\"I\",\"c\":\"MONGOSH\",\"id\":1000000007,\"ctx\":\"repl\",\"msg\":\"Evaluating input\",\"attr\":{\"input\":\"(() => {\\n switch (typeof prompt) {\\n case 'function':\\n return prompt();\\n case 'string':\\n return prompt;\\n }\\n })()\"}}\n{\"t\":{\"$date\":\"2023-02-06T12:07:28.483Z\"},\"s\":\"I\",\"c\":\"COMPASS-AUTO-UPDATES\",\"id\":1001000135,\"ctx\":\"AutoUpdateManager\",\"msg\":\"Checking for updates ...\"}\n{\"t\":{\"$date\":\"2023-02-06T12:07:30.190Z\"},\"s\":\"I\",\"c\":\"COMPASS-AUTO-UPDATES\",\"id\":1001000126,\"ctx\":\"AutoUpdateManager\",\"msg\":\"Update not available\"}\n",
"text": "This is log after putting findOne command:",
"username": "Wiktor_Janiszewski"
},
{
"code": "",
"text": "I’d like to inform you that I connected to the same instance as previously but using pymongo and then I could run the queries without problem, I didn’t receive that was mentioned above.",
"username": "Wiktor_Janiszewski"
},
{
"code": "db.cross-app-test.findOne({})\ndb.cross - app - test.findOne({})\n-db['cross-app-test'].findOne({})\ndb.getCollection('cross-app-test').findOne({})\n",
"text": "@Wiktor_Janiszewski The MongoDB shell is a JavaScript environment and will evaluate its input as JavaScript before running it. This means thatis interpreted aswith the - signs standing for literal subtraction operations.You can work around this by using either of the following:This is an inherent limitation when using hyphen characters in collection names.",
"username": "Anna_Henningsen"
},
{
"code": "",
"text": "Thanks for reply, I don’t believe it was caused by this. We’ve managed to figure it out. We overcame this issue by updating Atlas plan from M0 to M10.",
"username": "Wiktor_Janiszewski"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error using Mongo shell in mongo compass app | 2023-02-05T11:32:24.443Z | Error using Mongo shell in mongo compass app | 1,369 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "We have a project that is in production, and are going to be adding more sites that use the same front end code base. For the first site we built a search index in Atlas using the UI that has a space in the name - e.g. “content search”. Part of the process of rolling out other sites is that we duplicate the mongo set up - Collections/functions/App Services/Resolvers/Search etc using Atlas API, App Services API and some direct .Net driver methods, but it is not possible to create a search index with a name in using the API, only via the UI.Does anyone know why this is? Is there any way to get round this, or escape the space in some way?",
"username": "James_Houston"
},
{
"code": "",
"text": "Hi @James_Houston - Welcome to the community.Just to clarify, could you confirm the following:Is the above two statements correct? If so, regarding 2., what is the error you are receiving or could you describe what happens when you attempt to create the index with a space in it’s name via the API?Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi There.\nHi There,We resolved this by switching to search indices without spaces in their names. I dont remember the error - it was a long time ago now. One is now unable to make a search index name with a space in it in the UI as well - it shows an invalid field notification - you must have pulled that one at some point.",
"username": "James_Houston"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Cannot create Search Indexes with spaces in name using API, only in UI | 2022-08-19T08:15:05.451Z | Cannot create Search Indexes with spaces in name using API, only in UI | 1,595 |
null | [] | [
{
"code": "",
"text": "We have around 150M document in collection.\nThe document structure is like,{\n“Client”:“XYZ”,\n“ClientId”:‘12345’,\n“keyword”: [\n{\n“keyword”: “Keyword1”,\n“keyid”: “34”,\n“keytype”: “Industry1”\n},\n{\n“keyword”: “Keyword2”,\n“keyid”: “35”,\n“keytpe”: “Industry2”\n}\n],\n…\n…\n…\n}When I putt the condition like keyword :{$elementMatch :{keytype}} its takes to much time to fetch documents.Please help?",
"username": "Shahnawaz_Haider"
},
{
"code": "",
"text": "The first thing to do when queries do not perform is to look at the explain plan so see if an index is used and how selective is the index.",
"username": "steevej"
},
{
"code": "",
"text": "Perform all things but unable to do so.",
"username": "Shahnawaz_Haider"
},
{
"code": "executionStats",
"text": "Hello @Shahnawaz_Haider ,There could be several reason to queries responding slow, most common reasons are resource crunch and in-efficient indexing. Please take a look at Best Practices for Query Performance to make sure you are following the best practices for faster query processing. To learn more about your use case, can you please share below details?Perform all things but unable to do so.Can you please provide more details on what have you tried till now and what is not working?Regards,\nTarun",
"username": "Tarun_Gaur"
},
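As a concrete starting point for the indexing advice above, a small mongosh sketch using the collection and field names from this thread (the filter value is illustrative): create a multikey index on the embedded field being filtered, then re-check the plan with explain:

```js
// Multikey index on the embedded array field used in the $elemMatch filter.
db.article_beta.createIndex({ "keyword.keytype": 1 });

// Re-run the query and confirm the plan shows an IXSCAN instead of a COLLSCAN.
db.article_beta.find(
  { keyword: { $elemMatch: { keytype: "Industry1" } } }
).explain("executionStats");
```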
{
"code": "",
"text": "Dear Tarun,Thanks for your reply, I am sharing the details:continue…",
"username": "Shahnawaz_Haider"
},
{
"code": "",
"text": "{\n“ns” : “impact.article_beta”,\n“size” : 166806089712,\n“count” : 125215566,\n“avgObjSize” : 1332,\n“storageSize” : 46065573888,\n“capped” : false,\n“wiredTiger” : {\n“metadata” : {\n“formatVersion” : 1\n},\n“creationString” : “access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u”,\n“type” : “file”,\n“uri” : “statistics:table:collection-30-3242645060345021432”,\n“LSM” : {\n“bloom filter false positives” : 0,\n“bloom filter hits” : 0,\n“bloom filter misses” : 0,\n“bloom filter pages evicted from cache” : 0,\n“bloom filter pages read into cache” : 0,\n“bloom filters in the LSM tree” : 0,\n“chunks in the LSM tree” : 0,\n“highest merge generation in the LSM tree” : 0,\n“queries that could have benefited from a Bloom filter that did not exist” : 0,\n“sleep for LSM checkpoint throttle” : 0,\n“sleep for LSM merge throttle” : 0,\n“total size of bloom filters” : 0\n},\n“block-manager” : {\n“allocations requiring file extension” : 2,\n“blocks allocated” : 12,\n“blocks freed” : 6,\n“checkpoint size” : 45762641920,\n“file allocation unit size” : 4096,\n“file bytes available for reuse” : 302383104,\n“file magic number” : 120897,\n“file major version number” : 1,\n“file size in bytes” : 46065573888,\n“minor version number” : 0\n},\n“btree” : {\n“btree checkpoint generation” : 17822,\n“btree clean tree checkpoint expiration time” : NumberLong(“9223372036854775807”),\n“column-store fixed-size leaf pages” : 0,\n“column-store internal pages” : 0,\n“column-store variable-size RLE encoded values” : 0,\n“column-store variable-size deleted values” : 0,\n“column-store variable-size leaf pages” : 0,\n“fixed-record size” : 0,\n“maximum internal page key size” : 368,\n“maximum internal page size” : 4096,\n“maximum leaf page key size” : 2867,\n“maximum leaf page size” : 32768,\n“maximum leaf page value size” : 67108864,\n“maximum tree depth” : 5,\n“number of key/value pairs” : 0,\n“overflow pages” : 0,\n“pages rewritten by compaction” : 0,\n“row-store empty values” : 0,\n“row-store internal pages” : 0,\n“row-store leaf pages” : 0\n},\n“cache” : {\n“bytes currently in the cache” : 2935955061,\n“bytes dirty in the cache cumulative” : 319634,\n“bytes read into cache” : NumberLong(“7904056170608”),\n“bytes written from cache” : 248169,\n“checkpoint blocked page eviction” : 0,\n“data source pages selected for eviction unable to be evicted” : 50839,\n“eviction walk passes of a file” : 980797,\n“eviction walk target pages histogram - 0-9” : 129141,\n“eviction walk target pages histogram - 10-31” : 
103840,\n“eviction walk target pages histogram - 128 and higher” : 0,\n“eviction walk target pages histogram - 32-63” : 124451,\n“eviction walk target pages histogram - 64-128” : 623365,\n“eviction walks abandoned” : 65842,\n“eviction walks gave up because they restarted their walk twice” : 1806,\n“eviction walks gave up because they saw too many pages and found no candidates” : 106482,\n“eviction walks gave up because they saw too many pages and found too few candidates” : 9699,\n“eviction walks reached end of tree” : 158406,\n“eviction walks started from root of tree” : 183830,\n“eviction walks started from saved location in tree” : 796967,\n“hazard pointer blocked page eviction” : 9959,\n“in-memory page passed criteria to be split” : 0,\n“in-memory page splits” : 0,\n“internal pages evicted” : 877222,\n“internal pages split during eviction” : 0,\n“leaf pages split during eviction” : 0,\n“modified pages evicted” : 3,\n“overflow pages read into cache” : 0,\n“page split during eviction deepened the tree” : 0,\n“page written requiring cache overflow records” : 0,\n“pages read into cache” : 71737010,\n“pages read into cache after truncate” : 0,\n“pages read into cache after truncate in prepare state” : 0,\n“pages read into cache requiring cache overflow entries” : 0,\n“pages requested from the cache” : 2893810983,\n“pages seen by eviction walk” : 112950453,\n“pages written from cache” : 8,\n“pages written requiring in-memory restoration” : 0,\n“tracked dirty bytes in the cache” : 0,\n“unmodified pages evicted” : 71709813\n},\n“cache_walk” : {\n“Average difference between current eviction generation when the page was last considered” : 0,\n“Average on-disk page image size seen” : 0,\n“Average time in cache for pages that have been visited by the eviction server” : 0,\n“Average time in cache for pages that have not been visited by the eviction server” : 0,\n“Clean pages currently in cache” : 0,\n“Current eviction generation” : 0,\n“Dirty pages currently in cache” : 0,\n“Entries in the root page” : 0,\n“Internal pages currently in cache” : 0,\n“Leaf pages currently in cache” : 0,\n“Maximum difference between current eviction generation when the page was last considered” : 0,\n“Maximum page size seen” : 0,\n“Minimum on-disk page image size seen” : 0,\n“Number of pages never visited by eviction server” : 0,\n“On-disk page image sizes smaller than a single allocation unit” : 0,\n“Pages created in memory and never written” : 0,\n“Pages currently queued for eviction” : 0,\n“Pages that could not be queued for eviction” : 0,\n“Refs skipped during cache traversal” : 0,\n“Size of the root page” : 0,\n“Total number of pages currently in cache” : 0\n},\n“compression” : {\n“compressed page maximum internal page size prior to compression” : 4096,\n\"compressed page maximum leaf page size prior to compression \" : 131072,\n“compressed pages read” : 70851721,\n“compressed pages written” : 2,\n“page written failed to compress” : 0,\n“page written was too small to compress” : 6\n},\n“cursor” : {\n“bulk loaded cursor insert calls” : 0,\n“cache cursors reuse count” : 32157,\n“close calls that result in cache” : 0,\n“create calls” : 939,\n“insert calls” : 0,\n“insert key and value bytes” : 0,\n“modify” : 42,\n“modify key and value bytes affected” : 72904,\n“modify value bytes modified” : 492,\n“next calls” : 789697062,\n“open cursor count” : 0,\n“operation restarted” : 0,\n“prev calls” : 0,\n“remove calls” : 0,\n“remove key bytes removed” : 0,\n“reserve calls” : 0,\n“reset calls” : 24513035,\n“search calls” : 
1411150593,\n“search near calls” : 6167418,\n“truncate calls” : 0,\n“update calls” : 0,\n“update key and value bytes” : 0,\n“update value size change” : 609\n},\n“reconciliation” : {\n“dictionary matches” : 0,\n“fast-path pages deleted” : 0,\n“internal page key bytes discarded using suffix compression” : 4,\n“internal page multi-block writes” : 0,\n“internal-page overflow keys” : 0,\n“leaf page key bytes discarded using prefix compression” : 0,\n“leaf page multi-block writes” : 0,\n“leaf-page overflow keys” : 0,\n“maximum blocks required for a page” : 1,\n“overflow values written” : 0,\n“page checksum matches” : 0,\n“page reconciliation calls” : 8,\n“page reconciliation calls for eviction” : 0,\n“pages deleted” : 0\n},\n“session” : {\n“object compaction” : 0\n},\n“transaction” : {\n“update conflicts” : 0\n}\n},\n“nindexes” : 4,\n“indexBuilds” : ,\n“totalIndexSize” : 9004122112,\n“indexSizes” : {\n“id” : 1383550976,\n“articleid_1_clientid_1” : 2106404864,\n“KeyType” : 618872832,\n“SearchI4All” : 4895293440\n},\n“scaleFactor” : 1,\n“ok” : 1,\n“$clusterTime” : {\n“clusterTime” : Timestamp(1675846924, 2),\n“signature” : {\n“hash” : BinData(0,“GWOnqVGCNARShtiUU7I5uAFSyOQ=”),\n“keyId” : NumberLong(“7155578345936125954”)\n}\n},\n“operationTime” : Timestamp(1675846924, 2)\n}Thanks!\nShahnaawaz",
"username": "Shahnawaz_Haider"
}
] | Querying data from collection that's contains array in document taking to much time to display records | 2023-02-07T05:22:13.251Z | Querying data from collection that’s contains array in document taking to much time to display records | 863 |
null | [
"flutter"
] | [
{
"code": "class $Chat {\n @PrimaryKey()\n @MapTo('_id')\n ObjectId? id;\n ...\n String? channel;\n\n late List<$Media> medias;\n}\n\nclass $Media {\n @PrimaryKey()\n @MapTo('_id')\n ObjectId? id;\n ...\n}\n\n realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.add(realm.query<Chat>(r' channel = 1 '));\n});\n",
"text": "Hi, I’m currently developing a chat app. My data are stored in mongodb and I want to use Flexible Sync to selectively sync chats data to local. My code is something like belowAs you can see above there is a relationship between chat and media. What I want is when chats are synced I also want each media (of synced chats only) to be synced as well.",
"username": "Hongly_Un"
},
{
"code": "$Media$Chat$Mediafinal channel = 'awesome channel';\nrealm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions\n ..add(realm.query<Chat>(r'channel = $0', [channel]))\n ..add(realm.query<Media>(r'channel = $0', [channel]));\n});\n$Media",
"text": "Unfortunately not. This is a well known and to be honest embarrassing limitation that we are working on lifting. For now you basically need to add an extra “foreign-key” relation-ship between $Media and $Chat and use that in your subscription query. Fx. by adding channel to $Media as well, so you can do:I would not recommend doing a query per $Media.",
"username": "Kasper_Nielsen1"
}
] | Is there a way to automatically sync related model along with the synced model? | 2023-02-03T03:59:24.048Z | Is there a way to automatically sync related model along with the synced model? | 906 |
null | [] | [
{
"code": "",
"text": "Hi and good day,i just encountered the error above after i created an index to improve the query performance of a certain collection. field A has a string value and field B has a boolean. I created a compound index for the said fields since it is identified that querying the said field takes a 5 second duration.as per checking on the query it is not using a sorting criteria alsoShould i advise the development to use a sorting field and declare it on the index that will be created to avoid this error ?",
"username": "Daniel_Inciong"
},
{
"code": "sort()",
"text": "Hi @Daniel_InciongI think this is an old error message that signifies that your query has an in-memory sort that exceeds 32MB. See sort operation limits for more details.The last version that I can find this exact message was MongoDB 3.0.15, which was released in 2017. Could you confirm the MongoDB version you’re using?With regard to this specific error, later versions of MongoDB extends this limit to 100MB. However, the main reason for this error is the use of sort() and the lack of index that can be used to perform that sort, so the server was forced to sort in-memory instead.To mitigate this, please see Use Indexes to Sort Query Results.Note that MongoDB 3.0 series was out of support since Feb 2018, so I would strongly recommend you to upgrade to a supported version.Best regards\nKevin",
"username": "kevinadi"
},
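To illustrate the suggestion above, a hedged sketch with hypothetical field names (not taken from the reporter's schema): a compound index whose trailing key matches the sort lets the server return ordered results without an in-memory sort, and on modern servers a large sort can also be allowed to spill to disk:

```js
// Equality field first, sort field second, so the sort can use the index.
db.orders.createIndex({ status: 1, createdAt: -1 });

db.orders.find({ status: "A" }).sort({ createdAt: -1 });

// On MongoDB 4.4+ a find() sort may also be allowed to use disk for large result sets.
db.orders.find({ status: "A" }).sort({ createdAt: -1 }).allowDiskUse();
```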
{
"code": "",
"text": "hi kevinadi,MongoDB version is 2.6.12",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Overflow sort stage buffered data usage of 33556087 bytes exceeds internal limit of 33554432 bytes | 2023-02-08T05:58:31.536Z | Overflow sort stage buffered data usage of 33556087 bytes exceeds internal limit of 33554432 bytes | 654 |
null | [
"aggregation",
"queries",
"crud"
] | [
{
"code": "{ _id: \"1234\",\n name: \"test\",\n items: [\n { \"name\": \"pen\", \"count\": 4}, {\"name\": \"copy\", \"count\": 10}, {\"name\": \"book\", \"count\": 2}\n ]\n}\n{ _id: \"4567\",\n name: \"test2\",\n items: [\n { \"name\": \"pen\", \"count\": 1}, {\"name\": \"copy\", \"count\": 1}, {\"name\": \"book\", \"count\": 10}\n ]\n}\n\nI want to update the data based on the inventory id and the name of the item. Update can be for multiple items but only for 1 inventory.\nExample Update request:\n{ _id: \"1234\",\n name: \"test\",\n items: [\n { \"name\": \"pen\", \"count\": 5}, {\"name\": \"copy\", \"count\": 5}, {\"name\": \"book\", \"count\": 2}\n ]\n}\ndef _prepare_query(data):\nupdate_query = {\n \"$set\": {\n \"items\": {\n \"$map\": {\n \"input\": \"$items\",\n \"as\": \"item\",\n \"in\": {\n \"$cond\": [\n {\"$eq\": [\"$$item.name\", {\"$in\": list(data.keys())}]},\n {\n \"name\": \"$$item.name\",\n \"count\": {\n \"$inc\": [\"$$item.count\", data[\"$$item.name\"]]\n },\n },\n \"$$item\",\n ]\n },\n }\n }\n }\n }\ndata: { \"pen\": 1, \"copy\":-5}\n",
"text": "I am new to MongoDB and not able to generate a query to update items in a list based on the item names.\nMy example object is:\nInventory:{ _id: “1234”,\ndata: { “pen”: 1, “copy”:-5}\n}And the result Inventory should be:I tried and created a query, but facing issue to make it dynamic:where, data is the input dictThis is throwing KeyError: “$$item.name”, which is obv as data doesn’t have a key with that string.\nCan I not prepare the query outside the db.collection.updateOne() call?\nAny help would be appreciated.\nThanks",
"username": "GauD"
},
{
"code": "",
"text": "@Kushagra_Kesav, Can you please help?",
"username": "GauD"
},
{
"code": "{ _id: \"1234\",\n name: \"test\",\n items: [\n { \"name\": \"pen\", \"count\": 4}, {\"name\": \"copy\", \"count\": 10}, {\"name\": \"book\", \"count\": 2}\n ]\n}\n{ _id: \"1234\",\n name: \"test\",\n items: [\n { \"name\": \"pen\", \"count\": 5}, {\"name\": \"copy\", \"count\": 5}, {\"name\": \"book\", \"count\": 2}\n ]\n}\nquery = { \"_id\" : \"1234\" }\nfilters = { \"arrayFilters\" : [\n { \"pen.name\" : \"pen\" } ,\n { \"copy.name\" : \"copy\" }\n] }\ninc = { \"$inc\" : {\n \"items.$[pen].count\" : 1 ,\n \"items.$[copy].count\" : - 5\n} }\ninventory.updateOne( query , inc , filters )\n",
"text": "To updatetousing{ _id: “1234”,\ndata: { “pen”: 1, “copy”:-5}\n}the updateOne call will need the parametersYour _prepare_query can use JS map to create both the inc and filters variables.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you very much. This helped a lot.\nAlso, just for other people reference, the arrayFilters top-level field name must be an alphanumeric string beginning with a lowercase letter.",
"username": "GauD"
},
{
"code": "",
"text": "Hi @steevej,\nCan you also please help to change the query such that if the item is not present in the original document, it should add that item to the final result document with count as provided?(Apologies I am extremely new to MongoDB.)",
"username": "GauD"
},
{
"code": "db.exp.update( { _id: \"1234\" },\n [ \n { \n $set: { \n Items: {\n $reduce: {\n input: { $ifNull: [ \"$Items\", [] ] }, \n initialValue: { items: [], update: false },\n in: {\n $cond: [ { $eq: [ \"$$this.name\", INPUT_DOC.name ] },\n { \n items: { \n $concatArrays: [\n \"$$value.items\",\n [ { name: \"$$this.name\", count: { $inc: [ \"$$this.count\", INPUT_DOC.count ] } } ],\n ] \n }, \n update: true\n },\n { \n items: { \n $concatArrays: [ \"$$value.items\", [ \"$$this\" ] ] \n }, \n update: \"$$value.update\" \n }\n ]\n }\n }\n }\n }\n },\n { \n $set: { \n Items: { \n $cond: [ { $eq: [ \"$Items.update\", false ] },\n { $concatArrays: [ \"$Items.items\", [ INPUT_DOC ] ] },\n { $concatArrays: [ \"$Items.items\", [] ] }\n ] \n }\n }\n }\n ] \n)\n\n",
"text": "I found something like this:But I don’t know how to iterate over INPUT_DOC, if INPUT_DOC is a list of items.",
"username": "GauD"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Updating count of items in list based on item name | 2023-02-03T07:43:22.974Z | Updating count of items in list based on item name | 843 |
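The last question in this thread (add items that are not yet in the array while incrementing the ones that are) was left open. Below is a rough, untested sketch of one way to do it with a pipeline update in mongosh; the collection name is hypothetical, updates stands in for the caller-supplied list, and $add is used where the snippet above had the invalid $inc:

```js
const updates = [{ name: "pen", count: 1 }, { name: "stapler", count: 3 }];

db.inventory.updateOne(
  { _id: "1234" },
  [
    {
      $set: {
        items: {
          $concatArrays: [
            // 1) Existing items, incremented when a matching update exists.
            {
              $map: {
                input: { $ifNull: ["$items", []] },
                as: "it",
                in: {
                  $let: {
                    vars: {
                      m: {
                        $arrayElemAt: [
                          { $filter: { input: updates, as: "u", cond: { $eq: ["$$u.name", "$$it.name"] } } },
                          0
                        ]
                      }
                    },
                    in: {
                      $cond: [
                        { $eq: [{ $type: "$$m" }, "object"] },
                        { name: "$$it.name", count: { $add: ["$$it.count", "$$m.count"] } },
                        "$$it"
                      ]
                    }
                  }
                }
              }
            },
            // 2) Updates whose name is not present yet are appended as new items.
            {
              $filter: {
                input: updates,
                as: "u",
                cond: { $not: [{ $in: ["$$u.name", { $ifNull: ["$items.name", []] }] }] }
              }
            }
          ]
        }
      }
    }
  ]
);
```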
null | [
"aggregation"
] | [
{
"code": "{\n \"_id\" : \"328925Atuador cerâmico, símbolo X\",\n \"valueName\" : \"Atuador cerâmico, símbolo X\",\n \"valueDescription\" : \"\",\n \"language\" : \"Portuguese\",\n \"changedOn\" : \"2/2/2021 5:36 PM\",\n \"valueAlias\" : \"\",\n \"vid\" : \"328925\",\n \"createdOn\" : \"9/18/2019 10:53 AM\",\n \"timestamp\" : ISODate(\"2023-01-26T18:08:20.118+0000\")\n}\n{\n \"_id\" : \"328925แกนเซรามิก\",\n \"valueName\" : \"แกนเซรามิก\",\n \"valueDescription\" : \"\",\n \"language\" : \"Thai\",\n \"changedOn\" : \"\",\n \"valueAlias\" : \"\",\n \"vid\" : \"328925\",\n \"createdOn\" : \"9/18/2019 10:53 AM\",\n \"timestamp\" : ISODate(\"2023-01-26T18:08:20.118+0000\")\n}\n\n{\n \"_id\" : \"164\",\n \"structureGroupIdentifier\" : \"164\",\n \"structureGroupParentIdentifier\" : \"11\",\n \"structureGroupName\" : \"Power Transformers\",\n \"features\" : [\n {\n \"pid\" : \"69\",\n \"parameterName\" : \"Mounting Type (69)\",\n \"vid\" : [\n \"1\",\n \" 328925\",\n \" 339014\",\n \" 384993\",\n \" 409393\",\n \" 411897\",\n \" 420487\"\n ],\n \"rank\" : \"375\",\n \"priority\" : \"Filterable\",\n \"isRange\" : \"\",\n \"dataType\" : \"Character string\",\n \"multivalue\" : \"No\",\n \"leadingParameterIdentifier\" : \"\",\n \"followingParameterIdentifier\" : \"\",\n \"createdOn\" : \"9/17/2019 4:06 PM\",\n \"changedOn\" : \"7/20/2020 12:30 PM\",\n \"longTailKeyword\" : \"\"\n },\n \n ],\n \"timestamp\" : ISODate(\"2023-01-31T01:31:05.787+0000\")\n}\n [\n {\n \"$match\" : {\n \"vid\" : \"328925\"\n }\n }, \n {\n \"$group\" : {\n \"_id\" : \"$vid\",\n \"vidLanguages\" : {\n \"$push\" : \"$$CURRENT\"\n }\n }\n }\n ]\n\n[\n {\n \"$match\" : {\n \"structureGroupIdentifier\" : \"164\"\n }\n }, \n {\n \"$unwind\" : {\n \"path\" : \"$features\",\n \"preserveNullAndEmptyArrays\" : true\n }\n }, \n {\n \"$lookup\" : {\n \"from\" : \"values_master_collection\",\n \"let\" : {\n \"backbone_vid\" : \"$features.vid\"\n },\n \"pipeline\" : [\n {\n \"$match\" : {\n \"vid\" : \"$$backbone_vid\"\n }\n },\n {\n \"$group\" : {\n \"_id\" : \"$vid\",\n \"vidLanguages\" : {\n \"$push\" : \"$$CURRENT\"\n }\n }\n }\n ],\n \"as\" : \"feature.vid\"\n }\n }\n ]\n\n\n{\n \"_id\" : \"164\",\n \"structureGroupIdentifier\" : \"164\",\n \"structureGroupParentIdentifier\" : \"11\",\n \"structureGroupName\" : \"Power Transformers\",\n \"features\" : {\n \"pid\" : \"69\",\n \"parameterName\" : \"Mounting Type (69)\",\n \"vid\" : [\n\n ],\n \"rank\" : \"375\",\n \"priority\" : \"Filterable\",\n \"isRange\" : \"\",\n \"dataType\" : \"Character string\",\n \"multivalue\" : \"No\",\n \"leadingParameterIdentifier\" : \"\",\n \"followingParameterIdentifier\" : \"\",\n \"createdOn\" : \"9/17/2019 4:06 PM\",\n \"changedOn\" : \"7/20/2020 12:30 PM\",\n \"longTailKeyword\" : \"\"\n },\n \"timestamp\" : ISODate(\"2023-01-31T01:31:05.787+0000\")\n}\n\n{\n \"_id\" : \"164\",\n \"structureGroupIdentifier\" : \"164\",\n \"structureGroupParentIdentifier\" : \"11\",\n \"structureGroupName\" : \"Power Transformers\",\n \"features\" : {\n \"pid\" : \"69\",\n \"parameterName\" : \"Mounting Type (69)\",\n \"vid\" : [\n {\n \"_id\" : \"328925\",\n \"vidLanguages\" : [\n {\n \"_id\" : \"328925Atuador cerâmico, símbolo X\",\n \"valueName\" : \"Atuador cerâmico, símbolo X\",\n \"valueDescription\" : \"\",\n \"language\" : \"Portuguese\",\n \"changedOn\" : \"2/2/2021 5:36 PM\",\n \"valueAlias\" : \"\",\n \"vid\" : \"328925\",\n \"createdOn\" : \"9/18/2019 10:53 AM\",\n \"timestamp\" : ISODate(\"2023-01-26T18:08:20.118+0000\")\n },\n {\n \"_id\" : 
\"328925แกนเซรามิก\",\n \"valueName\" : \"แกนเซรามิก\",\n \"valueDescription\" : \"\",\n \"language\" : \"Thai\",\n \"changedOn\" : \"\",\n \"valueAlias\" : \"\",\n \"vid\" : \"328925\",\n \"createdOn\" : \"9/18/2019 10:53 AM\",\n \"timestamp\" : ISODate(\"2023-01-26T18:08:20.118+0000\")\n },\n \n ]\n },\n {}//many more objects\n\n ],\n \"rank\" : \"375\",\n \"priority\" : \"Filterable\",\n \"isRange\" : \"\",\n \"dataType\" : \"Character string\",\n \"multivalue\" : \"No\",\n \"leadingParameterIdentifier\" : \"\",\n \"followingParameterIdentifier\" : \"\",\n \"createdOn\" : \"9/17/2019 4:06 PM\",\n \"changedOn\" : \"7/20/2020 12:30 PM\",\n \"longTailKeyword\" : \"\"\n },\n \"timestamp\" : ISODate(\"2023-01-31T01:31:05.787+0000\")\n}\n",
"text": "I need to lookup data from other collection, I have done this before but when trying to get ot subarray, it is turning painful and I am not sure what I am doing wrong.Below is a sample from values collection.sample document from backbone collectionI developed a pipeline to merge all languages from values collection under a vidI used this pipeline for a lookup in the backbone collection, I am confused about using $$CURRENT in the pipeline of a lookup, but I am not sure of how else accomplish what I need.Here is the pipeline.what I am getting iswhat I need is for it look like thisall help is appreciated",
"username": "venkata_sreekanth_bhagavatula"
},
{
"code": "$match$lookup$expr$in{\n \"$match\": {\n \"$expr\": {\n \"$in\": [\"$vid\", \"$$backbone_vid\"]\n }\n }\n}\n$$ROOT$push$group \"$push\": \"$$ROOT\"\n$lookup\"as\": \"feature.vid\"\"as\": \"features.vid\"db.backbone.aggregate([\n {\n \"$match\": {\n \"structureGroupIdentifier\": \"164\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$features\",\n \"preserveNullAndEmptyArrays\": true\n }\n },\n {\n \"$lookup\": {\n \"from\": \"values_master_collection\",\n \"let\": {\n \"backbone_vid\": \"$features.vid\"\n },\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$in\": [\n \"$vid\",\n \"$$backbone_vid\"\n ]\n }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$vid\",\n \"vidLanguages\": {\n \"$push\": \"$$ROOT\"\n }\n }\n }\n ],\n \"as\": \"features.vid\"\n }\n }\n])\n",
"text": "Hello @venkata_sreekanth_bhagavatula, Welcome to the MongoDB community Forum There are a few fixes needed in your query,",
"username": "turivishal"
},
{
"code": "",
"text": "Thank you for the response. It does generate the result required but it is very slow. The in operator is doing a collection scan of 7 million documents against the backbone array which has like 5 elements. some structures have hundreds of values. I thought I could replace the $in with a function and perform something like db.collection.find to speed it up, but from what I read db object isn’t accessible from function.Anyway thanks for the help, if you know something about how to speed it up please do let me know",
"username": "venkata_sreekanth_bhagavatula"
},
{
"code": "structureGroupIdentifier$expr$lookuplocalFieldforeignField$lookup",
"text": "It does generate the result required but it is very slow. The in operator is doing a collection scan of 7 million documents against the backbone array which has like 5 elementsYou can improve your query by creating an indexs on match properties, like on structureGroupIdentifier,Second, the $expr operator can use indexes on properties if you are using MongoDB 5+ version,If you are using MongoDB’s <5 lower versions then you have to use $lookup with localField and foreignField syntax to support the index. and do other pipeline operations outside $lookup or do it after the query on the front-end side.For more clearification you need to post explain() result of your query,",
"username": "turivishal"
}
] | Aggregate lookups with subarray | 2023-01-31T20:56:06.683Z | Aggregate lookups with subarray | 650 |
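A hedged sketch of the index suggestions above, using the collection and field names that appear in this thread (the right index choices ultimately depend on the full workload):

```js
// Supports the equality $match that starts the pipeline.
db.backbone.createIndex({ structureGroupIdentifier: 1 });

// Supports the foreign-side probe the $lookup performs for every unwound feature.
db.values_master_collection.createIndex({ vid: 1 });
```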
null | [
"compass",
"atlas-cluster",
"next-js"
] | [
{
"code": "error - unhandledRejection: MongoServerSelectionError: connect ECONNREFUSED 52.87.112.56:27017\nat Timeout._onTimeout (/Users/davidmedero/Desktop/nextjs-shopify-tailwind-1/node_modules/mongodb/lib/sdam/topology.js:277:38)\nat listOnTimeout (node:internal/timers:559:17)\nat processTimers (node:internal/timers:502:7) {\nreason: TopologyDescription {\ntype: 'ReplicaSetNoPrimary',\nservers: Map(3) {\n'cluster0-shard-00-02.axvkp.mongodb.net:27017' => [ServerDescription],\n'cluster0-shard-00-00.axvkp.mongodb.net:27017' => [ServerDescription],\n'cluster0-shard-00-01.axvkp.mongodb.net:27017' => [ServerDescription]\n},\nstale: false,\ncompatible: true,\nheartbeatFrequencyMS: 10000,\nlocalThresholdMS: 15,\nsetName: 'atlas-z81cvx-shard-0',\nmaxElectionId: null,\nmaxSetVersion: null,\ncommonWireVersion: 0,\nlogicalSessionTimeoutMinutes: null\n},\ncode: undefined,\n[Symbol(errorLabels)]: Set(0) {}\n}\n",
"text": "Hi MongoDB community!Everything has been running smoothly until 2 days ago. Yesterday, I resume my cluster and notice that my data isn’t showing up on my site in localhost. I was getting 500 internal server errors in my dev environment ONLY (localhost:3000). My site in production was working just fine. I tried debugging all day with atlas basic tech support, but to no avail. I pause the cluster after trying to fix the issue all day. Today, I resume the cluster then go to my prod site and was shocked to see no data. Also getting 500 errors. So, now I can’t connect to MongoDB at all! Both dev and prod environments can’t connect to my MongoDB Atlas cluster. I can’t connect to my cluster via Compass either even though I never used it until now for debugging purposes. What’s super weird is that there is literally no reason for my prod site to lose connection to the DB because I haven’t pushed or deployed any code to Vercel or Github in the last 35 days. I host my Next js headless Ecommerce app on Vercel. I recently integrated Vercel with MongoDB, yet it was working for a whole year without that integration. I thought the integration would at least fix the issue in Prod, but it didn’t. My IP address in Network Access is set to 0.0.0.0. I also updated my password in Database Access, but that didn’t do anything either. I don’t have any VPNs or Firewalls enabled on my Mac. I also tried on PC and on a different Mac.In localhost:3000, when I start up my app, I get the 500 error in my console log in about 20-30 seconds. So, I guess it’s trying to connect then it just times out. In my VSCode terminal, I get this error at the same time:I would really appreciate your help on this one guys. I’m at a loss.",
"username": "David_Medero"
},
{
"code": "",
"text": "Never mind, I fixed it. I had to delete my node modules and package-lock.json then npm install. I was using Bun prior, but I noticed it was installing different versions of my dependencies.",
"username": "David_Medero"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Suddenly can't connect to DB for apparently no reason at all | 2023-02-07T20:13:16.725Z | Suddenly can’t connect to DB for apparently no reason at all | 1,624 |
null | [
"atlas-functions"
] | [
{
"code": "listCollections",
"text": "This may be a very basic question, but is it possible to run mongo shell commands from the Realm Functions? There are plenty of mongo shell commands that I’d like to run, but I don’t see the relation between the mongo shell and the MongoDB Realm Functions / Realm Administration API.For instance, the listCollections function: https://docs.mongodb.com/manual/reference/command/listCollections/. Every research on that leads me to the mongo shell.",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "@Jean-Baptiste_Beau Can you try the node.js driver? It has a collections method -\nhttps://mongodb.github.io/node-mongodb-native/markdown-docs/collections.html#list-collections",
"username": "Ian_Ward"
},
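For reference, the plain Node.js driver call being suggested looks roughly like this when run from a normal Node.js process (as noted later in the thread, the driver cannot be loaded inside a Realm Function); the connection string and database name are placeholders:

```js
const { MongoClient } = require("mongodb");

async function listCollectionNames() {
  const client = new MongoClient("mongodb+srv://<user>:<password>@<cluster>/");
  try {
    await client.connect();
    const collections = await client.db("myDatabase").listCollections().toArray();
    return collections.map(c => c.name);
  } finally {
    await client.close();
  }
}
```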
{
"code": "MongoDB Node.js Driver is not supported. Please visit our MongoDB Realm documentation.",
"text": "@Ian_Ward using the node.js driver in Realm Functions yields the error:MongoDB Node.js Driver is not supported. Please visit our MongoDB Realm documentation.",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "how about this -\nhttps://docs.mongodb.com/realm/admin/api/v3/#post-/groups/{groupid}/apps/{appid}/services/{serviceid}/commands/{commandname}",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Perhaps @Jean-Baptiste_Beau you can try to load the MongoDB js driver using external dependency and then require it in function.https://docs.mongodb.com/realm/functions/upload-external-dependencies/\nThis said you will still have to provide atlas connection user and uri but those could be stored as secrets.",
"username": "Pavel_Duchovny"
},
{
"code": "var MongoClient = require('mongodb').MongoClient;",
"text": "@Pavel_Duchovny I uploaded the driver and its dependencies, however I still get the error when I do:var MongoClient = require('mongodb').MongoClient;Screen Shot 2020-09-15 at 08.30.052404×1498 231 KB",
"username": "Jean-Baptiste_Beau"
},
{
"code": "listCollections",
"text": "@Ian_Ward what would be the service name? Could you show me an example with listCollections please?",
"username": "Jean-Baptiste_Beau"
},
{
"code": "POST /groups/ **{groupId}** /apps/ **{appId}** /services/ **{serviceId}** /commands/ **{commandName}**list_collections",
"text": "@Jean-Baptiste_Beau POST /groups/ **{groupId}** /apps/ **{appId}** /services/ **{serviceId}** /commands/ **{commandName}**{commandName} should be list_collections",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Did you ever get this to work? Do you have example code to share?",
"username": "Nina_Friend"
},
{
"code": "listCollections",
"text": "No unfortunately I couldn’t get this to work. Since I only needed the listCollections methods, I ended up hardcoding an array with all the object types in my database and looping through this instead.",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "Are you figure this out? I’m stuck in this for two days ",
"username": "prineth_fernando"
},
{
"code": "",
"text": "It whould be better if you can explain more",
"username": "prineth_fernando"
}
] | Run mongo shell commands from MongoDB Realm Functions | 2020-09-11T16:06:16.940Z | Run mongo shell commands from MongoDB Realm Functions | 5,171 |
null | [
"aggregation",
"queries"
] | [
{
"code": "Primary object = {\"my_custom_id\":\"1234\", \"primaryProp\":\"abc\"} \nSupplemental object = {\"my_custom_id\":\"1234\", \"supplementalProp\":\"123\"}. \nobject = {\"my_custom_id\":\"1234\", \"primaryProp\":\"abc\", \"supplementalProp\":\"123\"} \n",
"text": "Hello,I am trying to use aggregate operations to retrieve data in a collection in a unique manner.We have primary objects and supplemental objects in the same collection that are linked by a custom string id. Example:The requirement is when data is read, we need to factor in the supplemental objects properties to the base primary object before returning the data. So a resulting query would come back as the following flattened object:If the supplemental object has a property that conflicts with the primary, it should be overwritten to use the supplemental value.I am new to Mongo and it looks like an aggregate operation is what I need. I have match criteria to quickly filter down to the objects I need. From there, the resultant objects will be a bunch of primary and supplemental objects that can be matched via my custom id. The output I need is described above where we flatten the supplemental values into the base primary object and return the result.This is where I get a little lost. It seems like merge, add fields could work but it looks like the result is persisted to the database. Whereas I am just looking to return the newly constructed object to my user but leave the objects that were used as part of the aggregate operation untouched.Any advice would be much appreciated! Thank you.",
"username": "Rory_O_Brien"
},
{
"code": "",
"text": "Aggregation is indeed the way to go.The first stage would be a $lookup to find the supplemental object.The second state would be a $replaceRoot that uses $mergeObjects. A little bit like they do in this example.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you for the response. That looks very close to what I am trying to do.A follow up question. How can I use $lookup on only the records that are returned by the match stage? So instead of using lookup to find the match within a different collection, I want to perform it on the objects returned in the first match stage.Should I be looking for a different stage? Maybe $group?",
"username": "Rory_O_Brien"
},
{
"code": "c.aggregate( [\n { \"$match\" : { \"my_custom_id\" : \"1234\" } } ,\n { \"$sort\" : { \"_id\" : 0 } } ,\n { \"$group\" : {\n \"_id\" : null , \n \"_result\" : { \"$mergeObjects\" : \"$$ROOT\" }\n } } ,\n { \"$replaceRoot\" : {\n \"newRoot\" , \"$_result\"\n } }\n] )\n",
"text": "Maybe you do not really need $lookup, $mergeObjects can be used as the accumulator of $group. See https://www.mongodb.com/docs/manual/reference/operator/aggregation/mergeObjects/#-mergeobjects-as-an-accumulator.You might need to do a $sort to ensureIf the supplemental object has a property that conflicts with the primary, it should be overwritten to use the supplemental value",
"username": "steevej"
}
] | Using aggregate to fetch documents and merge them based on an id property | 2023-02-07T03:02:12.280Z | Using aggregate to fetch documents and merge them based on an id property | 1,383 |
null | [
"node-js",
"atlas-cluster",
"containers"
] | [
{
"code": "VARIANT 1\n\"MongoNetworkTimeoutError: connection timed out\n at connectionFailureError (/app/node_modules/mongodb/lib/cmap/connect.js:389:20)\n at TLSSocket.<anonymous> (/app/node_modules/mongodb/lib/cmap/connect.js:310:22)\n at Object.onceWrapper (node:events:627:28)\n at TLSSocket.emit (node:events:513:28)\n at TLSSocket.emit (node:domain:489:12)\n at Socket._onTimeout (node:net:568:8)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7) {\"\n\nVARIANT 2\n\"MongoServerSelectionError: connection <monitor> to 192.168.248.2:27017 timed out\n at Timeout._onTimeout (/app/node_modules/mongodb/lib/sdam/topology.js:285:38)\n at listOnTimeout (node:internal/timers:564:17)\n at process.processTimers (node:internal/timers:507:7) {\"\n\nVARIANT 3\nMongoServerSelectionError: connection 1 to 35.233.114.132:27017 closed\n at .listOnTimeout ( node:internal/timers:564 )\n at process.processTimers ( node:internal/timers:507 )\n\nVARIANT 4\nMongoNetworkError: connection 96 to 192.168.248.3:27017 closed\n at .TLSSocket.emit ( node:events:513 )\n at .TLSSocket.emit ( node:domain:489 )\n at undefined. ( node:net:313 )\n at .TCP.done ( node:_tls_wrap:587 )\n\nVARIANT 5\nPoolClearedError [MongoPoolClearedError]: Connection pool for production-shard-00-01-pri.xxxxx.mongodb.net:27017 was cleared because another operation failed with: \"connection <monitor> to 192.168.248.3:27017 timed out\"\n at .Server.emit ( events.js:400 )\n at .Server.emit ( domain.js:475 )\n at .Monitor.emit ( events.js:400 )\n\nVARIANT 6\nMongoPoolClearedError: Connection pool for production-shard-00-01-pri.xxxxx.mongodb.net:27017 was cleared because another operation failed with: \"connection <monitor> to 192.168.248.3:27017 timed out\"\n at .Server.emit ( node:events:513 )\n at .Server.emit ( node:domain:489 )\n at .Monitor.emit ( node:events:513 )\n\nVARIANT 7\nPoolClearedOnNetworkError: Connection to production-shard-00-02-pri.xxxxx.mongodb.net:27017 interrupted due to server monitor timeout\n",
"text": "Hello,This is following on my previous topic.We are still having issues with the connection to MongoDB from our GCP Cloud Run service.Stack:GCP support team has verified that the configuration on that side is correct. The issue seems to be related to MongoDB.THE ISSUE\nMultiple times per day, we get many errors on our server about the connection with MongoDB. Here are some examples:The issues happen on our QA and development environment too, but much less frequently due to much lower usage.In short\nUnstable connection between Cloud Run and MongoDB. The connection is closed, or the connection timed out, or the connection pool was cleared.The IP address of the GCP VPC Network (subnet) is whitelisted on MongoDB side.What we triedAny help or ideas would be appreciated!Thanks!",
"username": "Laurens"
},
{
"code": "maxIdleTimeMSmaxIdleTimeMS=60000",
"text": "Hi @Laurens,My name is Alex and I’m a Product Manager on the Developer Experience team at MongoDB. First off, apologies for the delay in responding. We take these matters very seriously as our goal is to ensure the best possible experience for developers working with our tools and interfaces.We are continuing to work on improving our Drivers to ensure they are as resilient as possible within serverless environments such as Google Cloud Run and AWS Lamba, however on occasion some default values may need to be tuned.Any help or ideas would be appreciated!Though it’s difficult to determine without doing a full analysis if the issues are transient network issues, configuration issues, application/workload issues or some other issue, one recommendation we can make is to try setting the maxIdleTimeMS connection string option to 60000 (1 minute).Some users have reported less frequent connection timeout errors being raised in GCP environments as a result. As each error variant you shared appears to share a common root of a connection timing out, please test with a maxIdleTimeMS=60000 and let us know if this reduces the frequency of the errors you’re experiencing.",
"username": "alexbevi"
},
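For anyone applying this suggestion, the option can be set either in the connection string or in the Node.js driver options; a small sketch with a placeholder URI:

```js
const { MongoClient } = require("mongodb");

// Option A: as a connection string parameter.
const uri = "mongodb+srv://<user>:<password>@<cluster>/?retryWrites=true&maxIdleTimeMS=60000";

// Option B: as a driver option.
const client = new MongoClient(uri, { maxIdleTimeMS: 60000 });
```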
{
"code": "maxIdleTimeMS=60000",
"text": "Hi Alex,Thank you for your response.I will try setting the maxIdleTimeMS=60000 and see what happens.\nIn the meantime, I’ve also discussed with someone via Support Chat and decided to make a Support case for this too. I will mention as well that this fix is being tried out.Of cousre, I will share my findings here.",
"username": "Laurens"
},
{
"code": "{ maxIdleTimeMS: 60000 }",
"text": "An update from the test with { maxIdleTimeMS: 60000 }:\nThe issue has not occurred for a couple of days now! This should prove that the solution works. I will mark this as solved and in the case we do see the errors pop up again revisit this thread.",
"username": "Laurens"
},
{
"code": "",
"text": "@alexbevi I am experiencing similar, is there a more permanent solution rather than maxIdleTimeMS=60000 ?",
"username": "Brad_Beighton"
},
{
"code": "maxIdleTimeMS",
"text": "@alexbevi I am experiencing similar, is there a more permanent solution rather than maxIdleTimeMS=60000 ?Hi @Brad_Beighton. For the moment if you’re experiencing this issue adjusting the maxIdleTimeMS should reduce the occurrences of the issue. We are continuing to work on improving the developer experience with our Drivers in environments such as GCP Cloud Run however we do not have a public timeline we can share yet as to what a more permanent solution would be or when it would be available.",
"username": "alexbevi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unstable connection between GCP Cloud Run and MongoDB Atlas (2) | 2023-01-17T19:12:59.020Z | Unstable connection between GCP Cloud Run and MongoDB Atlas (2) | 3,626 |
null | [
"compass"
] | [
{
"code": "",
"text": "I’m wanting to work with prisma and some of my field types are not uniform, as in all double.\nis there a way to bulk convert them all to double in compass, or is there a way to do it with the mongo shell",
"username": "Richard_Locke"
},
{
"code": "",
"text": "Hello @Richard_Locke ,Welcome to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, could you please share below details for me to understand your use case better?some of my field types are not uniform, as in all double.\nis there a way to bulk convert them all to doubleRegards,\nTarun",
"username": "Tarun_Gaur"
}
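Since the thread does not show a concrete conversion command, here is a hedged mongosh sketch of one way such a bulk conversion can be done; the collection name "items" and field name "price" are placeholders, and $toDouble will raise an error for values that cannot be converted (e.g. non-numeric strings), so test on a copy first.

```js
// Convert every non-double "price" into a double using a pipeline update (MongoDB 4.2+).
db.items.updateMany(
  { price: { $exists: true, $not: { $type: "double" } } },
  [ { $set: { price: { $toDouble: "$price" } } } ]
);

// Check which BSON types remain afterwards.
db.items.aggregate([
  { $group: { _id: { $type: "$price" }, count: { $sum: 1 } } }
]);
```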
] | Best way to bulk convert field type to do double | 2023-02-01T18:28:15.218Z | Best way to bulk convert field type to do double | 635 |
null | [
"queries"
] | [
{
"code": "",
"text": "Buen día,Como puedo eliminar/modificar una colección, que por error su nombre fue mal escrito con espacios, caracteres, etc.Ejemplo:\nNombre de una colección con espacios\nusuarios Doc\nSesionesDoc Tx {1};",
"username": "edith_t"
},
{
"code": "db.collection.renameCollection()db.rrecord.renameCollection(\"record\")\nrrecordrecord",
"text": "Hello @edith_t ,Welcome to The MongoDB Community Forums! I tried translating your question in Google translate and please correct me if my understanding of your use-case is wrong but I think you are trying to rename an existing collection in your database?If that is right, then I believe you can use db.collection.renameCollection(). This method operates within a collection by changing the metadata associated with a given collection.Call the db.collection.renameCollection() method on a collection object. For example:This operation will rename the rrecord collection to record .Refer to the documentation renameCollection for additional warnings and messages.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Hello @Tarun_GaurI tried change the name of the collection but show the next error:db.Prueba uno.renameCollection(“prueba”);Error: clone(t={}){const r=t.loc||{};return e({loc:new Position(\"line\"in r?r.line:this.loc.line,\"column\"in r?r.column:……)} could not be cloned.The problem is rename a collection when have space in the name, because does not allow me delete or modify this collection.",
"username": "edith_t"
},
{
"code": "mongoshvar authColl = db.getCollection(\"Prueba uno\");\nauthColl.renameCollection(\"prueba\");\n",
"text": "Hey @edith_t ,To update such collection name, one can try using below code in mongoshAlternate way to do the same via querydb[“Prueba uno”].renameCollection(“prueba”)Note: Please test the code in your test environment and update the code as per your requirements before making any changes to production environment.Tarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Thank you very much, this query was just what I needed, it helped me to solve my doubt.Regards.",
"username": "edith_t"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Eliminar colecciones en mongodb | 2023-02-02T00:04:39.696Z | Eliminar colecciones en mongodb | 881 |
[] | [
{
"code": "",
"text": "Hello Guys,\nI’m configuring my projects and clusters with terraform. But when try to configure de peering on Azure i got the follow error:\nimage1371×78 11.2 KB\n(“AZURE_CUSTOMER_NETWORK_UNREACHABLE”) External Azure subscription unreachableI did all configuration by the book. I’m gues has some configuration on AzureAd.Has Somebody ever here did the “peering” confiruration by terraform? Can show me the Azure configuration\nand if i lack something?thanks everybody.",
"username": "Davidson_Silva"
},
{
"code": "",
"text": "I also experienced the same issue. Did you found any solution?",
"username": "Thisura_Wijesekera"
},
{
"code": "",
"text": "@Davidson_Silva this happened due to an incorrect service principal(enterprise application) on the Azure end. First, create an Azure sp with application ID “e90a1407-55c3-432d-9cb1-3638900a9d22” and used its id in the role assignment’s principal_id",
"username": "Thisura_Wijesekera"
},
{
"code": "",
"text": "Hi @Thisura_Wijesekera .I’m sorry, i realy forgotten to response here.But it is correct. I’ve never thount should you need to use exactly the same application ID!You don’t need to create a new applicationId. Only use on the existing once principal.Here the documentation:\nimage837×409 18.6 KB\nNice!! Tnks man.",
"username": "Davidson_Silva"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | External Azure subscription unreachable - Peering Terraform Azure | 2022-11-11T21:26:25.800Z | External Azure subscription unreachable - Peering Terraform Azure | 2,135 |
|
null | [
"android",
"kotlin"
] | [
{
"code": "Realm.init(this)initio.realm.kotlinAppAppConfiguration.Builder(\"<YOUR-APP-ID>\").build()loginAsyncLoginResponseit",
"text": "I encountered several errors related to the use of the Realm SDK in your Android Studio project written in Kotlin.The first error was related to the initialization of the Realm library in your project, where the Realm.init(this) method was giving an “Unresolved reference: init” error. This was because the init method was not present in the io.realm.kotlin dependency you were using.The second error was related to the creation of an App instance using the AppConfiguration.Builder(\"<YOUR-APP-ID>\").build() method, which was giving an “Unresolved reference. None of the following candidates is applicable because of receiver type mismatch” error.The third error was related to the use of the loginAsync method, which was giving an “Unresolved reference: loginAsync” error.The fourth error was related to the use of a lambda expression, which was giving a “Cannot infer a type for this parameter. Please specify it explicitly.” error.The fifth error was related to the use of the LoginResponse class, which was giving an “Unresolved reference: LoginResponse” error.The sixth error was related to the use of the it keyword in a lambda expression, which was giving an “Unresolved reference: it” error.The seventh error was related to the use of a lambda expression, which was giving a “Cannot infer a type for this parameter. Please specify it explicitly.” error.",
"username": "M_Abdullah_Qureshi"
},
{
"code": "",
"text": "@M_Abdullah_Qureshi: Welcome to the MongoDB community!Can you tell us which project you are talking about here?. - I encountered several errors related to the use of the Realm SDK in your Android Studio project written in Kotlin.",
"username": "Mohit_Sharma"
}
] | How does the latest version of realm sdk can be used to get the documents from my cluster? | 2023-02-06T18:29:38.044Z | How does the latest version of realm sdk can be used to get the documents from my cluster? | 1,176 |
[
"replication",
"monitoring"
] | [
{
"code": "",
"text": "We’re using a standard 3-node Atlas replicaset in a dedicated cluster (M10, Mongo 6.0.3, AWS) and have configured an alert if the ‘Restarts in last hour is’ rule exceeds 0 for any node.We’re seeing this alert fire every now and then and we’re wondering what this means for a node in a dedicated cluster and whether this is something to be concerned about, since I don’t think we have any control over it. Should we should disable this rule or increase the restart threshold?Thanks in advance for any advice.",
"username": "Ben_Morris"
},
{
"code": "",
"text": "Hey @Ben_Morris,Welcome to the MongoDB Community Forums! We’re seeing this alert fire every now and then and we’re wondering what this means for a node in a dedicated cluster and whether this is something to be concerned about, since I don’t think we have any control over it.A node restarting is not necessarily a cause for concern. However, you should investigate the cause of the restart itself to better determine if this is an issue or not. You should take a look at your Project Activity Feed to see if you can determine why the nodes are restarting. I understand you have noted this is an M10 cluster so you should have access to the MongoDB logs, you also can check those to try determine the cause of the node restart. If you do not have access to the logs, you can consider working with Atlas in-app chat support to diagnose the issue.Should we should disable this rule or increase the restart threshold?It’s always good to keep the alerts active, as they can indicate a potential problem as soon as they occur. You can consider increasing the restart threshold to reduce alert noise after concluding whether the restarts are expected or not.Hoping this helps. Please feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "",
"text": "Thanks for your detailed reply @Satyam. In my case, having checked the activity feed I was able to match up all the alerts we were seeing to Mongo version auto-updates on the nodes. We still wanted to keep that so we’ve increased our alert threshold to fire on >1 restart per hour rather than >0 restart. Thanks again for your help.",
"username": "Ben_Morris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Should I be concerned if I receive monitoring alerts that my Atlas nodes have restarted? | 2023-01-31T15:10:20.267Z | Should I be concerned if I receive monitoring alerts that my Atlas nodes have restarted? | 1,145 |
|
null | [
"aggregation",
"java",
"time-series"
] | [
{
"code": "insight{\n \"timestamp\" : ISODate(\"2018-02-01T00:00:00.000+0000\"),\n \"meta\" : {\n \"customerId\" : ObjectId(\"437573746f6d657232343639\"),\n \"insightId\" : ObjectId(\"636f6e73756d61626c650000\"),\n \"itemDefinitionId\" : ObjectId(\"4974656d4465663200000000\"),\n \"itemId\" : ObjectId(\"4974656d3100000000000000\"),\n \"locationId\" : ObjectId(\"4c6f636174696f6e37333200\"),\n \"month\" : NumberInt(2),\n \"tenantId\" : ObjectId(\"64656d6f3100000000000000\"),\n \"year\" : NumberInt(2018)\n },\n \"_id\" : ObjectId(\"63da7cd53e3dae5a51c21b4c\"),\n \"data\" : {\n \"value\" : {\n \"quantity\" : 100.83965259603328,\n \"cost\" : 116.6917902887385\n }\n }\n}\ndb.insight.aggregate([\n {\n $group: {\n _id: {\n itemId: \"$meta.itemId\",\n itemDefinitionId: \"$meta.itemDefinitionId\",\n customerId: \"$meta.customerId\",\n locationId: \"$meta.locationId\",\n tenantId: \"$meta.tenantId\",\n insightId:\"$meta.insightId\",\n year:\"$meta.year\",\n month:\"$meta.month\"\n }\n }\n }]).toArray().length\n{\n \"meta.tenantId\" : 1,\n \"meta.insightId\" : 1,\n \"meta.customerId\" : 1,\n \"timestamp\" : -1\n}\n{\n \"ns\" : \"test.insight\",\n \"size\" : NumberLong(21479128473),\n \"timeseries\" : {\n \"bucketsNs\" : \"test.system.buckets.insight\",\n \"bucketCount\" : NumberInt(35610000),\n \"avgBucketSize\" : NumberInt(603),\n\t\t...\n },\n \"storageSize\" : NumberLong(3451998208),\n\t...\n\t\n \"indexSizes\" : {\n \"meta.tenantId_1_meta.insightId_1_meta.customerId_1_timestamp_-1\" : NumberInt(506097664)\n },\n\t...\n}\n{\n \"ns\" : \"test.copy_of_insight\",\n \"size\" : NumberLong(2691175332),\n \"timeseries\" : {\n \"bucketsNs\" : \"test.system.buckets.copy_of_insight\",\n \"bucketCount\" : NumberInt(2376681),\n \"avgBucketSize\" : NumberInt(1132),\n\t\t...\n },\n \"storageSize\" : NumberInt(1077383168),\n\t...\n \"indexSizes\" : {\n \"meta.tenantId_1_meta.insightId_1_meta.customerId_1_timestamp_-1\" : NumberInt(34258944)\n },\n\t...\n}\n",
"text": "Hello, I’m experimenting with MongoDB 6.0 on Atlas, and more specifically with the newest timeseries special collection.\nI started with an M10 cluster tier and, using a custom-made Java application, I filled a collection named insight with 45M (millions) of sample documents. A typical document is the following:The meta fields are not completely random: an aggregation count like the following returns 1.5MMy bulk load procedure on M10 required 10+ hours, I had to restart it twice (each time skipping times already filled) due to my PC suspension. Moreover it also went through two automatic storage resize of Atlas (which worked perfectly). At the end I created a secondary index, with this definition:The secondary index is crucial for my experiment and I was really surprised to discover its size: 482.7 MB.\nI then checked the dbstats of the collection and got the following:In summary the storage size is 3.2 GB, whereas the size is 20 GB. And again, the index is 482.7 MB. Plus, the bucket count is 35.6 millions.I repeated my procedure, this time on M40 with no storage resize required. After 3 hours, without interruption, the same data appeared on the collection, but with completely different (and better) internals.\nNot sure about the repeatability of my load procedure I tried a very different approach: using 3T Studio I selected Duplicate Collection and got a copy of the very first collection (the one with storage size 3.2 GB, size 20 GB, and index 482.7 MB). The dbstats confirmed the info I got from my 2nd attempt:The rewritten collection has size 2.5 GB, storage size 1 GB and index size 32.7 MB. And the bucket count is 2.3 millions.\nI can provide a snapshot of the database, with both the collections (they are in my Atlas account) for further examination, there is no sensitive data in it.I wonder if I hit, during my first load, some kind of non-optimized (or bugged) code related to the bucket creation. I suspect that 35 millions of buckets lead to a 400+MB of index size and then the 3T Studio Duplicate Collection fully rewriting the data optimized the bucket distribution, lowering them to 2.3 and thus to a 32MB of index size.\nIf you are aware that such situation of “unoptimized buckets” could happen in certain situations (such as storage resize in Atlas) is it possibile to see in a future version of MongoDB an admin command to rebalance the buckets ?\nPlease take into account that a 400+MB of index size (vs an optimal 32MB) has serious impact on the cluster tier on Atlas, if you plan to have the index in memory. Not mentioning how differently the 2 collections performed on the same aggregation (using the secondary index) due to how the data are differently distributed between 35 millions of buckets vs 2.3 millions of buckets.",
"username": "Aldo"
},
{
"code": "",
"text": "The main difference between the two collections is the way they were created. The first collection was filled with a custom-made Java application, which took 10+ hours and went through storage resizes. The secondary index size was 482.7 MB. The second collection was created by duplicating the first collection with 3T Studio, which resulted in a smaller size, storage size, and index size (2.5 GB, 1 GB, and 32.7 MB respectively). The bucket count also decreased to 2.3 million. The difference in size and structure is likely due to the differences in the data loading process and the underlying storage mechanisms used in each case.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "Hello Sumanta,\nthat’s exactly what I supposed.\nThe real questions are:For the question 1 I really have no idea.\nFor the question 2 (identical data but very different underlying storage distribution, with very different costs and performances, such as 10x) the first thing that come to my mind is named “vacuum” on PostgreSQL. And the second thing that come to my mind is named “compaction” on Apache Cassandra. I would like to hear something about it from the MongoDB team…Thanks",
"username": "Aldo"
}
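There is no built-in "rebalance buckets" command referenced in this thread, but the rewrite that Studio 3T's Duplicate Collection performed can be approximated by hand. The following mongosh sketch is only an illustration: the target collection name, granularity value, and batch size are assumptions, not anything from the thread.

```js
// Create a fresh time series collection with the same time/meta fields.
db.createCollection("insight_rewritten", {
  timeseries: { timeField: "timestamp", metaField: "meta", granularity: "hours" }
});

// Re-insert the documents in batches so the buckets are rebuilt from scratch.
let batch = [];
db.insight.find().forEach(doc => {
  batch.push(doc);
  if (batch.length === 1000) {
    db.insight_rewritten.insertMany(batch);
    batch = [];
  }
});
if (batch.length > 0) db.insight_rewritten.insertMany(batch);
```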
] | Inefficient/unpredictable storage for timeseries collection? | 2023-02-04T17:23:56.663Z | Inefficient/unpredictable storage for timeseries collection? | 1,084 |
null | [
"queries",
"react-native",
"react-js"
] | [
{
"code": "//React\nimport React, { useContext, useState, useEffect, useRef } from \"react\";\n\n//Realm\nimport Realm from \"realm\";\nimport app from \"../realmApp\";\n\n//Scehemas\nimport { User } from \"../schemas\";\n\n// Create a new Context object that will be provided to descendants of\n// the AuthProvider.\nconst AuthContext = React.createContext(null);\n\n// The AuthProvider is responsible for user management and provides the\n// AuthContext value to its descendants. Components under an AuthProvider can\n// use the useAuth() hook to access the auth value.\nconst AuthProvider = ({ children }) => {\n const [user, setUser] = useState(app.currentUser);\n\n const [personalDetails, setPersonalDetails] = useState({});\n\n const realmRef = useRef(null);\n\n const [userCart, setUserCart] = useState([]);\n\n useEffect(() => {\n if (!user) {\n return;\n }\n console.log(\"User Realm Openned\");\n // The current user always has their own project, so we don't need\n // to wait for the user object to load before displaying that project.\n\n const OpenRealmBehaviorConfiguration = {\n type: \"openImmediately\",\n };\n // TODO: Open the user realm, which contains at most one user custom data object\n // for the logged-in user.\n const config = {\n schema: [User.UserSchema, User.User_cartSchema, User.User_detailsSchema],\n sync: {\n user,\n partitionValue: `user=${user.id}`,\n newRealmFileBehavior: OpenRealmBehaviorConfiguration,\n existingRealmFileBehavior: OpenRealmBehaviorConfiguration,\n },\n };\n\n // Open a realm with the logged in user's partition value in order\n // to get the projects that the logged in user is a member of\n Realm.open(config).then((userRealm) => {\n realmRef.current = userRealm;\n const users = userRealm.objects(\"User\");\n console.log(users);\n users.addListener(() => {\n // The user custom data object may not have been loaded on\n // the server side yet when a user is first registered.\n\n if (users.length !== 0) {\n const { cart, details } = users[0];\n setUserCart([...cart]); //To set cart of user on login\n setPersonalDetails(details);\n }\n });\n });\n // TODO: Return a cleanup function that closes the user realm.\n // console.log(\"Is this the error?\");\n return () => {\n console.log(\"Closing User realm\");\n // cleanup function\n const userRealm = realmRef.current;\n if (userRealm) {\n userRealm.close();\n realmRef.current = null;\n\n setUserCart([]); // set project data to an empty array (this prevents the array from staying in state on logout)\n // setImage(null);\n // setImageForm(null);\n setPersonalDetails(null);\n console.log(\"Closing User realm\");\n }\n };\n }, [user]);\n\n // The signIn function takes an email and password and uses the\n // emailPassword authentication provider to log in.\n\n const resetPass = async (email, password) => {\n try {\n await app.emailPasswordAuth.callResetPasswordFunction({\n email,\n password,\n });\n // await app.emailPasswordAuth.resetPassword(token, tokenId);\n } catch (err) {\n console.log(err);\n }\n };\n\n const deleteUser = async (email) => {\n await app.deleteUser(email);\n };\n\n const passResetEmail = async (emailAddress) => {\n await app.emailPasswordAuth.sendResetPasswordEmail(emailAddress);\n };\n\n // The signOut function calls the logOut function on the currently\n // logged in user\n const signOut = () => {\n if (user == null) {\n console.log(\"Not logged in, can't log out!\");\n return;\n }\n user.logOut();\n setUser(null);\n };\n\n const signIn = async (email, password) => {\n const creds = 
Realm.Credentials.emailPassword(email, password);\n const newUser = await app.logIn(creds);\n setUser(newUser);\n return newUser;\n };\n\n const signUp = async (email, password) => {\n await app.emailPasswordAuth.registerUser({\n email,\n password,\n });\n console.log(\"user signed up\");\n };\n\n const addToUserCart = (itemId, qty) => {\n console.log(\"Adding Item to cart\");\n\n const userRealm = realmRef.current;\n const user = userRealm.objects(\"User\")[0];\n\n userRealm.write(() => {\n const result = user.cart.find((obj) => obj.productId === String(itemId));\n\n if (result) {\n const index = user.cart.indexOf(result);\n user.cart[index][\"qty\"] += parseInt(qty);\n } else {\n user.cart.push({\n productId: String(itemId),\n qty: parseInt(qty),\n });\n }\n });\n };\n\n const removeFromUserCart = (itemId) => {\n console.log(\"Removing Item from cart\");\n\n const userRealm = realmRef.current;\n const user = userRealm.objects(\"User\")[0];\n\n console.log(\"Item ID\", itemId);\n\n userRealm.write(() => {\n const result = user.cart.find((obj) => obj.productId === String(itemId));\n\n console.log(\"Result:\", result);\n\n if (result) {\n const index = user.cart.indexOf(result);\n user.cart.splice(index, 1);\n }\n });\n\n const { cart } = user;\n setUserCart([...cart]);\n };\n\n const emptyUserCart = () => {\n console.log(\"Emptying cart\");\n\n for (let i = 0; i < userCart.length; i++) {\n removeFromUserCart(userCart[i][\"productId\"]);\n }\n setUserCart([]);\n };\n\n const updateQuantity = (itemId, bool) => {\n console.log(\"Updating Item quantity\");\n\n const userRealm = realmRef.current;\n const user = userRealm.objects(\"User\")[0];\n\n userRealm.write(() => {\n const result = user.cart.find((obj) => obj.productId === String(itemId));\n\n if (result) {\n const index = user.cart.indexOf(result);\n bool ? 
(user.cart[index][\"qty\"] += 1) : (user.cart[index][\"qty\"] -= 1);\n }\n });\n };\n\n const updateAvatar = (image, imageForm) => {\n console.log(\"Updating Avatar\");\n\n const userRealm = realmRef.current;\n const user = userRealm.objects(\"User\")[0];\n\n userRealm.write(() => {\n user.details[\"image\"] = image;\n user.details[\"imageForm\"] = imageForm;\n });\n };\n\n const updateUserDetails = (state) => {\n console.log(\"Updating user details\");\n const userRealm = realmRef.current;\n const user = userRealm.objects(\"User\")[0];\n userRealm.write(() => {\n user.details[\"name\"] = state.name;\n user.details[\"userName\"] = state.userName;\n user.details[\"phoneNumber\"] = state.phoneNumber;\n user.details[\"countryCode\"] = state.countryCode;\n user.details[\"altPhoneNumber\"] = state.altPhoneNumber;\n user.details[\"altCountryCode\"] = state.altCountryCode;\n user.details[\"country\"] = state.country;\n user.details[\"province\"] = state.province;\n user.details[\"city\"] = state.city;\n user.details[\"address\"] = state.address;\n user.details[\"postalCode\"] = state.postalCode;\n });\n };\n\n return (\n <AuthContext.Provider\n value={{\n signUp,\n signIn,\n signOut,\n resetPass,\n deleteUser,\n passResetEmail,\n addToUserCart,\n removeFromUserCart,\n emptyUserCart,\n updateQuantity,\n updateAvatar,\n updateUserDetails,\n user,\n userCart,\n personalDetails,\n }}\n >\n {children}\n </AuthContext.Provider>\n );\n};\n// The useAuth hook can be used by components under an AuthProvider to\n// access the auth context value.\nconst useAuth = () => {\n const auth = useContext(AuthContext);\n if (auth == null) {\n throw new Error(\"useAuth() called outside of a AuthProvider?\");\n }\n return auth;\n};\nexport { AuthProvider, useAuth };\n\n const user = userRealm.objects(\"User\"); const user = userRealm.objects(\"User\");",
"text": "So this is for a learning project. A simple e-commerce platform in react native.I have an authproviderthe problem is realm is returning const user = userRealm.objects(\"User\"); as an empty array.\nwhich is creating problems in the rest of the app.\nthe issue didnt exist till yesterday. started happening all of a sudden.\nOne thing to note is that the code works for all old users.\nAny users i create now, have this problem.\nThe code has been like this for some time and previously when i created new users it did’nt have this problem.User documents are created on mongodb atlas and i can see them yet still const user = userRealm.objects(\"User\"); is an empty array.The code i am using is from the task tracker application, just repurposed it.",
"username": "Agha_Syed_Nasir_Mahmood_Azeemi"
},
{
"code": "",
"text": "By old users I mean users created up until yesterday.",
"username": "Agha_Syed_Nasir_Mahmood_Azeemi"
},
{
"code": "console.log(user.customData)userRealm.objects(\"User\")",
"text": "however if print console.log(user.customData), I can see the data. as I can see in mongoDb atlas.What happening, I am confused. userRealm.objects(\"User\") returning an empty array, when there is clearly a user present in mongodb.",
"username": "Agha_Syed_Nasir_Mahmood_Azeemi"
},
{
"code": "",
"text": "I am facing the similar issue. Realm.open is returning empty object but If I turn on chrome debugger it is working fine. Is there any update on the solution?",
"username": "Gopi_Devarapalli"
}
] | Realm.open(config) | 2022-08-15T11:22:07.425Z | Realm.open(config) | 2,457 |
null | [
"node-js",
"python",
"cxx",
"field-encryption"
] | [
{
"code": "cd libmongocrypt/bindings/node\nnode-gyp rebuild\ngyp info it worked if it ends with ok\ngyp info using [email protected]\ngyp info using [email protected] | linux | x64\ngyp info find Python using Python version 3.11.1 found at \"/usr/bin/python3\"\ngyp info spawn /usr/bin/python3\ngyp info spawn args [\ngyp info spawn args '/usr/lib/node_modules/npm/node_modules/node-gyp/gyp/gyp_main.py',\ngyp info spawn args 'binding.gyp',\ngyp info spawn args '-f',\ngyp info spawn args 'make',\ngyp info spawn args '-I',\ngyp info spawn args '/root/libmongocrypt/bindings/node/build/config.gypi',\ngyp info spawn args '-I',\ngyp info spawn args '/usr/lib/node_modules/npm/node_modules/node-gyp/addon.gypi',\ngyp info spawn args '-I',\ngyp info spawn args '/root/.cache/node-gyp/18.13.0/include/node/common.gypi',\ngyp info spawn args '-Dlibrary=shared_library',\ngyp info spawn args '-Dvisibility=default',\ngyp info spawn args '-Dnode_root_dir=/root/.cache/node-gyp/18.13.0',\ngyp info spawn args '-Dnode_gyp_dir=/usr/lib/node_modules/npm/node_modules/node-gyp',\ngyp info spawn args '-Dnode_lib_file=/root/.cache/node-gyp/18.13.0/<(target_arch)/node.lib',\ngyp info spawn args '-Dmodule_root_dir=/root/libmongocrypt/bindings/node',\ngyp info spawn args '-Dnode_engine=v8',\ngyp info spawn args '--depth=.',\ngyp info spawn args '--no-parallel',\ngyp info spawn args '--generator-output',\ngyp info spawn args 'build',\ngyp info spawn args '-Goutput_dir=.'\ngyp info spawn args ]\ngyp info spawn make\ngyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]\nmake: Entering directory '/root/libmongocrypt/bindings/node/build'\n CXX(target) Release/obj.target/mongocrypt/src/mongocrypt.o\n SOLINK_MODULE(target) Release/obj.target/mongocrypt.node\n/usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find /root/libmongocrypt/bindings/node/deps/lib/libmongocrypt-static.a: No such file or directory\n/usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find /root/libmongocrypt/bindings/node/deps/lib/libkms_message-static.a: No such file or directory\n/usr/lib/gcc/x86_64-alpine-linux-musl/12.2.1/../../../../x86_64-alpine-linux-musl/bin/ld: cannot find /root/libmongocrypt/bindings/node/deps/lib/libbson-static-for-libmongocrypt.a: No such file or directory\ncollect2: error: ld returned 1 exit status\nmake: *** [mongocrypt.target.mk:140: Release/obj.target/mongocrypt.node] Error 1\nmake: Leaving directory '/root/libmongocrypt/bindings/node/build'\ngyp ERR! build error\ngyp ERR! stack Error: `make` failed with exit code: 2\ngyp ERR! stack at ChildProcess.onExit (/usr/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:203:23)\ngyp ERR! stack at ChildProcess.emit (node:events:513:28)\ngyp ERR! stack at ChildProcess._handle.onexit (node:internal/child_process:291:12)\ngyp ERR! System Linux 4.14.300-burmilla\ngyp ERR! command \"/usr/bin/node\" \"/usr/bin/node-gyp\" \"rebuild\"\ngyp ERR! cwd /root/libmongocrypt/bindings/node\ngyp ERR! node -v v18.13.0\ngyp ERR! node-gyp -v v9.3.0\ngyp ERR! not ok\n",
"text": "I’m trying to build libmongocrypt Node.js bindings on Alpine Linux (edge).I have installed the following additional packages: alpine-sdk, node, npm, python3, cmake, openssl-dev, bash, linux-headers.The versions that I have tried to reproduce this behaviour with are: 2.4.0 and git master.In order to reproduce, do the following:The output of node-gyp:For some reason, static libraries libmongocrypt-static.a, libkms_message-static.a and libbson-static-for-libmongocrypt.a are not being built. I can’t figure out why exactly. Am I missing some important step in the process? Am I missing some prerequisite?",
"username": "itsemast"
},
{
"code": "bash ./etc/build-static.sh",
"text": "I think the problem is that bash ./etc/build-static.sh should be run before node-gyp. Running it manually solves it, but I’m curious why this doesn’t happen automatically.",
"username": "itsemast"
},
{
"code": "",
"text": "I have succeeded with building mongosh with patched libmongocrypt and os-dns-native on Alpine Linux 3.16. Unfortunately, it won’t work on newer versions, because mongosh 1.6.2 works with Node.js 16, but not with Node.js 18. Hopefully, later versions can be updated for newer Node.js.Here are the build instructions: GitHub\nAnd here is a working example: Docker Hub",
"username": "itsemast"
}
] | Unable to build libmongocrypt Node.js bindings on Alpine Linux | 2023-02-02T06:38:00.943Z | Unable to build libmongocrypt Node.js bindings on Alpine Linux | 1,665 |
null | [] | [
{
"code": "",
"text": "Hi community,Do you know what might be the reason why, when printing into a file the result of a js executed through mongo shell, in any string literal, it replaces characters like í, é, ó with something like aÃa, I’m using the mongo shell version 4.2.5. I’d like to note that same version of mongo shell, running in a different computer prints these characters accurately.Thanks",
"username": "Erik_Torres"
},
{
"code": "",
"text": "Hi @Erik_Torres welcome to the community!BSON, and by extension MongoDB, supports unicode text. I’m guessing that if it works in one computer but not another, the failing case is most likely the terminal software or something connected to it. Perhaps it’s using the wrong codepage, the wrong font, or the terminal software itself doesn’t support unicode. Please consult the manual for the terminal on how to handle unicode font/format, as I don’t believe this was caused by MongoDB in particular.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unicode characters not getting printed | 2023-02-03T05:47:47.807Z | Unicode characters not getting printed | 857 |
null | [
"atlas-functions",
"atlas-triggers"
] | [
{
"code": "{\n_id:1,\nmeeting_id:123,\nis_active:true\n}\n**chat_123**meeting_idchat_123, chat_124, chat_125,............**is_active****false**",
"text": "Hi Team,I need your help to create a trigger to drop collection, while a specified field updated in another collection.Sample case:\nHere is the one sample collection of ‘meetings’:And there is another collections like concatenation of “chat & meeting_id” i.e., **chat_123** example with the above meetings collection.With every meeting_id has Another collection like chat_123, chat_124, chat_125,.............I need help to get drop the “chat_123” collection when update **is_active** field with **false** in meetings collection.Please help me on this .Thanks in advance.",
"username": "Lokesh_Reddy1"
},
{
"code": "**chat_123**",
"text": "Hi @Lokesh_Reddy1 and welcome to the MongoDB community!!Dropping a collection using the Atlas triggers is not possible as of today. If I may suggest an alternative, you might achieve the same by implementing the drop command in the application, if it’s applicable to your use case.Also,And there is another collections like concatenation of “chat & meeting_id” i.e., **chat_123** example with the above meetings collection.If I understand correctly, if you have a lot of chats, you will also have a lot of collections. Although there is technically no limit to the number of collections in MongoDB, if there are too many, you might hit some hardware-related limitations. Is there a reason for this design in particular? Will it be possible to implement this using a variation of the bucket pattern instead?Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
}
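To make the suggested application-side alternative concrete, here is a hedged Node.js driver sketch; the function and field names follow the example in the question, and the error-code check is an assumption about how a missing namespace is reported.

```js
// Deactivate a meeting and drop its per-meeting chat collection.
async function deactivateMeeting(db, meetingId) {
  await db.collection("meetings").updateOne(
    { meeting_id: meetingId },
    { $set: { is_active: false } }
  );

  try {
    await db.dropCollection(`chat_${meetingId}`);
  } catch (err) {
    // Ignore the error if the chat collection was never created.
    if (err.codeName !== "NamespaceNotFound") throw err;
  }
}
```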
] | Create atlas trigger for Drop collection while update an document with specified field in another Collection | 2023-02-01T14:11:20.321Z | Create atlas trigger for Drop collection while update an document with specified field in another Collection | 1,251 |
null | [] | [
{
"code": "",
"text": "Hi All,I am new to mongoDB I am facing a problem, I have created indexes for my collection. But when there is no record found it takes too much time to return a result. My collection is of around 1.8 million documents. Can someone please explain how does this work internally.",
"username": "Tania_Garg"
},
{
"code": "",
"text": "Hi @Tania_Garg, welcome to the MongoDB community!Can you provide a little more information, such as an example of the document, indexes and queries that you say are slow?",
"username": "Leandro_Domingues"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"62c865378392121884010c67ce65418c\"\n },\n \"q_id\": 1725833882,\n \"pq_id\":0,\n \"data\": [\n {\n \"l_id\": 1,\n \"updated_date\": \"2022-09-26T18:14:02.729+05\"\n },\n {\n \"l_id\": 2,\n \"updated_date\": \"2022-09-26T18:14:02.729+05\"\n }\n ],\n \"sh_id\": 6782,\n \"level\": 54,\n \"status\": 1,\n \"tmp_id\": {\n \"$numberLong\": \"10\"\n },\n \"is_p\": false,\n \"t_data\": [\n {\n \"t_id\": {\n \"$numberLong\": \"303\"\n },\n \"t_name\": \"xyz\",\n \"t_v_id\": {\n \"$numberLong\": \"6544\"\n },\n \"t_v\": \"tre\"\n },\n {\n \"t_id\": {\n \"$numberLong\": \"487\"\n },\n \"t_name\": \"poi\",\n \"t_v_id\": {\n \"$numberLong\": \"65487\"\n },\n \"t_v\": \"ytre\"\n }\n ],\n \"bl_data\": [\n {\n \"s_m_id\": 21107024369,\n \"status\": 1,\n \"s_details\": [\n {\n \"s_id\": 428097263500,\n \"s_type\": 809\n },\n {\n \"s_id\":4280876652,\n \"s_type\": 1954\n },\n {\n \"s_id\": 654378,\n \"s_type\": 857\n }\n ],\n \"dl_id\":12,\n \"numbers\": 4,\n \"tts\": 6000,\n \"level\": 4\n }\n ],\n \"v_list\": [\n {\n \"v\": \"1\",\n \"used\": false,\n \"level\":1\n \"status\": 1\n },\n {\n \"v\": \"2\",\n \"used\": true,\n \"level\":2\n \"status\": 1\n \n ],\n \"qt_id\":198275\n}\n\n1: qt_id,pq_id,bl_data.s_details.s_id,bl_data.numbers,dl_id,sh_id,bl_data.level\n2: sh_id,v_list.level,level.status,pq_id\n{\n \"$and\": [\n {\n \"sh_id\": {\n \"$in\": [\n 17250\n ]\n },\n \"pq_id\": 0\n },\n {\n \"v_list\": {\n \"$elemMatch\": {\n \"status\": 1,\n \"level\": {\n \"$in\": [\n 7,\n 4,\n 2,\n 90,\n 43\n ]\n },\n \"used\": true\n }\n }\n },\n {\n \"data.l_id\": {\n \"$all\": [\n 1,\n 2\n ]\n }\n },\n {\n \"bl_data.s_details\": {\n \"$elemMatch\": {\n \"s_id\": {\n \"$in\": [\n 1870\n ]\n }\n }\n }\n },\n {\n \"$and\": [\n {\n \"$or\": [\n {\n \"t_data.t_v_id\": {\n \"$in\": [\n 4\n ]\n }\n },\n {\n \"t_data.t_v_id\": {\n \"$in\": [\n 1\n ]\n }\n }\n ]\n },\n {\n \"$or\": [\n {\n \"t_data.t_v_id\": {\n \"$in\": [\n 2\n ]\n }\n },\n {\n \"t_data.t_v_id\": {\n \"$in\": [\n 3\n ]\n }\n }\n ]\n }\n ]\n }\n ]\n}\n",
"text": "Hi @Leandro_Domingues ,Thanks for your reply. Please find below the document, query, and indexes. Please note that i have multiple arrays in one document and the query will include most of them. Whenever there is data for a particular query the response is very fast but in case of no record, it tends to slow down. Please suggest.Document:Index:Query:",
"username": "Tania_Garg"
},
{
"code": "",
"text": "Hi @Sumanta_Mukhopadhyay ,Thank you for explaining the working. I am facing a problem with no records situation, when the data is present the query response time is very fast like in milliseconds. In case of no record same query take seconds to return. Can you explain further that does this relates to caching or not? and in case of no records founds does it search indexes first and then also the whole document?",
"username": "Tania_Garg"
},
{
"code": "db.collection.explain('executionStats').find(...)db.collection.stats()find()aggregate()next()toArray()",
"text": "MongoDB indexes the most frequently used fields to improve query performance.This is a bit misleading because it implies that MongoDB creates indexes automatically. It does not, and I don’t believe it ever will. Creating an index to support a query pattern should be a deliberate design decision, since every index has a price when writing to the collection, and there’s a limit of 64 indexes per collection.@Tania_Garg in terms of your query performance, you might want to post:One possible reason off the top of my head (a wild guess here): have you executed the query by iterating on the cursor? By default, the find() or aggregate() methods in most official drivers returns a cursor and do not actually execute the query unless the cursor is iterated on. It might be possible that you’re comparing two different things, as it’s strange that a query with a result and with no result that does similar work would be radically different in response times. Calling next() or toArray() – in Node – generally executes the cursor.Best regards\nKevin",
"username": "kevinadi"
}
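To illustrate the cursor point above, here is a small Node.js sketch; the collection and filter are placeholders. The query is only sent to the server when the cursor is iterated, so response times should be measured around that step, or with explain("executionStats") on the server side.

```js
// `coll` is an already-obtained Collection from the Node.js driver.
async function runQuery(coll, filter) {
  const cursor = coll.find(filter);     // builds a cursor; nothing is sent yet
  const docs = await cursor.toArray();  // the query actually executes here
  return docs;                          // [] when nothing matches
}
```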
] | Slow Query when no data is returned | 2023-02-03T06:41:27.307Z | Slow Query when no data is returned | 724 |
null | [
"aggregation",
"queries"
] | [
{
"code": "$group$size$facet$group$unwinding$facet$project",
"text": "Hello there community.\nI am a student and I have now worked in multiple projects with mongodb and I focused on aggregations since requests became more and more complicated.But there has been one roadblock that leads to issues over, over, and over again.\nI want to make as few db calls as possible, meaning I want a single aggregation to get all the data I want.\nWe can merge multiple aggregations with $facet, but since this has an actual bytelimit, it doesn’t help in the following example:I want to retrieve data, but also statistics and counts over lookups and whatsoever.\nThis leads to me doing $group, get the $size of an array, then $unwind multiple times to flatten some nested arrays, and finally $facet to get another count and data.What I am trying to say here, is that it would make some aggregations so much easier, if I could instead of a facet, could also store a variable parallel to my aggregations ignored in the stages and only then used when I need it at the end for example.In short the issue is: I perform actions like $group, then get a calculated variabel or count and $unwinding it all, do a lot more actions on a potentially facet-bytelimit exceeding amount of data forcing me to add the variable to all documents in all stages.I would love to be able to have an alternative to the $facet, with an even smaller byte limit of that matter, but something where I have a stage with an aggregation that doesn’t change the above stage, but adds a parallel value which I can retrieve in forexample a $project.Does something like this already exist or could this be implemented in the future?",
"username": "Maximilian_N_A"
},
{
"code": "",
"text": "As per I am aware there’s no alternative to the $facet pipeline stage in MongoDB that allows you to store intermediate values parallel to the aggregation pipeline stages and retrieve them later in the pipeline. This means that you have to work around the bytelimit and design your aggregation pipeline in such a way that it fits within the bytelimit.You can try to reduce the size of intermediate data by using $group to get the counts and statistics before you $unwind and flatten the nested arrays. This way, you can limit the amount of data that gets $unwound, reducing the size of the intermediate data and helping you stay within the bytelimit.Alternatively, you can consider using a separate collection to store intermediate values and use a join to retrieve the data in your final pipeline stage.If you feel that the bytelimit is a limitation to your use case, you can consider opening a feature request on the MongoDB issue tracker. The MongoDB development team will review and consider the request for a future release.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "I am not sure I fully understand your use-case but I think that $unionWith could help.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Stageless aggregation variables | 2023-02-06T23:09:01.407Z | Stageless aggregation variables | 428 |
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "I want to firstly check the difference between atlas triggers and realm-triggers, as the docs for atlas triggers now quote realm.however i have atlas collection insert trigger which works fine, however when i try to update my code with async await, and run the code, the result is not as expected.for example i am using a package that returns a promise (bluebird)\nand when i await xxxx\nthe result is {“isFulfilled”:false,“isRejected”:false}\nwhile i am not getting promise pending, nor the actual result, the above result shows me that the result has not yet returned even though i am using async await.if i replace await with.then((resp) => {\nconsole.log(resp)\n}this works, and the result is shown.so the problem is with async await… are there any issues with async await with mongo db atlas triggers?",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Hi @Rishi_uttam ,First the Atlas triggers are based on Realm triggers. In fact it creates a dedicated realm app for you behind the scenes for convenience.Now to use async in a trigger function you need to define it in the header of the function.The below article has some examples:\nTriggers Treats and Tricks: Cascade Document Delete Using Triggers Preimage | MongoDBThanks\nPavel",
"username": "Pavel_Duchovny"
},
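For reference, the shape Pavel describes looks roughly like the sketch below; the linked data source name ("mongodb-atlas") and the database/collection names are common defaults and placeholders rather than anything from this thread.

```js
// Declare the exported trigger function itself as async, then await inside it.
exports = async function (changeEvent) {
  const coll = context.services
    .get("mongodb-atlas")
    .db("mydb")
    .collection("mycoll");

  const doc = await coll.findOne({ _id: changeEvent.documentKey._id });
  console.log(JSON.stringify(doc));
  return doc;
};
```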
{
"code": "",
"text": "Hi Pavel, ThanksYes i thought that behind the scens it is using realm triggers.The main problem is async await, i do know how to use it, and i do put async in the code. The problem is i am using a package called ‘clearbit’ which i have uploaded as a dependency. When i await clearbit the result shows as : [{“isFulfilled”:false,“isRejected”:false}]\"But when use a normal .then , .catch, the code works as expected.When testing on my local node machine, i found that async /await works fine, its only with mongodb triggers with the clearbit package async await does not work… – do you have any ideas? i know the package is using bluebird and needle under the hood to return the promise.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Can you please navigate to your Realm trigger in the UI and copy paste me the link in the browser ",
"username": "Pavel_Duchovny"
},
{
"code": "exports = (payload) => {\n const client = require(\"twilio\")\n client.doSomething()\n .then(( x) => { return x.status }) \n .catch(( error ) => { return error }) \n};\nexports = async (payload) => {\n const client = require(\"twilio\")\n cons x = await client.doSomething()\n return x.status || \"pls work\"\n};\n",
"text": "I’m experiencing the same issue. I have a function that uses Twilio and I can’t get it to work with async/await but I can with thenWorksthis does not workAm I missing something obvious?",
"username": "Alexandar_Dimcevski"
},
{
"code": "cons x = await client.doSomething()\ncons",
"text": "Maybe the cons typo …",
"username": "Pavel_Duchovny"
},
{
"code": " exports = async (payload) => {\n \n const { phone, code } = payload;\n \n const phoneWithPlus = `+${phone}`\n \n const client = require('twilio')(context.values.get(\"TWILIO_ACCOUNT_SID_VALUE\"), context.values.get(\"TWILIO_AUTH_TOKEN_VALUE\"));\n\n const verification_check = await client.verify.v2.services(context.values.get(\"TWILIO_VERIFY_SERVICE_SID_VALUE\"))\n .verificationChecks.create({to: phoneWithPlus, code: code})\n \n if (verification_check.status == \"approved\") {\n return phone\n }\n \n return null\n \n };\n",
"text": "Nope. Maybe I’m just really missing something obvious here?",
"username": "Alexandar_Dimcevski"
},
{
"code": "exports = async function (query) {\n const NodeGeocoder = require(\"node-geocoder\");\n\n return {\n query,\n locations: await NodeGeocoder({\n provider: 'virtualearth',\n apiKey: context.values.get('virtualEarthKey')\n }).geocode(query)\n }\n}\n",
"text": "I’m having the same issue when using node-geocoder. Works fine on my machine, I can seemingly get a response with .then and log it, but with async/await it’s totally busted.E.g…",
"username": "Michael_Phelps"
},
{
"code": "",
"text": "Thanks for the great feedback.An investigation ticket was open with our team.Hopefully we will have answers soon.Meanwhile, use promises where async/await cause trouble …Appropriate you patience Pavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Any updates on this?",
"username": "Alexandar_Dimcevski"
}
] | Atlas triggers, not working with async await | 2021-11-15T17:48:01.252Z | Atlas triggers, not working with async await | 6,489 |
null | [
"node-js",
"transactions"
] | [
{
"code": "const client = new MongoClient(<connectionString>);\nconst db = client.db(<dbName>)\n",
"text": "I have some confusion. I’ve succesfully implemented a few CRUD applications that connect to DBs hosted on Mongo Atlas. Every time, I do this:And then I’m off to the races with queries on db.For the first time, I’ve considered it crucial to carry out queries within a transaction. So I know I’m dealing with sessions now, and upon some research, to deal with a session I’m supposed to use client.connect().I’m confused because I’ve never used the connect() method - but according to MongoDB documentation I shouldn’t even be able to interact with my database without establishing connection.So what gives? It really seems like the MongoClient constructor is automatically establishing connection to me - but I can’t find any documentation to back that up.Tangentially (but also to the actual need I have), if I’m connecting to a db fine with client.db(), does that mean that I’m also likely to succeed with client.startSession().Thanks for any help.Edit to add:\nAnd further - I’m never closing a connection (I’m never opening one in the first place that I can tell…) - would it be any different when working with a session? I know I have to end the session.",
"username": "Michael_Jay2"
},
{
"code": "MongoClient.connect().db().connect()MongoClient.startSession().endSession()",
"text": "The MongoClient constructor creates a client instance, but it doesn’t actually connect to the database. It is recommended to call .connect() method to explicitly connect to the database. The .db() method, however, will automatically connect to the database if there is no established connection, so in most cases, you do not need to call .connect() method.For transactions, it is required to use sessions, and the MongoClient.startSession() method is used to start a new session. It is recommended to end the session by calling .endSession() method after the transaction is completed.In general, it’s a good practice to explicitly manage the connections and sessions in your application, to ensure the correct connection pooling and release resources when they are no longer needed.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "Thank you for your reply.Typically, should a connection be established for each query and then closed upon completion of the query?",
"username": "Michael_Jay2"
},
{
"code": "connect()// Connect the client to the server (optional starting in v4.7)\nawait client.connect();\n.db()async function main() {\n const client = new MongoClient('mongodb://localhost:27017?replicaSet=replset', {useUnifiedTopology: true});\n\n const db = client.db('test')\n console.log(db)\n\n const session = client.startSession();\n console.log(session)\n}\nmain().then(() => console.log('done'))\ndbsessionmongodinsertOne()mongodMongoClient()client.close()clientclient",
"text": "Hi @Michael_Jay2It really seems like the MongoClient constructor is automatically establishing connection to me - but I can’t find any documentation to back that up.The node driver automatically calls connect() since version 4.7 (see NODE-4192). This is also mentioned in the code example in the Connection Guide documentation page:However:The .db() method, however, will automatically connect to the database if there is no established connectionThis is not correct. This code:will print the relevant db and session objects to the console, but if you check the mongod logs, there are no connection accepted.Once you do some operation by e.g. doing an insertOne() using the session, then you should see a line similar to this in the mongod log:{“t”:{“$date”:“2023-02-07T10:06:46.318+11:00”},“s”:“I”, “c”:“NETWORK”, “id”:22943, “ctx”:“listener”,“msg”:“Connection accepted”,“attr”:{“remote”:“127.0.0.1:53615”,“uuid”:“7f90e2a3-9a79-491e-a72b-e7ef0e857ed8”,“connectionId”:50,“connectionCount”:4}}which means that only when an operation is involved, the connection is made.Regarding your follow-up question:Typically, should a connection be established for each query and then closed upon completion of the query?You should create the client object using MongoClient() once for the lifetime of the app, and call client.close() only during teardown of the app. During the lifetime of the app, you can put the client object in a global variable so it’s accessible to the whole app.This is because all official MongoDB drivers manages a connection pool via the client object, and will automatically reuse, add, or remove individual connections as needed. Connecting/disconnecting after every operation is an anti-pattern and should be avoided.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thank for taking me to school, Kevin.By what I’m reading in your repsonse, it is also not necessary (or important) to actively call client.connect() because if a session is started and a query is executed in that session, then the connection is auto-established. Right?",
"username": "Michael_Jay2"
},
{
"code": "connect()client.close()async function main() {\n const client = new MongoClient('mongodb://localhost:27017?replicaSet=replset', {useUnifiedTopology: true});\n\n const session = client.startSession();\n\n await session.withTransaction(async () => {\n const coll = client.db('test').collection('foo')\n await coll.insertOne({ abc: 1 }, { session })\n })\n await session.endSession() // try removing this\n await client.close() // or this\n}\nmain().then(() => console.log('done'))\nclient.close()client.close()client.close()",
"text": "Hi @Michael_Jay2Yes you are correct. Since connect() is optional, it will be auto-called whenever an operation needed to be done in the database, like inserting a document (with or without a session).With regard to calling client.close(), I would say that it’s best practice to call this when you’re doing teardown in the app. This is to ensure that all resources are cleaned up in both the app side and the database side.For a small experiment, you can try this type of code similar to the one I posted earlier:If you remove client.close(), I believe you’ll find that the script will just wait indefinitely when you run it, requiring you to CTRL-C to exit it. If you leave the client.close() there, the script will exit after it’s done. I tend to think that since the intent is for the script to exit, calling client.close() there is a better scenario.Hope this helps!Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using Node driver v5 - when is MongoClient.connect() required? | 2023-02-04T19:18:25.992Z | Using Node driver v5 - when is MongoClient.connect() required? | 1,575 |
null | [
"backup"
] | [
{
"code": "",
"text": "We enabled backup for our Mongo cluster. And when I downloaded an old snapshot via the UI, the files are in *.wtI am trying to recover the old data for just one record in a collection. May I ask if there’s a way for me to decode it into a Json format, or download the backup snapshot in human readable format?Thank you!",
"username": "williamwjs"
},
{
"code": ".tar.gz*.wtmongodmongodumpmongoexportmongod",
"text": "Hi @williamwjs,We enabled backup for our Mongo cluster. And when I downloaded an old snapshot via the UI, the files are in *.wtI presume you downloaded the .tar.gz file and the extracted contents within were the *.wt files but please correct me if I am wrong here.I am trying to recover the old data for just one record in a collection.You can follow the restore procedure documentation. Once you have started the mongod instance against the extracted backup directory then you can use mongodump or mongoexport to extract the specific collection data from that mongod instance.The following Connect to a Cluster using Command Line Tools documentation may also help.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to extract data from Mongo Backups | 2023-02-06T18:50:27.678Z | How to extract data from Mongo Backups | 1,235 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I have a set of related json files for version 1. The identical set exists in 5 other languages. Every 6 months a new version of the files is released. How should these files be stored so that if the user selects a version and a particular language , the data is displayed.example:version 1_english:\nhighlevel.json\nlowlevel.json\n…version 1_korean:\nhighlevel.json\nlowlevel.json…version 2_english:\nhighlevel_json\nlowlevel_json",
"username": "Supriya_Bansal"
},
{
"code": "{\n\"_id\" : ... ,\n\"revision\" : ...,\n\"modifiedDate\" : \"...\",\n\"en\" : [ { \"greeting\" : \"hello\"}, ...],\n\"fr\" : [ {\"greeting\" : \"bonjour\"}, ...],\n...\n}\n",
"text": "Hi @Supriya_Bansal,I suggest to read the following:Building with Patterns: The Document Versioning Pattern | MongoDB BlogCombination of the two can give you an idea .In high level your documents could look something like:Another option is to consider a collection per language or database per language.Best regards\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you @Pavel_Duchovny for sharing these links.\nI looked into the articles and have a follow-up question about the version.\nThe UI needs to display all the available versions in a drop-down list. If you select a specific version the data should be displayed for that version.\nIs it advisable to have one database per version as well?",
"username": "Supriya_Bansal"
},
{
"code": "",
"text": "Hi @Supriya_Bansal,I won’t do versioning based on databases. I would index the user and versions fields within a collection and query a new query each time a user switches the version.Or query all versions into separate parameters and populate ui on switch.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
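A short mongosh sketch of the "index and re-query on switch" idea, using the field names from the earlier example document; the collection name and the projected language are assumptions.

```js
// Support fast lookups by version.
db.documents.createIndex({ revision: 1 });

// Populate the version drop-down.
const versions = db.documents.distinct("revision");

// When the user picks a version and a language, fetch only that slice.
db.documents.find(
  { revision: 2 },
  { revision: 1, modifiedDate: 1, en: 1 }
);
```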
{
"code": "",
"text": "This is really helpful @Pavel_Duchovny. Thank you!",
"username": "Supriya_Bansal"
},
{
"code": "{\n \"_id\" : ... ,\n \"revision\" : ...,\n \"modifiedDate\" : \"...\",\n greeting:[{\n \"lang\":\"en\",\n \"valu\":\"hello\"},{\n \"lang\":\"fr\",\n \"valu\":\"bonjour\"\n }]\n}\n",
"text": "Hi.\nwhat if documents look something like this ?",
"username": "farzam_raoufi"
},
{
"code": "",
"text": "Hi @farzam_raoufi ,The document looks like a possible candidate for version pattern. What is your specific question?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I’m just new in MongoDB and think it can be good idea to build multiple languages document.\ni will study more about version pattern.\nThank you!",
"username": "farzam_raoufi"
}
] | Design MongoDb collection to support multiple versions and multiple languages | 2020-08-16T03:35:23.837Z | Design MongoDb collection to support multiple versions and multiple languages | 5,152 |
[] | [
{
"code": "",
"text": "Hi there esteemed friends, acquintances, and friends-yet-to-be:My name’s Billy Lim and I lead up the MongoDB Community Advocacy Program (CAP), formerly known as the Champions program! The CAP program is version 2.0 of the Champions program that was reinitialized almost single-handedly by @webchick in 2022. Our flagship community, well, advocacy program in that first year entailed a lot of listening and dialogue with our core group of veteran Champions — undisputed technical experts and practitioners of MongoDB from around the globe with model personal character and the force of desire to engage the global community from their position as authentic users and, yes, real advocates for the power of MongoDB to make the dreams of many come to life.How? By enabling the next generation of products and services that will change the world, especially for those in need of the flexibility, ease of use, and high scalability enabled by our developer data platform.From November to the very end of January, a small team of MongoDB staff, including yours truly, came together to build on the foundations set in 2022. We set about the goal of carefully scaling the program, not just by adding more Champion tier members, but by constructing and rolling out a new tier entirely: Enthusiasts - those who range from those early in their MongoDB learning journeys to those who know quite a bit but require more experience practicing the various methods of product advocacy (e.g. blogging, podcast creation, hosting webinars, giving public talks and engaging developers at conferences, writing technical documentation, and much more).With the close of our first full intake cycle ever, we chose from a highly competitive field of applicants, pouring over application materials, vetting candidate backgrounds, conducting interviews, and gaining cross-functional internal approval of our final slate of CAP nominees. The result is that we are kicking off a new fiscal year with 37 Community Advocates (16 Champions and 21 Enthusiasts)!! The future is uncertain for me, but I’m already quite fond of this amazing group of MongoDB Champions and Enthusiasts, and you can bet I will be very active in all of our MongoDB Community spaces in the coming months, supporting them regardless… and participating in all manner of other ways. For one full year, our Community Advocates will receive specialized programming meant to hone their abilities as advocates and greatly accelerate and extend their technical knowledge of MongoDB.These members will receive an array of benefits contingent on their tier, including Q&A with Product staff, invitations to Round Tables, Private Preview program invites, priority slots to carry out key advocacy initiatives, social events, discounted tickets to in-person programming, and and even financially supported travel and accommodations to attend .Local events around the world! And much more. :)) Perhaps most important, these individuals - who represent an extremely rich and heterogeneous mix of technological interests - are united by their shared passion for community engagement and a fierce curiosity that we hope will both be felt throughout the global developer community across the many spaces where MongoDB users gather.We also intend to greatly elevate their profiles as budding or veteran MongoDB practitioners, so that their reach extends and thus more people are brought into the fold who we otherwise would not have reached. 
The truth is, everyone deserves to know about what MongoDB can do for them if the platform fits their use case, and we’ve got an incredibly energetic and passionate community that is ready to get the word out across every medium you can imagine.I know you’re wondering: “Well dang, did I miss out? When can I apply next?” This intake cycle is officially closed, but currently we plan to hold two intakes per year. If you have questions about how to prepare for the next intake so you can put forth a most competitive application, reply here with your general questions, or stay tuned to the CAP website, which will reflect new information about the 2023 program in the coming two weeks.In the meantime, just take a look at the profile of Nuri Halperin, one of our phenom veteran Champions, for inspiration, while keeping in mind that we will intentionally weave together a diverse collective of individuals with each future intake. Nuri is awesome. But you don’t have to be Nuri! Be you, be true — just make sure your MongoDB knowledge is as up to snuff as possible when you apply (though there is no minimum knowledge requirement) and that you’ve got at least 2x examples of advocacy contributions to show, qualifying you for the Enthusiast tier, or 4x to show if you’re dreaming of “Championship.”We are so excited to kick the year off with this extraordinary group. You will hear more about them with the coming public launch of the 2023 program. Stay tuned for more information on our next intake.Yours in community,Billy Lim\nCAP Lead\nSenior Community Engagement Manager, Global Developer Community\nScreen Shot 2023-02-05 at 1.53.41 PM1348×947 222 KB\n\nThis could be you! Maybe not as impossibly dapper, but there’s always second place! ",
"username": "Billy"
},
{
"code": "",
"text": "I am super excited for Community Advocates",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "This topic was automatically closed after 60 days. New replies are no longer allowed.",
"username": "system"
}
] | SURPRISE: Community Champions is reborn! Say hello to the MongoDB Community Advocacy Program (CAP)! | 2023-02-05T20:00:28.080Z | SURPRISE: Community Champions is reborn! Say hello to the MongoDB Community Advocacy Program (CAP)! | 1,241 |
|
[
"chicago-mug"
] | [
{
"code": "MongoDB Sr Solutions Architect",
"text": "\nChicago MUG - Design Kit1920×1080 126 KB\nMongoDB User Group - ChicagoCome join us for our 1st Chicago MongoDB User Group (MUG) event. Join other local MongoDB users along with experienced MongoDB Solutions Architects who will be sharing their knowledge around common use case scenarios and valuable best practices when managing, running and building applications with MongoDB. The User Group will be a great way to expand your professional network, share your experiences, hear from others first hand and make new friends!\nSome other topics we will be discussing:Formal content will be about 30 minutes so there will be plenty of time to eat, drink and socialize! Register below for this event and join a great community here in Chicago! We will wrap with open Q&A and a giveaway of some great MongoDB Swag!Please RSVP at your earliest convenience at this link : MongoDB User Group ChicagoEvent Type: In-Person\nLocation: 221 N Wood St, Chicago, IL, 60612MongoDB Sr Solutions ArchitectCassiano has been part of the MongoDB team for 2.5 years. He has over 20 years of experience in it and he is local to Chicago! He is passionate about all thing tech, love gadgets, cars and grilling!",
"username": "bein"
},
{
"code": "",
"text": "Just a reminder to RSVP your spot here : Register HERE",
"username": "bein"
},
{
"code": "",
"text": "Hey long time no see. Do you guys have a MUG Event schedule for March in Chicago? Please let me know if you do. Great seeing you!",
"username": "MyCloudVIP_com"
},
{
"code": "",
"text": "Nothing scheduled yet , but we will discuss this during this event so come join us! Also join our group to keep up with all of our events",
"username": "bein"
},
{
"code": "",
"text": "Thanks to everyone who attended our Chicago MUG event. Here are some pictures of our event! \n\n\n\n\n",
"username": "bein"
}
] | Chicago MongoDB User Group - February Meetup | 2023-01-18T17:43:11.932Z | Chicago MongoDB User Group - February Meetup | 2,180 |
|
null | [
"node-js",
"compass",
"server"
] | [
{
"code": "import { MongoClient, AuthMechanism, Db } from \"mongodb\";\n\n\nconst uri =\"mongodb://localhost:27017/\";\n\nconst client = new MongoClient(uri);\n\nasync function run() {\n try {\n // Connect the client to the server (optional starting in v4.7)\n await client.connect();\n // Establish and verify connection\n await client.db(\"admin\").command({ ping: 1 });\n console.log(\"Connected successfully to server\");\n } finally {\n // Ensures that the client will close when you finish/error\n await client.close();\n }\n}\n\nconst print_dir = (msg) => {\n return console.dir(msg, {depth: null})\n}\n\nrun().catch(print_dir);\n\n MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017\n at Timeout._onTimeout (/home/USER/PATH/node_modules/mongodb/lib/sdam/topology.js:277:38)\n at listOnTimeout (node:internal/timers:559:17)\n at processTimers (node:internal/timers:502:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) {\n 'localhost:27017' => ServerDescription {\n address: 'localhost:27017',\n type: 'Unknown',\n hosts: [],\n passives: [],\n arbiters: [],\n tags: {},\n minWireVersion: 0,\n maxWireVersion: 0,\n roundTripTime: -1,\n lastUpdateTime: 456323,\n lastWriteDate: 0,\n error: MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\n at connectionFailureError (/home/USER/PATH/node_modules/mongodb/lib/cmap/connect.js:383:20)\n at Socket.<anonymous> (/home/USER/PATH/node_modules/mongodb/lib/cmap/connect.js:307:22)\n at Object.onceWrapper (node:events:628:26)\n at Socket.emit (node:events:513:28)\n at emitErrorNT (node:internal/streams/destroy:157:8)\n at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n at processTicksAndRejections (node:internal/process/task_queues:83:21) {\n cause: Error: connect ECONNREFUSED 127.0.0.1:27017\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1247:16) {\n errno: -111,\n code: 'ECONNREFUSED',\n syscall: 'connect',\n address: '127.0.0.1',\n port: 27017\n },\n [Symbol(errorLabels)]: Set(1) { 'ResetPool' }\n },\n topologyVersion: null,\n setName: null,\n setVersion: null,\n electionId: null,\n logicalSessionTimeoutMinutes: null,\n primary: null,\n me: null,\n '$clusterTime': null\n }\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n }\n",
"text": "I’m trying to connect to a new mongodb server from my node app but I get this error.I am sure that the server is running because,\nwhen I’m trying to connect from Compass, I have no problems.App.mjsDev OS: Ubuntu 20.04.5 LTS (WSL 2)\nNode: v16.17.0\nmongodb node driver: v5.0.0OS: Windows 10\nMongoServer: 6.0 (default config)",
"username": "Stergios_Nanos"
},
{
"code": "",
"text": "The fact that you mentioned 2 OSDev OS: Ubuntu 20.04.5 LTS (WSL 2)\nNode: v16.17.0\nmongodb node driver: v5.0.0andOS: Windows 10\nMongoServer: 6.0 (default config)makes me think that you do not understand localhost correctly.If mongod is running on Windows 10 and you try to connect from your node application running on Ubuntu, localhost is definitively note the way you should connect. Your localhost in Windows that let you connect to your Windows’ mongod is not the same localhost as your Ubuntu machine running your host.The host localhost is really your local host. It means localhost on Windows is Windows and localhost on Ubuntu is Ubuntu. If you want to connect to mongod running on Windows from your app running on Ubuntu you will need to specify something else than localhost. The host name of your Windows machine is a likely candidate.",
"username": "steevej"
},
{
"code": "",
"text": "Since I could access Linux apps running on localhost from Windows,\nI thought, I could also do the inverse, access Windows apps from Linux.\nI believed the were sharing the same localhost, a mistake as you said.To solve the problem, I followed the steps by sylvix on this github issue:# Environment\n\n```none\nWindows build number: Win32NT 10.0.19041.0 Microsoft …Windows NT 10.0.19041.0\nYour Distribution version: Release: 18.04\nWhether the issue is on WSL 2 and/or WSL 1: Linux version 4.19.104-microsoft-standard (oe-user@oe-host) (gcc version 8.2.0 (GCC)) #1 SMP Wed Feb 19 06:37:35 UTC 2020\n```\n\n# Steps to reproduce\n\nI cannot connect to MongoDB server running on Windows 10 from WSL2.\n- it was working in WSL1 via `mongodb://user:pass@localhost:27017/dbname`\n- I have tried steps mentioned in [official guide](https://docs.microsoft.com/en-us/windows/wsl/compare-versions#accessing-network-applications)\n- I have also tried Network reset\n\nOn ubuntu WSL2\n- `cat /etc/resolv.conf` returns `nameserver 192.168.176.1`\n- ping 192.168.176.1 **works**\n- ping www.google.com **works**\n\nBut i am **not able to connect** to mongodb on windows via same ip i.e **192.168.176.1**.\n```javascript\nmongoose\n .connect(`mongodb://user:[email protected]:27017/dbname`,\n {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n }\n )\n .then((msg) => {\n console.log(\"Successfully Connected via mongoose!\");\n })\n .catch((err) => console.log(\"Error\", err));\n```\n\n- Also tried bind mongodb server application to 0.0.0.0 **doesn't work**\n\n\n# Expected behavior\n\nI should be able to connect to MongoDB server running on Windows 10 via the WSL ip returned while running `cat /etc/resolv.conf`\n\n# Actual behavior\nUnable to connect **connection timed out Error**",
"username": "Stergios_Nanos"
},
{
"code": "mongos.exemongod.exe'Panel\\System and Security\\Windows Defender Firewall\\Allowed apps'MongoServerSelectionError: connect ECONNREFUSED 127.0.1.1:27017mongod.log{\"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.22.36.92:34488\",\"uuid\":\"458b6f43-275c-4067-9b01-a39bf657cf6e\",\"connectionId\":67,\"connectionCount\":1}}\n{\"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.22.36.92:34490\",\"uuid\":\"8a3b9610-4bdb-424b-9dd7-5b097eaee1af\",\"connectionId\":68,\"connectionCount\":2}}\n{\"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.22.36.92:34492\",\"uuid\":\"0a23f7f1-be9b-47d9-bd7d-0993c1ac2628\",\"connectionId\":69,\"connectionCount\":3}}\n{\"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.22.36.92:34496\",\"uuid\":\"b7649e80-90ba-4f9f-a602-6314d45c13c0\",\"connectionId\":70,\"connectionCount\":4}}\n{\"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.22.36.92:34494\",\"uuid\":\"efdf35b6-45fe-4ebc-abbf-6ff71dbbe898\",\"connectionId\":71,\"connectionCount\":5}}\n{\"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.22.36.92:34498\",\"uuid\":\"c8ac80e7-4f9c-4d62-9d85-ebe86614ccc6\",\"connectionId\":72,\"connectionCount\":6}}\n{\"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.22.36.92:34502\",\"uuid\":\"6fbe4774-bc6f-4853-89ba-daecb7de3d6d\",\"connectionId\":73,\"connectionCount\":7}}\n{\"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.22.36.92:34504\",\"uuid\":\"0845ebee-8913-4aea-b25f-1b1c5b1d1f12\",\"connectionId\":74,\"connectionCount\":8}}\n{\"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"172.22.36.92:34500\",\"uuid\":\"4649ce63-a456-4057-8cd1-f3ae6306aa64\",\"connectionId\":75,\"connectionCount\":9}}\n{\"ctx\":\"conn67\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.22.36.92:34488\",\"client\":\"conn67\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"4.13.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.10.102.1-microsoft-standard-WSL2\"},\"platform\":\"Node.js v16.17.0, LE (unified)|Node.js v16.17.0, LE (unified)\"}}}\n{\"ctx\":\"conn68\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.22.36.92:34490\",\"client\":\"conn68\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"4.13.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.10.102.1-microsoft-standard-WSL2\"},\"platform\":\"Node.js v16.17.0, LE (unified)|Node.js v16.17.0, LE (unified)\"}}}\n{\"ctx\":\"conn69\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.22.36.92:34492\",\"client\":\"conn69\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"4.13.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.10.102.1-microsoft-standard-WSL2\"},\"platform\":\"Node.js v16.17.0, LE (unified)|Node.js v16.17.0, LE (unified)\"}}}\n{\"ctx\":\"conn71\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.22.36.92:34494\",\"client\":\"conn71\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"4.13.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.10.102.1-microsoft-standard-WSL2\"},\"platform\":\"Node.js v16.17.0, LE (unified)|Node.js v16.17.0, LE (unified)\"}}}\n{\"ctx\":\"conn70\",\"msg\":\"client 
metadata\",\"attr\":{\"remote\":\"172.22.36.92:34496\",\"client\":\"conn70\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"4.13.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.10.102.1-microsoft-standard-WSL2\"},\"platform\":\"Node.js v16.17.0, LE (unified)|Node.js v16.17.0, LE (unified)\"}}}\n{\"ctx\":\"conn72\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.22.36.92:34498\",\"client\":\"conn72\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"4.13.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.10.102.1-microsoft-standard-WSL2\"},\"platform\":\"Node.js v16.17.0, LE (unified)|Node.js v16.17.0, LE (unified)\"}}}\n{\"ctx\":\"conn75\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.22.36.92:34500\",\"client\":\"conn75\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"4.13.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.10.102.1-microsoft-standard-WSL2\"},\"platform\":\"Node.js v16.17.0, LE (unified)|Node.js v16.17.0, LE (unified)\"}}}\n{\"ctx\":\"conn73\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.22.36.92:34502\",\"client\":\"conn73\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"4.13.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.10.102.1-microsoft-standard-WSL2\"},\"platform\":\"Node.js v16.17.0, LE (unified)|Node.js v16.17.0, LE (unified)\"}}}\n{\"ctx\":\"conn74\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"172.22.36.92:34504\",\"client\":\"conn74\",\"doc\":{\"driver\":{\"name\":\"nodejs\",\"version\":\"4.13.0\"},\"os\":{\"type\":\"Linux\",\"name\":\"linux\",\"architecture\":\"x64\",\"version\":\"5.10.102.1-microsoft-standard-WSL2\"},\"platform\":\"Node.js v16.17.0, LE (unified)|Node.js v16.17.0, LE (unified)\"}}}\n{\"ctx\":\"conn67\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"172.22.36.92:34488\",\"uuid\":\"458b6f43-275c-4067-9b01-a39bf657cf6e\",\"connectionId\":67,\"connectionCount\":8}}\n{\"ctx\":\"conn68\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"172.22.36.92:34490\",\"uuid\":\"8a3b9610-4bdb-424b-9dd7-5b097eaee1af\",\"connectionId\":68,\"connectionCount\":7}}\n{\"ctx\":\"conn69\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"172.22.36.92:34492\",\"uuid\":\"0a23f7f1-be9b-47d9-bd7d-0993c1ac2628\",\"connectionId\":69,\"connectionCount\":6}}\n{\"ctx\":\"conn71\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"172.22.36.92:34494\",\"uuid\":\"efdf35b6-45fe-4ebc-abbf-6ff71dbbe898\",\"connectionId\":71,\"connectionCount\":5}}\n{\"ctx\":\"conn70\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"172.22.36.92:34496\",\"uuid\":\"b7649e80-90ba-4f9f-a602-6314d45c13c0\",\"connectionId\":70,\"connectionCount\":4}}\n{\"ctx\":\"conn72\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"172.22.36.92:34498\",\"uuid\":\"c8ac80e7-4f9c-4d62-9d85-ebe86614ccc6\",\"connectionId\":72,\"connectionCount\":3}}\n{\"ctx\":\"conn75\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"172.22.36.92:34500\",\"uuid\":\"4649ce63-a456-4057-8cd1-f3ae6306aa64\",\"connectionId\":75,\"connectionCount\":2}}\n{\"ctx\":\"conn73\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"172.22.36.92:34502\",\"uuid\":\"6fbe4774-bc6f-4853-89ba-daecb7de3d6d\",\"connectionId\":73,\"connectionCount\":1}}\n{\"ctx\":\"conn74\",\"msg\":\"Connection 
ended\",\"attr\":{\"remote\":\"172.22.36.92:34504\",\"uuid\":\"0845ebee-8913-4aea-b25f-1b1c5b1d1f12\",\"connectionId\":74,\"connectionCount\":0}}\n",
"text": "The connection stopped working again, after converting to a Replica Set following the guide here.The database is running and I’m able to connect with Compass.I have also allowed mongos.exe and mongod.exe to Firewall through 'Panel\\System and Security\\Windows Defender Firewall\\Allowed apps'.When I try to connect from WSL I get MongoServerSelectionError: connect ECONNREFUSED 127.0.1.1:27017, but, the mongod.log has the followng lines when trying to connect from WSL:So it seems to be a Firewall issue. I don’t know what else I have to do to fix it.",
"username": "Stergios_Nanos"
},
{
"code": "",
"text": "If your application generatesMongoServerSelectionError: connect ECONNREFUSED 127.0.1.1:27017the mongod.log is useless. Your application never connected to the server. The log you share are valid connections, probably the ones made byable to connect with CompassSo it seems to be a Firewall issue.Most likely it is not a firewall issue. 127.0.1.1 is probably not the address of your replica set.Share the connection string you used to connect with Compass. Where is running Compass? Where is running mongod? Where is running the code generating the ECONNREFUSED?",
"username": "steevej"
},
{
"code": "mongodb://localhost:27017/DB_NAMEstorage:\n dbPath: C:\\Program Files\\MongoDB\\Server\\6.0\\data\\db\n journal:\n enabled: true\n\nsystemLog:\n destination: file\n logAppend: true\n path: C:\\Program Files\\MongoDB\\Server\\6.0\\log\\mongod.log\n\nnet:\n port: 27017\n bindIp: 0.0.0.0\n\nreplication:\n replSetName: rs0\n enableMajorityReadConcern: true\nmongodb://HOST_IP:27017/DB_NAMEcat /etc/resolv.confUSER@WINDOWS:~$ cat /etc/resolv.conf\n# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:\n# [network]\n# generateResolvConf = false\nnameserver [HOST_IP]\n",
"text": "mongod and Compass is running on Windows.\nthe Compass connection string is mongodb://localhost:27017/DB_NAMEthe mongod.cfg is this:The app I’m trying to connect from, is running on Ubuntu WSL2 on the Windows machine mentioned above.\nThe app connection string is mongodb://HOST_IP:27017/DB_NAME, where HOST_IP IP is the address of the host machine, obtained by running the command cat /etc/resolv.conf, as shown here.",
"username": "Stergios_Nanos"
},
{
"code": "",
"text": "Try with 172.22.36.92 for HOST_IP.The host nameserver is NOT necessarily the IP of the Windows machine. How do you connect to your linux machine from your Windows machine? How do you connect to your Windows machine from your linux machine.HOST_IP has to be address of the Windows machine where mongod is running.",
"username": "steevej"
},
{
"code": "cat /etc/resolv.conf",
"text": "I was trying with 172.22.36.92 and did not work.\nthe database was receiving the connection request from the app,\nas you can see from the mongod.log I provided before,\nbut the app was not receiving the reply from the mongodb.172.22.36.92 is not static it changes with every reboot and to find the new IP have to run cat /etc/resolv.conf from the WSL shell.",
"username": "Stergios_Nanos"
},
{
"code": "",
"text": "What are the IP addresses of your Windows machine?",
"username": "steevej"
},
{
"code": "IPv4 Address. . . . . . . . . . . : 192.168.100.6",
"text": "from ipconfig:\nIPv4 Address. . . . . . . . . . . : 192.168.100.6",
"username": "Stergios_Nanos"
},
{
"code": "",
"text": "That is the address that should work for your application on Linux that wants to connect to your Windows’ mongod.But I am surprise that you do not have a second IP address in the 172.22.36.0 network.",
"username": "steevej"
}
] | MongoDB server: connect ECONNREFUSED 127.0.0.1:27017 | 2023-02-04T01:13:18.962Z | MongoDB server: connect ECONNREFUSED 127.0.0.1:27017 | 4,185
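A minimal sketch of the direction the thread above converges on: from inside WSL2, connect with the Windows host's LAN address (192.168.100.6 in the poster's ipconfig output) rather than localhost. The connection URI and the directConnection option are assumptions added here, not something posted verbatim in the thread; directConnection avoids replica set member discovery, which can otherwise redirect the driver to a member hostname such as localhost that is unreachable from WSL2.

```javascript
// Illustrative only: Node.js app inside WSL2 reaching a mongod on the Windows host.
// Replace 192.168.100.6 with the address reported by ipconfig on Windows.
import { MongoClient } from "mongodb";

const uri = "mongodb://192.168.100.6:27017/?directConnection=true";
const client = new MongoClient(uri);

try {
  await client.connect();
  await client.db("admin").command({ ping: 1 });
  console.log("Reached the Windows-hosted mongod from WSL2");
} finally {
  await client.close();
}
```

This also assumes mongod is bound to 0.0.0.0 (as in the poster's mongod.cfg) and that Windows Firewall allows inbound connections on port 27017.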
[
"python",
"serverless"
] | [
{
"code": "",
"text": "In the serverless cluster limitations section, it mentions PyMongo should be at least 3.12.0, but based on the context around it (all languages), it seems like it’s supposed to be Python is at least 3.12.0.\nimage996×745 21.8 KB\nTo corroborate this, when I select Python as the driver in the “connect” screen/instructions, Python 3.12 is the only option for my serverless cluster.Additionally, the Data API doesn’t seem to work with my serverless cluster, though this isn’t explicitly documented as far as I can tell. Is this a supported or unsupported feature?",
"username": "Sterling_Baird"
},
{
"code": "",
"text": "This is not a typo. PyMongo>=3.12 is needed in order to connect to a serverless instance and any Python version (that PyMongo supports) will work.",
"username": "Shane"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Possible typo in serverless cluster documentation, and does it support the Data API? | 2023-02-04T06:14:49.225Z | Possible typo in serverless cluster documentation, and does it support the Data API? | 1,257 |
|
null | [
"connecting"
] | [
{
"code": "appName const currentActivities = await db.admin().command({ currentOp: 1, $all: 1 });\n const activeConnections: Record<string, number> =\n currentActivities.inprog.reduce(\n (acc: Record<string, number>, curr: Record<string, number>) => {\n const appName = curr.appName ? curr.appName : 'Unknown';\n acc[appName] = (acc[appName] || 0) + 1;\n acc['TOTAL_CONNECTION_COUNT']++;\n return acc;\n },\n { TOTAL_CONNECTION_COUNT: 0 }\n );\nMongoDB CPS Module",
"text": "Hi,I have a MongoDB cluster running on Atlas. We connect each client to MongoDB by passing an appName with that we know who connects to the cluster and we can identify them.At some points, we get a huge connection spike of unknown connections and we can’t identify them.If an alert happens we identify the unknown connections with this script:I tried analyzing the logs and I see a large number of connections from private IPs. These private IPs have (most of the time) an app name set. For example, they are MongoDB CPS Module.It is not super straightforward to analyze the logs so I am not sure which IPs the UNKNOWN connections are. I only count the connections by IP.Did anybody have a similar problem and some suggestions on how to tackle that?Thank you Sandro",
"username": "Alessandro_Volpicella"
},
{
"code": "MongoDB CPS ModuleMongoDB CPS ModuleappNames",
"text": "Hey @Alessandro_Volpicella,Welcome to the MongoDB Community Forums! At some points, we get a huge connection spike of unknown connections and we can’t identify them.For clarification - How large is the spike in the connections? Additionally, does the spike of connections cause some performance issues?I tried analyzing the logs and I see a large number of connections from private IPs. These private IPs have (most of the time) an app name set. For example, they are MongoDB CPS Module.As you pointed out some of the connections are named MongoDB CPS Module - I believe this is related to the Cloud Provider Snapshots (CPS) (although you can confirm with the Atlas chat support team) which is the Backup offering available on Atlas. Additionally, please also see the following note from the Connections Limits and Cluster Tier documentation:Atlas reserves a small number of connections to each Atlas cluster for supporting Atlas services. Contact Atlas support for more information on Atlas reserved connections.If you’re sure that all the private IP’s do not belong to any of your clients, I’d recommend you contact the Atlas in-app chat support team to verify if these IPs are part of Atlas service(s). It would be best to advise how you have determined the IP’s / appNames (e.g. MongoDB CPS Module), if these connection spikes cause any issues, as well as how large the spikes are to the support team to verify if this is expected or not.Please let us know if this helps. Feel free to reach out for anything else as well.Regards,\nSatyam",
"username": "Satyam"
},
{
"code": "192.168.xx opened: 346 closed: 346 \n192.168.xx opened: 228 closed: 228 \n",
"text": "Thanks for the welcome For clarification - How large is the spike in the connections? Additionally, does the spike of connections cause some performance issues?Spikes are everywhere in the range between 1,000 - 4,000 connections of Unknown connections. I do see app names such asBut the unknown portion is insanely high. And we set all app names everywhere. If I analyze the logs I see so many private IPs and in my eyes these can only be MongoDB internal connections then right?If you’re sure that all the private IP’s do not belong to any of your clients, I’d recommend you contact the Atlas in-app chat support team to verify if these IPs are part of Atlas service(s).\nI did that. Unfortunately, the support is super slow and didn’t help really much. I can try again.The support also shared a file that displays the unknown connections and they display mainly a high number of connections coming from private IPs. But the support didn’t give me any explanation. They also tell me that the chat is not helping for dedicated technical issues (somewhere in that saying).This is what they shared, or the first two lines.I hope this helps. I am open to any suggestions since we had this issue on the weekend again.Additionally, does the spike of connections cause some performance issues?\nYes if the connections are at max our production application loses the connection and prod is down.Thanks for you help ",
"username": "Alessandro_Volpicella"
}
] | MongoDB Atlas - Unknown Connections spikes from private IPs | 2023-02-02T16:45:28.303Z | MongoDB Atlas - Unknown Connections spikes from private IPs | 1,286 |
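A small mongosh sketch in the same spirit as the script in the first post of the thread above, grouping current connections (including idle ones) by appName and client address so the source of unidentified connections is easier to see. The required privileges for $currentOp with allUsers, and whether all fields are populated on a given Atlas tier, are assumptions to verify on your cluster.

```javascript
// mongosh, illustrative: tally connections per appName and client IP.
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: true } },
  {
    $group: {
      _id: { appName: { $ifNull: ["$appName", "Unknown"] }, client: "$client" },
      connections: { $sum: 1 }
    }
  },
  { $sort: { connections: -1 } }
]);
```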
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hey folks!I have a question about whether it would be wiser to use multiple databases or multiple collections within a single database when designing for applications at scale.I’m anticipating using some sharding for certain types of data I’m storing, but I’d like to keep other collections unsharded to more easily query the dataset as a whole (using indexes, of course).Since unsharded collections are all stored on the primary shard, would it make more sense for me to separate collections that I want to remain unsharded into their own databases? That way there’s less data stored on a primary shard for a given database, which - to my understanding - would reduce the need for sharding in that database.Moreover, would this be more expensive for my backend API? I’d have to maintain connections to several databases rather than a single database, although they’d all still be in the same cluster.Or, is this a misunderstanding on my part? Would creating more databases actually reduce the need for sharding, or is the need for sharding determined more by the storage available in the cluster?Please let me know if you need any clarification as to what I’m asking. Thank you!",
"username": "Matthew_Eva"
},
{
"code": "\"expensive\"shard",
"text": "Hi @Matthew_Eva,Welcome to the MongoDB Community forums I have a question about whether it would be wiser to use multiple databases or multiple collections within a single database when designing for applications at scale.The answer depends on the specific requirements of the application and the nature of the data being stored.Using multiple databases can help ensure that each database can be optimized for a specific set of use cases, and can simplify scaling and management operations. It also helps in ensuring that the failure of one database does not affect other databases.Using multiple collections within a single database can simplify deployment and management operations and reduce latency for cross-collection data relationships. It can also make it easier to take advantage of certain database-specific features.Ultimately, there is no one-size-fits-all answer to this question, as the best approach will depend on the specific requirements of the application. It’s always a good idea to consider the specific use cases and workloads to determine the most appropriate design.Since un-sharded collections are all stored on the primary shard, would it make more sense for me to separate collections that I want to remain un-sharded into their own databases? That way there are less data stored on a primary shard for a given database, which - to my understanding - would reduce the need for sharding in that database.Yes, it depends on the use case, but a database is just a namespace. Actually, in the sharding doc page, all mentions are about collections. Not about databases. Note this part in particular:“A database can have a mixture of sharded and un-sharded collections. Sharded collections are partitioned and distributed across the shards in the cluster. Unsharded collections are stored on a primary shard. Each database has its own primary shard.”Moreover, would this be more expensive for my backend API? I’d have to maintain connections to several databases rather than a single database, although they’d all still be in the same cluster.I’m not clear about the \"expensive\" concern here. Could you explain this bit more in what concern you are asking for?Also, do you have a specific design goal that requires the use of sharding in mind that cannot be achieved with a normal replica set?Please note that sharding requires the use of more infrastructure and planning compared to a regular replica set, and consequently, the maintenance of a sharded cluster will also be more complex.Or, is this a misunderstanding on my part? Would creating more databases actually reduce the need for sharding, or is the need for sharding determined more by the storage available in the cluster?The need for sharding is not solely determined by the storage available in the cluster, although that is one factor to consider. Sharding is a technique used to distribute the data and load across multiple machines, allowing the database to handle increased load and scale horizontally.However, the need for sharding is ultimately determined by the scale characteristics of the data and the workloads being run. 
If you have a large amount of data and high write or read loads, you may still need to shard, even if you have multiple databases.In my suggestion, it’s a good idea to evaluate the specific requirements of the application, including the data being stored, the queries being run, and the preferred scale, in order to determine the best approach to sharding and the use of multiple databases.I hope it helps!Let us know if you have any further questions!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hey @Kushagra_Kesav !Thank you so much for the thorough and detailed response! This is very helpful.My question actually originated from the piece of documentation you excerpted:A database can have a mixture of sharded and un-sharded collections. Sharded collections are partitioned and distributed across the shards in the cluster. Unsharded collections are stored on a primary shard. Each database has its own primary shard.Specifically this last part - each database has it’s own primary shard.My conclusion was that since each database has it’s own primary shard, storing multiple unsharded collections in one database would place them all on the primary shard of that database, which would eat up that primary shard’s storage capacity.Whereas, creating multiple databases would create multiple primary shards (since each database has it’s own), which would mean that two unsharded collections stored in two separate databases would have more storage capacity available to each of them, which could help prevent the need for sharding, or at least help minimize the degree of sharding necessary.Is this incorrect? The way I’m conceptualizing it is that each database essentially gets its own independent shard that is separate from shards in other databases. I.e. a cluster with seven separate databases would have seven separate primary shards, all with independent storage capacity. Is this not how it actually works?I’m planning to use sharding because I anticipate having a volume of data that would exceed the storage capacity of an unsharded cluster. I’ve decided to use mongodb as my database because it specifically allows for horizontal scaling via sharding.As far as “expensive” goes, I suppose I should have said “resource intensive”. I know that establishing a connection to a cluster is a relatively resource intensive operation, but once connected to that cluster, is it resource intensive to query to multiple different databases within that cluster from the same API? Or is it basically as resource intensive as querying to multiple collections within a single database?Thank you so much for your help!Best,Matt",
"username": "Matthew_Eva"
},
{
"code": "",
"text": "Hi @Matthew_Eva,As I understand your main concern is aroundwhich is understandable.In a sharded MongoDB setup, each database has its own primary shard. If you have multiple collections in one database that are not sharded, they would all be stored on the primary shard of that database, which could lead to storage capacity issues on that shard. Also, Having un-sharded collections in a sharded database can become a problem if the collections grow too large, as this can result in an imbalance of data across primary shards, leading to a concentration of un-sharded collections on a few primary shards. To mitigate this risk, it’s recommended to regularly monitor the growth of collections and shard them proactively if necessary.But this depends on the use case and the growth rate of the data.Please refer to this doc for sharding operational restrictions.Furthermore, having un-sharded collections in a sharded database is not inherently an issue, as long as the collections remain relatively small. However, as the collections grow, the risk of an imbalance of data across primary shards increases. This concentration can lead to performance issues, as well as increased resource usage on the primary shards that are handling the majority of the un-sharded collections. To avoid these problems, it is important to monitor the growth of collections and shard them proactively if necessary to ensure a balanced distribution of data across primary shards.As far as “expensive” goes, I suppose I should have said “resource intensive”. I know that establishing a connection to a cluster is a relatively resource-intensive operation, but once connected to that cluster, is it resource intensive to query multiple different databases from the same API? Or is it basically as resource intensive as querying multiple collections within a single database?Querying across multiple databases in MongoDB usually takes up the same amount of resources as searching across multiple collections in one database. The primary factor that affects the resource utilization of a query is the size of the data being queried and the complexity of the query itself, not the number of databases or collections.Additionally, maintaining consistency across multiple databases can be more difficult, especially if you are using transactions or need to ensure that updates to one database are visible to another.I hope it clarifies your doubt!Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hi @Kushagra_Kesav,Yes, this exactly answers my question! Thank you so much!Best,Matt",
"username": "Matthew_Eva"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Multiple databases vs. multiple collections | 2023-02-02T22:20:37.686Z | Multiple databases vs. multiple collections | 5,008 |
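A short mongosh sketch of the mixed layout discussed in the thread above, with invented database and collection names: only the large, high-traffic collection is sharded, while small collections stay unsharded on the database's primary shard.

```javascript
// mongosh, run against a sharded cluster (illustrative names).
sh.enableSharding("appdb");

// Distribute the large, write-heavy collection across shards with a hashed key.
sh.shardCollection("appdb.events", { userId: "hashed" });

// Smaller collections such as appdb.settings remain unsharded and live on
// appdb's primary shard. Data placement can be inspected with:
db.getSiblingDB("appdb").events.getShardDistribution();
```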
null | [
"aggregation",
"queries"
] | [
{
"code": "{\n \"UserCount\": 10,\n \"_id\": 30,\n},\n{\n \"UserCount\": 22,\n \"_id\": 31,\n},\n{\n \"UserCount\": 30,\n \"_id\": 32,\n},\n{\n \"UserCount\": 35,\n \"_id\": 33,\n}\n{\n \"UserCount\": 10,\n \"_id\": 30,\n \"difference\" : \"12\",\n},\n{\n \"UserCount\": 22,\n \"_id\": 31,\n \"difference\" : \"8\",\n},\n{\n \"UserCount\": 30,\n \"_id\": 32,\n \"difference\" : \"5\",\n},\n{\n \"UserCount\": 35,\n \"_id\": 33,\n \"difference\" : \"0\",\n}\n",
"text": "My goal is to find two random documents. Each document has a UserCount. The usercount difference between these two documentActually, I’m getting an output belowexpecting output below.using MongoDB query, it is very difficult to solve this problem.",
"username": "Sai1232"
},
{
"code": "",
"text": "using MongoDB query, it is very difficult to solve this problemIt is very hard with any system because your problem is not well defined.How do you get your differences of “12” for _id:30 ? Is it the difference between UserCount of _id:30 and UserCount of _id:31 ?What do you mean byfind two random documentsIs it random? Or you want the difference between the next _id ?If difference is the result of a subtraction why do you put it between quotes as a string?",
"username": "steevej"
}
] | Find the difference Between The Documents in single collection | 2023-02-06T06:14:03.122Z | Find the difference Between The Documents in single collection | 394 |
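One possible reading of the question above, which the thread never pins down, is the difference between each document's UserCount and the UserCount of the document with the next _id. Under that assumption (and assuming MongoDB 5.0+ for $setWindowFields, with an invented collection name), a sketch looks like this; it produces the differences as numbers (12, 8, 5, 0) rather than strings.

```javascript
db.counts.aggregate([
  {
    $setWindowFields: {
      sortBy: { _id: 1 },
      output: {
        // UserCount of the following document; null for the last one.
        nextCount: { $shift: { output: "$UserCount", by: 1 } }
      }
    }
  },
  {
    $set: {
      difference: {
        $subtract: [{ $ifNull: ["$nextCount", "$UserCount"] }, "$UserCount"]
      }
    }
  },
  { $unset: "nextCount" }
]);
```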
null | [
"dot-net",
"document-versioning"
] | [
{
"code": "",
"text": "We are using C# driver in .netcore. We are looking for sound implementation of Document Versioning similar to SQL Server versioning. I have gone through BlogPost where they discuss document versioning.Thanks",
"username": "Marmik_Shah"
},
{
"code": "",
"text": "Document versioning in MongoDB can be implemented in multiple ways, each with its pros and cons. Here are a few options:Whichever method you choose, make sure to test it thoroughly and to consider the trade-off between the complexity of the solution and the requirements for versioning in your use case.",
"username": "Sumanta_Mukhopadhyay"
},
{
"code": "",
"text": "Does the process which reads Change Stream have to be running consciously or it can have downtime? Is there the concept of “resume from” when I start reading from ChangeStream again? What are the provisions for consistency in this?I was also thinking about implementing versioning in my application itself with the below approach.Below operations would be in the transactionInsert for the document (newDocument) comes in with PrimaryKey = PolicytIdWhat do you think about this approach?",
"username": "Marmik_Shah"
},
{
"code": "",
"text": "Hi @Marmik_ShahThere are a couple of blog posts that may be useful to you regarding this topic:Note that those two posts are quite technical, but contains a lot of interesting information regarding the pros/cons of some approaches. Those are a bit old, but still relevant today since it deals with schema design.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thanks for the links.Does the process which reads Change Stream have to be running consciously or it can have downtime? Is there the concept of “resume from” when I start reading from ChangeStream again? What are the provisions for consistency in this?SeeMongoDB triggers, change streams, database triggers, real time",
"username": "steevej"
}
] | Document Versioning | 2023-02-04T19:47:21.600Z | Document Versioning | 3,502 |
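On the "can the change stream reader have downtime / resume from where it left off" question in the thread above, a hedged Node.js sketch; the collection and token-store names are invented. A change stream can be resumed with a persisted resume token as long as the corresponding oplog entry has not rolled off.

```javascript
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI);
await client.connect();
const db = client.db("appdb");

// Load the resume token persisted by a previous run, if any.
const saved = await db.collection("watcherState").findOne({ _id: "policies" });

const stream = db.collection("policies").watch([], {
  fullDocument: "updateLookup",
  ...(saved ? { resumeAfter: saved.token } : {})
});

for await (const change of stream) {
  // Archive the change as a new version, then persist the resume token so a
  // restart continues from this point instead of re-reading or missing events.
  await db.collection("policiesHistory").insertOne({ change, archivedAt: new Date() });
  await db.collection("watcherState").updateOne(
    { _id: "policies" },
    { $set: { token: change._id } },
    { upsert: true }
  );
}
```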
[
"atlas-functions"
] | [
{
"code": "",
"text": "Hi,I created a Function that takes 3 parameters (dateInitial, dateFinal and groupdId). It has System Authentication.Users have a custom parameter called groupId.I specified an authorization expression to load this function only if the groupId argument passed to the function is equal to the custom groupId parameter of the user who called the function, but it does not work.\nauthorization1097×574 26.9 KB\nWhere am I going wrong?",
"username": "Bruno_Nobre"
},
{
"code": "",
"text": "Same situation here, I am not able to use a function parameter inside the rule expression. It seems that there is not expasion operator to use these params.Were you able to solve this? I can’t find any information about this in the docs.",
"username": "Manuel_Da_Silva"
},
{
"code": ".callFunction(\n \"myFunctionName\",\n \"First parameter\",\n 123\n )\n",
"text": "It is possible to access the arguments sent to the function by using the %%args expansion operator, but this expands into an array representing the arguments that were sent to the function:using the %%args operator will expand into [“First parameter”, 123] with the example above.To access a certain argument from that array you can use “%%args.0” for accessing “First parameter” or “%%args.1” to access the 123 value. Using the %%args expansion plus the index of your argument in the original function you can have access to it within the rule expression. ",
"username": "Manuel_Da_Silva"
}
] | Protect Function | 2021-11-03T15:20:49.361Z | Protect Function | 2,900 |
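Building on the last reply above, a sketch of the kind of can-evaluate expression it describes. This assumes the groupId is the function's third argument (index 2, matching the dateInitial/dateFinal/groupId signature from the original post) and that the caller's custom user data stores a groupId field; both are assumptions drawn from the thread rather than a confirmed configuration.

```json
{
  "%%args.2": "%%user.custom_data.groupId"
}
```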