image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"schema-validation",
"reference-pattern"
] | [
{
"code": "db.createCollection(\"users\", {\n validator: {\n $jsonSchema: {\n bsonType: \"object\",\n required: [ \"username\", \"password\" ],\n properties: {\n username: {\n bsonType: \"string\",\n description: \"must be a string and is required\"\n },\n password: {\n bsonType: \"string\",\n minLength: 8,\n description: \"must be a string at least 8 characters long, and is required\"\n }\n }\n }\n }\n} )\n",
"text": "All of the examples of JSON schema validation I have found use the following snippet’s format:If I want the validation to use a schema.JSON file stored within the same Atlas’ collection, what is the syntax to compare adherence to this separate schema file, after the opening ’ $jsonSchema: {…'?Thank you for helping a noob JSON schema user.",
"username": "Stephen_Clark"
},
{
"code": "var schema = db.getCollectionInfos({name: \"users\"})[0].options.validator;\ndb.createCollection(\"users1\", { validator: schema } );\nconst filter = { name: 'users' };\nconst collectionInfos = await db.listCollections(filter).toArray();\n",
"text": "@Stephen_Clark , I see you want to reuse the validator of existing collection . In that caseIn mongo shell , you can useIn driver, i see that getCollectionInfo isn’t available, so you can use listCollections instead",
"username": "Vivekanandan_Sakthivelu"
}
] | Syntax for json schema validation within Atlas? | 2023-08-22T00:21:11.147Z | Syntax for json schema validation within Atlas? | 420 |
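For readers landing on this thread, a minimal mongosh sketch of the approach described above: read the stored validator of one collection and reuse it elsewhere. The collection names ("users_archive", "profiles") are placeholders.

```javascript
// Assumes a "users" collection that already has a $jsonSchema validator.
const info = db.getCollectionInfos({ name: "users" })[0];
const schemaValidator = info.options.validator;

// Create a new collection that enforces the same schema...
db.createCollection("users_archive", { validator: schemaValidator });

// ...or apply it to an existing collection with collMod.
db.runCommand({ collMod: "profiles", validator: schemaValidator });
```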
null | [] | [
{
"code": "sudo apt-get updateHit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease\nHit:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease\nHit:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease\nHit:4 https://deb.nodesource.com/node_16.x focal InRelease\nIgn:5 https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 InRelease\nHit:6 http://security.ubuntu.com/ubuntu jammy-security InRelease\nGet:7 https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 Release [2090 B]\nGet:8 https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 Release.gpg [866 B]\nIgn:8 https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 Release.gpg\nReading package lists... Done\nW: GPG error: https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 160D26BB1785BA38\nE: The repository 'https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 Release' is not signed.\nN: Updating from such a repository can't be done securely, and is therefore disabled by default.\nN: See apt-secure(8) manpage for repository creation and user configuration details.\n/usr/share/keyrings/mongodb-server-7.0.gpg",
"text": "I have been following the instructions to upgrade the MongoDB server from 6 to 7 from the official page. Then I tried to install version 7 according to this. When I run sudo apt-get update, I am getting the following error:My keyring is in place in this file: /usr/share/keyrings/mongodb-server-7.0.gpgHow to fix this error?",
"username": "khat33b"
},
{
"code": "",
"text": "Same issue.The docs say to just repeat the Import the public key used by the package management system step, but that didn’t work for me.",
"username": "Jason_Gonzales"
},
{
"code": "sudo apt-key listcurl -fsSL https://pgp.mongodb.com/server-7.0.asc | sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor",
"text": "One thing I found was the keyring is not showing when running sudo apt-key list. No matter how many times I run the curl -fsSL https://pgp.mongodb.com/server-7.0.asc | sudo gpg -o /usr/share/keyrings/mongodb-server-7.0.gpg --dearmor command, the key ring doesn’t show up there but the file is created.",
"username": "khat33b"
},
{
"code": "sudo apt-key list",
"text": "I found that the keyring is not showing when running sudo apt-key list for me.",
"username": "khat33b"
},
{
"code": "sudo vi /etc/apt/sources.list.d/mongodb-org-7.0.listdeb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse",
"text": "The solution is to put the keyring location in the sources file.Edit the file using sudo vi /etc/apt/sources.list.d/mongodb-org-7.0.list and put this instead:deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverseReference to how key rings work: repository - What commands (exactly) should replace the deprecated apt-key? - Ask Ubuntu",
"username": "khat33b"
},
{
"code": "curl -fsSL https://pgp.mongodb.com/server-7.0.asc | \\\n sudo gpg -o /etc/apt/trusted.gpg.d/mongodb-server-7.0.gpg \\\n --dearmor\n",
"text": "Another solution is to create the keyring in /etc/apt/trusted.gpg.d/ instead of /usr/share/keyrings/.\nThe command would beThen sudo apt-get update to check if it can find the keyring",
"username": "Fahad_Hassan"
},
{
"code": "http://repo.mongodb.org/apt/debian bullseye/mongodb-org/7.0 Release.gpg [866 B]",
"text": "Thanks, it works\nhttp://repo.mongodb.org/apt/debian bullseye/mongodb-org/7.0 Release.gpg [866 B]",
"username": "Mad_Demon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Getting error while updating MongoDB from 6 to 7 in Ubuntu 22.04 | 2023-08-16T07:50:40.881Z | Getting error while updating MongoDB from 6 to 7 in Ubuntu 22.04 | 2,793 |
null | [
"serverless"
] | [
{
"code": "if (body) {\n const reviews = context.services.get(\"mongodb-atlas\").db(\"myDBName\").collection(\"myCollectionName\");\n \n const reviewDoc = {\n name: body.name,\n user_id: body.user_id,\n date: new Date(),\n text: body.text,\n restaurant_id: BSON.ObjectId(body.restaurant_id)\n };\n \n return await reviews.insertOne(reviewDoc);\n }\n else{\n return {};\n }\n};\n{\n \"text\": \"test\",\n \"name\": \"JoeB\",\n \"user_id\": \"123\",\n \"restaurant_id\": \"64e3247f8c589fd65db49b50\"\n}\n",
"text": "Greetings-I have a VERY simple POST that adds a record to a collection. It works in the RQL function editor for the serverless endpoint, yet not in postman:HOWEVER, when I try the same POST in postman via JSON object:I get a 400 Bad request and the error “ObjectId in must be a single string of 12 bytes or a string of 24 hex characters.” It seems to have an issue with the “restaurant_id” field which is an ObjectIdPlease inform.",
"username": "Kevin_N"
},
{
"code": "{\n \"text\": \"test\",\n \"name\": \"JoeB\",\n \"user_id\": \"123\",\n \"restaurant_id\": \"64e3247f8c589fd65db49b50\"\n}\n",
"text": "Hi @Kevin_N,HOWEVER, when I try the same POST in postman via JSON object:I get a 400 Bad request and the error “ObjectId in must be a single string of 12 bytes or a string of 24 hex characters.” It seems to have an issue with the “restaurant_id” field which is an ObjectIdCan you provide any log details or details regarding the request being performed in postman for when you get the error?Regards,\nJason",
"username": "Jason_Tran"
}
] | POST works in RQL/Atlas in browser, but not in Postman | 2023-08-21T16:32:56.948Z | POST works in RQL/Atlas in browser, but not in Postman | 453 |
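A hedged sketch of how the function quoted above could guard against that conversion error: validate the incoming string before calling BSON.ObjectId, so a missing or malformed restaurant_id produces a clear message instead of the generic 400. It assumes, as in the original post, that `body` is the already-parsed JSON payload.

```javascript
// Sketch only, mirroring the function body quoted above with a guard added.
if (body) {
  const idText = body.restaurant_id;

  // BSON.ObjectId throws unless given 12 bytes or 24 hex characters,
  // which is exactly the 400 error reported in this thread.
  if (!idText || !/^[0-9a-fA-F]{24}$/.test(String(idText))) {
    return { error: "restaurant_id must be a 24-character hex string" };
  }

  const reviews = context.services
    .get("mongodb-atlas")
    .db("myDBName")
    .collection("myCollectionName");

  return await reviews.insertOne({
    name: body.name,
    user_id: body.user_id,
    date: new Date(),
    text: body.text,
    restaurant_id: BSON.ObjectId(idText)
  });
}
return {};
```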
[
"atlas-functions"
] | [
{
"code": "",
"text": "Hi guys, I’m trying to create a Data Federation that integrates my data with a Bucket in S3 inside MongoAtlas, these documents as a base: https://www.mongodb.com/developer/products/atlas/automated-continuous - data-copying-from-mongodb-to-s3/#what-is-parquet-But currently it is returning the error below:\n\nimage1218×786 37.7 KB\n",
"username": "Marcelo_Caldas"
},
{
"code": "",
"text": "Hi @Marcelo_Caldas,Seems like it runs for a several seconds before having the error returned. I would advise contacting the Atlas in-app chat support if you have not already done so to see if theres any further information regarding this error.Regards,\nJason",
"username": "Jason_Tran"
}
] | (InternalError) an error occurred when communicating with AWS S3 - Data Federation | 2023-08-17T19:07:57.049Z | (InternalError) an error occurred when communicating with AWS S3 - Data Federation | 472 |
null | [
"transactions"
] | [
{
"code": "",
"text": "Hi\nI need to alter mongoldb.cfg for transactions for Atlas as I do when running MonogDB locally.I am using the MongoDB C driver to connect to Atlas.\nI need to set:setParameter:\ntransactionLifetimeLimitSeconds: 500Does anyone know how to do this?",
"username": "Phillip_Carruthers"
},
{
"code": "setParameter",
"text": "Hi @Phillip_Carruthers,I need to set:setParameter:\ntransactionLifetimeLimitSeconds: 500The setParameter command in unsupported in Atlas as per the Unsupported Commands in Atlas documentation.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi\nIs there any way to set the value for transactionLifetimeLimitSeconds?\nThe current value is not large enough for my transactions to complete.Phillip",
"username": "Phillip_Carruthers"
},
{
"code": "",
"text": "Hi Phillip,Is there any way to set the value for transactionLifetimeLimitSeconds?Currently as per the documentation for unsupported commands in Atlas states:Contact Atlas support if your use case requires access to a command that the Atlas database user privileges don’t currently support.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How do I alter the Atlas mongoDB transaction configuration | 2023-08-20T12:08:01.403Z | How do I alter the Atlas mongoDB transaction configuration | 411 |
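For context only: on a self-managed deployment (not Atlas, where setParameter is unsupported as noted above), the parameter can be changed at runtime from mongosh. This sketch simply illustrates what the Atlas restriction rules out.

```javascript
// Self-managed deployments only; Atlas rejects setParameter.
// Raise the transaction lifetime limit to 500 seconds at runtime.
db.adminCommand({ setParameter: 1, transactionLifetimeLimitSeconds: 500 });

// Read the current value back.
db.adminCommand({ getParameter: 1, transactionLifetimeLimitSeconds: 1 });
```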
null | [
"queries"
] | [
{
"code": "{\"makeMonth\": \"Apr\", \"makeYear\": \"2023\"}\n",
"text": "I have a collection called MachineData, which holds the data of all the machines, and the dates they were made. Each document has a field called makeMonth and makeYear, which would have values of “Apr” and “2023”, respecitvely. I want to have an email that sends out monthly on the first of each month, and grabs the data for the reporting month/year (so in this case, on Aug. 1st all machines in July 2023 would be graphed).This is rather simple to do without having a dynamic query that will know the current month/year, just by doingI was not sure if it is even possible to grab the current month/year and compare it to the field within the data set. As well, I know how to set up the charts to go out via email, so that is not the issue. Issue stems from creating the correct query, if anyone can lead me in the right direction. I was looking at ISODATE, but not sure how I would do that, and the document would need to be updated to match that string created by ISODATE, which is not too much of an issue.As well, I do not really mind if the email has to go out on the last day of the month, just to make it easier grabbing the date, really would just like to get the query to be dynamic with the changing month/year, so we do not have to manually edit the query once a month via the interface.Thanks in advance for any leads!",
"username": "bigorca54"
},
{
"code": "",
"text": "I answered something similar recently that may be help, you can add a custom pipeline to the report query:You should be able to do something similar in your query that the report uses.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Awesome, thanks @John_Sewell ! I will give it a look.",
"username": "bigorca54"
},
{
"code": "{\n $expr: {\n $and: [\n {\n $lte: [\n '$date',\n {\n $dateFromString: {\n dateString: {\n $dateToString: {\n date: '$$NOW',\n format: '%Y-%m-%d'\n }\n }\n }\n }\n ]\n },\n {\n $gte: ['$date', '$beginningDate']\n }\n ]\n }\n}\n",
"text": "@John_SewellSo I have the query that can grab the date, although one thing I am having trouble with is the start point.So right now I can grab all machines that are made on August 11th, 2023. Although, how would I get something that puts a filter of August 1st, 2023 through August 11th, 2023? Not sure if I would have to do something like grab the current month/year within my code, put it into a document within the collection and check that document using an expression (?) in the query? Sorry for the questions if they are elementary…Edit:It would be something like this, correct?db.monthlyBudget.find( { $expr: { $gt: [ “$spent” , “$budget” ] } } )Although like this for the example:Where ‘beginningDate’ field is within a document within the collection, that will get updated every time the application itself (software which interacts with the DB), gets updated to hold the Month/Year where we start, and then the $$NOW variable will grab until current time.",
"username": "bigorca54"
},
{
"code": "$set$dateFromString",
"text": "Note that the easiest way to filter data for the current or previous month is to drag a date field into the Filters pane and choose the Period option:In your case the challenge may be that you don’t actually store the relevant data in a date field, but you could probably get around this by putting a $set in the query bar and using $dateFromString to assemble it from its component paths.Tom",
"username": "tomhollander"
},
{
"code": "$set$dateFromString",
"text": "@tomhollanderThanks for the input Tom! So in order to get to this, I would need to be using ISODATE function in the date field for it to be relevant?Otherwise, you would be using the $set and $dateFromString to create basically variables within the query to probably query the data?Thanks! Sorry if the questions are basic…just getting started with Mongo!",
"username": "bigorca54"
},
{
"code": "IsoDate()Date()$set$dateFromString[\n { \n $set: {\n parsedDate: {\n $dateFromString: {\n dateString: { $concat: [ \"$makeMonth\", \" 01 \", \"$makeYear\" ] },\n format: \"%b %d %Y\"\n }\n }\n }\n }\n]\n",
"text": "IsoDate() is just a helper function to help you can use to create constant date values in your queries. You can also use the normal JavaScript Date() function for this purpose. I don’t see a need to use either here.The $set pipeline stage creates a calculated field in your pipeline that can set a new value derived from other values in each document. In your case I suggested using the $dateFromString to assemble the Date-typed field since it provides options to parse the month name which you mentioned was in your data.So I’d expect your query to look something like (note, not tested):",
"username": "tomhollander"
},
{
"code": "LocalDateTime currentDateTime = LocalDateTime.now();\nDateTimeFormatter format = DateTimeFormatter.ofPattern(\"yyyy-MM-dd'T'HH:mm:ss.SSS'Z'\");\nString isoFormattedDate = currentDateTime.format(format);\nisoFormattedDate: \"2023-08-21T09:27:32.581Z\"\nisoFormattedDate",
"text": "@tomhollander,I wanted to make this easier for the future so I am trying to correctly format the dates of the documents. I went ahead and added this within my Java codeand received the expected output, which now the collection has a field called isoFormattedDate, which haswithin the document, along with other fields.Although, when I go to create a chart first thing I notice is the isoFormattedDate field is not present within the collection, so I press “Add Fields” and then look it up explicitly and it adds it in. Following this, when I go to move that field into the “Filter” category, I do not receive what you were showing me in a previous reply. I get thisIs it because the date field is still not correctly formatted? Thanks.",
"username": "bigorca54"
},
{
"code": "currentDateTime",
"text": "The issue here is that you are storing your date as a string. No matter how you format it, a string is a different data type to a date, and you can’t do any kind of date arithmetic on it. If you put your Java currentDateTime directly into the MongoDB document it should be persisted with the correct date type.",
"username": "tomhollander"
}
] | Dynamic Queries for MongoDB Atlas Charts | 2023-08-11T12:48:59.727Z | Dynamic Queries for MongoDB Atlas Charts | 764 |
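Building on the pipeline Tom sketched above, here is a hedged example of a query-bar stage that keeps only documents from the previous calendar month, using $$NOW, $dateTrunc, and $dateSubtract (MongoDB 5.0+). It assumes a parsedDate field of Date type, assembled as in Tom's $set stage.

```javascript
[
  // ...Tom's $set stage above goes here to assemble `parsedDate`...

  // Keep only documents from the previous calendar month relative to "now".
  {
    $match: {
      $expr: {
        $and: [
          {
            $gte: [
              "$parsedDate",
              {
                $dateTrunc: {
                  date: { $dateSubtract: { startDate: "$$NOW", unit: "month", amount: 1 } },
                  unit: "month"
                }
              }
            ]
          },
          { $lt: ["$parsedDate", { $dateTrunc: { date: "$$NOW", unit: "month" } }] }
        ]
      }
    }
  }
]
```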
null | [
"sharding",
"mongodb-shell"
] | [
{
"code": "",
"text": "Upgrading MongoDb from 5.0.20 to 6.0.9 on ubuntu 20.04Hey guys, I am having an issue right now upgrading my mongoDB to v6.0.9\nNow I have a server which has a version 5.0.20 and all working fine.Steps done,\nFirst I updated the ubuntu server from 18.04 to 20.04 (eventually to 22.04).\nThen I went to upgrade MongoDB from version 5.0.20 to version 6.0.9.\nNo issue there, but when I want to check the version it ask me to install mongodb-client\nPlease note that I installed the packages by downloding them not from apt-get install.\nSee bellow for more detailsCommand ‘mongo’ not found, but can be installed with:\napt install mongodb-clients< mongo --version\nMongoDB shell version v5.0.20\nBuild Info: {\n“version”: “5.0.20”,\n“gitVersion”: “2cd626d8148120319d7dca5824e760fe220cb0de”,\n“openSSLVersion”: “OpenSSL 1.1.1f 31 Mar 2020”,\n“modules”: ,\n“allocator”: “tcmalloc”,\n“environment”: {\n“distmod”: “ubuntu1804”,\n“distarch”: “x86_64”,\n“target_arch”: “x86_64”\n}\n}>< dpkg -l | grep mongoii mongodb-database-tools \t\t100.8.0 amd64 mongodb-database-tools package provides tools for working with the MongoDB server:\nii mongodb-mongosh \t\t0.15.6 amd64 MongoDB Shell CLI REPL Package\nii mongodb-org \t\t5.0.20 amd64 MongoDB open source document-oriented database system (metapackage)\nii mongodb-org-database \t\t5.0.20 amd64 MongoDB open source document-oriented database system (metapackage)\nii mongodb-org-database-tools-extra \t5.0.20 amd64 Extra MongoDB database tools\nii mongodb-org-mongos \t\t5.0.20 amd64 MongoDB sharded cluster query router\nii mongodb-org-server \t\t5.0.20 amd64 MongoDB database server\nii mongodb-org-shell \t\t5.0.20 amd64 MongoDB shell client\nii mongodb-org-tools \t\t5.0.20 amd64 MongoDB tools><# ls -1\nmongodb-database-tools_100.8.0_amd64.deb\nmongodb-mongosh_1.10.5_amd64.deb\nmongodb-org_6.0.9_amd64.deb\nmongodb-org-database_6.0.9_amd64.deb\nmongodb-org-database-tools-extra_6.0.9_amd64.deb\nmongodb-org-mongos_6.0.9_amd64.deb\nmongodb-org-server_6.0.9_amd64.deb\nmongodb-org-shell_6.0.9_amd64.deb\nmongodb-org-tools_6.0.9_amd64.deb><dpkg -i *.deb><# dpkg --list | grep mongo\nii mongodb-database-tools \t\t100.8.0 amd64 mongodb-database-tools package provides tools for working with the MongoDB server:\nii mongodb-mongosh \t\t1.10.5 amd64 MongoDB Shell CLI REPL Package\nii mongodb-org \t\t6.0.9 amd64 MongoDB open source document-oriented database system (metapackage)\nii mongodb-org-database \t\t6.0.9 amd64 MongoDB open source document-oriented database system (metapackage)\nii mongodb-org-database-tools-extra \t6.0.9 amd64 Extra MongoDB database tools\nii mongodb-org-mongos \t\t6.0.9 amd64 MongoDB sharded cluster query router\nii mongodb-org-server \t\t6.0.9 amd64 MongoDB database server\nii mongodb-org-shell \t\t6.0.9 amd64 MongoDB shell client\nii mongodb-org-tools \t\t6.0.9 amd64 MongoDB tools>At this point I cannot connect to the db in command line or get the version as I mentionned above.As anyone seen this issue?\nThank you",
"username": "Perry_Santos"
},
{
"code": "",
"text": "Well I guess I found the answer.\nNeed to use mongosh instead of mongo to connect to db",
"username": "Perry_Santos"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb upgrading from 5.0.20 to 6.0.9 issue | 2023-08-21T18:59:03.920Z | Mongodb upgrading from 5.0.20 to 6.0.9 issue | 612 |
null | [
"dot-net",
"connecting"
] | [
{
"code": "1.ExecuteGetMoreCommand(IChannelHandle channel, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.AsyncCursor1.MoveNext(CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.AsyncCursorEnumerator",
"text": "Hello Everyone ,I’m currently using C# driver (v2.17.1 ) to connect to local Mongo Database and then loop through every document and update a field inside the document.\nOne iteration might take 2 to 5 seconds .This works up until 1.4k to 1.5k documents .\nLater there is an exception which is like below :MongoDB.Driver.MongoCursorNotFoundException: Cursor 4299221987179083891 not found on server localhost:27017 using connection 1465.\nat MongoDB.Driver.Core.Operations.AsyncCursor1.ExecuteGetMoreCommand(IChannelHandle channel, CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.AsyncCursor1.GetNextBatch(CancellationToken cancellationToken)\nat MongoDB.Driver.Core.Operations.AsyncCursor1.MoveNext(CancellationToken cancellationToken) at MongoDB.Driver.Core.Operations.AsyncCursorEnumerator1.MoveNext()My C# code looks like below :string connectionString = @“mongodb://localhost:27017/”;\nMongoClient dbClient = new MongoClient(connectionString );\nstring dbName = “mongodbname”;\nstring collectionName = “mongodbcollectionname”;\nvar db_sample = dbClient.GetDatabase(dbName);\nvar collection = db_sample.GetCollection(collectionName);foreach (DataModel cdm in collection.AsQueryable())\n{\n//Iteration and updation of documents .\n}Any help is much appreciated !Thanks in advance.",
"username": "bharath_narayan"
},
{
"code": "cursorIdgetMorecursorIdgetMoregetMoreToList()collection.AsQueryable().ToList()AsQueryable(new AggregationOptions { BatchSize = N })getMorecollection.AsQueryable().Where(x => x.SortField > lastSeen).OrderBy(x => x.SortField).Take(10)cursorTimeoutMillis",
"text": "Hi, @bharath_narayan,Welcome to the MongoDB Community Forums.When a query fetches documents from MongoDB those documents are returned in 16MB batches. The initial query returns the first 16MB batch along with a cursorId, which can be further iterated by calling getMore with that cursorId. Fetching additional results is handled automatically by the driver by internally calling getMore when the current bath is exhausted.By default MongoDB terminates idle cursors after 10 minutes. Thus if it takes you more than 10 minutes to process a 16MB batch, the getMore will fail because the idle cursor has been killed.There are a few possible solutions:Hope this helps.Sincerely,\nJames",
"username": "James_Kovacs"
}
] | MongoDB C# MongoCursorNotFoundException | 2023-08-21T08:42:23.134Z | MongoDB C# MongoCursorNotFoundException | 521 |
null | [] | [
{
"code": "",
"text": "Hello,I embedded my dashboard to an application, say School. Once logging into the particular user account (Teacher account), application has option to select one student. There is list of student email id’s. If we select one student email id, it should show the dashboard relating to only that particular student.\nI have used injected function option to verify teacher account.const stringId = context.token.UserId;\nreturn { “teacherId”: stringId };Now my issue is, when I am selecting student emaild, I am not sure how to capture that value and where to provide that filter option in mondodb dashboard.Thank you,\nSunita",
"username": "sunita_kodali"
},
{
"code": "",
"text": "Please close the issue.",
"username": "sunita_kodali"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Selecting user value | 2023-08-17T17:06:42.269Z | Selecting user value | 292 |
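The thread was closed without a worked answer, so here is a hedged sketch of one common approach with the Charts Embedding SDK: pass the selected student as a chart filter from the host application (the field must be allowed as a filterable field in the chart's embed settings). The base URL, chart ID, and studentEmail field name are placeholders.

```javascript
import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";

const sdk = new ChartsEmbedSDK({ baseUrl: "https://charts.mongodb.com/charts-project-xxxxx" });
const chart = sdk.createChart({ chartId: "your-chart-id" });
await chart.render(document.getElementById("chart"));

// When the teacher picks a student, narrow the chart to that student.
async function onStudentSelected(studentEmail) {
  await chart.setFilter({ studentEmail: studentEmail });
}
```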
null | [
"node-js",
"crud",
"containers"
] | [
{
"code": "MongoServerSelectionError{\"topologyId\":0,\"previousDescription\":{\"type\":\"Single\",\"servers\":{},\"stale\":\"REDACTED\",\"compatible\":\"REDACTED\",\"heartbeatFrequencyMS\":\"REDACTED\",\"localThresholdMS\":\"REDACTED\",\"setName\":null,\"maxElectionId\":null,\"maxSetVersion\":null,\"commonWireVersion\":\"REDACTED\",\"logicalSessionTimeoutMinutes\":null},\"newDescription\":{\"type\":\"Single\",\"servers\":{},\"stale\":false,\"compatible\":true,\"heartbeatFrequencyMS\":\"REDACTED\",\"localThresholdMS\":\"REDACTED\",\"setName\":null,\"maxElectionId\":null,\"maxSetVersion\":null,\"commonWireVersion\":null,\"logicalSessionTimeoutMinutes\":30}}{\"topologyId\":0,\"previousDescription\":{\"type\":\"Single\",\"servers\":{},\"stale\":false,\"compatible\":true,\"heartbeatFrequencyMS\":30000,\"localThresholdMS\":15,\"setName\":null,\"maxElectionId\":null,\"maxSetVersion\":null,\"commonWireVersion\":0,\"logicalSessionTimeoutMinutes\":30},\"newDescription\":{\"type\":\"Single\",\"servers\":{},\"stale\":false,\"compatible\":true,\"heartbeatFrequencyMS\":30000,\"localThresholdMS\":15,\"setName\":null,\"maxElectionId\":null,\"maxSetVersion\":null,\"commonWireVersion\":0,\"logicalSessionTimeoutMinutes\":30}}MongoServerSelectionError\"REDACTED\"findOneUpdateMany\"REDACTED\"",
"text": "To whom it may concern,We are running node native driver 4.13, running with node16 docker hosted in Azure Kubernetes service. We are connecting to cosmosDB with mongo api, which is essentially a standalone mongodb and manages all replica informations on server side. We are connecting to the DB using direct mode give the replication is managed by server side.We do experience network blip time to time and most of time the running pods would be able to recover but some pods are not able to and went into a “stuck” state: all commands experienced MongoServerSelectionError and with more detailed message that timedout after 30 seconds.I have enabled cluster monitor on sdk side, and compared side by side for the “good” vs “bad” pods. Here are my findings:\nAfter initial heartbeat failure due to network issues, which leads to server description change, topology description change, connection pool cleared/ready a series events. All pods will stabilized. The recover logic is working as expected. However, I do notice for the “bad” stuck pods, their topology description change events would include some value of “REDACTED”, and if their stabilized topology event change include “REDACTED” information, while these “good” pods, they don’t have it. Samples:\n“BAD pods”:\n{\"topologyId\":0,\"previousDescription\":{\"type\":\"Single\",\"servers\":{},\"stale\":\"REDACTED\",\"compatible\":\"REDACTED\",\"heartbeatFrequencyMS\":\"REDACTED\",\"localThresholdMS\":\"REDACTED\",\"setName\":null,\"maxElectionId\":null,\"maxSetVersion\":null,\"commonWireVersion\":\"REDACTED\",\"logicalSessionTimeoutMinutes\":null},\"newDescription\":{\"type\":\"Single\",\"servers\":{},\"stale\":false,\"compatible\":true,\"heartbeatFrequencyMS\":\"REDACTED\",\"localThresholdMS\":\"REDACTED\",\"setName\":null,\"maxElectionId\":null,\"maxSetVersion\":null,\"commonWireVersion\":null,\"logicalSessionTimeoutMinutes\":30}}\n“Good pods”\n{\"topologyId\":0,\"previousDescription\":{\"type\":\"Single\",\"servers\":{},\"stale\":false,\"compatible\":true,\"heartbeatFrequencyMS\":30000,\"localThresholdMS\":15,\"setName\":null,\"maxElectionId\":null,\"maxSetVersion\":null,\"commonWireVersion\":0,\"logicalSessionTimeoutMinutes\":30},\"newDescription\":{\"type\":\"Single\",\"servers\":{},\"stale\":false,\"compatible\":true,\"heartbeatFrequencyMS\":30000,\"localThresholdMS\":15,\"setName\":null,\"maxElectionId\":null,\"maxSetVersion\":null,\"commonWireVersion\":0,\"logicalSessionTimeoutMinutes\":30}}A few questions I have:Many thx!",
"username": "Xin_Zhang"
},
{
"code": "",
"text": "Hey @Xin_Zhang,We are running node native driver 4.13, running with node16 docker hosted in Azure Kubernetes service. We are connecting to cosmosDB with mongo api, which is essentially a standalone mongodb and manages all replica informations on server side.The CosmosDB is a Microsoft product and is semi-compatible with a genuine MongoDB server. Hence, we couldn’t comment on how it works, or even know why it’s not behaving like a genuine MongoDB server.As of now, CosmosDB currently passes only 33.51% of MongoDB server tests, so I would recommend reaching out to CosmosDB support regarding this issue.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hello @Kushagra_KesavThanks for the information!",
"username": "Xin_Zhang"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | TopologyDescriptionChanged ends up with REDACTED informations and in a bad state | 2023-08-15T18:32:00.022Z | TopologyDescriptionChanged ends up with REDACTED informations and in a bad state | 489 |
null | [
"aggregation",
"dot-net",
"atlas-search"
] | [
{
"code": "coll.Aggregate(new AggregateOptions()\n {\n BatchSize = 1000\n })\n .Search(\n compound.ToSearchDefinition(),\n indexName: \"myCoolIndex\",\n highlight: new SearchHighlightOptions<T>(Builders<T>.SearchPath.Multi(searchpaths)),\n ***** SEARCH GO HERE? ***** \n returnStoredSource: true)\nnew BsonArray\n{\n new BsonDocument(\"$search\", \n new BsonDocument\n {\n { \"index\", \"myCoolIndex\" }, \n { \"compound\", \n new BsonDocument(\"should\", \n new BsonArray\n {\n new BsonDocument(\"autocomplete\", \n new BsonDocument\n {\n { \"query\", \"aSearchValue\" }, \n { \"path\", \"somefieldpath\" }\n }),\n new BsonDocument(\"autocomplete\", \n new BsonDocument\n {\n { \"query\", \"aSearchValue\" }, \n { \"path\", \"a Different Field Path\" }\n })\n }) }, \n { \"highlight\", \n new BsonDocument(\"path\", \n new BsonArray\n {\n \"some Field Path\",\n \"a Different Field Path\"\n }) }, \n { \"sort\", \n new BsonDocument(\"A Field to Sort On\", -1) }, \n { \"returnStoredSource\", true }\n }),\n new BsonDocument(\"$addFields\", \n new BsonDocument(\"SearchHighlightDetails\", \n new BsonDocument(\"$meta\", \"searchHighlights\")))\n}\n",
"text": "I cannot find a way to add a “Sort” to the MongoDB Search function within C#. Am I missing something or must this be done via BsonDocuments?Simple Example:Of course I know I can do the below, but is it part of the C# driver and I am just missing it?",
"username": "Mark_Mann"
},
{
"code": "",
"text": "Is there any plan for the C# driver “.Search” function to accept a FilterDefinition for the “filter” component that can accompany a search or a SortDefinition directly?",
"username": "Mark_Mann"
},
{
"code": "mongodb:mastersboulema:feature/CSHARP-4728-atlas-search-sort",
"text": "Hi @Mark_Mann and welcome to MongoDB community forums!!Could you confirm if the below links are the ones that you are looking for.\nhttps://jira.mongodb.org/browse/CSHARP-4728\nThe pull request for which isThe 10 July 2023 release of MongoDB Atlas Search introduced the sort option.\n\n…[Atlas Search Changelog - 10 July 2023 Release](https://www.mongodb.com/docs/atlas/atlas-search/changelog/#10-july-2023-release)\n\n[Sort Atlas Search Results](https://www.mongodb.com/docs/atlas/atlas-search/sort/#std-label-sort-ref)\n\nIt would be very nice to have access to the sort field from the C# driver\nThe 10 July 2023 release of MongoDB Atlas Search introduced the sort option.Let us know if you are looking for something else.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This does appear to be what I am looking for. Thank you",
"username": "Mark_Mann"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas Search + C# + Sorting | 2023-08-11T19:19:23.339Z | Atlas Search + C# + Sorting | 614 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hello,\nI have mongodb replicaset with one primary and two secondaries (running as statefulset in K8s). Few days ago when running tests, the disks went full. After allocating more diskspace we managed to get mongodb back to running. Then we deleted with ‘deletemany’ documents from the db. This did not reclaim the disk, as the compact is needed to run.But, when running the compact, I cannot use loop in secondaries, like:db.getCollectionNames().forEach(function(collname) {print('Compacting: ’ + collname);});So I cannot loop trought all the collections. Also I need to run the compact commands per collection, on all nodes. First on secondaries then change primary to secondary and there as well.\nIs this the correct process, is what I am thinking.Thanks in any advance!",
"username": "tatuh"
},
{
"code": "",
"text": "First on secondaries then change primary to secondary and there as well.This is the recommended way to compact on a replica set.If the remaining data is small enough it may be faster to perform an initial sync.Delete the data and journal (if you have it separate) pvc of one secondary at a time. Step down primary and repeat.",
"username": "chris"
},
{
"code": "",
"text": "Thank you! But,Then, what would be right way to run the compact after delete? You cannot run compact on primary?And, why would I delete data per node? I tought delete many is the right way to delete records, and the compact is needed to reclaim the space.",
"username": "tatuh"
},
{
"code": "",
"text": "Yes deletemany is the correct way to remove documents from your collection.If you are going to run the compact command then you do it as you’ve outlined. First on the secondaries and then step down the primary and run on that node as it’s a secondary now.The other method @chris mentioned is doing an initial sync on each node after doing the delete many. Because if you delete the data and journal files when the data is replicated it’ll only claim the space used by the documents.",
"username": "tapiocaPENGUIN"
}
] | Reclaim disk space after delete many in mongodb replicaset | 2023-08-21T06:46:28.450Z | Reclaim disk space after delete many in mongodb replicaset | 610 |
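A hedged mongosh sketch of the per-node compact loop discussed above, run while connected directly to one secondary at a time. The connection string and the decision to skip system collections are assumptions.

```javascript
// Connect directly to one secondary, e.g.:
//   mongosh "mongodb://secondary-host:27017/?directConnection=true"
// Reads on a secondary need a secondary read preference.
db.getMongo().setReadPref("secondaryPreferred");

db.getCollectionNames().forEach(function (collname) {
  if (collname.startsWith("system.")) return; // leave system collections alone
  print("Compacting: " + collname);
  printjson(db.runCommand({ compact: collname }));
});
```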
null | [
"atlas-device-sync",
"transactions"
] | [
{
"code": "",
"text": "We recieve the following error from a partition based sync realm app:Enabling Sync …approximately 34283/140520 (24.40%) documents copied\nErrors… db.tbl: “encountered error when flushing batch: error in onBatchComplete callback: error integrating batch: error performing database transaction to flush translator batch: inserting entries into client history failed: timed out while checking out a connection from connection pool: context deadline exceeded; maxPoolSize: 20, connections in use by cursors: 0, connections in use by transactions: 0, connections in use by other operations: 10”, db.tbl: “encountered error when flushing batch: error in onBatchComplete callback: error integrating batch: error performing database transaction to flush translator batch: inserting entries into client history failed: context deadline exceeded”, db.tbl: “encountered error when flushing batch: error in onBatchComplete callback: error integrating batch: error performing database transaction to flush translator batch: failure from transaction callback in batch upload integration: error flushing unsynced documents: error flushing unsynced documents cache: context deadline exceeded”, db.tbl: “encountered error when flushing batch: error in onBatchComplete callback: error integrating batch: error performing database transaction to flush translator batch: failure from transaction callback in batch upload integration: error flushing unsynced documents: error flushing unsynced documents cache: context deadline exceeded”, db.tbl: “encountered error when flushing batch: error in onBatchComplete callback: error integrating batch: error performing database transaction to flush translator batch: failure from transaction callback in batch upload integration: error flushing unsynced documents: error flushing unsynced documents cache: context deadline exceeded”, db.tbl: “encountered error when flushing batch: error in onBatchComplete callback: error integrating batch: allocating new client versions failed: connection(…) incomplete read of message header: context canceled”Any idea of what can be done about it?",
"username": "Sandeepani_Senevirathna"
},
{
"code": "",
"text": "Hi, we have a ticket to clean up this message but this is a transient mongodb error that one of our writes was rejected (cluster is overloaded, primary changing, etc) and we had to error out and restart (should happen automatically). It looks like your app is up and running though and writes are processing quickly. Let me know if there is anything else we can help with.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "hello, I am facing the following problem in this api route. https://vila.up.railway.app/post/index/most-recent-card/popupthere are some other routes bursting this problem from what I saw the problem is in mongo",
"username": "Mateus_Henrique"
}
] | Error starting sync | 2022-12-05T11:57:52.810Z | Error starting sync | 2,485 |
[
"replication",
"storage"
] | [
{
"code": "tail -f /data/mongodb.log\n\n{\"t\":{\"$date\":\"2023-08-19T07:31:06.694+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795901, \"ctx\":\"SignalHandler\",\"msg\":\"WiredTiger closed\",\"attr\":{\"durationMillis\":433}}\n{\"t\":{\"$date\":\"2023-08-19T07:31:06.694+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22279, \"ctx\":\"SignalHandler\",\"msg\":\"shutdown: removing fs lock...\"}\n{\"t\":{\"$date\":\"2023-08-19T07:31:06.694+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"SignalHandler\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-19T07:31:06.694+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":4784926, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time data capture\"}\n{\"t\":{\"$date\":\"2023-08-19T07:31:06.694+00:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20626, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down full-time diagnostic data capture\"}\n{\"t\":{\"$date\":\"2023-08-19T07:31:06.738+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"SignalHandler\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-08-19T07:31:06.738+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"SignalHandler\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\n",
"text": "Hi I am facing an issue\nimage1893×171 11.9 KB\n",
"username": "edam_meghanath"
},
{
"code": "\"msg\":\"Now exiting\"}\n\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":0}}\nmms-automation",
"text": "Hey @edam_meghanath,Welcome to the MongoDB Community!Based on the log snippets you provided, it looks like your MongoDB server was shut down gracefully. It exits with code 0, indicating a clean shutdown.Few things to check:If the shutdown was intentional and everything seems normal on restart, then it likely was just a clean shutdown event.Also, I noticed there are also mms-automation processes that usually belong to Ops Manager or Cloud Manager. If you’re having any specific issues with those products, please open a support ticket so the issue can be addressed by engineers.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Mongodb shutting down | 2023-08-19T10:16:40.483Z | Mongodb shutting down | 519 |
|
null | [
"graphql"
] | [
{
"code": "",
"text": "I have generated GraphQL queries on realm app based on the schema. Generated GraphQL query has multiple filter options like _exists, _in, _gt, _lt, _nin, _gte etc.I’m looking to fetch all the records where name starts with “xyz” or string contains any given word. Can anyone suggest how to apply such filters?",
"username": "Vivek_Sharma"
},
{
"code": "",
"text": "Hi Vivek – For this case we would recommend creating a custom resolver that leverages MongoDB’s Text Search.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "thanks for the response Drew,I would imagine taking the custom resolver route for any specialized functionality where I have the freedom to implement the behaviour I want. However regex pattern matching on string fields should be part of auto-generated filters.Do you think it could be implemented as generic feature at some point in future? or, are there any risks etc in adding it to default auto-generated filters like lack of indexes?",
"username": "Vivek_Sharma"
},
{
"code": "",
"text": "Hi Vivek – We’re definitely considering adding a Regex pattern to our defaults but don’t have a timeline for when this would be added. This is slightly complicated by the fact that $search is becoming the recommendation for use cases like this but requires additional set-up on the cluster.",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "I am using the graphQL custom resolvers, and while they will do what you want, you are now operating outside the autogenerated types. For instance, you cannot combine things like in, greater than, less than with the custom resolvers because you have to rewrite all that.A better solution would be to add custom Query Inputs. I get that $search is recommended from the database perspective, but I already have to look at the pros and cons of searching. I just want to extend the GraphQL schema a bit more than just a custom resolver.",
"username": "Jorden_Lowe"
},
{
"code": "",
"text": "+1. It’s frustrating to have to choose between all the generated options that map to find automatically (lt, gt, etc) and a custom resolver to implement missing features like pagination or search.",
"username": "mdierker"
},
{
"code": "",
"text": "+1 on this! This is the missing piece of the jigsaw!!! Please please mongo team implement this. If additional setup is needed on the cluster then so be it, surely we can check the collection for required setup before generating the query types.",
"username": "beef1"
}
] | GraphQL query string filters like "StartsWith" or "Contains" | 2020-12-30T21:47:58.245Z | GraphQL query string filters like “StartsWith” or “Contains” | 14,697 |
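Until prefix or contains filters exist in the generated input types, the custom-resolver route suggested above can be sketched roughly like this: an App Services function backing a custom query field, matching on a case-insensitive prefix with $regex. The database, collection, field names, and the escaping helper are illustrative only.

```javascript
// Custom resolver function, e.g. backing `namesStartingWith(prefix: String): [Item]`.
exports = async function (prefix) {
  const items = context.services
    .get("mongodb-atlas")
    .db("myDb")
    .collection("items");

  // Escape regex metacharacters so the prefix is matched literally.
  const escaped = String(prefix).replace(/[.*+?^${}()|[\]\\]/g, "\\$&");

  return items
    .find({ name: { $regex: `^${escaped}`, $options: "i" } })
    .limit(50)
    .toArray();
};
```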
null | [] | [
{
"code": "",
"text": "Hello,I’m working on a project where we want to measure and calculate how long it takes for a user to do a task. However, I’m having significant issues assigning a field as a ‘time’ type variable for displaying and calculating it.For example, I want to measure how long it takes for a person from arriving to a scene to do a task. This would look like time 1 minus time 2, and then displaying the mean time from the whole database. This is measured in minutes.The issue I see here, is that mongo charts seems not to have a ‘time’ formatting option, like it does for ‘date’ meaning any charts or calculation comes back as a null. Any tips or advice appreciated!Cheers!",
"username": "Andrew_Bivard"
},
{
"code": "$dateDiff",
"text": "Hi @Andrew_Bivard -While there isn’t a dedicated Charts feature for this, you should be able to accomplish this by using the $dateDiff operator in a calculated field or in the query bar.HTH\nTom",
"username": "tomhollander"
},
{
"code": "",
"text": "Thanks @tomhollander\nMuch appreciated, and thanks for linking the relevant documentation\nWith the data I have, it presents the error:\nUnrecognized expression ‘$dateDiff’, correlationID = 177d4b3a2bdfaff536c06954I’m not quite sure what’s causing this error.",
"username": "Andrew_Bivard"
},
{
"code": "",
"text": "What version is your cluster? As per the docs, this expression was added in MongoDB 5.0.",
"username": "tomhollander"
}
] | Help calculating and displaying time in min (not dates) | 2023-08-21T02:48:53.489Z | Help calculating and displaying time in min (not dates) | 412 |
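For completeness, a small example of the calculated-field expression Tom pointed to: $dateDiff (MongoDB 5.0+) producing the duration in minutes between two Date-typed fields. The field names are placeholders.

```javascript
// Calculated field (or a $set stage in the query bar): minutes between two dates.
{
  $dateDiff: {
    startDate: "$arrivedAt",
    endDate: "$taskCompletedAt",
    unit: "minute"
  }
}
```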
null | [
"queries",
"node-js",
"mongoose-odm"
] | [
{
"code": "const mongoose = require(\"mongoose\");\nmongoose.connect(\"mongodb://127.0.0.1:27017/fruitsDB\", { useNewUrlParser: true });\n\nconst fruitSchema = new mongoose.Schema({\n name: {\n type: String,\n required: [true, \"Please check your data entry, no name specified!\"]\n },\n rating: {\n type: Number,\n min: 1,\n max: 6\n },\n review: String\n});\n\nconst Fruit = mongoose.model(\"Fruit\", fruitSchema);\n\nconst personSchema = new mongoose.Schema({\n name: String,\n age: Number\n});\n\nconst Person = mongoose.model(\"Person\", personSchema);\n\nasync function saveFruitAndPerson() {\n try {\n // Creating and saving a Fruit document\n const fruit = new Fruit({\n name: \"Grapes\",\n rating: 6,\n review: \"Grape is a good fruit\"\n });\n const savedFruit = await fruit.save();\n console.log(\"Saved fruit:\", savedFruit);\n\n // Creating and saving a Person document\n const person = new Person({\n name: \"John\",\n age: 37\n });\n const savedPerson = await person.save();\n console.log(\"Saved person:\", savedPerson);\n\n // Using findOne to retrieve a Fruit document\n const retrievedFruit = await Fruit.findOne({ name: \"Grapes\" });\n console.log(\"Retrieved fruit:\", retrievedFruit);\n\n // Using findOne to retrieve a Person document\n const retrievedPerson = await Person.findOne({ name: \"John\" });\n console.log(\"Retrieved person:\", retrievedPerson);\n } catch (error) {\n console.error(\"Error:\", error);\n } finally {\n mongoose.connection.close(); // Close the database connection\n }\n}\n\nsaveFruitAndPerson();\n",
"text": "",
"username": "Arun_vel"
},
{
"code": "Throw new MongooseError(‘Model.find() no longer accepts a callback’);\nPromiseasync/await",
"text": "Hey @Arun_vel,Welcome to the MongoDB Community.The use of callback functions has been deprecated in the latest version of Mongoose (version 7.x).Reference: Mongoose v7.4.3: Migrating to Mongoose 7If you are using Mongoose 7.x+, please modify the functions that use a callback by switching to the Promise or async/await syntax.If you have any further questions or concerns, feel free to ask.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Throw new MongooseError('Model.find() no longer accepts a callback'); | 2023-08-20T16:39:22.946Z | Throw new MongooseError(‘Model.find() no longer accepts a callback’); | 1,214 |
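A minimal before/after sketch of the migration the reply describes: any query that previously passed a callback is awaited instead.

```javascript
// Mongoose 6 and earlier (callback style), no longer accepted in Mongoose 7:
// Fruit.find({ name: "Grapes" }, (err, fruits) => { /* ... */ });

// Mongoose 7+: use async/await (or .then) instead.
const fruits = await Fruit.find({ name: "Grapes" });
```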
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "I need to create a terraform / terragrunt script to create a scheduled trigger function similar to what was created for event bridge database trigger can this be provided by mongodb",
"username": "Renato_Palabasan1"
},
{
"code": "mongodbatlas_event_triggermongodbatlas_event_trigger",
"text": "Hey @Renato_Palabasan1,Welcome to the MongoDB Communtiy!I need to create a terraform / terragrunt script to create a scheduled triggerTo create a scheduled trigger for MongoDB Atlas using Terraform, you can use the mongodbatlas_event_trigger (Terraform Registry) resource.The mongodbatlas_event_trigger resource allows you to create both scheduled and database triggers that invoke functions.Please take a look at the documentation and let us know if this is helpful for your use case.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Need for terraform script to create scheduled mongodb trigger function | 2023-06-27T19:06:01.979Z | Need for terraform script to create scheduled mongodb trigger function | 739 |
null | [
"connecting",
"vscode"
] | [
{
"code": "",
"text": "I created a cluster in MongoDB and I am trying to connect it to visual studio code. When in visual studio code I have the MongoDB extension and choose to create a connection. I have the connection string and when I enter the string and hit enter it gives me an authentication error. What is the reason for this error and why will it not make the connection.",
"username": "Kurt_Pfeffer"
},
{
"code": "",
"text": "An aufhentication error mean you have the wrong user name or password.",
"username": "steevej"
},
{
"code": "",
"text": "I gave the password in the password field and copied the connection URL from the connect option for VS Studio (where the URL already has my name), but it still shows an authentication error. Is there any way?",
"username": "Swetha_S"
},
{
"code": "",
"text": "If you have the correct URL, the correct user name and the correct password then you should NOT have an authentication error. So you are doing something wrong. But with the little amount of details you shared it is impossible for any of us to tell you what you are doing wrong.",
"username": "steevej"
},
{
"code": "",
"text": "@Kurt_Pfeffer having the same problem. Have you found a solution?mongodb+srv://:@cluster0.d2ncc.mongodb.net/",
"username": "Gabidas_Biz"
},
{
"code": "",
"text": "My problem was the allowed IP address list. I have to update mine or allow from anywhere",
"username": "Gabidas_Biz"
},
{
"code": "",
"text": "I am having a similar problem. Only my error states “URL malformed”, which I do not comprehend. As I literally copied the URL for connecting to VSC which was provided by mongodb. I changed the “password” to my password and pasted it. How on earth can the URL be malformed? Could it be that the program does not like the % sign, which I must use to replace the ! in my password?Personally, I feel there should be a student option which does not require a password. I am completing my server-side development course @UCI because business analysts suddenly need to know how to build and run databases to attain an engagement. The coding part is not my hang up. It is connecting these programs to VSC that keeps tripping me up, consuming most of my time. Any tips or tricks?",
"username": "Mae_Goodman"
}
] | Authentication error when connecting MongoDB to Visual Studio code through connection string | 2022-06-18T22:54:28.678Z | Authentication error when connecting MongoDB to Visual Studio code through connection string | 5,445 |
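On the last point about special characters: a "URL malformed" error usually means the username or password needs percent-encoding before it goes into the connection string, and a stray "%" that is not part of a valid escape will itself make the URI invalid. A hedged JavaScript sketch, with the credentials and cluster host as placeholders:

```javascript
// Reserved characters in the user name or password ($ : / ? # [ ] @) must be
// percent-encoded; encodeURIComponent handles this for you.
const user = encodeURIComponent("myUser");
const password = encodeURIComponent("p@ss/word!"); // -> "p%40ss%2Fword!"

const uri = `mongodb+srv://${user}:${password}@cluster0.example.mongodb.net/`;
```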
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "MongoServerSelectionError: Server selection timed out after 30000 ms.I am using node version 16.14.0 and mongoose version of 6.11.4. I am getting the same error repeatedly. I have checked the mongodb server also it is working fine.",
"username": "Suresh_S1"
},
{
"code": "27017hostname/portusername/password mongoose.connect(uri)\nconnectTimeoutMS mongoose.connect(uri, {\n useNewUrlParser: true,\n connectTimeoutMS: 30000, // increase from default\n })\n",
"text": "Hey @Suresh_S1,Welcome to the MongoDB Community!MongoServerSelectionError: Server selection timed out after 30000 ms. I am getting the same error repeatedly.Please let us know if the timeout continues after verifying connectivity and adjusting the timeout settings. Feel free to provide more details on your setup and we’ll be glad to help you troubleshoot the issue!Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Even from mongosh and MongoDB Compass I am able to connect to local server. The port 27017 is not blocked. Even I tried connectionTimeoutMS and also increased the time. But still I am getting the same error.MongoServerSelectionError: Server selection timed out after 30000 ms. In my project I use try catch block. In the try block first I use mongoose.connect to connect to the master DB and then parallelly I try to connect to my other data bases using mongoose.createConnection. In the old version of mongoose which is 5.10.3 the same is working fine. When I updated my mongoose to 6.11.4 the same above mentioned error is occuring. Please support the above issue.Thanks & Regards,\nSuresh",
"username": "Suresh_S1"
},
{
"code": "mongoose.createConnectionv6.11.4const mongoose = require('mongoose');\nconst uri = 'mongodb://localhost:27017';\n\nmongoose.connect(uri, { useUnifiedTopology: true, useNewUrlParser: true }).\n then(() => console.log('Connected')).\n catch(err => console.log('Connection Error', err.stack));\n",
"text": "Hey @Suresh_S1,In the try block first I use mongoose.connect to connect to the master DB and then parallelly I try to connect to my other data bases using mongoose.createConnection.To better assist with troubleshooting the issue, it would be very helpful if you could provide a code snippet that reproduces the problem along with the full text of the error message you are encountering. That will allow me to replicate the behavior on my end and troubleshoot further.In the meantime, I tried the following script using Mongoose v6.11.4 which connects to a local MongoDB instance as expected:Please let me know if the above code snippet works for you, as it may help narrow down where the issue lies. Also, feel free to post any code and error logs you can share.Look forward to hearing from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "\nimage943×250 7.41 KB\nThe same code was working with earlier version of mongoose which is 5.10.3. When I have updated to the 6.11.4 the same was throwing an error. Refer the below image\nScreenshot 2023-07-27 0808171822×342 32.8 KB\nHelp me in this case if you find any solution",
"username": "Suresh_S1"
},
{
"code": "",
"text": "@Kushagra_Kesav Any solution for the above problem.",
"username": "Suresh_S1"
}
] | MongoServerSelectionError node js -16 | 2023-07-19T11:39:06.521Z | MongoServerSelectionError node js -16 | 609 |
null | [
"queries",
"atlas-cluster"
] | [
{
"code": "await client.connect();\nconst db = client.db();\n\nproducts = await db.collection('products').find().toArray();\n",
"text": "const MongoClient = require(‘mongodb’).MongoClient;const url =\n‘mongodb+srv://shivam20214008:[email protected]/products_test?retryWrites=true&w=majority’;const client = new MongoClient(url,{useNewUrlParser: true,useUnifiedTopology: true});\nconst createProduct = async (req, res, next) => {\nconst newProduct = {\nname: req.body.name,\nprice: req.body.price\n};console.log(“Going to try.”);try {\nconsole.log(“Going to try.”);\nconst a = await client.connect(\nprocess.env.MONGO_URL,\noptions,\n(err) =>{\nif(err) console.log(err)\nelse console.log(“mogodb is connected!”);\n}\n);\nconsole.log(a);\nconsole.log(“Going to try.”);\nconst db = client.db();\nconsole.log(“Going to try.”);\nconst result = db.collection(‘products’).insertOne(newProduct);\nconsole.log(“Going to try.”);\n//console.log(result);\n} catch (error) {\n// console.log(error);\nreturn res.json({message: ‘Could not store data.’});\n};\nclient.close();res.json(newProduct);\n};const getProducts = async (req, res, next) => {\nconst client = new MongoClient(url);let products;\ntry{}catch(error){\nreturn res.json({message: “Could not retrieve products”});\n};client.close();\nres.json(products);};exports.createProduct = createProduct;\nexports.getProducts = getProducts;It is giving error I have spent all the day searching for the error but i couldn’t find one.\nThe code inside the try block is executing only till the “Going to try” block after that block another “Going to try” block is there but it is not working.\nIf anyone can help?\nThanks in advance.",
"username": "Shivam_Kumar13"
},
{
"code": "",
"text": "Hi @Shivam_Kumar13,It is giving error I have spent all the day searching for the error but i couldn’t find one.It would be helpful to understand what the actual error message is. If you can provide this, then we may be able to assist easier.I’ve also sent you a DM regarding the connection string.Regards,\nJason",
"username": "Jason_Tran"
}
] | MongoDB Atlas error in connection | 2023-08-18T15:37:42.151Z | MongoDB Atlas error in connection | 418 |
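For readers hitting the same wall: the snippet in the question passes a URI and a callback to client.connect(), which the modern Node.js driver's instance method does not accept (the URI belongs in the MongoClient constructor), and the insertOne call is never awaited. A hedged minimal rewrite, with the URI, database, and collection names as placeholders:

```javascript
const { MongoClient } = require("mongodb");

const client = new MongoClient(process.env.MONGO_URL); // URI goes to the constructor

const createProduct = async (req, res) => {
  try {
    await client.connect(); // no arguments here
    const db = client.db("products_test");
    const result = await db.collection("products").insertOne({
      name: req.body.name,
      price: req.body.price
    });
    res.json({ insertedId: result.insertedId });
  } catch (err) {
    console.error(err);
    res.status(500).json({ message: "Could not store data." });
  } finally {
    await client.close();
  }
};
```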
null | [] | [
{
"code": "",
"text": "Hello everyone!Testing clustered collections, I’ve cheked that the find queries using $in operator dont scan the clustered index, scan the entire collection. Reading the documentation seems correct:Faster queries on clustered collections without needing a secondary index, such as queries with range scans and equality comparisons on the clustered index keySomeone knows if this improvement is coming in the next releases? I think that this functionalty would be fantastic!greetings!!",
"username": "juan_lluesma"
},
{
"code": "$in_id_id_id_id_id_id_id_id",
"text": "Hi @juan_lluesma welcome to the community!I believe you have read the main documentation page of Clustered Collections, and I think the answer to your questions are somewhat mentioned in that page.find queries using $in operator dont scan the clustered index, scan the entire collectionThis is actually alluded to in the paragraph you have quoted: clustered collection is best when you have equality or range queries. A $in query is not really either (more like multiple equality instead of a single equality), so it doesn’t benefit from a clustered index.This behaviour makes sense when you consider the following point from the doc page:A non-clustered collection stores the _id index separately from the documents.This is the “normal” _id index, and:A clustered collection stores the index and the documents together in _id value order.So a clustered collection stores the _id index together with the document.Thus when you have a query on _id that is a single equality, it’s relatively straightforward for a clustered index to zoom into the target document’s location. It doesn’t need to scan a separate _id index then fetch the related document, it can scan and fetch at the same time since the key and the document are stored together. Similar situation will be observed for a range query. However the keyword here is “scan”. Since the index and the document are stored together, scanning is pretty much needed.This is mentioned in the doc page as well:When a query uses a clustered index, it will perform a bounded collection scan.What “bounded collection scan” means is that it’s basically a scan, but it’s typically not the whole collection since we know where the relevant _id are.However, note that you can create secondary indexes that sort of make a clustered collection behave somewhat like a normal collection, so if you have a freeform query using other fields, presumably you can use a clustered collection as well.I think in short a clustered collection is beneficial if your queries are linear for the _id field. If you need a more freeform queries, then a regular collection may offer more flexibility. Which collection type is best depends on your use case; you might want to test clustered vs. non-clustered collections under a test workload and see which one comes out on top.Hope that make sense!Best regards\nKevin",
"username": "kevinadi"
},
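To make the explanation above concrete, here is a minimal mongosh sketch (the "orders" collection and the _id values are illustrative, not taken from the thread) of creating a clustered collection and running a range query that can use a bounded collection scan:

// Create a collection clustered on _id (syntax available from MongoDB 5.3)
db.createCollection("orders", {
  clusteredIndex: { key: { _id: 1 }, unique: true, name: "orders clustered key" }
})

// A single-range query on _id can be served by a bounded collection scan
db.orders.find({ _id: { $gte: 100, $lt: 200 } })

// Check the plan; a clustered scan typically appears as a CLUSTERED_IXSCAN stage
db.orders.find({ _id: { $gte: 100, $lt: 200 } }).explain("executionStats")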
{
"code": "{ _id: { $in: [6660057786, 6557989767] } }\n",
"text": "I have a query on a clustered index like this:Why would this not result in two equality checks, but a full collection scan?",
"username": "hk86"
},
{
"code": "$in$in",
"text": "Hi @hk86 welcome to the communityThis is because a $in query is an equality query, as I have mentioned in the above post:clustered collection is best when you have equality or range queries. A $in query is not really either (more like multiple equality instead of a single equality), so it doesn’t benefit from a clustered index.Hope this clears things out.Best regards\nKevin",
"username": "kevinadi"
}
] | Clustered collections dont use index | 2023-01-25T17:00:34.775Z | Clustered collections dont use index | 956 |
null | [] | [
{
"code": "",
"text": "Hi,I’m curious whether the Atlas Search functionality works with data that is in cold Online Archive storage. I guess not, but I can’t find anywhere in the documentation where this is clearly stated.Thanks,\nRichard",
"username": "rmales"
},
{
"code": "$search$search is only supported when querying Atlas data sources\n$search",
"text": "Hi @rmales - Welcome to the community.I believe you won’t be able to query online archive data sources using $search. I tried testing against a data federation instance with data in S3 and received the following in my test environment:I guess not, but I can’t find anywhere in the documentation where this is clearly stated.Thanks for the feedback with regards to the documentation and the behaviour of $search and online archive.If i’ve misunderstood your query please let me know but I hope the above helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does Atlas Search work with Online Archive (cold) data? | 2023-08-17T12:09:27.954Z | Does Atlas Search work with Online Archive (cold) data? | 435 |
null | [
"backup"
] | [
{
"code": "atlas backup restore start pointInTime --clusterName ${clusterName} --pointInTimeUTCMillis ${pointInTimeMillis} --targetClusterName ${clusterName} --targetProjectId ${projectIdProduction} --output jsonError: https://cloud.mongodb.com/api/atlas/v2/groups/<groupId>/clusters/<cluster-name>/backup/restoreJobs POST: HTTP 400 (Error code: \"BACKUP_PIT_RESTORE_TIME_INVALID\")\nDetail: Chosen Point in time timestamp invalid: Given point in time is too far ahead of latest oplog.. \nReason: Bad Request. Params: [Given point in time is too far ahead of latest oplog.]\n",
"text": "Hi there. My goal is to successfully use the continuous cloud backup feature, when doing automatic db migrations. (so a script can automatically recover the database, if migrations failed for some reason). The script runs this MongoDB Atlas CLI command, as described here:atlas backup restore start pointInTime --clusterName ${clusterName} --pointInTimeUTCMillis ${pointInTimeMillis} --targetClusterName ${clusterName} --targetProjectId ${projectIdProduction} --output jsonAnd sometimes if fails with:What does “Given point in time is too far ahead of latest oplog” means - and how can I mitigate this?",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "Hi @Alex_Bjorlig,What does “Given point in time is too far ahead of latest oplog” means - and how can I mitigate this?There is a maximum value stated in the UI for what value the oplog timestamp can be at most. I do not believe this same oplog timestamp is available to be retrieved currently using the public API so you’ll need to get this value from UI at the moment.For example:\n\nimage711×403 24 KB\nI’ve created the following feedback post and noted this internally with regards to this value being available via the API. You can vote for it if you wish.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Continous cloud backup; BACKUP PIT RESTORE TIME INVALID error | 2023-08-17T12:39:29.047Z | Continous cloud backup; BACKUP PIT RESTORE TIME INVALID error | 461 |
null | [] | [
{
"code": "",
"text": "I am getting a cluster told error:This M0 cluster cannot be resumed because the mongoDB version of its backup snapshot is too old. Please contact support to resume this cluster.I don’t have a support plan so not sure what to do. Is there a way to create copy the data from one cluster to another when the one cluster is not resumed? I didn’t take a snapshot before pausing it so I don’t have that. Is it possible to connect to the clusters collections while not resumed?I would rather not pay the $49 per month fee to just have this cluster started to grab a snapshot or have Mongo support restart it.",
"username": "Jason_Erv"
},
{
"code": "",
"text": "Hi @Jason_Erv,I would rather not pay the $49 per month fee to just have this cluster started to grab a snapshot or have Mongo support restart it.I would recommend contacting the atlas in-app chat support regarding this. The chat support is free to contact. You should see the chat bubble at the bottom right hand corner of the Atlas UI when logged into Atlas.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | Paused cluster to old to resume | 2023-08-20T18:44:46.420Z | Paused cluster to old to resume | 372 |
null | [
"installation"
] | [
{
"code": "Err:6 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 Release\nCould not handshake: The TLS connection was non-properly terminated. [IP: 185.46.212.98 9480]\nReading package lists... Done\nE: The repository 'https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 Release' does not have a Release file.\nN: Updating from such a repository can't be done securely, and is therefore disabled by default.\nN: See apt-secure(8) manpage for repository creation and user configuration details.\n<html>\n<head><title>404 Not Found</title></head>\n<body>\n<h1>404 Not Found</h1>\n<ul>\n<li>Code: NoSuchKey</li>\n<li>Message: The specified key does not exist.</li>\n<li>Key: apt/ubuntufocal/mongodb-org/4.4multiverse</li>\n<li>RequestId: C5JWFBGFTN5X4B10</li>\n<li>HostId: RHZGG+Mt6t/Gmo25ZCTy1L5+8NaZiz+NqCuDxYYd8iTyiKjMuUvUKU3Zm+gKbbJd5H5aMzal+fQ=</li>\n</ul>\n<hr/>\n</body>\n</html>\n",
"text": "Following the documentation for Mongo 4.4 on Ubuntu, I get the key, add the repository, then get the following error when running atp-get update:If I try to curl to the page I get the following:Am I looking at a key issue or is something not right with the repo?",
"username": "jon_rawlins"
},
{
"code": "https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 Release",
"text": "https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 ReleasePlease have a look on below link and follow steps for ubuntu 20MongoDB is an open-source document database used commonly in modern web applications. It is classified as a NoSQL database because it does not rely on a tra…",
"username": "ROHIT_KHURANA"
},
{
"code": "",
"text": "I have been following every guide I can find.",
"username": "jon_rawlins"
},
{
"code": "",
"text": "Hi @jon_rawlins,I tried above mongodb link on which worked for me.image1230×94 18.1 KBimage1589×154 39.1 KBimage1556×743 194 KB",
"username": "ROHIT_KHURANA"
},
{
"code": "",
"text": "I am also experiencing this issue trying to install MongoDB 4.4 on Ubuntu 20.04LTS … This mentions no release file for focal when I try apt update (after adding the PGP key of course). @jon_rawlins did you find a workaround for this?",
"username": "Etienne_Jacquot"
},
{
"code": "echo \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.listecho \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org.list",
"text": "the fix for this to change this line… basically remove 4.4 on the saved file …echo \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.listto this\necho \"deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org.list",
"username": "Marlon_Wenceslao"
},
{
"code": "echo \"deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list\n\nsudo apt update\n\n\napt-cache policy libssl1.0-dev\n\nsudo apt-get install libssl1.0-dev\n\n\nsudo apt-get install -y mongodb-org=3.4.17 mongodb-org-server=3.4.17 mongodb-org-shell=3.4.17 mongodb-org-mongos=3.4.17 mongodb-org-tools=3.4.17",
"text": "Hopefully, this can be helpful for installing MongoDB version 3.4.17\nTry this:",
"username": "Dhruv_Sharma2"
},
{
"code": "sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6\n",
"text": "Update, please also run this step of adding repository keys as the first step before running any of the other steps mentioned above:Apologies for the inconvenience!",
"username": "Dhruv_Sharma2"
},
{
"code": "",
"text": "tried everything but doesnot work\n\nimage953×526 197 KB\n",
"username": "Suman_Osti"
},
{
"code": "",
"text": "@Suman_Osti please update if you find any solution",
"username": "salman_musa"
},
{
"code": "",
"text": "Hi @salman_musa and welcome to the MongoDB community forums. You don’t state what issues you are having so it’s hard for us to help you out. Have followed the installation notes for putting MongoDB on Ubuntu 20.04? Please post any error messages you are getting so that we may help you get things fixed.",
"username": "Doug_Duncan"
}
] | Problems installing MongoDB on Ubuntu 20.04 | 2021-04-07T18:03:49.743Z | Problems installing MongoDB on Ubuntu 20.04 | 50,948 |
null | [
"node-js",
"swift",
"atlas-device-sync"
] | [
{
"code": "{\n \"$or\": [\n {\n \"userId\": \"%%user.id\"\n },\n {\n \"invitedUserIds\": \"%%user.id\"\n }\n ]\n}\n {\n \"userId\": \"%%user.id\"\n }\n{\n \"invitedUserIds\": \"%%user.id\"\n}\n",
"text": "Hi All,I apologize if this is in the wrong category. I am relatively new to MongoDB and coding in general. The app (using node.js and Swift Realm Drivers) that my partner and I are working on requires the collaborator permissions ability that is listed under the Mongo Documentation → Device Sync Permissions Guide → Dynamic collaboration.We have managed to implement it as described in the guide and it works when you have a single role with the $or function for the owner and collaborators, i.e.:However, what we found is that using this approach, you can only set the same permissions for collaborators and the owner, since they are under the same permissions set. When we try to create a second role with a separate set of permissions, i.e. one for owner and one for collaborator, and set them up such that one contains the owner read/write filter and the other contains the collaborator read/write filter, it doesn’t seem to grant access for collaborators.Owner Role read/write filter:Collaborator Role read/write filter:Is this because Atlas is setup to interpret arrays of collaborators with that $or function? Is there a way that we can set owner permissions and also provide an array of collaborators in a separate role with separate permissions from the owner of the document?Thanks for any help in advance.",
"username": "Texpert"
},
{
"code": "invitedUserIdsuserId{\n \"name\": \"collaborator\",\n \"apply_when\": { \"%%user.custom_data.type\": \"collaborator\" },\n \"document_filters\": {\n \"read\": { \"invitedUserIds\": \"%%user.id\" }, \n \"write\": { \"invitedUserIds\": \"%%user.id\" }\n },\n // other permissions \n}\n{\n \"name\": \"owner\",\n \"apply_when\": { \"%%user.custom_data.type\": \"owner\" },\n \"document_filters\": {\n \"read\": { \"userId\": \"%%user.id\" }, \n \"write\": { \"userId\": \"%%user.id\" }\n },\n // other permissions \n}\n",
"text": "Hi @Texpert , and welcome to the community!However, what we found is that using this approach, you can only set the same permissions for collaborators and the owner, since they are under the same permissions set. When we try to create a second role with a separate set of permissions, i.e. one for owner and one for collaborator, and set them up such that one contains the owner read/write filter and the other contains the collaborator read/write filter, it doesn’t seem to grant access for collaborators.Before going into my recommendation, I would first suggest taking a look at our docs on role order evaluation if you haven’t already. Do both roles have the same Apply When expression? If that’s the case, the set of permissions in the first matching role will always apply, regardless of whether a second matching role is defined.Is there a way that we can set owner permissions and also provide an array of collaborators in a separate role with separate permissions from the owner of the document?If I’m understanding your use case correctly, you’re looking to create two roles:I would recommend attempting to leverage custom user data in your role’s Apply When expression. TLDR custom user data is metadata you can store that is tied to your users in addition to the data that App Services stores by default. In this case, you could assign each user a type (either “collaborator” or “owner”), and store that as a field in the custom user data.Then, you could have a role for collaborators like:And one for user owners like:Let me know if that helps,\nJonathan",
"username": "Jonathan_Lee"
},
{
"code": "{\n \"userId\": \"%%user.id\"\n }\n{\n \"invitedUserIds\": \"%%user.id\"\n}\n",
"text": "Hi @Jonathan_Lee. Thanks for the detailed information! Apologies, I probably need to share a bit more information for this to make sense. We unfortunately can’t use the custom user table because the concept we want to create requires our permissions to be document based, not user based. In other words, a user can be the owner of their own documents and a collaborator on documents that other users own.Long story short: our app basically has a few different collections, but for the sake of simplicity, let’s use 2 collections and call one of the collections “main” and the second collection “invitations”. The workflow is as follows:Note: The permissions I’m referencing where we have the issue are applied to the main collection. No issues with invitations since both the owner and the invited user should always have access to this. Both collections use Device Sync/Realm between Node and Swift for real time synchronization.So, as mentioned, when we set it up exactly as described under a single role, it works, but we can only apply 1 set of permissions. When we set it up with 2 roles (ordered as you reference), where we set the owner first (0) and the collaborator permissions second (1), it does not pick up the second set of permissions, even if the user is not an owner of the document (meaning that it won’t trigger the second set of permissions). This is the setup we tried where it didn’t work (same apply where expression - blank).\nimage2050×1422 203 KB\nSetting this up for Role 0 - apply when blank, read/write filters as follows:Setting this up for Role 1 - apply when fields blank, read/write fields as follows:When we set them up separately, however (i.e. only having role 0 and testing with each individually… they work fine). The problem is when we do it with role 0 and 1, where 0 has full permissions as the owner, and the 1 has document level field permissions, search only, no insert, no delete, the documents in the “main” collection don’t return from Realm for the collaborators.I apologize if this isn’t so clear.TLDR: When using permissions with 2 roles (role order evaluation) with different permissions for owners/collaborators, collaborators can’t see the collection. Is this due to the specific field level permissions we set by chance? When setting them up individually OR with the $or method as described in the documentation, it works perfectly fine.Thanks again for your help. Much appreciated.",
"username": "Texpert"
},
{
"code": "{\n \"invitedUserIds\": \"%%user.id\"\n}\n",
"text": "@Jonathan_Lee - I tried implementing a match condition on Apply When that matches the same statement as my document filter:, but it doesn’t seem to work or evaluate the Apply When expression. I tried it with document filters set to true for read/write as well, but still it returns nothing. Is this Flexible Sync compatible?",
"username": "Texpert"
},
{
"code": "{\n \"userId\": \"%%user.id\"\n }\n{\n \"invitedUserIds\": \"%%user.id\"\n}\n{\n \"userId\": \"%%user.id\"\n}\n{\n \"name\": \"role\",\n \"apply_when\": {},\n \"document_filters\": {\n \"read\": { \n \"$or\": [\n { \"userId\": \"%%user.id\" },\n { \"invitedUserIds\": \"%%user.id\" }\n ]\n },\n \"write\": { \n \"$or\": [\n { \"userId\": \"%%user.id\" },\n { \"invitedUserIds\": \"%%user.id\" }\n ]\n },\n },\n \"search\": true, \n \"insert\": { \"userId\": \"%%user.id\" },\n \"delete\": { \"userId\": \"%%user.id\" },\n \"read\": true,\n \"write\": true\n}\n",
"text": "Hi,For usage in Flexible Sync, a role’s Apply When expression cannot reference document fields. The reason for this is because in Flexible Sync, roles are evaluated at session start (before any documents are looked at), as opposed to on a per-document basis. You can find more information about this behavior here.The issue with having an empty Apply When expression for both roles is that an empty Apply When expression always evaluates to true. Thus, due to the nature of role order evaluation, the permissions described by the first role will always come into play.Setting this up for Role 0 - apply when blank, read/write filters as follows:Setting this up for Role 1 - apply when fields blank, read/write fields as follows:When we set them up separately, however (i.e. only having role 0 and testing with each individually… they work fine). The problem is when we do it with role 0 and 1, where 0 has full permissions as the owner, and the 1 has document level field permissions, search only, no insert, no delete, the documents in the “main” collection don’t return from Realm for the collaborators.Thus, in this case, for any sync session, the permissions set described by role 0 would always be applied for the course of a sync session. For a user trying to sync, documents that matched the expression:would be visible to the user, which explains why the collaborators wouldn’t have been able to access even documents such that their ID appears in “invitedUserIds”.That’s the main reason why I was suggesting trying to incorporate custom user data into your permissions evaluation strategy (this is commonly how I’ve seen other apps setup Flexible Sync permissions in a more dynamic way). Also, if I understand correctly, if you don’t need restrictive permissions on the fields in the document, you could combine both roles into a single role like:(You can see the full JSON view of the role by clicking the “Advanced View” in the toggle in the roles viewer)As for why this only works if you don’t need field-level permissions – this is because dynamic expressions for field-level permissions is currently unsupported in Flexible Sync.Alternatively, if using the permissions system in a way that fits the needs of the application seems impossible, you could always edit your application logic to perform these kinds of checks separately.Let me know if that helps,\nJonathan",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "Hi @Jonathan_LeeThanks for the reply and clarification on this. Appreciate the full explanation. It makes sense why it is not working when apply when is empty and also unfortunately we do need field level permissions for collaborators, but full access to the document for our owners, both of which being document specific. I had a feeling that perhaps it wasn’t supported yet and we figured we could edit our application to create our desired behavior as a last resort, but we figured we’d try to see if we could restrict data access at the DB level, if at all possible. Do you know if there are plans for Mongo to support dynamic permissions w/ flexible sync and field-level restrictions in your roadmap? I think it would be pretty powerful for not just us, but perhaps many other apps.",
"username": "Texpert"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Collaborator Permissions | 2023-08-08T17:49:16.151Z | MongoDB Collaborator Permissions | 720 |
null | [
"replication",
"upgrading"
] | [
{
"code": "db.shutdownServer()",
"text": "Related Issue: [bitnami/mongodb] Replicaset Graceful shutdown · Issue #17432 · bitnami/charts · GitHub\nRelated Discussion: Operator: Detected unclean shutdown - mongod seems never to shutdown gracefully - #2 by Yilmaz_DurmazTL;DR: When restarting the primary node in k8s cluster, the replicaset will lose primary for a short period of time because of election. Same thing won’t happen if using db.shutdownServer() command. Based on the documentation, using SIGTERM should have the same effect (stepDown to a secondary and skip election) as command shutdown, while it is not the case in real world. Instance has plenty of time to wrap up in our test environment (gracefulShutdownPeriod extended).We would like to know if this design is intended? What’s the reason behind the behavior difference?Thanks!",
"username": "NeverBehave"
},
{
"code": "db.shutdowServer... \"ctx\":\"conn220026\",\"msg\":\"Received replSetStepUp request\"}\n... \"ctx\":\"SignalHandler\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":10000}}\n... \"ctx\":\"ReplCoord-21\",\"msg\":\"Heartbeat failed after max retries\",\"attr\":{\"target\":\"mongodb-0.mongodb-headless.default.svc.cluster.local:27017\",\"maxHeartbeatRetries\":2,\"error\":{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"Error connecting to mongodb-0.mongodb-headless.default.svc.cluster.local:27017 (100.96.5.193:27017) :: caused by :: Connection refused\"}}}\n",
"text": "I tried to check your github issue; I could read your logs but could not continue as it is too long for my current time slot.In short, my wild guess here would be about how k8s shuts down servers.As you already noted, SIGTERM causes “PRIMARY not found” error while db.shutdowServer does it gracefully.You may immeditely notice this line at the start of secondary log for shutdown command:This clearly indicates continuous communication between nodes.On the other hand, when SIGTERM used, primary says it is about to send a message to the cluster, but secondary starts failing to get heartbeat at the same interval.Your logs have a 5 seconds gap between these two lines, I wonder at what time secondary lost the heartbeat.Anyways, this leads me thinking that the “network connection” for the primary is closed before it can send that step down command to the cluster, hence no heartbeat to others. And closing the network is the job of the k8s.As you know, SIGTERM is part of forced shutdown commands, though it awaits the program to do cleanup. yet this does not tell anything about the rest of the system, especially for the network.Unfortunately I don’t have the setup to test this, so I hope you and others find better explanation.PS: you mention of v4.4 server. have you also tried with 5 or 6?",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thanks for the update! I what I could do right now is to check if preStop hook have different behavior than normal shutdown.At the same time I don’t have 5/6 version available since I bumped into this problem during my upgrade process. But since bitnami team could reproduce this problem, I believe they are using 5/6 since those are the supported version.",
"username": "NeverBehave"
},
{
"code": "",
"text": "In docker containers, once the process that is set to run at the start shutdowns itself, the container will also be terminated.preStop hook helps with this: it tells mongod to shutdown itself, it does so gracefully (unless timed out), and once mongod is stopped the container also shuts down itself.Though I hoped to be wrong, as you too mentioned in your github follow up post, it seems container’s network is cut off early. This might be a bug in k8s, or it might be set somewhere else in it so pod resources stay up until main container process exits/dies.Please keep us updated.",
"username": "Yilmaz_Durmaz"
}
] | Unexpected primary election when shutdown primary node in k8s update | 2023-08-18T01:09:15.677Z | Unexpected primary election when shutdown primary node in k8s update | 524 |
null | [] | [
{
"code": "",
"text": "Hey wanted to know from you all, that I have a simple get query with indexes set up, but the response time is really high, even though there are no documents in that collection to return, it returns really high response time of 300ms, how to lessen it, please give some ideas on it?",
"username": "Sahil_Anower"
},
{
"code": "explain()mongosh shell",
"text": "Hey @Sahil_Anower,Welcome to the MongoDB community!I have a simple get query with indexes set up, but the response time is really high, even though there are no documents in that collection to returnIn order to better understand the issue you are encountering could you please share some more information related to it, such as:With these specifics, we should be able to better assist you. Also, feel free to sanitize any sensitive field names or values.Look forward to hearing from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
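A quick sketch of how the requested details could be gathered in mongosh; the collection and field names below are placeholders, not taken from the original post:

// Winning plan, execution time, and keys/documents examined for the slow query
db.myCollection.find({ someIndexedField: "someValue" }).explain("executionStats")

// Confirm which indexes actually exist on the collection
db.myCollection.getIndexes()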
] | Need Help Optimizing Empty Query Latency | 2023-08-18T19:17:21.344Z | Need Help Optimizing Empty Query Latency | 424 |
null | [] | [
{
"code": "",
"text": "The chunk migration operation takes too long to work with the Balancer window in the production environment.So I’m thinking about always working through the “_waitForDelete setting”, but the manual says that the option is test option.May I ask your opinion about enabling this option in production use?\nI am using mongodb 4.2.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "By default, this value is false. Do you want to enable it ? (meaning the balancer will wait for the delete phase)",
"username": "Kobe_W"
},
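For reference, a hedged sketch of how _waitForDelete can be enabled: by updating the balancer settings document in the config database from mongosh connected to a mongos. Verify this against the documentation for your MongoDB version before relying on it in production.

// Make the balancer wait for the delete phase of each chunk migration
db.getSiblingDB("config").settings.updateOne(
  { _id: "balancer" },
  { $set: { _waitForDelete: true } },
  { upsert: true }
)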
{
"code": "",
"text": "I wonder if it is a setting that can be used in a production environment.\nI want to do a stable chunk migration in an environment where steady input and output exist.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Having the same question.\nIs enabling the “_waitForDelete” in production a valid use case?\nMy main concern about the regular async migration is:I’m using self managed storage for the DB and that the default setting is not freeing up space (since the migration takes a long time).oplog is filling up very fast because of the migration, which leads to loosing change-stream resume point.Thanks.",
"username": "Oded_Raiches"
}
] | About "_waitForDelete" in production | 2023-02-15T00:20:37.551Z | About “_waitForDelete” in production | 1,083 |
null | [
"java",
"atlas",
"storage"
] | [
{
"code": "db.createCollection(\"test\", {capped: false, storageEngine: {wiredTiger: {configString: \"block_compressor=snappy\"}}})\nMongoServerError: parameter storageEngine is disallowed in create command\n",
"text": "Hello,I’m trying to create a collection on a db inside an Atlas cluster (M0 sandbox, Mongo version 6.0.9) but I receive an error related to storageEngine parameter.I’m using MongoDB Java driver 3.12.11, this is what I receive:MongoCommandException: Command failed with error 8000 (AtlasError): ‘parameter storageEngine is disallowed in create command’I tried also from shell:and this is the response:According to documentation that parameter seems supported when creating a collection. Is this a bug on Atlas?Thanks!",
"username": "Filippo_Muscolino"
},
{
"code": "",
"text": "Hi @Filippo_MuscolinoSome commands are restricted or disallowed in the shared tiers and sometime in all tiers.All collections will be compressed with the snappy algorithm by default.",
"username": "chris"
}
] | Unable to create collection with storageEngine parameter | 2023-08-18T10:53:16.843Z | Unable to create collection with storageEngine parameter | 470 |
null | [
"aggregation",
"queries",
"python",
"compass",
"indexes"
] | [
{
"code": "",
"text": "When trying to create a regular index in one my collection using compass or using pymongo , I get this error → \nconnection 9 to 54.165.55.61:27017 closed\nand it fails every time I try\nthe collection contains around 2m documents and due not having this index any aggregate query using that field fails when I try to run it , and gives Auto reconnect errors",
"username": "Vatsal_Sharma"
},
{
"code": "",
"text": "connection 9 to 54.165.55.61:27017 closedany other useful info on why the connection is closed?do you see any pattern change on dashboards when you start creating that index?",
"username": "Kobe_W"
}
] | Connection timed out errors while creating index | 2023-08-19T15:15:04.122Z | Connection timed out errors while creating index | 493 |
null | [] | [
{
"code": " static const String username = Env.username;\n static const String secretKey = Env.secretKey;\n static const String mongoUrl = Env.mongoUrl;\n static const String port = Env.port;\n static const String dbName = 'MyDatabase';\n static const String connectionString = 'mongodb://$username:$secretKey@$mongoUrl:$port/$dbName/';\n\n // for requests\n static final db = Db(connectionString);\n\n // for scheduled tasks\n static final db2 = Db(connectionString);\n",
"text": "To open and close the DB connection in the requests is simple, but with scheduled tasks its more complex. Thats why I want to access the DB with two different connections. One is for requests and the other is for scheduled tasks.How can I achieve something like this? Are two connectionStrings with different values required?",
"username": "ce665209b5a480c9b680600b0c97aad"
},
{
"code": "",
"text": "Not sure which language this is, but generally you create a MongodbClient instance to interact with mongodb, and internally it uses a connection pool. You can of course create as many client instances as you like.To open and close the DB connection in the requests is simpleThis is done automatically, not manually by application code.",
"username": "Kobe_W"
}
] | Two connections to the same DB possible? | 2023-08-20T01:11:16.400Z | Two connections to the same DB possible? | 389 |
null | [
"aggregation",
"queries",
"atlas-search"
] | [
{
"code": "$bitsAllSet",
"text": "We want to introduce bitwise operations to atlas search for specific aggregation.Typically, I would like to use $bitsAllSet syntax, but atlas search doesn’t seem to support it directly.Is there any way to do bitwise operations on mongot directly without using storedSource ?",
"username": "wrb"
},
{
"code": "",
"text": "Hello, @wrb !It would be much easier to understand your problem and suggest some solution, if you:",
"username": "slava"
}
] | Can atlas search use bitwise operations? | 2023-08-17T07:30:38.655Z | Can atlas search use bitwise operations? | 412 |
[
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "I have tried to connect mongodb to REACT in a million different ways, using uri, putting the code straight on the ./app, doing the connection in another file and requiring it in the ./app page… nothing has worked.I believe I have the username and password just fine, along with the database name…Why am I keep getting these errors and not able to connect?snip2891×497 29.8 KB",
"username": "Edgar_Lindo"
},
{
"code": "const mongoose = require(\"mongoose\");\n\nconst connection =\n\n \"mongodb+srv://<myusernamehere>:<mypasswordhere>@cluster0.gcyyo.mongodb.net/<mydatabasehere>?retryWrites=true&w=majority\";\n\nmongoose\n\n .connect(connection, {\n\n useNewUrlParser: true,\n\n useUnifiedTopology: true,\n\n useFindAndModify: false,\n\n })\n\n .then(() => console.log(\"Database Connected Successfully\"))\n\n .catch((err) => console.log(err));\n",
"text": "I have tried tons of different codes… this is just one of them…",
"username": "Edgar_Lindo"
},
{
"code": "",
"text": "Did you get through? Im trying with Svelte 3.49 and Vite 3.1.0 using ES6 and I get the same error. Tried lots of “solutions”. None work. So frustrating how krappy web software is. I developed software for years on pc and most of the time it krapped out was human error. Web software is bug infested!",
"username": "Patrick_Kavanagh"
},
{
"code": "",
"text": "Has any solution been found for this issue?\nI am having the same problem now.",
"username": "Michel_Bouchet"
},
{
"code": "connectconnectmongoose",
"text": "Hello, everyone It seems, you guys are trying to connect use Mongoose on the frontend.From official Mongoose doc:“Mongoose’s browser library is very limited. The only use case it supports is validating documents as shown below.”That means it can not connect to a database, so no reason to use connect method on frontend.In order to use connect method, mongoose object has to be initialized on the backend application and used on the server-side. If this method is missing in your node js app, then you have a dependency problem. Make sure you install, import, use your dependencies and run your application correctly.Frontend app operates in browser and communicates with server, sending HTTP-requests to a server (backend app). The bankend then can use Mongoose to connect and do something with database and return result back to the frontend. Frontend does not interact with the database directly. At least, Mongoose is not the right tool for this.",
"username": "slava"
}
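To illustrate the point above, here is a minimal sketch of the backend side; Express, the model, and the connection string are all hypothetical placeholders, not taken from the posts. The browser only calls the HTTP endpoint, and only the Node.js server uses Mongoose.

// server.js - runs in Node.js, never in the browser
const express = require("express");
const mongoose = require("mongoose");

const app = express();
const User = mongoose.model("User", new mongoose.Schema({ name: String }));

// Connect once at startup; replace the placeholder URI with your own
mongoose.connect("mongodb+srv://<user>:<password>@<cluster>/<db>")
  .then(() => console.log("Database connected"))
  .catch((err) => console.error(err));

// The frontend (React, Svelte, etc.) fetches this endpoint over HTTP
app.get("/api/users", async (req, res) => {
  res.json(await User.find());
});

app.listen(3000);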
] | Mongoose connect is not a function... What is wrong? | 2022-02-09T18:10:11.840Z | Mongoose connect is not a function… What is wrong? | 15,048 |
|
null | [] | [
{
"code": "",
"text": "intent = [\n{\n“_id”: {\n“$oid”: “64d872cfb050fa5efbcd5501”\n},\n“name”: “Account_Opening_Documents”,\n“created”: {\n“$date”: “2023-06-19T12:22:01.755Z”\n},\n“updated”: {\n“$date”: “2023-08-14T08:10:17.081Z”\n},\n“examples”: [\n“64d9e169545a6aa086e2a119”,\n“64d9e169545a6aa086e2a11a”\n],\n“description”: “testing”\n}\n]example = [\n{\n“_id”: {\n“$oid”: “64d9e169545a6aa086e2a119”\n},\n“text”: “hello”,\n“created”: {\n“$date”: “2023-08-07T08:47:41.860Z”\n},\n“updated”: {\n“$date”: “2023-08-09T13:30:26.467Z”\n}\n},\n{\n“_id”: {\n“$oid”: “64d9e169545a6aa086e2a11a”\n},\n“text”: “What are the documents required in opening an account?”,\n“created”: {\n“$date”: “2023-06-19T12:22:01.755Z”\n},\n“updated”: {\n“$date”: “2023-06-19T12:22:01.755Z”\n}\n}\n]How could I join two collections for query in mongodb ?",
"username": "Md_Asif_Kabir_Emon"
},
{
"code": "",
"text": "Hello, @Md_Asif_Kabir_Emon and welcome to the MongoDB community! Blockquote\nHow could I join two collections for query in mongodb ?Well, in your case, I think $lookup stage of the aggregation pipeline is what you’re looking for.\nAdditionally, I’d recommend to read about data modelling in MongoDB - maybe you won’t need two collections and, therefore, merging them Also, it is better to format your code examples before posting. This way your topic will be easier to read and more people will be willing to help you.",
"username": "slava"
}
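To illustrate the $lookup suggestion, here is a hedged sketch based on the documents shown in the question, assuming the collections are named intent and example; note that intent.examples appears to hold string ids while example._id is an ObjectId, so a conversion step may be needed:

db.intent.aggregate([
  // If examples stores string ids, convert them to ObjectIds first
  { $addFields: { exampleIds: { $map: { input: "$examples", in: { $toObjectId: "$$this" } } } } },
  // Join each intent with its example documents
  {
    $lookup: {
      from: "example",
      localField: "exampleIds",
      foreignField: "_id",
      as: "exampleDocs"
    }
  }
])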
] | How to join two collections | 2023-08-19T03:41:35.677Z | How to join two collections | 458 |
null | [
"compass",
"server"
] | [
{
"code": "brew install [email protected]\nWarning: No available formula with the name \"[email protected]\". Did you mean [email protected], mongodb-community or [email protected]?\n==> Searching for similarly named formulae and casks...\n==> Formulae\nmongodb/brew/[email protected] mongodb/brew/mongodb-community ✔ mongodb/brew/[email protected]\n\nTo install mongodb/brew/[email protected], run:\n brew install mongodb/brew/[email protected]\n\n==> Casks\nmongodb-compass\n\nTo install mongodb-compass, run:\n brew install --cask mongodb-compass\nbrew tap mongodb/brewbrew update",
"text": "Hi there. My objective is to install [email protected] on MacOS. According to this guide here, it should be as simple as running:But this just gives the following error:I did try to resolve this with brew tap mongodb/brew and brew update - but to no help.",
"username": "Alex_Bjorlig"
},
{
"code": "if Hardware::CPU.intel?\n url \"https://fastdl.mongodb.org/osx/mongodb-macos-x86_64-6.0.6.tgz\"\n sha256 \"61ea12cc65bc47b3ed87e550147eb0b9a513db812165a4b5d1a5841d68280045\"\n else\n url \"https://fastdl.mongodb.org/osx/mongodb-macos-arm64-6.0.6.tgz\"\n sha256 \"a3ddf886901c59f185cc232282a0dcfa358d14cf48cb49d72b638d87df8eefd9\"\n end\n",
"text": "Hi, I also meet this trouble currently. It seems like the official Homebrew formula for MongoDB Community has no upgrade for Version 7.0 currently and still been pending on Verison 6.0.6:",
"username": "CG_YANG"
},
{
"code": "",
"text": "Hey @Alex_Bjorlig/@CG_YANG,Thanks for reaching out to MongoDB Community forums!Please allow us some time to cross-check this, and we will provide you with an update.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Just spotted this too. Seems the docs have been updated but the new version hasn’t been setup in the homebrew tap.",
"username": "Dwight_Gunning"
},
{
"code": "",
"text": "Hi, there is definitely a problem.I did the installation of v6 a few days ago without any issues.\nNow I’m trying to install v7 and facing this problem as Alex mentioned.Hardware: Macbook Air M2, latest OS.",
"username": "landsman"
},
{
"code": "brew install [email protected]\n",
"text": "Hey everyone,We have updated the “homebrew tap” to support 7.0. We hope you can now download the latest MongoDB version using homebrew.In case of any further questions, feel free to open a new thread.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Installing [email protected] on MacOS gives error | 2023-08-17T08:43:53.740Z | Installing [email protected] on MacOS gives error | 1,705 |
null | [] | [
{
"code": "",
"text": "Hi Team,I have two collections for example Collection A and B, B collection have business_id column(integer), I want to update A collection Object_id to B collection business_id based on A collection id field and B collection business_id? How to do this?",
"username": "KRISHNAKUMAR_K"
},
{
"code": "",
"text": "I do not understand your use-case very well.Please post sample documents from both collections and the desired results.Do you want to store back the result permanently?",
"username": "steevej"
},
{
"code": "",
"text": "I have two collections for example Collection A and B, B collection have business_id column(integer), I want to update A collection Object_id to B collection business_id based on A collection id field and B collection business_id? How to do this?Simple @steevej I need simple update command.In the above example i have two collections. One collection name is users and another one collection name is user_business.Users collections have object_id example {“_id”: ObjectId(“64a6b0f5af3af7f59a05ba33”) and i have business_id (integer) column in user_business table.I want to update Users collection Object_id value to User_businesses collection business_id (integer) based on where condition source_collection.id = target_collection.id.",
"username": "KRISHNAKUMAR_K"
},
{
"code": "var usersCollection = db.getCollection(\"users\");\nvar userBusinessCollection = db.getCollection(\"user_businesses\");\n\nusersCollection.find({}).forEach(function(userDoc) {\n var businessDoc = userBusinessCollection.findOne({ business_id: userDoc.business_id });\n if (businessDoc) {\n usersCollection.updateOne({ _id: userDoc._id }, { $set: { object_id: businessDoc.business_id } });\n }\n});\n\n",
"text": "I have used below script but not working,",
"username": "KRISHNAKUMAR_K"
},
{
"code": "",
"text": "You should check the result of updateOne().Is you updateOne really executed? May be it is never executed because businessDoc is always null.Does userDoc really have a field named business_id? We do not know for sure because we do not have sample data.Is your code matching any document? If not the query part is wrong.Is userDoc updated with the wrong value? If so the the $set part is wrong.This where it is important to supplysample documents from both collections and the desired results.What I suspect is that businessDoc is always null.What I do not understand is why do you do this update? From what I see userDoc already has business_id, so why duplicate this value in a new field object_id?",
"username": "steevej"
},
{
"code": "",
"text": "Business id column have integer value like 1,2,5,…etc, so that i want User table object_id act as foreign key in user_business table business_id column.",
"username": "KRISHNAKUMAR_K"
},
{
"code": "",
"text": "The above query executed successfully without any errors, but values are not updated.",
"username": "KRISHNAKUMAR_K"
},
{
"code": "",
"text": "This is my point. You do not update when businessDoc is null soWhat I suspect is that businessDoc is always null.If this is the case, then the code will beexecuted successfully without any errors, but values are not updatedAlsoYou should check the result of updateOne()If businessDoc matchexs the query business_id:userDoc.business_id, the setting object_id to businessDoc.business_id will duplicate in userDoc the field business_id.It is hard to help you. If the update is not happening the the issue is your code or your data. Possible issues with the code have been identified but have not been addressed and despite my request for sample data you still prefer to describe it.",
"username": "steevej"
},
{
"code": "db.users.find().forEach(function(userDoc) {\n var businessDoc = db.user_businesses.findOne({ business_id: userDoc._id });\n if (businessDoc) {\n print(\"Matching user _id:\", userDoc._id, \"with business_id:\", businessDoc.business_id);\n }\n});\n",
"text": "Hi @steevej,I have checked matching records using below command.It’s fetching matching records and it’s working fine.",
"username": "KRISHNAKUMAR_K"
},
{
"code": "db.users.find().forEach(function(userDoc) {\n var matchingBusinessDoc = db.user_businesses.findOne({ business_id: userDoc.id });\n if (matchingBusinessDoc) {\n db.user_businesses.updateOne(\n { business_id: userDoc.id },\n { $set: { business_id: userDoc._id: ObjectId} }\n );\n }\n});\n",
"text": "When i am updating Users table UUID (Object_id) using below command getting errorIs this possible one table UUID to another table act as foreign key? if the above code is incorrect please give me correct one.In the above code getting syntax error. at { $set: { business_id: userDoc._id: ObjectId} }",
"username": "KRISHNAKUMAR_K"
},
{
"code": "{ business_id: userDoc.business_id }{ business_id: userDoc._id }{ business_id: userDoc._id: ObjectId}{ $set: { object_id: businessDoc.business_id } }\n",
"text": "It’s fetching matching records and it’s working fine.Good. We are going somewhere. That seems to confirmWhat I suspect is that businessDoc is always nullbecause the findOne query is not the same compared to the original code.original{ business_id: userDoc.business_id }and the one that works{ business_id: userDoc._id }As for the error about in{ business_id: userDoc._id: ObjectId}I would try the $set that you had in the original code",
"username": "steevej"
},
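Following the pointers above, a hedged sketch of what the corrected script might look like if the goal is to replace the integer business_id in user_businesses with the matching user's ObjectId; the field names id and business_id are taken from the thread and may need adjusting to the real schema:

db.users.find().forEach(function (userDoc) {
  db.user_businesses.updateMany(
    { business_id: userDoc.id },             // match on the existing integer id
    { $set: { business_id: userDoc._id } }   // store the user's ObjectId instead
  );
});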
{
"code": "",
"text": "Is this still an issue?Have you made any progress?If the issue is resolved please share the solution with the rest of us and mark one post as the solution. This would help future reader with similar issue.",
"username": "steevej"
},
{
"code": "",
"text": "Dear @william_worse,you should not have reply to my post with spam. It has been flagged to the crew. I have started following you. I have marked your other post as spam too.I hate spammers.",
"username": "steevej"
},
{
"code": "",
"text": "@KRISHNAKUMAR_K, please follow up on your posts. I have read most of your other threads and you let them die without providing the final solution you implemented. This behavior is detrimental to the usefulness of this forum. Other users will lose time reading your posts and their replies without knowing what works and what does not.",
"username": "steevej"
}
] | How to update object_id to cross collections? | 2023-08-09T11:24:50.112Z | How to update object_id to cross collections? | 935 |
null | [] | [
{
"code": "",
"text": "Hello Dear Community,I am facing this error :MongoDB shell version v4.4.14\nconnecting to : mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1I am on Ubuntu 16.04 and I am basically trying to encrypt data from client to server using TLS/SSL. So, I created a CA file and I am trying to connect to mongodb using that certificate with this command :mongo --tls --tlsCAFile rootCA.pem --host 127.0.0.1However even the command “mongo” gives me this error. I am new to Linux so I do not know how to verify that mongodb is on the right IP and port could you please show step by step ?",
"username": "Youssef_Kharoufi"
},
{
"code": "",
"text": "Is your mongod up and running on port 27017?\nps -ef|grep mongo or check from service status depending on your install method\nIs it running as service or manually started from command line?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hello, thank you for your response : when I run “ps -ef|grep mongo” I get this output :And I am sorry but I don’t know weather mongod is running as a service but I installed it manually.",
"username": "Youssef_Kharoufi"
},
{
"code": "",
"text": "Your mongod is up & running\nPlease show contents of your config file",
"username": "Ramachandra_Tummala"
},
{
"code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n tls :\n mode : requireTLS\n certificateKeyFile : /home/youssef/mongodb.pem\n CAFile : /home/youssef/rootCA.pem\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n",
"text": "This is the content of my mongod.conf file :Thank you for your time",
"username": "Youssef_Kharoufi"
},
{
"code": "port: 1687",
"text": "port: 1687It is running on port 1687\nSo connect by\nmongo --port 1687",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "No sorry I just changed the port to try out some tests. the actual port is 27017. And even when I specify port 1687 it gives me the same error",
"username": "Youssef_Kharoufi"
},
{
"code": "",
"text": "In this post : Error: couldn't connect to server 127.0.0.1:27017 - #4 by Stephen_Fuller it says I need to create some directories because me too when I run the command : \" netstat -an | grep 27017` \" I don’t get any output. But I don’t know what I have to do since it is not stated clearly in the other post",
"username": "Youssef_Kharoufi"
},
{
"code": "",
"text": "Check your mongod.log for more errors\nCd to your dbpath dir if it exists no need to create first\nThe command i gave is to connect assuming your mongod is running with no acces control\nSince you say your mongod runs on port 27017 please check step by step first whether you can connect without security parms enabled\nShutdown your mongod edit config file and comment the security params.Start mongod and try to connect issuing just mongo\nIf you can connect then add back tls and other params and investigate why it is not working",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "This is the actual error I am getting from the logs :“msg”:“Error receiving request from client. Ending connection from remote”,“attr”:{“error”:{“code”:141,“codeName”:“SSLHandshakeFailed”,“errmsg”:“SSL handshake received but server is started without SSL support”},“remote”:“127.0.0.1:34766”,“connectionId”:4}}But how do I start the server with ssl support ?",
"username": "Youssef_Kharoufi"
},
{
"code": "",
"text": "See https://www.mongodb.com/docs/manual/reference/configuration-options/#net-options.If you are planning to manage your own server, I strongly recommend you take MongoDB Courses and Trainings | MongoDB University. Question like this one are covered.",
"username": "steevej"
},
{
"code": "",
"text": "I already took that course and although it mentions TLS encryption it doesn’t describe how to implement it ",
"username": "Youssef_Kharoufi"
},
{
"code": "",
"text": "please I stack same error\nmongo\nMongoDB shell version v4.4.23\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn’t connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nplease provide me with implement command\nThank you in advans",
"username": "chetan_c"
}
] | Error : couldn't connect to server 127.0.0.1:27017, connection attempt failed : Socket Exception | 2022-05-23T21:56:53.968Z | Error : couldn’t connect to server 127.0.0.1:27017, connection attempt failed : Socket Exception | 33,440 |
[
"compass"
] | [
{
"code": "",
"text": "Hi, I am facing the below error message while trying to connect the Database using MongoDb Shell connection string on my command line. Please can anyone help me with this error? Or explain where I went wrong?\nimage1385×723 36 KB\nAlso, a similar error message is displayed when I try to connect the database using the MongoDB compass",
"username": "Ashish_Jadhav"
},
{
"code": "",
"text": "The current version of mongosh is 1.10.5 or later. You might try upgrading.",
"username": "Jack_Woehr"
}
] | Error while trying to connect Database using MongoDB Shell connection string | 2023-08-19T10:20:30.623Z | Error while trying to connect Database using MongoDB Shell connection string | 380 |
|
null | [
"node-js",
"connecting",
"performance"
] | [
{
"code": "**That's really slow.**const MongoClient = require('mongodb').MongoClient;\nurl = `mongodb+srv://${process.env.mongo_user}:${process.env.mongo_password}@${process.env.mongo_host}/${process.env.mongo_db}?retryWrites=true&w=majority`\n\n MongoClient.connect(url,\n {\n useUnifiedTopology: true,\n useNewUrlParser: true\n }).then((client) => {\n\n console.log(\"mongo db conection success\");\n \n }).catch(err => {\n console.log(\"connection failure.... \", url);\n console.log(\"connection errored \", err);\n })\n",
"text": "Hello mongo gurus,I recently signed up for the mongo atlas to eliminate some of the operational challenges of my own hosted/self managed community edition.My app seems operational after changing the driver and connection string in uri (mongo+srv) format.However, it takes close to 25 seconds to establish the initial connection. **That's really slow.**Any help to improve the performance of the initial connection would be greatly appreciated.Here my stack.Node.js - v10.18.1driver - “mongodb”: \"^3.6connection codethank you,\nJag",
"username": "Jag_Shetty"
},
{
"code": "",
"text": "Hi Jag,I am having the same issue. However for me this started happening suddenly with no clear explanation. I am using nodemon for hot reloading. For months I have been working on an application that would connect to MongoDB using mongoose and every time there was a reload it would reconnect to Mongo within milliseconds. Now it has consistently been taking 20+ seconds to connect on every reload.I cannot figure out what caused this to start happening and how to fix it. If anyone has any advice, it would be greatly appreciated.",
"username": "Michael_S"
},
{
"code": "",
"text": "Same here, connection to database is very slow and await doc.save() is not finishing",
"username": "Tudor_Esan"
},
{
"code": "",
"text": "Can you test the connection with Compass and the shell to see if there are similar slow downs to the Node.js driver? This will eliminate the driver as the point of failure?",
"username": "Joe_Drumgoole"
},
{
"code": "",
"text": "I’m having the same issue. I have 2 pc, on the first one the connection take less than 1s, one the 2nd one, the connection take 20s … they are connected to the same network and using the same version of node. 2 days ago I had no issue.\nI have try to connect with mongoDB compass on windows 10 after I enter the url, the app is loading forever … (I’ve try to connect several times)",
"username": "Maxence_Fourrier"
},
{
"code": "",
"text": "Hi all,Hopefully this will help:",
"username": "Michael_S"
},
{
"code": "",
"text": "Hopefully this will help:Thanks, indeed using the url given when using the mongo shell seems to work.",
"username": "Maxence_Fourrier"
},
{
"code": "",
"text": "Try rebooting your router. This is what fixed the issue for me.When using SRV the driver initiates a lookup that resolves a single hostname to the actual names of the hosts, this is the step that was causing the issue for me. I still do not know why my router was interfering with the SRV lookup specifically - my internet connection was otherwise perfect. I would love if someone could elaborate on why rebooting my router resolved this issue?",
"username": "Michael_S"
},
{
"code": "",
"text": "As ridiculous as it sounds, rebooting my router did fix the issue for me. More specifically, DNS was messed up. After restarting my router and my Pihole docker container, DNS issues resolved (I hadn’t noticed having any anyway), and it no longer takes me 20 seconds to connect to a MongoDB Atlas instance.",
"username": "Russell_Weed"
},
{
"code": "",
"text": "I tried different things:What worked for me, was that I changed my network settings use Google Public DNS, instead of my ISP’s DNS servers (I have a cheap ASUS router). After that I am now connection to my MongoDB within 2-4 seconds, which is about 10 times faster than before.Here is how to do it: Get Started | Public DNS | Google Developers",
"username": "Lukas_Knudsen"
},
{
"code": "",
"text": "You don’t need to reboot your router none of that, you just need a VPN im using Urban VPN Desktop its free and its an easy fix",
"username": "Tupynamba_Lucas"
},
{
"code": "",
"text": "Every query, that I run in my application it is taking an extra overload of 250-300ms don’t know why! I have my mongodb and server both hosted on AWS. There are security groups monitoring the connection. Our both hosted ones are t2.medium. The query when I am running on Mongodb Compass, is taking 0ms, but the same query through application, 300ms. Please help resolve it.",
"username": "Sahil_Anower"
}
] | Slow connection to the server using NodeJS driver | 2020-12-27T21:17:07.178Z | Slow connection to the server using NodeJS driver | 18,356 |
null | [
"compass",
"vscode"
] | [
{
"code": "",
"text": "I have created a collection in VS-Code using thunder client but I am unable to view that collection in Mongodb compass",
"username": "Utkarsha_Nikam"
},
{
"code": "",
"text": "Hi @Utkarsha_Nikam\nDid you sign in and authenticate with the same user in Compass as you did in your code connection string? Did you insert any data to the collection or just create it?Can you share a code snippet you used in VS Code?",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Yes I have inserted data too. As you can see in SS that I am not getting inotebook collection instead of that data is getting add into ‘test DB’ under ‘user’ collection , but it should be under inotebook\nScreenshot (116)_LI1337×726 96.8 KB\n",
"username": "Utkarsha_Nikam"
},
{
"code": "",
"text": "vs code\n\nScreenshot (114)1366×768 80.8 KB\n",
"username": "Utkarsha_Nikam"
},
{
"code": "",
"text": "I see this is posting to /auth/createUser is this API Posted to the users collection or the inotebook collection?Can you show the code you are using for the API request to connect to MongoDB as well?",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "Thanks a lot for help , I got it actually I haven’t putted the right collection name while connecting to MongoDB",
"username": "Utkarsha_Nikam"
},
{
"code": "",
"text": "Awesome! Glad you got it fixed",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "I am also having same problem can you tell me how you have solved yours",
"username": "Saurabh_Khare"
},
{
"code": "",
"text": "It was syntax error , actually I had given wrong name for mongodb connection link. Kindly check it",
"username": "Utkarsha_Nikam"
},
{
"code": "",
"text": "Sorry but I am not getting it. Are you talking about mongoURI name in db.js.",
"username": "Saurabh_Khare"
},
{
"code": "",
"text": "Yes I am talking about link",
"username": "Utkarsha_Nikam"
},
{
"code": "",
"text": "\nimg1366×733 69.5 KB\n\nI think that I have imported the corrected link. If there is some problem please correct me.\nAnd I have used 0.0.0.0, as localhost was not working.",
"username": "Saurabh_Khare"
}
] | NOT able to view collections in Mongo Compasss | 2023-05-01T18:39:08.940Z | NOT able to view collections in Mongo Compasss | 1,373 |
null | [
"aggregation",
"java",
"indexes",
"atlas",
"spring-data-odm"
] | [
{
"code": "\n Executing aggregation: [{ \"$match\" : {}}, { \"$facet\" : { \"objects\" : [{ \"$sort\" : { \"popularity\" : -1}}, { \"$skip\" : 95510}, { \"$limit\" : 10}], \"countFacet\" : [{ \"$count\" : \"count\"}]}}, { \"$project\" : { \"objects\" : 1, \"totalCount\" : { \"$arrayElemAt\" : [\"$countFacet.count\", 0]}}}] in collection my_collection\n[Request processing failed: org.springframework.data.mongodb.UncategorizedMongoDbException: Command failed with error 292 (QueryExceededMemoryLimitNoDiskUseAllowed): 'PlanExecutor error during aggregation :: caused by :: Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.'mongodb+srv://<username>:<passowrd>@<hostname>/?retryWrites=true&w=majority",
"text": "Hello,\nI have a data set of around 90k records hosted on my Mongo Atlas Free tier account. I’m using Spring Boot with MongoTemplate to run paginated queries using Aggregation. One of my queries is like below:I’m trying to get the last couple of records when sorted by popularity descending. But I’m faced with the error:\n[Request processing failed: org.springframework.data.mongodb.UncategorizedMongoDbException: Command failed with error 292 (QueryExceededMemoryLimitNoDiskUseAllowed): 'PlanExecutor error during aggregation :: caused by :: Sort exceeded memory limit of 33554432 bytes, but did not opt in to external sorting. Aborting operation. Pass allowDiskUse:true to opt in.'I’ve tried the following workarounds:Even after trying all the above, I’m still not able to make this work. I have around 5 fields based on which I will sort the data. I have created individual indexes on all 5 fields and a composite index containing all the fields. But for some reason, the indexes doesn’t seem to work.Also, I make my connecting using the below string from Java :\nmongodb+srv://<username>:<passowrd>@<hostname>/?retryWrites=true&w=majorityPlease help me how to fix this as I’ve been breaking my head with this for over a week.",
"username": "Vishnu_Ramana"
},
{
"code": "",
"text": "When using the facets it will not make use of indexes.\nInstead of using using facets and skip, sort and filter using object id or similar.",
"username": "John_Sewell"
},
{
"code": "",
"text": "For pagination, I kind of need to use facets to get the total objects returned etc. Is there any other way I can modify the query to return the same response but also make use of the indexes?",
"username": "Vishnu_Ramana"
},
{
"code": "",
"text": "@Vishnu_Ramana Can you modify the aggregation operation to include the “allowDiskUse:true” option when invoking the aggregation.",
"username": "sandeep_s1"
},
{
"code": "{ \"$match\" : {}}",
"text": "You could try to $sort before the $facet.You could also eliminate your empty $match{ \"$match\" : {}}since it might throw off of the optimizer.",
"username": "steevej"
},
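For illustration, a sketch of that reordering - the $sort (and any real $match) moved in front of the $facet so the sort can be satisfied by an index on popularity, while the facet only paginates and counts. Field and stage values are taken from the pipeline above; allowDiskUse is just an extra safety net, not something the original query used.

```javascript
db.my_collection.aggregate([
  // A leading $sort can use an index on { popularity: -1 }.
  { $sort: { popularity: -1 } },
  {
    $facet: {
      objects: [{ $skip: 95510 }, { $limit: 10 }],
      countFacet: [{ $count: "count" }]
    }
  },
  { $project: { objects: 1, totalCount: { $arrayElemAt: ["$countFacet.count", 0] } } }
], { allowDiskUse: true });
```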
{
"code": "",
"text": "When using the facets it will not make use of indexes.Just to be more specific about the above.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for the more complete anwer Steeve, I was out and about so just popped in the reply!",
"username": "John_Sewell"
},
{
"code": "",
"text": "Thank you all for your help. I will try the suggestions and get back.",
"username": "Vishnu_Ramana"
},
{
"code": "",
"text": "Thanks, everyone. I was able to fix the issue by making 2 calls instead of one single call to MongodbThe main reason I used the facet was to make a single call to get the results from the criteria and also the total results for that specific criteria. However, since this did not make use of indexes, my query started failing when the amount of data to process in memory exceeded what MongoDB is set to by default.My resolution was to use 2 separate calls. One call for getting the results for the criteria which does not use facets and only has the limit sort and find criteria. Another call to only get the count of rows for that same criteria. Both the calls make use of indexes now so speed is not an issue. However, there are 2 round-trip calls to the application server and database which is unavoidable.I hope this helps someone who stumbles across this in the future ",
"username": "Vishnu_Ramana"
}
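A minimal sketch of that two-call approach using the plain Node.js driver (the poster used Spring Data, so the method names below are an assumption for illustration): one indexed find for the page of results and one countDocuments for the total.

```javascript
async function getPage(coll, filter, page, pageSize) {
  // Page of results: find + sort + skip + limit can all be served by an index
  // on the sort field (e.g. { popularity: -1 }) plus the filter fields.
  const objects = await coll.find(filter)
    .sort({ popularity: -1 })
    .skip(page * pageSize)
    .limit(pageSize)
    .toArray();

  // Second round trip: just the total number of matching documents.
  const totalCount = await coll.countDocuments(filter);

  return { objects, totalCount };
}
```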
] | Sort exceeded memory limit of 33554432 bytes error even after adding index | 2023-08-14T23:02:48.278Z | Sort exceeded memory limit of 33554432 bytes error even after adding index | 798 |
null | [
"node-js",
"data-modeling"
] | [
{
"code": "",
"text": "Hi there,I’m just starting out and experimenting with MongoDB, and I wanted to get an idea of whether I can improve what I have in mind. I’m working on a personal project for a pet shop e-commerce website, and I’ll have the following “tables”/documents: Animals, Products, and Users.My users can be connected to multiple animals, and my products can also be associated with several animals. So, when I display products to the user, I want to prioritize showing products related to the animals they own. The approach I came up with is as follows:User: {\nid\nAnimals[Object…]\n}Product: {\nid\nAnimals[Object…]\n}Animals:{\nid\nProducts[Object…]\nUser[Object…]\n}However, I’m a bit unsure if this is the best way to go about it. I’m not sure if there might be a more efficient way to perform the query I want. ",
"username": "Tutuacs_N_A"
},
{
"code": "animalsaminalsuseranimaldb.users.insertMany([\n {\n _id: 'U1',\n hasPets: ['cat', 'budgerigar']\n },\n {\n _id: 'U2',\n hasPets: ['dog']\n },\n]);\ndb.products.insertMany([\n {\n _id: 'P1',\n forPets: ['cat']\n },\n {\n _id: 'P2',\n forPets: ['dog', 'cat']\n },\n {\n _id: 'P3',\n forPets: ['budgerigar']\n }\n]);\npetsdb.products.aggregate([\n {\n $match: {\n forPets: {\n $in: ['dog', 'budgerigar']\n }\n }\n }\n]);\n[\n { _id: 'P2', forPets: [ 'dog', 'cat' ] },\n { _id: 'P3', forPets: [ 'budgerigar' ] }\n]\n{\n _id: 'U1',\n hasPets: ['cat', 'budgerigar'],\n petObjects: [\n {\n name: 'Cat 1',\n color: 'ginger'\n },\n {\n name: 'Cat 2',\n color: 'black'\n },\n {\n name: 'Budgerigar',\n color: 'green'\n }\n ],\n }\n",
"text": "Hello, @Tutuacs_N_A ! Welcome to the MongoDB community! If you do not plan to interact with animals documents often (to read or write updates), you can embed aminals object in user objects. Especially if that animal objects do not represent something abstract, like a general description of some parrot, but rather a pet, that belong only to some one single user in the database.So, you can model your data like this:Later, when a client application makes a request to your node.js application, during authentication/authorization process you fetch current user object along with the pets array. So, by the time corresponding endpoint is hit, you already know the pet list for which you need to fetch products:Output:If you animal objects has additional fields, you can store them as an array in a separate field, like this:Don’t worry about data normalization - for many use cases in MongoDB, the denormalized data model is optimal. Your’s seems to be exact same case .",
"username": "slava"
}
] | Little help with ecommerce data structure | 2023-08-18T17:11:35.452Z | Little help with ecommerce data structure | 411 |
null | [
"swift"
] | [
{
"code": "",
"text": "EmbeddedObjects seem like a good fit for my current use case as I have a schema something like:Parent: Object\n– [Child: EmbeddedObject]\n---- [SubChild: EmbeddedObject]If I .observe the parent object for changes using a NotificationToken I don’t ever pick up anything that happens when children or sub-children are modified. Any changes that occur directly to properties on the top-level parent object do result in a notification for changes however. Is this expected?Thanks!",
"username": "Mike_McNamara"
},
{
"code": "",
"text": "I should add that I’ve also tried to .observe the List that exists on the parent object and that indeed works. I suppose that this is probably a compromise that keeps Realm’s performance from degrading with deeply nested objects.Having an option to ignore this, ignore it to a certain nested level, or to have nested embedded objects of given types trigger notifications would be useful.",
"username": "Mike_McNamara"
},
{
"code": "",
"text": "@Mike_McNamara Did you ever find resolution to this problem? We are getting unwanted notifications for child objects, and are looking for ways to silence them… I’m imagining there might be an attribute on the property that could achieve this but haven’t found it yet.",
"username": "Philip_Hadley1"
},
{
"code": "",
"text": "@Philip_Hadley1to this problemCan you clarify what the ‘problem’ is?An embedded object is not a child object like you would have when the objects are discreet and have a forward relationship.An embedded object is just that, embedded, and actually part of the schema of the object and can be thought of as just another direct property of the object. It follows the object; for example, if the object is deleted, all of its embedded objects are deleted as well. If it’s copied, that copy will include all of the embedded objects.If an object is being observed, you are asking Realm to tell you about any changes to the properties of that object… and if an embedded object is changed within an object, the object is changed so the “modified” event fires.Is that not what you’re experiencing?Are you expecting some other behavior? If so, can you tell us a little about what you’re after so perhaps we can point you in the right direction?Including some code examples may help and consider crafting a separate question if this ‘problem’ isn’t actually what you’re asking about.Solutions would include changing your observer to be more granular with, for example, key-path observing of objects instead of object or collection observing.",
"username": "Jay"
},
{
"code": " [MapTo(\"photos\")]\n [Realms.Preserve]\n [WovenProperty]\n public IList<ShowroomPhoto> Photos\n {\n get\n {\n if (this.\\u003CPhotos\\u003Ek__BackingField == null)\n this.\\u003CPhotos\\u003Ek__BackingField = this.GetListValue<ShowroomPhoto>(\"photos\");\n return this.\\u003CPhotos\\u003Ek__BackingField;\n }\n }\n",
"text": "Hi, @Jay. This topic opened by @Mike_McNamara seemed similar to what we were seeing, but upon further research, probably not. Let me explain…In our .NET app (Xamarin iOS + Android), we are getting a notification about a change to the parent object when only a child object has changed. The relationship between parent and child is one-to-many. This is unexpected behavior, and we’re looking for a way to avoid getting those notifications. The parent object contains a List of child objects like this - see snippet below. When a ShowroomPhoto changes, we get a notification about the changed parent object.After some research, it looks like keypath filtering might be one way to solve the issue for us. However, this feature seems to be only available in your Swift SDK, but not in the .NET SDK that we are using:Add support for keypath filtering for notifications · Issue #1398 · realm/realm-dotnet (github.com)",
"username": "Philip_Hadley1"
},
{
"code": "",
"text": "This is unexpected behaviorI assume Photos are Embedded objects.If so, that’s actually expected behavior when it comes to Embedded Objects, but not when it comes to Objects.Realm enforces unique ownership constraints that treat each embedded object as nested data inside of a single, specific parent object. An embedded object inherits the lifecycle of its parent object and cannot exist as an independent Realm object. Realm automatically deletes embedded objects if their parent object is deleted or when overwritten by a new embedded object instance.That’s also the reason that cannot be directly queried - they can only be queried though the parent object.So while an embedded object ‘feels’ like it has a relationship to its parent, it is really just a property of the parent and reacts like any other property would in regard to firing events upon modification. Which is why I mentioned the behavior is expected.I (now) see you are using .Net so a more granular solution like key-value observing (as in Swift) is not available so another possibility is to store them as managed Realm Objects instead of Embedded Objects - that will give you the behavior you’re seeking as changes to those objects would not fire an event for an related object.",
"username": "Jay"
},
{
"code": "",
"text": "Actually, both the parent object (ShowroomVisit) and the child object (ShowroomPhoto) subclass RealmObject - not EmbeddedObject.",
"username": "Philip_Hadley1"
},
{
"code": "",
"text": "lol. Let’s start again!The title of the initial question and the question itself revolve around EmbeddedObjects.If you’re not using EmbeddedObjects then please open a separate question as all of the above info will not have any relevance to your question.Do that and we’ll take a look!",
"username": "Jay"
},
{
"code": "",
"text": "Ok, thanks. I created a new ticket:\nUnwanted notifications about changed objects in child collection - MongoDB Atlas App Services & Realm - MongoDB Developer Community Forums",
"username": "Philip_Hadley1"
}
] | Should changes on EmbeddedObjects bubble up as observable changes to parent objects? | 2022-06-24T22:18:40.755Z | Should changes on EmbeddedObjects bubble up as observable changes to parent objects? | 2,019 |
null | [
"aggregation",
"views"
] | [
{
"code": "[\n\t\t\t {$match:{ \"stats.dateLastLogin\":{ $gt:now().add(\"d\",-30) } }}\n\t\t\t ,{$lookup:{from:\"posts\", let:{member:\"$memberID\"}, as:\"posts\", pipeline:[\n\t\t\t\t {$match:{$expr:{$eq:[\"$member\",\"$$member\"]}}}\n\t\t\t\t,{$group:{_id:nullValue(), cnt:{$sum:1}, cntActive:{$sum:{$cond:[{$eq:[\"$status\",\"ACT\"]},1,0]}}}}\n\t\t\t]}}\n\t\t\t,{$unwind:{path:\"$posts\", preserveNullAndEmptyArrays:true}}\n\t\t\t,{$addFields:{\"stats.numPosts\":{$ifNull:[\"$posts.cnt\",0]}, \"stats.numPostsActive\":{$ifNull:[\"$posts.cntActive\",0]}}}\n\t\t\t,{$project:{\"posts\":0}}\n\t\t\t,{$lookup:{from:\"views\", let:{member:\"$memberID\"}, as:\"views\", pipeline:[\n\t\t\t\t {$match:{post:{$exists:1}, $expr:{$eq:[\"$member\",\"$$member\"]}}}\n\t\t\t\t ,{$group:{_id:nullValue(), cnt:{$sum:1}, last:{$max:{$toDate:\"$_id\"}}}}\n\t\t\t]}}\n\t\t\t,{$unwind:{path:\"$views\", preserveNullAndEmptyArrays:true}}\n\t\t\t,{$addFields:{\"stats.numPostViews\":{$ifNull:[\"$views.cnt\",0]}, \"stats.dateLastPostView\":{$ifNull:[\"$views.last\",\"$$REMOVE\"]}, \"stats.searchGeniusCandidate\":{$cond:{if:{$lt:[{$ifNull:[\"$views.last\",createDate(1972,9,4)]},createDate(2016,4,2)]},then:\"$$REMOVE\",else:true}}}}\n\t\t\t,{$project:{\"views\":0}}\n\t\t\t,{$project:{stats:1}}\n\t\t\t,{$merge:{into:\"members\", on:\"_id\"}}\n];\n",
"text": "I have a fairly simple merge aggregation that runs nightly to update some statistics in a users collection. The three collections are decent sized but not humongous. Since my Atlas cluster was updated to 6.0.8, the aggregation just runs forever (days) and I’ve been forced to kill it. I can reproduce the problem on two Atlas clusters running 6.0.8 and confirm the agg runs fine on my local develop machine on 6.0.6. Not even sure where to begin troubleshooting this. Anyone else seen issues with 6.0.8?",
"username": "Sean_Daniels"
},
{
"code": "db.collection.explain(\"executionStats\")db.collection.explain(\"executionStats\")\"size\"\"count\"\"avgObjSize\"collStats",
"text": "Hi @Sean_Daniels - Welcome to the community.It’s definitely interesting that this was able to be reproduced on two Atlas clusters running the same version. In saying so, I am wondering if you could provide the following details to help reproduce this on my end / troubleshoot the cause:Since my Atlas cluster was updated to 6.0.8, the aggregation just runs forever (days) and I’ve been forced to kill itconfirm the agg runs fine on my local develop machine on 6.0.6Please redact any sensitive and personal information before posting hereLook forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "db.collection.explain(\"executionStats\"){\n explainVersion: '1',\n stages: [\n {\n '$cursor': {\n queryPlanner: {\n namespace: 'dealstream.members',\n indexFilterSet: false,\n parsedQuery: {\n 'stats.dateLastLogin': { '$gt': ISODate(\"2023-06-24T00:00:00.000Z\") }db.collection.explain(\"executionStats\"){\n explainVersion: '1',\n stages: [\n {\n '$cursor': {\n queryPlanner: {\n namespace: 'dealstream.members',\n indexFilterSet: false,\n parsedQuery: {\n 'stats.dateLastLogin': { '$gt': ISODate(\"2023-06-24T00:00:00.000Z\") }\"size\"\"count\"\"avgObjSize\"collStats",
"text": "Hi Jason, thanks for the reply. Answers below:\n\n6.0.6: size, 1199718063; count, 700185; avgObjSize, 1713\n6.0.8: size, 1178152738; count, 693630, avgObjSize, 1698",
"username": "Sean_Daniels"
},
{
"code": "{\n _id: ObjectId(\"6454140325d01b11f53e5772\"),\n browserlang: 'en-us',\n currency: 'USD',\n dateCreated: ISODate(\"2023-05-04T20:22:27.670Z\"),\n dateTOU: ISODate(\"2023-05-04T20:22:27.654Z\"),\n dateUpdated: ISODate(\"2023-05-04T20:22:29.055Z\"),\n dismissChangelog: true,\n email: '[email protected]',\n ip: '96.67.22.213',{\n _id: ObjectId(\"644ff409dc978c52800bcda9\"),\n currency: 'USD',\n industry: {\n codetype: 'MN',\n ruleID: '3637A6DC6A3EF4DA0EC2B87A72E54E5C',\n code: '422',\n id: 422,\n tags: [ 152, 422 ],\n aggregator: 422{\n _id: ObjectId(\"6481dc273ed2fb4d633d0cb9\"),\n lang: 'en',\n member: '22914580-04E2-96CD-6E90ACA6AC93BD9C',\n memberType: 0,\n objectStatus: 'ACT',\n objectType: 5,\n post: ObjectId(\"644c37b0e96ccf1a31136af1\"),\n promoted: true,\n site: 'us'$matchnadja:dealstream sdaniels$ brew upgrade [email protected]\nWarning: mongodb/brew/mongodb-community 6.0.6 already installed\n",
"text": "Sample from “members”\nSample from “posts”\nSample from “views”\nIt took about 32 minutes. However, since I added the $match stage at the front of the pipeline (to try to get the aggregation working on a smaller dataset), it takes only 2 minutes or so on 6.0.6. Even with the $match stage it just hangs on 6.0.8.I have not yet, only because I do not believe 6.0.8 has been released yet to homebrew, which is how I install/update on my local machine:",
"username": "Sean_Daniels"
},
{
"code": "",
"text": "Thanks Sean - Going to take a look at the information provided and will update here if I notice anything that may be causing the hang.I have not yet, only because I do not believe 6.0.8 has been released yet to homebrew, which is how I install/update on my local machine:That is fair. If possible, can you try launch it direct from the unpacked download?Best regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "db.collection.explain(\"executionStats\")db.collection.explain(\"executionStats\")\"executionStats\"",
"text": "I inspected both of the explain outputs but it seems its using the default. Would you be able to provide the \"executionStats\" level output?Additionally, the 32 minutes you noted when it ran on 6.0.6 - Was this the local or Atlas instance?Just to cover some extra bases, what is the Atlas tier the aggregation is hanging on? I’m curious to know if you noticed any resource pressure when it’s run.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Additionally, the 32 minutes you noted when it ran on 6.0.6 - Was this the local or Atlas instance?Both.Just to cover some extra bases, what is the Atlas tier the aggregation is hanging on? I’m curious to know if you noticed any resource pressure when it’s run.It’s hanging on my dev instance and my production instance. Dev is M10 and Prod is M50.",
"username": "Sean_Daniels"
},
{
"code": "\"executionStats\"db.runCommand({aggregate:\"members\", pipeline:myPipeline, explain:true})",
"text": "I inspected both of the explain outputs but it seems its using the default. Would you be able to provide the \"executionStats\" level output?I’m not sure how to achieve this for an aggregation. To get the output I posted earlier I used db.runCommand({aggregate:\"members\", pipeline:myPipeline, explain:true})How would I modify the above to get the executionStats level output? Thanks.",
"username": "Sean_Daniels"
},
{
"code": "executionStatsdb.runCommand({explain:{aggregate:\"members\",pipeline:pipe, cursor:{}}, verbosity:\"executionStats\"})",
"text": "OK, I think I figured out the executionStats thing. I used db.runCommand({explain:{aggregate:\"members\",pipeline:pipe, cursor:{}}, verbosity:\"executionStats\"})I have updated the gist for 6.0.6 above accordingly. I am still waiting for the results on 6.0.8 (it appears to be HUNG?)",
"username": "Sean_Daniels"
},
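For reference, mongosh also has an explain helper that produces the same output as the runCommand form above; assuming pipe holds the pipeline, the equivalent call is:

```javascript
// Same result as the runCommand form: explain at "executionStats" verbosity.
db.members.explain("executionStats").aggregate(pipe);
```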
{
"code": "",
"text": "Yeah, I tried the explain multiple times on 6.0.8 and it just hangs. ",
"username": "Sean_Daniels"
},
{
"code": "memberspostsview",
"text": "Thanks @Sean_Daniels - I’ve run some tests and got it working on 6.0.8 but i’m now going to expand my test collections. Do you know how many documents inside the other 2 collections mentioned? I assume members collection is the below count:count, 700185In the meantime, i’ll generate posts and view collections with similar sizes but let me know the actual values for these 2 other collections.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks @Sean_Daniels - I’ve run some tests and got it working on 6.0.8 but i’m now going to expand my test collections. Do you know how many documents inside the other 2 collections mentioned?members: 700,635\nposts: 520,801\nviews: 76,613,887",
"username": "Sean_Daniels"
},
{
"code": "durationMillis{$match:{ \"stats.dateLastLogin\":{ $gt:now().add(\"d\",-30) } }}\n$match$match_id",
"text": "Hi @Sean_Daniels,Thanks for your patience.I wasn’t able to replicate this same behaviour on two local 6.0.6 and 6.0.8 instances. The pipelines ran (although they did take some time) and had the exact same execution stats (minus the durationMillis which did not vary in any significant amounts between the two versions).I do have one test I am curious to see the results of: On the 6.0.8 environment can you try changing this $match stage to match only a single document using an index? The easiest would probably to use a $match on the _id value of a single document and seeing if the aggregation completes or does it still hang?Regards,\nJason",
"username": "Jason_Tran"
},
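A sketch of that test - the date-range $match at the front of the pipeline swapped for an equality match on one _id (the value below is a placeholder), so the rest of the pipeline only ever touches a single member document:

```javascript
db.members.aggregate([
  // Placeholder _id: pick any real member _id from the collection.
  { $match: { _id: ObjectId("64b000000000000000000000") } },
  // ...the remaining stages of the original pipeline stay unchanged...
]);
```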
{
"code": "",
"text": "I wonder if this is related to the issue I’m getting here: Major performance hit for aggregation with lookup from 5.0.13 to 5.0.18",
"username": "Jean-Francois_Lebeau"
},
{
"code": "",
"text": "Difficult to say with the current information. One thing I can see in reference to it is that the CPU is choked for minutes but perhaps you can verify on the original post if it ever completes a long with the other previously requested information (if it doesn’t ever complete then you may not be able to extract the execution stats output from the later version on that thread). However, I would continue on that thread for that particular topic since it may be unrelated.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "A post was split to a new topic: Need Help Optimizing Empty Query Latency",
"username": "Kushagra_Kesav"
}
] | Aggregation Fails (hangs) Since Atlas upgrade from 6.0.6 to 6.0.8 | 2023-07-21T16:35:08.335Z | Aggregation Fails (hangs) Since Atlas upgrade from 6.0.6 to 6.0.8 | 1,177 |
null | [
"queries",
"golang"
] | [
{
"code": "sortingOptionsBson := bson.D{{\"priority\", -1},{\"_id\", 1}}\n// limit and offset values are passed from the arguments of the function where this code resides\n// the problem lies with SetSort()\nsortingOptionsBson := bson.D{{\"priority\", -1},{\"_id\", 1}}\nopts := options.Find().SetSort(sortingOptionsBson).SetLimit(o_limit).SetSkip(o_skip)\ncur, err = coll.Find(ctx, filter, opts)\n",
"text": "Hello Mongo Fam, need a solution to this, facing a blockerI’m using Golang. In my mongo collection, I have the “_id” which is object id key and an “priority” key which is an integer value. I’m applying the find query and sorting the data with {priority, -1}, {_id, 1} like shown below.My code would look something like thisSometimes, the sort would be done based on “_id” first and then “priority” which should not be done.\nI want to avoid this misordering and always want the priority sorting to happen first and _id sorting to happen second. What is the issue here and how can it be solved?",
"username": "Krishna_Chaitanya4"
},
{
"code": "",
"text": "See the original thread for the same question.",
"username": "steevej"
},
{
"code": "",
"text": "See my reply to the original thread here.",
"username": "Matt_Dale"
}
] | [URGENT NEED HELP] - Sorting Issue in Find Query - Golang | 2023-08-10T05:16:56.134Z | [URGENT NEED HELP] - Sorting Issue in Find Query - Golang | 621 |
null | [
"queries",
"golang"
] | [
{
"code": "",
"text": "Hello Mongo Fam,In my mongo collection, I have the “_id” which is object id key and an “priority” key which is an integer value. I’m applying the find query and sorting the data with {priority, -1}, {_id, 1}. But sometimes the “_id” sorting is happening first and then priority sorting where opposite should happen. The data is not coming as expected in these cases. The ordering of sorting is changing. Can you please give a solution to this?",
"username": "Krishna_Chaitanya4"
},
{
"code": "",
"text": "But sometimes the “_id” sorting is happening first and then priority sorting where opposite should happen.It would be a major flaw if that would really happen. Most likely you are misusing find. Please share the exact code you are using.",
"username": "steevej"
},
{
"code": "opts := options.Find().SetSort(sortingOptionsBson).SetLimit(o_limit).SetSkip(o_skip)\ncur, err = coll.Find(ctx, filter, opts)\n// o_limit, o_skip have no issues. They are present above this line.\nFor sortingOptionsBson it would be like \nsortingOptionsBson := bson.D{{\"priority\", -1},{\"_id\", 1}}\n",
"text": "These are the inputs to the Find query. Can you suggest an alternate to this so that the priority sorting happens first and id sorting happens second all the time?",
"username": "Krishna_Chaitanya4"
},
{
"code": "bson.D{{\"priority\", -1},{\"_id\", 1}}",
"text": "Please tag your thread with the programming language you are using.I am not familiar with the syntaxbson.D{{\"priority\", -1},{\"_id\", 1}}In JSON, it should be { “priority” : -1 , “_id” : 1 }. I do not know if the above code produce the correct object.How is options variable defined?How is ctx variable defined?Are you sure coll always refer to the collection that has the index?",
"username": "steevej"
},
{
"code": "opts := options.Find().SetSort(sortingOptionsBson).SetLimit(o_limit).SetSkip(o_skip)\n\nctx is passed from the function parameters where this code resides in \n",
"text": "I’m using GolangOptions is defined above hereYes I’m sure, coll represents the collection that has the indexHope this can clear those queries",
"username": "Krishna_Chaitanya4"
},
{
"code": "opts := options...",
"text": "Options is defined above hereFrom the little I understand, the codeopts := options...sets the variable opts from an expression that starts with the global variables options. We need to know how options is defined. May be someone with golang driver expertise knows that options is part of the driver. Hopefully this someone car carry on.",
"username": "steevej"
},
{
"code": "\"go.mongodb.org/mongo-driver/bson\"\n\"go.mongodb.org/mongo-driver/bson/primitive\"\n\"go.mongodb.org/mongo-driver/mongo\"\n\"go.mongodb.org/mongo-driver/mongo/options\"\n",
"text": "These are the mongo-drivers that I’m using",
"username": "Krishna_Chaitanya4"
},
{
"code": "",
"text": "What would help at this point is that if you could share the explain plan of your query.",
"username": "steevej"
},
{
"code": "sortingOptionsBson := bson.D{{\"priority\", -1},{\"_id\", 1}}\n// limit and offset values are passed from the arguments of the function where this code resides\n// the problem lies with SetSort()\nsortingOptionsBson := bson.D{{\"priority\", -1},{\"_id\", 1}}\nopts := options.Find().SetSort(sortingOptionsBson).SetLimit(o_limit).SetSkip(o_skip)\ncur, err = coll.Find(ctx, filter, opts)\n",
"text": "I’m using Golang. In my mongo collection, I have the “_id” which is object id key and an “priority” key which is an integer value. I’m applying the find query and sorting the data with {priority, -1}, {_id, 1} like shown below.My code would look something like thisSometimes, the sort would be done based on “_id” first and then “priority” which should not be done.\nI want to avoid this misordering and always want the priority sorting to happen first and _id sorting to happen second. What is the issue here and how can it be solved?",
"username": "Krishna_Chaitanya4"
},
{
"code": "bson.D{{\"priority\", -1},{\"_id\", 1}}{ \"priority\" : -1 , \"_id\" : 1 }",
"text": "I want to reiterate:share the explain plan of your queryWe could also use sample documents that are wrongly sorted. It could also be useful to get sample documents that are correctly sorted.Are the wrongly sorted documents reside on the same server as the one correctly sorted?Could anyone with golang experience confirm that the codebson.D{{\"priority\", -1},{\"_id\", 1}}generates a document equivalent to the JSON:{ \"priority\" : -1 , \"_id\" : 1 }",
"username": "steevej"
},
{
"code": "",
"text": "SO on how to run an explain with GO:",
"username": "John_Sewell"
},
{
"code": "bson.M{ \"priority\" : -1 , \"_id\" : 1}\n",
"text": "I saw a different syntax for bson in Help golang Aggregate QueryAnd like I wrote I do not know golang but I feel thatmight be the correct way to specify the sort you want.",
"username": "steevej"
},
{
"code": "bson.D{{\"priority\", -1},{\"_id\", 1}}func main() {\n\tclient, err := mongo.Connect(\n\t\tcontext.Background(),\n\t\toptions.Client().ApplyURI(\"mongodb://localhost:27017/\"))\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\tdefer client.Disconnect(context.Background())\n\n\tcoll := client.Database(\"test\").Collection(\"priority_order\")\n\n\t// Drop the collection so we start with an empty collection.\n\tif err := coll.Drop(context.Background()); err != nil {\n\t\tpanic(err)\n\t}\n\n\t// Insert 20 docs with an integer \"priority\" field.\n\tvar docs []any\n\tfor i := 0; i < 20; i++ {\n\t\tdocs = append(docs, bson.D{{\"priority\", i}})\n\t}\n\t_, err = coll.InsertMany(context.Background(), docs)\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\t// Find the documents in reverse priority order, 2 at a time.\n\tfor i := 0; i < 10; i++ {\n\t\topts := options.Find().\n\t\t\tSetSort(bson.D{{\"priority\", -1}, {\"_id\", 1}}).\n\t\t\tSetLimit(2).\n\t\t\tSetSkip(int64(i * 2))\n\t\tcur, err := coll.Find(context.Background(), bson.D{}, opts)\n\t\tif err != nil {\n\t\t\tpanic(err)\n\t\t}\n\n\t\tfor cur.Next(context.Background()) {\n\t\t\tfmt.Println(cur.Current)\n\t\t}\n\t\tif err := cur.Err(); err != nil {\n\t\t\tpanic(err)\n\t\t}\n\t}\n}\n\n// Output:\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d5928a\"},\"priority\": {\"$numberInt\":\"19\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59289\"},\"priority\": {\"$numberInt\":\"18\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59288\"},\"priority\": {\"$numberInt\":\"17\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59287\"},\"priority\": {\"$numberInt\":\"16\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59286\"},\"priority\": {\"$numberInt\":\"15\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59285\"},\"priority\": {\"$numberInt\":\"14\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59284\"},\"priority\": {\"$numberInt\":\"13\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59283\"},\"priority\": {\"$numberInt\":\"12\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59282\"},\"priority\": {\"$numberInt\":\"11\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59281\"},\"priority\": {\"$numberInt\":\"10\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59280\"},\"priority\": {\"$numberInt\":\"9\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d5927f\"},\"priority\": {\"$numberInt\":\"8\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d5927e\"},\"priority\": {\"$numberInt\":\"7\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d5927d\"},\"priority\": {\"$numberInt\":\"6\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d5927c\"},\"priority\": {\"$numberInt\":\"5\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d5927b\"},\"priority\": {\"$numberInt\":\"4\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d5927a\"},\"priority\": {\"$numberInt\":\"3\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59279\"},\"priority\": {\"$numberInt\":\"2\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59278\"},\"priority\": {\"$numberInt\":\"1\"}}\n// {\"_id\": {\"$oid\":\"64dfadfed96b783d87d59277\"},\"priority\": {\"$numberInt\":\"0\"}}\n",
"text": "@Krishna_Chaitanya4 It’s not immediately clear what could be going wrong. The sort document looks correctbson.D{{\"priority\", -1},{\"_id\", 1}}and the rest of the code you posted seems correct.I have some questions to help troubleshoot:I attempted to reproduce the issue you described using MongoDB 6.0 and Go Driver v1.12.1, but never got out-of-order results. Here’s the code I used to attempt to reproduce the issue:",
"username": "Matt_Dale"
}
] | [NEED HELP] - Problem with sorting in Mongo Find query | 2023-08-09T12:27:03.139Z | [NEED HELP] - Problem with sorting in Mongo Find query | 1,306 |
null | [
"golang",
"time-series"
] | [
{
"code": "timeseries: {\n timeField: \"timestamp\",\n granularity: \"seconds\"\n}\ntimeFieldmetaField\tfilter := bson.D{{\n\t\tKey: \"timestamp\",\n\t\tValue: bson.D{\n\t\t\t{\n\t\t\t\tKey: \"$gte\",\n\t\t\t\tValue: timestamp_start,\n\t\t\t},\n\t\t\t{\n\t\t\t\tKey: \"$lt\",\n\t\t\t\tValue: timestamp_end,\n\t\t\t},\n\t\t},\n\t}}\n\tresult, err := coll.DeleteMany(context.TODO(), filter)\nCannot perform an update or delete on a time-series collection when querying on a field that is not the metaField",
"text": "I have a simple Time Series collection created with the following parameters:The documents contain data about the market price of some asset. Due to a change on how the price is calculated, I need to delete the documents between two dates, before re-inserting the documents with corrections.But according to the doc, delete commands on Time Series collections have the following limitation:The query may only match on metaField field values.I am new to MongoDB so I’m probably missing something here, but I do not understand this limitation:In case it helps to answer my question, this is how I wanted to do this operation, using the following Go code:This fails with the following error: Cannot perform an update or delete on a time-series collection when querying on a field that is not the metaField.",
"username": "x8X5HXVGK"
},
{
"code": "metaFieldmetaFieldmetaFieldmetaFieldmetaField",
"text": "@x8X5HXVGK thanks for the question!I’m a bit fuzzy on the best practices of time series collections in MongoDB, but I think your intuition about adding the document timestamp to the metaField affecting performance is correct. My understanding is that MongoDB uses the values in the metaField to create data “buckets” that are optimized for querying across large and variable time ranges. If you have too many distinct values in the metaField, your buckets will become tiny and performance will probably suffer.It might be possible to put a low-granularity time value in the metaField, maybe a “day stamp”, that would let you delete and replace buckets of data without reloading the entire collection. I highly recommend testing the performance impact of adding any timestamp value to the metaField before reyling on it, though.",
"username": "Matt_Dale"
},
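A sketch of that low-granularity "day stamp" idea. The metaField of an existing collection cannot be changed, so this assumes a new collection is created; the collection and field names are made up for illustration.

```javascript
// Time series collection whose metaField carries the symbol plus a day stamp.
db.createCollection("prices", {
  timeseries: { timeField: "timestamp", metaField: "meta", granularity: "seconds" }
});

db.prices.insertOne({
  timestamp: new Date("2022-03-01T10:00:00Z"),
  meta: { symbol: "ABC", day: "2022-03-01" },
  price: 10.5
});

// Before MongoDB 7.0, deletes may only match on metaField values, so one
// symbol's data for one day can be removed and re-inserted like this:
db.prices.deleteMany({ "meta.symbol": "ABC", "meta.day": "2022-03-01" });
```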
{
"code": "metaFieldmetaFieldmetaFieldtimeField",
"text": "Hi @Matt_Dale, thank you for your response.With your response, I think I understand better how Time Series collections work. My use case is very similar to the example in the doc about the stock data. Using this example, let’s say I have a Time Series collection containing the prices of various assets, where each asset is identified by its ticker symbol. The metaField for this collection would contain that ticker symbol.Now, if it is discovered that prices for one asset are incorrect over a period of time, say last week, there is no way to delete or update just the data from that period, I would need to delete and re-insert the entire data for this asset?Your idea to add a less granular value of the time as part of the metaField would help, but metaField is defined when the collection is created, and cannot be modified later on, so I can’t use this now.This limitation of not being able to use the timeField for a delete operation on a Time Series is very surprising.",
"username": "x8X5HXVGK"
},
{
"code": "",
"text": "I’ve ran into the same problem, im deleting all the info concerning the symbol that contains duplicates or corrupted data and then re-inserting the modified version. ",
"username": "Raoul_Boulos"
},
{
"code": "",
"text": "I have the same problem. Does anyone knows if this feature is included in MongoDB 7. And how deteriorates the performance when including the time variable in the metadata?",
"username": "Rodrigo_Vazquez"
},
{
"code": "metaFieldmetaField",
"text": "@Rodrigo_Vazquez according to the current MongoDB 7.0 time series documentation, deletes in time series collections can now match on any field, not just metaField values. I believe it’s still not recommended to include a timestamp in the metaField. Keep in mind that MongoDB 7.0 is still pre-release, so it’s possible the documented behavior will change with the stable release.",
"username": "Matt_Dale"
},
{
"code": "",
"text": "Yes, I am waiting for the final version to be released. I was checking for a release date but I didn´t find any.",
"username": "Rodrigo_Vazquez"
},
{
"code": "",
"text": "@Rodrigo_Vazquez the the final MongoDB 7.0 release is now available!",
"username": "Matt_Dale"
}
] | How can I delete the documents between two dates from a Time Series collection? | 2022-03-02T14:54:00.925Z | How can I delete the documents between two dates from a Time Series collection? | 7,848 |
null | [
"node-js",
"crud"
] | [
{
"code": "const classesSchema = new Schema(\n {\n name:{type: String, required: true},\n students: [{\n // other fields go here\n attendance:[{\n date:{type: Date},\n tHours:{type: Number},\n aHours:{type: Number}\n }],\n }]\n }, \n {\n timestamps: true\n }\n)\napp.post(\"/api/classes/:id/students/attendance\", async(req, res) => {\n try {\n req.body.forEach((element, index) => {\n const attendanceObj = {\n date: element.date,\n tHours: element.tHours,\n aHours: element.aHours\n }\n classModel.updateOne(\n {_id: req.params.id}, \n { $push: { \"students.attendance\": attendanceObj } }\n );\n res.json(\"A new attendance record has been added successfully!\");\n \n });\n } catch (error) {\n res.status(500).json({ error: \"An error occured while adding a record!\"});\n }\n});\n",
"text": "I recently started learning about the MERN stack, and in a project, I’m trying to add attendance records for every student from the same class.\nMy schema looks like this To make it clear, imagine there are two students in the “students” array, I want for example to add every item from this array [{date: “2023-08-11”, tHours: 2, aHours; 1}, { “2023-08-11”, tHours: 2, aHours; 0}] to the student which has the same index.I tried something like this but it didn’t work :",
"username": "FKHARZ_El_mokhtar"
},
{
"code": "res.json()try {\n for (const element of req.body ) {\n const attendanceObj = {\n date: element.date,\n tHours: element.tHours,\n aHours: element.aHours\n }\n await classModel.updateOne(\n {_id: req.params.id}, \n { $push: { \"students.attendance\": attendanceObj } }\n );\n\n \n });\n res.json(\"A new attendance record has been added successfully!\");\n } catch (error) {\n res.status(500).json({ error: \"An error occured while adding a record!\"});\n }\n});\n",
"text": "I don’t know exactly what is your expectation about this request but you are calling\nres.json() inside a foreach. In this case your foreach just will run the first element and make a response to request according express docs.Sends a JSON response. This method sends a response (with the correct content-type) that is the parameter converted to a JSON string using JSON.stringify().\nThe parameter can be any JSON type, including object, array, string, Boolean, number, or null, and you can also use it to convert other values to JSON.Maybe just make your foreach sync operation using await before updateOne and on forEach and making the response only after the foreach do all updates successfully you finally can make a response.Sometihing like:",
"username": "Jennysson_Junior"
},
{
"code": "app.post(\"/api/classes/:name/students/attendance\", async(req, res) => {\n try {\n const data = await classModel.findOne({ name: req.params.name })\n data.students.forEach((student) => {\n student.attendance.push({date: req.body.date, tHours: req.body.tHours, aHours: req.body[`${student._id}`]})\n })\n await data.save()\n res.json(\"A new attendance record has been added successfully!\");\n } catch (error) {\n res.status(500).json({ error: \"An error occured while adding an attendance record!\"});\n }\n});\n",
"text": "Thank you for replying\nYeah, I tried this way before but I don’t know why it didn’t work.\nOn the other hand, this one worked for me :",
"username": "FKHARZ_El_mokhtar"
}
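For anyone who prefers a single update instead of loading and re-saving the whole document, the filtered positional operator can push into one student's nested array directly. This is only a sketch against the schema above; the collection name and the two ids are assumptions.

```javascript
// Placeholders: the class document's _id and the target student's _id.
const classId = ObjectId("64d000000000000000000001");
const studentId = ObjectId("64d000000000000000000002");

db.classes.updateOne(
  { _id: classId },
  { $push: { "students.$[s].attendance": { date: new Date("2023-08-11"), tHours: 2, aHours: 1 } } },
  // arrayFilters decides which element of "students" the $[s] placeholder matches.
  { arrayFilters: [{ "s._id": studentId }] }
);
```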
] | How to update many nested arrays within the same document | 2023-08-16T00:11:25.985Z | How to update many nested arrays within the same document | 487 |
null | [
"queries"
] | [
{
"code": "",
"text": "I’m facing mongoDB community edition connection failed issue with my AWS Glue Crawler, Can Anyone help me in this?Looking forward to your response. Thank you in Advance!",
"username": "Anuj_Pratap"
},
{
"code": "",
"text": "Hello @Anuj_Pratap,Welcome to the MongoDB Community forums I’m facing MongoDB community edition connection failed issue with my AWS Glue Crawler,Could you please share some specific details, such as the error message you are seeing, the version of MongoDB you are using, and where it is deployed? Additionally, can you confirm if your AWS Glue connection is successful?Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hi @Kushagra_Kesav\nI have MongoDB on my EC2 Instance. MongoDB version: db version v6.0.3. I want to connect that MongoDB with AWS Glue Crawler. AWS EC2 Instance and AWS Glue Crawler both are in same VPC and Subnet.It showing me as connection failed error.",
"username": "Anuj_Pratap"
},
{
"code": "",
"text": "Hello @Anuj_Pratap,It showed me a connection failed error.It seems like there may be a configuration issue on the AWS end. However, I found a blog that provides an overview of how to utilize the AWS Glue crawler with MongoDB Atlas, (fully-managed cloud database). Please refer to the blog for more details: Introducing MongoDB Atlas Metadata Collection with AWS Glue Crawlers.Additionally, I suggest opening a similar thread with relevant details at the AWS community (https://repost.aws/), as they have expertise in AWS Glue Crawler and may be able to provide you with a relevant solution.Please let us know how it goes, and feel free to reach out if you have any other questions or feedback.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Hello @Kushagra_Kesav\nI’m not using MongoDB Atlas here, I’m using community version, so please guide me accordingly.",
"username": "Anuj_Pratap"
},
{
"code": "",
"text": "Hi we are also having the same problem, did you find a solution?",
"username": "Kay_Khan"
}
] | AWS Glue Crawler | 2023-05-08T06:16:11.105Z | AWS Glue Crawler | 1,328 |
null | [
"queries",
"atlas-cluster"
] | [
{
"code": "db.test.insertOne({a:1,b:1,c:1})\ndb.test.createIndex({a:1,b:1})\nAtlas atlas-d3opcw-shard-0 [primary] example> db.test.find({a:1,b:1,c:1}).explain()\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'example.test',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [ { a: { '$eq': 1 } }, { b: { '$eq': 1 } }, { c: { '$eq': 1 } } ]\n },\n queryHash: '8ACB2000',\n planCacheKey: '3C4FF473',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'FETCH',\n filter: { c: { '$eq': 1 } },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { a: 1, b: 1 },\n indexName: 'a_1_b_1',\n isMultiKey: false,\n multiKeyPaths: { a: [], b: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { a: [ '[1, 1]' ], b: [ '[1, 1]' ] }\n }\n },\n rejectedPlans: []\n },\n command: { find: 'test', filter: { a: 1, b: 1, c: 1 }, '$db': 'example' },\n serverInfo: {\n host: 'atlas-d3opcw-shard-00-01.3xfvk.mongodb.net',\n port: 27017,\n version: '6.0.8',\n gitVersion: '3d84c0dd4e5d99be0d69003652313e7eaf4cdd74'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1692249290, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"3538e6ceb2f3ada9370432af2cd8526de5649c01\", \"hex\"), 0),\n keyId: Long(\"7222277933912555522\")\n }\n },\n operationTime: Timestamp({ t: 1692249290, i: 1 })\ndb.test.createIndex({a:1,b:1,c:1})\nAtlas atlas-d3opcw-shard-0 [primary] example> db.test.find({a:1,b:1,c:1}).explain()\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'example.test',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [ { a: { '$eq': 1 } }, { b: { '$eq': 1 } }, { c: { '$eq': 1 } } ]\n },\n queryHash: '8ACB2000',\n planCacheKey: '8E4E5BF1',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'FETCH',\n filter: { c: { '$eq': 1 } },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { a: 1, b: 1 },\n indexName: 'a_1_b_1',\n isMultiKey: false,\n multiKeyPaths: { a: [], b: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { a: [ '[1, 1]' ], b: [ '[1, 1]' ] }\n }\n },\n rejectedPlans: [\n {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { a: 1, b: 1, c: 1 },\n indexName: 'a_1_b_1_c_1',\n isMultiKey: false,\n multiKeyPaths: { a: [], b: [], c: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { a: [ '[1, 1]' ], b: [ '[1, 1]' ], c: [ '[1, 1]' ] }\n }\n }\n ]\n },\n command: { find: 'test', filter: { a: 1, b: 1, c: 1 }, '$db': 'example' },\n serverInfo: {\n host: 'atlas-d3opcw-shard-00-01.3xfvk.mongodb.net',\n port: 27017,\n version: '6.0.8',\n gitVersion: '3d84c0dd4e5d99be0d69003652313e7eaf4cdd74'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n 
internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1692249341, i: 7 }),\n signature: {\n hash: Binary(Buffer.from(\"71739a96ca57b0f6e87bb3f38cf996796d5ebb41\", \"hex\"), 0),\n keyId: Long(\"7222277933912555522\")\n }\n },\n operationTime: Timestamp({ t: 1692249341, i: 7 })\n",
"text": "Firstly, I create a collection - test and inserted some valuesThen I created a compound indexThen I used the explain() to see the winning planthen i created one more composite indexlooked again at the winning planMy question is -\nWhy is MongoDB choosing the plan with index a_1_b_1 when there is a better option of choosing a_1_b_1_c_1?",
"username": "Abhishek_Chaudhary1"
},
{
"code": "{a:1, b:1, c:1}a_1_b_1a_1_b_1_c_1Atlas atlas-d3opcw-shard-0 [primary] example> db.test.find({a:1,b:1,c:1}).explain()\na_1_b_1abc:1{a:1, b:1, c:1}ca_1_b_1abc{a:1, b:1, c:2}ca_1_b_1_c_1abc winningPlan: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: {\n a: 1,\n b: 1,\n c: 1\n },\n indexName: 'a_1_b_1_c_1',\n ...\n rejectedPlans: [\n {\n stage: 'FETCH',\n filter: {\n c: {\n '$eq': 1\n }\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: {\n a: 1,\n b: 1\n },\n indexName: 'a_1_b_1',\nc",
"text": "Hey @Abhishek_Chaudhary1,MongoDB chooses query plans based on the query shape and available indexes to find the optimal balance between performance and resource utilization.In this case when inserting a document {a:1, b:1, c:1}, and creating two indexes - a_1_b_1 and a_1_b_1_c_1.The MongoDB will choose the smaller a_1_b_1 index as the winning plan for this query since the collection has a single document. The database can retrieve the matching document using just a and b, and get the c:1 value for free without needing to use the larger index.However, when inserting a second document {a:1, b:1, c:1} with the same value for c, MongoDB will still use the a_1_b_1 index. This index contains only a and b, but that’s sufficient to locate the two matching documents as the value of c is also the same across the collection.Finally, upon inserting a third document {a:1, b:1, c:2} with a different value for c, MongoDB will now choose a_1_b_1_c_1 as the winning plan. This index contains all three fields a, b, and c, which allows MongoDB to efficiently locate the two documents that match the query’s criteria on all three fields.In summary, MongoDB adapts its query plan based on several factors one being data distribution in this specific case. As more documents match on a specific field like c, MongoDB will start utilizing indexes that contain that field, switching from smaller indexes to larger ones as needed to improve query performance.I hope it clarifies!Thanks,\nKushagra",
"username": "Kushagra_Kesav"
}
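One way to see this behaviour for yourself is to compare the plans directly - force the wider index with hint(), or make c selective by inserting documents with different c values and re-running explain(). A quick sketch:

```javascript
// Force the wider index and compare its executionStats with the default plan.
db.test.find({ a: 1, b: 1, c: 1 }).hint({ a: 1, b: 1, c: 1 }).explain("executionStats");

// Make c selective: once many documents differ on c, the planner tends to
// prefer the index that covers all three query fields.
for (let i = 0; i < 1000; i++) {
  db.test.insertOne({ a: 1, b: 1, c: i });
}
db.test.find({ a: 1, b: 1, c: 1 }).explain("executionStats");
```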
] | Understanding Compound Index in MongoDB | 2023-08-17T05:16:18.120Z | Understanding Compound Index in MongoDB | 576 |
[] | [
{
"code": "",
"text": "Hi!\nI followed this tutorial and it worked really well for me:Learn how to get started with Vector Search on MongoDB while leveraging the OpenAI.The problem I have is that the text I want to store is too big to be represented in a single vector. Now I want to store the original (full) text along with several vectors in one document.\nThen, when I use vector search and one of the vectors matches I want to get a match in my query.Is that possible? If yes, how?",
"username": "Patrick_Treppmann"
},
{
"code": "{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"vector1\": {\n \"dimensions\": 384,\n \"similarity\": \"euclidean\",\n \"type\": \"knnVector\"\n },\n \"vector2\": {\n \"dimensions\": 384,\n \"similarity\": \"euclidean\",\n \"type\": \"knnVector\"\n }\n }\n }\n}\nvector1vector2$searchknnBeta",
"text": "Hi @Patrick_Treppmann and welcome to MongoDB community forums!!The problem I have is that the text I want to store is too big to be represented in a single vector.If I may ask, how big is the text exactly and are you being returned with any specific error messages whilst trying to create the vectors for this text?when I use vector search and one of the vectors matches I want to get a match in my query.As per my understanding from the above statement, you are trying to create two vector embedding fields and create vector index on both of them. Though vector search allows you to create index on multiple fields like:considering vector1 and vector2 are two vector embedded fields, but the catch here is, only one query operator is allowed under $search - and knnBeta specifically is only allowed at the top-level, so it can’t be nested in a compound either. So, one at a time.I hope I was able to help you with your query.Warm regards\nAasawari",
"username": "Aasawari"
}
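To illustrate the "one operator at a time" point: a $search query can target either of the two indexed vector fields, but only one per query. A minimal sketch, where the collection name, index name and query vector are placeholders:

```javascript
db.documents.aggregate([
  {
    $search: {
      index: "default",                 // placeholder index name
      knnBeta: {
        vector: Array(384).fill(0.1),   // placeholder 384-dimensional query vector
        path: "vector1",                // one vector field per knnBeta query
        k: 10
      }
    }
  },
  { $project: { text: 1, score: { $meta: "searchScore" } } }
]);
```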
] | Vector Search schema problem! | 2023-08-13T17:51:28.597Z | Vector Search schema problem! | 513 |
null | [] | [
{
"code": "",
"text": "Small startup, have started on the shared tier and now we have a few customers in production was looking to upgrade the cluster to the next tier.The panel warned that there would be 7-10 minutes of downtime. However it’s been closer to an hour now and the cluster is completely unresponsive.To add to the frustration, the Support chat is completely unresponsive.\nI know we’re only on a cheaper plan right now, but it doesn’t instill confidence that we should grow relying on this kind of service.",
"username": "Qdev"
},
{
"code": "",
"text": "Hello @Qdev ,Welcome to The MongoDB Community Forums! We do not have access to any insights on your cluster over the forums so contacting the chat support was the right choice in this case and I hope you have gotten a response by this time from the chat support team regarding this issue and your issue is resolved.However in saying so, there are several support plans with dedicated SLA’s in which the high severity incidents with severity levels 1 and 2 have 24x7 support as noted in the Cloud Services Support Policy.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Upgrading to next plan. Said would take a few minutes, it's now closer to an hour | 2023-08-15T04:10:46.879Z | Upgrading to next plan. Said would take a few minutes, it’s now closer to an hour | 315 |
null | [
"compass"
] | [
{
"code": "",
"text": "HI Team,we are unable to connect MongoDB CompassMongocompass version:1.39MongoDB Version: 4.2Regards,\nShareef",
"username": "shareef_sayed1"
},
{
"code": "",
"text": "Hi Team,Could you please help me ount",
"username": "shareef_sayed1"
},
{
"code": "",
"text": "",
"username": "shareef_sayed1"
},
{
"code": "getaddrinfo ENOTFOUND",
"text": "Hello @shareef_sayed1 ,Welcome to The MongoDB Community Forums! The getaddrinfo ENOTFOUND error generally indicates that the hostname you have provided for your MongoDB deployment cannot be resolved by Compass.If you are still having trouble connecting, please share some more details including:Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Is MongoDB 4.2 support MongoCompass 1.39 | 2023-08-15T08:07:13.331Z | Is MongoDB 4.2 support MongoCompass 1.39 | 556 |
null | [] | [
{
"code": "",
"text": "I have set up VPC Peering between my AWS VPC and Atlas cluster. My problem is that when I go to change my “IP access list” to remove the “0.0.0.0/0” IP, I am no longer able to connect to the cluster. I have included the VPC’s CIDR block in the allowed IPs.On the AWS side, I have a lambda that is being triggered and which then communicates with Mongo. The VPC that the lambda sits within, has no “internet gateway” so as I understand it, it should not be able to connect to the open internet. The VPC’s Route table diverts all traffic (from the lambda) to the peering connection (which connects Atlas to my VPC).From my setup, it seems that no traffic should be leaving my AWS VPC from any other IP address than the ones included in my VPC CIDR block. So why is it that removing the “0.0.0.0/0” IP from the list of allowed IP’s on mongo stops me from being able to connect?",
"username": "Edouard_Finet"
},
{
"code": "",
"text": "I have also tried adding the security group ID to the IP access list now. This also did not fix the problem.",
"username": "Edouard_Finet"
},
{
"code": "",
"text": "Work for you? I have the same problem. You can a help me?",
"username": "Renato_Souza"
}
] | VPC peering between Atlas cluster and AWS | 2022-09-16T11:58:29.704Z | VPC peering between Atlas cluster and AWS | 1,640 |
null | [
"replication"
] | [
{
"code": "",
"text": "MongoServerError: Reconfig attempted to install a config that would change the implicit default write concern on the shard to {w: 1}. Use the setDefaultRWConcern command to set a cluster-wide write concern on the cluster and try the reconfig again.This is the error I am facing, please look into the issue and help me out with a solution.",
"username": "Aryan_Semwal"
},
{
"code": "",
"text": "First, mongos use “majority” for shards.\nSecond, If you add arbiter to the replica set, Write Concern of set will be changed.if [ (#arbiters > 0) AND (#non-arbiters <= majority(#voting-nodes)) ]\ndefaultWriteConcern = { w: 1 }\nelse\ndefaultWriteConcern = { w: “majority” }Therefore, you must change the defaultReadConcern and writeConcern to use commands such as sh.addShard() and proceed.db.adminCommand( { setDefaultRWConcern: 1, defaultReadConcern: { “level”: “local” }, defaultWriteConcern: { “w”: 1 }, writeConcern: { “w”: 1 } })",
"username": "Kim_Hakseon"
}
] | In my mongodb cluster I have created replicaset for shard and now to that shard replicaset I want to add an arbiter. After running the addArb() i am getting write concern error | 2023-08-17T15:09:38.164Z | In my mongodb cluster I have created replicaset for shard and now to that shard replicaset I want to add an arbiter. After running the addArb() i am getting write concern error | 428 |
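As a rough mongosh sketch of the sequence the answer above describes (set the default write concern first, then add the arbiter). Run it against mongos for a sharded cluster, or against the primary for a plain replica set; the arbiter host below is a placeholder.

```js
// Set an explicit cluster-wide default write concern first...
db.adminCommand({
  setDefaultRWConcern: 1,
  defaultWriteConcern: { w: 1 }
});

// ...then the reconfig implied by adding the arbiter no longer trips the check:
rs.addArb("arbiter-host.example.net:27017");
```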
null | [
"aggregation",
"node-js",
"indexes",
"atlas-cluster"
] | [
{
"code": "{orgId:1,workspaces:1,status:1}{ \"$match\": {\n\"orgId\": \"64c20003cdaf92000d336ae3\",\n\"workspaces\": \"64c20003cdaf92000d336ae7\",\n\"status\": {\n\"$ne\": \"pending\"\n}\n}},\n{ \"$project\": { \"tags\" : 1 }},\n{ \"$unwind\": \"$tags\" },\n{ \"$group\": { \"_id\": \"$tags\", \"count\": { \"$sum\": 1 } }}\n]\n) \n* {\n'$cursor': {\nqueryPlanner: {\nplannerVersion: 1,\nnamespace: 'tagbox.files',\nindexFilterSet: false,\nparsedQuery: {\n'$and': [\n{ orgId: { '$eq': '64c20003cdaf92000d336ae3' } },\n{ workspaces: { '$eq': '64c20003cdaf92000d336ae7' } },\n{ status: { '$not': { '$eq': 'pending' } } }\n]\n},\nqueryHash: '0AB18F00',\nplanCacheKey: '8A6445FF',\nwinningPlan: {\nstage: 'PROJECTION_SIMPLE',\ntransformBy: { _id: true, tags: true },\ninputStage: {\nstage: 'FETCH',\nfilter: {\n'$and': [\n{ status: { '$not': { '$eq': 'pending' } } },\n{ workspaces: { '$eq': '64c20003cdaf92000d336ae7' } }\n]\n},\ninputStage: {\nstage: 'IXSCAN',\nkeyPattern: { orgId: 1 },\nindexName: 'orgId_1',\nisMultiKey: false,\nmultiKeyPaths: { orgId: [] },\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: 'forward',\nindexBounds: {\norgId: [\n'[\"64c20003cdaf92000d336ae3\", \"64c20003cdaf92000d336ae3\"]'\n]\n}\n}\n}\n},\nrejectedPlans: [\n{\nstage: 'PROJECTION_SIMPLE',\ntransformBy: { _id: true, tags: true },\ninputStage: {\nstage: 'FETCH',\nfilter: {\n'$and': [\n{ orgId: { '$eq': '64c20003cdaf92000d336ae3' } },\n{\nworkspaces: { '$eq': '64c20003cdaf92000d336ae7' }\n}\n]\n},\ninputStage: {\nstage: 'IXSCAN',\nkeyPattern: { status: 1 },\nindexName: 'status_1',\nisMultiKey: false,\nmultiKeyPaths: { status: [] },\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: 'forward',\nindexBounds: {\nstatus: [ '[MinKey, \"pending\")', '(\"pending\", MaxKey]' ]\n}\n}\n}\n},\n{\nstage: 'PROJECTION_SIMPLE',\ntransformBy: { _id: true, tags: true },\ninputStage: {\nstage: 'FETCH',\ninputStage: {\nstage: 'IXSCAN',\nkeyPattern: {\norgId: 1,\nworkspaces: 1,\ncreatedAt: -1,\n<em>id: 1,\nstatus: 1\n},\nindexName: 'orgId_1_workspaces_1_createdAt</em>-1__id_1_status_1',\nisMultiKey: true,\nmultiKeyPaths: {\norgId: [],\nworkspaces: [ 'workspaces' ],\ncreatedAt: [],\n_id: [],\nstatus: []\n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: 'forward',\nindexBounds: {\norgId: [\n'[\"64c20003cdaf92000d336ae3\", \"64c20003cdaf92000d336ae3\"]'\n],\nworkspaces: [\n'[\"64c20003cdaf92000d336ae7\", \"64c20003cdaf92000d336ae7\"]'\n],\ncreatedAt: [ '[MaxKey, MinKey]' ],\n_id: [ '[MinKey, MaxKey]' ],\nstatus: [ '[MinKey, \"pending\")', '(\"pending\", MaxKey]' ]\n}\n}\n}\n},\n{\nstage: 'PROJECTION_SIMPLE',\ntransformBy: { _id: true, tags: true },\ninputStage: {\nstage: 'FETCH',\nfilter: { workspaces: { '$eq': '64c20003cdaf92000d336ae7' } },\ninputStage: {\nstage: 'IXSCAN',\nkeyPattern: { orgId: 1, status: 1, collections: 1 },\nindexName: 'orgId_1_status_1_collections_1',\nisMultiKey: true,\nmultiKeyPaths: {\norgId: [],\nstatus: [],\ncollections: [ 'collections' ]\n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: 'forward',\nindexBounds: {\norgId: [\n'[\"64c20003cdaf92000d336ae3\", \"64c20003cdaf92000d336ae3\"]'\n],\nstatus: [ '[MinKey, \"pending\")', '(\"pending\", MaxKey]' ],\ncollections: [ '[MinKey, MaxKey]' ]\n}\n}\n}\n},\n{\nstage: 'PROJECTION_SIMPLE',\ntransformBy: { _id: true, tags: true },\ninputStage: {\nstage: 'FETCH',\ninputStage: {\nstage: 'IXSCAN',\nkeyPattern: { orgId: 1, workspaces: 1, 
status: 1 },\nindexName: 'orgId_1_workspaces_1_status_1',\nisMultiKey: true,\nmultiKeyPaths: {\norgId: [],\nworkspaces: [ 'workspaces' ],\nstatus: []\n},\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: 'forward',\nindexBounds: {\norgId: [\n'[\"64c20003cdaf92000d336ae3\", \"64c20003cdaf92000d336ae3\"]'\n],\nworkspaces: [\n'[\"64c20003cdaf92000d336ae7\", \"64c20003cdaf92000d336ae7\"]'\n],\nstatus: [ '[MinKey, \"pending\")', '(\"pending\", MaxKey]' ]\n}\n}\n}\n},\n{\nstage: 'PROJECTION_SIMPLE',\ntransformBy: { _id: true, tags: true },\ninputStage: {\nstage: 'FETCH',\nfilter: { workspaces: { '$eq': '64c20003cdaf92000d336ae7' } },\ninputStage: {\nstage: 'IXSCAN',\nkeyPattern: { orgId: 1, status: 1, tags: 1 },\nindexName: 'orgId_1_status_1_tags_1',\nisMultiKey: true,\nmultiKeyPaths: { orgId: [], status: [], tags: [ 'tags' ] },\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: 'forward',\nindexBounds: {\norgId: [\n'[\"64c20003cdaf92000d336ae3\", \"64c20003cdaf92000d336ae3\"]'\n],\nstatus: [ '[MinKey, \"pending\")', '(\"pending\", MaxKey]' ],\ntags: [ '[MinKey, MaxKey]' ]\n}\n}\n}\n}\n]\n},\nexecutionStats: {\nexecutionSuccess: true,\nnReturned: 43910,\nexecutionTimeMillis: 8327,\ntotalKeysExamined: 43934,\ntotalDocsExamined: 43934,\nexecutionStages: {\nstage: 'PROJECTION_SIMPLE',\nnReturned: 43910,\nexecutionTimeMillisEstimate: 8149,\nworks: 43935,\nadvanced: 43910,\nneedTime: 24,\nneedYield: 0,\nsaveState: 470,\nrestoreState: 470,\nisEOF: 1,\ntransformBy: { _id: true, tags: true },\ninputStage: {\nstage: 'FETCH',\nfilter: {\n'$and': [\n{ status: { '$not': { '$eq': 'pending' } } },\n{ workspaces: { '$eq': '64c20003cdaf92000d336ae7' } }\n]\n},\nnReturned: 43910,\nexecutionTimeMillisEstimate: 8135,\nworks: 43935,\nadvanced: 43910,\nneedTime: 24,\nneedYield: 0,\nsaveState: 470,\nrestoreState: 470,\nisEOF: 1,\ndocsExamined: 43934,\nalreadyHasObj: 0,\ninputStage: {\nstage: 'IXSCAN',\nnReturned: 43934,\nexecutionTimeMillisEstimate: 69,\nworks: 43935,\nadvanced: 43934,\nneedTime: 0,\nneedYield: 0,\nsaveState: 470,\nrestoreState: 470,\nisEOF: 1,\nkeyPattern: { orgId: 1 },\nindexName: 'orgId_1',\nisMultiKey: false,\nmultiKeyPaths: { orgId: [] },\nisUnique: false,\nisSparse: false,\nisPartial: false,\nindexVersion: 2,\ndirection: 'forward',\nindexBounds: {\norgId: [\n'[\"64c20003cdaf92000d336ae3\", \"64c20003cdaf92000d336ae3\"]'\n]\n},\nkeysExamined: 43934,\nseeks: 1,\ndupsTested: 0,\ndupsDropped: 0\n}\n}\n}\n}\n},\nnReturned: Long(\"43910\"),\nexecutionTimeMillisEstimate: Long(\"8191\")\n},\n{\n'$unwind': { path: '$tags' },\nnReturned: Long(\"233618\"),\nexecutionTimeMillisEstimate: Long(\"8222\")\n},\n{\n'$group': { _id: '$tags', count: { '$sum': { '$const': 1 } } },\nnReturned: Long(\"237\"),\nexecutionTimeMillisEstimate: Long(\"8258\")\n}\n],\nserverInfo: {\nhost: 'atlas-u7e53s-shard-00-02.8avbv.mongodb.net',\nport: 27017,\nversion: '4.4.23',\ngitVersion: '36c047f935fd86b2d5ac4c4f5189e52daa044966'\n},\nok: 1,\n'$clusterTime': {\nclusterTime: Timestamp({ t: 1692090071, i: 12 }),\nsignature: {\nhash: Binary(Buffer.from(\"31ddbd309d0f50634f3086763a42616ee28da83c\", \"hex\"), 0),\nkeyId: Long(\"7221999452528050178\")\n}\n},\noperationTime: Timestamp({ t: 1692090071, i: 12 })\n}\n",
"text": "We have a big collection of files in MongoDB,\neach file can have multiple tags,\nI want to count how many files we have on each tag to show the user\nthis couldn’t be cached as the count changes as users apply any tag (meaning filtering) or any other filtering for that matter.I’ve taken multiple approaches: Indexing, aggregation pipeline, and splitting the count for each tag with indexing and aggregation pipeline.I might need help in my approaches and indexing or trying different approaches to make it even faster.For a large data set of 50K files, this can take up to 7 seconds.\nThis is then passed to the user in our platform as should be real-time data. for that, the time is way too long. (using Node.js API)what are the best approaches to take?I have a index for\n{orgId:1,workspaces:1,status:1} but it is rejected (in the rejected plans)\nI also used hint to try to use it but got similar resultsExample on the code we ranThe result of the executionStats :",
"username": "Oz_Meir"
},
{
"code": "$ne$in$nefileCollectionmetadataFileCollection",
"text": "Did you really need to use $ne operator?It can be very hard to database to execute not operator, probably it’s the reason of your index isn’t in use.If status is an Enum ( I am trying to guess based on value pending), could be better to you use $in operator instead $ne. Try to do it and probably will use your index.Other approach that you can think about is split the documents between file in fileCollection and metadata in metadataFileCollection. So you can just make your queries on metadata collection that will be smaller than file collection and can improve your performance.",
"username": "Jennysson_Junior"
},
{
"code": "db.files.explain(\"executionStats\").aggregate([\n { \"$match\": {\n \"orgId\": \"64c2d9a3cdaf92000d336ae3\",\n \"workspaces\": \"64c2d9a3cdaf92000d336ae7\",\n \"status\": {\n \"$eq\": \"complete\"\n }\n }},\n { \"$project\": { \"tags\" : 1 }}, \n { \"$unwind\": \"$tags\" }, \n { \"$group\": { \"_id\": \"$tags\", \"count\": { \"$sum\": 1 } }} \n ]\n ,{hint: 'orgId_1_workspaces_1_status_1'}\n{\n stages: [\n {\n '$cursor': {\n queryPlanner: {\n plannerVersion: 1,\n namespace: 'tagbox.files',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n { orgId: { '$eq': '64c2d9a3cdaf92000d336ae3' } },\n { status: { '$eq': 'complete' } },\n { workspaces: { '$eq': '64c2d9a3cdaf92000d336ae7' } }\n ]\n },\n queryHash: '04612910',\n planCacheKey: 'E40C51EF',\n winningPlan: {\n stage: 'PROJECTION_SIMPLE',\n transformBy: { _id: true, tags: true },\n inputStage: {\n stage: 'FETCH',\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { orgId: 1, workspaces: 1, status: 1 },\n indexName: 'orgId_1_workspaces_1_status_1',\n isMultiKey: true,\n multiKeyPaths: { orgId: [], workspaces: [ 'workspaces' ], status: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n orgId: [\n '[\"64c2d9a3cdaf92000d336ae3\", \"64c2d9a3cdaf92000d336ae3\"]'\n ],\n workspaces: [\n '[\"64c2d9a3cdaf92000d336ae7\", \"64c2d9a3cdaf92000d336ae7\"]'\n ],\n status: [ '[\"complete\", \"complete\"]' ]\n }\n }\n }\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 43799,\n executionTimeMillis: 8169,\n totalKeysExamined: 43799,\n totalDocsExamined: 43799,\n executionStages: {\n stage: 'PROJECTION_SIMPLE',\n nReturned: 43799,\n executionTimeMillisEstimate: 8055,\n works: 43800,\n advanced: 43799,\n needTime: 0,\n needYield: 0,\n saveState: 448,\n restoreState: 448,\n isEOF: 1,\n transformBy: { _id: true, tags: true },\n inputStage: {\n stage: 'FETCH',\n nReturned: 43799,\n executionTimeMillisEstimate: 8001,\n works: 43800,\n advanced: 43799,\n needTime: 0,\n needYield: 0,\n saveState: 448,\n restoreState: 448,\n isEOF: 1,\n docsExamined: 43799,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'IXSCAN',\n nReturned: 43799,\n executionTimeMillisEstimate: 62,\n works: 43800,\n advanced: 43799,\n needTime: 0,\n needYield: 0,\n saveState: 448,\n restoreState: 448,\n isEOF: 1,\n keyPattern: { orgId: 1, workspaces: 1, status: 1 },\n indexName: 'orgId_1_workspaces_1_status_1',\n isMultiKey: true,\n multiKeyPaths: { orgId: [], workspaces: [ 'workspaces' ], status: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n orgId: [\n '[\"64c2d9a3cdaf92000d336ae3\", \"64c2d9a3cdaf92000d336ae3\"]'\n ],\n workspaces: [\n '[\"64c2d9a3cdaf92000d336ae7\", \"64c2d9a3cdaf92000d336ae7\"]'\n ],\n status: [ '[\"complete\", \"complete\"]' ]\n },\n keysExamined: 43799,\n seeks: 1,\n dupsTested: 43799,\n dupsDropped: 0\n }\n }\n }\n }\n },\n nReturned: Long(\"43799\"),\n executionTimeMillisEstimate: Long(\"8105\")\n },\n {\n '$unwind': { path: '$tags' },\n nReturned: Long(\"232843\"),\n executionTimeMillisEstimate: Long(\"8142\")\n },\n {\n '$group': { _id: '$tags', count: { '$sum': { '$const': 1 } } },\n nReturned: Long(\"237\"),\n executionTimeMillisEstimate: Long(\"8165\")\n }\n ],\n serverInfo: {\n host: 'atlas-u7e53s-shard-00-02.8avbv.mongodb.net',\n port: 27017,\n version: '4.4.23',\n gitVersion: '36c047f935fd86b2d5ac4c4f5189e52daa044966'\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: 
Timestamp({ t: 1692170598, i: 40 }),\n signature: {\n hash: Binary(Buffer.from(\"514609968c6aab0748ec202e591120dff1b3466b\", \"hex\"), 0),\n keyId: Long(\"7221999452528050178\")\n }\n },\n operationTime: Timestamp({ t: 1692170598, i: 40 })\n}\n",
"text": "Thanks for the response\nyou had a promising idea\nbut it seems not to really help at the end\nit still takes 8 seconds to get the 40K filesand got result",
"username": "Oz_Meir"
},
{
"code": "FETCHdb.files.stats()",
"text": "Your FETCH stage looks like a degraded performance.I don’t know the reason for it.What is the avg size of each document in this collection?\nHow much memory does your cluster have?Try to share your collection stats, can help to identify the problem\ndb.files.stats()",
"username": "Jennysson_Junior"
}
] | Creating a fast and efficient query to count dynamic data | 2023-08-15T18:33:53.596Z | Creating a fast and efficient query to count dynamic data | 488 |
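A small simplification of the pipeline discussed above: $unwind plus $sortByCount replaces the manual $group, though the dominant cost in the explain output is still the FETCH of ~44k documents, so this alone won't fix the 8-second runtime. The IDs are the ones from the thread.

```js
db.files.aggregate([
  {
    $match: {
      orgId: "64c2d9a3cdaf92000d336ae3",
      workspaces: "64c2d9a3cdaf92000d336ae7",
      status: "complete"
    }
  },
  { $project: { tags: 1 } },
  { $unwind: "$tags" },
  { $sortByCount: "$tags" } // shorthand for grouping by value and sorting by count
]);
```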
null | [
"node-js"
] | [
{
"code": "ending session with error: error bootstrapping new query: error while querying the state collection \"state_mycollection\": (BadValue) cannot compare to undefined (ProtocolErrorCode=201)mycollection",
"text": "Hi there, I’m having problems with OtherSessionError in one of the users from my application. This error is currently only occurring to this user (that I’m aware) and it’s happening when it tries to open the realm connection and setup initial subscriptions for flexible sync.The complete error is ending session with error: error bootstrapping new query: error while querying the state collection \"state_mycollection\": (BadValue) cannot compare to undefined (ProtocolErrorCode=201).This collection is used as a relationship in another schema, so I checked if there was any undefined value in that collection, which wasn’t the case. I also tried checking inside mycollection for undefined values, but also didn’t find anything.I tried searching for more information about this error in the documentation and didn’t find anything, so any help on that would be appreciated.",
"username": "Rossicler_Junior"
},
{
"code": "",
"text": "Hello @Rossicler_Junior ,Are you using custom user data in your rules by any chance? For reference, we’ve seen errors like the above occur when 1.) a rule expression references custom user data, and 2.) custom user data is missing for a user that is trying to start a sync session.Jonathan",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "Actually, yes, I do use user custom data in my realm app rules.I checked and it is the case, but I’m still not sure why did that happen. The user did had a custom data, but for some reason that I’m unsure, the realm user id got changed somehow. Do you have any idea how is that possible? I checked the user createdAt and it is correct, so it isn’t supposed to be a new user.",
"username": "Rossicler_Junior"
},
{
"code": "",
"text": "Hey @Rossicler_Junior ,Are you willing to share me your App ID (can find this via the URL of your app – feel free to DM me to avoid having it sit on the public forum)? From there, the team can take a deeper look into the issue and hopefully identify what’s happening.Jonathan",
"username": "Jonathan_Lee"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm flexible sync throwing OtherSessionError | 2023-08-16T20:11:22.925Z | Realm flexible sync throwing OtherSessionError | 599 |
null | [
"java",
"atlas-search"
] | [
{
"code": "equalsSearchOperator.of(...)",
"text": "I am using mongodb-driver-core 4.9.0, and noticed that the library does not offer many operators, like equals as mentioned in https://www.mongodb.com/docs/atlas/atlas-search/equals/May I ask if I am missing anything here, or I would have to use SearchOperator.of(...) here? If so, I tried to pass in the equals Bson document, but it does not work in my query.Looks forwards to your help! Thank you!",
"username": "williamwjs"
},
{
"code": "equalsof",
"text": "Yeah, it doesn’t look like there is no builder for equals. If you reply with the code you tried for using of, along with the expect and actual output, we can try to figure out what’s going wrong.",
"username": "Jeffrey_Yemin"
},
{
"code": " private static final String EQUAL_CLAUSE = \"\"\"\n {\n \"equals\": {\n \"path\": \"myFlag\",\n \"value\": true\n }\n }\n \"\"\";\n filterOperators.add(SearchOperator.of(Document.parse(EQUAL_CLAUSE)));\n CompoundSearchOperator compoundSearchOperator = SearchOperator.compound().filter(filterOperators);\n if (!mustOperators.isEmpty()) {\n compoundSearchOperator = compoundSearchOperator.must(mustOperators);\n }\n if (!shouldOperators.isEmpty()) {\n compoundSearchOperator = compoundSearchOperator.should(shouldOperators);\n }\n aggregates.add(Aggregates.search(compoundSearchOperator));\n",
"text": "Hi Jeffrey, thank you for your reply!I am trying to do something like:However, it would yield with no results for me.",
"username": "williamwjs"
},
{
"code": "",
"text": "It sounds like the Java driver part is working ok, but the query just doesn’t match. You might want to look in logs to confirm that the query is generated as you expect, and then check your data against the query, and re-read the docs, to see why you’re not getting results.Jeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Found the issue in the index configuration. all good now.Thank you!",
"username": "williamwjs"
},
{
"code": "",
"text": "Hey, may I ask how do you fix the issue? What is wrong with the index configuration?\nI am facing the same issue",
"username": "Eason"
},
{
"code": "equals",
"text": "And indeed there is no builder for equals see here: SearchOperator (driver-core 4.10.0 API) (mongodb.github.io)",
"username": "Eason"
},
{
"code": "lucene.standard",
"text": "Sorry, tbh, I cannot remember what I did…\nI think maybe I initially did not configure to use lucene.standard as the Analyzer. Would you check that? Or did you turn on the dynamic mapping?(Perhaps pasting your index mapping JSON here would be helpful to identify the cause)",
"username": "williamwjs"
}
] | How to use equals operator for atlas search in mongodb-driver-core Java library | 2023-03-14T02:46:39.082Z | How to use equals operator for atlas search in mongodb-driver-core Java library | 1,236 |
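For reference, a static mapping for a boolean field is one index configuration under which the equals operator works; the field, collection and index names below are placeholders rather than the original poster's actual configuration.

```js
// Assumed Atlas Search index definition:
// {
//   "mappings": {
//     "dynamic": false,
//     "fields": {
//       "myFlag": { "type": "boolean" }
//     }
//   }
// }

// Quick mongosh check that the operator matches documents:
db.myCollection.aggregate([
  { $search: { index: "default", equals: { path: "myFlag", value: true } } },
  { $limit: 5 }
]);
```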
null | [
"node-js",
"mongoose-odm",
"react-native",
"flexible-sync",
"app-services-cli"
] | [
{
"code": "mongoose.db(<db name>).collection(<collection name>)",
"text": "In Node.js we can access a specific database in the following mannermongoose.db(<db name>).collection(<collection name>)How do i do this in realm react-native? Im trying to sync specific data in specific databases within the cluster in mongoDB Atlas, but it goes to a default database.Also how do i configure 2 different databases. Let’s say i have “Purchases” database, and this database needs to update the “Clients” database and “Product” database. I know how to do this with mongoose but not with realm",
"username": "Herlander_Tavares"
},
{
"code": "",
"text": "Hi, when using developer mode to populate schemas, all tables are mapped to “defaultDBName.realmTableName”. If you want to map realm tables to specific database/collections, you can do so by setting this in the “Schemas” configuration in the UI.I think this page is perhaps good to read through. If you have any other questions about it after reading through it, please let me know and I would be happy to assist.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "export default class Product extends Realm.Object {\n _id!: Realm.BSON.ObjectId;\n title!: string;\n price!: number;\n descritpion!: string;\n\n static generate({title, price, description}) {\n return {\n _id: new Realm.BSON.ObjectId(),\n title,\n price,\n description,\n };\n }\n\n static schema: schemaType = {\n //DB name is shop\n //Collection name is products\n name: 'Product',\n primaryKey: '_id',\n properties: {\n _id: {type: 'objectId', default: () => new Realm.BSON.ObjectID()},\n title: 'string',\n price: 'double',\n description: 'string',\n },\n };\n}\n",
"text": "Thank you for replying, and forgive me for my lack of understanding, I’m new to realm.So how would that look like in this schema?",
"username": "Herlander_Tavares"
},
{
"code": "",
"text": "Hi, to change the mapping from Realm Table to MongoDB Namespace you need to configure the App Services application at realm.mongodb.com. See this screenshot:\nScreenshot 2023-08-17 at 11.14.40 AM2512×1264 177 KB\n",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "I have created an app, it’s fully functional, with users, schemas, rules, etc.My problem is, as you can see in the image below, I have a database named shop, with a collection named products, but when I sync, my objects go to a database named todo, but I want to specifically send them to shop.\nimg21920×1081 246 KB\nI understand that this is because my sync settings in app services is set to development mode where you can specify a default database. But I want to pick specifically where my objects should be stored, and not default to a random database.",
"username": "Herlander_Tavares"
},
{
"code": "",
"text": "Hi, to pick specific db/collection to map your tables to you need do one of the following:Configure it in the UI:Use Developer Mode:Let me know if that works for you.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "WardA (db)\n Patients (collection)\n <patient obj>\n <patient obj>\n <patient obj>\n ...\n\nWardB (db)\n Patients (collection)\n <patient obj>\n <patient obj>\n <patient obj>\n ...\nconst user = WardA\n\nconst collection = mongoose.db(<user>).collection('patients').find( )\n",
"text": "That’s very clear, i will do that now, thank you. But what happens, if I have two databases with the same collection and schema, how would it know which collection to sync to?I will give you my real app scenario, given this is just a demo for me to try and work this out:I’m building an app for hospital wards, each ward will have their own database, each database it’s own “patient” collection, among other collections with the exact same name. So you may have:I was planning to specify the database to sync to with state, based on the user that is logged in to the app. Example with mongoose:Is there a way to specify a database like that with react-native?",
"username": "Herlander_Tavares"
},
{
"code": "",
"text": "Are you trying to use sync or just the Remote Data Access built into the SDK’s? If you are trying to use sync that is not currently possible. Its worth mentioning that generally having databases per customer is an anti-pattern and you should try to use a single DB / Collection with a “ward_id” field or something like that.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "So how would you structure this database, would you have all the patients, staff and all other info within the same database? so for example, the “patients” collection will hold all the patients for all the wards?Also, say we have have 4 hospitals, each hospital has about 8 to 10 wards, each ward has around 25 patients, would you still recommend having everything in the same database.The app is to be used in hospitals, I will need to sync data because you may not always have an internet connection, so I need the app to work online and off.",
"username": "Herlander_Tavares"
},
{
"code": "",
"text": "Hi, I would recommend having a single collection that holds the documents for all hospitals/users. I think this is a good documentation summary of how to think of multi-tenant architecture in MongoDB.",
"username": "Tyler_Kaye"
}
] | Realm React - specifying a database | 2023-08-17T05:54:13.350Z | Realm React - specifying a database | 506 |
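A sketch of the single-collection, per-ward approach suggested above using Realm JS flexible sync; the Patient table, wardId field and currentWardId variable are illustrative, not part of the original schema.

```js
// Subscribe only to the patients of the ward the logged-in user belongs to.
const patientsForWard = realm.objects("Patient").filtered("wardId == $0", currentWardId);

await realm.subscriptions.update((mutableSubs) => {
  mutableSubs.add(patientsForWard, { name: "patientsForCurrentWard" });
});
```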
null | [] | [
{
"code": "",
"text": "Hello There,I need to calculate student progress in percentage. Let’s say there are 10 courses that they need to complete and so far they finished 5. I need to show there progress in percentage.\nI have a collection studentProgress, which has an array field student.\nStudent:Array\n{“courseId”:\n“courseName”:“Social”,\n“status”:“completed”\n“score”:50\n},\n{\n{“courseId”:\n“courseName”:“Math”,\n“status”:“completed”\n“score”:80\n},\nSomething like above. I need to show this gauge chart. First I added new fields by calculating Total Courses($size:“$student) and then size of the score Total Completed($size:”$student.score\").\nIndividually they are giving me right answers, but when I put them all together like\n{$multiply:\n[{ $divide:\n[“$TotalCompleted”,“$TotalCourses”] },\n100\n]\n}\nIt isn’t giving me the right answer. I feel like there is something wrong with calculating Total Courses field.Would someone please help me with this.\nThanks in advance.\nSunita",
"username": "sunita_kodali"
},
{
"code": "db.studentProgress.insertMany([\n {\n studentId: 'S1',\n courses: [\n {\n courseId: 'C1',\n courseName: 'Social',\n status: 'completed',\n score: 50\n },\n {\n courseId: 'C2',\n courseName: 'Math',\n status: 'completed',\n score: 100\n },\n {\n courseId: 'C3',\n courseName: 'Geography',\n status: 'completed',\n score: 80\n },\n {\n courseId: 'C4',\n courseName: 'Literature',\n status: 'enrolled',\n score: null\n },\n {\n courseId: 'C5',\n courseName: 'Spanish',\n status: 'enrolled',\n score: null\n },\n ],\n },\n {\n studentId: 'S2',\n courses: [\n {\n courseId: 'C1',\n courseName: 'Social',\n status: 'completed',\n score: 95\n },\n {\n courseId: 'C2',\n courseName: 'Math',\n status: 'enrolled',\n score: null\n },\n ],\n }\n]);\ndb.studentProgress.aggregate([\n {\n $addFields: {\n totalCoursesEnrolled: {\n $size: '$courses'\n },\n totalCoursesCompleted: {\n $size: {\n $filter: {\n input: '$courses',\n cond: {\n $eq: ['$$this.status', 'completed']\n }\n }\n }\n }\n }\n },\n {\n $addFields: {\n percentageOfCompletedCourses: {\n $multiply: [\n { $divide: ['$totalCoursesCompleted', '$totalCoursesEnrolled'] },\n 100\n ]\n\n }\n }\n },\n // clean results\n {\n $project: {\n _id: false,\n courses: false,\n }\n }\n]);\n[\n {\n studentId: 'S1',\n totalCoursesEnrolled: 5,\n totalCoursesCompleted: 3,\n percentageOfCompletedCourses: 60\n },\n {\n studentId: 'S2',\n totalCoursesEnrolled: 2,\n totalCoursesCompleted: 1,\n percentageOfCompletedCourses: 50\n }\n]\n$addFields",
"text": "Hello, @sunita_kodali !I can help you to resolve your issue, but first - read the post about Formatting code and log snippets in posts. Remember: the better you explain your problem, provide examples and format your code in the posts - more people in the forum would want to help you Ok, let’s create test collection with sample documents that would be close to your situation:We can easily calculate each student progress with the following aggregation pipeline:Output:Notice: you can not calculate fields and reuse them within the same stage. That is why I needed to use stage $addFields two times ",
"username": "slava"
},
{
"code": "",
"text": "got it. Thank you for detailed explanation.",
"username": "sunita_kodali"
}
] | Progress percentage | 2023-07-28T23:13:12.486Z | Progress percentage | 708 |
null | [
"java"
] | [
{
"code": "",
"text": "Hi, I try to migrate mongodb-java-driver-legacy to mongodb-java-driver-sync, but I can found connection-per-host option in mongo client settings.how can i migrate that option?Thx!",
"username": "Namaksin_N_A"
},
{
"code": "maxSizeConnectionPoolSettingsvar settings = MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(\"<your connection string>\"))\n .applyToConnectionPoolSettings(builder ->\n .maxSize(200)\n .build()\n",
"text": "See https://www.mongodb.com/docs/drivers/java/sync/v4.3/fundamentals/connection/mongoclientsettings/#std-label-mcs-connectionpool-settings.It’s the maxSize property on ConnectionPoolSettings, e.g.",
"username": "Jeffrey_Yemin"
}
] | How to migrate connection-per-host option in mongodb-java-driver-legacy to mongodb-java-driver-sync? | 2023-08-17T01:42:26.006Z | How to migrate connection-per-host option in mongodb-java-driver-legacy to mongodb-java-driver-sync? | 354 |
null | [] | [
{
"code": "\"msg\":\"Internal assertion\",\"attr\":{\"error\":{\"code\":262,\"codeName\":\"ExceededTimeLimit\",\"errmsg\":\"operation exceeded time limit\"},\"location\":\"{fileName:\\\"src/mongo/util/interruptible.h\\\", line:398, functionName:\\\"operator()\\\"}\"}}\n \"type\" : \"op\",\n \"host\" : \"hostname:27018\",\n \"desc\" : \"conn18708\",\n \"connectionId\" : 18708,\n \"client\" : \"IP:39780\",\n \"clientMetadata\" : {\n \"driver\" : {\n \"name\" : \"NetworkInterfaceTL\",\n \"version\" : \"4.4.17\"\n },\n \"os\" : {\n \"type\" : \"Linux\",\n \"name\" : \"Ubuntu\",\n \"architecture\" : \"x86_64\",\n \"version\" : \"16.04\"\n }\n },\n \"active\" : true,\n \"currentOpTime\" : \"2023-07-27T10:15:35.575-07:00\",\n \"effectiveUsers\" : [\n {\n \"user\" : \"__system\",\n \"db\" : \"local\"\n }\n ],\n \"opid\" : 1905631997,\n \"secs_running\" : NumberLong(6),\n \"microsecs_running\" : NumberLong(6675354),\n \"op\" : \"command\",\n \"ns\" : \"admin.$cmd\",\n \"command\" : {\n \"isMaster\" : 1,\n \"maxAwaitTimeMS\" : NumberLong(10000),\n \"topologyVersion\" : {\n \"processId\" : ObjectId(\"64b7ff3ca7e87fe3e3fee978\"),\n \"counter\" : NumberLong(5)\n },\n \"internalClient\" : {\n \"minWireVersion\" : 8,\n \"maxWireVersion\" : 9\n },\n \"$db\" : \"admin\"\n },\n \"numYields\" : 0,\n \"waitingForLatch\" : {\n \"timestamp\" : ISODate(\"2023-07-27T17:15:29Z\"),\n \"captureName\" : \"FutureResolution\"\n },\n \"locks\" : {\n\n },\n \"waitingForLock\" : false,\n \"lockStats\" : {\n\n },\n \"waitingForFlowControl\" : false,\n \"flowControlStats\" : {\n\n }\n },\n",
"text": "Hi Team,\nWe have upgraded mongodb from 4.2 to 4.4. After the upgrade we are seeing below user asserts: ExceededTimelimit. Please let me know which parameter to adjust on this?ErrorI am guessing it is related to belowRegards\nSyed",
"username": "Tausiff_Ahamad_Syed"
},
{
"code": "{\"t\":{\"$date\":\"2023-08-17T02:50:32.464-07:00\"},\"s\":\"D2\", \"c\":\"COMMAND\", \"id\":21965, \"ctx\":\"conn40\",\"msg\":\"About to run the command\",\"attr\":{\"db\":\"admin\",\"commandArgs\":{\"isMaster\":1,\"maxAwaitTimeMS\":10000,\"topologyVersion\":{\"processId\":{\"$oid\":\"64d4e48e6472f9ade56a1f31\"},\"counter\":3},\"internalClient\":{\"minWireVersion\":9,\"maxWireVersion\":9},\"$db\":\"admin\"}}}\n{\"t\":{\"$date\":\"2023-08-17T02:50:32.465-07:00\"},\"s\":\"D3\", \"c\":\"FTDC\", \"id\":23904, \"ctx\":\"conn40\",\"msg\":\"Using maxAwaitTimeMS for awaitable isMaster protocol.\"}\n{\"t\":{\"$date\":\"2023-08-17T02:50:32.465-07:00\"},\"s\":\"D1\", \"c\":\"REPL\", \"id\":21342, \"ctx\":\"conn40\",\"msg\":\"Waiting for an isMaster response from a topology change or until deadline\",\"attr\":{\"deadline\":{\"$date\":\"2023-08-17T09:50:42.465Z\"},\"currentTopologyVersionCounter\":3}}\n{\"t\":{\"$date\":\"2023-08-17T02:50:42.468-07:00\"},\"s\":\"D3\", \"c\":\"-\", \"id\":4892201, \"ctx\":\"conn40\",\"msg\":\"Internal assertion\",\"attr\":{\"error\":{\"code\":262,\"codeName\":\"ExceededTimeLimit\",\"errmsg\":\"operation exceeded time limit\"},\"location\":\"{fileName:\\\"src/mongo/util/interruptible.h\\\", line:398, functionName:\\\"operator()\\\"}\"}}\n",
"text": "We have narrowed down the issue could be related to isMaster response timeout of maxAwaitTimeMS 10000. is there a way to change this parameter?",
"username": "Tausiff_Ahamad_Syed"
}
] | Mongo 4.4 error code 262 Operation exceeded time limit in user asserts | 2023-07-27T17:30:07.357Z | Mongo 4.4 error code 262 Operation exceeded time limit in user asserts | 564 |
null | [
"flutter"
] | [
{
"code": "libc++abi: terminating with uncaught exception of type realm::util::DecryptionFailed: Decryption failed: 'unable to decrypt after 0 seconds (retry_count=2, from=iv1 == 0, size=16384)'\n* thread #9, name = 'io.flutter.1.ui', stop reason = signal SIGABRT\n frame #0: 0x00000001ddf67160 libsystem_kernel.dylib`__pthread_kill + 8\nlibsystem_kernel.dylib`:\n-> 0x1ddf67160 <+8>: b.lo 0x1ddf67180 ; <+40>\n 0x1ddf67164 <+12>: pacibsp\n 0x1ddf67168 <+16>: stp x29, x30, [sp, #-0x10]!\n 0x1ddf6716c <+20>: mov x29, sp\nTarget 0: (Runner) stopped.\nLost connection to device.\n",
"text": "I used realm base on flutter, I insert data into realm and found errorPlease help to resolve this error ",
"username": "Chakrachai_Klinfung"
},
{
"code": "",
"text": "Hi @Chakrachai_Klinfung!\nWe will be glad if you could share more details about you code.\nHave you set an encryption key into your Configuration?",
"username": "Desislava_St_Stefanova"
},
{
"code": "var config = Configuration.local(\n schemas,\n encryptionKey: rawKey,\n path: \"$_dbPath/${dbType.name}.realm\",\n schemaVersion: odaSchemaVersion,\n shouldDeleteIfMigrationNeeded: ,\n shouldCompactCallback: (_forServer ? (totalSize, usedSize) => true : null),\n);\nRealm realm = Realm(config);\n",
"text": "My code for setup RealmI set encryption key into my Configuration. and I can query data too.",
"username": "Chakrachai_Klinfung"
},
{
"code": "Realm file decryption failed (Decryption failed: 'unable to decrypt after 0 seconds (retry_count=4........')",
"text": "@Chakrachai_Klinfung thanks for the sample code. It looks correct to me.\nI suppose you receive this error: Realm file decryption failed (Decryption failed: 'unable to decrypt after 0 seconds (retry_count=4........').\nIf yes, this means that the file that you try to open is encrypted with a different key.\nOnce you create a Realm file with one encryptionKey, later it can be opened and used only with the same encryptionKey. Probably, you already have an existing realm file on this location, that have been created with another encryptionKey.\nI hope this will help to fix your problem.",
"username": "Desislava_St_Stefanova"
},
{
"code": "",
"text": "Thank for support me.\nI’m not sure this problem from my usecase",
"username": "Chakrachai_Klinfung"
},
{
"code": "",
"text": "Yes, cross-platform encryption could be the thing that causes the issue. We have an ticket opened about this, where you can find more information. I would suggest you to find some workaround. Try to encrypt the realm using an app built on the same architecture as the one that you use for the decryption.",
"username": "Desislava_St_Stefanova"
}
] | Realm on Flutter error 'unable to decrypt after 0 seconds (retry_count=2, from=iv1 == 0, size=16384)' | 2023-08-15T15:22:27.541Z | Realm on Flutter error ‘unable to decrypt after 0 seconds (retry_count=2, from=iv1 == 0, size=16384)’ | 696 |
null | [
"aggregation"
] | [
{
"code": "ping -c 1 <hostname>",
"text": "I can get the IP of server with the help of: ping -c 1 <hostname> , But now how I can connect to the server that is not self-hosted.\nI wanted to see the logs on server : /var/log/mongodb/mongodb.log.\nHow can I see this.Thanks in advance.",
"username": "Shivam_Tiwari2"
},
{
"code": "show log global",
"text": "Hi @Shivam_Tiwari2You do not get server access when using Atlas.You may be able to view logs through the Atlas gui. https://www.mongodb.com/docs/atlas/mongodb-logs/ downloading logs is dependent on a dedicated tier(M10+) though.Using the cli you can view the last 1024 events with the shell helper.show log global",
"username": "chris"
},
{
"code": "",
"text": "Thanks @chris for this solution.",
"username": "Shivam_Tiwari2"
}
] | How to connect with server which is Managed by Mongodb | 2023-08-16T10:30:03.785Z | How to connect with server which is Managed by Mongodb | 306 |
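For completeness, the shell helper mentioned above wraps the getLog command, which can also be issued directly:

```js
// Equivalent of `show log global`:
db.adminCommand({ getLog: "global" });

// List the log filters available on the node:
db.adminCommand({ getLog: "*" });
```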
null | [
"connector-for-bi"
] | [
{
"code": "/bin/mongosqld --addr=127.0.0.1 --mongo-authenticationMechanism SCRAM-SHA-1 --mongo-authenticationSource my_database --auth --mongo-username=user --mongo-password=XXXXconnection accepted from X.X.X.X:18865 #1 (1 connection now open)\n[conn1] handshake error: unable to saslStart conversation 0: unable to execute command: (AuthenticationFailed) Authentication failed.\n",
"text": "I have mongodb 4.4.0 running on a linux machine, as well as mongosqld version 2.14.2. I am trying to connect from a windows machine. The db is set up with auth. Here’s the command currently being used to start the mongosqld:/bin/mongosqld --addr=127.0.0.1 --mongo-authenticationMechanism SCRAM-SHA-1 --mongo-authenticationSource my_database --auth --mongo-username=user --mongo-password=XXXXWhen the windows user tries to connect, I’m getting this error from the mongo odbc daemon:I cannot seem to find any solution to this, and so many of the posts don’t specify where you are supposed to install things (the connecting workstation or the server). I’ve tried the other 3 authentication mechanisms, same error.Please let me know if you need any more info and if you have any idea as to how to resolve this?FWIW, if I remove the auth on the mongo daemon and the odbc daemon, the windows machine can connect just fine.Thanks for any help,\nKevin",
"username": "Kevin_Chugh"
},
{
"code": "",
"text": "I was facing the same issue, the following work around resolved the issue:\nI just had to replace ‘%40’ with ‘@’ in the password.\nThere could be other reasons as well, but this one worked out for me.Thanks!\nShabeer",
"username": "syed_shabeer"
},
{
"code": "",
"text": "Hi I tried using $ in password. I am able to map the schema in mongosqld but when configurating to mongodb odbc driver I am facing an error Handshake error: unable to saslstart conversion",
"username": "Sainath_Rao"
}
] | Handshake error: unable to saslStart conversation with mongosqld | 2021-03-22T16:28:44.530Z | Handshake error: unable to saslStart conversation with mongosqld | 6,065 |
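A hedged note on the workaround above: URL-encoding (such as %40 for @) only applies inside a connection URI; when the password is passed to mongosqld as a flag it should normally be the literal value, single-quoted so the shell does not expand characters like $. The values below are placeholders.

```sh
mongosqld --addr=127.0.0.1 \
  --auth \
  --mongo-authenticationSource my_database \
  --mongo-username=user \
  --mongo-password='p@ss$word'
```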
null | [
"node-js",
"atlas-functions",
"atlas-triggers"
] | [
{
"code": "",
"text": "I am writing a function for an Atlas Trigger and I have decided to use Node.js runtime environment to write this function. To start I am trying to connect to the cluster using the following command: “context.services.get(<SERVICE_NAME>)”. However, this method is only returning “{“version”:1}”. So when I attempt to connect to the db with the result using context.services.get(<SERVICE_NAME>).db(‘db_name’), it returns an empty object {}.I have updated the Service Name and the DB Name to match the one in our setup. Any idea on why I am unable to properly access the cluster?While we’re here I also want to ask if any of this setup is necessary for my use case. I am writing this scheduled trigger to scan a collection for records with a Due Date that match the current time. Upon locating a document that matches the criteria, it will update the status.",
"username": "Nicholas_Jurgens"
},
{
"code": "exports = async function () {\n const result = context.services.get(\"mongodb-atlas\").db('test');\n return {result}\n}\n> result: \n{\n \"result\": {\n \"collection\": {},\n \"aggregate\": {},\n \"getCollectionNames\": {}\n }\n}\n> result (JavaScript): \nEJSON.parse('{\"result\":{\"collection\":{},\"aggregate\":{},\"getCollectionNames\":{}}}')\n",
"text": "Hey @Nicholas_Jurgens,I am writing a function for an Atlas Trigger and I have decided to use Node.js runtime environment to write this function. To start I am trying to connect to the cluster using the following command: “context.services.get(<SERVICE_NAME>)”. However, this method is only returning “{“version”:1}”If I understood correctly, you are encountering an issue while executing the Atlas Function.Could you confirm if you have everything defined in your Linked Data Source tab?\nFor example, I’ve got this information:\nimage1559×887 110 KB\nAfter saving the draft and deploying the changes, I executed the code below:And it returns the following output:Please try executing the above code and see if this resolves the issue.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | context.services.get(<SERVICE_NAME>) returning {"version":1} | 2023-08-11T16:34:13.424Z | context.services.get(<SERVICE_NAME>) returning {“version”:1} | 528 |
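For the scheduled-trigger use case described in the question (flagging documents whose due date has passed), a sketch of the function body; the database, collection and field names are placeholders.

```js
exports = async function () {
  const tasks = context.services.get("mongodb-atlas").db("db_name").collection("tasks");

  const result = await tasks.updateMany(
    { dueDate: { $lte: new Date() }, status: "pending" },
    { $set: { status: "due" } }
  );

  console.log(`Marked ${result.modifiedCount} task(s) as due`);
};
```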
[
"mongodb-shell"
] | [
{
"code": "mongosh\nCurrent Mongosh Log ID: 63c4361ece45dfaa717549a9\nConnecting to: **mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.2**\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\n",
"text": "Hi does anyone have a solution to my problem. Whenever I type mongosh into my terminal this is the error I keep getting\nScreenshot 2023-01-15 at 12.11.351920×1200 91.9 KB\nAny help would be nice",
"username": "Runyararo_Mucheche"
},
{
"code": "mongosh",
"text": "By default mongosh will connect to to 127.0.0.1(localhost) on port 27017.There is no server listening at this address or the connection is being actively rejected(firewall as an example).",
"username": "chris"
},
{
"code": "",
"text": "Hi ChrisHow should I go about resolving the issue with it being rejected",
"username": "Runyararo_Mucheche"
},
{
"code": "",
"text": "You need a mongodb server to connect to.Start a free Atlas cluster or deploy your own.Links for both at the page below.",
"username": "chris"
},
{
"code": "",
"text": "Hi Chris\nI’ve tried uninstalling and reinstalling mongo, using both version 5.0 and 6.0 but the error remains the same. Is there a way for me to run the mongo shell locally on my mac terminal?",
"username": "Runyararo_Mucheche"
},
{
"code": "mongosh",
"text": "You are running the shell, it is giving an error because there is no server to connect to. mongosh is like mysql, psql, sqlplus it is a client, it needs a server to connect to.You need an Atlas cluster or a local server to get started.Getting started with Atlas:Installing and starting a server on macOS:",
"username": "chris"
},
{
"code": "",
"text": "after facing this question, I come across this thread and I found no way to solve my question. yet I have tried to search until come to know how to solve this, below is my way let me know if it works for you too:first start the mongodb by using this command into the terminal:service mongod startthen run:mongoshfrom here you will be able to proceed. Thank you !",
"username": "Joseph_IRIRWANIRIRA"
},
{
"code": "",
"text": "",
"username": "Kushagra_Kesav"
}
] | Mongosh command not working and entering shell | 2023-01-15T17:32:08.271Z | Mongosh command not working and entering shell | 2,757 |
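If the goal is a local server on macOS (as in the thread above), the install docs linked there boil down to roughly the following Homebrew steps; the version number is an example and should be taken from the current docs.

```sh
brew tap mongodb/brew
brew install mongodb-community@6.0
brew services start mongodb-community@6.0

# mongosh should now connect to localhost:27017
mongosh
```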
|
null | [
"dart"
] | [
{
"code": "import 'package:realm/realm.dart';\n\npart 'schemas.g.dart';\n\n@RealmModel()\nclass _ItemCategory {\n @MapTo('_id')\n @PrimaryKey()\n late ObjectId id;\n late String name;\n late String description;\n late bool isActive;\n @Backlink(#categories)\n late _ItemType? type;\n late List<_Item> categories;\n}\n\n@RealmModel()\nclass _ItemType {\n @MapTo('_id')\n @PrimaryKey()\n late ObjectId id;\n late String name;\n late String description;\n late bool isConsumable;\n late List<_ItemCategory> categories;\n late bool isActive;\n}\n\n@RealmModel()\nclass _Item {\n @MapTo('_id')\n @PrimaryKey()\n late ObjectId id;\n late String name;\n late String description;\n late bool isActive;\n @Backlink(#items)\n late _ItemCategory? type;\n}\n",
"text": "RequirementsCan you please modify my code to meet my requirements",
"username": "Health_Centre_ERP"
},
{
"code": "@BacklinkIterable@RealmModel()\nclass _Source {\n String name = 'source';\n _Target? oneTarget;\n List<_Target> manyTargets = [];\n}\n\n@RealmModel()\nclass _Target {\n String name = 'target';\n\n @Backlink(#oneTarget)\n late Iterable<_Source> oneToMany;\n\n @Backlink(#manyTargets)\n late Iterable<_Source> manyToMany;\n}\n",
"text": "Hi @Health_Centre_ERP!\nYou have to design your model according to your needs. The model could be changed during the development it may require redesign.\nYou can follow our documentation and our tests to understand how the backlinks are used.\nThis code sample is taken from our tests. Please note that the @Backlink annotation is on an Iterable collection in order to achieve ToMany relations like OneToMany and ManyToMany.Here in the documentation you can find additional explanations about Backlinks.",
"username": "Desislava_St_Stefanova"
}
] | Relationship in realms | 2023-08-17T03:10:51.363Z | Relationship in realms | 477 |
null | [] | [
{
"code": "",
"text": "Hi, I follow the setup procedure as usual, for install MongoDB 7.0 on Debian 12. When I do apt update, apt say The repository “MongoDB Repositories bookworm/mongodb-org/7.0 Release” does not have a Release file. so I’m unable to install !!some forum say to use Ubuntu setup. I can than install everything except mongodb-org-tools ! so crazy !I’ve been using mongodb for years, I installed version 4.4, version 6… but this version can’t be installed.",
"username": "Giovanni_Manzoni"
},
{
"code": "",
"text": "If the releases file for bookworm is not there then head over to jira.mongodb.com and log a bug.The docs are released for 7.0 and say it is supported.The tools issue should now be resolved as there was a post for a new release of tools.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Debian 12: The repository does not have a Release file | 2023-08-16T17:00:01.156Z | Debian 12: The repository does not have a Release file | 1,033 |
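Once a bookworm repository is published, the sources entry should look roughly like the line below (it mirrors the documented Ubuntu/jammy setup; verify the exact path against the official install docs before using it).

```sh
echo "deb [ signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/debian bookworm/mongodb-org/7.0 main" | \
  sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
sudo apt-get update
sudo apt-get install -y mongodb-org
```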
null | [
"golang",
"database-tools",
"backup"
] | [
{
"code": "database-tools",
"text": "We are pleased to announce version 100.8.0 of the MongoDB Database Tools.This version uploads the MongoDB Database Tools to the MongoDB Linux v7.0 repos.The Database Tools are available on the MongoDB Download Center. Installation instructions and documentation can be found on docs.mongodb.com/database-tools. Questions and inquiries can be asked on the MongoDB Developer Community Forum. Please make sure to tag forum posts with database-tools. Bugs and feature requests can be reported in the Database Tools Jira where a list of current issues can be found.ReleaseBug",
"username": "Rohan_Sharan"
},
{
"code": "",
"text": "@Rohan_SharanThere might be a few mongodb v7 install topics that had dependency issues that could use a follow up due to this.",
"username": "chris"
}
] | Database Tools 100.8.0 Released | 2023-08-16T20:47:13.260Z | Database Tools 100.8.0 Released | 523 |
null | [
"python",
"time-series"
] | [
{
"code": "",
"text": "Im creating a program that is fetching data from an API to display that data in an easy to read format but I also want to save that data for later use.The API gives me a random set of numbers and a few other True/False fields plus a timestamp. I was looking into using timeseries to sort the data, but after reading about it I found some forum posts that said timeseries sort could be quite slow for very large amounts of documents. (For reference the API gives out data every 2-2.5 minutes and goes back a few years, so it would be roughly 1 million documents (But I will definitely not end up with that many)) What would be a way to sort loads of data by its timestamp quickly.[I am new to the forums so apologies if this is in the wrong place]",
"username": "Catotron_N_A"
},
{
"code": "",
"text": "Hey @Catotron_N_A,Welcome to the MongoDB Community!The API gives me a random set of numbers and a few other True/False fields plus a timestamp.May I ask how big are the documents on average, and what it looks like? Could you share a sample of it?I was looking into using time series to sort the data, but after reading about itWhen inserting data into a time series collection in MongoDB, the order of the document timestamps impacts performance and query efficiency.Inserting in random timestamp order can cause many more buckets to be created than necessary. This is because one of the time series collection’s assumptions is that you’ll be inserting your data in a regular interval in a monotonically increasing time order.To read more about the bucketing pattern, please refer to this post.What would be a way to sort loads of data by its timestamp quickly?Based on my understanding you can consider:In case you need further help, feel free to reach out to us.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Should I use Timeseries or another way to sort massive amounts of data | 2023-08-10T04:23:57.876Z | Should I use Timeseries or another way to sort massive amounts of data | 580 |
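A sketch of the time series setup the answer above alludes to, for readings arriving every couple of minutes; the collection and field names are placeholders.

```js
db.createCollection("readings", {
  timeseries: {
    timeField: "timestamp",
    metaField: "source",
    granularity: "minutes"
  }
});

// A secondary index on the time field supports fast range queries and sorts:
db.readings.createIndex({ timestamp: 1 });
```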
null | [
"time-series",
"flutter"
] | [
{
"code": "",
"text": "in flutter, i have\nvar date = DateTime.now().toUtc();\nto print it out in string it shows like this (2023-07-12 09:41:03.048759Z)\nwhich create a datetime object with current UTC time.I need to insert this var date into mongodb timeseries collection which expecting a Date object instead.The inserting always failed as the format is not right?Anyone has a solution for this??",
"username": "Shuai_Aaron_Shaw"
},
{
"code": "date",
"text": "Hi @Shuai_Aaron_Shaw,Welcome to the MongoDB Community!in flutter, i have\nvar date = DateTime.now().toUtc();\nto print it out in string it shows like this (2023-07-12 09:41:03.048759Z)\nwhich create a datetime object with current UTC time.Could you confirm whether you are using Realm Flutter SDK or something else?I need to insert this var date into mongodb timeseries collection which expecting a Date object instead.\nThe inserting always failed as the format is not right?Could you please provide the code snippet where you are trying to insert the date variable into the MongoDB time-series collection? Additionally, could you share the error message you are encountering? Also, let us know the MongoDB version you are using.Look forward to hearing from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "hey thanks for the reply, im not using realm flutter sdk, im posting data directly from flutter to mongodb atlas data api, error is timestamp date field is not correct, i have no issues inserting timestamp as utc string to a non-timeseries collection",
"username": "Shuai_Aaron_Shaw"
},
{
"code": "ISODate()Date()",
"text": "Hey @Shuai_Aaron_Shaw,Apologies for the late response.var date = DateTime.now().toUtc();\nto print it out in string it shows like this (2023-07-12 09:41:03.048759Z)It seems like the date string is missing a “T” separator between the date and time portions. MongoDB expects dates in ISO-8601 format with a distinct separator.A couple of ways you can consider fixing the date formatting before inserting:I came across the “toIso8601String” method in the DateTime class in Dart’s API documentation. This method might be helpful to you.In case you need further help please share the code snippet that you are using to post the data in the TS collection, as well as the exact error message you are encountering. This information will help us assist you more effectively.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Question on Flutter with Mongodb Timeseries Date Field | 2023-07-12T09:41:57.166Z | Question on Flutter with Mongodb Timeseries Date Field | 772 |
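To make the suggestion above concrete: the Data API accepts Extended JSON, so a BSON Date can be sent as a $date wrapper around an ISO-8601 string. The data source, namespace and field names below are placeholders.

```js
// Illustrative insertOne request body for the Atlas Data API:
const body = {
  dataSource: "Cluster0",
  database: "mydb",
  collection: "readings",
  document: {
    value: 42,
    timestamp: { "$date": new Date().toISOString() } // e.g. "2023-07-12T09:41:03.048Z"
  }
};
```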
null | [
"aggregation",
"charts"
] | [
{
"code": "",
"text": "Mongo Chart is awesome.However, sometimes the data query we would like to visualize is very complex, and we are unable to get it done through the charts UI.I do notice the ability to “View aggregation pipeline”, so I am wondering whether there’s a way for us to actually write my own aggregation pipeline query, and then use Mongo Chart to display that?Thank you!",
"username": "williamwjs"
},
{
"code": "",
"text": "Thanks @williamwjs - you most certainly can do this! Simply enter your aggregation pipeline (make sure it’s in square brackets) into the query bar in the chart builder. You can then drag and drop the fields returned from your query onto your chart channels.Tom",
"username": "tomhollander"
}
] | Write my own query for Mongo Charts | 2023-08-16T19:11:52.137Z | Write my own query for Mongo Charts | 469 |
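For illustration, what gets pasted into the Charts query bar is just an ordinary aggregation pipeline array; the stages and field names here are made up.

```js
[
  { $match: { status: "complete" } },
  { $group: { _id: "$category", total: { $sum: "$amount" } } }
]
```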
null | [] | [
{
"code": "",
"text": "actually I am facing same issue. I am a noobie to web development. I am trying to learn and implement. I am using NextJs 13.4.12 and TypeScript. I’m getting the AtlasError:8000 and buffering timeout on my query. I tried await mongoose.connect(process.env.DB_URL||“mongodb://localhost:27017/projectname”,{\nuseNewUrlParser = true,\nuseUnifiedTopology = true,\n})\nbut I am getting the error as well. How can I fix it. I am seeking guidance through out. TIA",
"username": "Hrittik_Bhattacharjee"
},
{
"code": "process.env.DB_URL",
"text": "Hey @Hrittik_Bhattacharjee,Welcome to the MongoDB Community!AtlasError:8000 and buffering timeout on my queryCould you please double-check that you have defined a database user with the proper roles and privileges in MongoDB Atlas?\n\nimage949×409 56.7 KB\nPrint/log the full MongoDB connection URI from process.env.DB_URL and verify it is using the correct hostname, username, password, database name, etc.Make sure your IP address is whitelisted in the Network Access control panel in MongoDB Atlas. Test with Telnet that you can connect to the Atlas cluster on port 27017 from your current public IP.Try connecting with the mongo shell first to isolate issues with the Node.js driver.In case the issue persists please share a code snippet, including the require/import of the driver, the connection URI, and any error handling. Please obfuscate any sensitive credentials in the connection string, but leave the schema and hostname intact. This will help diagnose where things might be going wrong.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | AtlasError:8000 and buffering timeout on my query | 2023-08-14T10:03:21.482Z | AtlasError:8000 and buffering timeout on my query | 312 |
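Alongside the Atlas checks above, note that the options object in the question uses = instead of :, and recent Mongoose versions no longer need those options at all. A minimal sketch, assuming DB_URL holds the full Atlas SRV string:

```js
import mongoose from "mongoose";

// useNewUrlParser / useUnifiedTopology are no-ops on Mongoose 6+ and can be dropped.
await mongoose.connect(process.env.DB_URL || "mongodb://localhost:27017/projectname");
```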
null | [
"aggregation",
"compass"
] | [
{
"code": "",
"text": "I know that starting from Compass 1.21 saved aggregation pipelines are moved to filesystem as stated here, but I was not able to find where!!\nWhat am I missing?",
"username": "Sergio_Ferlito1"
},
{
"code": "",
"text": "Finally solved!!\nUsing:sudo strace -f -t -e trace=file -p COMPASS_PIDIs possible to trace files opened by Compass Process ID (COMPASS_PID above).\nIn this way I found that aggregation pipelines are saved by Compass as json file inside path:/home/<current_user>/.config/MongoDB\\ Compass/SavedPipelines/61b1c7079f9d09c1caf3b3c5.jsonThis, at least, in Kubuntu 21.10.",
"username": "Sergio_Ferlito1"
},
{
"code": "~/Library/Application Support/MongoDB Compass/SavedPipelines%APPDATA%/MongoDB Compass/SavedPipelines",
"text": "Hi @Sergio_Ferlito1! Thank you for asking this question and good job with finding the answer! That’s indeed where pipelines are saved for Linux.For completeness: on macOS, the location of the saved aggregations is ~/Library/Application Support/MongoDB Compass/SavedPipelines, on Windows the location is %APPDATA%/MongoDB Compass/SavedPipelines.I am curious: can you share a bit more about your use cases for wanting to access the files directly? What do you do with them? Happy to schedule a quick call to go through your workflow with developing aggregations in Compass.",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "Thank you providing the location for different OS.\nI can’t speak for the OP, but I found this post via internet search because I need to backup and change laptop, so knowing where it is saved helps.",
"username": "cequencer"
}
] | Where are aggregation pipelines saved in Compass 1.28? | 2021-12-09T08:47:33.755Z | Where are aggregation pipelines saved in Compass 1.28? | 3,573
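For anyone backing up saved aggregations before switching machines (as in the last reply), copying the directory named above is enough. A small shell sketch using the paths from this thread:

```sh
# Linux
cp -r ~/.config/"MongoDB Compass"/SavedPipelines ~/compass-pipelines-backup

# macOS
cp -r ~/Library/"Application Support"/"MongoDB Compass"/SavedPipelines ~/compass-pipelines-backup
```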
null | [
"queries",
"golang"
] | [
{
"code": "package main\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"time\"\n\n\t\"go.mongodb.org/mongo-driver/bson\"\n\t\"go.mongodb.org/mongo-driver/bson/primitive\"\n\t\"go.mongodb.org/mongo-driver/mongo\"\n\t\"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\ntype User struct {\n\tId primitive.ObjectID `bson:\"_id,omitempty\"`\n\tIdentity string `bson:\"Identity,omitempty\"`\n\tName string `bson:\"Name,omitempty\"`\n\tAge int `bson:\"Age,omitempty\"`\n}\n\ntype Log struct {\n\tId primitive.ObjectID `bson:\"_id,omitempty\"`\n\tCreatedAt time.Time `bson:\"CreatedAt,omitempty\"`\n\tCreatedBy string `bson:\"CreatedBy,omitempty\"`\n\tModelValue interface{} `bson:\"ModelValue,omitempty\"`\n}\n\nfunc main() {\n\tresults := []*Log{}\n\n\tuser := new(User)\n\tuser.Id = primitive.NewObjectID()\n\tuser.Identity = \"E00000001\"\n\tuser.Name = \"test user\"\n\n\tlog := new(Log)\n\tlog.CreatedAt = time.Now()\n\tlog.ModelValue = user\n\tlog.CreatedBy = \"E00000SYS\"\n\n\tctx := context.Background()\n\tmongoClient, err := mongo.Connect(context.Background(), options.Client().ApplyURI(\"mongodb://localhost:27018\"))\n\tif err != nil {\n\t\tfmt.Println(\"Create client failed: \", err)\n\t}\n\n\tmongoColl := mongoClient.Database(\"mongo_test\").Collection(\"Log\")\n\t_, err = mongoColl.InsertOne(ctx, log)\n\tif err != nil {\n\t\tfmt.Printf(\"Create document with error: %v\\n\", err)\n\t}\n\n\tcursor, _ := mongoColl.Find(context.Background(), bson.M{})\n\tcursor.All(ctx, &results)\n\tfor _, v := range results {\n\t\tfmt.Println(v.ModelValue)\n\t}\n}\n[{_id ObjectID(\"64128ad91f13069952432f7f\")} {Identity E00000001} {Name test user}]\nmap[_id ObjectID(\"64128ad91f13069952432f7f\") Identity E00000001 Name 'test user']\n",
"text": "Query results (with bson.D):Expected:How can I return the ModelValue as bson.M instead of bson.D by default?Thanks",
"username": "chengkun_kang"
},
{
"code": "Clientbson.MDefaultDocumentMmongoClient, err := mongo.Connect(\n\tcontext.Background(),\n\toptions.Client().\n\t\tApplyURI(\"mongodb://localhost:27018\").\n\t\tSetBSONOptions(&options.BSONOptions{\n\t\t\tDefaultDocumentM: true,\n\t\t}))\n// ...\n",
"text": "@chengkun_kang thanks for the question!Starting in Go Driver v1.12.0, you can use SetBSONOptions to override the default BSON marshal and unmarshal behavior for a Client.For example, to always unmarshal to bson.M when there is no type information, set DefaultDocumentM to true:",
"username": "Matt_Dale"
}
] | Object field returns slice/array after query from DB (6.0.4 Community Edition) | 2023-03-16T03:30:11.718Z | Object field returns slice/array after query from DB (6.0.4 Community Edition) | 1,038
null | [] | [
{
"code": "# network interfaces \nnet:\n port: 27017\n bindIp: 0.0.0.0\n tls:\n mode: requireTLS\n certificateKeyFile: /etc/ssl/Cert.pem\n{\"t\":{\"$date\":\"2023-08-14T09:20:54.869+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-08-14T09:20:54.869+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-08-14T09:20:54.869+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-08-14T09:20:54.874+02:00\"},\"s\":\"E\", \"c\":\"NETWORK\", \"id\":23251, \"ctx\":\"-\",\"msg\":\"Cannot read PEM key\",\"attr\":{\"keyFile\":\"/etc/ssl/Cert.pem\",\"error\":\"error:00000000:lib(0)::reason(0)\"}}\n{\"t\":{\"$date\":\"2023-08-14T09:20:54.874+02:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":20574, \"ctx\":\"-\",\"msg\":\"Error during global initialization\",\"attr\":{\"error\":{\"code\":140,\"codeName\":\"InvalidSSLConfiguration\",\"errmsg\":\"Can not set up PEM key file.\"}}}\n",
"text": "Hey,I do have a RapidSSL Certificate (PEM File) and want to use it to encrypt my MongoDB Connection. MongoDB (Standalone) Server is installed on a Ubuntu 22.04 machine.But once I try to start it using the certificate using this configuration:I´m getting the following error:I´ve checked the certficiate using openssl and it works. Permissions are also set (tried using 777)",
"username": "da_K"
},
{
"code": "140,\"codeName\":\"InvalidSSLConfiguration\",\"errmsg\":\"Can not set up PEM key file.\"}}}",
"text": "140,\"codeName\":\"InvalidSSLConfiguration\",\"errmsg\":\"Can not set up PEM key file.\"}}}You should not give 777\nIt should have just read permissions\nGive 400 and see if it works",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "@Ramachandra_Tummala wasn´t anything like that.I´ve checked the certificate file and the CSR was also included. I´ve removed the CSR Code from the certificate and then it worked.",
"username": "da_K"
}
] | TLS Error when Starting MongoDB | 2023-08-14T07:26:27.828Z | TLS Error when Starting MongoDB | 559 |
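The fix in this thread was removing the CSR block from the PEM. As a rough checklist (file names are examples, and the openssl key check assumes an unencrypted RSA key), the combined PEM that mongod expects should contain only the private key and the certificate:

```sh
# Build the combined PEM from the private key and the issued certificate
cat server.key server.crt > /etc/ssl/Cert.pem

# List the blocks in the file; there should be no "CERTIFICATE REQUEST" block
grep "BEGIN" /etc/ssl/Cert.pem

# Sanity-check the certificate and the key
openssl x509 -in /etc/ssl/Cert.pem -noout -subject -dates
openssl rsa  -in /etc/ssl/Cert.pem -check -noout

# Make it readable by the mongod user only
sudo chown mongodb:mongodb /etc/ssl/Cert.pem
sudo chmod 400 /etc/ssl/Cert.pem
```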
null | [] | [
{
"code": "",
"text": "Hello friends.\nWhere do I get the Network Access settings of my mongodb account?",
"username": "Edwin_Tools"
},
{
"code": "",
"text": "When you sign into cloud.mongodb.com and view your MongoDB atlas cluster it’s in the nav bar on the left\nimage1901×654 26.8 KB\n",
"username": "tapiocaPENGUIN"
}
] | Network Access Settings Location | 2023-08-16T17:02:12.612Z | Network Access Settings Location | 371 |
null | [
"upgrading"
] | [
{
"code": "",
"text": "Hi guysSorry guys but am new to Mongodb.\nI got an issue right now, I need to upgrade mongodb from 4.0.16 to at least 5.x from our production server.\nIdeally I would create a test server with the same settings / versions / data to upgrade it as test. Unfortunatly I am unable to download\nversion 4.0.16 as its deprecated. MongoDB is been used by Graylog and since I need to upgrade both I want to make sure its is done properly\nwithout loosing any of the settings. Does anyone have a suggestion?Thanks\nPerry",
"username": "Perry_Santos"
},
{
"code": "",
"text": "https://www.mongodb.com/download-center/community/releases/archiveDownloads for archived releases.",
"username": "chris"
},
{
"code": "",
"text": "Sorry I found that link after I posted, thanks again",
"username": "Perry_Santos"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Upgrading from 4.0.16 to 5.x | 2023-08-15T17:48:32.337Z | Upgrading from 4.0.16 to 5.x | 413 |
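Note that the upgrade has to pass through each intermediate major release (4.0 → 4.2 → 4.4 → 5.0), upgrading the binaries and then bumping the feature compatibility version at each hop. A shell sketch of the FCV part, best run against the test server first as the question suggests:

```js
// Check the current FCV before and after each binary upgrade
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// After the 4.0 -> 4.2 binaries are upgraded and healthy:
db.adminCommand({ setFeatureCompatibilityVersion: "4.2" })

// Repeat with "4.4" after the 4.4 binaries, then "5.0" after the 5.0 binaries
db.adminCommand({ setFeatureCompatibilityVersion: "5.0" })
```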
null | [
"dot-net"
] | [
{
"code": "",
"text": "I have an SPA based on Angular and an ASP .NET Core API. My app uses data stored on Mongo Db Atlas.I would like to authentify my users. I thought about using the Mongo Db Atlas App Services. That means that my users would be stored on Mongo Db side.Is it a good idea to delegate this authentication part to MongoDb or could it be done differently ? In fact, I want to use the MFA, mailing & monitoring part already handled by Mongo Atlas and I cannot seem to find any other solution.I have already tried ASP .NET Core Identity but it seems so “hand-crafted” especially for MFA. Do you have any ideas, please ? Thanks",
"username": "Mary_Be"
},
{
"code": "",
"text": "Hello @Mary_Be ,Welcome to The MongoDB Community Forums! Delegating user authentication to MongoDB Atlas App Services can be a good idea, especially if you’re already using MongoDB Atlas for your data storage and want to take advantage of the MFA, mailing, and monitoring features.You can secure App Services Apps with built-in user management. With the built-in user management of App Services, only authorized users can access your App. You can delete or disable users, and revoke user sessions. Users can log in with:You can enable one or more authentication providers in the App Services backend, and then implement them in your client code. Please check below resources to learn more about Authenticating a User.You can also add an OAuth and OpenID Connect layer on top of your existing database of users. Please referConfigure MongoDB as an Auth0 custom database to simplify user migration or just add OAuth/Open ID Connect.App Services keeps a log of application events, records metrics that summarize your App’s usage and performance, and publishes notifications to your Atlas project’s activity feed. Please check below link to learn more about this.I hope this helps! \nLet me know if you need any help or have any queries/facing issues with implementation.Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Authentication for SPA & API with Mongo DB Atlas | 2023-07-31T09:13:57.358Z | Authentication for SPA & API with Mongo DB Atlas | 599 |
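A rough sketch of what the client side can look like with the Realm Web SDK (npm package realm-web) when the email/password provider is enabled in App Services. The App ID and function name are placeholders, and the exact registerUser signature may differ slightly between SDK versions:

```ts
import * as Realm from "realm-web";

const app = new Realm.App({ id: "<your-app-services-app-id>" });

export async function registerAndLogin(email: string, password: string) {
  // Requires the email/password authentication provider to be enabled in App Services
  await app.emailPasswordAuth.registerUser({ email, password });

  const user = await app.logIn(Realm.Credentials.emailPassword(email, password));
  // user.accessToken could then be forwarded to the ASP.NET Core API and validated there
  return user;
}
```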
null | [] | [
{
"code": "",
"text": "I have seen articles for the opposite but this is something I have struggled to find.\nI have data in S3. I can query it on atlas using a federated database.Here was my plan:I wanted to ask if there is some bookmark sort of feature where we can know where to copy from and to. My time stamp variable is not distinct enough to use it.Any help on this will be much appreciated. I am really stuck.\nThank you in Advance!",
"username": "Anirudh_S"
},
{
"code": "",
"text": "Hey @Anirudh_S ,As you noticed, you can definitely write data from S3 to Atlas on a recurring basis using Data Federation.I think there are probably a few different ways that you could approach this, but I think it depends on how the data is structured in yourself bucket. Some things that come to mind might be a timestamp or tag on the object itself, or maybe having the file deleted or moved after it has been imported with some other function.I think it might be easiest to brainstorm on a call though, would you like to put some time on my calendar here: Calendly - Benjamin FlastBest,\nBen",
"username": "Benjamin_Flast"
},
{
"code": "",
"text": "@Benjamin_Flast I kind of faced a similar problem where we tons of s3 docs in json formats and trying to write to the mongodb. Any solutions or best practices suggested would be great.",
"username": "Raj_Alamuri"
}
] | Automate continuous copying from s3 to mongodb | 2022-11-07T11:00:36.396Z | Automate continuous copying from s3 to mongodb | 1,087 |
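One pattern that fits this thread is scheduling an aggregation against the federated database that copies only new documents into an Atlas cluster with $merge. The sketch below is illustrative: the federated/target names are placeholders, the $match boundary stands in for whatever "bookmark" you track (a date folder in the S3 path, a timestamp field, etc.), and the exact $merge-into-Atlas syntax should be checked against the current Data Federation docs:

```js
db.getSiblingDB("s3FederatedDb").sensorFiles.aggregate([
  // Copy only documents newer than the last run (placeholder boundary)
  { $match: { timestamp: { $gte: ISODate("2022-11-01T00:00:00Z") } } },
  {
    $merge: {
      into: { atlas: { clusterName: "Cluster0", db: "appDb", coll: "importedDocs" } },
      on: "_id",
      whenMatched: "keepExisting",
      whenNotMatched: "insert"
    }
  }
])
```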
null | [
"dot-net",
"atlas-device-sync",
"transactions"
] | [
{
"code": "could not decode next message: error reading body: failed to read: read limited at 16777217 bytes16777217",
"text": "We have a .NET Xamarin.Forms app that utilizes .NET Realm SDK to sync the app to Realm cloud using partition-based synchronization.We are using M20 Dedicated Cluster for Realm sync with the global deployment model.Some of our customers started experiencing the following sync errorcould not decode next message: error reading body: failed to read: read limited at 16777217 bytesWhen I looked for that magic 16777217 number in forums I found several threads without resolution.As I can understand there’s a 16MB limit somewhere. We are not using images anywhere in the sync process. However, some of our customers have large databases. And we do have a logic of “sharing” the database which essentially is a single transaction that copies the data to the target user’s database and then the Realm Sync picks up and syncs that DB in the background.I could, in theory, split the data into chunks and use several transactions, but I do not know how to ensure that for an even larger dataset, those individual transactions would not exceed 16MB. I think this issue needs to be addressed at lower levels than simply creating more transactions.What are best practices in scenarios like this? How do we ensure that we have fluent background synchronization even for the largest databases that could in theory exceed gigabytes?",
"username": "Gagik_Kyurkchyan"
},
{
"code": "",
"text": "Can you file a support ticket for this? The support team will be best suited to advise you based on your use case. It’s also possible that you’re hitting some corner case that should be fixed on our end rather than having to modify your usage pattern and they have the tools to recognize that.",
"username": "nirinchev"
},
{
"code": "",
"text": "Hi @Gagik_Kyurkchyan,As I can understand there’s a 16MB limit somewhere.Yes, there is one, and is well documented: a single document in MongoDB cannot exceed the total size of 16777216 bytes. There’s GridFS if you need to, you can find more information in the docs or our Knowledge Base, however GridFS isn’t supported by Realm at this time.Therefore, Realm isn’t limited to have databases smaller than that, but single documents need to be: there are also advantages in this approach (if, as it regularly happens in mobile, you have a huge transaction that fails half-way, the only possible way to recover is to start the transaction again - and you probably don’t want to waste GB of mobile data over re-attempts).I could, in theory, split the data into chunks and use several transactions, but I do not know how to ensure that for an even larger dataset, those individual transactions would not exceed 16MB. I think this issue needs to be addressed at lower levels than simply creating more transactions.Can you please explain your use case in detail? As illustrated above, keeping all data in massive documents isn’t common nor advised on mobile, you may be better served with a different architecture.As advised by @nirinchev, a Support case would be better, so that we can discuss details you may not be willing to share in a public forum.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "@nirinchev and @Paolo_Manna thanks for getting back and sharing these details.We do not have a single document that is larger than 16mb. We may have scenarios when a single transaction is larger than 16MB, and from what I can see a single transaction in Realm is represented by a Single BSON document and that’s the culprit of the issue and that’s the reason there can’t be a transaction that is larger than 16MB.The use case we have is the following. We have a B2B Realm application. In the app they can have both a synced database that uses partition-based sync, and they can have an offline database with the same schema. The users have the possibility of creating a “Backup” of their Realm database. A backup of a database is simply an offline Realm database into which we copy all of the data from the source database. The copy is performed manually entity by entity bases.At some point, the users can either share that backup with their colleagues or restore it and overwrite their existing database. Overwriting means deleting all of their existing data and copying over the data we had in the backed-up offline database into their Realm-synced database. And this is where the issue occurs. We do the copy in a single transaction right now. Now I can see that it’s not a good idea if the database is large and we probably need to split the restoration process into multiple transactions.As for raising a support ticket, I will be happy to do so. Where can I do so? I never did that before. I tried to use support.mongodb.com but I receive the following errorTo access the MongoDB Support Portal, please ensure that you are a member of a supported customer project.",
"username": "Gagik_Kyurkchyan"
},
{
"code": "",
"text": "Hi, I believe the error you are running into is actually that we require the WebSocket library to limit messages to 16MB. Interestingly we came across your application while looking through errors occurring in production and are in the process of discussing raising this limit. I will keep you in the loop on what we decide there but having limits on these kinds of things is important for the system to function properly and reliably.This is happening because you have a single realm transaction that is > 16MB in size when compressed. Ideally, you can chunk out your writes to be a bit smaller and Realm and Device Sync will ensure that smaller transactions are batched up to increase throughput, but unfortunately, we cannot break up a transaction (as that would break the concept of a transaction).Therefore, the best option for you is to try to update your application to break up these large transactions into smaller ones. In the meantime, we will discuss raising this limit and keep you in the loop. As one added point, having such large transactions is likely to cause issues later on in the upload integration, especially on undersized clusters as we need to apply all of these changes in a single MongoDB transaction.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "@Tyler_Kaye thanks, that makes sense, and I do agree that a single web request should not be that large. In fact, I had an assumption that a single transaction gets “streamed” somehow magically, and if I knew before what I know now I wouldn’t create a single transaction for a full database copy.I am going to work on splitting these transactions into chunks. The strategy I will take is creating “batches” per entity small enough so I will never face this issue again, even if the database is humongous. We will do some load tests and I think we should be good.Appreciate everybody’s support.",
"username": "Gagik_Kyurkchyan"
},
{
"code": "",
"text": "As for raising a support ticket, I will be happy to do so. Where can I do so? I never did that before. I tried to use support.mongodb.com but I receive the following errorFrom your Atlas Project page, you have a Support tab where you can activate it: there’s a free trial you can take advantage of, if you never had a Support contract.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "@Gagik_Kyurkchyan sounds good. We have discussed adding some API that allows us to define split points and break up transactions as we see fit, but right now the API of the realm is that it is a Transaction, and that is a defined boundary that we cannot safely split up without breaking how users might expect changes to be replicated to devices.Best,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Customers experience Realm sync error due to read limit of 16777217 bytes | 2023-08-16T07:53:55.097Z | Customers experience Realm sync error due to read limit of 16777217 bytes | 514 |
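For others splitting a large restore into smaller transactions, here is a rough C# sketch of the chunking idea discussed above. The Item model, its properties, and the batch size are placeholders, and both realms are assumed to be already opened:

```csharp
using System.Linq;
using Realms;

// Copies Item objects from the backup realm into the synced realm in bounded batches.
static void RestoreInBatches(Realm backupRealm, Realm syncedRealm, int batchSize = 500)
{
    var source = backupRealm.All<Item>().ToList();

    for (var offset = 0; offset < source.Count; offset += batchSize)
    {
        var batch = source.Skip(offset).Take(batchSize).ToList();

        // One bounded write transaction per batch instead of one huge transaction
        syncedRealm.Write(() =>
        {
            foreach (var item in batch)
            {
                syncedRealm.Add(new Item { Id = item.Id, Name = item.Name }, update: true);
            }
        });
    }
}
```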
null | [] | [
{
"code": "",
"text": "Hi!As part of the MongoDB for Startups program, I’ve had the pleasure of working with hundreds of founders, CTO’s and Developers to help them transform their ideas into functioning businesses. Nearly every startup with which I’ve worked has a unique set of challenges but they all have one thing in common: Financing the dream can be a challenge.To help with this, many startups turn to VC’s, accelerators and tech companies to receive resources at reduced cost or for free. I’ll be delivering a talk this May at MongoDB World called Zero to Live in 45: Build and launch your startup with free and low-cost services and I’d love to learn about the resources you’re using to build your startup.I’m curious about your journey… what does your stack look like? Are you using any freemium, or low-cost options to help you launch?If you’re in the MongoDB for Startups program - what do you enjoy about it? What features of Atlas are you using?If you’re not in the program? Why not?I appreciate any experience you’re willing to share!Regards,\nMikeP.S. Here are just a few of the free offerings I’ve seen - maybe you use some of these? If so, let me know!Hosting/InfrastructureCRM SystemsEmail/MarketingDomain NamesAnalytics",
"username": "Michael_Lynn"
},
{
"code": "",
"text": "Using:Just now re-writing my SaaS to use AWS serverless technologies to run the application with S3 and the MongoDB backend Apollo Graphql layer with AWS Lambda.\nCurrently it is run on ECS Fargate service inside a VPC connected to my M10 cluster. This should significantly reduce my cost further especially as it scales to more clients.",
"username": "Natac13"
},
{
"code": "",
"text": "This sesion will be really helpful for someone like me who are just at the begining stage with their startup especially bootstraping it.Currently I am using:FreemiumsAWS Free tier for very small projectsAtlas free tier M0Let’s Encrypt for free SSL certificateTrello for task management and stuff for business as well as personalPostman for API’sDraw.io (I use it rarely for making models & diagrams)VSCode as IDEZoom for Meeting & Conferences (Inititally subscribed for free plan)GithubPremiumsZoom for Meeting & Conferences ($15)Termius for SSH, SFTP, Port Forwarding best experience ever ($8.33)Hubstaff for time tracking and productivity monitoring tool, It includes billing, invoicing, timesheets and lots of amazing featuresGoogle AnalyticsGSuit for maling, Google docs, Google slides for documents and presentationCurrently, This suffice most of the need and I think it would work well for startup having 50 employees upto 100 I feel.I am not in the startup program yet but I am planning to be in there within 2 months.",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Atlas free tier M0 only supports maximum 100 concurrent connections.\nSo it is not sufficient for the production. For Startup organizations it will add more cost if they switch to M10, M20 because they are not sure their future traffic.\nIs there any other Instance or free service available from Mongo Atlas for startup ?",
"username": "Ajay_P"
},
{
"code": "0.0.0.0",
"text": "For Startup organizations it will add more cost if they switch to M10, M20 because they are not sure their future traffic.I am in this exact situation. However the cost of M10 far out weighs the fact that I do not have to manage the database myself. I am only a one-man operation.Not being sure of future traffic is I think the reason Mongo allows us to scale the Atlas Cluster.Also M10 has a max of 750 connections. And Mongo recommends it only for Low traffic applications (which is what I am currently on)M10+ are the only tiers that offer VPC peering as well which is essential so that I do not have to whitelist 0.0.0.0 as an IP using Lambda or Fargate on AWS to connect to the DB.Although it would be very nice for us users to have a completely free options for Atlas to use in a production environment; but how can we expect MongoDB, as a company, to operate without any revenue?On that note Mongodb now has a program that startups can apply to. Which is offering $3000 is Atlas credits! Very exciting!",
"username": "Natac13"
}
] | What free/low-cost resources are you using to help build your startup? | 2020-02-13T14:46:04.625Z | What free/low-cost resources are you using to help build your startup? | 6,403 |
null | [
"flutter",
"schema-validation"
] | [
{
"code": "@RealmModel()\nclass _MatchFormSettingsSchema {\n @PrimaryKey()\n @MapTo('_id')\n late ObjectId id;\n\n late List<_Question> questionsArray;\n}\n\n\n@RealmModel(ObjectType.embeddedObject)\nclass _Question {\n late String input;\n\n late String type;\n}\n\n final appConfig = AppConfiguration(appID);\n final app = App(appConfig);\n\n final user = await app.logIn(Credentials.anonymous());\n final realmConfig = Configuration.flexibleSync(\n user,\n [\n MatchSchema.schema,\n MatchFormSettingsSchema.schema,\n\n ],\n );\n\n late Realm realm;\n if (await isDeviceOnline()) {\n realm = await Realm.open(realmConfig);\n } else {\n realm = Realm(realmConfig);\n }\nException has occurred.\nRealmException: Message: Schema validation failed due to the following errors:\n- Property 'MatchFormSettingsSchema.questionsArray' of type 'array' has unknown object type 'Question')\n",
"text": "Hello, I’m trying to setup sync with a new Schema called MatchFormSettingsSchema. This is the code for that schema:However when initializing my flutter App I get this error while creating my Realm Object:Error:",
"username": "Juan_Pablo_Gutierrez"
},
{
"code": " [\n MatchSchema.schema,\n MatchFormSettingsSchema.schema,\n\n ],\n",
"text": "Hi @Juan_Pablo_Gutierrez,Could you please also share the schema you have defined in your App Services App on the Atlas Portal? Alternatively, you can also share the App ID, and/or send it with a direct message, so that we could check.At a first glance:You didn’t include Question there: you can either add it, or just avoid to define a schema list altogether, so that Realm will figure it out by itself.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Thanks for answering, it was in fact that I was not defining the Question Schema.Thanks!",
"username": "Juan_Pablo_Gutierrez"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Schema validation failed: property 'MatchFormSettingsSchema.questionsArray' of type 'array' has unknown object type 'Question' | 2023-08-11T18:55:49.327Z | Schema validation failed: property ‘MatchFormSettingsSchema.questionsArray’ of type ‘array’ has unknown object type ‘Question’ | 559 |
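For anyone hitting the same error, the fix amounts to listing the embedded model's generated schema as well (or omitting the explicit list entirely). A sketch based on the code in the question:

```dart
final realmConfig = Configuration.flexibleSync(
  user,
  [
    MatchSchema.schema,
    MatchFormSettingsSchema.schema,
    Question.schema, // the embedded object that was missing from the list
  ],
);
```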
null | [
"aggregation",
"node-js"
] | [
{
"code": "let getStatus = (flag) => {\n return flag=='ok' ? 'ok' :'broken';\n}\naggregate({\n $project: {\n '_id': 1,\n 'status': getStatus($flag3)\n }\n});\n",
"text": "Can I use a normal javascript function inside the aggregation pipeline. My query is not about $function operator. I am using cosmosdb for mongo API, which doesn’t support the $function. In one of the stack overflow thread I saw direct usage of function like belowRef: node.js - Call function inside mongodb's aggregate? - Stack OverflowIs it possible ?",
"username": "sandeep_s1"
},
{
"code": "{ \"$cond\" : [ { \"$eq\" : [ \"$flag3\" , \"ok\" ] , \"ok\" , \"broken\" } ] }\n",
"text": "Why would you want to do that?If your aggregation is really that simple you could simply evaluate $flag3 directly in your application.If you need the value of status further in your aggregation the simplest thing to do is to use $cond with something likein your $project rather than getStatus(…).",
"username": "steevej"
}
] | Adding a normal function in an aggregation pipeline | 2023-08-16T13:02:07.324Z | Adding a normal function in an aggregation pipeline | 248
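Putting the suggested $cond into the original pipeline gives something like the following (field and collection names taken from the question):

```js
db.user.aggregate([
  {
    $project: {
      _id: 1,
      status: { $cond: [ { $eq: [ "$flag3", "ok" ] }, "ok", "broken" ] }
    }
  }
])
```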
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "What I understood is $bucket is useful when embedding 1:S (squillion) relationship exist between two entities or 1:M but document size is much more. Bucketing also help to improve performance of referring specific buckets only.When is it preferred to use $bucket or $bucketauto or $facet?\nFor example, there is an application, which add following sensor records at every 5 minutes:\n{sensor_id : <> , sensor_name:< >, location : <>, timestamp : <>, value:<>}\nOR\n{sensor_id : <> , sensor_name:< >, location : <>,reading:[{timestamp : <>, value:<>},…]}But to create a bucket schema also, such large collection need to be queried. And Insertion of Sensor data is continuous . So, existing bucket schema need to be updated frequently. It shows two drawbacks : i) duplication of data in Denormalized and bucket format both. ii) Overhead of updating bucket schema and that too on continuously increasing collection.Can you explain how and when to use bucket in such situation?",
"username": "Prof_Monika_Shah"
},
{
"code": "",
"text": "It appears that there is some confusion between the bucket design pattern which is a schema pattern for the storage of data as expressed by{sensor_id : <> , sensor_name:< >, location : <>,reading:[{timestamp : <>, value:<>},…]}And the analytical stages $bucket and $bucketAuto which are meant to produce calculated values, sums, averages …, based on your stored data. With $bucket you define value boundaries for the buckets and with $bucketAuto you define the number of documents per buckets.The $facet stage is a completely different beast. It is used when you want 2 or more different processing for the same documents.The $facet stage is another beast.",
"username": "steevej"
}
] | When to use $bucket or $bucketauto | 2023-08-15T16:52:25.283Z | When to use $bucket or $bucketauto | 261 |
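To make the distinction concrete: the bucket design pattern is applied at write time, not by querying the whole collection afterwards. Each new reading is pushed into the currently open bucket for its sensor, and a new bucket document is created automatically when none is open, so there is no duplicated denormalized copy to keep in sync. A sketch using the fields from the question; the per-bucket cap of 200 readings and the hour-sized bucket are arbitrary choices:

```js
db.sensor_readings.updateOne(
  {
    sensor_id: 1234,
    bucket_start: ISODate("2023-08-15T10:00:00Z"), // e.g. the hour the reading falls into
    count: { $lt: 200 }                            // only append while the bucket is not full
  },
  {
    $push: { reading: { timestamp: ISODate("2023-08-15T10:05:00Z"), value: 42.1 } },
    $inc: { count: 1 },
    $setOnInsert: { sensor_name: "temp-1", location: "lab-A" }
  },
  { upsert: true }
)
```

The analytical $bucket/$bucketAuto stages are then only needed when reporting over the stored values, not for ingesting them.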
null | [
"aggregation",
"queries",
"node-js",
"indexes"
] | [
{
"code": "{\n A: 1,\n B: 2\n}\n{\n A: 1,\n C: 2\n}\nuser.aggregate([{\n $match: {\n A: \"house\",\n $or: [ \n { B: \"car\" },\n { C: \"boat\" },\n ]\n } \n }\n])\n$or",
"text": "index:Query:In order to check the $or condition, will mongodb make use of both indexes?EDITED:According to the documentation on mongo’s site, multiple single field can indexes can be used in the same query. However, in my question, both indexes are compound indexes.",
"username": "Big_Cat_Public_Safety_Act"
},
{
"code": "ORdb.foo.drop();\ndb.foo.insertOne([{}]);\ndb.foo.createIndexes([{ A: 1, B: 1 }, { A: 1, C: 1 }]);\ndb.foo.explain(\"executionStats\").aggregate([\n { $match: { \n A: \"house\", \n $or: [ \n { B: \"car\" }, \n { C: \"boat\" } \n ] \n }}\n]);\n{\n \"explainVersion\": \"2\",\n \"queryPlanner\": {\n \"namespace\": \"test.foo\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n [...] \n },\n [...]\n \"winningPlan\": {\n \"queryPlan\": {\n \"stage\": \"FETCH\",\n [...] \n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"planNodeId\": 1,\n \"keyPattern\": {\n \"A\": 1,\n \"B\": 1\n },\n \"indexName\": \"A_1_B_1\",\n [...] \n }\n },\n \"slotBasedPlan\": {\n \"slots\": \"$$RESULT=s12 env: { s4 = 1692189851389 (NOW), s7 = KS(3C686F75736500F0FE04), s1 = TimeZoneDatabase(Atlantic/Canary...America/Hermosillo) (timeZoneDB), s17 = \\\"boat\\\", s11 = {\\\"A\\\" : 1, \\\"B\\\" : 1}, s2 = Nothing (SEARCH_META), s3 = Timestamp(1692189850, 7) (CLUSTER_TIME), s6 = KS(3C686F757365000A0104), s16 = \\\"car\\\" }\",\n \"stages\": \"[2] filter {(traverseF(s14, lambda(l1.0) { ((l1.0 == s16) ?: false) }, false) || traverseF(s15, lambda(l2.0) { ((l2.0 == s17) ?: false) }, false))} \\n[2] nlj inner [] [s5, s8, s9, s10, s11] \\n left \\n [1] cfilter {(exists(s6) && exists(s7))} \\n [1] ixseek s6 s7 s10 s5 s8 s9 [] @\\\"011d24a7-fd1c-442a-9a28-101bcb8732b0\\\" @\\\"A_1_B_1\\\" true \\n right \\n [2] limit 1 \\n [2] seek s5 s12 s13 s8 s9 s10 s11 [s14 = B, s15 = C] @\\\"011d24a7-fd1c-442a-9a28-101bcb8732b0\\\" true false \\n\"\n }\n },\n \"rejectedPlans\": [\n {\n \"queryPlan\": {\n \"stage\": \"FETCH\",\n \"planNodeId\": 4,\n \"inputStage\": {\n \"stage\": \"OR\",\n \"planNodeId\": 3,\n \"inputStages\": [\n {\n \"stage\": \"IXSCAN\",\n \"planNodeId\": 1,\n \"keyPattern\": {\n \"A\": 1,\n \"B\": 1\n },\n \"indexName\": \"A_1_B_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"A\": [],\n \"B\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"A\": [\n \"[\\\"house\\\", \\\"house\\\"]\"\n ],\n \"B\": [\n \"[\\\"car\\\", \\\"car\\\"]\"\n ]\n }\n },\n {\n \"stage\": \"IXSCAN\",\n \"planNodeId\": 2,\n \"keyPattern\": {\n \"A\": 1,\n \"C\": 1\n },\n \"indexName\": \"A_1_C_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"A\": [],\n \"C\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"A\": [\n \"[\\\"house\\\", \\\"house\\\"]\"\n ],\n \"C\": [\n \"[\\\"boat\\\", \\\"boat\\\"]\"\n ]\n }\n }\n ]\n }\n },\n \"slotBasedPlan\": {\n \"slots\": \"$$RESULT=s24 env: { s4 = 1692189851389 (NOW), s7 = KS(3C686F757365003C63617200FE04), s1 = TimeZoneDatabase(Atlantic/Canary...America/Hermosillo) (timeZoneDB), s14 = KS(3C686F757365003C626F617400FE04), s11 = {\\\"A\\\" : 1, \\\"B\\\" : 1}, s2 = Nothing (SEARCH_META), s18 = {\\\"A\\\" : 1, \\\"C\\\" : 1}, s3 = Timestamp(1692189850, 7) (CLUSTER_TIME), s6 = KS(3C686F757365003C636172000104), s13 = KS(3C686F757365003C626F6174000104) }\",\n \"stages\": \"[4] nlj inner [] [s19, s21, s23, s20, s22] \\n left \\n [3] unique [s19] \\n [3] union [s23, s20, s22, s19, s21] \\n branch0 [s9, s10, s11, s5, s8] \\n [1] cfilter {(exists(s6) && exists(s7))} \\n [1] ixseek s6 s7 s10 s5 s8 s9 [] @\\\"011d24a7-fd1c-442a-9a28-101bcb8732b0\\\" @\\\"A_1_B_1\\\" true \\n branch1 [s16, s17, s18, s12, s15] \\n [2] cfilter {(exists(s13) && exists(s14))} \\n [2] ixseek s13 s14 s17 s12 s15 
s16 [] @\\\"011d24a7-fd1c-442a-9a28-101bcb8732b0\\\" @\\\"A_1_C_1\\\" true \\n right \\n [4] limit 1 \\n [4] seek s19 s24 s25 s21 s23 s20 s22 [] @\\\"011d24a7-fd1c-442a-9a28-101bcb8732b0\\\" true false \\n\"\n }\n },\n [...]\nORwinningPlanexplainwinningPlanOR",
"text": "Hey @Big_Cat_Public_Safety_Act,According to the documentation on mongo’s site, multiple single field can indexes can be used in the same query. However, in my question, both indexes are compound indexes.If you evaluate the Explain Results for your query you’ll see that a plan containing an OR stage (which evaluates more than 1 plan/index) will be generated:In the case of my example above, though multiple plans were generated (one containing an OR stage), the winningPlan in this case was a single index plan.This is due to the query planner evaluating that index as performing best during the trial phase (which makes sense given the amount of data in in our collection at the moment).If you explain your operation you should be able to see if multiple indexes are being used in the winningPlan if there’s an OR stage present.",
"username": "alexbevi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Will mongodb use 2 indexes on an $or query? | 2023-08-16T00:34:22.077Z | Will mongodb use 2 indexes on an $or query? | 478 |
null | [
"flutter",
"dart"
] | [
{
"code": "import 'package:realm/realm.dart';\n\npart 'item_type_schemas.g.dart';\n\n/---item_type_schema.dart\n@RealmModel()\nclass _ItemType {\n @MapTo('_id')\n @PrimaryKey()\n late ObjectId id;\n late String name;\n late String description;\n late bool isConsumable;\n late bool isActive;\n}\n\n/--item_category_schemas.dart\nimport 'package:realm/realm.dart';\nimport '../../item_type/data/item_type_schemas.dart';\npart 'item_category_schemas.g.dart';\n\n@RealmModel()\nclass _ItemCategory {\n @MapTo('_id')\n @PrimaryKey()\n late ObjectId id;\n late String name;\n late String description;\n late _ItemType? itemType;\n late bool isActive;\n}\ni am unable to import _ItemType. kindly help\n",
"text": "i am learning a flutter realm. i am unable to import a realm object which is in another file",
"username": "Health_Centre_ERP"
},
{
"code": "class $ItemCategory {class $ItemType {$ItemTypeItemType",
"text": "Hi @Health_Centre_ERP,\nYou can use $ instead of _. For example:\nclass $ItemCategory {\nclass $ItemType {\nTe generator will work with $ and your classes won’t be private.\nWe usually suggest defining all the realm models in the same file in order to avoid using $ItemType instead of the generated ItemType by mistake while working with the Realm.",
"username": "Desislava_St_Stefanova"
}
] | Importing Realm Object to another Realm Object of another file | 2023-08-16T02:11:18.944Z | Importing Realm Object to another Realm Object of another file | 445 |
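A two-file sketch of the suggestion above, using the public $ prefix so the model can be referenced across files (field sets trimmed for brevity):

```dart
// item_type_schemas.dart
import 'package:realm/realm.dart';
part 'item_type_schemas.g.dart';

@RealmModel()
class $ItemType {
  @MapTo('_id')
  @PrimaryKey()
  late ObjectId id;
  late String name;
}

// item_category_schemas.dart
import 'package:realm/realm.dart';
import 'item_type_schemas.dart';
part 'item_category_schemas.g.dart';

@RealmModel()
class $ItemCategory {
  @MapTo('_id')
  @PrimaryKey()
  late ObjectId id;
  late String name;
  late $ItemType? itemType; // the cross-file reference now works
}
```

The generator still emits ItemType and ItemCategory, which is what the rest of the app should use; as noted above, keeping related models in one file avoids accidentally using the $-prefixed source classes by mistake.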
null | [
"field-encryption"
] | [
{
"code": "**sysadmin@soft-serve**:**~**$ sudo chown -R mongodb:mongodb /data/var/lib/mongodb\n\n**sysadmin@soft-serve**:**~**$ sudo systemctl start mongod\n\n**sysadmin@soft-serve**:**~**$ sudo systemctl status mongod\n\n**×** mongod.service - MongoDB Database Server\n\nLoaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n\nActive: **failed** (Result: exit-code) since Thu 2023-08-10 14:49:31 UTC; 12s ago\n\nDocs: https://docs.mongodb.org/manual\n\nProcess: 81733 ExecStart=/usr/bin/mongod --config /etc/mongod.conf **(code=exited, status=100)**\n\nMain PID: 81733 (code=exited, status=100)\n\nCPU: 38ms\n\nAug 10 14:49:31 soft-serve systemd[1]: Started MongoDB Database Server.\n\nAug 10 14:49:31 soft-serve mongod[81733]: {\"t\":{\"$date\":\"2023-08-10T14:49:31.699Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":7484500, \"ctx\":\"-\",\"msg\":\"Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding \\\"processManagement.fo>\n\nAug 10 14:49:31 soft-serve systemd[1]: **mongod.service: Main process exited, code=exited, status=100/n/a**\n\nAug 10 14:49:31 soft-serve systemd[1]: **mongod.service: Failed with result 'exit-code'.**\n\nlines 1-12/12 (END)\n...{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for 
shutdown\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.823+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-08-10T14:40:02.824+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.699+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.700+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingIn>\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.700+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.701+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueu>\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.709+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrati>\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.709+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMig>\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.709+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\">\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.709+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.709+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":81733,\"port\":27017,\"dbPath\":\"/data/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\">\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.710+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.8\",\"gitVersion\":\"3d84c0dd4e5d99be0d69003652313e7eaf4cdd74\",\"openSSLV>\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.710+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"22.04\"}}}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.710+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017>\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.710+00:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"Location28596: Unable to determine status of lock file 
i>\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, 
\"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-08-10T14:49:31.711+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n",
"text": "I updated my path var to use subdirs of /data which is my luks vg partition.|'ve spent a couple of hours following the suggestions of gpt 4: Deleting lock files and adjusting privileges for mongodb user.Still not working.Please assist as I’m really looking forward to getting my SaaS online. I’ve done as much as I am able, alone.log file:Thank you for reading.\nSam",
"username": "sam_ames"
},
{
"code": "",
"text": "Hi @sam_ames,\nCan you paste here you configuration file?\nHave you got data under your new Path?Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": " /etc/mongod.conf \n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /data/var/lib/mongodb\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n\nsysadmin@server:~$ ls /data\nbeta.domain.com lost+found var\nsysadmin@server:~$ ps -ef | grep mongo\nsysadmin 83521 83229 0 20:45 pts/1 00:00:00 grep --color=auto mongo\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\nbin:x:2:2:bin:/bin:/usr/sbin/nologin\nsys:x:3:3:sys:/dev:/usr/sbin/nologin\nsync:x:4:65534:sync:/bin:/bin/sync\ngames:x:5:60:games:/usr/games:/usr/sbin/nologin\nman:x:6:12:man:/var/cache/man:/usr/sbin/nologin\nlp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin\nmail:x:8:8:mail:/var/mail:/usr/sbin/nologin\nnews:x:9:9:news:/var/spool/news:/usr/sbin/nologin\nuucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin\nproxy:x:13:13:proxy:/bin:/usr/sbin/nologin\nwww-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\nbackup:x:34:34:backup:/var/backups:/usr/sbin/nologin\nlist:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin\nirc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin\ngnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin\nnobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin\nsystemd-network:x:100:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin\nsystemd-resolve:x:101:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin\nmessagebus:x:102:105::/nonexistent:/usr/sbin/nologin\nsystemd-timesync:x:103:106:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin\nsyslog:x:104:111::/home/syslog:/usr/sbin/nologin\n_apt:x:105:65534::/nonexistent:/usr/sbin/nologin\ntss:x:106:112:TPM software stack,,,:/var/lib/tpm:/bin/false\nuuidd:x:107:113::/run/uuidd:/usr/sbin/nologin\ntcpdump:x:108:114::/nonexistent:/usr/sbin/nologin\nsshd:x:109:65534::/run/sshd:/usr/sbin/nologin\npollinate:x:110:1::/var/cache/pollinate:/bin/false\nlandscape:x:111:116::/var/lib/landscape:/usr/sbin/nologin\nfwupd-refresh:x:112:117:fwupd-refresh user,,,:/run/systemd:/usr/sbin/nologin\nubuntu:x:1000:1000:Ubuntu:/home/ubuntu:/bin/bash\nlxd:x:999:100::/var/snap/lxd/common/lxd:/bin/false\nusbmux:x:113:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin\nsysadmin:x:1001:1001:,,,:/home/sysadmin:/bin/bash\nmongodb:x:114:65534::/home/mongodb:/usr/sbin/nologin\n",
"text": "mongod.conf:data under new path:mongo process:/ect/passwd:I can wipe /data but I’m concernjed it might trash my ubuntu system entirely… I’ve had trouble previouisly, after deleting lopst+found so Id rather not…?| only added 2 folders as you can see, above.Many thanks ",
"username": "sam_ames"
},
{
"code": "",
"text": "Hi @sam_ames,\nStop the service and do the following command:\nchown -R mongodb:mongodb /data/var/lib/mongodb\nAnd try to start the mongod service again.If It doesn’ t work, paste the status of service and the logs.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "sysadmin@server:~$ chown -R mongodb:mongodb /data/var/lib/mongodb\nchown: cannot read directory '/data/var/lib/mongodb': Permission denied\nsysadmin@server:~$ sudo chown -R mongodb:mongodb /data/var/lib/mongodb\n[sudo] password for sysadmin: \n\nsysadmin@server:~$ sudo systerctl start mongod\nsudo: systerctl: command not found\nsysadmin@server:~$ sudo systemctl start mongod\nsysadmin@server:~$ sudo systemctl status mongod\n× mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Fri 2023-08-11 07:44:29 UTC; 10s ago\n Docs: https://docs.mongodb.org/manual\n Process: 85895 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=100)\n Main PID: 85895 (code=exited, status=100)\n CPU: 31ms\n\nAug 11 07:44:29 server systemd[1]: Started MongoDB Database Server.\nAug 11 07:44:29 server mongod[85895]: {\"t\":{\"$date\":\"2023-08-11T07:44:29.870Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":7484500, \"ctx\":\"-\",\"msg\":\"Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding \\\"processManagement.fo>\nAug 11 07:44:29 server systemd[1]: mongod.service: Main process exited, code=exited, status=100/n/a\nAug 11 07:44:29 server systemd[1]: mongod.service: Failed with result 'exit-code'.\nlines 1-12/12 (END)\nsysadmin@server:~$ sudo tail -f /var/log/mongodb/mongod.log\n{\"t\":{\"$date\":\"2023-08-11T07:44:29.880+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-08-11T07:44:29.880+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-08-11T07:44:29.880+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-08-11T07:44:29.880+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-08-11T07:44:29.880+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-08-11T07:44:29.880+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-08-11T07:44:29.880+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-11T07:44:29.880+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-11T07:44:29.880+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-08-11T07:44:29.880+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n\n",
"text": "Frustrating…But thanks for your support ",
"username": "sam_ames"
},
{
"code": "Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding \\\"processManagement",
"text": "Hi @sam_amesEnvironment variable MONGODB_CONFIG_OVERRIDE_NOFORK == 1, overriding \\\"processManagementHere is your error.I would delete this environment variable and check if it starts runningRegards",
"username": "Fabio_Ramohitaj"
},
{
"code": "chown: cannot read directory '/data/var/lib/mongodb': Permission denied",
"text": "This is the errorchown: cannot read directory '/data/var/lib/mongodb': Permission deniedDo the chown using sudo.",
"username": "steevej"
},
{
"code": "sudo chown -R mongodb:mongodb /data/var/lib/mongodb\n[sudo] password for sysadmin\n",
"text": "Hi @steevejHe corrected in the next line.Best Rergards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "I really must improve my reading skills. B-)~~ /) ~~",
"username": "steevej"
},
{
"code": "",
"text": "Don’t worry @steevej , who knows how many times I skip whole parts to read ",
"username": "Fabio_Ramohitaj"
},
{
"code": "sysadmin@soft-serve:~$ sudo nano /usr/lib/systemd/system/mongod.service\nsysadmin@soft-serve:~$ sudo systemctl start mongod\nWarning: The unit file, source configuration file or drop-ins of mongod.service changed on disk. Run 'systemctl daemon-reload' to reload units.\nsysadmin@soft-serve:~$ sudo systemctl daemon-reload\nsysadmin@soft-serve:~$ sudo systemctl start mongod\nsysadmin@soft-serve:~$ sudo systemctl status mongod\n× mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: exit-code) since Fri 2023-08-11 15:33:50 UTC; 14s ago\n Docs: https://docs.mongodb.org/manual\n Process: 88421 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=100)\n Main PID: 88421 (code=exited, status=100)\n CPU: 39ms\n\nAug 11 15:33:50 soft-serve systemd[1]: Started MongoDB Database Server.\nAug 11 15:33:50 soft-serve systemd[1]: mongod.service: Main process exited, code=exited, status=100/n/a\nAug 11 15:33:50 soft-serve systemd[1]: mongod.service: Failed with result 'exit-code'.\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.795+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.797+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingIn>\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.797+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.797+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueu>\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.805+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrati>\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.805+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMig>\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.805+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\">\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.805+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.805+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":88421,\"port\":27017,\"dbPath\":\"/data/var/lib/mongodb\",\"architecture\":\"64-bit\",\"host\":\">\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.805+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.8\",\"gitVersion\":\"3d84c0dd4e5d99be0d69003652313e7eaf4cdd74\",\"openSSLV>\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.805+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Ubuntu\",\"version\":\"22.04\"}}}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.805+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"127.0.0.1\",\"port\":27017>\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.806+00:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"Location28596: Unable to determine status of lock file i>\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.806+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, 
\"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2023-08-11T15:33:50.807+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n",
"text": "MONGODB_CONFIG_OVERRIDE_NOFORKI commented out the var.\nNow I have:Log:Would you consider downgrading, if in this situation yourself?.. As suggested in this post?Thanks for your support Sam",
"username": "sam_ames"
},
{
"code": "{\"error\":\"Location28596: Unable to determine status of lock file",
"text": "Hi @sam_ames,\nIs selinux disabled?\nThe error now Is changed:{\"error\":\"Location28596: Unable to determine status of lock fileIn the extreme case, I would attempt to remove the files under the path /data/var/lib/mongodb (if you haven’ t collection populated with document in your istance) because seems corrupted the lock file or as you’ve suggested, try a downgrade or update.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "It is a difficult install, I don’t even know what is se Linux is to be perfectly honestThanks,\nSamI will try your suggestions on Monday morning, many thanks",
"username": "sam_ames"
},
{
"code": "",
"text": "Default install of Ubuntu **",
"username": "sam_ames"
},
{
"code": "",
"text": "Hi @sam_ames,\ncat /etc/selinux/config\nLet me know how it will go then!Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "\nsysadmin@soft-serve:~$ sudo cat /etc/selinux/config\n[sudo] password for sysadmin: \ncat: /etc/selinux/config: No such file or directory\n",
"text": "It seems like your suspicions were correct.Please suggest how I can resolve this. I did not delete selinux and am not sure why it’s missing.Thanks for the support, Sam",
"username": "sam_ames"
},
{
"code": "",
"text": "Hi @sam_ames,\nAnother way to get selinux status Is toselinux status with the command\ngetenforce or cat /etc/sysconfig/selinux.\nIf the getenforce command results in output of the type enforcing or permissive, use the following command:\nsetenforce 0.\nBut as suggested in a previous post, i think is better to clean your data directory or do and update for resolve the problem\nRegards",
"username": "Fabio_Ramohitaj"
},
{
"code": "sysadmin@soft-serve:~$ cat /etc/sysconfig/selinux\ncat: /etc/sysconfig/selinux: No such file or directory\nsysadmin@soft-serve:~$ getenforce\nCommand 'getenforce' not found, but can be installed with:\nsudo apt install selinux-utils\nsysadmin@soft-serve:~$ sudo getenforce\n[sudo] password for sysadmin: \nsudo: getenforce: command not found\nsysadmin@soft-serve:~$ \n",
"text": "Please see the following output.Do I need to install selinux tools?Thanks,\nSam",
"username": "sam_ames"
},
{
"code": "",
"text": "Hi @sam_ames,\nIn theory no, it should be present by default in the os.\nTry the solutions discussed last time and let me know if they workRegards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "I can’t swipe the data volume because it is part of a raid partition and it was very complicated to set up.However, I can delete everything in the directory that I have personally added. Is this worth doing?You previously mentioned a downgrade, which version should I consider? I think I am currently using the latest release of version six. The post I shared, which mentioned downgrading was quite an old one so I assume the recommended version is no no longer the optimal solution.Thank you,\nSam",
"username": "sam_ames"
}
] | Mongo error after path updated | 2023-08-10T15:06:36.338Z | Mongo error after path updated | 1,355
null | [
"aggregation",
"queries",
"atlas-search"
] | [
{
"code": "{\n \"_id\": ObjectId(\"63bd85a644552d73f36c366e\"),\n \"ts\": datetime.datetime(2023, 1, 10, 15, 35, 2, 374000),\n \"dt\": datetime.datetime(2023, 1, 10, 15, 34, 10),\n \"aff\": 2,\n \"src\": 2,\n \"st\": 2\n}\n$searchMetaresults = my_collection.aggregate(\n[\n {\n \"$match\": {\n \"ts\": {\"$gte\": datetime.now() - timedelta(hours=24)},\n \"st\": 2\n }\n },\n {\n \"$group\": {\n \"_id\": {\"aff\": \"$aff\", \"src\": \"$src\"},\n \"count\": {\"$sum\": 1},\n \"last_message_datetime\": {\"$max\": \"$dt\"}\n }\n },\n {\"$sort\": {\"_id.src\": -1, \"_id.aff\": -1}}\n ]\n)\nresults = my_collection.aggregate([\n {\n \"$searchMeta\": {\n \"index\": \"MsgAtlasIndex\",\n \"count\": {\"type\": \"total\"},\n \"facet\": {\n \"operator\": {\n \"compound\": {\n \"must\": [\n {\n \"range\": {\n \"path\": \"ts\",\n \"gte\": datetime.now() - timedelta(hours=24)\n }\n },\n {\"equals\": {\"path\": \"st\", \"value\": 2}},\n ]\n }\n },\n \"facets\": {\n \"src\": {\n \"type\": \"number\",\n \"path\": \"src\",\n \"boundaries\": [1, 2, 3, 4, 5, 6, 7, 10, 11]\n },\n \"aff\": {\n \"type\": \"number\",\n \"path\": \"aff\",\n \"boundaries\": [1, 2, 3, 4, 5, 7, 8, 9, 10, 11, 12, 14, 17, 18, 19, 20, 21]\n },\n },\n },\n }\n },\n {\n \"$match\": {\n \"$or\": [\n {\"facet.aff.buckets.count\": {\"$gt\": 0}},\n {\"facet.src.buckets.count\": {\"$gt\": 0}}\n ]\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"count\": 1,\n \"facet.aff.buckets\": {\n \"$filter\": {\n \"input\": \"$facet.aff.buckets\",\n \"as\": \"bucket\",\n \"cond\": {\"$gt\": [\"$$bucket.count\", 0]}\n }\n },\n \"facet.src.buckets\": {\n \"$filter\": {\n \"input\": \"$facet.src.buckets\",\n \"as\": \"bucket\",\n \"cond\": {\"$gt\": [\"$$bucket.count\", 0]}\n }\n }\n }\n }\n ])\n\"last_message_datetime\": {\"$max\": \"$dt\"}\ndt",
"text": "My documents are like this below:I started using $searchMeta faceting for a better and way faster approach.\nSo, I converted my aggregation pipeline from this approach:into this below:I suffered with how to apply this line below in searchMeta:It is to get the maximum date of the counted messages:(the date of the last message of every group) using the field dt (date).Any help, please?",
"username": "ahmad_al_sharbaji"
},
{
"code": "",
"text": "Hi @ahmad_al_sharbaji and welcome to MongoDB community forums!!Based on the sample data and the aggregation pipeline shared, could you provide details for the below mentioned points which would help me understand the use case and replicate in my test environment to see if what you are after is possible.Based on the two queries mentioned, could you help me understand the reason why are you considering to use the searchMeta in the aggregation stage. As mentioned in the MongoDB documentation for searchMeta, this stage offers you to store the data in the form of buckets and returns different types of metadata result documents.Secondly, I tried to replicate the query in local environment, with sample data similar to provided, and it provides the data similar to this:[{‘count’: {‘total’: 1001}, ‘facet’: {‘aff’: {‘buckets’: [{‘_id’: 1, ‘count’: 124}, {‘_id’: 2, ‘count’: 137}, {‘_id’: 3, ‘count’: 119}, {‘_id’: 4, ‘count’: 133}, {‘_id’: 5, ‘count’: 260}, {‘_id’: 7, ‘count’: 110}, {‘_id’: 8, ‘count’: 118}]}, ‘src’: {‘buckets’: [{‘_id’: 1, ‘count’: 129}, {‘_id’: 2, ‘count’: 109}, {‘_id’: 3, ‘count’: 136}, {‘_id’: 4, ‘count’: 126}, {‘_id’: 5, ‘count’: 109}, {‘_id’: 6, ‘count’: 136}, {‘_id’: 7, ‘count’: 256}]}}}]Lastly, are you able to provide the expect / desired output based off your sample document(s)?Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "affsrcaffsrcaffsrc$groupaffsrcaffsrc",
"text": "Hi @Aasawari , Thank you for your kind response!My goal is to count documents for every aff and src together. So, I will be counting how many documents for aff 1 and src 1, then how many documents for aff 2 and src 1, etc…Why I used searchMeta? Because I found a hilarious speed! Instead of taking around 15 seconds per query, it takes ~ 0.5 using searchMeta. Our database contains more than 55 million documents.But, it is not a complete solution. As you see in the regular aggregation, I used to $group. But now using searchMeta I don’t know to do that!\nIt counts separately, but I need them combined! I need to count documents for aff and src together in the same bucket or something else, not separately.\nAs you see in your results, you have 2 buckets, one counting for aff and one for src. The goal is to count together.Please help me to achieve that.",
"username": "ahmad_al_sharbaji"
},
{
"code": "",
"text": "Hi @ahmad_al_sharbajiThank you for writing back.If I understand correctly, the goal of using $searchMeta is because it tremendously increases the query execution time.\nUsing $searchMeta it would create the buckets and would count the according to the buckets created.\nI believe perhaps there would be another way to increase the efficiency.\nWe may be able help you in this case if you could share how you would want the desired output to look like.Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "srcaff[{'_id': {'aff': 21, 'src': 8},\n 'count': 24,\n 'last_message_datetime': datetime.datetime(2023, 8, 2, 10, 13)},\n {'_id': {'aff': 2, 'src': 6},\n 'count': 222,\n 'last_message_datetime': datetime.datetime(2023, 8, 2, 11, 30, 9)},\n {'_id': {'aff': 2, 'src': 2},\n 'count': 34,\n 'last_message_datetime': datetime.datetime(2023, 8, 2, 16, 17, 2)}]\ntsstsrcaffdtaffsrc",
"text": "Hi @Aasawari ,\nThank you for time. Exactly. I wanna make a better speed and increases the query execution time.\nAnd exactly the buckets are my issue. If one bucket would be grouping among src and aff together, that would be perfect. But they are separated and that is my issue.The desired output is like this:This will count documents for the last 24 hours ts field, where status is 2 st field, and will group over documents via source src and affiliation aff fields, and will get the latest message time via dt field.You can kindly create a simple collection, insert some documents like the sample I provided, change the aff and src for some documents, and apply the first aggregate I provided, you will get 100% the desired output.This is really urgent and I’m waiting for your response.\nThanks in advance!",
"username": "ahmad_al_sharbaji"
},
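One possible way to get both the combined per-(aff, src) counts and the latest dt while still letting the Atlas Search index do the filtering is to keep a $search stage first and follow it with an ordinary $group. This is only a sketch (the collection name my_collection and the index field mappings are assumptions, and it will not be as fast as a pure $searchMeta facet because the matching documents still flow into $group):

```javascript
db.my_collection.aggregate([
  {
    $search: {
      index: "MsgAtlasIndex",
      compound: {
        must: [
          // last 24 hours and status 2, the same filter the $searchMeta version uses
          { range: { path: "ts", gte: new Date(Date.now() - 24 * 60 * 60 * 1000) } },
          { equals: { path: "st", value: 2 } }
        ]
      }
    }
  },
  {
    // one combined bucket per (aff, src), plus the latest message time
    $group: {
      _id: { aff: "$aff", src: "$src" },
      count: { $sum: 1 },
      last_message_datetime: { $max: "$dt" }
    }
  },
  { $sort: { "_id.src": -1, "_id.aff": -1 } }
])
```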
{
"code": "affsrcresults = my_collection.aggregate(\n",
"text": "Hi @ahmad_al_sharbaji,You can kindly create a simple collection, insert some documents like the sample I provided, change the aff and src for some documents, and apply the first aggregate I provided, you will get 100% the desired output.Thanks for getting back to me with those details. After taking a read over the post again I understand that the first aggregation mentioned nearly gets you your desired output (as opposed to the desired output to the dot) but please correct me if my interpretation here is wrong.Please advise me on the below information:into this below:Could you provide the output of when you ran this query? This is to see what you are currently getting and if it’s possible to achieve you’re desired output based off the current output.Additionally, I understand that you’ve provided a single sample document but would you be able to provide several sample documents and the expected output based off these exact sample documents so that I can easily insert these into my test environment to see if the desired output is possible?Lastly, please provide the search index definition in use in JSON format, as I can use this to replicate the aggregation on the sample documents in my test environment.Warm regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi @Aasawari ,\nI actually appreciate your effort. I worked hard and decided not to use facets. I kept using searchMeta and it’s incredible speed. But guess what.In my normal aggregation, I used to get the time of the last collected document. But I couldn’t use it using Atlas searchMeta.Please visit this topic I created another one:Thanks!!",
"username": "ahmad_al_sharbaji"
}
] | $max operator with Atlas $searchMeta MongoDB | 2023-07-24T06:20:51.054Z | $max operator with Atlas $searchMeta MongoDB | 829 |
null | [
"kubernetes-operator"
] | [
{
"code": "",
"text": "I get the following when attempting to setup the kubenetes operator in a cluster (this is after the CRD is registered, the k8s clusters are added to the clustermap, and the webhook is setup.panic: runtime error: index out of range [5] with length 5goroutine 700 [running]:\ngithub.com/10gen/ops-manager-kubernetes/pkg/multicluster/memberwatch.(*MemberClusterHealthChecker).WatchMemberClusterHealth(0xc0004823b0, 0xc00042db00?, 0xc0005a97a0?, {0x1dc2fc0, 0xc000027e50}, 0xc0002fc8d0?)\n/go/src/github.com/10gen/ops-manager-kubernetes/pkg/multicluster/memberwatch/memberwatch.go:54 +0xcca\ncreated by github.com/10gen/ops-manager-kubernetes/controllers/operator.AddMultiReplicaSetController\n/go/src/github.com/10gen/ops-manager-kubernetes/controllers/operator/mongodbmultireplicaset_controller.go:957 +0x76c",
"username": "Dorian_Preston"
},
{
"code": "",
"text": "Hi, can you please raise a support case - there will be more information required to advise and troubleshoot.MongoDB Paid Support. MongoDB offers help with training, upgrading, and more",
"username": "Dan_Mckean"
}
] | Go Error when setting up Kubernetes operator | 2023-08-15T14:42:16.095Z | Go Error when setting up Kubernetes operator | 405 |
[
"atlas-device-sync",
"flutter",
"realm-studio"
] | [
{
"code": "",
"text": "I want to delete all user data and add a user to the collection but while executing .deleteAll() and then trying to add but facing this error Cannot add object to Realm because the object is managed by another Realm instance in Flutter.\nScreen Shot 2023-08-14 at 6.32.41 PM954×348 32 KB\n",
"username": "Agnel_Selvan"
},
{
"code": "user",
"text": "Hi @Agnel_Selvan,trying to add but facing this error Cannot add object to Realm because the object is managed by another Realm instanceWhere’s the user in the code snippet above coming from? If you’re copying or moving an object from one realm to another, you need to make a deep copy (i.e. copying not only the object itself, but the sub-objects it may contain or refer to), writing (or assigning) it to a different realm won’t work as Realm keeps “live” objects linked to the respective databases.",
"username": "Paolo_Manna"
}
] | RealmError (Realm error : Cannot add object to Realm because the object is managed by another Realm instance) | 2023-08-14T13:04:43.903Z | RealmError (Realm error : Cannot add object to Realm because the object is managed by another Realm instance) | 522 |
|
[
"pune-mug"
] | [
{
"code": "",
"text": "\nInaugural1920×1076 70.2 KB\n Announcing the Inaugural Meetup of the Pune MongoDB User Group! We are excited to invite you to the first-ever user group meetup of MongoDB enthusiasts in Pune. Whether you’re a seasoned MongoDB user or just getting started, this meetup is the perfect platform to connect, learn, and share your experiences with fellow MongoDB enthusiasts.Meet and Greet: Connect with other MongoDB users in Pune, share your experiences, and create a strong, knowledge-sharing community.Introductory Talks: Gain insights on MongoDB’s features, scalability, flexibility, and data distribution capabilities. Whether you’re a beginner or expert, there will be something to learn.Live Q&A Session: Get your burning MongoDB questions answered by experienced users and experts present at the meetup.Plan Future Meetups: We’ll discuss the roadmap for our group and plan the topics for future meetups based on the interests and needs of the community.Networking: Make sure to bring plenty of business cards! There will be ample time for networking, so you can connect with your peers, make new friends, and potentially find future collaborators.Secure your spot now and be part of an unforgettable experience!Kindly note that joining the waitlist is the first step towards securing your registration. To complete the process, a separate confirmation email will be sent to you. Please ensure to present this email at the event entrance for seamless access.In order to ensure a seamless and well-organized event, we kindly request you to not rely on on-spot registrations. Thank you for your understanding and cooperation. We look forward to welcoming you to the event! Event Type: In-Person\nLocation: Thoughtworks Technologies India Private Limited, Binarius Building, Deepak Complex, National Games Road Beside Sales Tax Office, Shastrinagar, Yerawada, Pune, Maharashtra 411006",
"username": "Faizan_Akhtar"
},
{
"code": "",
"text": "MongoDB in Pune! Woohoo!\" ",
"username": "Shekhar_Chaugule"
},
{
"code": "",
"text": " On behalf of the PuneMUG team, I just wanted to say a big Thank you to all for making our inaugural Pune MUG meetup a smashing success! Your energy and enthusiasm were infectious, making it a day to remember.Check out the snapshots (attached below) of the amazing moments we shared that day. Missed it? No fret! Join the Pune MUG community for exciting events ahead. Let’s keep the momentum going! #PuneMUGPS: We have added the event recap here, make sure to check it out! \nAttendeesPhoto2048×1152 341 KB\n\nInaguralEventCake2048×1152 361 KB\n\nTeamPhoto2048×1152 409 KB\n",
"username": "Faizan_Akhtar"
}
] | Pune MUG: Inaugural Meetup | 2023-07-18T14:43:30.078Z | Pune MUG: Inaugural Meetup | 3,688 |
|
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "I wish to monitor a collection for changes using changeStreams. I only want to watch the documents in my collection which have a filed which matches a specific value. I am not looking for changes in that field but in the other.\nFor example.\nI have a mongoose model called Trainer.\nI wish to montior all trainers with fields called “available” set to “true” and “language” set to “english”.\nAvailability stays unchanged but antother field, say, “location” can change.\nCan I watch for the change in location on all available trainers who speak English?\nI cannot seem to do this without seeing a change on all trainers. (I have some who speak other languages but are also available.Thanks for your help team.",
"username": "John_Parsons"
},
{
"code": "const mongoose = require('mongoose');\n\n// Connect to MongoDB\nmongoose.connect('mongodb+srv://findThief:[email protected]/?retryWrites=true&w=majority/test', {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nconst db = mongoose.connection;\ndb.on('error', console.error.bind(console, 'MongoDB connection error:'));\n\n// Define Trainer schema\nconst trainerSchema = new mongoose.Schema({\n available: Boolean,\n language: String,\n});\nconst Trainer = mongoose.model('Trainer', trainerSchema);\nconst pipeline = [\n {\n $match: {\n available: true,\n language: \"English\",\n },\n },\n];\n\nconst changeStream = Trainer.watch(pipeline, { fullDocument: 'updateLookup' });\nconsole.log(changeStream);\n\nconsole.log('Change stream started.');\nchangeStream.on('change', (change) => {\n console.log(change);\n});\n{'_id': {'_data': '8264B625FF000000172B022C0100296E5A1004B72A49ECB85648E49A08B734CBEA93AF46645F6964006464B5240E413B98DBF731CF880004'}, 'operationType': 'update', 'clusterTime': Timestamp(1689658879, 23), 'wallTime': datetime.datetime(2023, 7, 18, 5, 41, 19, 410000), 'fullDocument': {'_id': ObjectId('64b5240e413b98dbf731cf88'), 'name': 'XYZ', 'available': True, 'language': 'English', 'location': 'Changed Location'}, 'ns': {'db': 'test', 'coll': 'trainers'}, 'documentKey': {'_id': ObjectId('64b5240e413b98dbf731cf88')}, 'updateDescription': {'updatedFields': {'location': 'updated Location'}, 'removedFields': [], 'truncatedArrays': []}}\n{'_id': {'_data': '8264B625FF000000182B022C0100296E5A1004B72A49ECB85648E49A08B734CBEA93AF46645F6964006464B5244F413B98DBF731CF890004'}, 'operationType': 'update', 'clusterTime': Timestamp(1689658879, 24), 'wallTime': datetime.datetime(2023, 7, 18, 5, 41, 19, 410000), 'fullDocument': {'_id': ObjectId('64b5244f413b98dbf731cf89'), 'name': 'ABC', 'available': True, 'language': 'English', 'location': 'Changed Location'}, 'ns': {'db': 'test', 'coll': 'trainers'}, 'documentKey': {'_id': ObjectId('64b5244f413b98dbf731cf89')}, 'updateDescription': {'updatedFields': {'location': 'updated Location'}, 'removedFields': [], 'truncatedArrays': []}}\n",
"text": "Hi @John_Parsons and welcome to MongoDB community forums!!Based on my understanding, you wish to view the updates in the change stream for only the documents that satisfy the match condition fields called “available” set to “true” and “language” set to “english”.\nI tried the following code the:The output for the above when an update operation is performed is:The pipeline in the above code would match the requirements in the change stream and would should the results as fullDocument in the change stream output.Could you please try the above code and let us know if it works for you.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "[ \n { \"$match\": \n { \"operationType\": \"update\", \n \"fullDocument.available\": true,\n \"fullDocument.language\": \"english\",\n \"updateDescription.updatedFields.location\": { \"$exists\": true } \n } \n }\n]\n",
"text": "Hi @John_Parsonsarrgh, sniped by @Aasawari (happily with a code example) my pipeline is slightly different.I cannot seem to do this without seeing a change on all trainers. (I have some who speak other languages but are also available.A pipeline would need to be added to the watch.Something like:This would strictly match updates where the field location is updated along with the matching criteria for available and language.If you want to also catch other scenarios like a new matching document being inserted the the pipeline would need further modification.",
"username": "chris"
},
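If newly inserted trainers that already match should also be caught (the scenario chris mentions above), one hedged extension of that pipeline, sketched in mongoose terms against the Trainer model from this thread, is to match both operation types; the location-specific condition would then need to move into an $or, since inserts carry no updateDescription:

```javascript
const pipeline = [
  {
    $match: {
      operationType: { $in: ["insert", "update"] },
      "fullDocument.available": true,
      "fullDocument.language": "english",
    },
  },
];

// fullDocument: 'updateLookup' is needed so update events carry the whole
// document; otherwise the fullDocument.* paths above have nothing to match.
const changeStream = Trainer.watch(pipeline, { fullDocument: "updateLookup" });
changeStream.on("change", (change) => console.log(change));
```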
{
"code": "",
"text": "Thanks for the quick reply, sorry I have taken so long to get back to you. Things to do.It still doesn’t quite work. I get the console.log that the change stream has started, that indicates it is “watching” for changes. When I make a change tot the document (using MongoDB Compass on my desktop) I get no reaction.",
"username": "John_Parsons"
},
{
"code": "",
"text": "Hi @John_ParsonsWould you mind sharing the code snippet that you are using to get the updates in the console screen ?Thanks\nAasawari",
"username": "Aasawari"
},
{
"code": "const pipeline =[\n {$match: {available: true}}\n ];\n\n const changeStream = Trainer.watch(pipeline, { fullDocument: 'updateLookup' });\n\n changeStream.on('change', data => console.log(data));\nconst pipeline = [ ];",
"text": "Thanks Asawari for your reply:The following code is in place:If I leave pipline empty:\nconst pipeline = [ ];It works, as soon as I put anything in the pipeline it doesn’t respond, just hangs waiting for a change (although one has been made).\nThanks for any help. I believe I am not composing the pipeline correctly but have run out of ideas to try. I have left only the one criteria to test if it works, if I can make it work with one I can add the others later.John",
"username": "John_Parsons"
},
{
"code": "",
"text": "Did you try the pipeline I suggested ?",
"username": "chris"
},
{
"code": "",
"text": "Yes, no reaction, as if there was no change registered.\nJohn",
"username": "John_Parsons"
},
{
"code": "const pipeline =[\n {$match: {available: true}}\n ];\n const changeStream = Instructor.watch(pipeline, { fullDocument: 'updateLookup' });\n\n if (changeStream) {console.log('Change stream started.');}\n\n changeStream.on('change', data => console.log(data));\n",
"text": "Sorry, I left a bit out:The if statement tells me that a changeStream object has been created. “'Change stream started.” gets logged to the console. But on changing a document in the database I get no response.Thanks\nJohn",
"username": "John_Parsons"
},
{
"code": "",
"text": "With a blank pipeline have you logged the document raised in the event to the console to validate the paths in your query?",
"username": "John_Sewell"
},
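A minimal way to do that check, assuming the same Trainer model, is to open the stream with no pipeline and print the raw event so the field paths can be confirmed before writing the $match:

```javascript
const changeStream = Trainer.watch([], { fullDocument: "updateLookup" });
changeStream.on("change", (change) => {
  // Inspect the real shape of the event (operationType, fullDocument,
  // updateDescription, ...) before building a $match against it.
  console.log(JSON.stringify(change, null, 2));
});
```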
{
"code": "",
"text": "Thanks John,\nYes I have, the expected document was logged and the result for the updated field was correct. All is working fine. I just don’t seem to be able to do what I want which is only check for change in documents which match certain criteria.\nAll suggestions appreciated.\nJohn",
"username": "John_Parsons"
},
{
"code": "const pipeline =[\n {$match: {available: true}}\n ];\nconst pipeline =[\n {$match: {'fullDocument.available': true}}\n ];\n",
"text": "Ok, I suggested that as we tested change stream pipelines and when we swapped out to full document and did a few other changes the shape of the event changed so our pipeline no longer matched so filtered out all documents!Your pipeline is:From one of your above posts? Should this not be:as @chris suggested above?",
"username": "John_Sewell"
},
{
"code": "const pipeline =[\n {\n $match: {\n 'fullDocument.available': true,\n 'fullDocument.language': {$in:['english']}\n }\n }\n ];\n",
"text": "Thanks fo ryour time that did the trick. The next stumbling block is to match a language as well. The required language is an element of an array (where the subject speaks mor than one language.\nlanguage:[“english”, “german”, “french”]\nI am not sure how to formulate the request.Doesnt work.\nWhat is the correct formulation please? The documentation is quaite vague.\nOn another note, where can I find documentation on the fullDocument notation and the allowed options in the watch(pipeline, options) function.\nThanks again for helping me learn.\nRegards\nJohn",
"username": "John_Parsons"
},
{
"code": "db.getCollection(\"Test\").find({'fullDocument.languages':'English'})\ndb.getCollection(\"Test\").find({'fullDocument.courseLanguage':$in:['Esperanto':'English']})\n",
"text": "This will search for a document with an element in the array matching the filter condition.You could use the $in operator the other way around, so if you had a list of documents with courses or something, you could find any that are in one of a number of values, i.e.See here:\nUnder the array section.",
"username": "John_Sewell"
},
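Carried back into the change stream pipeline this thread is building (a sketch, assuming the trainer document stores its languages in an array field named language), matching one element or any of several values would look like:

```javascript
const pipeline = [
  {
    $match: {
      "fullDocument.available": true,
      // matches documents whose language array contains "english"
      "fullDocument.language": "english",
      // or, to accept any of several languages instead:
      // "fullDocument.language": { $in: ["english", "german", "french"] },
    },
  },
];

const changeStream = Trainer.watch(pipeline, { fullDocument: "updateLookup" });
```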
{
"code": "updateDescription: {\n updatedFields: { 'location.coordinates.1': '2' },\n removedFields: [],\n truncatedArrays: []\n }\nconst changedData = (data.updateDescription.updatedFields);\n console.log(changedData);\n{ 'location.coordinates.1': '2' }",
"text": "Thank you John, things are coming along. I have now hit the next hurdle. I don’t seem to be able to access the data retrieved.\nIn Visual Studio code I have been able to access the changed fields and the following has been logged to the console.But I cannot get at the data in the updatedFields field.I have tried this:which results in this:\n{ 'location.coordinates.1': '2' }but:\nconsole.log(changedData.location);is undefined, I can’t get at the data.\nI want to be able to access the array “location” and extract the two values, location.coordinates[0] and location.coordinates.[1]The data object behaves like a normal object allowing me to get at its values through dot notation but only down to updatedFields. Then I am blocked.Again, any light you can shine on this will be greatly appreciated. there is some fundamental logic that I don’t understand.John",
"username": "John_Parsons"
},
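Because the stream was opened with fullDocument: 'updateLookup', the complete updated document travels with each event, so the coordinates can be read from there rather than from updateDescription, whose keys are flat dotted paths. A sketch, assuming location holds a GeoJSON-style coordinates array:

```javascript
changeStream.on("change", (data) => {
  // updateDescription.updatedFields only lists changed paths as flat keys
  // like 'location.coordinates.1', which is why dot notation stops there.
  const coords = data.fullDocument.location.coordinates;
  const [longitude, latitude] = coords; // GeoJSON order is assumed here
  console.log(longitude, latitude);
});
```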
{
"code": "",
"text": "Sorry all,\nLooking for the complex solution instead of the simple one. The values are accessed through the fullDocument object and not through the updateDescription object.\nJohn",
"username": "John_Parsons"
}
] | Watch for changes in documents that match a specific criteria | 2023-07-16T16:27:01.656Z | Watch for changes in documents that match a specific criteria | 977 |
null | [] | [
{
"code": "sudo apt-get install -y mongodb-orgmongodb-database-toolsReading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\nThe following packages have unmet dependencies:\n mongodb-org-tools : Depends: mongodb-database-tools but it is not installable\nE: Unable to correct problems, you have held broken packages.\nsudo apt-get updatesudo apt-get install -fmongodb-database-toolssudo apt-get install -y mongodb-database-tools",
"text": "Hey MongoDB Community,I’m currently facing a bit of a snag while trying to install MongoDB on my system, and I could use some guidance. Here’s the situation:I’m attempting to install MongoDB using the command sudo apt-get install -y mongodb-org, but I’m encountering an error related to unmet dependencies. The error message specifically mentions the package mongodb-database-tools as being uninstallable.Here’s the full error message I’m getting:I’ve tried a few things to troubleshoot:I’ve checked the repository configuration and made sure I’m following the instructions correctly, but I’m still running into this issue.My setup:Has anyone encountered a similar problem? Any ideas on how to get past this unmet dependencies issue? I’d greatly appreciate any insights or suggestions to help me move forward with the installation.Thank you in advance for your help!",
"username": "Clement_LEFEVRE"
},
{
"code": "",
"text": "same erorr what cloud are you using , i am using hetzener , u ?",
"username": "foxtract_notus"
},
{
"code": "",
"text": "Same problem here. I am installing on my VPS Linux Ubuntu 22.04.03. I’ll try tarball method to see if it’s any better.",
"username": "Tvrtko_Begovic"
},
{
"code": "",
"text": "hetzenerDigital Ocean for me, but it doesn’t matter I guess",
"username": "Clement_LEFEVRE"
}
] | Trouble Installing MongoDB: Unmet Dependencies Issue | 2023-08-15T17:36:05.783Z | Trouble Installing MongoDB: Unmet Dependencies Issue | 833 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi MongoDB community! I’m Mohit Singh Chauhan, a full-stack developer and technical writer.I first learned MongoDB in my college database courses. I was immediately drawn to its flexible document data model and powerful querying and aggregation features. MongoDB’s intuitive API made it easy for me to get started.Since then, I’ve been deepening my MongoDB skills through hands-on projects. I’ve built web apps using MongoDB on the backend and really enjoyed the DX it provides.Writing technical tutorials and documentation is another passion of mine. I love breaking down complex topics into easy-to-follow guides. My goal is to produce content that helps other developers quickly get up and running with new technologies.Outside of coding and writing, you can find me geeking out over new tech & actively networking with all the experiences. I joined the MongoDB community to continue growing my skills and connect with other developers using MongoDB in their projects.Feel free to reach out if you ever want to chat about MongoDB, backend development, technical writing, or my open source contributions. I’m always happy to connect with fellow developers!",
"username": "MohitSinghChauhan"
},
{
"code": "",
"text": "Hey @MohitSinghChauhan,Welcome to the MongoDB Community! Glad to have you hear. If you like, do explore our User Groups in India. They’re a great way to learn more about MongoDB as well as network with like-minded folks along with sharing one’s MongoDB experiences! See you around in the forums \nRegards,\nSatyam",
"username": "Satyam"
}
] | Hello World! A Little Bit About Me | 2023-08-13T17:02:13.509Z | Hello World! A Little Bit About Me | 503 |
null | [
"compass"
] | [
{
"code": "not authorized on <DATABASE> to execute command",
"text": "Hello, I am trying to connect a MongoDB which we’ve been given limited and specific access to. We are able to access a specific index within a specific collection and database. I am able to use MongoDB Compass to query the data in that specific index, but attempting to access it via a third party resource (such as Retool) fails. I have also tried to use other web apps which connect to MongoDB, but all fail as well.I am trying to connect using the srv connection URL, however every attempt across these third party providers throws the same error:not authorized on <DATABASE> to execute commandAlthough the error is self explanatory, what I don’t understand is that I can access in Compass normally but cannot with other db applications or integrations.Would love any suggestions or guidance from the community, thanks in advance!",
"username": "Dan_Acca"
},
{
"code": "",
"text": "Those other tools are trying to run commands(probably unnecessarily) to gather some information, either about the cluster or the databases/collections in the cluster.They may have options to disable these or not.Observing the client or server logs could show what commands they expect to run, and modify the role to allow these commands.",
"username": "chris"
}
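On a self-managed deployment, whoever administers the cluster could grant just those extra commands through a custom role; the sketch below is illustrative only (role name, database, collection, and the exact actions are assumptions, so check the server logs for the commands the tool actually attempts). On Atlas, the equivalent is a custom role defined through the Atlas UI or API rather than createRole:

```javascript
const admin = db.getSiblingDB("admin");

admin.createRole({
  role: "retoolLimitedAccess",
  privileges: [
    // lets the tool enumerate databases/collections during its handshake
    { resource: { cluster: true }, actions: ["listDatabases"] },
    { resource: { db: "reporting", collection: "" }, actions: ["listCollections"] },
    // read access to the one collection that actually needs querying
    { resource: { db: "reporting", collection: "reports" }, actions: ["find"] }
  ],
  roles: []
});

admin.grantRolesToUser("retoolUser", [{ role: "retoolLimitedAccess", db: "admin" }]);
```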
] | Connecting Retool to MongoDB - not authorized on <DATABASE> to execute command | 2023-08-15T17:50:11.552Z | Connecting Retool to MongoDB - not authorized on <DATABASE> to execute command | 364 |
null | [
"replication"
] | [
{
"code": "{\n device_id: \"\",\n shows: [\n show_id1, show_id2, show_id3\n ]\n}\n",
"text": "I have a personalisation service which generates 10 GB of data per hour in json format, the data is mostly got updated frequently with high traffic. What should be the best way to store these data on the DB?I tried writing it with batch updated, but even with 10% of our platform traffic its giving huge replica alerts.",
"username": "vicky_kumar4"
},
{
"code": "",
"text": "10GB per hour… too much.Are you sure you want to use a general purpose database to do this kind of ting? they are not designed for such heavy write.What kind of operaitons you do on those data? you ever search it? Creating indexes on so much data is also a big pain.maybe you want to try something like a distributed file system.",
"username": "Kobe_W"
},
{
"code": "",
"text": "10GB is largeish but this is completely able to be handled.Optimizations in schema and how data is updated can streamline updates. I.e. Rewriting a whole document vs updating specific fields.It is not specified if this is self hosted or Atlas. Identify the bottleneck and address it.In Atlas this will be selecting a higher tier, self-hosted adding more ram, faster disk.If scaling up is becomes prohibitive then scale out using sharing to distribute the load amoung multiple mongodb replicates.",
"username": "chris"
}
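To make the "update specific fields" point concrete against the schema shown at the top of this thread, a sketch (collection name and ids are placeholders): replacing the whole document resends the entire shows array and produces larger oplog entries for the secondaries to replay, while a targeted operator update only ships the delta, and many of them can be batched:

```javascript
// Whole-document rewrite: every change resends the full shows array.
db.personalisation.replaceOne(
  { device_id: "device-123" },
  { device_id: "device-123", shows: ["show_id1", "show_id2", "show_id3", "show_id4"] }
);

// Targeted update: only the delta is written and replicated.
db.personalisation.updateOne(
  { device_id: "device-123" },
  { $addToSet: { shows: "show_id4" } }
);

// Batching many devices into one round trip.
db.personalisation.bulkWrite([
  { updateOne: { filter: { device_id: "device-123" }, update: { $addToSet: { shows: "show_id4" } } } },
  { updateOne: { filter: { device_id: "device-456" }, update: { $addToSet: { shows: "show_id9" } } } }
], { ordered: false });
```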
] | Manage large update without replica lag | 2023-08-15T11:26:54.359Z | Manage large update without replica lag | 386 |