image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"swift"
]
| [
{
"code": " let userId = AnyBSON(mongoDBApp.currentUser!.id)\n \n collection?.insertOne(\n [\"userId\": userId,\n \"userName\": uName,\n \"userLastName\": uLastName,\n \"userEmail\": uEmail,\n \"userPhone\": uPhone\n ]\n ) { (result) in\n switch result {\n case .failure(let error):\n print(\"Failed to update: \\(error.localizedDescription)\")\n return\n case .success(let updateResult): \n print(\"worked: \")\n }\n }\n",
"text": "HelloI am trying to insert a document in Custom User Data in App Services\nI get error \"failed to update: insert not permitted.\nIs there any documentation about where to check access rights?\nShould I look in Rules in AppService/Data Access ? Or more places?My Collection has rules:\nDocument permission: Insert Delete Search\nField permissions: Read: All Write:All\nId: userIdDevelopment Mode is onSwift code:Thanks",
"username": "Per_Eriksson"
},
{
"code": "mongoDBApp.currentUser!.idapp.logincollection",
"text": "A comment and then a couple of questions;First, and I am sure you know, this is not a good ideamongoDBApp.currentUser!.idoptionals should be safely handled… they are optionals and could be nil. It may be fine in this case but just mentioning it.Let’s eleminate some simple things first:Is the userId valid?Is the code called in the closure following an app.login or some time later?Did you check the App Users section in the console to ensure that the field you’re using to map the data matches the structure in the question? It appears to be but making sure.How is the collection defined?",
"username": "Jay"
}
]
| Custom User Data - failed to update: insert not permitted | 2022-12-14T19:35:00.672Z | Custom User Data - failed to update: insert not permitted | 1,447 |
[
"queries"
]
| [
{
"code": "",
"text": "OK All, I see similar posts have been made a number of times but my question does not seem to be answered in any of those threads.I have the usual Query Targeting error, I get it at least daily, BUT, the profiler shows nothing and it notes that only “slow” operations are shown.Looking at the status graphs for my clusters I can see it is happening a lot.\n\nimage1161×393 18.9 KB\nBUT, what I cannot figure out for the life of me is what collections are causing the issue. Does anyone know of a way to isolate that information? Or any other more advanced troubleshooting than a) check the profiler b) check the metrics or c) add an indexI added indexes to the collections I thought might be the issue but the problem if anything seemed to get worse. So I am at a loss.",
"username": "Dan_Dickout"
},
{
"code": "",
"text": "I had planned to try and get help at the Mongo.local in Toronto today but we had an ice storm over night and decided not to risk the 2 hour drive.",
"username": "Dan_Dickout"
}
]
| How to figure out which collection is causing Query Targeting: Scanned Objects / Returned has gone above 1000 | 2022-12-15T16:01:21.435Z | How to figure out which collection is causing Query Targeting: Scanned Objects / Returned has gone above 1000 | 835 |
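The thread above never resolves how to find the offending collections when the Atlas Profiler shows nothing. One way to dig further is the database profiler itself, looking for operations whose docsExamined is far larger than nreturned. The mongosh sketch below is an illustration only: the threshold, database, and projection are assumptions, and some Atlas tiers restrict changing the profiling level.

```javascript
// Capture more operations by lowering the per-database "slow" threshold.
db.setProfilingLevel(1, { slowms: 20 })

// After some traffic, list recent operations that scanned many more documents
// than they returned; the `ns` field tells you which collection is responsible.
db.system.profile.find(
  { nreturned: { $gt: 0 } },
  { ns: 1, millis: 1, docsExamined: 1, nreturned: 1, planSummary: 1 }
).sort({ ts: -1 }).limit(20)

// Turn profiling back off when finished.
db.setProfilingLevel(0)
```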
|
null | [
"aggregation",
"golang"
]
| [
{
"code": "",
"text": "Hello,\nI am trying to search and filter names in a collection, and am using an aggregation pipeline to do this. However, I would like the results to be ordered according to a set of match priorities:For example, if I search for “ron”, I would like the results to be ordered as follows:Is there a clever way to do this using a single aggregation pipeline, or would I need to run multiple aggregation pipelines and then combine the results?Thanks.",
"username": "George_Kamel"
},
{
"code": "",
"text": "Hi @George_Kamel ,This sounds exactly how atlas search full text scoring is working.I believe the regex operator is what you looking into:Learn how to use a regular expression in your Atlas Search query.Have you tried using atlas and atlas search?Using plain aggregation would need to go through complex logic to operate the way a search engine does…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "pipeline = []\n\n/* first stage of the pipeline is a simple regex match for ron anywhere */\n_match = { \"$match\" : {\n \"name\" : { \"$regex\" : \"ron\" , \"$options\" : \"i\" }\n} }\n\npipeline.push( _match )\n\n/* then the magic stage, a $set that uses $cond to set 3 _sort_priorities for the 3 conditions.\n for demontration purpose I will use a simple $cond for the /^Ron / case */\n\n_sort_priorities = { \"$set\" : {\n \"_sort_priorities : { \"$cond\" : [\n { '$regexMatch': { input: '$name', regex: '^Ron ', options: 'i' } } ,\n 0 \n 1 , /* for other case we need another $cond for other cases */\n ] }\n} }\n\npipeline.push( _sort_priorities )\n\n/* then the final $sort that uses _sort_priotity */\n_sort = { \"$sort\" : {\n \"_sort_priority\" : 1 ,\n name : 1 \n} }\n\npipeline.push( _sort )\n",
"text": "As a challenge, as a learning experience and for people that cannot use Atlas search, I am trying to come up with something.Yes itcomplex logicbut the concept should work.The difficulty lies in the complex $cond that sets the appropriate _sort_priority. Much simpler with Atlas search but doable otherwise.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you @Pavel_Duchovny and @steevej for your replies.I wasn’t familiar with MongoDB Atlas before, so @Pavel_Duchovny thank you for pointing me in the right direction!@steevej, thank you for you elegant solution - it works brilliantly and have extended it for more complex sorting priorities. It will serve as an excellent interim solution until we are in a position to use MongoDB Atlas.",
"username": "George_Kamel"
},
{
"code": "",
"text": "After implementing the solution in Go (using the latest version of the official MongoDB driver), it seems the driver does not recognise $regexMatch within $cond (it works perfectly fine in MongoDB Compass). As a workaround, I’m just wondering how I could assign the $regexMatch result to a variable outside of $cond, and use $eq in $cond to do the check?",
"username": "George_Kamel"
},
{
"code": "bson.D{\n {\"$regexMatch\",\n bson.D{\n {\"input\", \"$name\"},\n {\"regex\", primitive.Regex{Pattern: \"ron\"}},\n {\"options\", \"i\"},\n },\n },\n },\n",
"text": "Hi @George_Kamel ,I think latest compass aggregation tab have an “Export to Language” for Go. Also maybe the beta has also advanced language syntax It recommends me to use:Have you tried the compass syntax?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "$regexMatch$regex$condbson.D{\n\t{\n\t\tKey: \"$addFields\",\n\t\tValue: bson.D{\n\t\t\tbson.E{\n\t\t\t\tKey: \"match_whole_name\",\n\t\t\t\tValue: bson.D{\n\t\t\t\t\t{\n\t\t\t\t\t\tKey: \"$cond\",\n\t\t\t\t\t\tValue: bson.M{\n\t\t\t\t\t\t\t\"if\": bson.D{\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tKey: \"$regexMatch\",\n\t\t\t\t\t\t\t\t\tValue: bson.D{\n\t\t\t\t\t\t\t\t\t\t{Key: \"input\", Value: \"$name\"},\n\t\t\t\t\t\t\t\t\t\t{Key: \"regex\", Value: primitive.Regex{Pattern: \"\\bRonald\\b\"}},\n\t\t\t\t\t\t\t\t\t\t{Key: \"options\", Value: \"i\"},\n\t\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t},\n\t\t\t\t\t\t\t\"then\": 0,\n\t\t\t\t\t\t\t\"else\": 1,\n\t\t\t\t\t\t},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t},\n}\n$cond$eq$cond",
"text": "Hi @Pavel_Duchovny,\nYes, all the code I used was based on the “Export to Language” feature, but the issue is that neither $regexMatch nor $regex is being recognised by the mongo driver inside $cond. This is the code of the specific pipeline stage with the issue:I raised a separate thread here about the regexMatch not recognised issue - it seems to be a bug / lack of support in the driver? If so, then I was wondering if I could use some workaround by setting a variable outside $cond and checking this with $eq inside the $cond?Thanks,\nGeorge",
"username": "George_Kamel"
},
{
"code": "",
"text": "@George_KamelThere is a new $let stage:",
"username": "Pavel_Duchovny"
},
{
"code": "{ a : 1 , b : 2 }\n{ '$and': [ { a: { '$eq': 1 } }, { b: { '$eq': 2 } } ] }\n$cond$eq$cond{ \"$set\" : {\n \"_regex_match\" : {\n \"input\" : \"$name\" ,\n \"regex\" : \"\\\\bRonald\\\\b ,\n \"options\" : \"i\"\n }\n} }\n",
"text": "in Go (using the latest version of the official MongoDB driver), it seems the driver does not recognise $regexMatch within $condThat is very surprising because in principal pipelines and queries are sent over and and run by the server. Yes, in principal, the driver in some occasion modify the query. For example, the short-cut querymight be sent asBut it is still possible that the driver has something to do with it.if I could use some workaround by setting a variable outside $cond and checking this with $eq inside the $condThat is the beauty of the concept of a pipeline, you may add stages to simplify followings. I often do that as it also helps debugging as you may set fields that are kept so you may find where things fails. So yes you may try to evaluated $regexMatch in a separate stage. Since I am not very go fluent, try this js version:Then in your $cond of the next stage you simply use $_regex_match as the expression.In case, you did not notice, I used 2 backslashes as it is needed in Java and JS. I do not know about go.",
"username": "steevej"
}
]
| Run aggregation pipeline and sort results by a set of match priorities | 2022-12-09T10:44:50.980Z | Run aggregation pipeline and sort results by a set of match priorities | 3,456 |
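For readers who want the approach from the thread above in one place, the per-match priority can also be assigned with a single $switch of $regexMatch conditions instead of nested $cond expressions. This is a sketch in mongosh; the collection name, field name, and priority rules are assumptions, not taken from the original poster's data.

```javascript
db.people.aggregate([
  // Keep only documents whose name contains "ron" anywhere (case-insensitive).
  { $match: { name: { $regex: "ron", $options: "i" } } },
  // Rank the kind of match: word at the start of the name, then whole word, then substring.
  { $set: {
      _sort_priority: {
        $switch: {
          branches: [
            { case: { $regexMatch: { input: "$name", regex: "^ron\\b", options: "i" } }, then: 0 },
            { case: { $regexMatch: { input: "$name", regex: "\\bron\\b", options: "i" } }, then: 1 }
          ],
          default: 2
        }
      }
  } },
  { $sort: { _sort_priority: 1, name: 1 } },
  // The helper field is only needed for sorting, so drop it from the output.
  { $unset: "_sort_priority" }
])
```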
[
"compass",
"connecting"
]
| [
{
"code": "",
"text": "I am not seeing any databases in compass after connecting by connection string. There is no error,\nBut no dbs or collections neither. It’s totally empty.Below is the image of Atlas, Compass, Conncection info accordingly. As I am a new user I could not upload three separate photos\n\ndownload1940×3280 426 KB\n",
"username": "Hasanuzzaman_Hasan"
},
{
"code": "",
"text": "What are the permissions the db user you are using in Compass? Can that user list collections?",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "I have only one user to that database. I used that user to connect to compass. This user is the project owner",
"username": "Hasanuzzaman_Hasan"
},
{
"code": "",
"text": "I have found the solution. I just needed to set the built-in role to admin",
"username": "Hasanuzzaman_Hasan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongodb compass not showing databases from Atlas | 2022-12-15T12:40:36.083Z | Mongodb compass not showing databases from Atlas | 2,995 |
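As a follow-up to the permissions question in the thread above, the connectionStatus command is a quick way to see what the authenticated user can actually do; this sketch assumes you connect with the same connection string used in Compass.

```javascript
// Shows the authenticated user, their roles and, with showPrivileges, the resolved
// privileges. Look for actions such as "listDatabases", "listCollections" and "find"
// on the databases you expect Compass to display.
db.runCommand({ connectionStatus: 1, showPrivileges: true })
```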
|
[
"dot-net"
]
| [
{
"code": "",
"text": "Hi!\nIm trying to use the atlas search package in C#,\nThis is my C# code using autocomplete:\n\nimage765×76 12.2 KB\n\nAnd im getting this error:\n\nimage1540×43 8.31 KB\nAlso if possible, how do i use Fuzzy in C# autocomplete?Couldn’t found any of those mentioned options anywhere on internet.\nMuch appreciate the help and the time helping me!",
"username": "Henrique_Shoji"
},
{
"code": "ProfileNametypeAutocomplete",
"text": "The error suggests that you need to modify your search index to ensure that ProfileName is of type Autocomplete. Could you share your index definition and the full C# query code rather than a picture?",
"username": "Marcus"
},
{
"code": "",
"text": "I fixed this problem by changing the default index using the following JSON:\n{\n“mappings”: {\n“dynamic”: false,\n“fields”: {\n“ProfileName”: {\n“foldDiacritics”: false,\n“maxGrams”: 8,\n“minGrams”: 3,\n“type”: “autocomplete”\n}\n}\n}\n}Now im trying to use sort by textScore and use fuzzy, how do i use fuzzy on autocomplete or regex?",
"username": "Henrique_Shoji"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Atlas search Index in C# | 2022-12-14T19:26:54.747Z | Atlas search Index in C# | 1,745 |
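The fuzzy question in the thread above is left open, so here is a rough mongosh sketch of an autocomplete search with the fuzzy option; the index name, collection, query string, and fuzzy settings are assumptions, and the same shape can then be translated to the C# driver.

```javascript
db.profiles.aggregate([
  {
    $search: {
      index: "default",
      autocomplete: {
        path: "ProfileName",
        query: "henriqe",                       // deliberately misspelled
        fuzzy: { maxEdits: 1, prefixLength: 1 } // tolerate one edit after the first character
      }
    }
  },
  { $limit: 10 },
  // Results come back ordered by relevance; expose the score for inspection.
  { $project: { ProfileName: 1, score: { $meta: "searchScore" } } }
])
```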
|
null | [
"aggregation"
]
| [
{
"code": "abcd: {\n code: \"A1\",\n weekOne: ObjectID(),\n weekTwo: ObjectID(),\n weekThree: ObjectID(),\n weekFour: ObjectID(),\n}\n {\n $lookup: {\n from: \"programmings\",\n localField: \"weekOne\",\n foreignField: \"_id\",\n pipeline: [\n {\n $project: {\n _id: 0,\n sets: 1,\n setRest: 1,\n reps: 1,\n repRest: 1\n },\n }\n ],\n as: \"weekOne\",\n }\n },\n { $unwind: \"$weekOne\" },\nabcd: {\n code: \"A1\",\n weekOne: { sets, setRest, reps, repRest },\n weekTwo: { sets, setRest, reps, repRest },\n weekThree: { sets, setRest, reps, repRest },\n weekFour: { sets, setRest, reps, repRest },\n}\nabcd: {\n superset: \"A\", \n supersetIndex: 1,\n valuesThatVaryByWeek: [\n 0: { week: 1,\n sets, setRest, reps, repRest },\n 1: {week: 2,\n sets, setRest, reps, repRest },\n 2: {week: 3,\n sets, setRest, reps, repRest },\n 3: {week: 4,\n sets, setRest, reps, repRest }\n ]\n}\n",
"text": "I’m working within an aggregation pipeline (as there are various other things I need to do). I have a part of my document that starts off like…I use the following $lookup …to give me something that looks like this…which is close, but I want to have:How do I achieve this?",
"username": "Oliver_Browne"
},
{
"code": "$addFields {\n $lookup: {\n from: \"programmings\",\n localField: \"weekOne\",\n foreignField: \"_id\",\n pipeline: [\n {\n $project: {\n _id: 0,\n sets: 1,\n setRest: 1,\n reps: 1,\n repRest: 1\n },\n },\n {\n $addFields: {\n week: 1\n }\n }\n ],\n as: \"weekOne\",\n }\n },\n { $unwind: \"$weekOne\" },\n ...move onto $lookup for weekTwo, etc...\n ..finish with a $project step to get these into the parent object\ncodesupersetsupersetIndex$substr$projectweekOne$project {\n $project: {\n _id:0,\n superset: { $substr: [ \"$code\", 0, 1] },\n supersetIndex: { $substr: [ \"$code\", 1, 1] },\n exercise: \"$exercise\",\n valuesThatVaryByWeek: [\n \"$weekOne\",\n \"$weekTwo\",\n \"$weekThree\",\n \"$weekFour\"\n ]\n }\n",
"text": "Okay, so, turns out writing the question out for y’all helped me change how I think about it, and get an answer. Typical For adding the week number, I put that into an $addFields step, immediately after the relevant $project, e.g. in the $lookup snippet I showed above, it became…Then for the splitting of the code field to make new fields superset and supersetIndex I used a $substr and added new fields to my final $project step. That made me realise I could also do the same for getting the weekOne etc to be a happy array. So my final $project step now looks like…Thanks for listening ether, and I’ll leave this here in case it helps anyone else ",
"username": "Oliver_Browne"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to replace existing object fields with a new array field | 2022-12-15T11:26:04.066Z | How to replace existing object fields with a new array field | 913 |
null | []
| [
{
"code": "",
"text": "When will MongoDB v6 be available on M0 free/shared clusters?",
"username": "Ian"
},
{
"code": "",
"text": "Hi @Ian,MongoDB doesn’t communicate any exact timelines as this may change. But typically, free tier (M0) and shared tiers (M2/M5) are upgraded to the new major version soon after it becomes the default Atlas version.You are now welcome to speculate based on the last time M0/M2/M5 were upgrade to 5.0 and when 5.0 was released (Jul 13, 2021).Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| MongoDB 6 on M0 free clusters | 2022-12-13T15:06:25.653Z | MongoDB 6 on M0 free clusters | 1,236 |
null | [
"replication",
"containers"
]
| [
{
"code": "",
"text": "Hey guys,I want to use MongoDB in Docker for my local development and MongoDB Atlas for production. At this moment, MongoDB Atlas Free doesn’t provide a possibility to choose MongoDB 6.0.3 and locked to use 5.0.x.As far as I understand, it is okay to use a MongoDB replica set with only one primary node for local development.Is there any official guide that helps me to configure it with Docker?",
"username": "Roman_Mahotskyi"
},
{
"code": "",
"text": "developmentHello, @Roman_Mahotskyi , you can try to follow this : Docker & MongoDB | Containers & Compatibility | MongoDB",
"username": "Alex_Maxime"
},
{
"code": "",
"text": "Hey @Alex_Maxime, I just realized that I didn’t mention transactions in my question. Do you know how to set-up Docker + MongoDB + transactions?P.S. Some people on the internet are saying that MongoDB + Transactions can be used in a replica set with only the primary node? How can I do this?",
"username": "Roman_Mahotskyi"
},
{
"code": "rs.initiate()PRIMARYdocker run --name some-mongo -p 27017:27017 mongo --replSet rsname\nmongoshrs.initiate()",
"text": "Hi @Roman_Mahotskyi welcome to the community!You can follow the instructions in Convert a Standalone to a Replica Set. Basically after executing rs.initiate(), you don’t add more nodes to the replica set. By default, it should progress as a single node replica set as PRIMARY.As with using Docker, this seems to work for me:You should be able to connect to your host’s port 27017 using mongosh, then execute rs.initiate() there.Note that this is a very basic example, so its storage is managed by Docker. If your needs are more specific, please see more detailed examples in DockerHaving said that, please note that the Docker Hub’s official MongoDB image is not maintained by MongoDB. Rather, it’s maintained by Docker.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to use MongoDB for local development? | 2022-12-11T20:54:52.442Z | How to use MongoDB for local development? | 2,634 |
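To confirm that the single-node replica set from the thread above really supports transactions, a quick check can be run from mongosh. This is only a sketch: the database and collection names are placeholders, and on servers older than 4.4 the collection should already exist before the transaction starts.

```javascript
// After starting the container, run rs.initiate() once, then:
const session = db.getMongo().startSession()
const coll = session.getDatabase("test").getCollection("txn_check")

session.startTransaction()
try {
  coll.insertOne({ createdAt: new Date() })
  coll.updateOne({}, { $set: { checked: true } })
  session.commitTransaction()   // both writes become visible together
} catch (err) {
  session.abortTransaction()
  throw err
} finally {
  session.endSession()
}
```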
null | [
"swift"
]
| [
{
"code": "async/await@MainActor",
"text": "The Realm Swift documentation mentions this repeatedlyIf your app accesses Realm in an async/await context, mark the code with @MainActor to avoid threading-related crashes.But there are no examples of where/when to use @MainActor.Is there a best practice of how to integrate @MainActor with async/await calls? Maybe some code examples would help clarify the usage.Jay",
"username": "Jay"
},
{
"code": "",
"text": "Would love to see some practical examples too.",
"username": "Sonisan"
}
]
| @MainActor Usage and Placement | 2022-11-05T14:10:24.992Z | @MainActor Usage and Placement | 1,413 |
null | [
"node-js",
"mongoose-odm"
]
| [
{
"code": "const TestRunSchema: Schema = new Schema(\n\n {\n\n testRun: {\n\n type: Array,\n\n testcases: [\n\n {\n\n name: { type: String },\n\n url: { type: String },\n\n devices: { type: [String] },\n\n userAgent: { type: String },\n\n viewport: {\n\n width: { type: Number },\n\n height: { type: Number }\n\n },\n\n isMobile: { type: Boolean },\n\n hasTouch: { type: Boolean },\n\n browser: { type: String },\n\n fullPage: { type: Boolean },\n\n element: {\n\n toScreenshot: { type: String },\n\n toClick: { type: String },\n\n toHover: { type: String }\n\n }\n\n }\n\n ]\n\n }\n\n },\n\n);\nconst TestCaseSchema: Schema = new Schema([\n\n {\n\n testrunId: {\n\n type: mongoose.Schema.Types.ObjectId,\n\n ref: 'TestRun'\n\n },\n\n name: String,\n\n url: String,\n\n devices: [{type: String}],\n\n userAgent: {type: String},\n\n viewport: {\n\n width: {type: Number},\n\n height: {type: Number}\n\n },\n\n isMobile: {type: Boolean},\n\n hasTouch: {type: Boolean},\n\n browser: {type: String},\n\n fullPage: {type: Boolean},\n\n element: {\n\n toScreenshot: {type: String},\n\n toClick: {type: String},\n\n toHover: {type: String}\n\n }\n\n },\n\n {\n\n versionKey: false\n\n }\n\n]);\ntestrunId: {\n\n type: mongoose.Schema.Types.ObjectId,\n\n ref: 'TestRun'\n\n },\n",
"text": "Hello everybody,\nI have two schemas. The first looks as follows:The second looks as follows:You can interpret the TestRun as whole and the TestCases as subset of TestRun. When I’m saving the testrun into the database into the testrun collection the testcases will also be saved into the testcases collection as own object per testcase.\nWhat I want to do is to add the objectId from the Testrun document into each testcase object or in other words I want to reference the objectId from the testrun to each testcase. Is it possible somehow?From what I read so far I thought something like this could do the work.But this didn’t work.I’m a absolute beginner, so maybe I’m getting something totally wrong.Any help is really appreciated.Best regards",
"username": "CHH_N_A"
},
{
"code": "const mongoose = require('mongoose');\nconst { Schema } = mongoose;\n\nconst personSchema = Schema({\n _id: Schema.Types.ObjectId,\n name: String,\n age: Number,\n stories: [{ type: Schema.Types.ObjectId, ref: 'Story' }]\n});\n\nconst storySchema = Schema({\n author: { type: Schema.Types.ObjectId, ref: 'Person' },\n title: String,\n fans: [{ type: Schema.Types.ObjectId, ref: 'Person' }]\n});\n\nconst Story = mongoose.model('Story', storySchema);\nconst Person = mongoose.model('Person', personSchema);\n",
"text": "Hello @CHH_N_A ,I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, could you please confirm if my understanding of your use case is correct?You are trying to refer the objectId of documents of TestCases collection into documents of TestRun collection.\nIf this is correct then you can try using populate().Population is the process of automatically replacing the specified paths in the document with document(s) from other collection(s). We may populate a single document, multiple documents, a plain object, multiple plain objects, or all objects returned from a query. Let’s look at an example.In case you have any more queries regarding this, then please provide some sample documents and example scenario to discuss further.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to reference the objectId from one collection/schema to another? | 2022-12-06T15:23:04.413Z | How to reference the objectId from one collection/schema to another? | 12,406 |
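Tying the populate() answer above back to the original question (stamping the TestRun's ObjectId onto every TestCase), the save path could look roughly like the sketch below; the model and field names come from the question, but the surrounding function is an assumption.

```javascript
// Save the parent first, then copy its _id onto each child before inserting them.
async function saveTestRun(testRunDoc, testcases) {
  const run = await TestRun.create(testRunDoc);

  const casesWithRef = testcases.map((tc) => ({
    ...tc,
    testrunId: run._id, // reference back to the parent TestRun document
  }));
  await TestCase.insertMany(casesWithRef);
  return run;
}

// Later the reference can be resolved in a single query:
// const cases = await TestCase.find({ testrunId: someRunId }).populate('testrunId');
```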
null | [
"queries"
]
| [
{
"code": "",
"text": "How can I lock a document from reading operation for update later in MongoDB, likes SELECT * FROM mytable WHERE id = 123 FOR UPDATE; in RDBMS",
"username": "VIRGO_ROMANIA"
},
{
"code": "@VersionOptimisticLockingFailureException",
"text": "10.5.10. Optimistic LockingThe @Version annotation provides syntax similar to that of JPA in the context of MongoDB and makes sure updates are only applied to documents with a matching version. Therefore, the actual value of the version property is added to the update query in such a way that the update does not have any effect if another operation altered the document in the meantime. In that case, an OptimisticLockingFailureException is thrown. The following example shows these features:",
"username": "psram"
},
{
"code": "",
"text": "There is a way for Pessimistic ?",
"username": "VIRGO_ROMANIA"
},
{
"code": "",
"text": "Hi @VIRGO_ROMANIA welcome to the community!There is a blog post for exactly this: How To SELECT ... FOR UPDATE inside MongoDB Transactions | MongoDB BlogBest regards\nKevin",
"username": "kevinadi"
}
]
| How can I lock a document from reading operation for update later in MongoDB | 2022-12-14T08:09:00.209Z | How can I lock a document from reading operation for update later in MongoDB | 2,904 |
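The blog post linked above boils down to one pattern: inside a multi-document transaction, write to the document you want to "lock" before using it, so a concurrent transaction touching the same document aborts with a write conflict and can retry. A mongosh sketch of that idea follows; the collection, filter, and lock field are placeholders, not part of any official API.

```javascript
const session = db.getMongo().startSession()
const items = session.getDatabase("mydb").getCollection("mytable")

session.startTransaction()
try {
  // Touching the document first behaves like SELECT ... FOR UPDATE: another
  // transaction that modifies the same document will fail with a WriteConflict.
  const doc = items.findOneAndUpdate(
    { _id: 123 },
    { $set: { lockedAt: new Date() } },
    { returnNewDocument: true }
  )

  // ... read `doc`, compute, and apply the real update here ...

  session.commitTransaction()
} catch (err) {
  session.abortTransaction()
  throw err
} finally {
  session.endSession()
}
```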
[
"node-js",
"mongoose-odm"
]
| [
{
"code": "(function () {\n\n var mongoose = require(\"mongoose\"),\n\n FileSchema = mongoose.Schema({\n\n url: {\n\n type: String,\n\n maxlength: 256,\n\n },\n\n filename: {\n\n type: String,\n\n maxlength: 128,\n\n },\n\n fileType: {\n\n type: String,\n\n maxlength: 50,\n\n },\n\n }),\n\n CreateBySchema = mongoose.Schema({\n\n userId: {\n\n type: String,\n\n maxlength: 24,\n\n },\n\n avatar: {\n\n type: String,\n\n maxlength: 256,\n\n },\n\n username: {\n\n type: String,\n\n maxlength: 50,\n\n },\n\n }),\n\n ChargeSchema = mongoose.Schema({\n\n id: {\n\n type: String,\n\n maxlength: 50,\n\n },\n\n amount: Number,\n\n description: {\n\n type: String,\n\n maxlength: 100,\n\n },\n\n created: Number,\n\n }),\n\n itemSchema = mongoose.Schema({\n\n title: {\n\n type: String,\n\n maxlength: 100,\n\n },\n\n businessName: {\n\n type: String,\n\n maxlength: 50,\n\n },\n\n files: {\n\n type: [FileSchema],\n\n default: undefined,\n\n },\n\n modifiedDate: {\n\n type: Date \n\n },\n\n createdBy: CreateBySchema,\n\n tags: {\n\n type: [\n\n {\n\n type: String,\n\n maxlength: 30,\n\n },\n\n ],\n\n validate: [arrayLimit, \"{PATH} exceeds the limit of 5\"],\n\n default: undefined,\n\n },\n\n needs: {\n\n type: [\n\n {\n\n type: String,\n\n maxlength: 30,\n\n },\n\n ],\n\n validate: [arrayLimit, \"{PATH} exceeds the limit of 5\"],\n\n default: undefined,\n\n },\n\n wage: Number,\n\n categories: {\n\n type: [\n\n {\n\n type: String,\n\n maxlength: 50,\n\n },\n\n ],\n\n default: undefined,\n\n },\n\n noOfPoints: Number,\n\n noOfComments: Number,\n\n noOfViews: Number,\n\n hasUpvoted: Boolean,\n\n hasReported: Boolean,\n\n price: Number,\n\n address: {\n\n type: String,\n\n maxlength: 100,\n\n },\n\n address2: {\n\n type: String,\n\n maxlength: 50,\n\n },\n\n zipcode: {\n\n type: String,\n\n maxlength: 10,\n\n },\n\n city: {\n\n type: String,\n\n maxlength: 40,\n\n },\n\n state: {\n\n type: String,\n\n maxlength: 20,\n\n },\n\n country: {\n\n type: String,\n\n maxlength: 20,\n\n },\n\n noOfEmployees: Number,\n\n noOfChairs: Number,\n\n noOfTables: Number,\n\n contactPhoneNo: {\n\n type: String,\n\n maxlength: 18,\n\n },\n\n contactEmail: {\n\n type: String,\n\n maxlength: 50,\n\n },\n\n income: Number,\n\n rentCost: Number,\n\n otherCost: Number,\n\n leaseEnd: Date,\n\n yearOld: Number,\n\n area: Number,\n\n duration: Number,\n\n overview: {\n\n type: String,\n\n maxlength: 180,\n\n },\n\n description: {\n\n type: String,\n\n maxlength: 2000,\n\n },\n\n charge: ChargeSchema,\n\n status: {\n\n type: String,\n\n maxlength: 20,\n\n },\n\n geometry: Object,\n\n expired: {\n\n type: Boolean,\n\n },\n\n refundable: {\n\n type: Boolean,\n\n },\n\n coupon: {\n\n appliedCoupon: Boolean,\n\n discount: Number,\n\n },\n\n isSpecial: {\n\n type: Boolean,\n\n },\n\n });\n\n function arrayLimit(val) {\n\n return val.length <= 5;\n\n }\n\n itemSchema.index({ modifiedDate: -1, tags: 1 });\n\n //TTL of modifiedDate.\n\n //157680000 is 5years , 94608000 is 3 years, 63072000 is 2 years , 34190000 is 13 months\n\n module.exports = mongoose.model(\"item\", itemSchema);\n\n})();\n",
"text": "Can anyone help me why after I insert a record into collection there tons of empty records also being inserted right after that. I don’t know what did I do wrong. Here is the picture\nimage700×663 26.7 KB\nand here is my mongoose model",
"username": "Hai_Nguyen"
},
{
"code": "",
"text": "Your title says Mongo keep inserting empty values but I would be very surprised if Mongo does that.\nMore likely it is an issue in your code. You shared your mongoose schema but it is of no use for us to help you. We need to see your code where youinsert a record into collectionWe also need so idea about the architecture of your application. Perhaps you have to other process running creating the extra documents.",
"username": "steevej"
},
{
"code": " router.post(\"/svc/business/create\", middleware.isValidUser, (req, res) => {\n var item = req.body;\n validate(req.user, item)\n .then((isValid) => {\n if (!isValid) {\n return res.status(status.INTERNAL_SERVER_ERROR).json(\"Invalid form\");\n }\n return itemSvc.addItem(item);\n })\n .then((newItem) => {\n return res.status(status.OK).json(newItem);\n })\n .catch((err) => {\n // console.log(\"Business/create err: \");\n // console.log(err);\n return res.status(status.INTERNAL_SERVER_ERROR).json(err);\n });\n });\n\n\nasync function validate(user, item) {\n let charge = item.charge;\n let price = 2000;\n if (item.coupon && item.coupon.appliedCoupon) {\n //make sure discount is correct\n if (item.coupon.discount !== 0.5) {\n return false;\n }\n price = price * item.coupon.discount;\n }\n let duration = Number(item.duration);\n if (user.role !== \"ADMIN\" && duration === 0.5) {\n let existItem = await itemSvc.getOneItem({\n \"createdBy.userId\": user.id,\n duration: 0.5,\n });\n if (existItem) {\n return false;\n }\n }\n if (duration === 1 && charge.amount !== price) {\n return false;\n }\n if (duration === 3 && charge.amount !== price * 2) {\n return false;\n }\n if (duration === 6 && charge.amount !== price * 3) {\n return false;\n }\n //duration\n if (duration === 24 && charge.amount !== price * 4) {\n return false;\n }\n if (\n duration !== 0.5 &&\n duration !== 1 &&\n duration !== 3 &&\n duration !== 6 &&\n duration !== 24\n ) {\n return false;\n }\n if (\n !item.files ||\n item.files.length > 10 ||\n item.files.length <= 0 ||\n (item.tags && item.tags.length > 5) ||\n (item.categories && item.categories.length > 20)\n ) {\n return false;\n }\n item.tags = item.tags.slice(0, 5);\n item.createdBy = {\n userId: user.id,\n avatar: user.avatar,\n username: user.username,\n rank: user.rank,\n noOfFollowers: user.noOfFollowers,\n };\n item.modifiedDate = moment().format();\n item.noOfPoints = 0;\n item.noOfSeens = 0;\n item.noOfShares = 0;\n item.noOfComments = 0;\n item.status = \"NEW\";\n return true;\n }\n\n//item services \n function addItem(item) {\n return new Promise((resolve, reject) => {\n Item.create(item, (err, item) => {\n if (err) {\n return reject(err);\n }\n return resolve(item);\n });\n });\n }\n function getRandomItems(conditions) {\n return new Promise((resolve, reject) => {\n Item.aggregate()\n .match(conditions)\n .sample(50)\n .exec((err, items) => {\n if (err) {\n return reject(err);\n } else {\n return resolve(\n items.filter((item) => {\n return !isExpired(item);\n })\n );\n }\n });\n });\n",
"text": "Hi @steevej , thanks for helping me, here is my insert functionI deployed all the functions to firebase.Also i have this code running to get random sample of items. Wonder if this could cause the issue",
"username": "Hai_Nguyen"
},
{
"code": "",
"text": "Nothing obvious in that.Do you have some code that counts the number of times the item is viewed? A GET route or something like that.Your first post had a screenshot that indicates that your document had 13 more fields. Can you share those 13 fields?",
"username": "steevej"
},
{
"code": " router.post(\"/svc/business/:id/upview\", (req, res) => {\n var condition = {\n _id: req.params.id,\n };\n itemSvc\n .getItemByIdAndIncreaseView(condition)\n .then((item) => {\n return processOne(req, res, item);\n })\n .then((result) => {\n return res.status(status.OK).json(result);\n })\n .catch((err) => {\n return res.status(status.INTERNAL_SERVER_ERROR).json(err);\n });\n });\n\n function getItemByIdAndIncreaseView(item) {\n return updateItem(\n item,\n { $inc: { noOfViews: 1 } },\n { upsert: true, new: true }\n );\n }\n\n function updateItem(conditions, newInfo, options) {\n return new Promise((resolve, reject) => {\n Item.findOneAndUpdate(conditions, newInfo, options, (err, newItem) => {\n if (err) {\n return reject(err);\n }\n return resolve(newItem);\n });\n });\n }\n",
"text": "@steevej OMG yes, i have that, that could be the problem\nhere is the code",
"username": "Hai_Nguyen"
},
{
"code": "var condition = {\n _id: req.params.id,\n };\nupdateItem(\n item,\n { $inc: { noOfViews: 1 } },\n { upsert: true, new: true }\n )\n",
"text": "Your bug is inandYou receive req.params.id as a string but it is stored as an ObjectId. So when you updateItem() no item is ever found. But upsert is true so a new item is created with noOfViews:1 as the only field. I am also not too sure if new:true is a valid option.",
"username": "steevej"
},
{
"code": "",
"text": "OMG thank you so much for pointing this out. I’m new to Mongo so I am really stuck here. Once again thanks so much",
"username": "Hai_Nguyen"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Code keep inserting empty values | 2022-12-13T16:51:44.969Z | Code keep inserting empty values | 2,714 |
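For completeness, the fix that follows from the diagnosis in the thread above is to stop upserting in the view counter and to match on an ObjectId rather than the raw route string; this is a sketch using the names from the thread, not a drop-in replacement.

```javascript
const mongoose = require("mongoose");
// `Item` is the mongoose model defined earlier in the thread.

function getItemByIdAndIncreaseView(id) {
  return Item.findOneAndUpdate(
    { _id: new mongoose.Types.ObjectId(id) }, // match the stored ObjectId, not a string
    { $inc: { noOfViews: 1 } },
    { new: true }                             // return the updated doc; without upsert,
                                              // a miss can never create an empty item
  );
}
```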
|
null | [
"serverless"
]
| [
{
"code": "",
"text": "Hello, I was wondering if you get charged for deleting documents in MongoDB Atlas Serverless. I only saw information regarding writes and reads.",
"username": "Daniel_Brunner"
},
{
"code": "",
"text": "Hey Daniel,You could conceptualize deletes as “inserts” of empty documents, so deleting documents are indeed charged; they incur WPUs. Hope this helps.Best,\nChris\n- Atlas product team",
"username": "Christopher_Shum"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Delete Cost for Serverless | 2022-12-13T06:03:52.145Z | Delete Cost for Serverless | 1,760 |
null | []
| [
{
"code": "",
"text": "I had an old organization with its cluster and populated data, when I want to come back, I get this error\n“{ message: “Error Occurred” }” and there is no traces of the database, how can I know why it was deleted, when, etc.? Thank you.",
"username": "Kevin_Martellotti"
},
{
"code": "Organization Owner",
"text": "Hi @Kevin_Martellotti,I believe contacting the Atlas support team via the in-app chat may be better for this particular question.In saying so, only a user with the Organization Owner role can delete an Org (in which they have this role for).Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| An organizaiton was deleted, how do I know who did it, and when? | 2022-12-14T22:55:31.833Z | An organizaiton was deleted, how do I know who did it, and when? | 837 |
null | []
| [
{
"code": "",
"text": "Dear all =)There are very few guides about running MongoDB in OpenShift. Is that because it is not recommended?Hugs,\nSandra =)",
"username": "Sandra_Schlichting"
},
{
"code": "",
"text": "May be, just may be there is only a few guides because OpenShift is based (afaik) on k8s/docker. So there is nothing special about it.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @Sandra_Schlichting ,As @steevej mentioned, OpenShift is an enterprise Kubernetes platform so there aren’t really OpenShift-specific configuration details for MongoDB. Red Hat is a MongoDB partner, so there are supported deployment solutions.For more information, see:The docs above refer to the MongoDB Enterprise Kubernetes Operator. There is also a MongoDB Community Kubernetes Operator which may be relevant depending on your use case.A few more helpful ops resources (not specific to OpenShift):If you’re looking for information that does not appear to be covered in these resources, please let us know.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is MongoDB in OpenShift recommended? | 2022-12-11T22:44:30.317Z | Is MongoDB in OpenShift recommended? | 996 |
null | []
| [
{
"code": "",
"text": "Hi,I have an installation with three shards (PSA) of MongoDB 4.2. I have to reinstall OS on the servers from Ubuntu 16.04 to 22.04. After I completed the first server from the first shard I had to rsync all data from another member of this shard. Partition on which database is stored has XFS filesystem. The same as previously. After that mongod process started and everything seemed to be ok. However, I noticed very high disk utilisation on the reinstalled server. All other servers have about 5-15% of disk utilisation and this one have about 76%. iotop shows some mongod process called ApplyBa.Journal which creates almost all the load. I wasn’t able to find any information about this process.\nI checked the other server from the same shard (and other shards) and there is no such process so it seems that for some reason it runs only on this “fresh” server.I’m looking for some information about this process. What it does exactly (it is quite obvious that it’s related to journal) and if/how it is possible to do anything with it to decrease disk utilisation.",
"username": "luso"
},
{
"code": "mongod",
"text": "Hi @luso welcome to the community!Note that as per the production notes, Ubuntu 22 is not officially supported yet.Having said that, are you seeing the high IO utilization consistently, or does it taper off to normal levels after a while? The mongod logs may be able to provide further clues as to what is happening. Do you see this only in one Ubuntu 22 installation, or do you have other Ubuntu 22 installations that do not exhibit this behaviour?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi kevinadi,High IO utilization is consistent since the process started with copied data. Before that it was going through initial sync and IO was high but not that high.\nimage2068×370 107 KB\nI have one more installation that run mongoDB 4.4.18 on the same OS version without any issues and process ApplyBa.Journal doesn’t appear on that installation.\nI thougt it might be related to sharding but there is another mongod instance on the same server which is a member of config server replicaset and I noticed that it also runs the same process.I checked logs of both mongod and mongod-cfgsrv instances and they do not indicate any problems.",
"username": "luso"
},
{
"code": "",
"text": "Here is a screenshot from iotop.\nimage2082×472 143 KB\n",
"username": "luso"
},
{
"code": "mongodmongod",
"text": "there is another mongod instance on the same server which is a member of config server replicasetIf you’re running multiple mongod processes within the same machine, note that this is strongly not recommended since: 1) it can result in resource contention and, 2) can have difiicult to diagnose issues.I have one more installation that run mongoDB 4.4.18 on the same OS version without any issues and process ApplyBa.Journal doesn’t appear on that installation.If I understand correctly, this is only happening on MongoDB 4.2 series on Ubuntu 22, and only in this particular hardware. Is this correct?The MongoDB 4.4.18 installation that doesn’t exhibit this issue, is this on the same hardware, or on different hardware? It is a known possibility that a bad hardware could cause this unexplained disk utilization.I would try to troubleshoot this in this manner:Best of luck, hopefully this helps!Best regards\nKevin",
"username": "kevinadi"
}
]
| Mongod high disk utilisation by ApplyBa.Journal process after upgrading OS from Ubuntu 16.04 LTS to 22.04 LTS | 2022-12-13T14:09:54.176Z | Mongod high disk utilisation by ApplyBa.Journal process after upgrading OS from Ubuntu 16.04 LTS to 22.04 LTS | 1,096 |
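To complement the comparison suggested above, the journal (WiredTiger log) counters of the busy node can be compared against a healthy member from mongosh; this is a sketch and the exact statistic names can differ between server versions.

```javascript
// Run on each mongod and compare how fast these counters grow over a fixed interval.
const wt = db.serverStatus().wiredTiger
printjson({
  logBytesWritten: wt.log["log bytes written"],
  logWriteOperations: wt.log["log write operations"],
  logSyncOperations: wt.log["log sync operations"]
})
```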
null | [
"queries",
"data-modeling",
"attribute-pattern"
]
| [
{
"code": "{\n \"name\": \"Product Name\",\n \"price\": {\n \"Avg365\": 10,\n \"Avg730\": 5,\n \"Avg1095\": 15\n }\n}\n{\n \"name\": \"Product Name\",\n \"price\": {\n \"Avg90\": 4,\n \"Avg180\": 8,\n \"Avg270\": 9,\n \"Avg365\": 10,\n \"Avg455\": 10,\n \"Avg545\": 6,\n \"Avg635\": 8,\n \"Avg730\": 5,\n \"Avg820\": 10,\n \"Avg910\": 12,\n \"Avg1000\": 18,\n \"Avg1095\": 15\n }\n}\n{ price: {$elemMatch: { \"k\": \"Avg365\", \"v\": { $lte: 8, $gte: 1 } } } }",
"text": "When using the attribute pattern should I be mindful of the number of attributes I create per document to ensure good performance? And what impact does a larger number of attributes have on write and query performance?My use case is that I would like my users to be able to filter data based on averages over the past number of years but balance this with maintaining good performance. Will there be a significant impact on performance between having 3 attributes and 12 attributes in an indexed field given the number of index keys increases?Existing document (simplified version) - Yearly dataPropose new document (simplified version) - Every 90-daysIn these cases I have an index on “price” with queries such as:\n{ price: {$elemMatch: { \"k\": \"Avg365\", \"v\": { $lte: 8, $gte: 1 } } } }",
"username": "Callum_Boyd"
},
{
"code": "",
"text": "When using the attribute pattern should I be mindful of the number of attributes I create per document to ensure good performance?Like any design pattern you should be mindful in apply it. Otherwise you might end up implementing an anti-pattern.The following thread is still in my bookmarks because I have not had the time to take a serious look at it.",
"username": "steevej"
},
{
"code": "{\n \"name\": \"Product Name\",\n \"average_prices\": [\n {\"days\": 90, \"price\": 4},\n {\"days\": 180, \"price\": 8},\n {\"days\": 270, \"price\": 9},\n {\"days\": 365, \"price\": 10},\n {\"days\": 455, \"price\": 10},\n {\"days\": 545, \"price\": 6},\n {\"days\": 635, \"price\": 8},\n {\"days\": 730, \"price\": 5},\n {\"days\": 820, \"price\": 10},\n {\"days\": 910, \"price\": 12},\n {\"days\": 1000, \"price\": 18},\n {\"days\": 1095, \"price\": 15}\n ]\n}\n{\"average_prices.days\": 1, \"average_prices.price\": 1}db.coll.find({\n average_prices: {\n $elemMatch: {\n days: 270, \n price: {$gt: 8, $lt: 10}\n }\n }\n})\nwinningPlan: {\n stage: 'FETCH',\n filter: {\n average_prices: {\n '$elemMatch': {\n '$and': [\n { days: { '$eq': 270 } },\n { price: { '$lt': 10 } },\n { price: { '$gt': 8 } }\n ]\n }\n }\n },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { 'average_prices.days': 1, 'average_prices.price': 1 },\n indexName: 'average_prices.days_1_average_prices.price_1',\n isMultiKey: true,\n multiKeyPaths: {\n 'average_prices.days': [ 'average_prices' ],\n 'average_prices.price': [ 'average_prices' ]\n },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n 'average_prices.days': [ '[270, 270]' ],\n 'average_prices.price': [ '(8, 10)' ]\n }\n }\n }\n",
"text": "Hi @Callum_Boyd,The proposed new document is the anti-pattern that leads to the Attribute Pattern.Learn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.Better document would be:Also your query doesn’t work with that document (no “k” and no “v” keys). And $elemMatch only works on an array.Index on my doc would be {\"average_prices.days\": 1, \"average_prices.price\": 1}This index would be used by this query:This is what the winning plan from an explain looks like:IXSCAN => FETCH is the best we can do here.About the performances, the index will add one extra entry for each new element in the array (== each new average price). So if you have 1M docs and each have 10 prices => 10M entries in the index.\nIt’s not really a problem as long as you have enough RAM to support it but it’s going to be a problem if you decide to have 1000 prices in each doc.This also means that each insert in this collection will now generate 1 write operation in the collection and 10 write ops in the index. Nothing alarming here.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "{\n \"name\": \"Product Name\",\n \"price\": [\n {\"k\": \"Avg365\", \"v\": 10},\n {\"k\": \"Avg730\", \"v\": 5},\n {\"k\": \"Avg1095\", \"v\": 15}\n ]\n}\n",
"text": "Thanks @MaBeuLux88 !Apologies for the error in my post, I tried to simplify too far. My document looks more like:Also thank you for your comments on the performance side, its reassuring to know the impact won’t be huge provided the price options do not grow excessively",
"username": "Callum_Boyd"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Attribute Pattern - Should I limit the number of attributes? | 2022-12-07T15:24:37.772Z | Attribute Pattern - Should I limit the number of attributes? | 2,663 |
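For reference, the compound multikey index discussed in the answer above would be created as shown below, and the plan can be verified the same way the answer does; the collection name is an assumption.

```javascript
db.coll.createIndex({ "average_prices.days": 1, "average_prices.price": 1 })

// Confirm the winning plan is IXSCAN -> FETCH for the filtering query:
db.coll.find({
  average_prices: { $elemMatch: { days: 270, price: { $gt: 8, $lt: 10 } } }
}).explain("executionStats")
```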
null | [
"atlas-triggers"
]
| [
{
"code": "",
"text": "I have a trigger which refuses to restart saying : “PlanExecutor error during aggregation :: caused by :: Resume of change stream was not possible, as the resume point may no longer be in the oplog.” … My trigger calls a realm function. The documentation says if it cant find it on the oplog it will just continue with what is in the oplog, but mine refuses to start. I cant see any switches that say ignore resume token … I’m relatively new to mongodb atlas so would appreciate some help … we’re using v5 of mongoDB.",
"username": "derek_henderson1"
},
{
"code": "",
"text": "I found the solution, restart in the realm UI has a tick box to say don’t use Resume Token.",
"username": "derek_henderson1"
},
{
"code": "",
"text": "As of 14-Dec-2022 there is no longer a “don’t use Resume Token” option to tick. Trying to solve similar issue I have discovered that a trigger will get suspended if the function he is trying to run encountered an error and fails. You may try to see if the function runs correctly from the Atlas App Services console. If it fails - try to fix the function’s bug and resume the trigger afterwards.\nBest regards,\nOmri",
"username": "Porat_Projects"
}
]
| Trigger won't restart, keeps getting suspended | 2021-10-20T08:31:19.630Z | Trigger won’t restart, keeps getting suspended | 4,618 |
null | []
| [
{
"code": "",
"text": "A Database Trigger has been SuspendedA database trigger RealmABC has failed in your application AWSEventBridge and has been suspended. This trigger will not run until it can be successfully resumed. Please use the following link to view the trigger’s configuration and address the problem.\nResumed trigger multiple times but not worked\nmaximum attempts (10) reached processing event for trigger id=we232391a9a719732e21a3c5e: ValidationException: Total size of the entries in the request is over the limit. status code: 400, request id: 232233-9273-4587-962b-c4b41ddcf3e3",
"username": "Balraj_Yadav"
},
{
"code": "",
"text": "Hi @Balraj_Yadav,Welcome to MongoDB Community.It looks like the database trigger is failed to tail the operations and needs to be manually resumed.There is a high chance that you have too small oplog for your workload and triggers fall off it. If this is the case you will have to resume without the resume token checkbox as there is no way to resume it otherwise…In any case I suggest to contact MongoDB support for further analysis.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "After facing the same issue I have discovered that a trigger will get suspended if the function he is trying to run encountered an error and fails. You may try to see if the function runs correctly from the Atlas App Services console. If it fails - try to fix the function’s bug and resume the trigger afterwards.\nBest regards,\nOmri",
"username": "Porat_Projects"
}
]
| Database Trigger has been Suspended | 2021-04-08T08:16:10.937Z | Database Trigger has been Suspended | 3,631 |
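Following the advice in the two trigger threads above, one defensive option is to catch errors inside the trigger's function so a single failing event does not keep failing the trigger; this is a sketch of an Atlas App Services database trigger function, and the body is a placeholder.

```javascript
exports = async function (changeEvent) {
  try {
    const fullDocument = changeEvent.fullDocument;
    // ... the real work of the trigger goes here ...
  } catch (err) {
    // Log and skip events whose failure is safe to ignore, so the trigger keeps running;
    // rethrow instead if the event must not be lost.
    console.log(`Trigger failed for _id=${changeEvent.documentKey._id}: ${err.message}`);
  }
};
```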
null | [
"sharding",
"migration"
]
| [
{
"code": "shards:\n { \"_id\" : \"rs1\", \"host\" : \"rs1/shard1a:27018,shard1b:27018,shard1c:27018\" }\n { \"_id\" : \"rs2\", \"host\" : \"rs2/shard2a:27018,shard2b:27018,shard2c:27018\", \"state\" : 1 }\nautosplit:\n Currently enabled: yes\nbalancer:\n Currently enabled: yes\n Currently running: yes\n Collections with active migrations: \n mydatabase.mycollection started at Wed Nov 30 2022 15:06:53 GMT-0800 (PST)\n Failed balancer rounds in last 5 attempts: 0\n Migration Results for the last 24 hours: \n No recent migrations\ndatabases:\n { \"_id\" : \"mydatabase\", \"primary\" : \"rs1\", \"partitioned\" : true }\n mydatabase.mycollection\n shard key: { \"userId\" : \"hashed\" }\n unique: false\n balancing: true\n chunks:\n rs1\t44215\n rs2\t21\n",
"text": "Hi everyone, a few days ago we added a second shard to our database, which seemed to go pretty smoothly. This triggered the shard balancer to start doing some migrations.However, a few days later, we’re not seeing much data actually move to the new shard:As you can see from the above output, around 44k chunks are on rs1, only 21 chunks on rs2. The chunk count has also not changed for quite awhile.And despite the balancer running, the “active migration” started two days ago and has neither failed nor provided any result.Tailing the logs for the two shards and the config server didn’t appear to show any errors being outputted related to this either.The database does actively have production traffic and user writes/reads going to it. Would that cause migrations to not proceed?Is there anything I can do to troubleshoot whether this migration is proceeding as expected, or if some issue has occurred that has halted it?If the migration is stuck, is there any process to “stop and restart” the migration?Ultimately, any insight would be much appreciated! We aren’t seeing any clear error or sign as to why it would be in this state for multiple days without much movement.",
"username": "Clark_Kromenaker"
},
{
"code": "rs1--shardsvr",
"text": "Hi @Clark_Kromenaker and welcome in the MongoDB Community !Just a random guess at this point but at least it’s a lead to explore.I don’t see “state: 1” in rs1. So my guess is that RS1 isn’t started with --shardsvr and isn’t shard aware.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "@MaBeuLux88, thanks for the welcome and the quick reply!That’s a good point - I somehow missed the concept of starting the process with “shardsvr” to make it shard aware. The newly added shard does have “state” set, but the original shard (which we deployed years ago) does not.So, we’ll try to rectify that this coming week and see if it resolves the issue. Thanks for the tip!",
"username": "Clark_Kromenaker"
},
{
"code": "storage:\n dbPath: /mongodb/data\n journal:\n enabled: true\n\nsystemLog:\n destination: file\n logAppend: true\n logRotate: reopen\n path: /mongodb/log/mongod.log\n\nnet:\n port: 27018\n bindIp: 0.0.0.0\n\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n fork: true\n\nreplication:\n replSetName: rs1\n\nsharding:\n clusterRole: shardsvr\n{\n\t\t\t\"shard\" : \"rs1\",\n\t\t\t…\n\t\t\t\"active\" : true,\n\t\t\t\"currentOpTime\" : \"2022-12-08T18:10:24.606+0000\",\n\t\t\t\"opid\" : \"rs1:767299796\",\n\t\t\t\"secs_running\" : NumberLong(673411),\n\t\t\t\"microsecs_running\" : NumberLong(\"673411012241\"),\n\t\t\t\"op\" : \"command\",\n\t\t\t\"ns\" : \"admin.$cmd\",\n\t\t\t\"command\" : {\n\t\t\t\t\"moveChunk\" : \"mydatabase.mycollection\",\n\t\t\t\t\"shardVersion\" : [\n\t\t\t\t\tTimestamp(22, 1),\n\t\t\t\t\tObjectId(\"5b3bf4a351bf517cc03596ce\")\n\t\t\t\t],\n\t\t\t\t\"epoch\" : ObjectId(\"5b3bf4a351bf517cc03596ce\"),\n\t\t\t\t\"configdb\" : \"rsconfig/config1:27019,config2:27019,config3:27019\",\n\t\t\t\t\"fromShard\" : \"rs1\",\n\t\t\t\t\"toShard\" : \"rs2\",\n\t\t\t\t\"min\" : {\n\t\t\t\t\t\"userId\" : NumberLong(\"-9215713443395991186\")\n\t\t\t\t},\n\t\t\t\t\"max\" : {\n\t\t\t\t\t\"userId\" : NumberLong(\"-9214977367518352602\")\n\t\t\t\t},\n\t\t\t\t\"maxChunkSizeBytes\" : NumberLong(67108864),\n\t\t\t\t\"waitForDelete\" : false,\n\t\t\t\t\"takeDistLock\" : false,\n\t\t\t\t\"$clusterTime\" : {\n\t\t\t\t\t\"clusterTime\" : Timestamp(1669849613, 163),\n\t\t\t\t\t\"signature\" : {\n\t\t\t\t\t\t\"hash\" : BinData(0,\"6VoraWaZJWlSL5Er5dML0dCBvok=\"),\n\t\t\t\t\t\t\"keyId\" : NumberLong(\"7122446341149558275\")\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"$configServerState\" : {\n\t\t\t\t\t\"opTime\" : {\n\t\t\t\t\t\t\"ts\" : Timestamp(1669849613, 163),\n\t\t\t\t\t\t\"t\" : NumberLong(16)\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"$db\" : \"admin\"\n\t\t\t},\n\t\t\t\"msg\" : \"step 3 of 6\",\n\t\t\t\"numYields\" : 1213,\n\t\t\t\"locks\" : {\n\n\t\t\t},\n\t\t\t\"waitingForLock\" : false,\n\t\t\t\"lockStats\" : {\n\t\t\t\t\"Global\" : {\n\t\t\t\t\t\"acquireCount\" : {\n\t\t\t\t\t\t\"r\" : NumberLong(2437),\n\t\t\t\t\t\t\"w\" : NumberLong(3)\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"Database\" : {\n\t\t\t\t\t\"acquireCount\" : {\n\t\t\t\t\t\t\"r\" : NumberLong(1217),\n\t\t\t\t\t\t\"w\" : NumberLong(3)\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"Collection\" : {\n\t\t\t\t\t\"acquireCount\" : {\n\t\t\t\t\t\t\"r\" : NumberLong(1217),\n\t\t\t\t\t\t\"W\" : NumberLong(1)\n\t\t\t\t\t},\n\t\t\t\t\t\"acquireWaitCount\" : {\n\t\t\t\t\t\t\"W\" : NumberLong(1)\n\t\t\t\t\t},\n\t\t\t\t\t\"timeAcquiringMicros\" : {\n\t\t\t\t\t\t\"W\" : NumberLong(266342)\n\t\t\t\t\t}\n\t\t\t\t},\n\t\t\t\t\"oplog\" : {\n\t\t\t\t\t\"acquireCount\" : {\n\t\t\t\t\t\t\"w\" : NumberLong(2)\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n{\n \"_id\" : \"mydatabase.mycollection\",\n \"state\" : 2,\n \"process\" : \"ConfigServer\",\n \"ts\" : ObjectId(\"6269bc1620b5633916ac3f46\"),\n \"when\" : ISODate(\"2022-11-30T23:06:53.590Z\"),\n \"who\" : \"ConfigServer:Balancer\",\n \"why\" : \"Migrating chunk(s) in collection mydatabase.mycollection\"\n}\n{\n \"_id\" : \"mydatabase.mycollection-userId_-9215713443395991186\",\n \"ns\" : \"mydatabase.mycollection\",\n \"min\" : {\n \"userId\" : NumberLong(-9215713443395991186)\n },\n \"max\" : {\n \"userId\" : NumberLong(-9214977367518352602)\n },\n \"fromShard\" : \"rs1\",\n \"toShard\" : \"rs2\",\n \"chunkVersion\" : [ \n Timestamp(22, 1), \n ObjectId(\"5b3bf4a351bf517cc03596ce\")\n ],\n \"waitForDelete\" : 
false\n}\nmongos> sh.stopBalancer()\n2022-12-08T10:32:47.860-0800 E QUERY [js] uncaught exception: Error: command failed: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"Operation timed out\",\n\t\"code\" : 202,\n\t\"codeName\" : \"NetworkInterfaceExceededTimeLimit\",\n\t\"operationTime\" : Timestamp(1670524367, 1917),\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1670524367, 1917),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n\t\t\t\"keyId\" : NumberLong(0)\n\t\t}\n\t}\n} :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\ndoassert@src/mongo/shell/assert.js:18:14\n_assertCommandWorked@src/mongo/shell/assert.js:583:17\nassert.commandWorked@src/mongo/shell/assert.js:673:16\nsh.stopBalancer@src/mongo/shell/utils_sh.js:177:12\n@(shell):1:1\n",
"text": "@MaBeuLux88 Thanks so much for your previous reply. You mentioned that the lack of { state : 1 } for rs1 could indicate that the mongod process is not running with --shardsvr, however, upon looking into the mongod.conf files, it does appear that the mongod processes on these shards do have shardsvr specified:We tried to research it but couldn’t find anything super obvious about the “state” flag. We seem to recall it might be something where a shard updated from an older version of mongodb might not show the “state” value correctly, but it was a benign issue.When examining the current ops for the db, we can see that there is a moveChunk operation that has been running since Nov 30:It’s a little unclear what’s causing this operation to not complete. At this point, it doesn’t seem likely that it’s just taking a long time, but it seems like it’s just stuck. But some clarity on this would be helpful.We do see in the config.locks collection for this database there is this lock:And the migrations collection also shows this:We were originally planning on rebooting all shard and config machines to attempt to get things moving again, but we weren’t sure of the consequences of doing that, and didn’t want our data to end up in an invalid state. But attempting to stop the shard balancer times out and gives us this error:At this point we are a bit at a loss of what to do. We want to get our shard balancer working again so that we can actually have the benefits of having a second shard, as well as establishing a balancer window. Any help or recommendations would be very helpful.",
"username": "Andrew_Dos_Santos"
},
{
"code": "",
"text": "Thanks for all the details, it’s very helpful!I’ll get a few other set of eyes on this as this doesn’t trigger another “genius” idea here.Could you just give me a couple more information that might be useful?Anything suspicious in the logs of any of these nodes?Thanks,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "{ \"_id\" : \"rs1\", \"host\" : \"rs1/shard1a:27018,shard1b:27018,shard1c:27018\" }\n{ \"_id\" : \"rs2\", \"host\" : \"rs2/shard2a:27018,shard2b:27018,shard2c:27018\", \"state\" : 1 }\n",
"text": "Thanks so much, Maxime!Sure thing. Our MongoDB version is 3.6.23We have two sharded clusters, one with user data that users regularly interact with (many reads and writes) and this one, which is more write heavy as it’s more of an activity log that holds records of user actions.As I mentioned we just added this second shard to this activtydb in order to scale for user growth. For this cluster, we’re running 3 config machines, 2 router machines, and 2 shards both in replica sets that are replicated to 2 secondaries each. As mentioned from our sh.status():Unfortunately, we didn’t see anything suspicious in the logs.",
"username": "Andrew_Dos_Santos"
},
{
"code": "sh.stopBalancer()sh.startBalancer() shards:\n { \"_id\" : \"rs1\", \"host\" : \"rs1/shard1a:27018,shard1b:27018,shard1c:27018\" }\n { \"_id\" : \"rs2\", \"host\" : \"rs2/shard2a:27018,shard2b:27018,shard2c:27018\", \"state\" : 1 }\n active mongoses:\n \"3.6.23\" : 1\n autosplit:\n Currently enabled: yes\n balancer:\n Currently enabled: yes\n Currently running: no\n Collections with active migrations:\n mydatabase.mycollection started at Fri Dec 09 2022 10:47:11 GMT-0800 (PST)\n Failed balancer rounds in last 5 attempts: 0\n Migration Results for the last 24 hours:\n No recent migrations\n databases:\n { \"_id\" : \"mydatabase\", \"primary\" : \"rs1\", \"partitioned\" : true }\n mydatabase.mycollection\n shard key: { \"userId\" : \"hashed\" }\n unique: false\n balancing: true\n chunks:\n rs1\t44215\n rs2\t21\n too many chunks to print, use verbose if you want to force print\nsh.startBalancer()sh.isBalancerRunning()",
"text": "Just as an update here, we stepped down our primary config machine and that seemed to get some things unstuck. We’re able to successfully call sh.stopBalancer() and sh.startBalancer() now and the timestamp of the migration has been updated.That said, even after calling sh.startBalancer() (and that running successfully) sh.isBalancerRunning() returns false.We’re giving it a bit of time to see if it starts work to migrate chunks, but wondering if this is potentially an issue.",
"username": "Andrew_Dos_Santos"
},
{
"code": "sh.status()[...]\nshards\n[\n {\n _id: 'shard1',\n host: 'shard1/mongod-s1-1:27018,mongod-s1-2:27018,mongod-s1-3:27018',\n state: 1,\n topologyTime: Timestamp({ t: 1670635728, i: 1 })\n },\n {\n _id: 'shard2',\n host: 'shard2/mongod-s2-1:27018,mongod-s2-2:27018,mongod-s2-3:27018',\n state: 1,\n topologyTime: Timestamp({ t: 1670635729, i: 1 })\n },\n {\n _id: 'shard3',\n host: 'shard3/mongod-s3-1:27018,mongod-s3-2:27018,mongod-s3-3:27018',\n state: 1,\n topologyTime: Timestamp({ t: 1670635729, i: 7 })\n }\n]\n[...]\n{state: 1}shard1 [direct: primary] config> use local\nswitched to db local\nshard1 [direct: primary] local> db.startup_log.find()\n[\n {\n _id: 'mongod-s1-1-1670635632853',\n hostname: 'mongod-s1-1',\n startTime: ISODate(\"2022-12-10T01:27:12.000Z\"),\n startTimeLocal: 'Sat Dec 10 01:27:12.853',\n cmdLine: {\n net: { bindIp: '*' },\n replication: { replSet: 'shard1' },\n sharding: { clusterRole: 'shardsvr' }\n },\n pid: Long(\"1\"),\n buildinfo: {\n version: '6.0.3',\n gitVersion: 'f803681c3ae19817d31958965850193de067c516',\n modules: [],\n allocator: 'tcmalloc',\n javascriptEngine: 'mozjs',\n sysInfo: 'deprecated',\n versionArray: [ 6, 0, 3, 0 ],\n openssl: {\n running: 'OpenSSL 1.1.1f 31 Mar 2020',\n compiled: 'OpenSSL 1.1.1f 31 Mar 2020'\n },\n buildEnvironment: {\n distmod: 'ubuntu2004',\n distarch: 'x86_64',\n cc: '/opt/mongodbtoolchain/v3/bin/gcc: gcc (GCC) 8.5.0',\n ccflags: '-Werror -include mongo/platform/basic.h -ffp-contract=off -fasynchronous-unwind-tables -ggdb -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -fno-omit-frame-pointer -fno-strict-aliasing -O2 -march=sandybridge -mtune=generic -mprefer-vector-width=128 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-const-variable -Wno-unused-but-set-variable -Wno-missing-braces -fstack-protector-strong -fdebug-types-section -Wa,--nocompress-debug-sections -fno-builtin-memcmp',\n cxx: '/opt/mongodbtoolchain/v3/bin/g++: g++ (GCC) 8.5.0',\n cxxflags: '-Woverloaded-virtual -Wno-maybe-uninitialized -fsized-deallocation -std=c++17',\n linkflags: '-Wl,--fatal-warnings -pthread -Wl,-z,now -fuse-ld=gold -fstack-protector-strong -fdebug-types-section -Wl,--no-threads -Wl,--build-id -Wl,--hash-style=gnu -Wl,-z,noexecstack -Wl,--warn-execstack -Wl,-z,relro -Wl,--compress-debug-sections=none -Wl,-z,origin -Wl,--enable-new-dtags',\n target_arch: 'x86_64',\n target_os: 'linux',\n cppdefines: 'SAFEINT_USE_INTRINSICS 0 PCRE_STATIC NDEBUG _XOPEN_SOURCE 700 _GNU_SOURCE _FORTIFY_SOURCE 2 BOOST_THREAD_VERSION 5 BOOST_THREAD_USES_DATETIME BOOST_SYSTEM_NO_DEPRECATED BOOST_MATH_NO_LONG_DOUBLE_MATH_FUNCTIONS BOOST_ENABLE_ASSERT_DEBUG_HANDLER BOOST_LOG_NO_SHORTHAND_NAMES BOOST_LOG_USE_NATIVE_SYSLOG BOOST_LOG_WITHOUT_THREAD_ATTR ABSL_FORCE_ALIGNED_ACCESS'\n },\n bits: 64,\n debug: false,\n maxBsonObjectSize: 16777216,\n storageEngines: [ 'devnull', 'ephemeralForTest', 'wiredTiger' ]\n }\n }\n]\nsharding: { clusterRole: 'shardsvr' }",
"text": "I dug up my old repo and I made a new version of my “quick start” sharded cluster with docker.master/sharding-dockerSome scripts for MongoDB Training. Contribute to MaBeuLux88/MongoDB-Training development by creating an account on GitHub.I used this to create a 3 shards cluster to play with and in the result of my sh.status() command I get this:Without too much of a surprise, I’m getting {state: 1} on my 3 shards… So there is definitely something wrong in here and I bet this is our issue.Can you please make SURE that these 3 nodes are actually running with this option ON at the moment? Maybe these startup scripts have been updated after the last restart of these 3 machines and the mongod have a large uptime.To check you can run this on your first RS:And here in my example I confirmed that I have sharding: { clusterRole: 'shardsvr' } in my cmdLine.Note that all the above outputs are from MongoDB 6.0.3 and it’s definitely time for an update on your sharded clusters! Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "@MaBeuLux88, thanks so much for your attention while troubleshooting this!We discovered that having the primary config server step down seemed to result in the shard balancer commands becoming responsive again. However, we were still seeing the migration seemingly never completing.Ultimately, we scheduled some downtime and rebooted the entire cluster (all config machines, and all replica set members in all shards). This did resolve the issue, and we now see data migrating at a decent clip.Amazingly, “have you tried turning it off and turning it back on” applies even here! We hear you on updating the cluster - these were initially deployed back in the 3.6 days, and it’s been hard to prioritize updating them. But I’m sure it’d be better to do that proactively than under duress!",
"username": "Clark_Kromenaker"
},
{
"code": "state:1sh.status()",
"text": "Something weird was happening with the primary config server then !I have that reference !I’m glad it’s resolved. Do you have the state:1 now in your sh.status()?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Troubleshoot or Stop Long-Running Shard Migration | 2022-12-02T20:54:17.130Z | Troubleshoot or Stop Long-Running Shard Migration | 3,920 |
null | [
"database-tools",
"backup",
"time-series"
]
| [
{
"code": "EPOCH_DATE=$(date '+%s%3N')\nfor COLLECTION in \"${COLLECTIONS[@]}\"\ndo\n mongodump \\\n --db=database\\\n --collection=$COLLECTION \\\n --query=\"{ \\\"createdOn\\\": { \\\"\\$gte\\\": {\\\"\\$date\\\": $EPOCH_DATE} } }\"\n --out=/dir/backup/\ndone\nFailed: cannot process query [{createdOn [{$gte 1670480228792}]}] for timeseries collection database.collection mongodump only processes queries on metadata fields for timeseries collections.\n\n",
"text": "Hi all,I am trying to schedule backups on a time series collection and I want to do hourly backups of the last hour of data. On top of daily full backups, I would like to schedule a backup every hour of the past hour of data (ie at 3pm, create a dump for data from 2pm-3pm).I am trying to set up my shell script as such:But I am getting the following error:Is there a better way of achieving the goal of doing scheduled backups? Or should I change the way I structure my script?Appreciate any help on the matter!",
"username": "Daryl_Ang"
},
{
"code": "",
"text": "Hi @Daryl_Ang and welcome in the MongoDB Community !I reproduced the problem here with an example and - indeed - I get the same error message which is documented in the mongodump doc.I reached out to a couple of colleagues that might have an idea for you but I don’t have a smart idea at the moment. I’d love if someone can find a workaround though.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "find()",
"text": "I spoke with @Tim_Fogarty who worked on mongodump.He explained to me why this constraint exists and to be fair, it’s quite complex. It’s due to the low level implementation of timeseries in MongoDB and explaining all those details won’t help. The conclusion is that there are currently no workaround using mongodump.If you do not need a point-in-time snapshot using the oplog though (which I think is the case here), you can use mongoexport in a script and achieve basically the same thing. It won’t be as fast as mongodump - but at least this should work properly.Else you can still write a script and use find() with the appropriate filter to find these docs but it’s a bit more work.I hope this helps.\nCheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks for the assistance Maxime!Could I also use this opportunity to know if there are any intended approaches to backing up timeseries data?",
"username": "Daryl_Ang"
},
{
"code": "--query",
"text": "First trivial idea that comes to mind would be to mongodump the entire collection.It would probably be faster to use a disk snapshot though depending on your production env.A random idea that could be worth exploring though would be to add an extra field in the metadata (a different one every hours) and you could use this field for your query. As it’s in the metadata this time, it would work with the --query.I guess you have more than one client so you could come up with an algorithm that generates a new unique ID every hours (first that comes to mind could be to have day 1 that goes from 1 to 24. Day 2 from 25 to 48, etc).Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| Mongodump Query on Timeseries Collection | 2022-12-08T06:19:02.165Z | Mongodump Query on Timeseries Collection | 1,988 |
null | [
"attribute-pattern"
]
| [
{
"code": "offers{\n ...\n attributes: [\n { key: \"color\", value: \"red\" },\n { key: \"price\", value: 50 },\n ...100 attributes more...\n ]\n}\n{\n 'attributes.key': 1,\n 'attributes.value': 1,\n}\ndb.offers.count({\"attributes\":{\"$all\":[{\"$elemMatch\":{\"key\":\"color\",\"value\":{\"$in\":[\"red\"]}}}]}})\nexplain(){ explainVersion: '1',\n queryPlanner: \n { namespace: 'offers.offers',\n indexFilterSet: false,\n parsedQuery: \n { attributes: \n { '$elemMatch': \n { '$and': \n [ { key: { '$eq': 'color' } },\n { value: { '$eq': 'red' } } ] } } },\n queryHash: '3EC4C516',\n planCacheKey: '37ABC9CD',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: \n { stage: 'COUNT',\n inputStage: \n { stage: 'FETCH',\n filter: \n { attributes: \n { '$elemMatch': \n { '$and': \n [ { key: { '$eq': 'color' } },\n { value: { '$eq': 'red' } } ] } } },\n inputStage: \n { stage: 'IXSCAN',\n keyPattern: { 'attributes.key': 1, 'attributes.value': 1 },\n indexName: 'attributes.key_1_attributes.value_1',\n isMultiKey: true,\n multiKeyPaths: \n { 'attributes.key': [ 'attributes' ],\n 'attributes.value': [ 'attributes', 'attributes.value' ] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: \n { 'attributes.key': [ '[\"color\", \"color\"]' ],\n 'attributes.value': [ '[\"red\", \"red\"]' ] } } } },\n rejectedPlans: [] },\n command: \n { count: 'offers',\n query: { attributes: { '$all': [ { '$elemMatch': { key: 'color', value: { '$in': [ 'red' ] } } } ] } },\n '$db': 'offers' },\n serverInfo: \n { host: 'be1988957f70',\n port: 27017,\n version: '6.0.3',\n gitVersion: 'f803681c3ae19817d31958965850193de067c516' },\n serverParameters: \n { internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600 },\n ok: 1 }\n",
"text": "I am trying to improve the performance of our total count query. We have a collection called offers with 600k documents (Storage Size: 2.09GB, Logical Data Size: 3.73GB) . Each offer has ~150 associated attributes. Therefore we used your recommened attribute pattern.Document structure:Compound index:Query:The explain() output:This query is really really slow. It takes between 2 and 6 seconds. Is there anyway for us to improve the performance of the count?",
"username": "bene123"
},
{
"code": "",
"text": "Performance is based on a few things.The quantity of data, which you shared.\nThe complexity of the computation, which you shared.\nThe capacity of the resources, which you did not shared.What you see might be normal depending of the system resources allocated to perform the workload.What are the specifications of your implementation?",
"username": "steevej"
},
{
"code": "",
"text": "I’ve been running an M40 instance on MongoDB Atlas and a local MongoDB server on my M1 Macbook. Both servers have the same poor performance.",
"username": "bene123"
},
{
"code": "explain(true){'attributes.value': 1, 'attributes.key': 1}",
"text": "Hi @bene123,Can you please share the explain plan with the execution stats explain(true) for this particular query so we can see where the problem might come from?Also can you please confirm that your M40 has enough RAM to operate (all indexes fit in RAM) and you have some spare RAM to perform aggregations, in-memory sorts and queries (like this one).Without these information I can only take wild guesses.Cheers,\nMaxime.",
"username": "MaBeuLux88"
}
]
| Slow total number of documents using attribute pattern | 2022-11-29T21:25:39.548Z | Slow total number of documents using attribute pattern | 2,219 |
null | [
"queries",
"atlas-search"
]
| [
{
"code": "",
"text": "Hi,\nHi.I would like to perform and query in atlas search and exclude some documents.\nIs it possible to have a query like “not equals” or “does not contain”?I could probably add a second stage where I exclude the documents from the search result, but this would not be as performant, because I can not use an index. Additionally, the $searchMeta feature could not be used anymore.Thanks.",
"username": "Mathias_Mahlknecht"
},
{
"code": "mustNotequals{\n $search: {\n \"compound\": {\n \"mustNot\": {\n \"equals\": {\n \"path\": \"_id\",\n \"value\": ObjectId(\"xxxxx\")\n }\n }\n }\n }\n}\n",
"text": "Hi @Mathias_Mahlknecht ,You can use the mustNot option in the compound operator to filter out documents that match a specified equals clause. Something like:Hope this helps!",
"username": "amyjian"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Atlas Search with exclude | 2022-12-14T16:12:56.527Z | Atlas Search with exclude | 1,779 |
null | [
"node-js",
"atlas-functions"
]
| [
{
"code": "myFuncgetSetWithNumbersconst COLS = 6;\nconst ROWS = 6;\n\nfunction getSetWithNumbers(start, end) {\n const set = new Set();\n\n const num1 = start;\n let num2 = start + 2;\n if (num2 > end) {\n num2 = start + 2 - end;\n } else if (num2 === end) {\n num2 = 0;\n }\n\n set.add(num1);\n set.add(num2);\n\n console.log(\n `Added ${num1} & ${num2} to the set for row ${start}, set size: ${set.size}`\n );\n\n return set;\n}\n\nfunction myFunc(request, response) {\n for (let row = 0; row < ROWS; row++) {\n const mySet = getSetWithNumbers(row, ROWS);\n\n for (let col = 0; col < COLS; col++) {\n if (mySet.has(col)) {\n console.log(`Row-${row}: mySet has col ${col}: ${mySet.has(col)}`);\n }\n }\n }\n\n return 'Hello World!';\n}\n\nexports = myFunc;\nAdded 0 & 2 to the set for row 0, set size: 2\nRow-0: mySet has col 0: true\nRow-0: mySet has col 2: true\nAdded 1 & 3 to the set for row 1, set size: 2\nRow-1: mySet has col 1: true\nRow-1: mySet has col 3: true\nAdded 2 & 4 to the set for row 2, set size: 2\nRow-2: mySet has col 2: true\nRow-2: mySet has col 4: true\nAdded 3 & 5 to the set for row 3, set size: 2\nRow-3: mySet has col 3: true\nRow-3: mySet has col 5: true\nAdded 4 & 0 to the set for row 4, set size: 2\nRow-4: mySet has col 0: true\nRow-4: mySet has col 4: true\nAdded 5 & 1 to the set for row 5, set size: 2\nRow-5: mySet has col 1: true\nRow-5: mySet has col 5: true\nAdded 0 & 2 to the set for row 0, set size: 2 \nRow-0: mySet has col 0: true \nRow-0: mySet has col 2: true \nAdded 1 & 3 to the set for row 1, set size: 2 \nRow-1: mySet has col 1: true \nAdded 2 & 4 to the set for row 2, set size: 2 \nRow-2: mySet has col 2: true \nAdded 3 & 5 to the set for row 3, set size: 2 \nRow-3: mySet has col 3: true \nAdded 4 & 0 to the set for row 4, set size: 2 \nRow-4: mySet has col 0: true \nRow-4: mySet has col 4: true \nAdded 5 & 1 to the set for row 5, set size: 2 \nRow-5: mySet has col 5: true\n",
"text": "Hi,\nThis is my first time using the Atlas App Services, and while working on my app I faced a weird issue with functions which I’m not able to explain & figure out. I’m sure there is an explanation for the behavior, it will be great if someone can explain it.So here it goes:\nThe below code is not exactly my app code, but it is sufficient to explain the behavior. I have created a function myFunc which has 2 nested loops for rows & columns. For every row I get 2 column numbers by calling another function getSetWithNumbers in the same file. This function returns these 2 numbers in a Set.Now here is the weird part:\nIf I call myFunc using my local node setup I get following logs (everything is as per the expectation):But when I call this function by pushing it to App Services I get the following logs:As you can see there are missing entries / logs. What is causing this behavior?If I replace the Set with a List (with the corresponding changes for add → push & has → includes), everything works perfectly.This is bugging me for last 3-4 days, so better to have an explanation Thanks",
"username": "Rajeev_R_Sharma"
},
{
"code": "Set.has(…)",
"text": "Hi @Rajeev_R_Sharma,Thank you for raising this: we could reproduce the behaviour, and indeed it looks like Set.has(…) isn’t returning the expected results.We’re opening an internal ticket about the matter, and will keep this post updated.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Thank you @Paolo_Manna.I was thinking that may be there is some limitation with functions I’m not aware about. Good to know that I found a valid issue.",
"username": "Rajeev_R_Sharma"
},
{
"code": "Set.has(…)Set.has()List",
"text": "Hi @Rajeev_R_Sharma,The Team responsible of the underlying function engine has confirmed that the error is due to mishandling of integer values. Set.has(…) should still work for other types, though.(To clarify: the problem is that the addition of two integers is returning a float, that Set.has() doesn’t compare properly)For the time being, you can use your workaround with List: we don’t have a timeline for the resolution yet, if there’s one soon we’ll update this post.Thank you for your help in identifying this bug!",
"username": "Paolo_Manna"
},
{
"code": "function getSetWithNumbersNew(start, end) {\n const set = new Set();\n let logContent = ['Added'];\n for (let i = start; i < end; i++) {\n set.add(i);\n logContent.push(i);\n }\n\n logContent.push(`to the set for row ${start}, set size: ${set.size}`);\n\n console.log(logContent.join(' '));\n\n return set;\n}\nAdded 0 1 2 3 to the set for row 0, set size: 4 \nRow-0: mySet has col 0: true \nRow-0: mySet has col 1: true \nRow-0: mySet has col 2: true \nRow-0: mySet has col 3: true \nAdded 1 2 3 to the set for row 1, set size: 3 \nRow-1: mySet has col 1: true \nRow-1: mySet has col 2: true \nRow-1: mySet has col 3: true \nAdded 2 3 to the set for row 2, set size: 2 \nRow-2: mySet has col 2: true \nRow-2: mySet has col 3: true \nAdded 3 to the set for row 3, set size: 1 \nRow-3: mySet has col 3: true\n",
"text": "Thanks for the update @Paolo_Manna.Glad that I could help in finding the bug. Came across it so shared with the right people :-). Now that the team has identified the root cause, it will be fixed in due course.Just wanted to add one more observation with regards to the same. I think this issue happens only if the added integers are not consecutive (index wise). If you add numbers which are consecutive then it works fine. I tried with the following changes:And called this function instead (also changed ROWS & COLUMNS to 4 to reduce the number of logs), and I get the same result with my local node setup as well as the atlas function.That is all from my side.Thanks",
"username": "Rajeev_R_Sharma"
},
{
"code": " let num2 = start + 2;\n",
"text": "Indeed the problematic operation is the additionnot the insertion in the set, per se: if you don’t have an addition, you don’t get into the issue.",
"username": "Paolo_Manna"
},
{
"code": "const randomInt = (min, max) => {\n return Math.floor(Math.random() * (max - min)) + min;\n};\nSetmin",
"text": "Oh! my bad. I didn’t read the sentence properly.And that explains the actual issue which I faced (not the above dummy code)In my actual code I was inserting random entries to the Set using the above function. So float is getting returned from that addition here as well. If I moved that outside min to inside Math.floor then probably it would have worked.Thanks for explaining ",
"username": "Rajeev_R_Sharma"
}
]
| Weird behavior with Sets in App Services Function | 2022-12-13T07:58:43.000Z | Weird behavior with Sets in App Services Function | 1,942 |
null | []
| [
{
"code": "",
"text": "A friend and I talked about building a low latency global service, using Cloudflare workers and MongoDB.Let’s say we have a cluster on M30 with global low-latency reads configured.If we now follow the guidelines described in this blog on setting up Ream, how is the architecture going to be?If a user in Argentina makes a GET request, the nearest Cloudflare data center will respond and call the Realm API. But where is the Realm API physically located - is it also deployed globally?Thanks for your input.",
"username": "Alex_Bjorlig"
},
{
"code": "nearest",
"text": "Hey @Alex_Bjorlig,I’m the author of that blog post. \nThis doc is important for my answer.MongoDB Realm apps are deployed either globally or locally. If they are deployed globally, your app is available in Ireland, Oregon, Sydney and Virginia.So if a client is in New York, he will likely be rooted to the app in Virginia, then depending on your configuration, it will reach to the Primary node for a write or to the closest node if you are reading with readPreference nearest (assuming you have a Replica Set. A Sharded cluster would cover even more ground). And this without Cloudflare. Only Realm auth + Realm functions (which are equivalent to Cloudflare workers).Now the problem with my Cloudflare blog post is that we omitted to talk about caching the MongoDB Connections. This blog post was more a proof of concept rather than a production ready code sample.In MongoDB Realm, connections to the underlying cluster is cached. Each time a Realm function needs to connect to the MongoDB cluster, the same client connection is re-used. This avoid the handshake and the cost of creating and maintaining a connection for the replica set EACH TIME we make a query. When you run a query, you just want to run the query and access MongoDB, not initialise everything from scratch each time.It’s a bit what’s happening with my Cloudflare code. Maybe there is a way to cache the connection with Cloudflare, but I don’t know enough about Cloudflare to do so.It’s the same thing for AWS Lambdas. You have to cache the connection in a global variable that is reused by all the other lambdas.Cloudflare is an extra layer in between your client and your MongoDB cluster in Atlas that isn’t necessary really. It’s an extra step in the network as well.The best scenario would be to have a locally deployed Realm app in Dublin, Ireland and an Atlas cluster also deployed in Dublin. When you execute your Realm Function, it can access the cluster next door very fast without rooting the query around the world twice.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Cloudflare is an extra layer in between your client and your MongoDB cluster in Atlas that isn’t necessary really. It’s an extra step in the network as well.I don’t think that’s correct? In my perspective, the business rules, authentication, and authorization would live here. But that’s probably a separate discussion.The way I view Cloudflare workers is the future of serverless. It’s more global and features 0ms cold-start → a significant improvement compared to serverless functions. Now the question is how to combine this with an equally distributed database. I believe this is exactly why Cloudflare announced their D1 SQL database in September 2022.So my question could essentially be rephrased like this:Is MongoDB+Realm a good fit for Cloudflare workers, or would it make more sense to use Cloudflare D1?On a side note, could you share some insights as to why it’s not possible to use the node.js MongoDB driver? Would it ever be possible, or is the V8 environment never going to be compatible?",
"username": "Alex_Bjorlig"
},
{
"code": " const mongodb = context.services.get(\"mongodb-atlas\");\n const movies = mongodb.db(\"stitch\").collection(\"movies\");\ncontext",
"text": "I don’t think that’s correct? In my perspective, the business rules, authentication, and authorization would live here. But that’s probably a separate discussion.I totally agree with that. What I meant to say is that MongoDB Realm App (Atlas App Service soon - MongoDB is renaming them) is already capable of handling this serverless workload.You can achieve the same result without Cloudflare entirely and replace the Cloudflare workers by Realm Functions (Atlas Functions soon). The difference is that Realm Functions have a built-in cache mechanism that handles the connection to the Atlas cluster for you.With MongoDB Realm you can handle the Rules, Schemas, App Users & Auth, GraphQL API, Functions, Triggers, HTTPS Endpoints (=webhooks for REST API) and front-end hosting.Or you could also use the Atlas Data API (== REST API) which can just be activated in one clic.With Serverless functions (from anywhere) you JUST want to execute the actual code and remove anything that would make you waste time (like initialise a framework, initialise a connection to another server (like MongoDB…), start a JVM, etc).With the implementation we did in the Cloudflare blog post, it works. Ok. But each call to this lambda/worker creates a brand new connection to the MongoDB Cluster (at least that’s my understanding) with it’s entire connection pool, etc. This is like a cold start and it’s also like a DDOS attack from the MDB cluster perspective given that you are not executing this only 3 times a day of course. A MongoDB cluster can only sustain a certain number of connections (for an M30 it’s 3000) and it costs memory to the cluster to open and close them. Not counting the network TCP handshakes.Realm Functions access the MongoDB cluster like this:This built-in context act as a cache that keeps a pool of MongoDB connections available for the serverless functions to use when they need it. No need to repeat the handshake & auth each time I want to talk to MDB.About Cloudflare D1, you just made me aware of its existence. So I have absolutely no idea what it’s worth. I just know it’s won’t scale like MongoDB does (because it’s SQL).I think Cloudflare workers don’t support Tier Party Libraries (NPM) entirely (https://workers.cloudflare.com/works) and I think the MongoDB Node.js driver isn’t supported. I would have used that for the proof of concept / blog post. But I had to use this weird workaround with the Realm Web SDK (not really proud) that is supposed to be used in a front-end (not a back-end function)… But it’s the only solution I had to get a connection with MongoDB.I hope it helps .\nMaxime",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "@MaBeuLux88 thanks for the excellent answer - possibly the best forum answer I got in a long time. My key takeaways are:Now that you mention Realm Functions as an alternative to Cloudflare workers, how do they compare?Thaaaanks - I know it’s a lot of answers, but reading through the documentation did not give me a clear indication if Realm functions is a direct alternative to Cloudflare/vercel/lamdba ",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "I recently implemented a quick POC using Cloudflare Workers as the backend of a web app and had to connect it to a MongoDB Atlas cluster. Cloudflare Workers currently supports HTTP and WebSockets but not plain TCP sockets. For this reason, as @MaBeuLux88 points out, the MongoDB Node driver is not supported. That being said, Cloudflare seems to be working on this limitation. Some workarounds to connect to a MongoDB cluster from Cloudflare Workers include:When using Realm, it seems that the blog implementation does not create a new connection with each request. This post suggests that Realm manages connections to Atlas automatically, depending on the requests made by client endpoints.",
"username": "Sergio_50904"
},
{
"code": "",
"text": "Hi Folks – Tackling a few of the latest questions on this thread. Note, we have recently renames MongoDB Realm (APIs, Triggers, and Sync) to ‘Atlas App Services’ to be clearer/more differentiated from the Realm SDKs.Finally @Sergio_50904 – on your connection management question – App Services essentially open connection pools between our hosts and your cluster and dynamically create/release connections based on demand. Connections can also be shared across multiple requests so you tend to open a more efficient number of connections at scale and pay the cost of opening a new connenction less frequently. This is true for all App Services (Sync/SDKs, Data API, GraphQL, Triggers).",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Thanks for your excellent answers - especially the honest answers on 5 and 6 For now, I think we will stay with more proven technology. Also - it seems like Altas App Services is currently missing the possibility to run locally - something I think most developers would identify as a critical feature for development.",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "Thanks for the feedback Alex – We have designed our CLI to be interactive and make it easy to work alongside the cloud while developing locally or in CI/CD pipelines, but I do understand that some folks prefer a local/emulated environment for development. It is certainly another area that we’re considering!",
"username": "Drew_DiPalma"
},
{
"code": "",
"text": "Yeah especially with MongoDB it makes sense to be able to run locally - since MongoDB is one of the few databases that will literally run everywhere \nIn our code architecture, we love to integration test against MongoDB running locally in-memory. It’s fast, makes for reliable tests, costs nothing, and almost emulates the production environment (looking at full-text search here )",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "@Drew_DiPalma @Sergio_50904Just found this article, and it seems very relevant for this discussion; Introducing the PlanetScale serverless driver for JavaScript.Planetscale today announces support for edge environments, using aFetch API-compatible database driverThis is exciting - becasue of the obvious question; Could MongoDB do the same thing? So we could finally have a solution for using MongoDB in combination with Cloudflare Workers/Vercel Edge/Netlify Functions.",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "This is true for all App Services (Sync/SDKs, Data API, GraphQL, Triggers).Hi @Alex_Bjorlig,I’m back from paternity leave !Now that Atlas Data API is released, you can use this in Cloudflare workers to communicate with MongoDB. This is the best solution as it’s a way to reach MongoDB over HTTPS without any dependency (driver) instead of direct TCP and this solves the auth problem that we had with the SDK.\nI think it’s also more simple to handle in the code so it’s another point for the Data API.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I hope you enjoyed the leave - I have 2 girls myself and it’s a lovely but busy time in the first years Do you have some numbers regarding the performance of the Atlas Data API compared to direct TCP? Because our application is pretty data-intense and we server-side render things, it’s essential for data-fetching to be very fast.",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "I don’t have numbers but I guess nothing would be better than your own tests based on your data & sizing.Maybe @Drew_DiPalma has some numbers or can bring the Data API team in the game?Adding this link as it’s on topic:",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "That would be amazing. Setting up tests with our own data would be quite the effort ",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "HI Alex - while exact numbers will always vary based on a multitude of things including region, query, amount of data you have, cluster size.Basic CRUD usage of a well optimized configuration is likely to be in the very low 100ms range (100-200ms) while drivers will perform in the 5-15ms range. Note that this is not a guarantee, but a rough estimate.This follows the architectural model that the Data API is solving for → it is meant to replace your API and Microservice layer by behaving like that managed middleware, not help you build a backend/middleware API service.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "@Sumedha_Mehta1 - thanks so much for writing this and being honest.100-200ms is nowhere fast enough for server-side rendering Let me try to summarize the (for me still unanswered) questions in this thread:Thank you so much ",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "It’s a different story with Vercel and AWS Lambda (compared to the initial Cloudflare workers).Vercel has a native integration with MongoDB Atlas:For AWS Lambda it’s a bit different because you directly have a NodeJS runtime with NPM support so you can leverage directly the MongoDB Driver but you have to cache the connection so you don’t connect each time you run a lambda.So if your cluster is located in a single region and you can host your Lambda in this same AWS region, there are good chances that you are going to be faster than Cloudlare & Atlas Data API.I hope this helps.\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks for the breakdown - it’s beneficial.Some follow-up questions:Looking at the Vercel-MongoDB repo you linked, is it simply one connection per aws-lambda container?If the answer to 1 is yes, is this table then essentially the limit of parallel containers that can be executed?How much time does it generally take for the MongoDB client to establish a connection when a cached connection is not available?So, based on my question “Does MongoDB have any plans to support data lookups from V8 edge functions (with good performance)?”, the answer is no?Thanks so much ",
"username": "Alex_Bjorlig"
},
{
"code": "",
"text": "No, MongoDB uses connection pools. So I think it’s more like a hundred connections per driver rather than one but it probably depends on the driver we are talking about as well as each have a different implementation (but follow the same specs).Yes but it’s more like 100 connections per driver instances or so and there are also internal connections for the Replica Set, etc.Main issue will be the latency between the server (back-end / Lambda) and the primary you are trying to reach I guess. It also depends on the Internet speed, network quality, etc. The connection itself it pretty fast, the problem is the path and worst case scenario, only the very first lambda would be slower. Once it’s initialised, it’s gonna run forever.No idea. We need someone else here.Maxime.",
"username": "MaBeuLux88"
}
]
| How does the MongoDB Data API work from a high-level perspective? | 2022-05-30T06:00:48.424Z | How does the MongoDB Data API work from a high-level perspective? | 6,906 |
null | [
"python",
"production"
]
| [
{
"code": "",
"text": "We are pleased to announce the 0.6.3 release of PyMongoArrow - a PyMongo extension containing tools for loading MongoDB query result sets as Apache Arrow tables, Pandas and NumPy arrays.This is a patch release that adds wheels for Linux AArch64 and Python 3.11, and\nfixes the handling of time zones in schema auto-discovery.See the 0.6.3 release notes in JIRA for the complete list of resolved issues.Documentation: [PyMongoArrow 0.6.3 Documentation])(PyMongoArrow 0.6.3 Documentation — PyMongoArrow 0.6.3 documentation)\nChangelog: Changelog 0.6.3\nSource: GitHubThank you to everyone who contributed to this release!",
"username": "Steve_Silvester"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| PyMongoArrow 0.6.3 Released | 2022-12-14T15:01:35.699Z | PyMongoArrow 0.6.3 Released | 1,468 |
null | [
"installation"
]
| [
{
"code": "user@ubuntu:~$ sudo systemctl start mongod\nuser@ubuntu:~$ sudo systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)\n Active: failed (Result: core-dump) since Wed 2022-12-14 10:43:06 UTC; 4s ago\n Docs: https://docs.mongodb.org/manual\n Process: 6318 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=dumped, signal=ILL)\n Main PID: 6318 (code=dumped, signal=ILL)\n\nDec 14 10:43:05 ubuntu systemd[1]: Started MongoDB Database Server.\nDec 14 10:43:06 ubuntu systemd[1]: mongod.service: Main process exited, code=dumped, status=4/ILL\nDec 14 10:43:06 ubuntu systemd[1]: mongod.service: Failed with result 'core-dump'.\nmongod --config /etc/mongod.confIllegal instruction (core dumped)sudo mongod --config /etc/mongod.confIllegal instructionsudo mongod --logpath ~/mongod.logIllegal instruction",
"text": "Hi there,I’m trying to get MongoDB 5.0.12 running on ubuntu 20.04.5 LTS, but whenever I try to start the mongod service through systemctl, I get the below output.At first I was trying to get 6.0.3 running but this gave me the same result, which is why I decided to switch to 5.0.12, which I already have running on a different development oriented ubuntu 22.04 machine.\nReinstalling the OS and trying the installation again from scratch gives me the exact same result.Running mongod --config /etc/mongod.conf returns Illegal instruction (core dumped).\nRunning sudo mongod --config /etc/mongod.conf returns Illegal instruction.\nRunning sudo mongod --logpath ~/mongod.log returns Illegal instruction and does not create a log file.I’m running out of ideas and could use some help ",
"username": "DF1229"
},
{
"code": "",
"text": "It is a know issue discussed at length. See Search results for 'status=4/ILL' - MongoDB Developer Community Forums",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for the reply, turns out (in my case at least) the server I was using had an older CPU which does not support the AVX instruction set required by MongoDB 5.0+.For future reference, see this list of CPU’s which support AVX: Advanced Vector Extensions - Wikipedia",
"username": "DF1229"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Community 5.0.12 Illegal instruction (core dumped) ubuntu 20.04.5 LTS | 2022-12-14T10:48:20.864Z | MongoDB Community 5.0.12 Illegal instruction (core dumped) ubuntu 20.04.5 LTS | 14,105 |
null | [
"queries",
"node-js",
"mongoose-odm"
]
| [
{
"code": "",
"text": "hi! is there a way to sort all movies of sample_mflix by year/released? i mean, some documents have YEAR property, some have RELEASED\nim working with node, express and mongoose\nthank you!",
"username": "mongu"
},
{
"code": "$addFields$ifNull$sortfinalYeardb.collection.aggregate([\n { \"$addFields\": { \"finalYear\": { \"$ifNull\": [\"$year\", \"$released\"] } } },\n { \"$sort\": { \"finalYear\": 1 } }\n])\n",
"text": "Hello @mongu, Welcome to the MongoDB community forum,No straight way, I would suggest you make a single property by combining both, for the temporary solution you can try the below aggregation query,Note: this query will not use the index if provided in year/released, and is expensive in performance!",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Sample_mflix year vs released | 2022-12-14T02:33:36.060Z | Sample_mflix year vs released | 1,074 |
null | [
"production",
"ruby",
"mongoid-odm"
]
| [
{
"code": "",
"text": "This patch release in the 8.0 series fixes the following issues:The the following minor improvement were added:",
"username": "Dmitry_Rybakov"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| Mongoid 8.0.3 released | 2022-12-14T11:55:10.762Z | Mongoid 8.0.3 released | 1,577 |
null | [
"crud"
]
| [
{
"code": "{\n favorites: [\n {\n id: new ObjectId(\"639707f36bf9468265d91810\"),\n expiresAt: 1671361200000,\n reminder: false\n },\n {\n id: new ObjectId(\"637cc4c986b4fbec43579e1f\"),\n expiresAt: 1672603200000,\n reminder: false\n }\n ],\n _id: new ObjectId(\"637e8af40e43f40373686da2\"),\n email: '[email protected]',\n forename: 'something',\n surname: 'something',\n role: 'user',\n password: 'something',\n __v: 0\n}\nconst userSchema = new Schema({\n email: String,\n forename: String,\n surname: String,\n role: String,\n password: String,\n favorites: {\n id: { type: Schema.Types.ObjectId, ref: \"Event\" },\n expiresAt: Number,\n reminder: Boolean,\n },\n}); \n1. User.findOneAndUpdate(\n { _id: req.body.user, \"favorites.id\": BSON.ObjectId(req.body.id) },\n { $set: { \"favorites.$.reminder\": true } },\n ).setOptions({ sanitizeFilter: true });\n2. User.findOneAndUpdate(\n { _id: req.body.user },\n { $set: { \"favorites.$[elem].reminder\": true } },\n {\n arrayFilters: [{ \"elem.id\": { $eq: BSON.ObjectId(req.body.id) } }],\n returnNewDocument: true,\n }\n ).setOptions({ sanitizeFilter: true });\n",
"text": "Hello all,I have problems updating a subdocument in an array of subdocuments.Here is my data structure in the users collection:My Schema is:I want to update the reminder field in a subdocument based on the subdocument’s id.I’ve tried following approaches:Here nothing happens. It finds the document but does not update it.Here it returns an error: “Error: Could not find path “favorites.0.id” in schema”I cannot find where is my mistake? Any help is much appreciated!P.S.Mongo version is 5.0.14",
"username": "Kaloyan_Hristov"
},
{
"code": "favoritesconst userSchema = new Schema({\n email: String,\n forename: String,\n surname: String,\n role: String,\n password: String,\n favorites: [\n {\n _id: false,\n id: { type: Schema.Types.ObjectId, ref: \"Event\" },\n expiresAt: Number,\n reminder: Boolean,\n }\n ]\n});\n",
"text": "Hello @Kaloyan_Hristov,It causing the issue because the type of the favorites property is an object in your schema, and your example document has that in the array of objects,So if it is an array of objects then you need to change you schema like this,After changes in schema, try your query I think it should work.",
"username": "turivishal"
},
{
"code": "",
"text": "Hi,Amazing how stupid I am That was the problem.Thank you very much!!!",
"username": "Kaloyan_Hristov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| I have problem updating a subdocument in an array of subdocuments | 2022-12-13T19:24:31.351Z | I have problem updating a subdocument in an array of subdocuments | 1,558 |
null | [
"queries"
]
| [
{
"code": "",
"text": "Hello Members,\nI am new to the community and last year CSE Student.\nI am going to take one of the MongoDB Certifications. But got confused about which one to take.\nWhich certification is beneficial for me either MongoDB Developer or MongoDB DBA?\nI also want to know about career perspectives of a DB Developer and DBA.\nCan you please help?",
"username": "Mansi_Panchal"
},
{
"code": "",
"text": "Welcome @Mansi_Panchal to the community Which certification is beneficial for me either MongoDB Developer or MongoDB DBA?\nI also want to know about career perspectives of a DB Developer and DBA.You will find many different opinions on this. At the bottom line these are two very different disciplines. You will need to get your hands dirty in both to make a personal decision. As a student I’d recommend to do this via internships - Good to get a feeling what better fits to you and good to build a base for further opportunities.MongoDB DBA is kind of rare and in case you find a company looking specifically for this you are in a good position, as you can deliver this. However it might be more beneficial to have already a DevOps background before taking this learning path. The learning path is on MonogDB Administration not on learning CI/CD pipelines (to name only one aspect). Also it might be an aspect to think of the current “cloud first / serverless” mainstream this leads to a more developer centric demand.MongoDB Dev also will not teach you how to code. It will focus on the MongoDB specifics. Since a student most likely spend a lot of time coding, this learning path might add easier value.In general both paths are very well made and will add value to your skills. As said in the beginning in a first step it will help to find out your personal preferences and skills.Beyond this I recommend to take both when you want to get deep into the story. Based on experience especially senior (MongoDB) Devs benefit from DBA trainings since this will explain a lot of the “mechanics” which makes the engine so smooth at high performance.Disclaimer: this is my personal opinion, I like to encourage others to add on or confirm.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Welcome to the forums @Mansi_Panchal!@michael_hoeller provided some great insight into our different learning paths and associated certifications. For some help making your decision, please check out these resources if you haven’t already:*Associate DBA Practice Questions\n*Associate DBA Study Guide\n*Associate Developer Practice Questions\n*Associate Developer Study GuideHope you find these helpful and good luck on your exam! ",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "Thank You @michael_hoeller for the detailed response.\nIt would surely help.\n",
"username": "Mansi_Panchal"
},
{
"code": "",
"text": "Thanks @Aiyana_McConnell ",
"username": "Mansi_Panchal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| What is the difference between MongoDB Developer Path and MongoDB DBA? | 2022-12-13T04:54:32.250Z | What is the difference between MongoDB Developer Path and MongoDB DBA? | 3,467 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "",
"text": "Background: We have a users table and a office location table. A user record has an office_location_id. We need to show a list of users and their office locations (10 results per page), and also be able to sort the data by office location name ASC or DESC.We currently have an aggregate query, that pulls all user records and then does a lookup in the office locations table. Then we sort by that column, then skip/limit the results to get the first 10 results. It works perfectly fine, but extremely slowly (5s)We have 30,000 users and each has an office location. I don’t know the internals of Mongo, but I assume it has to loop over each of those 30,000 users to do the office lookup for each one, just to then show only 10 results (i.e. it’s having to lookup 29,990 unneeded office locations).If we move the lookup/sort after the skip/limit, then it becomes much quicker, by obviously only sorts the 10 results returned, leading to incorrect sorting per page (ABC,ABC instead of AAB,BCC). So we need to do the lookup and sort before skip/limit.Is there a better way to do this sort of thing?",
"username": "Kieran_Pilkington"
},
{
"code": "aggregate()$lookup$sortdb.table1.aggregate([\n {\n $lookup: {\n from: \"table2\",\n localField: \"id\",\n foreignField: \"table1_id\",\n as: \"table2_data\"\n }\n },\n {\n $sort: {\n \"table2_data.column1\": 1\n }\n }\n])\n\n$lookupidtable1_idtable2_data$sortcolumn1table2_data$lookup$sort",
"text": "To efficiently look up data in a joining table and sort the results by that data in MongoDB, you can use the aggregate() method and specify a $lookup stage to perform the join, followed by a $sort stage to sort the results. Here is an example of how you might do this:n this example, we are using the $lookup stage to perform the join between table1 and table2, using the id field in table1 and the table1_id field in table2. The results of the join are stored in the new table2_data field. We then use the $sort stage to sort the results by the column1 field in the table2_data field.As with the SQL example, the specific details of your query will depend on the structure of your data and the specific data you are trying to retrieve. You can read more about the $lookup and $sort stages in the MongoDB documentation.",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "I’d also recommend potentially creating an index that could be used when joining the tables. That is if this is a key part of what you are doing and/or you regularly are searching for the field that you would index in your app.",
"username": "Wayne_Barker"
}
]
| How to efficiently lookup joining table and sort by that data | 2022-12-13T23:11:10.271Z | How to efficiently lookup joining table and sort by that data | 1,785 |
null | [
"aggregation",
"queries",
"node-js"
]
| [
{
"code": "const productSchema = new Schema(\n {\n variants: [\n {\n attrs: {},\n },\n ],\n },\n { minimize: false, timestamps: true }\n);\n",
"text": "In an array of objects, I want to find all objects that has a certain key-value pair and store them all in a different variable.",
"username": "Good_Going"
},
{
"code": "$elemMatch$elemMatch",
"text": "I believe you need to use the $elemMatch operator for this one.\nWhat you would do is, query for the field you want to return where you use $elemMatch to see the nested field that equals what you are looking for.",
"username": "Wayne_Barker"
}
]
| Query on objects contained within an array of objects | 2022-12-14T01:53:07.624Z | Query on objects contained within an array of objects | 802 |
null | [
"replication"
]
| [
{
"code": "",
"text": "Hi ,Just wanted to consult regarding applications that attempts to connect on hidden members , from my understanding , only replication work between nodes that will have a connection on hidden members but for some reason one of the application that is connected to the replicaset attempts to connect on a member that is hidden, is this a normal behavior of mongodb drivers? Thanks",
"username": "Daniel_Inciong"
},
{
"code": "",
"text": "Hi @Daniel_Inciong,I found this in the MongoDB Driver specifications.For server discovery and monitoring purposes, the client must be able to connect to all the nodes in the RS. This doesn’t mean it will perform read or write operations there. At least that’s my understanding.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Hi ,Sorry for the late response. Will check on the provided link. Thank you again for the assistance",
"username": "Daniel_Inciong"
}
]
| Application attempts to connect on hidden members | 2022-12-01T06:30:30.550Z | Application attempts to connect on hidden members | 1,281 |
null | [
"realm-web",
"flutter"
]
| [
{
"code": "",
"text": "I’m trying to use the Realm in our production application but I see that Realm does not support Flutter web. I’ve tried using an unofficial library but with no success. I need to be able to listen to database changes in the front-end. Is there any other way to do this? if not, is there a plan to support Flutter web in the near future?",
"username": "Lucian_Simo"
},
{
"code": "",
"text": "There’s no plan to support Flutter for web in the near future. If you need to access MongoDB data, then we can recommend using the GraphQL API, though you’ll need to write the code to access it yourself.",
"username": "nirinchev"
},
{
"code": "",
"text": "Good to know. I will look into GraphQL API. thanks a lot",
"username": "Lucian_Simo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Realm support for Flutter Web | 2022-12-13T16:51:47.055Z | Realm support for Flutter Web | 2,648 |
null | [
"kafka-connector"
]
| [
{
"code": "",
"text": "Hello,\ni have replication from 3 nodes ( 1 primary and 2 secondry)\nfrom 2 weeks i get this message and when i get this message the insert to db stopped2022-05-23T22:46:03.329+0300 E - [conn8139] Assertion: BSONObjectTooLarge: BSONObj size: 17064926 (0x10463DE) is invalid. Size must be between 0 and 16793600(16MB) First element: note: “all times in microseconds” src/mongo/bson/bsonobj.cpp 102\n2022-05-23T22:46:04.311+0300 E - [conn8140] Assertion: BSONObjectTooLarge: BSONObj size: 17064926 (0x10463DE) is invalid. Size must be between 0 and 16793600(16MB) First element: note: “all times in microseconds” src/mongo/bson/bsonobj.cpp 102\n2022-05-23T22:46:05.293+0300 E - [conn8141] Assertion: BSONObjectTooLarge: BSONObj size: 17064926 (0x10463DE) is invalid. Size must be between 0 and 16793600(16MB) First element: note: “all times in microseconds” src/mongo/bson/bsonobj.cpp 102\n2022-05-23T22:46:06.298+0300 E - [conn8142] Assertion: BSONObjectTooLarge: BSONObj size: 17064926 (0x10463DE) is invalid. Size must be between 0 and 16793600(16MB) First element: note: “all times in microseconds” src/mongo/bson/bsonobj.cpp 102\n2022-05-23T22:46:07.318+0300 E - [conn8143] Assertion: BSONObjectTooLarge: BSONObj size: 17064926 (0x10463DE) is invalid. Size must be between 0 and 16793600(16MB) First element: note: “all times in microseconds” src/mongo/bson/bsonobj.cpp 102\n2022-05-23T22:46:08.334+0300 E - [conn8144] Assertion: BSONObjectTooLarge: BSONObj size: 17064926 (0x10463DE) is invalid. Size must be between 0 and 16793600(16MB) First element: note: “all times in microseconds” src/mongo/bson/bsonobj.cpp 102\n2022-05-23T22:46:09.302+0300 E - [conn8145] Assertion: BSONObjectTooLarge: BSONObj size: 17064926 (0x10463DE) is invalid. Size must be between 0 and 16793600(16MB) First element: note: “all times in microseconds” src/mongo/bson/bsonobj.cpp 102\n2022-05-23T22:46:10.315+0300 E - [conn8146] Assertion: BSONObjectTooLarge: BSONObj size: 17064926 (0x10463DE) is invalid. Size must be between 0 and 16793600(16MB) First element: note: “all times in microseconds” src/mongo/bson/bsonobj.cpp 102because of this message make big LAG in Kafkahow can solve this issue?thanks",
"username": "Eng_Dawoud_Esmaeil"
},
{
"code": "",
"text": "There is change stream metadata that is included in the kafka topic message. When the document itself is large this extra metadata makes it go above the 16MB limit. As of now your only option is to reduce the size of the document. You could also enable writing errors to the DLQ on the sink so your connector will keep working if they run into these errors.",
"username": "Robert_Walters"
},
{
"code": "",
"text": "how does one reduce the size of the document?",
"username": "Palani_Thangaraj"
}
]
| Mongo log issue | 2022-05-23T19:47:45.691Z | Mongo log issue | 3,103 |
null | [
"swift",
"app-services-user-auth"
]
| [
{
"code": "",
"text": "Hi,I want to implement a passwordless authentication method (magic link). Therefore, I have to implement two steps for the authentication:My plan is to implement one function for each step. The first function is exposed using an HTTPS endpoint (system authentication as the user is not yet logged in), the second is triggered with the swift-sdk custom function authentication credential method.My question is, first, is there a better way to implement this authentication? Second, how would I secure the HTTPS endpoint against e.g. DDoS attacks? I read in the docs that one can secure the endpoint with a secret, but is it best practice to store a secret somewhere in my project (e.g. info.plst)?Thanks for the help\nDominik",
"username": "Dominik_Hait"
},
{
"code": "",
"text": "Can you elaborate a bit on what the tie-in is between this question and MongoDB Realm? Are you using Realm Authentication or some other technology?When you say the first function is exposed using an HTTPS endpoint , what endpoint? Is this a web based or device based app?",
"username": "Jay"
},
{
"code": "",
"text": "Yes sure. I am using MongoDB Realm as my database service with sync enabled and Realm Authentication for authenticating my users. Currently, I have Apple Sign In + Email/Password authentication enabled, but I want to change the latter to a passwordless method using magic links.Currently, I have the passwordless authentication running as an app service function which is tied to the custom function authentication method. Within the app I first call the login method of the realm sdk with the custom function credentials sending the email address as payload. The function then sends an email with a magic link to the user’s email address but fails because no token is send in the payload (on purpose). When the user clicks on the magic link, the login method is again called with the custom function credential, sending the email and token as payload. This time, if the token is correct (checked against a custom user collection), the user id is returned and the user is logged in. I do not use any third party service for the magic link authentication (all custom implemented in the app service function)I guess it is not intended to call the login method twice and catch the error the first time (when the magic link is sent). Therefore my question whether the intended way for a multiple step custom function is to separate the app service function into two functions and expose the first function to send the magic link as an https endpoint (within app services). If so, how would one secure the endpoint, because it would need to be set to system authentication as the user is not logged in. I am not sure about storing a secret within the project.",
"username": "Dominik_Hait"
},
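{
"text": "A rough sketch of the second step described above, written as an App Services custom-function authentication handler; the database, collection and field names (app, users, email, token, externalId) are assumptions rather than the actual schema, and token expiry/cleanup is omitted.",
"code": "// Custom function authentication provider (App Services function)
exports = async function (payload) {
  const users = context.services
    .get(\"mongodb-atlas\")
    .db(\"app\")            // assumed database name
    .collection(\"users\"); // assumed custom user collection

  // Only log the user in when both the email and the emailed token match
  const user = await users.findOne({ email: payload.email, token: payload.token });
  if (!user) {
    // Throwing here makes the login attempt fail
    throw new Error(\"Invalid or expired magic link\");
  }

  // The returned string becomes the user's stable external id
  return user.externalId.toString();
};
",
"username": "editor_note"
},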
{
"code": "",
"text": "Ok. So you’re using Realm Authentication with Custom Function User and the payload contains the users email address.That email address is massaged externally and an email is sent. Can we assume the external service generates some kind of external user id after the user clicks the link in the email and returns it to the custom function?",
"username": "Jay"
},
{
"code": "",
"text": "Ok. So you’re using Realm Authentication with Custom Function User and the payload contains the users email address.Yes, correct.That email address is massaged externally and an email is sent. Can we assume the external service generates some kind of external user id after the user clicks the link in the email and returns it to the custom function?Basically yes, but I am not using any external service for user authentication. I implemented everything myself within the custom function. Within that function I create an entry for the email address (if not already existing) in a custom mongodb user collection containing the email address, an “external” user id as well as a token. The token is send to the email address (using Sendgrid, which is only used for sending an email containing the magic link which I generated).The part I don’t get is how to move forward. Currently I just call the login method in the swift sdk twice to handle the initial login (only email address as payload), which triggers sending the email containing the magic link, and the second time when the user clicks the link and then contains the email address + token as payload in which case the custom function returns the “external” user id.You said the “external service” should return an external user id to the custom function. Is the custom function in this case waiting for the external service (until the user clicks the magic link)? If yes, how would you implement such a logic within mongodb only, meaning without external service (using app service functions + custom user collection).",
"username": "Dominik_Hait"
}
]
| Multiple step custom function authentication (magic link) | 2022-12-12T16:39:13.085Z | Multiple step custom function authentication (magic link) | 2,194 |
[
"aggregation",
"dot-net"
]
| [
{
"code": "",
"text": "Hi,\nCan somebody help me convert this aggregation(that i created using the mongodb for vscode extension)\nto C# code? I don’t know how to use the conversion that the extension generates, if thats easier and someone can help me how to use it properly i’ll be gratefulHere’s the aggregation im trying to use on C#\n\nimage360×620 11.8 KB\n",
"username": "Henrique_Shoji"
},
{
"code": "",
"text": "Did you see this post and link to the C# builder library?",
"username": "Marcus"
},
{
"code": "",
"text": "Didn’t found it before, thanks!\nI’ll give it a try",
"username": "Henrique_Shoji"
},
{
"code": "",
"text": "Marcus is correct. Your best bet right now is to use David Golub’s Atlas Search NuGet package. That will give you a better development experience than using JSON/BSON stages as demonstrated in our driver documentation here.Note that we are currently merging David Golub’s work into the core driver and it will be released in an upcoming version. Please follow CSHARP-3437 for details.James",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Ok, thanks you guys for the fast reply and for the help!\nIm gonna check the Atlas Search package and give it a try.",
"username": "Henrique_Shoji"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Can someone help me conver this aggregation to C#? | 2022-12-13T14:24:01.960Z | Can someone help me conver this aggregation to C#? | 1,785 |
|
null | [
"aggregation",
"queries",
"dot-net",
"atlas-search"
]
| [
{
"code": "{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"Details\": {\n \"fields\": {\n \"Customer\": {\n \"fields\": {\n \"Id\": {\n \"type\": \"objectId\"\n }\n },\n \"type\": \"document\"\n },\n },\n \"type\": \"document\"\n },\n }\n}\nnew BsonDocument(\"equals\",\n new BsonDocument { new BsonElement(\"path\", \"Details.Customer.Id\"),\n new BsonElement(\"value\", new BsonBinaryData(customerId, GuidRepresentation.CSharpLegacy)) });\n{ \"equals\" : { \"path\" : \"Details.Customer.Id\", \"value\" : CSUUID(\"d9e9e08b-11f7-4c38-ab42-85bff1e022d4\") } }\n",
"text": "HiI have a collection named Projects that contains an embedded document Customer, with an Id field that stores a Guid value from .NET (NUUID)I’ve built the search index this way:I’d like to know if this is the very best way to define the index in this particular case and, if so, how can I perform the query in the $search stage.I’ve tried the code below, in C#, but it didn’t work:Ele produziu a seguinte pipeline :E o seguinte erro:\n“compound.filter[1].equals.value” must be a boolean or objectId\"How to implement this?Thanks for now",
"username": "Jeferson_Soares"
},
{
"code": "",
"text": "Hello @Elle_ShwerI’m mentioning you in this post, because I’ve found another one that you replied that UUID fields are not currently supported (Atlas Search - Compound Index UUID + Text).\nI’d like to know if there is any expectation of support for this field type and how I could follow the ongoing process.Thanks for now\nI will be waiting anxiously for a reply",
"username": "Jeferson_Soares"
},
{
"code": "",
"text": "Hi Jefferson, you can track our plans in this feedback item . Voting on it helps us prioritize it and will keep you posted on when we do it.",
"username": "Elle_Shwer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Search index and query with C# Guid field | 2022-10-10T20:51:16.907Z | Search index and query with C# Guid field | 3,016 |
null | [
"server",
"security",
"atlas"
]
| [
{
"code": "",
"text": "Hello,User Case:We have a local server and deployed a project, and that project will access through the LAN to another chield PCs,We are using a wifi from a mobile/hotspot or net setter or dongle to connect the internet in just an only a local server for data transmission to the atlas,So we are adding the public IP address to the IP whitelist to Atlas,Problem:The problem is we have to use wifi from a mobile/hotspot or net setter or dongle method to connect the internet to the server, This connection is not persistent, sometimes it is disconnect because of a weak network.So again we have to add the new IP address to IP whitelist in Atlas,Second, we have to shut down the server every night and start in the morning, so every day we have to add the new IP address to IP whitelist in Atlas,So we are searching for another method of authentication for Atlas server, instead of an IP whitelist, any token-based authentication so we don’t need to add an IP whitelist every time.Thanks",
"username": "turivishal"
},
{
"code": "",
"text": "Hi @turivishalJust an idea: how about using some service fronting the database, instead of opening the database itself to the end application? For example, something like what’s described in this blog post: REST APIs with Java, Spring Boot and MongoDB so that you only need to whitelist the REST API’s server IPs in Atlas, and open the API server to the world (with some authentication of course) so your app can use it?Alternatively since you’re already using Atlas, you may be able to use Realm HTTPS endpoints executing some Realm functions that does the data manipulation. As a bonus, there are some built-in authentication methods with the HTTPS endpoints.Those are just two ideas off the top of my head. I’m sure there are many other ways of doing this, with some tradeoffs specific to each technique.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thanks, @kevinadi,Got your idea, I have to update my whole code for that and it will take time, but, this will work for my situation, We will implement one of the methods.Thanks.",
"username": "turivishal"
},
{
"code": "",
"text": "I have integrated the MongoDB Atlas Administration API > Project IP Access List,https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Project-IP-Access-List",
"username": "turivishal"
},
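{
"text": "For reference, a hedged curl sketch of the Project IP Access List endpoint mentioned above; the public/private API key, group id, IP address and comment are placeholders, and the programmatic API key needs sufficient project permissions.",
"code": "curl --user \"{PUBLIC-KEY}:{PRIVATE-KEY}\" --digest \\
  --header \"Content-Type: application/json\" \\
  --request POST \\
  \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/accessList\" \\
  --data '[{ \"ipAddress\": \"203.0.113.10\", \"comment\": \"current hotspot IP\" }]'
",
"username": "editor_note"
},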
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Alternative of IP WhiteListing in Atlas? | 2022-04-08T07:44:01.276Z | Alternative of IP WhiteListing in Atlas? | 4,386 |
null | [
"python",
"database-tools"
]
| [
{
"code": "",
"text": "I’m new to MongoDB and have taken the M001 course but not sure how to perform the following:\nI have a large number of json files that are “metadata” and “data”. The data documents are a many to one relationship to the metadata, ie one metadata document is the same for many data documents. How to I insert the metadata document, then all of the data documents with a reference to its respective metadata document? I suspect the best reference to use in the data document is the metadata’s _id, but not sure how to insert the metadata document, get it’s _id and then add that to the data documents as a field prior to the insert. I should also mention that I would put the metadata and data documents in different collections. I have also found a mongodb tool called mongoimport that may be useful to do this, but suspect I will have to insert all of these files via a program in javascript or python.\nAny thoughts on this and the best approach would be appreciated.",
"username": "Leon_Werenka"
},
{
"code": "//Metadata collection\n{\n _id : \"abc\" ,\n metadataField1 : \n ...\n}\n\n// Data collection\n{\n _id : \"111\",\n metadataId : \"abc\",\n dataField1 : ... ,\n ...\n}\n\n{\n _id : \"222\",\n metadataId : \"abc\",\n dataField1 : ... ,\n ...\n}\n \n",
"text": "Hi @Leon_Werenka ,Usually we will see a structure such as :This will form a one to many relationship between the collections , you can still add additional data to the metadata collection like “numberOfDataDocs” and maintain it over time.Now the import question can be done in many ways. You can write script or code that first insert the metadata gets the ids needed and embed it in the data docs when they are inserted.If you use an import tool that cannot perform this logic easily then either have the ids pre set in the loaded documents or load into a staging collection and use $merge aggregation to populate themThanks\nPavel",
"username": "Pavel_Duchovny"
},
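{
"text": "A small PyMongo sketch of the 'insert the metadata first, then embed its id in the data documents' approach described above; the connection string, database name and field names are placeholders.",
"code": "from pymongo import MongoClient

client = MongoClient(\"mongodb://localhost:27017\")  # placeholder connection string
db = client[\"mydb\"]                                # placeholder database name

# 1. Insert the metadata document and capture its generated _id
meta_id = db.metadata.insert_one({\"source\": \"batch-001\", \"numberOfDataDocs\": 2}).inserted_id

# 2. Insert the data documents with a reference back to the metadata
db.data.insert_many([
    {\"metadataId\": meta_id, \"dataField1\": \"...\"},
    {\"metadataId\": meta_id, \"dataField1\": \"...\"},
])
",
"username": "editor_note"
},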
{
"code": "",
"text": "Thanks for the comments. This approach does solve the problem by overriding the default GUID-type assigning of the _id field. I guess that is okay, but one would then have to be careful of not re-using an _id. Or one could generate GUID id’s using the scripting language.\nWhen using the database generated _id, for finding the metadata _id, one would probably have to have another field like “new”: true to find the newest metadata document submitted, get the _id, update the “new”: false, then insert the data documents with the _id.\nIs there an update or insert function that could return the _id as part of the return value by specifying something like {_id = 1} or can this only be done with find() functions?",
"username": "Leon_Werenka"
},
{
"code": "db.metadata.findOneAndUpdate({metadataId : \"abc\"},{$set : {timestamp : new Date()} }).sort({timestamp : -1})\n",
"text": "Hi @Leon_Werenka ,You don’t have to use _id to perform as your primary key. You can have a metadataId and timestamp being the primary key.Then you can index {metadataId : 1, timestamp: -1}This will allow you to get findOne when sorted by timestamp : -1We do have a findAndUpdateOne method thqt can find a document and also update it :This will always get the latest version.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
]
| How to insert documents and have a reference from "data" documents to a "metadata" document | 2022-12-12T16:59:15.129Z | How to insert documents and have a reference from “data” documents to a “metadata” document | 1,650 |
null | [
"android",
"kotlin"
]
| [
{
"code": "LocalDate override fun filterDiariesByDate(localDate: LocalDate): Flow<RequestState<List<Diary>>> {\n return if (user != null) {\n try {\n realm.query<Diary>(\n \"ownerId == $0 AND date == $1\",\n user.identity, ???\n ).asFlow()\n .map { result -> RequestState.Success(data = result.list) }\n } catch (e: Exception) {\n flow { emit(RequestState.Error(e)) }\n }\n } else {\n flow { emit(RequestState.Error(UserNotAuthenticatedException())) }\n }\n }\n",
"text": "I have a a Date type of a field in my collection schema. I need to search and query only the items of a specific day in the month that the user chooses (From a date picker in Android) . I’m passing a LocalDate to that function. after user selection. Now my question is, how can I make a query with that LocalDate type, in order to get the right information?That Date type of a filed actually represents a RealmInstant btw, which is normal.",
"username": "111757"
},
{
"code": " override fun filterDieariesByDate(localDate: LocalDate) {\n ...\n realm.query<Diary>(\"ownerId == $0 AND date < $1\",\n user.identity,\n \"${localDate.year}-${localDate.month}-${localDate.dayOfMonth}\"\n )\n ...\n }\n",
"text": "I’ve tried following the official documentation (https://www.mongodb.com/docs/realm/realm-query-language/#date-operators). And I’ve written a query like this:But I’m getting an error: Unsupported comparison between type ‘timestamp’ and type ‘string’.\nCan someone please point me in the right direction? Also it would be great to have some more docs about querying a date.",
"username": "111757"
},
{
"code": "",
"text": "@111757 : Why don’t you consider converting back the local date to RealmInstant and passing it as an argument to the query.",
"username": "Mohit_Sharma"
},
{
"code": " realm.query<Diary>(\n \"ownerId == $0 AND date < $1\",\n user.identity,\n RealmInstant.from(zonedDateTime.toLocalDate().toEpochDay(), 0)\n )\n",
"text": "I’ve tried passing a local date, but even though the result was successful, I did receive an empty list. This is my query:",
"username": "111757"
},
{
"code": "RealmInstant.from(zonedDateTime.toLocalDate().toEpochDay(), 0)",
"text": "Try running the query at Atlas like this to validate if everything is good or not?\nScreenshot 2022-12-12 at 13.51.222146×242 16.7 KB\n where date is RealmInstant.from(zonedDateTime.toLocalDate().toEpochDay(), 0).",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "I’ve tried, but still an empty response:\n11515×590 25.7 KB\n\n22341×566 2.44 KB\n\n33324×545 2.33 KB\n",
"username": "111757"
},
{
"code": "",
"text": "I think so there is something wrong with your date altogether.Have look at my model\nWhich get saved like this\n\nScreenshot 2022-12-13 at 11.44.151448×222 16.5 KB\nAnd you can search like\n\nScreenshot 2022-12-13 at 11.56.321678×774 36.7 KB\n",
"username": "Mohit_Sharma"
},
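{
"text": "One likely culprit hinted at above is the LocalDate conversion: toEpochDay() returns days, not epoch seconds, so the resulting RealmInstant is wrong. A hedged Kotlin sketch of querying one whole day, assuming the Diary model and field names used earlier in this thread:",
"code": "import java.time.ZoneId
import io.realm.kotlin.types.RealmInstant

// Convert the picked LocalDate into a [startOfDay, startOfNextDay) range
val zone = ZoneId.systemDefault()
val start = RealmInstant.from(localDate.atStartOfDay(zone).toEpochSecond(), 0)
val end = RealmInstant.from(localDate.plusDays(1).atStartOfDay(zone).toEpochSecond(), 0)

realm.query<Diary>(
    \"ownerId == $0 AND date >= $1 AND date < $2\",
    user.identity, start, end
)
",
"username": "editor_note"
},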
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Search items by a Date - Query | 2022-12-10T12:48:57.406Z | Search items by a Date - Query | 3,062 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "Modeling\nA Widget has 1 or more Gadgets\nA Widget has 1 or more Gizmos\nA Widget has 1 or more SprocketsUse Case\nWhen the Widget is “in assembly”, it has an indeterminant number of Gadgets, Gizmos, and Sprockets components. Information on these components may be changed WITHOUT changing information on the Widget.As such, it seems most appropriate for me to use the Document Reference pattern.HOWEVER - once the Widget has been fully assembled, ALL details are frozen.In this case, is it overkill to, upon “fully assembled” status, to delete the documents in the Gadgets/Gizmos/Sprockets collections and embed those documents on the Widget document? Document size is never going to be an issue - we’re talking about under 100 KB for the aggregate- and that’s being very generous on size.To restate my question, in case the example isn’t clear:\nGiven a set of collections which reference a parent collection - once a series of actions is finished, the data for the document represented in the parent collection, and the documents in its child collections, will become read-only.Does it make sense, in this case, to delete the child documents and embed them onto the parent document? Is this overkill? Could it make my database more performant to keep the number of documents in the child collections to a minimum?Thanks for any help.",
"username": "Michael_Jay2"
},
{
"code": "",
"text": "Hi @Michael_Jay2 ,Optimizing your schema for your access patterns after a certain processing is always welcome.It is acceptable to have one pattern while assembling a list and then combining the list into a single document.My question will be what prevents you from embedding the parts in the parent document at first? Is it the complexity of operations against it or the concern for a single document level locking?Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Pavel,\nThank you for the reply!I would love to use the embedded pattern from the beginning.Take this case:\nDuring assembly, N Gizmos may be added to the Widget.\nThen, Gizmo[0] may have a property that needs to be changed: say from {color: “blue”} to {color: “green”}I was under the impression that, given the dynamic nature of the child documents that it would be better to use the reference pattern so only the related document (or sub-document, in the case of the embedded pattern) would be updated.In other words - my reasoning is that with embedded I would be running:\nUPDATE Widget.Gizmo[0].color\nwhereas with reference, I would runL:\nUPDATE Gizmo[].color",
"username": "Michael_Jay2"
},
{
"code": "",
"text": "Pavel,\nWhile working on something else, I stumbled on this guide:I actually didn’t know this was a thing! I’m pretty sure this is going to be the best of both worlds. Is this what you had in mind with your question?My question will be what prevents you from embedding the parts in the parent document at first? Is it the complexity of operations against it or the concern for a single document level locking?",
"username": "Michael_Jay2"
},
{
"code": "$set : { \"gizmo.0.color\" : yellow}\n",
"text": "Hi @Michael_Jay2 ,Yes exactly, you have great operators on arrays.But if you always need to access a specific position :Will work.I would try to use the assembled version rather than the disassemble one.Doing less in the application will make it go faster Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Pavel,\nThanks again! I’m going to work on that today!While I have you - looking through the docs and guides, I haven’t seen yet an answer for this use case:Add a Gizmo to Widgets.Gizmos array. If Widgets.Gizmos does not exist, create it.",
"username": "Michael_Jay2"
},
{
"code": "{\n Gizmos: []\n}\n",
"text": "Well you can always start with an empty array:Use $addToSet or $push to update it.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
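{
"text": "A small mongosh sketch of the $push suggestion above — $push (and $addToSet) will create the Gizmos array on the Widget document if it does not exist yet; the collection name and gizmo fields are assumptions.",
"code": "db.widgets.updateOne(
  { _id: widgetId },                        // the parent Widget
  { $push: { Gizmos: { color: \"blue\" } } } // creates the Gizmos array if missing
)
",
"username": "editor_note"
},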
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Is it a reasonable idea to move from document reference to embedded documents for my particular use case? | 2022-12-12T01:01:01.069Z | Is it a reasonable idea to move from document reference to embedded documents for my particular use case? | 1,404 |
null | [
"compass"
]
| [
{
"code": "",
"text": "I am using compass and tried to upload json-converted pdf files,but that fails ‘matrix not supported?’I also tried to import pdf directly into the collection but that also fails. My files are approx 1 MB each.How shall i move forward?",
"username": "Joseph_Pareti"
},
{
"code": "",
"text": "What are json-converted pdf file? Would you be able to share an example?",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "I started with a pdf file and the json file based on it. The translation was made using an online program. Unfortunately I am not allowed to upliad files",
"username": "Joseph_Pareti"
},
{
"code": "",
"text": "I think your are on your own.We have no clue on how a PDF (a portable file format to represent text and other type of printable stuff) and JSON (an structural way to represent data) can be related. Unless of course your PDF files represent data in some sort of matrix. But we do not now. I see a few way for you to get out of this mess.",
"username": "steevej"
},
{
"code": "",
"text": "there is nothing confidential in my files, they are press reports. I could not upload because this system does not allow me to do so",
"username": "Joseph_Pareti"
},
{
"code": "",
"text": "Sorry I misinterpreted the I am not allowed to upload files.Could you post a link to the the files?",
"username": "steevej"
},
{
"code": "",
"text": "here are the links to the source pdf file and the json translation using an online pdf2jason programGoogle Drive file.Google Drive file.",
"username": "Joseph_Pareti"
},
{
"code": "Operation passed in cannot be an Array\n",
"text": "The json file looks good at first sight so I did some experimentation.I can load the file fine with firefox and there is no error generated.If I cut-n-paste the whole and use ADD DATA → Insert Document in Compass it works.But, if I save the same cut-n-paste into a file and try ADD DATA → Import File it does not work. But the error message is different from yours. What I get isSo I tried what would be completely counter-intuitive. I made the whole document an array by inserting, before the first character, an opening square bracket [ and appending, after the last character, a closing square bracket ]. I was able to Import File after that change.",
"username": "steevej"
},
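{
"text": "An alternative to the bracket-wrapping workaround above, sketched with mongoimport: the --jsonArray flag accepts a file whose top level is a JSON array; the URI, collection and file names are placeholders.",
"code": "mongoimport --uri \"mongodb+srv://user:password@cluster.example.mongodb.net/test\" \\
  --collection coll0 \\
  --file pdf-converted.json --jsonArray
",
"username": "editor_note"
},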
{
"code": "db.coll0.find({\"pdfDoc\":\"Sam\"})\n",
"text": "Thank you so much. Apology for asking this type of questions:returns Symbol ‘db’ is undefined",
"username": "Joseph_Pareti"
},
{
"code": "{\"pdfDoc\":\"Sam\"}",
"text": "I have no clue.Never had this issue. Post a screenshot of where you run this command. May be you do it in the wrong place.Note that the query{\"pdfDoc\":\"Sam\"}will not return any document.",
"username": "steevej"
},
{
"code": "",
"text": "here is a status report with the issues I spoke about",
"username": "Joseph_Pareti"
},
{
"code": "{\"pdfDoc\":\"Sam\"}",
"text": "As I suspectedyou do it in the wrong placeIn Compass, on the FILTER line the query is simply {\"pdfDoc\":\"Sam\"}. The db.coll0.find() syntax is for mongosh and nodejs.If you are going to make a presentation or write some kind of papers about mongodb, I strongly suggest that you follow a few courses from univesity.mongodb.com so that what you communicate is what you know rather than the answers you got here.",
"username": "steevej"
}
]
| How to create a database using pdf files | 2022-12-08T11:30:16.573Z | How to create a database using pdf files | 3,882 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "",
"text": "Hi, I have collection like {“stringdouble”: “26.58”, “stringouble”: “36.58”, “stringdouble”: “”}\nI need to convert stringdouble datatype to float in aggregate pipeline whose result looks like {“stringdouble”: 26.58, “stringdouble”: 36.58, \"stringdouble: 0}, if it has “”, then it should be 0.",
"username": "GVidhyaSagar_Reddy"
},
{
"code": "$toDouble\"\"$convertonErrordb.baz.aggregate([\n{$set:{\n n:{$convert: {input:'$stringdouble',to:'double',onError:0}}}},\n{$project:{_id:0}}\n])\n[\n { stringdouble: '26.58', n: 26.58 },\n { stringdouble: '36.58', n: 36.58 },\n { stringdouble: '', n: 0 }\n]\n",
"text": "If you can, insert the data with the correct type in the first place.Using $toDouble on \"\" would error so you have to use $convert with the onError option:",
"username": "chris"
}
]
| Convert string to double if it is not "" | 2022-12-13T10:08:28.414Z | Convert string to double if it is not “” | 748 |
null | [
"aggregation"
]
| [
{
"code": "[\n {\n \"_id\": \"NILAI HARIAN\",\n \"average\": 77.85714285714286,\n \"many\": 7,\n \"sum\": 545\n },\n {\n \"_id\": \"PENILAIAAN TENGAH SEMESTER\",\n \"average\": 80,\n \"many\": 1,\n \"sum\": 80\n },\n {\n \"_id\": \"PENILAIAAN AKHIR SEMESTER\",\n \"average\": 73,\n \"many\": 1,\n \"sum\": 73,\n \"final_score\": 0,\n }\n]\n",
"text": "hi guy’s i need some help for deadline task.\nthis is sample of my document and query Mongo playground: a simple sandbox to test and share MongoDB queries onlinemy query always give 0 result\nthe question is how to get a final score from sum average\ni wish the result become like this",
"username": "Nuur_zakki_Zamani"
},
{
"code": "average: {\n $avg: \"$datas.report.scores.value\",\n \n },\n final_score: {\n $sum: \"$average\"\n }\n",
"text": "A field that is created in a stage is only available in the next stage. In your $group, the field average: cannot be used to compute final_score:.",
"username": "steevej"
},
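{
"text": "To make that concrete, a hedged sketch of one extra stage appended after the existing $group: a second $group over the per-category documents can push them into an array and sum their average fields into final_score (note this reshapes the output into a single document).",
"code": "{
  $group: {
    _id: null,
    results: { $push: \"$$ROOT\" },        // keeps the per-category documents
    final_score: { $sum: \"$average\" }    // sum of the averages from the previous stage
  }
}
",
"username": "editor_note"
},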
{
"code": "[\n {\n \"_id\": \"PENILAIAAN AKHIR SEMESTER\",\n \"average\": 73,\n \"many\": 1,\n \"sum\": 73\n },\n {\n \"_id\": \"NILAI HARIAN\",\n \"average\": 77.85714285714286,\n \"many\": 7,\n \"sum\": 545\n },\n {\n \"_id\": \"PENILAIAAN TENGAH SEMESTER\",\n \"average\": 80,\n \"many\": 1,\n \"sum\": 80\n },\n\"final_score\":230\n]\n",
"text": "and how to make return became like thisfinal score got from sum of average ?",
"username": "Nuur_zakki_Zamani"
},
{
"code": "",
"text": "Share what you have tried and explained how it failed to provide the desired result.This would help us save time by not investigation in a direction that you already know does not work.",
"username": "steevej"
}
]
| How to sum from average result | 2022-12-13T09:13:51.933Z | How to sum from average result | 1,040 |
null | [
"aggregation",
"java",
"compass",
"spring-data-odm"
]
| [
{
"code": "Filters.innew Document(\"$in\", Arrays.asList(userId, \"$owners))MongoCollection<Document> batches = mongoTemplate.getCollection(\"batches\")\n\nBson lookup = Aggregates.lookup(\"listItems\", // from\n Arrays.asList(\n new Variable<>(\"listId\", \"$listId\"),\n new Variable<>(\"batchId\", \"$latestBatch\")), // let\n Arrays.asList(\n Aggregates.match(Filters.expr(Filters.eq(\"status\"), statusVar))),\n Aggregates.match(Filters.expr(Filters.or(\n Filters.in(\"owners\", userId),\n Filters.in(\"readers\", userId)\n ))),\n Aggregates.match(Filters.expr(Filters.and(\n Filters.eq(\"listId\", \"$listId\"),\n Filters.eq(\"batch\", \"$batchId\")\n )))\n ), // pipeline\n \"listItems\"); // as\n\nBson project = Aggregates.project(new Document(\"listItems\", 1L));\n\nbatches.aggregate(Arrays.asList(lookup, project));\n",
"text": "Posted this to stackoverflow, but felt a cross-post here was sensible.One thing I’ve noticed is that Filters.in doesn’t work and I have to use new Document(\"$in\", Arrays.asList(userId, \"$owners))Is there a separate helper function for “Aggregate in”, and are there any other errors or improvements I could make?The aim isHere’s my pipeline, which was generated in new Document() style in Compass, but I altered to use helpers for readability.",
"username": "Thomas_Clarke"
},
{
"code": "",
"text": "First, I do not use builders because I like to use the same code in mongosh, nodejs or Java. And I usually do not like abstraction layers.If you look at the documentation you will see there is 2 versions of $in:Standard queryor using aggregation $exprAs you see the syntax is quite different. You call Filters.in inside Filters.expr so I suspect that the $expr version has to be used. However from the Filters.in documentation I looks like it generates the symple version.",
"username": "steevej"
}
]
| Mongo Helpers - Filters.in() within Filters.expr() | 2022-12-13T09:27:47.033Z | Mongo Helpers - Filters.in() within Filters.expr() | 1,796 |
null | [
"aggregation"
]
| [
{
"code": "db.collection.aggregate([\n {\n $project : {\n \"result\" : {$setDifference : [\n \n [{\"names\" : \"a\", \"value\": \"something\"}, {\"names\" : \"x\"}, {\"names\" : \"y\"}, {\"names\" : \"z\"}, {\"names\" : \"h\"}, {\"names\" : \"r\"}, {\"names\" : \"s\"}, {\"names\" : \"t\"}],\n [{\"value\": \"something\", \"names\" : \"a\"}, {\"names\" : \"b\"}, {\"names\" : \"c\"}, {\"names\" : \"d\"}, {\"names\" : \"t\"}]\n \n ]},\n }\n },\n {\n $limit:1\n }\n ])\n{\n\t\"_id\" : ObjectId(\"5e4a5521deb23e00017b6162\"),\n\t\"result\" : [\n\t\t{\n\t\t\t\"names\" : \"a\",\n\t\t\t\"value\" : \"something\"\n\t\t},\n\t\t{\n\t\t\t\"names\" : \"x\"\n\t\t},\n\t\t{\n\t\t\t\"names\" : \"y\"\n\t\t},\n\t\t{\n\t\t\t\"names\" : \"z\"\n\t\t},\n\t\t{\n\t\t\t\"names\" : \"h\"\n\t\t},\n\t\t{\n\t\t\t\"names\" : \"r\"\n\t\t},\n\t\t{\n\t\t\t\"names\" : \"s\"\n\t\t}\n\t]\n}\n{\"names\" : \"a\", \"value\": \"something\"}{\"value\": \"something\", \"names\" : \"a\"}",
"text": "Consider the below queryNow the expectation is to have only [x y z h r s] is the final result. But mongodb assumes {\"names\" : \"a\", \"value\": \"something\"} and {\"value\": \"something\", \"names\" : \"a\"} are two different values even though they are identical.Is this a bug? Are there any workarounds for this? (My actual data has 5 fields so being in a different order is more likely imo)",
"username": "Shrinidhi_Rao1"
},
{
"code": "",
"text": "Is this a bug?No it is not a bug. Operations like $setDifference and $setIntersection are based on value equality. Two objects are equals if the have same fields and same values in the same order.Are there any workarounds for this?Normalize your data so that fields are in the same order. Since objects are usually updated using an application or an API, having the fields in the same order is usually not an issue. The issue arise when human update the values or when code is broken and objects are updated directly rather than a data access layer.",
"username": "steevej"
},
{
"code": "",
"text": "I see.The issue arise when human update the values or when code is broken and objects are updated directly rather than a data access layer.This makes sense to me. The documents I checked seems to have the same order.Is there a way for me to check if theres any out of order?",
"username": "Shrinidhi_Rao1"
},
{
"code": "{ _id: 0, a: { foo: 1, bar: 2 } }\n{ _id: 3, a: { bar: 4, foo: 5 } }\n_set = { \"$set\" : { _a : { $objectToArray : \"$a\" } } }\n_wrong_order = {\n \"_a.0.k\" : { \"$ne\" : \"foo\" } ,\n \"_a.1.k\" : { \"$ne\" : \"bar\" }\n}\n_match = { $match : _wrong_order }\npipeline = [ _set , _match ]\n",
"text": "The concept of aggregation pipeline is so powerful that there isa way for me to check if theres any out of orderTo illustrate the logic lets start with the collection:The goal is to find out of any of the a: have the wrong order. The correct order is to have foo: first and then bar:. In the collection above _id:0 has the correct order and _id:3 has not.The first stage will set the field _a: using $objectToArray on a:Then a simple $match stage will ensure that foo is first and bar is second. But sinceAnd the final pipeline would be:",
"username": "steevej"
}
]
| $setDifference fails to match objects with a different order of fields | 2022-12-12T08:49:11.077Z | $setDifference fails to match objects with a different order of fields | 1,390 |
null | [
"production",
"ruby"
]
| [
{
"code": "",
"text": "This patch release in the 2.18 series adds the following new features:The following issues have also been addressed:",
"username": "Dmitry_Rybakov"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| Ruby Driver 2.18.2 | 2022-12-13T11:12:40.390Z | Ruby Driver 2.18.2 | 1,667 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "In MongoChart, Filter pane is not working for lookup collection field.\n**I have two collection. Datasource is First collections and used lookup for Second collection. I am trying to filter the field value which is from second collection but filter pane is not working.Note:\n** I tried second collection as datasource to chart and applied filter to same field , it is working .**",
"username": "shital_Sonawane"
},
{
"code": "",
"text": "Hello @shital_Sonawane ,Welcome to The MongoDB Community Forums! I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, could you please help me understand below things from your use-case?Regards,\nTarun",
"username": "Tarun_Gaur"
}
]
| In MongoChart, Filter pane is not working for lookup collection field | 2022-11-09T15:20:48.186Z | In MongoChart, Filter pane is not working for lookup collection field | 1,499 |
null | [
"node-js",
"change-streams",
"rust"
]
| [
{
"code": "",
"text": "Hi,we are fairly experienced with MongoDB, but still wonder what experiences the community has made in using change streams.Our scenario is that we are using change streams for event driven architectures, currently via Realm Triggers, but we are facing issues in Triggers being suspended when a bit of heavy writing occurs.We have changestreams open for around 50 collections, no match expressions and in many cases full lookups, one collection with the preimage option set.We are now considering moving away from Realm Triggers and consuming the change streams via serverfull instances dedicated for that purpose. We are using Node.js.Questions:Thanks for any input.",
"username": "Manuel_Reil1"
},
{
"code": "",
"text": "Hi (-:\nDid you have any advances on the subject?",
"username": "yaron_levi"
}
]
| Change Stream consideration while watching multiple collections | 2022-04-07T15:52:46.133Z | Change Stream consideration while watching multiple collections | 4,526 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "{$group: {_id: {\"date\": {\"$subtract\": [{\"$toLong\": {\"$toDate\": \"$timeStamp\"}}, {\"$mod\": [{\"$toLong\": {\"$toDate\": \"$timeStamp\"}},900000]}]}},value:{$avg:{\"$divide\": [\"$rvalue\",1000]}}}}\nmillisecSELECT MEAN(\"water_level\") FROM \"h2o_feet\" \nWHERE \"location\"='coyote_creek' \nAND time >= '2015-08-18T00:06:00Z' \nAND time <= '2015-08-18T00:54:00Z' \nGROUP BY time(18m,6m)\n",
"text": "Hi guys,I want to query the data based on time, so i tried group by time(15mins) with match followed by group query.\nGroup by time: 15 minsThis gives me timeStamp with millisec and avg value.\nBut now I want a query where group by time and offset…query similar to influx group by time(time_intervel,offset_interval)\nexample influx query:",
"username": "Bukka_Dinakar"
},
{
"code": "",
"text": "Hi @Bukka_Dinakar and welcome to the MongoDB community forumAs we are unfamiliar with Influx, it would be helpful in understanding and replicating in local environment, if you could share the following informationBest Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Group by time interval and offset interval | 2022-12-05T17:57:30.675Z | Group by time interval and offset interval | 1,479 |
null | []
| [
{
"code": "{\n \"analyzer\": \"emailUrlExtractor\",\n \"searchAnalyzer\": \"emailUrlExtractor\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"user_email\": {\n \"analyzer\": \"emailUrlExtractor\",\n \"searchAnalyzer\": \"emailUrlExtractor\",\n \"type\": \"string\"\n }\n }\n },\n \"analyzers\": [\n {\n \"charFilters\": [],\n \"name\": \"emailUrlExtractor\",\n \"tokenFilters\": [\n {\n \"type\": \"lowercase\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"uaxUrlEmail\"\n }\n }\n ]\n}\n",
"text": "I have this Search Index with a custom analyzer to search for email addresses.When querying it, it seems to only return results if you query for the exact/entire email address. With emails of form “[email protected]”, how can I query just parts of the email? Like “first”, “last” or even the text and digit parts.",
"username": "Wilfredo_Gomez"
},
{
"code": "{\n \"analyzer\": \"emailUrlExtractor\",\n \"searchAnalyzer\": \"emailUrlExtractor\",\n \"mappings\": {\n \"fields\": {\n \"email\": {\n \"type\": \"string\"\n }\n }\n },\n \"analyzers\": [\n {\n \"charFilters\": [],\n \"name\": \"emailUrlExtractor\",\n \"tokenFilters\": [\n {\n \"type\": \"lowercase\"\n },\n {\n \"maxGram\": 10,\n \"minGram\": 2,\n \"type\": \"nGram\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"uaxUrlEmail\"\n }\n }\n ]\n}\n\"email\"\"user_email\"\"first\"email> db.collection.aggregate\n({\n '$search': { index: 'email', text: { path: 'email', query: 'first' } }\n})\n[\n {\n _id: ObjectId(\"638ea1744d4a7d082dc9b8dd\"),\n email: '[email protected]'\n },\n {\n _id: ObjectId(\"638ea18b4d4a7d082dc9b8de\"),\n email: '[email protected]'\n },\n {\n _id: ObjectId(\"6396aaf20333165f1d593199\"),\n email: '[email protected]'\n }\n]\nmulti",
"text": "Hi @Wilfredo_Gomez - Welcome to the community When querying it, it seems to only return results if you query for the exact/entire email address. With emails of form “[email protected]”, how can I query just parts of the email? Like “first”, “last” or even the text and digit parts.One possible way to achieve this is using the following index definition below should be able to get you the email based off a partial match (small tweak to the index definition you have provided):Note: The index definition I have provided uses a field \"email\" compared to the \"user_email\" field you have specified. Adjust and test accordingly.Output when searching for \"first\" based off my test environment:You can alter the index definition accordingly based off your use case(s). I would recommend to test this thoroughly against a test environment if you believe it may be able to achieve what you are after and also to ensure it does meet all your requirements.You may wish to also check the following Multi Analyzer documentation as you can use the multi object in your index definition to specify alternate analyzers with which to also index the field.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Query partial email addresses using Atlas Search Index | 2022-12-05T19:16:05.126Z | Query partial email addresses using Atlas Search Index | 1,582 |
[
"charts"
]
| [
{
"code": "",
"text": "Hi there,I share a dashboard with my organization.\nI don’t want the organization’s members to see the billing informations.Do you know how to do so ?",
"username": "Emmanuel_Bernard1"
},
{
"code": "Organization Member",
"text": "Hi @Emmanuel_Bernard1,I share a dashboard with my organization.\nI don’t want the organization’s members to see the billing informations.Do you know how to do so ?I do not believe it is possible to restrict the “viewing” (or “reading”) of the billing page as the Organization Member role provides read-only access to the settings, users, and billing in the organization and the projects they belong to as noted in the Atlas User Roles documentation.There is currently a feedback post regarding more granular permissions which is in the “started” status. You can vote for this and perhaps mention your particular use case details.Regards,\nJason",
"username": "Jason_Tran"
}
]
| Share a dashboard without billing access | 2022-12-12T15:27:01.018Z | Share a dashboard without billing access | 1,487 |
|
null | []
| [
{
"code": "",
"text": "How can I set string support to utf-32? Default is utf-8 and it can’t handle my project requirements, I solved it by encoding strings to hex but it takes more memory. Is it possible to set utf-32 instead of utf-8?",
"username": "Jonatan_Saari"
},
{
"code": "",
"text": "Hi @Jonatan_Saari and welcome in the MongoDB Community !I never heard of this possibility. Maybe using a binary type could be more compact than hex? Not sure though.If you feel like it, you can send this improvement idea in the MongoDB Feedback Engine and if it gets enough votes and traction, it might land in the next version! Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Can I use $regex for searching part of binary data? Are they sortable?",
"username": "Jonatan_Saari"
},
{
"code": "",
"text": "$regex only works with Strings with UTF-8 support.Binary Data is just a byte array. I don’t see any issue with sorting but I doubt it’s very useful.",
"username": "MaBeuLux88"
}
]
| How to change string support to UTF-32? | 2022-12-08T07:26:31.965Z | How to change string support to UTF-32? | 1,305 |
null | [
"queries",
"data-modeling",
"database-tools",
"app-services-data-access"
]
| [
{
"code": "5742a6af744f6b0dcf0003d1 convert to 321432\n5406e4c49b324869198b43 convert to 353213\n",
"text": "Hello,What is the best way to convert the normal Realm object _id to a more normal value that I can display to the user?\nFor example:",
"username": "Daniel_Gabor"
},
{
"code": "",
"text": "Hi. the bytes of the object id do have specific meanings as you can see here: https://www.mongodb.com/docs/manual/reference/method/ObjectId/That being said, what you are asking for is really an anti-pattern. If you want something like an auto-incrementing user_id field that is customer-facing then you should consider either:\n(a) making the _id field an integer (note: I wouldn’t recommend doing this)\n(b) have a separate field called user_id that gets its value by incrementing a singleton documents counter variable. You can have this still be a unique index if you want: https://www.mongodb.com/docs/manual/core/index-unique/Trying to synthesize meaning out of an ObjectId (other than to figure out the time it was created) will likely lead to more confusion than it is worth.",
"username": "Tyler_Kaye"
},
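{
"text": "A hedged mongosh sketch of option (b) above — a separate, human-friendly user_id maintained through a singleton counter document; the counters collection and field names are assumptions, and a unique index on user_id would guard against duplicates.",
"code": "// Atomically grab the next value from a singleton counter document
const next = db.counters.findOneAndUpdate(
  { _id: \"user_id\" },
  { $inc: { seq: 1 } },
  { upsert: true, returnNewDocument: true }
);

// Store it alongside the normal ObjectId _id
db.users.insertOne({ user_id: next.seq, name: \"...\" });
",
"username": "editor_note"
},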
{
"code": "",
"text": "@Daniel_Gabor : I also agree with @Tyler_Kaye, doesn’t make much sense to convert object_id. May I also know why you are doing this?",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Hello @Mohit_Sharma @Tyler_Kaye ,At the moment the id looks like: 5742a6af744f6b0dcf0003d1I want to change to a more normal value, for example only number value: 4321543.It will be easier for the user to comunicate a problem to us with a simplier id of the object than with that big number/letter id.There are no easy way? Like hashing that value or something?",
"username": "Daniel_Gabor"
},
{
"code": "",
"text": "If you look at the link I sent above, the objectId does store some meaning that you could use. Particularly you could use the time-stamp bytes, but they are no guaranteed to be unique. Similarily you could just hash the _id if you want, but again, that is not guaranteed to be unique, especially if you are hashing to a low number (6 digits).All around, can you do what you want to do…kind of. However, I would really reccomend having a second field that is user_facing. The ObjectId / _id is normally an internal field and not visible to the user (the same as a UUID field would be), but if you need a more readable user_id, you should have a separate field for that and have application logic / unique indexes be in charge of managing it.",
"username": "Tyler_Kaye"
}
]
| Convert initial _id to a normal value | 2022-12-10T20:56:46.866Z | Convert initial _id to a normal value | 3,947 |
null | [
"replication",
"kafka-connector"
]
| [
{
"code": "{\n \"name\": \"mongo-source\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"connection.uri\": \"mongodb:***:27017/?authSource=admin&replicaSet=myMongoCluster&authMechanism=SCRAM-SHA-256\",\n \"database\": \"someDb\",\n \"collection\": \"someCollection\",\n \"output.format.value\":\"json\",\n \"output.format.key\":\"json\",\n \"key.converter.schemas.enable\":\"false\",\n \"value.converter.schemas.enable\":\"false\",\n \"key.converter\":\"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter\":\"org.apache.kafka.connect.storage.StringConverter\",\n \"publish.full.document.only\": \"true\",\n \"change.stream.full.document\":\"updateLookup\",\n \"copy.existing\": \"true\"\n }\n}\n{\"_id\": {\"_id\": {\"$oid\": \"5e54fd0fbb5b5a7d35737232\"}, \"copyingData\": true}}\n{\"_id\": {\"_data\": \"82627B2EF6000000022B022C0100296E5A1004A47945EC361D42A083988C14D982069C46645F696400645F0FED2B3A35686E505F5ECA0004\"}}\n",
"text": "We are trying to take all the records from MongoDB to Kafka using the com.mongodb.kafka.connect.MongoSourceConnector. The settings are used for connector as follows:When all documents are initially uploaded from MongoDB to Kafka, the “key” corresponds to the “id” from Mongo document:But when a document in MongoDB is updated, an update with a different “key” gets into Kafka:Thus, the consumer cannot identify the initially uploaded document and update for it.Please help me find which settings on the Kafka, Connector or MongoDB side are responsible for this and how I can change the “Key” in Kafka to the same as during the initial upload.",
"username": "v.motov.13"
},
{
"code": "",
"text": "I am having the same issue, did you find any solution to this?",
"username": "Andri_Valur_Gudjohnsen"
}
]
| In Kafka, the "Key" does not match the "Id" when updating the document in MongoDB | 2022-05-12T05:45:49.053Z | In Kafka, the “Key” does not match the “Id” when updating the document in MongoDB | 3,153 |
null | [
"python",
"field-encryption"
]
| [
{
"code": "openssl rand 96 > master-key.txt\n",
"text": "Hello,We’re using MongoDB Community Edition 4.4.16. We are using MongoDB as our Database in our current project and wanted to implement Client Side Field Level Encryption. We’ve implemented and tested Explicit Encryption and Automatic Decryption with Python as our’s is community Edition. Everything is fine except that we’re unable to find a way to query the encrypted data from Mongo Shell. Since we require CSFLE ClientEncryption Instance to query, we can’t seem to find a way to import Master Key into Shell as Mongo only accepts .js files.We’ve created 96 byte character master key using openssl.Can someone please suggest how to proceed with this? We want to query the encrypted fields from Mongo Shell.",
"username": "Bhargav_Sai"
},
{
"code": "",
"text": "Hi @Bhargav_Sai ,I suggest to review the example in this article:Learn MongoDB’s best practices for encrypting and protecting your data.In this example the key is pasted as a whole into a “key” variable maybe you can do the same or pipe it to the shellThanks\nPavel",
"username": "Pavel_Duchovny"
},
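{
"text": "For completeness, a hedged shell sketch of querying a deterministically encrypted field by explicitly encrypting the query value; the key alt name, key vault namespace, base64 master key and database/field names are placeholders, and deterministic encryption is required for equality matches.",
"code": "var opts = {
  keyVaultNamespace: \"encryption.__keyVault\",   // assumed key vault namespace
  kmsProviders: { local: { key: BinData(0, \"<base64 of the 96-byte master key>\") } }
};
var conn = Mongo(\"mongodb://localhost:27017\", opts);

var keyVault = conn.getKeyVault();
var clientEncryption = conn.getClientEncryption();

var keyId = keyVault.getKeyByAltName(\"demo-data-key\")._id;   // placeholder alt name
var encrypted = clientEncryption.encrypt(
  keyId, \"555-55-5555\", \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\");

conn.getDB(\"medicalRecords\").patients.find({ ssn: encrypted }); // placeholder names
",
"username": "editor_note"
},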
{
"code": "",
"text": "Thank you. I am able to retrieve the key using keyAltName method.",
"username": "Bhargav_Sai"
}
]
| Query Encrypted Data from MongoShell - Community Edition | 2022-12-07T08:41:43.533Z | Query Encrypted Data from MongoShell - Community Edition | 1,666 |
null | [
"server",
"installation"
]
| [
{
"code": "",
"text": "Hi,I have deployed MongoDB on my Macbook/OSX with brew.\nCreated mongod.conf file with all basic fields as: systemlog, storage, net.\nIt works perfectly - I created admin, use DB, etc.Then tired to do authorisation/security.\nSo when added under security: authorisation: enabled, it works well to.So then tried to do keyfile with key. I created key as requested on doc pages.\nThen made chmod 400 to the file - which is located in anyplace, but could be as /opt/homebrew/etc\nmongod is running from my account, so there are privilidges as they should be.But it’s not starting.shows: [email protected] error 512 under ‘brew services list’Any hints?Thanks. J.",
"username": "Jakub_Polec"
},
{
"code": "",
"text": "Check mongod.log.It may give more details\nIs it standalone or replica?\nShare security section of your config file",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Sure, it’s very simple config:\nsecurity:\nauthorization: enabled\nKeyFile: /Users/jpolec/mongo_test/new_keysystemLog:\ndestination: file\npath: /opt/homebrew/etc/log\nlogAppend: truestorage:\ndbPath: /opt/homebrew/etc/db#replication:net:\nbindIp: localhost\nport: 27017And what I have found when commented with # KeyFile: the log is created. But once I uncomment, so want to use KeyFile:, no log is created. In the same location. Strange. And no mongod is running.the brew services list shows error as:\[email protected] error 512 jpolec ~/Library/LaunchAgents/[email protected]",
"username": "Jakub_Polec"
},
{
"code": "",
"text": "I have not put any replica yet. As try to run it without replica first to see step by step. So it’s commented with #.",
"username": "Jakub_Polec"
},
{
"code": "",
"text": "Keyfile is used with replica for internal authentication between members and also for role based access control",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi, Yes, I know why the keyfile is used for.\nThis is my intention to do replica of 2-3 members.Hence can’t run with Security keyfile:\nsecurity:\nauthorization: enabled\nKeyFile: /Users/jpolec/mongo_test/new_keyit doesn’t start with this on one server. On the other is works fine. So therefore I am confused.\nOnce it starts, then I will start doing replicas.",
"username": "Jakub_Polec"
},
{
"code": "</>```\nfile content\n```\nmongod --config /path/to/config/mongod.conffork",
"text": "please re-format your file contents with code blocks: </> button in the editor or between triple single quotes like this:where is your config file located? try to run an instance with mongod --config /path/to/config/mongod.conf. if it has fork enabled in it, comment that line before starting. doing this will start the server in the current terminal and will print errors to screen. it should exit immediately, if not use “ctrl+c” (cmd+c?). copy that log and share here. (don’t forget to format)",
"username": "Yilmaz_Durmaz"
},
{
"code": "security:\n authorization: \"enabled\"\n KeyFile: /Users/jpolec/mongo_test/new_key\n\nsystemLog:\n destination: file\n path: /opt/homebrew/etc/log\n logAppend: true\n\nstorage:\n dbPath: /opt/homebrew/etc/db\n\n#replication:\n# replSetName: \"equity_replica\"\n\nnet:\n bindIp: localhost\n port: 27017\n% brew services list \nName Status User File\nmongodb-community none \[email protected] error 512 jpolec ~/Library/LaunchAgents/[email protected]\n-r-------- 1 jpolec staff 1024 Dec 7 12:38 /Users/jpolec/mongo_test/new_key\n",
"text": "Thank you. Please see below.and when I comment the KeyFile line it starts well. I have user with admin privileges, etc. and it works well. Once I uncomment the KeyFile like, it shows errors.also the directory has chmod 400 setand in case I commenting the KeyFile line, I can see log, and the mongod.conf log is read properly.Thanks for help.",
"username": "Jakub_Polec"
},
{
"code": "mongod --config /where/ever//is/that/mongod.confsudo",
"text": "@Jakub_Polec , you misunderstood me.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "The keyfile should have chmod 400 permission\nAnd the directory where it resides should be owned by mongod",
"username": "Ramachandra_Tummala"
},
{
"code": "user:mongodgroup:mongodchownchgrp444",
"text": "@Ramachandra_Tummala has a nice point: more privilege issues.in case you haven’t known:back to your problem. if it is this owner/group issue, then my suggested method to run as your user would “possibly” fail due to permissions to data folder and log file (or run just fine … permissions), but would definetely succesfuly run with root permissions (sudo). that is why it was/is important to run that command.before moving on, change file permissions to 444 so it can be read by mongod, and try to run the service again. if it runs fine, you will know what to do: move file to a safe location, change user/group, change permission of both “the file and the folder it resides in”.",
"username": "Yilmaz_Durmaz"
},
{
"code": "keyFileKeyFile",
"text": "Unrecognized option: security.KeyFileyou know why programming mistakes that cause error as “bug”!? because they are so small they are hard to notice. (besides the real cockroach fried inside electronics )config file uses camel case names. where the first letter is small. your file should read keyFile, not KeyFile. please, correct that and report back if you get any more errors.",
"username": "Yilmaz_Durmaz"
},
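{
"text": "Putting the fix together, the security block of the config file should read as below (path kept from earlier in the thread; YAML option names are camelCase with a lowercase first letter):",
"code": "security:
  authorization: enabled
  keyFile: /Users/jpolec/mongo_test/new_key
",
"username": "editor_note"
},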
{
"code": "",
"text": "Thank you so much.Yes, I have discovered that put the name “KeyFile” with capital K and it should start with lower k.\nNo more errors.Thank you and much appreciated. Happy to use MongoDB and recommend to the others.",
"username": "Jakub_Polec"
},
{
"code": "",
"text": "Great job,\nI am so happy to hear from you big success and progress\nBest wishes,\nLahcene",
"username": "Lahcene_Ouled_Moussa"
}
]
| Mongo not starting with KeyFile line in security section of config: | 2022-12-07T11:59:04.482Z | Mongo not starting with KeyFile line in security section of config: | 5,047 |
null | [
"aggregation",
"queries",
"python",
"charts"
]
| [
{
"code": "created_at: 2022-12-10T00:25:23.273+00:00\n",
"text": "Hello, I’m having problems filtering date using Atlas Charts.Let me show you what I’m talking about…Chart (Image)I have chart above showing how many Gains and Losses we had today.\nIn my collection I have a field named “result” which is filled with “gain” or “loss”, and I’m using an aggregation count with this field.But I need to show only documents which was inserted today, so I was filtering like this:Filtering Date on the Chart (Image)As we can see I have a field named “created_at” with the date, and I set the UTC Time Zone as UTC-03:00\nI’m using python (pymongo) to fill my collections and to fill the “created_at” field I used the package datetime and function datetime.now()I verified these fields and the date are correct and they’re using my local time (UTC-03:00 Brasilia)PS: It’s a date field, not string field.But, the problem is…Today’s count is wrong on the chart! Counting manually we see that we have 26 LOSSES, but on the chart it appears 23\nI used the following filter and count manually looking at the date:\nFiltering my Collection to count manually (Image)Probably the gain is incorrect too but I used losses for example to make it easier to count manually.Is it a problem on the chart filtering date? What can I do to show only today’s document?@edit:\nIf I change the filter to Absolute and fill with today’s date, it WORKS, but i don’t want to have to change the filter everyday.",
"username": "michael.kz"
},
{
"code": "",
"text": "Hi @michael.kz -Can you clarify exactly how you are storing the dates? The MongoDB date type is not time zone-aware, so the correct way of storing dates is to always normalise them to UTC for storage, and then you can convert them to your desired time zone for display or analysis. You may be doing this already but it’s unclear from your post so I wanted to establish this before looking at other possible issues.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "You may also be able to debug this somewhat by looking at the aggregation pipeline generated for the chart. The time boundaries are calculated externally to the pipeline by the Charts backend, but should give you an idea of what’s happening. Here I’m also using the Period / 1 day filter set to my time zone of UTC+11. You can see that the calculated date boundaries are at 13:00 UTC which is midnight in UTC+11.\nimage823×539 21.9 KB\n",
"username": "tomhollander"
},
{
"code": "[\n {\n \"$match\": {\n \"created_at\": {\n \"$gte\": {\n \"$date\": \"2022-12-11T03:00:00Z\"\n },\n \"$lt\": {\n \"$date\": \"2022-12-12T03:00:00Z\"\n }\n }\n }\n },\n {\n \"$addFields\": {\n \"result\": {\n \"$switch\": {\n \"branches\": [\n {\n \"case\": {\n \"$in\": [\n {\n \"$toString\": \"$result\"\n },\n [\n {\n \"$literal\": \"green\"\n }\n ]\n ]\n },\n \"then\": \"Gain\"\n },\n {\n \"case\": {\n \"$in\": [\n {\n \"$toString\": \"$result\"\n },\n [\n {\n \"$literal\": \"loss\"\n }\n ]\n ]\n },\n \"then\": \"Loss\"\n }\n ],\n \"default\": \"Other values\"\n }\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"__alias_0\": \"$result\"\n },\n \"__alias_1\": {\n \"$sum\": {\n \"$cond\": [\n {\n \"$ne\": [\n {\n \"$type\": \"$result\"\n },\n \"missing\"\n ]\n },\n 1,\n 0\n ]\n }\n },\n \"__alias_2\": {\n \"$sum\": {\n \"$cond\": [\n {\n \"$ne\": [\n {\n \"$type\": \"$result\"\n },\n \"missing\"\n ]\n },\n 1,\n 0\n ]\n }\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"__alias_0\": \"$_id.__alias_0\",\n \"__alias_1\": 1,\n \"__alias_2\": 1\n }\n },\n {\n \"$project\": {\n \"color\": \"$__alias_1\",\n \"x\": \"$__alias_0\",\n \"y\": \"$__alias_2\",\n \"_id\": 0\n }\n },\n {\n \"$addFields\": {\n \"__agg_sum\": {\n \"$sum\": [\n \"$y\"\n ]\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"x\": \"$x\"\n },\n \"__grouped_docs\": {\n \"$push\": \"$$ROOT\"\n },\n \"__agg_sum\": {\n \"$sum\": \"$__agg_sum\"\n }\n }\n },\n {\n \"$sort\": {\n \"__agg_sum\": -1\n }\n },\n {\n \"$unwind\": \"$__grouped_docs\"\n },\n {\n \"$replaceRoot\": {\n \"newRoot\": \"$__grouped_docs\"\n }\n },\n {\n \"$project\": {\n \"__agg_sum\": 0\n }\n },\n {\n \"$limit\": 5000\n }\n]\n",
"text": "Oh, thank you Tom. I’m new using Atlas Charts, now I could understand what’s happening.I was using the filter with UTC Time Zone 03:00 Brasilia, but looking at the Chart Aggregation Pipeline I saw it was filtering from 03:00 AM, but the date on the field is already using my correct local time.Removing the Time Zone filter it works!!!",
"username": "michael.kz"
},
{
"code": "",
"text": "Oh I got another problem right now @ tomhollander, because I’m in Brazil, and here it’s 09:33 PM on the 11th yet, but my date filter is already using tomorrow’s date (12th). Probably because in UTC it’s already 12th. What can I do in this case?",
"username": "michael.kz"
},
{
"code": "",
"text": "The solution is to normalise your dates to UTC before they are stored. If you treat them as local, other date/time functions won’t work as expected as they assume your date is stored as UTC.If you can’t change the stored dates, another possibility may be to create a calculated field that normalises the dates to proper UTC (i.e. subtract 3 hours) and then do the time zone-aware filtering on that field. This kind of calculation gets messy when DST is involved, but I see Brazil doesn’t observe DST so it may be a viable option.Tom",
"username": "tomhollander"
},
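If going down the calculated-field route, the expression can be as small as shifting the stored value by a fixed three-hour offset. This is only a sketch: the field name is illustrative and the sign of the offset depends on how the original timestamps were produced, so verify it against a few known documents first.

```javascript
// Charts calculated field, e.g. "created_at_utc"
// 10800000 ms = 3 hours; flip the sign if your stored values are offset the other way
{ $add: ["$created_at", 10800000] }
```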
{
"code": "",
"text": "Thanks tom! I created a utc_date calculated field from my original created_at field converting to UTC, and changed the chart filter to UTC-03:00 Brasilia and now it’s working perfectly ",
"username": "michael.kz"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Issue filtering date with Time Zone UTC-03:00 Brasilia | 2022-12-10T15:38:05.576Z | Issue filtering date with Time Zone UTC-03:00 Brasilia | 4,385 |
null | [
"mongodb-shell"
]
| [
{
"code": "",
"text": "Im new to MongoDB. Just installed Mongo through hombrew successfully. When I try to run the most basic commands such as “mongosh – version” or “mongo help” I get hit with a “Missing Semicolon” syntax error.Im not sure what Im doing wrong.Also, Im getting a reference error: “mongosh not defined” when running “mongosh -help”",
"username": "Ariel_Nurieli"
},
{
"code": "",
"text": "Most likely you are at mongo prompt\nExit and run your command from os prompt",
"username": "Ramachandra_Tummala"
},
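To make the distinction concrete: the version/help options are meant for the operating-system prompt, and typing the same text inside an already-running shell is parsed as JavaScript, which is exactly what produces those errors.

```sh
# from the OS prompt (Terminal), not inside mongosh:
mongosh --version
mongosh --help

# inside a running mongosh session, "mongosh --version" is evaluated as
# JavaScript, hence "Missing semicolon" / "mongosh is not defined"
```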
{
"code": "",
"text": "So all “mongosh” commands I should be running from os prompt? And do i need to get out of the mongosh shell altogether?",
"username": "Ariel_Nurieli"
},
{
"code": "mongoshmongoshmongosh --helpmongoshmongosh",
"text": "Hi @Ariel_Nurieli,Not all mongosh command needs to be executed from the OS prompt. The options of the mongosh can be executed from the OS prompt. You can see all the options after running mongosh --help.\nhelp.png1407×413 46.8 KB\nTo see all the mongosh options refer to this link: https://www.mongodb.com/docs/mongodb-shell/reference/options/#optionsAnd for all the methods of mongosh: https://www.mongodb.com/docs/mongodb-shell/reference/methods/I hope it helps! Please let us know if you have any further questionsThanks,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Syntax Error: "Missing semicolon" | 2022-12-07T14:25:48.723Z | Syntax Error: “Missing semicolon” | 12,647 |
null | [
"queries",
"cxx"
]
| [
{
"code": "db.name_collections.find({x: 8}).explain('executionStats')",
"text": "How can i execute this command but in cpp\ndb.name_collections.find({x: 8}).explain('executionStats')I would like to do something like std::cout << explain",
"username": "Leno_N_A"
},
{
"code": "using namespace bsoncxx::builder::basic;\nauto reply = db.run_command(make_document(kvp(\"explain\", make_document(kvp(\"find\", \"coll\"), kvp(\"filter\", make_document(kvp(\"foo\", \"bar\")))))));\nauto jsonStr = bsoncxx::to_json(reply.view());\nstd::cout << jsonStr << std::endl;\n",
"text": "Hi @Leno_N_A and welcome to the MongoDB community forum!!The explain() command can be sent with the runCommand method.\nThe below code snippet might be helpful for you.Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "This topic was automatically closed after 60 days. New replies are no longer allowed.",
"username": "system"
}
]
| Explain(executionStats) in c++ | 2022-11-28T20:38:55.761Z | Explain(executionStats) in c++ | 1,539 |
null | []
| [
{
"code": "",
"text": "Hi there, I am a veteran of the old university. I am now trying the new content and here are a few problems I noticed that make it a bit hard.1- Learning paths are nice, but our old completions made their way a bit wrong. I started the “MongoDB DBA Path”. Although I haven’t taken the whole new “Introduction to MongoDB” (which I see it as a refresher), the path now shows as “completed”.2- The path is nice to show the content, but when I complete a section it shows only “View” button, and “Continue” button while still active. it does not show “View Details” anymore where we would see the overview of that section. this details page should be available anytime.3- The course page is nice, but browsing through is a bit hard, especially when taking a quiz. “prev/next” buttons are easily confused with the quiz itself.4- Long pages in the course scroll themselves toward the half. I thought I skipped the video at first, that I clicked the next button twice, but then I noticed the scrolling. this happened on many new long pages I visited, so I assume it has something to do with page coding.5- Course completion requirements are not clear. we had this condition clearly stated to be 65% total with 3 tries on graded assignments. Is this still relevant in the new university? What is the requirement for completion? I did not see any statements about that.6- New lab playground is great, but once we click on the “check” button, the lab is not available to play anymore. That is a downside as each lab playground holds its own data for the purpose of the lecture. They should be available all the time, or the “tasks/hints/answers” along with the data should be available to work with after it completes.7- the check box for the “HIDE ANSWERS UNTIL THE END OF THE QUIZ” is not visible enough, and this feature is not working as intended. Even when checked, it still shows the answer to each question.8- Some of the review content seems to continue lingering in the quizzes.9- the full-screen “Next” overlay on the videos that are put on a video when it ends is a pure annoyance. At first glance, it serves well to transition to the next lecture while in full-screen, but it simply blocks reviewing the video content even when not in full-screen. One needs to go back and forth to get back to the video to review it until this overlay comes up again, and then the same pain all over again.I hope you fix these issues before fully migrating to the new university (which is already in use for newcomers)",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "here are a few others I am discovering (I will add them to the original post so to make a whole list)7- the check box for the “HIDE ANSWERS UNTIL THE END OF THE QUIZ” is not visible enough, and this feature is not working as intended. Even when checked, it still shows the answer to each question.8- Some of the review content seems to continue lingering in the quizzes.9- the full-screen “Next” overlay on the videos that are put on a video when it ends is a pure annoyance. At first glance, it serves well to transition to the next lecture while in full-screen, but it simply blocks reviewing the video content even when not in full-screen. One needs to go back and forth to get back to the video to review it until this overlay comes up again, and then the same pain all over again.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Hi @Yilmaz_Durmaz,Thanks for taking the course on MongoDB Learn and sharing your feedback. I’ve communicated this with the concerned team and will keep you updated.Thank you again,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
{
"code": ".../learn/course/...../courses/...overview page: \n https://learn.mongodb.com/courses/m001-mongob-basics\nprogress page: \n https://learn.mongodb.com/learn/course/m001-mongob-basics/...\n",
"text": "Hi @Kushagra_Kesav, many thanks. I tried to sort them to their notable occurrences.The labs being unavailable once completed, are currently the biggest issue, I must say. This hinders the ability to get help from others as no one who completed that lab could help with what goes wrong.For the overview page, I edit the URLs to get back to it for now: .../learn/course/.. to .../courses/.... They should come naturally as links to browse around.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thank you for attempting the labs on the new platform and sharing your feedback. I’ve raised this with the concerned team and will keep you posted.Regards,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed after 60 days. New replies are no longer allowed.",
"username": "system"
}
]
| Few problems with the New University that needs attention from its creators (A Feedback) | 2022-12-06T12:05:18.867Z | Few problems with the New University that needs attention from its creators (A Feedback) | 2,257 |
null | []
| [
{
"code": "",
"text": "Hi guys\nSo I have a data with a field “createdDate” type Date. I want to filter out documents from my DB based on documents creation Date. My exact target is to filter documents from a specific date to 2 years before the current date. How could I do this, please someone help.\n(Note :- I need query for mongoshell version 3.4.24, I have created working query for higher versions already)",
"username": "John_Hopkins"
},
{
"code": "",
"text": "need query for mongoshell version 3.4.24I have created working query for higher versions alreadyI do not see at first what would be the difference. Share the query you have and explain how it fails to provide the desired output.",
"username": "steevej"
},
{
"code": "db.getCollection(<collectionName>).deleteMany({ \"createdDate\": { $gt: ISODate(\"2020-12-31\"), $lte: new Date(ISODate().getTime() - 1000*86400*365*2) }})",
"text": "db.getCollection(<collectionName>).deleteMany({ \"createdDate\": {\n $gt: ISODate(\"2020-12-31\"),\n $lte: new Date(ISODate().getTime() - 1000*86400*365*2)\n }})",
"username": "John_Hopkins"
},
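One way to narrow this down (untested on 3.4, so treat it as a sketch with a placeholder collection name) is to compute the cutoff date first and run a plain find from the legacy mongo shell, which takes anything the GUI adds (the empty Sort/Proj visible in the error trace) out of the picture:

```javascript
// legacy mongo shell, placeholder collection name
var cutoff = new Date(Date.now() - 1000 * 86400 * 365 * 2); // two years ago
db.getCollection("myCollection").find({
  createdDate: { $gt: ISODate("2020-12-31"), $lte: cutoff }
}).count();
```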
{
"code": "",
"text": "Read it as Mongodb server version 3.4.24\n(Not sure if shell or server need to be the same version or not)",
"username": "John_Hopkins"
},
{
"code": "",
"text": "The query seems to be syntactically correct, but logically it is another story.Two years before today is 2020-12-10, as such no dates can be both $gt:2020-12-31 and $lte:2020-12-10.Check your math andexplain how it fails to provide the desired output.",
"username": "steevej"
},
{
"code": "",
"text": "Actually, that will be in use from 2023 thats why written that way\nYou can change the dates and check. It is giving errors, I don’t know why\nSome people said it could be a version issue, maybe the line written for $lte is not valid in 3.4.24",
"username": "John_Hopkins"
},
{
"code": "",
"text": "I do not have 3.4. What is the error you get? Cut-n-paste the whole error message and/or stack trace.",
"username": "steevej"
},
{
"code": "",
"text": "Failed to retrieve documentsDatabase error!\nStacktrace:\n|/ java.lang.Exception: [<Database_Name>.<collection_name>[replica set] [direct]] Database error!\n|/ Mongo Server error (MongoQueryException): Query failed with error code 2 with name ‘BadValue’ and error message 'error processing query: ns=<Database_Name>.<collection_name> batchSize=50Tree: $and\n|… createdDate $lte new Date(1668206779750)\n|… createdDate $gt new Date(1656633600000)\n|… Sort: {}\n|… Proj: {}\n|_… No query solutions’ on server <host>.<port>",
"username": "John_Hopkins"
},
{
"code": "",
"text": "One more point to be noted is:- when I cloned that collection locally and ran the query on it, it works but not on the actual remote one which is 3.4.24\nMakes me think it is a version issue only",
"username": "John_Hopkins"
},
{
"code": "",
"text": "The error message really contains the strings <Database_Name>, <collection_name> or you hiding the real name. I find it strange that the error message is not more detailed like it could show the real Database_Name and collection_name rather than place holder.| … Sort: {}\n| … Proj: {}This seems to indicate that the query you run in real like has more to it than the find you share. May the error lies in the part you did not share.",
"username": "steevej"
},
{
"code": "",
"text": "Yes, I hide the actual database name and collection name because it is elated to my organization. Hope you dont mind\nRest I have shown the complete error",
"username": "John_Hopkins"
},
{
"code": "",
"text": "Same goes for and \nBTW just the diff between the query I shared with you and the one I used to get this error is\nI used diff dates and used find instead of deleteMany because I didn’t wanted to delete the data myself accidentally, just wanted to test the query",
"username": "John_Hopkins"
},
{
"code": "",
"text": "BTW I am opening the database and executing the query on Studio3T",
"username": "John_Hopkins"
}
]
| About date filter in Mongodb query | 2022-12-09T06:14:33.676Z | About date filter in Mongodb query | 10,309 |
null | [
"change-streams"
]
| [
{
"code": "",
"text": "My Node app uses Mongo change streams, and the app runs 3+ instances in production (more eventually, so this will become more of an issue as it grows). So, when a change comes in the change stream functionality runs as many times as there are processes.How to set things up so that the change stream only runs once?",
"username": "Amit_Gupta"
},
{
"code": "",
"text": "Feed the change-stream into a message queue and have all your processes take changes from the task queue. the task queue will ensure that only one process gets each task. The input to the message queue can be the change-stream.",
"username": "Joe_Drumgoole"
},
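A minimal Node.js sketch of that pattern; publishToQueue stands in for whatever broker you use (SQS, Kafka, RabbitMQ, ...), the namespace is made up, and resume-token handling is deliberately simplified:

```javascript
const { MongoClient } = require("mongodb");

async function run() {
  const client = await MongoClient.connect(process.env.MONGODB_URI);
  const stream = client.db("app").collection("orders")
    .watch([], { fullDocument: "updateLookup" });

  // one dedicated process watches the collection and forwards every event
  // to the queue; the queue then hands each message to exactly one worker
  for await (const change of stream) {
    await publishToQueue(change); // hypothetical broker call
    // persist change._id (the resume token) if you need to resume after a restart
  }
}
```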
{
"code": "",
"text": "Yes that is one solution that I am working using aws sqs , I think mongodb should also provide this option of exactly once read , like if I open change stream from 5 places then only one process will get it , but for now seems it is not possible , Thanks for clarification btw .",
"username": "Amit_Gupta"
},
{
"code": "",
"text": "mongodb should also provide this option of exactly once read , like if I open change stream from 5 places then only one process will get itThat’s not really possible without MongoDB implementing a queue/message broker that’s built on top of change streams… Seems like there are already quite a few software packages that do exactly this…",
"username": "Asya_Kamsky"
},
{
"code": "must",
"text": "Hey, I have the same question. So basically, we must only deploy app with change stream (that feed into message-queue) into a single server / single pod?",
"username": "nmfdev"
}
]
| Change Streams In production | 2020-09-15T20:17:43.256Z | Change Streams In production | 4,422 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "store example: \n{\n \"_id\": {\n \"$oid\": \"639449e63234220ad3e22642\"\n },\n \"name\": \"Tesla\",\n \"slug\": \"tesla\"\n }\ndeals example (will be many different deals document for the store): {\n \"_id\": {\n \"$oid\": \"63944bb38ac849ba16eb01d3\"\n },\n \"description\": \"Save 11%\",\n \"store\": {\n \"$oid\": \"639449e63234220ad3e22642\"\n },\n \"clicks\": 10\n }\n{\n \"_id\": {\n \"$oid\": \"639449e63234220ad3e22642\"\n },\n \"name\": \"Tesla\",\n \"slug\": \"tesla\",\n \"clicks\": 5822\n }\n",
"text": "Hello everyone! I want to create a table filled with the stores in my db. In the row I also want to display the total amount of clicks. The clicks are however on the deals collection. How can i solve this? Expected end results:",
"username": "David_N_A7"
},
{
"code": "",
"text": "You simply do the same $lookup as you do in Help with aggregate store and its deals but with a $group stage _id:null that $sum the clicks.",
"username": "steevej"
},
{
"code": "db.stores.aggregate([{$lookup: {from: \"deals\", localField: \"_id\", foreignField: \"store\", as: \"deals\"}}])\n",
"text": "@steevejYou mean this?I gave it a try but I dont understand where ur code would go here? ",
"username": "David_N_A7"
},
{
"code": "count_clicks = [ { \"$group\" : {\n \"_id\" : null ,\n \"clicks\" : { \"$sum\" : \"$clicks\" }\n} } ]\ndb.stores.aggregate([{$lookup: {from: \"deals\", localField: \"_id\", foreignField: \"store\", as: \"deals\", pipeline:count_clicks}}])\n\"clicks\" : [ { _id : null , \"clicks\" : 5822 } ]\n",
"text": "A pipeline that uses $group with _id:null that $sum the clicks would look like:Then you simply put this in your lookup as:The format of the final clicks field will look like:Then a cosmetic $set stage can easily transform the above to what you wish.Use my code above at your own risk.",
"username": "steevej"
},
{
"code": "db.stores.aggregate([\n {\n $lookup: {\n from: \"deals\",\n localField: \"_id\",\n foreignField: \"store\",\n as: \"deals\"\n }\n },\n {\n $project: {\n name: 1,\n slug: 1,\n clicks: {\n $sum: \"$deals.clicks\"\n }\n }\n }\n])\n",
"text": "Thanks for this! It did work but seems like it added some unecessary stuff in there? I ended up asking Co-pilot and it actually managed to give me exactly what i wanted:But very grateful for your help, thank you!",
"username": "David_N_A7"
},
{
"code": "\"clicks\" : [ { _id : null , \"clicks\" : 5822 } ]\nclicks : { $arrayElemAt : [ \"$clicks.clicks\" , 0 ] }\ndb.stores.aggregate([\n {\n $lookup: {\n from: \"deals\",\n localField: \"_id\",\n foreignField: \"store\",\n as: \"deals\",\n pipeline: [ { \"$group\" : {\n \"_id\" : null ,\n \"clicks\" : { \"$sum\" : \"$clicks\" }\n } } ]\n }\n },\n {\n $project: {\n name: 1,\n slug: 1,\n clicks : {\n $arrayElemAt : [ \"$clicks.clicks\" , 0 ]\n }\n }\n }\n])\n",
"text": "it added some unecessary stuff in thereYep, like I mentionedThe format of the final clicks field will look like:Then a cosmetic $set stage can easily transform the above to what you wishNotes that your version will use more memory, since all the deals are kept from the $lookup until your final $project. By counting, the clicks in the pipeline of the $lookup, just the count is kept. Keeping all the deals inside the store object until the final $project increases the chances to get the 16Mb limit on object size.The following can simply be used in your final $project to remove the extra stuff.The whole pipeline would look like:",
"username": "steevej"
},
{
"code": "group_stage = { \"$group \": {\n \"_id\" : \"$store\" ,\n \"clicks\" : { \"$sum\" : \"$clicks\" }\n} }\n\nlookup_stage = { \"$lookup\" : {\n \"from\" : \"stores\" ,\n \"localField\" : \"store\" ,\n \"foreignField\" : \"_id\" ,\n \"as\" : \"store\" \n} }\n\nproject_stage = { \"$project\" : {\n \"name\" : { $arrayElemAt : [ \"$store.name\" , 0 ] } ,\n \"slug\" : { $arrayElemAt : [ \"$store.slug\" , 0 ] }\n \"clicks\" : 1\n} }\n\npipeline = [ group_stage , lookup_stage , project_stage ]\n",
"text": "All this discussion made me thinks that there might be a better way to do that.The idea is to aggregate the deals first and to a $lookup for the store.Something along the untested lines:It could be fun to see which one perform better. I sure do not know.",
"username": "steevej"
}
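If anyone wants to settle the performance question on their own data, both variants can be compared in mongosh with executionStats; the two pipeline variables below simply stand for the pipelines shown earlier in the thread:

```javascript
db.stores.explain("executionStats").aggregate(storesFirstPipeline);
db.deals.explain("executionStats").aggregate(dealsFirstPipeline);
// compare executionTimeMillis and totalDocsExamined in the two outputs
```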
]
| How to count the amount of clicks? | 2022-12-10T10:02:25.691Z | How to count the amount of clicks? | 1,390 |
null | [
"aggregation",
"queries",
"charts"
]
| [
{
"code": "",
"text": "Hello!I want to use a Chart Type: Number to display only the percentage of gains.I have a collection with a field named “result” that is filled with “gain” or “loss”\nI need to count number of docs in total and number of docs with result = “gain” to calculate this percentage.How can I do it? I tried a lot of queries and calculated fields but nothing works. Can anyone help me to do it?",
"username": "michael.kz"
},
{
"code": "$facet[\n {\n $facet: {\n gains: [\n {\n $match: {\n result: \"gain\",\n },\n },\n {\n $count: \"count\",\n },\n ],\n total: [{ $count: \"count\" }],\n },\n },\n {\n $unwind: \"$gains\",\n },\n {\n $unwind: \"$total\",\n },\n {\n $set: {\n gainPercentage: {\n $divide: [\"$gains.count\", \"$total.count\"],\n },\n },\n },\n]\ngainPercentage",
"text": "Hi @michael.kz -This is possible but a little tricky. The secret is to use $facet which lets you fork the pipeline to calculate two different results (in this case, the number of gains, and the total number of documents). After this, we can combine the two results into a single figure:You can put this pipeline into the Charts query bar, and then use the resulting gainPercentage value in the number chart.HTH\nTom",
"username": "tomhollander"
},
{
"code": "[\n {\n $facet: {\n gains: [\n {\n $match: {\n result: \"gain\",\n created_at: {\n $gte: {\n $date: \"2022-12-11T00:00:00Z\"\n },\n $lt: {\n $date: \"2022-12-12T00:00:00Z\"\n }\n }\n }\n },\n {\n $count: \"count\"\n }\n ],\n total: [\n {\n $match: {\n created_at: {\n $gte: {\n $date: \"2022-12-11T00:00:00Z\"\n },\n $lt: {\n $date: \"2022-12-12T00:00:00Z\"\n }\n }\n }\n },\n { $count: \"count\" }\n ]\n }\n },\n {\n $unwind: \"$gains\"\n },\n {\n $unwind: \"$total\"\n },\n {\n $set: {\n gainPercentage: {\n $divide: [\"$gains.count\", \"$total.count\"]\n }\n }\n }\n]\n",
"text": "Thank you Tom, it works, perfect!!@edit:Just to take advantage of the post, is it possible to get the current date to filter without having to change the filter daily?Like this:For example, instead of using fixed date in $gte: {$date: HERE_THE_DATE}.It’s because I need two percentages, one of today’s data, another with all days.",
"username": "michael.kz"
},
{
"code": "{\n $match: {\n date: {\n $gt: {\n $dateTrunc: {\n date: \"$$NOW\",\n unit: \"day\"\n }\n }\n }\n }\n }\n",
"text": "As you probably figured out, you can’t use a normal Charts date filter since it doesn’t put it at the right point of the pipeline. However you should be able to do something like this:",
"username": "tomhollander"
},
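Putting the two answers together, a "gains today" percentage might look like the sketch below. Note that the comparison has to sit inside $expr so that $$NOW and $dateTrunc are evaluated, $dateTrunc needs MongoDB 5.0+, and the time zone and field name should be adjusted to your own data:

```javascript
[
  {
    $match: {
      $expr: {
        $gte: [
          "$created_at",
          { $dateTrunc: { date: "$$NOW", unit: "day", timezone: "America/Sao_Paulo" } }
        ]
      }
    }
  },
  {
    $facet: {
      gains: [{ $match: { result: "gain" } }, { $count: "count" }],
      total: [{ $count: "count" }]
    }
  },
  { $unwind: "$gains" },
  { $unwind: "$total" },
  { $set: { gainPercentage: { $divide: ["$gains.count", "$total.count"] } } }
]
```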
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Calculate percentage of gains in Atlas Charts | 2022-12-10T18:06:49.872Z | Calculate percentage of gains in Atlas Charts | 2,489 |
null | [
"sharding"
]
| [
{
"code": "readConcern: locallocalmongosreadConcern: local readConcern: localmongos",
"text": "Hi, we come from an upgrade path 4.0 => 4.2. => 4.4.We have a large sharded collection with { _id: hashed } as the shard key that has had balancing enabled in the past. In my current use case, I want to read from secondary (with balancing disabled) because I can tolerate any stale documents as long as they are not orphan documents.I found out recently that reading from secondary returns duplicate documents when querying using a non-shard key index. This is true even when balancing is disabled. I investigated further and found that the duplicate documents do come from two different shards, so they are likely due to failed migration. I found out that this is a known issue from these articles and tickets:(1) https://www.mongodb.com/community/forums/t/when-a-chunk-is-migrating-do-its-documents-exist-on-both-shards/89332/2Till 3.4, if you were reading from secondary shard members, there is a possibility of getting duplicate documents (orphans) which seems like your scenario.\nWhen you read from a primary (or a secondary with read concern local starting from mongodb 3.6) the node will apply the shard filter before returning the document. So, we won’t return twice the document. If the shard doesn’t official own the document it will just not returning it even if it has it locally.^ A MongoDB Employee quotes this but I cannot find this in the referenced docs link or anywhere on google. But I was able to verify that { readConcern: local } with secondary reads does remove duplicate documents. Is this information still accurate without caveats or corner cases?(2) Background Indexing on Secondaries and Orphaned Document Cleanup in MongoDB 2.6 | MongoDB BlogThe scenario where users typically encounter issues related to orphaned documents is when issuing secondary reads. In a sharded cluster, primary replicas for each shard are aware of the chunk placements, while secondaries are not. If you query the primary (which is the default read preference), you will not see any issues as the primary will not return orphaned documents even if it has them. But if you are using secondary reads, the presence of orphaned documents can produce unexpected results, because secondaries are not aware of the chunk ownerships and they can’t filter out orphaned documents.^ This official article explains why secondaries show up orphaned documents, but doesn’t talk about readConcern: local(3) database - In MongoDB, why is read concern \"available\" default option for secondaries in non causally consistent sessions? - Stack Overflow^ A MongoDB Engineer mentioned this about local concern but I want to clarify on the second statement. Does it communicate with the shard’s primary or config server or both? I am concerned about the performance degradation if it communicates with shard’s primary. Since we want to disable balancing, in theory, we should be able to perform the shard/chunk ownership filter in mongos itself.My primary questions are:",
"username": "Ken_Mercado"
},
{
"code": "readConcern: localcleanupOrphaned",
"text": "FYI for anyone looking for answers… After doing a live test on a 1TB production database, readConcern: local does NOT guarantee you don’t get orphaned documents. I resolved it by running cleanupOrphaned which took about 2 days and it didn’t impact performance while it was running.",
"username": "Ken_Mercado"
},
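For anyone landing here with the same problem, on 4.4 the command is issued per sharded namespace while connected directly to the primary of each shard; the namespace below is a placeholder:

```javascript
// run on the PRIMARY of every shard, once per sharded collection
db.adminCommand({ cleanupOrphaned: "mydb.mycollection" });
```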
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Secondary reads and orphaned documents guarantees | 2022-12-04T13:44:29.024Z | Secondary reads and orphaned documents guarantees | 1,577 |
null | [
"queries",
"node-js",
"crud"
]
| [
{
"code": " User.findOneAndUpdate(\n { _id: req.user },\n { $push: { friends: newFriend } },\n (err) => {\n if (err) return res.status(400).json(err);\n User.findOneAndUpdate(\n { _id: newFriend },\n { $push: { friends: req.user } },\n (err) => {\n if (err) return res.status(400).json(err);\n FriendRequest.findOneAndDelete(\n { $and: [{ to: req.user }, { from: newFriend }] },\n (err, response) => {\n if (err) return res.status(400).json(err);\n res.json(response);\n }\n );\n }\n );\n }\n );\n await User.findOneAndUpdate(\n { _id: req.user },\n { $push: { friends: newFriend } }\n );\n await User.findOneAndUpdate(\n { _id: newFriend },\n { $push: { friends: req.user } }\n );\nconst deletedRequest = await FriendRequest.findOneAndDelete({\n $and: [{ to: req.user }, { from: newFriend }],\n});\n\nres.json(deletedRequest);",
"text": "Hi. I was wondering if there is a difference in performace or which one would be considered a better way of doing 3 queries:or:",
"username": "Mark_t1"
},
{
"code": "",
"text": "Hi @Mark_t1 ,I think this is more of a js question.I assume there should not be much difference as the delete will only happen once both prior operations completely finished.So for me the readability of the second option is more charmingThanks\nPavel",
"username": "Pavel_Duchovny"
}
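Since the two user updates are independent of each other, a third variant is to run them concurrently and await only the delete afterwards; a sketch without error handling or transactions:

```javascript
await Promise.all([
  User.findOneAndUpdate({ _id: req.user }, { $push: { friends: newFriend } }),
  User.findOneAndUpdate({ _id: newFriend }, { $push: { friends: req.user } }),
]);

const deletedRequest = await FriendRequest.findOneAndDelete({
  to: req.user,
  from: newFriend,
});

res.json(deletedRequest);
```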
]
| Chaining findOneAndUpdate vs awaits | 2022-04-15T18:42:49.843Z | Chaining findOneAndUpdate vs awaits | 1,614 |
null | [
"aggregation"
]
| [
{
"code": "db.collection.aggregate([\n {\n $match: {\n \"_id\": \"1\"\n }\n },\n {\n \"$lookup\": {\n from: \"collection\",\n let: {\n \"criteria\": \"$tags\"\n },\n pipeline: [\n {\n $project: {\n \"match\": {\n $setIntersection: [\n \"$tags\",\n \"$$criteria\"\n ]\n },\n \n }\n }\n ],\n as: \"result\"\n }\n },\n {\n $project: {\n \"tags\": 0\n }\n },\n \n])\n[\n { \"_id\": \"1\", \"tags\": [{ \"_id\": \"a\", \"displayName\": \"a\", \"level\": 1}, {\"_id\": \"b\", \"displayName\": \"b\", \"level\": 2}, {\"_id\": \"c\", \"displayName\": \"c\", \"level\": 3}]},\n {\"_id\": \"2\", \"tags\": [{\"_id\": \"a\", \"displayName\": \"a\", \"level\": 1}, {\"_id\": \"b\", \"displayName\": \"b\", \"level\": 2}]},\n {\"_id\": \"3\", \"tags\": [{\"_id\": \"a\", \"displayName\": \"a\", \"level\": 1}, {\"_id\": \"d\", \"displayName\": \"d\", \"level\": 4}]}\n]\n[{\n \"_id\": \"1\", \"result\": [\n {\"_id\": \"1\", \"match\": [{\"_id\": \"a\", \"displayName\": \"a\", \"level\": 1}, {\"_id\": \"b\", \"displayName\": \"b\", \"level\": 2},{\"_id\": \"c\",\"displayName\": \"c\",\"level\": 3}]},\n {\"_id\": \"2\", \"match\": [{\"_id\": \"a\", \"displayName\": \"a\", \"level\": 1}, {\"_id\": \"b\", \"displayName\": \"b\", \"level\": 2}]},\n {\"_id\": \"3\",\"match\": [*here should be the match to _id: \"a\", but it's not (always) there*]}\n ]\n}]\n",
"text": "I want to query a match between records in my db based on certain tags. The match would be calculated based on a formula and the intersection of the tags. Now, even querying the intersection doesn’t work…always. Sometimes it does, sometimes it doesn’t. In my example, if I change the displayName attribute to something else (add or remove one character, the query works. In its current state (for demo purposes) it doesn’t as it does not deliver the one intersection match for the last doc with id 3.https://mongoplayground.net/p/KAYPoV29RFOThat’s my query:Here is the example data (simplified):and the result as it is: (expected is 3 matches for id 1, 2 matches for id 2 and one for the last id. However, the last result has 0 elements in the intersection result. Again, when i change “displayName” to “displayNam” or “displayNames” (obviously in all docs), it give the correct result…Does anyone have an idea what I am missing here?Thanks in advance,Chris",
"username": "Chris_Bernil"
},
{
"code": "{ _id: '1',\n result: \n [ { _id: '1',\n match: \n [ { _id: 'a', displayName: 'a', level: 1 },\n { _id: 'b', displayName: 'b', level: 2 },\n { _id: 'c', displayName: 'c', level: 3 } ] },\n { _id: '2',\n match: \n [ { _id: 'a', displayName: 'a', level: 1 },\n { _id: 'b', displayName: 'b', level: 2 } ] },\n { _id: '3',\n match: [ { _id: 'a', displayName: 'a', level: 1 } ] } ] }\n",
"text": "Hi @Chris_Bernil ,Unfortunately, in my experience https://mognoplayground.net does not always provide consistent MongoDB emulation and behaviour.I recommend loading the data and using one of the official MongoDB tools or drivers to rule out if something works or not.Using your queries in MongoDB shell or compass against a real MongoDB instance return consistent results:So although playground is a convenient tool, its flaky and therefore use the queries against a certified server.There is a very nice web shell interface that can be instantly consumed for tests:\nhttps://mws.mongodb.com/Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi Pavel,thanks for your response. I actually did try this on my local mongo installation via Studio3T before writing this post. But after banging my head against the wall with this for a few hours (with the real query being way more complex than this simplified example) in the early morning I must have copied the data badly, as it does work now locally.Thank you for taking the time,Chris",
"username": "Chris_Bernil"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| $setIntersection works only randomly | 2022-12-11T04:43:57.580Z | $setIntersection works only randomly | 1,264 |
[
"dot-net",
"unity"
]
| [
{
"code": "",
"text": "Hello,Im trying to connect my Unity project to a MongoDB cloud instance using the mongo-csharp-driver but, after importing the package from github, Unity just gives a ton of errors relating sintaxe for instance “(” , “;” , etc .\nI tried using the Net framework 4.X settings but nothing changed. I also tried to change my project from WebGL to Windows platform but the errors are always there.Does someone know how to solve this ?Thank you!\ndhdr1027×308 39.7 KB\n",
"username": "Bruno_Miguel"
},
{
"code": "",
"text": "Hi, did you manage to fix this?\nGot the same problem. Spend a day searching over the internet, but to no avail.",
"username": "Michael_Ochnev"
},
{
"code": "",
"text": "I managed to get the C# driver working using a dotnet classlib project. You can find the details here:",
"username": "Martin_Raue"
}
]
| MongoDB with Unity - Sintaxe errors | 2021-11-03T13:01:08.056Z | MongoDB with Unity - Sintaxe errors | 4,392 |
|
null | [
"dot-net",
"unity"
]
| [
{
"code": "",
"text": "Hi,we are using Mongo DB Atlas as our Unity game backend DB.The backend is a Unity instance (currently 2021.3.12) that runs on linux servers (mono build).Some time ago, we have been able to get the 2.11.6 driver working using a nuget install with some minor changes (removing some libraries for different .Net version etc.).We would like to update to the latest version 2.18.0 but fail to internal error in the library. The latest one came from the BSON dll: “Unable to resolve reference ‘System.Runtime.CompilerServices.Unsafe’.”I was wondering:Thank you very much in advance!",
"username": "Martin_Raue"
},
{
"code": "dotnet> dotnet new classlib --framework \"netstandard2.0\" -o MongoDBUnity> cd MongoDBUnity> dotnet add package MongoDB.Driver> dotnet publish",
"text": "We manage to get working The trick was using a dotnet classlibrary project targeting .Net Standard 2.0:\n> dotnet new classlib --framework \"netstandard2.0\" -o MongoDBUnity\n> cd MongoDBUnity\n> dotnet add package MongoDB.Driver\n> dotnet publishThis will collect all the required dependencies:\n\ngrafik606×778 23.1 KB\nAdd this to the Unity project and it worked.Note: This is only for the mono backend. We did not test it with IL2CCP (and I doubt that would work).",
"username": "Martin_Raue"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| C Sharp Driver in Unity | 2022-12-10T11:04:21.243Z | C Sharp Driver in Unity | 3,400 |
[
"aggregation",
"indexes",
"atlas-search",
"text-search"
]
| [
{
"code": "",
"text": "Hi All,\nI am trying to query search index and text indexing in same query, as showing in picture below\n\nimage753×604 16.1 KB\nBut I am getting this error always\nimage1156×206 11.2 KB\nI tried changing the position of queries too, still error remains same.\nAnyone help me and provide me a method how to implement both queries simultaneously.",
"username": "Ashutosh_Mishra1"
},
{
"code": "",
"text": "Hi @Ashutosh_Mishra1 ,The two operators (atlas text search and regular text search) are incompatible together.You should be able to achieve everything with just atlas search stage,\n.Can you explain the requirements.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny , Thanks for replying.\nWe have a fuzzy search index created to search for a field named “CompanyName”, and simultaneously we also want a textSearch index, to search through multiple fields namely, “technology”, “business”, etc. I want to get AND results of both the queries using single Atlas Aggregation pipeline.",
"username": "Ashutosh_Mishra1"
},
{
"code": "db.collection.aggregate([\n {\n \"$search\": {\n \"compound\": {\n \"must\": [{\n \"text\": {\n \"query\": \"company1\",\n \"path\": \"companyName\"\n }\n }],\n \n \"filter\": [{\n \"text\": {\n \"query\": \"bussnies1\",\n \"path\": [\"technology\",\"business\"]\n }\n }]\n }\n }\n }\n])\n",
"text": "Hi @Ashutosh_Mishra1 ,You need a compound operator that has a text search on one field with fuzzy and then probably a filter clause on other fields:You can see more variations and consideration when to add an end as “must” or when to use other filters. Filter does not affect score so its up to you how to use it properly.Use the compound operator to combine multiple operators in a single query and get results with a match score.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "We don’t need to metion respective indexes in queries? as we have different indexes for different fields in a collection.\n\nLike we mention here",
"username": "Ashutosh_Mishra1"
},
{
"code": "",
"text": "Hi @Ashutosh_Mishra1 ,As far as I know it noy works with a single index in the same aggregation.Why do you index on different indexes? Are the purpose different or its just been done that way?Thanks\nPavel",
"username": "Pavel_Duchovny"
}
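For context, a single Atlas Search index definition covering all three fields might look roughly like this (field names taken from the thread, analyzers left at their defaults), after which the compound query above can be served by that one index:

```json
{
  "mappings": {
    "dynamic": false,
    "fields": {
      "companyName": { "type": "string" },
      "technology": { "type": "string" },
      "business": { "type": "string" }
    }
  }
}
```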
]
| $text and $search in same aggregation pipeline | 2022-12-08T06:26:58.337Z | $text and $search in same aggregation pipeline | 2,624 |
|
null | [
"aggregation",
"queries",
"data-modeling"
]
| [
{
"code": "[\n{\n \"title\": \"This is a title-1\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik1.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-1-72876078\"\n},\n{\n \"title\": \"This is a title-2\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik2.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-2-72876078\"\n},\n{\n \"title\": \"This is a title-3\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik3.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-3-72876078\"\n},\n{\n \"title\": \"This is a title-4\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik4.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-4-72876078\"\n},\n{\n \"title\": \"This is a title-5\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik5.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-5-72876078\"\n},\n{\n \"title\": \"This is a title-6\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik6.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-6-72876078\"\n},\n{\n \"title\": \"This is a title-7\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik7.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-7-72876078\"\n},\n{\n \"title\": \"This is a title-8\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik8.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-8-72876078\"\n},\n{\n \"title\": \"This is a title-9\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik9.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-9-72876078\"\n},\n{\n \"title\": \"This is a title-10\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik10.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-10-72876078\"\n},\n{\n \"title\": \"This is a title-11\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik11.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-11-72876078\"\n},\n{\n \"title\": \"This is a title-12\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik12.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-12-72876078\"\n},\n]\n{\n \"title\": \"This is a title-1\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik1.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-1-72876078\",\n \"randomPost1\": {\n \"title\": \"This is a title-6\",\n \"description\": \"This is a description\",\n \"imageURL\": 
\"https://img.freepik6.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-6-72876078\"\n },\n \"randomPost2\": {\n \"title\": \"This is a title-9\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik9.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-9-72876078\"\n },\n \"randomPost3\": {\n \"title\": \"This is a title-5\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik5.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-5-72876078\"\n },\n \"randomPost4\": {\n \"title\": \"This is a title-12\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik12.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-12-72876078\"\n },\n \"randomPost5\": {\n \"title\": \"This is a title-8\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik8.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-8-72876078\"\n },\n \"randomPost6\": {\n \"title\": \"This is a title-2\",\n \"description\": \"This is a description\",\n \"imageURL\": \"https://img.freepik2.com/free-vector/night-ocean-landscape-full-moon-stars-shine_107791-7397.jpg?w=2000\",\n \"slug\": \"this-is-a-title-2-72876078\"\n }\n }\n",
"text": "This is my collection, Just simple post data per document.How to add random post data of other documents to every document like this:How can I do this?",
"username": "Manish_Sharma10"
},
{
"code": "db.collection.aggregate([\n {\n \"$match\": {\n \"title\": \"This is a title-1\"\n }\n },\n {\n \"$lookup\": {\n \"from\": \"collection\",\n \"pipeline\": [\n {\n \"$sample\": {\n \"size\": 5\n }\n }\n ],\n \"as\": \"randomPosts\"\n }\n }\n])\n",
"text": "Hi @Manish_Sharma10 ,This can be done with aggregation using $lookup and $sample with a “pipeline lookup” syntax:Important the number of random posts is provided to the sample stage. In this one I get 5 random posts.Ty\nPavel",
"username": "Pavel_Duchovny"
}
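One detail worth noting: because the $lookup above has no join condition, the current post can itself show up among the random picks. If that matters, a variant (untested sketch) that excludes it before sampling could look like:

```javascript
db.collection.aggregate([
  { $match: { title: "This is a title-1" } },
  {
    $lookup: {
      from: "collection",
      let: { currentId: "$_id" },
      pipeline: [
        { $match: { $expr: { $ne: ["$_id", "$$currentId"] } } },
        { $sample: { size: 5 } }
      ],
      as: "randomPosts"
    }
  }
])
```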
]
| How to group random documents? | 2022-12-10T05:59:47.065Z | How to group random documents? | 891 |
null | []
| [
{
"code": "",
"text": "To confirm my understanding (because the documentation is unclear to me at least):GraphQL requests are authenticated using a Realm Access Token (passed as “Authorization: Bearer” header) or using one of the less recommended approaches: “api-key” header, basic authentication with email / password, or passing “jwtTokenString” header with the full JWT tokenCalls to functions using Application Authentication via HTTPS endpoints can only be authenticated using “api-key” header, basic authentication with email / password, or passing the JWT with the “jwtTokenString” headerIf this understanding is correct, why can’t the authentication approach be aligned between GraphQL and HTTPS endpoints? Why require me to get an access_token for GraphQL queries if I can’t use it for HTTPS endpoint requests?",
"username": "Nick_Olson"
},
{
"code": "access_tokenaccess_tokenasync function authenticate() {\n try {\n const credentials = Realm.Credentials.emailPassword(\"<email>\", \"<password>\");\n const user = await app.logIn(credentials);\n console.log(`access token: ${user.accessToken}`);\n return user.id;\n } catch (err) {\n console.error(err);\n }\n}\naccess_tokencurl --location --request POST 'https://eu-west-1.aws.realm.mongodb.com/api/client/v2.0/app/<app_id>/graphql' \\\n--header 'Authorization: Bearer <access_token>' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\"query\":\"query ... }'\naccess_tokenasync function authenticate() {\n console.log(\"AUTHENTICATE\");\n try {\n const credentials = Realm.Credentials.emailPassword(\"<email>\", \"<password>\");\n const user = await app.logIn(credentials);\n const resultOfCallFunction = await user.callFunction(\"<function name>\",[arg1, arg2]);\n return user.id;\n } catch (err) {\n console.error(err);\n }\n}\n",
"text": "Hello @Nick_OlsonThank you for joining the MongoDB Forum community - My name is Josman and I am happy to assist you with your question.GraphQL requests are authenticated using a Realm Access Token (passed as “Authorization: Bearer” header) or using one of the less recommended approaches: “api-key” header, basic authentication with email/password, or passing “jwtTokenString” header with the full JWT tokenUnder the hood, when you use the recommended way to authenticate GraphQL requests, you need to generate an access_token first. Therefore, you will need to use one of the authentication providers Realm offers to issue an access_token.Following my previous example, if I use the email/password provider, I would need to generate a user token:And use that access_token to perform a GraphQL request:Calls to functions using Application Authentication via HTTPS endpoints can only be authenticated using “api-key” header, basic authentication with email / password, or passing the JWT with the “jwtTokenString” headerWhen calling a Realm Function with application authentication, i.e, execute with the permissions of the user calling the function, we need to be authenticated first by using one of the authentication providers previously mentioned and using the authenticated user to call the Realm Function which will have the inherited access_token:If this understanding is correct, why can’t the authentication approach be aligned between GraphQL and HTTPS endpoints? Why require me to get an access_token for GraphQL queries if I can’t use it for HTTPS endpoint requests?In the end, both authentication methods are aligned. Thus, both will benefit from the authentication providers you are enabling in your Realm App.Please let me know if you have any additional questions or concerns regarding the details above.Kind Regards,\nJosman",
"username": "Josman_Perez_Exposit"
},
{
"code": "access_tokenaccess token",
"text": "Josman - thank you for taking the time to respond, I appreciate it!Your response lays out where the confusion is, at least for me. As I understand it, making a GraphQL request is a two step process - step 1 is to authenticate, step 2 is to use the user token (access_token) for all future queries (i.e., you have to already be authenticated). Whereas calling a function through an HTTPS endpoint is a one step process - you authenticate (via email/pass, api-key, JWT) and call the function all together.For context, my app has an authentication package (nuxt auth) that can be configured to query an endpoint after authentication to get user information. I am able to authenticate into Mongo using a JWT, however, a subsequent call to an HTTPS Endpoint uses the access token as the authentication header in that request.I guess my main question is whether there is a way to use Bearer authentication in calls to HTTPS Endpoint functions (i.e., whether Application Authentication supports Bearer authentication) or if I need to pass in the entire JWT whenever I want to hit the endpoint.Thanks again for the help!",
"username": "Nick_Olson"
},
{
"code": "context.functions// difference: subtracts b from a using the sum function\nexports = function(a, b) {\n return context.functions.execute(\"sum\", a, -1 * b);\n};\n{\n \"numGamesPlayed\": {\n \"%function\": {\n \"name\": \"sum\",\n \"arguments\": [\n \"%%root.numWins\",\n \"%%root.numLosses\"\n ]\n }\n }\n}\nrealm-cli function run \\\n --function=sum \\\n --user=61a50d82532cbd0de95c7c89 \\\n --args=1 --args=2\n",
"text": "Hello @Nick_OlsonWhereas calling a function through an HTTPS endpoint is a one step process - you authenticate (via email/pass, api-key, JWT) and call the function all together.Could you please share with me an example of the above? Currently, you can call a function from within:When you are referring to an HTTPS endpoint, could you please share with me an example?Looking forward to your response.Regards,\nJosman",
"username": "Josman_Perez_Exposit"
},
{
"code": "access_tokenaccess_tokenaccess_token",
"text": "Sure - I am trying to call the function from a client application via HTTPS Endpoint. My client app is built with nuxt/vue and queries data from my Mongo Atlas instance using the GraphQL interface exposed by Realm.I log in using Sign in with Google, so I get a JWT from Google, which I then pass to Realm to authenticate (using Custom JWT Authentication). I take the resulting user token (access_token) and use it for all my GraphQL queries.For this thread, the HTTPS Endpoint in question would just respond back with the user’s information from context.user. I.e., I would like to take my authenticated user in my client web app (who has an access_token), query /userinfo, and get back that user’s info (which was populated into Realm through the Google JWT payload).I recognize that I could potentially use the Web SDK but my intent is to keep the front end as decoupled as possible from the backend and I want to minimize the number of packages since I’m already using nuxt.Hopefully this helps clarify. Basically I am trying to understand how I will need to configure the axios instance in my app to correctly hit Realm HTTPS Endpoints - do I need to pass the full JWT or can I just use an access_token header.",
"username": "Nick_Olson"
},
{
"code": "access_token",
"text": "Hello @Nick_OlsonHopefully this helps clarify. Basically I am trying to understand how I will need to configure the axios instance in my app to correctly hit Realm HTTPS Endpoints - do I need to pass the full JWT or can I just use an access_token header.Thank you for clarifying your use case. Unfortunately, at this moment you can’t use access tokens with https endpoints yet, which is why they are not recommended for the client-side. We are working on this but I do not have an ETA yet of when this is going to be available.I am so sorry for the inconvenience this might cause on your application.Please let me know if you have any additional questions or concerns regarding the details above.Kind Regards,\nJosman",
"username": "Josman_Perez_Exposit"
},
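In practice that leaves the client with two slightly different axios setups until that lands: a Bearer access token for GraphQL, and the full custom JWT in a jwtTokenString header for HTTPS endpoints. The URLs, app ID and variable names below are placeholders:

```javascript
// GraphQL: authenticate with the Realm access token obtained at login
const graphql = axios.create({
  baseURL: "https://realm.mongodb.com/api/client/v2.0/app/<app_id>/graphql",
  headers: { Authorization: `Bearer ${accessToken}` },
});

// HTTPS endpoints: pass the original custom JWT instead
const endpoints = axios.create({
  baseURL: "https://data.mongodb-api.com/app/<app_id>/endpoint",
  headers: { jwtTokenString: googleJwt },
});

const userInfo = await endpoints.get("/userinfo");
```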
{
"code": "",
"text": "@Josman_Perez_Exposit - I appreciate you confirming this. I suspected as much once I figured out that I could authenticate using the full JWT string but I wanted to make sure I wasn’t missing anything obvious.Thank you for your help!",
"username": "Nick_Olson"
},
{
"code": "{\"error\":\"invalid session: error finding user for endpoint\",\"error_code\":\"InvalidSession\",\"link\":\"https://realm.mongodb.com/groups/error_log\"}",
"text": "Hi @Josman_Perez_Exposit , sorry to ask here instead of creating a new post - I actually did but haven’t got any response I’m trying to call HTTPS endpoint using JWT - but unfortunately, I keep getting the following error\n{\"error\":\"invalid session: error finding user for endpoint\",\"error_code\":\"InvalidSession\",\"link\":\"https://realm.mongodb.com/groups/error_log\"}",
"username": "Mustafa_Al_Ani"
},
{
"code": "",
"text": "@Mustafa_Al_Ani - do you have Custom JWT set up as an authentication provider in Realm → Data Access → Authentication?If so, do you have an App User configured for that JWT token?",
"username": "Nick_Olson"
},
{
"code": "",
"text": "@Nick_Olson - I do have a custom JWT setup as a provider.I don’t have a specific user for JWT, can’t I use the existing (email/pass provider) users ids?",
"username": "Mustafa_Al_Ani"
},
{
"code": "",
"text": "You might need to configure the Custom JWT authentication to create the user if they haven’t previously logged in. I would also double check your user permissions.I am pretty sure you can map the user entry that’s created by the Custom JWT capability to existing users (e.g., those create through email / pass or in a separate DB). I think you have to set a UUID key for the user doc though.",
"username": "Nick_Olson"
},
{
"code": "",
"text": "Hi @Josman_Perez_Exposit , is the feature for using HTTPS endpoint using access token implemented now ??",
"username": "k_prabhath"
}
]
| Different authorization approaches for HTTPS endpoint vs GraphQL endpoint | 2021-12-16T15:02:58.230Z | Different authorization approaches for HTTPS endpoint vs GraphQL endpoint | 6,495 |
null | [
"node-js"
]
| [
{
"code": " const db = client.db(dbName);\n const collections = await db.listCollections();\n console.log(collections );\nCollection {\n s: {\n db: Db { s: [Object] },\n options: {\n raw: false,\n promoteLongs: true,\n promoteValues: true,\n promoteBuffers: false,\n ignoreUndefined: false,\n bsonRegExp: false,\n serializeFunctions: false,\n fieldsAsRaw: {},\n enableUtf8Validation: true,\n readPreference: [ReadPreference]\n },\n namespace: MongoDBNamespace { db: 'apphead', collection: 'todos' },\n pkFactory: { createPk: [Function: createPk] },\n readPreference: ReadPreference {\n mode: 'primary',\n tags: undefined,\n hedge: undefined,\n maxStalenessSeconds: undefined,\n minWireVersion: undefined\n },\n bsonOptions: {\n raw: false,\n promoteLongs: true,\n promoteValues: true,\n promoteBuffers: false,\n ignoreUndefined: false,\n bsonRegExp: false,\n serializeFunctions: false,\n fieldsAsRaw: {},\n enableUtf8Validation: true\n },\n readConcern: undefined,\n writeConcern: undefined\n }\n}\n\n<ref *2> ListCollectionsCursor {\n _events: [Object: null prototype] {},\n _eventsCount: 0,\n _maxListeners: undefined,\n parent: Db {\n s: {\n client: [MongoClient],\n options: [Object],\n logger: [Logger],\n readPreference: [ReadPreference],\n bsonOptions: [Object],\n pkFactory: [Object],\n readConcern: undefined,\n writeConcern: undefined,\n namespace: [MongoDBNamespace]\n }\n },\n filter: {},\n options: {\n raw: false,\n promoteLongs: true,\n promoteValues: true,\n promoteBuffers: false,\n ignoreUndefined: false,\n bsonRegExp: false,\n serializeFunctions: false,\n fieldsAsRaw: {},\n enableUtf8Validation: true,\n readPreference: ReadPreference {\n mode: 'primary',\n tags: undefined,\n hedge: undefined,\n maxStalenessSeconds: undefined,\n minWireVersion: undefined\n }\n },\n [Symbol(kCapture)]: false,\n [Symbol(client)]: <ref *1> MongoClient {\n _events: [Object: null prototype] {},\n _eventsCount: 0,\n _maxListeners: undefined,\n s: {\n url: 'mongodb://localhost:27017',\n bsonOptions: [Object],\n namespace: [MongoDBNamespace],\n hasBeenClosed: false,\n sessionPool: [ServerSessionPool],\n activeSessions: [Set],\n options: [Getter],\n readConcern: [Getter],\n writeConcern: [Getter],\n readPreference: [Getter],\n logger: [Getter],\n isMongoClient: [Getter]\n },\n topology: Topology {\n _events: [Object: null prototype],\n _eventsCount: 26,\n _maxListeners: undefined,\n bson: [Object: null prototype],\n s: [Object],\n client: [Circular *1],\n [Symbol(kCapture)]: false,\n [Symbol(waitQueue)]: [Denque]\n },\n [Symbol(kCapture)]: false,\n [Symbol(options)]: [Object: null prototype] {\n hosts: [Array],\n compressors: [Array],\n connectTimeoutMS: 30000,\n directConnection: false,\n metadata: [Object],\n enableUtf8Validation: true,\n forceServerObjectId: false,\n heartbeatFrequencyMS: 10000,\n keepAlive: true,\n keepAliveInitialDelay: 120000,\n loadBalanced: false,\n localThresholdMS: 15,\n logger: [Logger],\n maxConnecting: 2,\n maxIdleTimeMS: 0,\n maxPoolSize: 100,\n minPoolSize: 0,\n minHeartbeatFrequencyMS: 500,\n monitorCommands: false,\n noDelay: true,\n pkFactory: [Object],\n raw: false,\n readPreference: [ReadPreference],\n retryReads: true,\n retryWrites: true,\n serverSelectionTimeoutMS: 7500,\n socketTimeoutMS: 0,\n srvMaxHosts: 0,\n srvServiceName: 'mongodb',\n waitQueueTimeoutMS: 0,\n zlibCompressionLevel: 0,\n dbName: 'test',\n userSpecifiedAuthSource: false,\n userSpecifiedReplicaSet: false\n }\n },\n [Symbol(namespace)]: MongoDBNamespace { db: 'apphead', collection: undefined },\n [Symbol(documents)]: 
[],\n [Symbol(initialized)]: false,\n [Symbol(closed)]: false,\n [Symbol(killed)]: false,\n [Symbol(options)]: {\n readPreference: ReadPreference {\n mode: 'primary',\n tags: undefined,\n hedge: undefined,\n maxStalenessSeconds: undefined,\n minWireVersion: undefined\n },\n fieldsAsRaw: {},\n promoteValues: true,\n promoteBuffers: false,\n promoteLongs: true,\n serializeFunctions: false,\n ignoreUndefined: false,\n bsonRegExp: false,\n raw: false,\n enableUtf8Validation: true\n },\n [Symbol(session)]: ClientSession {\n _events: [Object: null prototype] { ended: [Function] },\n _eventsCount: 1,\n _maxListeners: undefined,\n client: <ref *1> MongoClient {\n _events: [Object: null prototype] {},\n _eventsCount: 0,\n _maxListeners: undefined,\n s: [Object],\n topology: [Topology],\n [Symbol(kCapture)]: false,\n [Symbol(options)]: [Object: null prototype]\n },\n sessionPool: ServerSessionPool { client: [MongoClient], sessions: [] },\n hasEnded: false,\n clientOptions: [Object: null prototype] {\n hosts: [Array],\n compressors: [Array],\n connectTimeoutMS: 30000,\n directConnection: false,\n metadata: [Object],\n enableUtf8Validation: true,\n forceServerObjectId: false,\n heartbeatFrequencyMS: 10000,\n keepAlive: true,\n keepAliveInitialDelay: 120000,\n loadBalanced: false,\n localThresholdMS: 15,\n logger: [Logger],\n maxConnecting: 2,\n maxIdleTimeMS: 0,\n maxPoolSize: 100,\n minPoolSize: 0,\n minHeartbeatFrequencyMS: 500,\n monitorCommands: false,\n noDelay: true,\n pkFactory: [Object],\n raw: false,\n readPreference: [ReadPreference],\n retryReads: true,\n retryWrites: true,\n serverSelectionTimeoutMS: 7500,\n socketTimeoutMS: 0,\n srvMaxHosts: 0,\n srvServiceName: 'mongodb',\n waitQueueTimeoutMS: 0,\n zlibCompressionLevel: 0,\n dbName: 'test',\n userSpecifiedAuthSource: false,\n userSpecifiedReplicaSet: false\n },\n explicit: false,\n supports: { causalConsistency: true },\n clusterTime: undefined,\n operationTime: undefined,\n owner: [Circular *2],\n defaultTransactionOptions: {},\n transaction: Transaction {\n state: 'NO_TRANSACTION',\n options: {},\n _pinnedServer: undefined,\n _recoveryToken: undefined\n },\n [Symbol(kCapture)]: false,\n [Symbol(snapshotEnabled)]: false,\n [Symbol(serverSession)]: null,\n [Symbol(txnNumberIncrement)]: 0\n }\n}\n\n",
"text": "I try to list all Collections like this in Node.jsBut all I get back is this weird output instead of an ArrayAm I doing something wrong?",
"username": "Ivan_Jeremic"
},
{
"code": "",
"text": "From my best friend the documentation, I understand that listCollections() return a cursor. As with other cursors you might need to call toArray().",
"username": "steevej"
},
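A minimal Node.js sketch of the suggestion above (it assumes an already-connected MongoClient named client and a database name in dbName):

```javascript
const db = client.db(dbName);

// listCollections() returns a cursor; toArray() drains it into a plain array.
const collections = await db.listCollections().toArray();
console.log(collections.map(c => c.name)); // e.g. [ 'todos' ]

// If only the names are needed, the driver can skip the extra metadata.
const namesOnly = await db.listCollections({}, { nameOnly: true }).toArray();
```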
{
"code": " db.listCollections().toArray()",
"text": "Thanks db.listCollections().toArray() solves my problem",
"username": "random_user123"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| listCollections Node.js Driver | 2022-10-01T15:36:14.949Z | listCollections Node.js Driver | 4,796 |
null | [
"monitoring"
]
| [
{
"code": "",
"text": "Hello All,\nI have setup a MongoDB atlas database, which is shared among my 5 team members, all of them can create/delete/update documents.\nI just want to know that, is there any way from which, It can be monitored and tracked that which user has accessed the database and did any sort of CRUD changes or amything at any time.\nThanks in Advance!",
"username": "Ashutosh_Mishra1"
},
{
"code": "",
"text": "Auditing is a feature available in Atlas(M10 and above) and Enterprise Edition, not something you can do with Community edition.",
"username": "chris"
}
]
| Atlas Monitoring | 2022-12-09T06:41:48.328Z | Atlas Monitoring | 1,681 |
null | [
"containers"
]
| [
{
"code": "",
"text": "I didn’t quite know where to post this, so apologies if it’s in the wrong place.I wondered if it would be a good idea in the docker page to advise of special characters that should not make up the password of a user as I recently got caught out by needing to use escape characters in bash in order to login on the command line.Cheers",
"username": "Simon_McNair"
},
{
"code": "",
"text": "The docker image is maintained by the docker community.Try raising an issue on their github for it:Docker Official Image packaging for MongoDB. Contribute to docker-library/mongo development by creating an account on GitHub.",
"username": "chris"
},
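To illustrate the escaping issue discussed above (the user name and password below are made up): single-quote the connection string for bash, and percent-encode URI-reserved characters such as @ or & inside the password.

```sh
# Password "p@ss&word" percent-encoded as p%40ss%26word; single quotes stop bash from expanding anything.
mongosh 'mongodb://appuser:p%40ss%26word@localhost:27017/admin'
```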
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Docker container documentation query | 2022-12-09T10:28:53.023Z | Docker container documentation query | 1,165 |
null | [
"java"
]
| [
{
"code": "",
"text": "hello\ni want to insert rows into a collection from a java 8 stream. is there a way to insert larger-than-memory streams? and maybe do it parallel? only way i could think of is split it into the chunks (because its a stream i dont know the exact size) and insert each chunk sequentially.\nis there a better way to do this ?\ni am using sync driver",
"username": "Ali_ihsan_Erdem1"
},
{
"code": "",
"text": "Hi @Ali_ihsan_Erdem1,I don’t understand what you are trying to do exactly.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "lets say i have a 20GB csv file i want to insert into a mongodb collection. and i have 8gb of ram. is there a way to do this other than multiple insert_one or splitting the data into chunks and writing it chunk by chunk",
"username": "Ali_ihsan_Erdem1"
},
{
"code": "mongoimport--type=csv-j",
"text": "Are you going to transform the data from the CSV (like transform dates in ISODates or make sure some geo loc data is stored as a proper GeoJSON valid point or you just want to insert the CSV “as is”?If that’s the case then I think I would recommend using mongoimport with --type=csv.You can use -j to set the number of insertion workers but if you really want to go fast, the easiest way is probably to cut it in 20 parts and spawn 20 jobs on 20 different machines. But then is your cluster strong enough to ingest that much data ? Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "i know mongoimport and like i said i know that i can chunk my data in to pieces.\ni was looking for a more elegant solution like giving mongodb driver an iterator and expecting the driver to do some magic.\ni can resize my cluster to moon, that is not the problem but in my previous experience mongodb cant leverage all the hardware. we get \"out of memory \" errors on aggregations despite having enormous amount(256GB) of ram and relatively small(2GB max) collections. this is why i am rewriting our pipeline in java.",
"username": "Ali_ihsan_Erdem1"
},
{
"code": "{allowDiskUse: true}",
"text": "\"out of memory \"This triggers my spidey-sense that you might need to add the {allowDiskUse: true} option in your aggregation command.You probably get that because your pipeline is trying to use more than 100MB of RAM and needs to write to temporary disk files.Back to your CSV issue, I wouldn’t use insert_one at all in this situation as each insert_one operation would need a TCP round trip to acknowledge each write operations.I would - indeed - use a bulkwrite operation instead (or an insert_many) to reduce the number of TCP round trips. I would send the bulkwrite operation every 1000 docs or so. Maybe 10000 if they are small.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "i did add the allowDiskUse: true to my pipeline. still same",
"username": "Ali_ihsan_Erdem1"
},
{
"code": "",
"text": "If you feel like it, we can have a look to the pipeline in another topic and try to find the problem. Feel free to tag me.Ideally I would need a way to reproduce the problem but it’s most probably not easy with a few sample docs. But if you could provide with a few sample docs + the pipeline + the expected output + the error message, I think we can have a look at it.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "i was looking for a more elegant solution like giving mongodb driver an iterator and expecting the driver to do some magic.",
"username": "square_mcs"
}
]
| Writing stream to collection without putting it all into memory | 2022-12-07T12:31:22.049Z | Writing stream to collection without putting it all into memory | 2,044 |
null | [
"aggregation",
"atlas-triggers"
]
| [
{
"code": " \"{\\\"updatedFields\\\":{\\\"livescores.currentstatus\\\":\\\"PAUSED\\\"}{\"updateDescription.updatedFields.livescores.currentstatus\":{\"$ne\":null}}{\"updateDescription.updatedFields.livescores.currentstatus\":{\"$exists\":true}}{\"updateDescription.updatedFields\": {\"livescores.currentstatus\":{\"$exists\":true}}}{\"updateDescription\": {\"updatedFields\":{\"livescores.currentstatus\":{\"$exists\": true}}}}{\"updateDescription.updatedFields.livescores\":{\"currentstatus\":{\"$exists\":true}}}",
"text": "So I’m unsure if I’m totally misunderstanding the documentation or just having a major brain fart, but I cannot for the life of me get this $match in Triggers to work. Below is my change stream \"{\\\"updatedFields\\\":{\\\"livescores.currentstatus\\\":\\\"PAUSED\\\"}and then attempted $match queries{\"updateDescription.updatedFields.livescores.currentstatus\":{\"$ne\":null}}\n{\"updateDescription.updatedFields.livescores.currentstatus\":{\"$exists\":true}}\n{\"updateDescription.updatedFields\": {\"livescores.currentstatus\":{\"$exists\":true}}}\n{\"updateDescription\": {\"updatedFields\":{\"livescores.currentstatus\":{\"$exists\": true}}}}\n{\"updateDescription.updatedFields.livescores\":{\"currentstatus\":{\"$exists\":true}}}I want the trigger to run when the livescores.currentstatus field is present. The first query is the only one that allows anything through, but also allows everything through. Appreciate any advice that can be provided, have been going round with this for too long",
"username": "Kaleb_Ludlow"
},
{
"code": "livescores.currentstatus",
"text": "Okay it seems its because the changeEvent is showing the livescores.currentstatus field as a dot-not object instead of the proper document representation. Is this an oversight or am I doing something wrong",
"username": "Kaleb_Ludlow"
},
{
"code": "\"$set\" : {\"livescores\":{\"currentstatus\": f\"PAUSED\"}}",
"text": "Okay I think I’ve figured it out. If I’m performing a $set operation on the livescores.currentstatus field, using dot notation, it won’t work. If I instead separate the two fields like this \"$set\" : {\"livescores\":{\"currentstatus\": f\"PAUSED\"}} then it matches, but it will remove the fields within the livescores document that aren’t included. What is the correct way to update a single field within an nested doc, without removing other fields, and have the Trigger still $match?",
"username": "Kaleb_Ludlow"
},
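One possible match expression for this, sketched under the assumption that the cluster runs MongoDB 5.0+ (where $getField is available) and left untested here: in the change event, updatedFields stores the dotted path as one literal key, so plain dot notation cannot reach it, but $getField can read a key that contains dots. The ordinary dot-notation $set remains fine for the update itself, since it only touches the one nested field and leaves its siblings alone.

```javascript
{
  "$expr": {
    "$ne": [
      {
        "$type": {
          "$getField": {
            "field": "livescores.currentstatus",
            "input": "$updateDescription.updatedFields"
          }
        }
      },
      "missing"
    ]
  }
}
```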
{
"code": "",
"text": "Okay I think I’ve got it. How can I update a single field in an object without removing the rest? findAndModify or perhaps my $set update is incorrectEdit: This page here states that $set uses dot notation to access fields in embedded docs. There is no mention of there being an impact on the $match query in Triggers",
"username": "Kaleb_Ludlow"
}
]
| Can't seem to get $match working | 2022-12-09T21:08:01.756Z | Can’t seem to get $match working | 1,650 |
null | [
"aggregation",
"node-js",
"compass",
"transactions"
]
| [
{
"code": "ackcompleted$match$set",
"text": "Hey everyone, just sharing a project I’ve been working on called DocMQ. It’s a visibility based queueing system optimized for run times set in the future as opposed to a traditional FIFO. Before going too much further, I wanted to explain the pitch/anti-pitch. Why wouldn’t you use this?\nThe first, and most important reason you wouldn’t use DocMQ is that you have a need for high-performance FIFO queues. While DocMQ supports the queue style, you’ll consistently see better performance on a tool such as Kafka or Rabbit or even a Redis based solution.The second reason you may not need DocMQ is that you are not requiring at-least-once delivery. Several MQ systems, especially in node, do not require an ack operation in order to confirm the job’s completion. For example, sending a push notification does not require at-least-once guarantees, as the missed notification does not have an outsized negative impact on the product. Why would you use this?\nQueryable future jobs. Because DocMQ is built around document based databases including Mongo, it’s trivial to execute a query and get a snapshot of upcoming and in-process jobs and their payload. Mongo Compass is already an excellent explorer that can tell you everything about your queue’s health using simple aggregation pipelines!At least once delivery. DocMQ operates on visibility windows. Similar to SQS/ASQ/CPS, DocMQ relies on a timestamp of when a job is eligible to be attempted. When jobs are claimed, this window is set N seconds in the future. If a job completes the completed flag is set, but if a job never completes, it becomes automatically eligible to run again when the timestamp N seconds in the future is reached. Coupled with Mongo’s transaction, once a job is in DocMQ it will run until its exhausted all attempts. Recovering a failed job is a simple $match + $set operation.Internally, we’re using DocMQ at work and a single worker is doing about 500 ops/s on a single core, limited mainly by our concurrency settings since other things also run on the machine. I’d love feedback about the tool or even key features you think it should have. The goal is to have a solid core that’s easy to extend and build on without requiring an entire ecosystem of custom tooling.",
"username": "Jakob_Heuser"
},
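For readers curious what a visibility window looks like at the database level, here is a generic sketch of the claim step — this is not DocMQ's actual code, just the pattern described above, against an assumed jobs collection shaped as { payload, visible, completed, attempts }:

```javascript
const VISIBILITY_SECONDS = 30;

// Atomically claim one eligible job and push its visibility window N seconds into the future.
const claimed = await db.collection("jobs").findOneAndUpdate(
  { completed: false, visible: { $lte: new Date() } },
  {
    $set: { visible: new Date(Date.now() + VISIBILITY_SECONDS * 1000) },
    $inc: { attempts: 1 }
  },
  { returnDocument: "after" }
);
// If the worker dies before acking, `visible` lapses and the job becomes claimable again.
```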
{
"code": "",
"text": "Just wanted to share with the Mongo community some updates since the original post. As of this message, the latest version of DocMQ is 0.5.4.DocMQ RoadmapIn 0.5.4In the future, I’m planning to add priority queues and support for additional context passing for the ack/fail lifecycle. And, of course, I’ll share these updates in here on the forum thread for those looking for Job & Message solutions that work with Mongo.",
"username": "Jakob_Heuser"
}
]
| [alpha] docmq - SQS style queues backed by Mongo for node.js | 2022-07-26T01:33:00.250Z | [alpha] docmq - SQS style queues backed by Mongo for node.js | 3,598 |
null | [
"aggregation",
"queries",
"java"
]
| [
{
"code": "{\n \"value\": {\n \"shipmentId\": 1079,\n \"customer_orders\": [\n {\n \"customer_order_id\": 1124,\n \"active\": false\n },\n {\n \"customer_order_id\": 1277,\n \"active\": true,\n \"items\": [\n {\n \"item_id\": 281,\n \"active\": false,\n \"qty\": 1,\n \"name\": \"apples\",\n \"attributes\": null\n },\n {\n \"item_id\": 282,\n \"active\": true,\n \"qty\": 2,\n \"name\": \"bananas\"\n }\n ]\n }\n ],\n \"carrier_orders\": [\n {\n \"carrier_order_id\": 744,\n \"active\": true\n }\n ]\n }\n}\ndb.getCollection('shipments').aggregate([\n {\n \"$match\": {\n \"value.shipmentId\": {\n \"$in\": [\n 1079\n ]\n }\"\n }\n },\n {\n \"$project\": {\n \"value.shipmentId\": 1,\n \"value.customer_orders\": 1,\n \"value.carrier_orders\": 1,\n }\n },\n {\n \"$addFields\":{\n \"value.customer_orders\":{\n $filter:{\n input: \"$value.customer_orders\",\n as: \"customer_order\",\n cond: {\n $eq: [\"$$customer_order.active\", true]\n }\n }\n },\n \"value.customer_orders.items\":{\n $filter:{\n input: \"$value.customer_orders.items\",\n as: \"item\",\n cond: {\n $eq: [\"$$item.active\", true]\n }\n }\n },\n \"value.carrier_orders\": {\n $filter:{\n input: \"$value.carrier_orders\",\n as: \"carrier_order\",\n cond: {\n $eq: [\"$$carrier_order.active\", true]\n }\n }\n }\n }\n }\n]\n);\n{\n \"value\": {\n \"shipmentId\": 1079,\n \"customer_orders\": [\n {\n \"customer_order_id\": 1277,\n \"active\": true,\n \"items\": [\n {\n \"item_id\": 282,\n \"active\": true,\n \"qty\": 2,\n \"name\": \"bananas\"\n }\n ]\n }\n ],\n \"carrier_orders\": [\n {\n \"carrier_order_id\": 744,\n \"active\": true\n }\n ]\n }\n}\n",
"text": "Collection in the database:Query:Desired output:I am trying to apply filters at two different levels:What I want is to filter out inactive customer orders, and within active customer orders, filter out inactive items. While doing this, if there are any attributes at the customer order level, we want to retain them too in the output.\nHow can I achieve this multi-level nesting of conditions and retain attributes using the aggregate pipeline?",
"username": "Ambarish_Rao"
},
{
"code": "top_filter = { \"$addFields\" : {\n \"value.customer_orders\": { \"$filter\" : {\n input: \"$value.customer_orders\",\n as: \"customer_order\",\n cond: { $eq: [\"$$customer_order.active\", true] }\n } }\n} }\n\ninner_filter = { $addFields : {\n \"value.customer_orders\" : { $map : {\n input: \"$value.customer_orders\",\n as: \"customer_order\",\n in: { $mergeObjects : [\n \"$$customer_order\" ,\n { \"items\" : { \"$filter\" : {\n input: \"$$customer_order.items\",\n as: \"item\",\n cond: { $eq: [\"$$item.active\", true] }\n } } }\n ] } \n } }\n} }\n",
"text": "If do not know but you probably can do it in a single $addField but it would be easier to do it in different stages.The first $addField will filter out customer_orders with active:true like you do now.Then a second $addField will $map customer_orders applying the $filter on the inner items.Use at your own risk:",
"username": "steevej"
},
{
"code": "test:PRIMARY> db.coll.findOne()\n{\n\t\"_id\" : ObjectId(\"639305dd6b130c4eb3fc8983\"),\n\t\"value\" : {\n\t\t\"shipmentId\" : 1079,\n\t\t\"customer_orders\" : [\n\t\t\t{\n\t\t\t\t\"customer_order_id\" : 1124,\n\t\t\t\t\"active\" : false\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"customer_order_id\" : 1277,\n\t\t\t\t\"active\" : true,\n\t\t\t\t\"items\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"item_id\" : 281,\n\t\t\t\t\t\t\"active\" : false,\n\t\t\t\t\t\t\"qty\" : 1,\n\t\t\t\t\t\t\"name\" : \"apples\",\n\t\t\t\t\t\t\"attributes\" : null\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"item_id\" : 282,\n\t\t\t\t\t\t\"active\" : true,\n\t\t\t\t\t\t\"qty\" : 2,\n\t\t\t\t\t\t\"name\" : \"bananas\"\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"customer_order_id\" : 1234,\n\t\t\t\t\"active\" : true,\n\t\t\t\t\"items\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"item_id\" : 302,\n\t\t\t\t\t\t\"active\" : false,\n\t\t\t\t\t\t\"qty\" : 1,\n\t\t\t\t\t\t\"name\" : \"pears\",\n\t\t\t\t\t\t\"attributes\" : null\n\t\t\t\t\t},\n\t\t\t\t\t{\n\t\t\t\t\t\t\"item_id\" : 303,\n\t\t\t\t\t\t\"active\" : true,\n\t\t\t\t\t\t\"qty\" : 2,\n\t\t\t\t\t\t\"name\" : \"pizzas\"\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t],\n\t\t\"carrier_orders\" : [\n\t\t\t{\n\t\t\t\t\"carrier_order_id\" : 744,\n\t\t\t\t\"active\" : true\n\t\t\t}\n\t\t]\n\t}\n}\nmatch = { \"$match\" : { \"value.shipmentId\" : { \"$in\" : [ 1079 ] } } }\n\nfilter1 = {\n\t\"$addFields\" : {\n\t\t\"value.customer_orders\" : {\n\t\t\t\"$filter\" : {\n\t\t\t\t\"input\" : \"$value.customer_orders\",\n\t\t\t\t\"as\" : \"customer_order\",\n\t\t\t\t\"cond\" : {\n\t\t\t\t\t\"$eq\" : [\n\t\t\t\t\t\t\"$$customer_order.active\",\n\t\t\t\t\t\ttrue\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t},\n\t\t\"value.carrier_orders\" : {\n\t\t\t\"$filter\" : {\n\t\t\t\t\"input\" : \"$value.carrier_orders\",\n\t\t\t\t\"as\" : \"carrier_order\",\n\t\t\t\t\"cond\" : {\n\t\t\t\t\t\"$eq\" : [\n\t\t\t\t\t\t\"$$carrier_order.active\",\n\t\t\t\t\t\ttrue\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\nfilter2 = {\n\t\"$addFields\" : {\n\t\t\"value.customer_orders\" : {\n\t\t\t\"$map\" : {\n\t\t\t\t\"input\" : \"$value.customer_orders\",\n\t\t\t\t\"as\" : \"customer_order\",\n\t\t\t\t\"in\" : {\n\t\t\t\t\t\"$mergeObjects\" : [\n\t\t\t\t\t\t\"$$customer_order\",\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"items\" : {\n\t\t\t\t\t\t\t\t\"$filter\" : {\n\t\t\t\t\t\t\t\t\t\"input\" : \"$$customer_order.items\",\n\t\t\t\t\t\t\t\t\t\"as\" : \"item\",\n\t\t\t\t\t\t\t\t\t\"cond\" : {\n\t\t\t\t\t\t\t\t\t\t\"$eq\" : [\n\t\t\t\t\t\t\t\t\t\t\t\"$$item.active\",\n\t\t\t\t\t\t\t\t\t\t\ttrue\n\t\t\t\t\t\t\t\t\t\t]\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\ntest:PRIMARY> db.coll.aggregate([match, filter1, filter2]).pretty()\n{\n\t\"_id\" : ObjectId(\"639305dd6b130c4eb3fc8983\"),\n\t\"value\" : {\n\t\t\"shipmentId\" : 1079,\n\t\t\"customer_orders\" : [\n\t\t\t{\n\t\t\t\t\"customer_order_id\" : 1277,\n\t\t\t\t\"active\" : true,\n\t\t\t\t\"items\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"item_id\" : 282,\n\t\t\t\t\t\t\"active\" : true,\n\t\t\t\t\t\t\"qty\" : 2,\n\t\t\t\t\t\t\"name\" : \"bananas\"\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"customer_order_id\" : 1234,\n\t\t\t\t\"active\" : true,\n\t\t\t\t\"items\" : [\n\t\t\t\t\t{\n\t\t\t\t\t\t\"item_id\" : 303,\n\t\t\t\t\t\t\"active\" : true,\n\t\t\t\t\t\t\"qty\" : 2,\n\t\t\t\t\t\t\"name\" : \"pizzas\"\n\t\t\t\t\t}\n\t\t\t\t]\n\t\t\t}\n\t\t],\n\t\t\"carrier_orders\" : [\n\t\t\t{\n\t\t\t\t\"carrier_order_id\" : 
744,\n\t\t\t\t\"active\" : true\n\t\t\t}\n\t\t]\n\t}\n}\n",
"text": "Wow @steevej this is really good and it works indeed!I spent a few hours yesterday trying to figure this out and I was missing the $mergeObjects part.\nMy pipeline was working for the given example by as I suspected, I was actually duplicating the same sub-array of items everywhere and I couldn’t find a solution just yet.I love your solution!Just for the sake of it and because I spent some time on it I’ll just provide the output of my console as I tested everything again with a more “complex” example that was breaking my pipeline. But all the credits is for @steevej!Very well done!I my example I added some pears and pizzas to make sure I wasn’t duplicating the same sub-array.Then I need the 3 stages of the pipeline:And finally I can aggregate:Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks for the good words.And thanks for putting out the whole solution together. I had left the $match and $filter of carrier_orders out because they were not necessary to understand the multi-level filtering. But it is nice to see it all together.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Aggregation: Multi-level filters within $addField | 2022-12-05T09:38:40.479Z | Aggregation: Multi-level filters within $addField | 2,174 |
null | [
"queries",
"data-modeling",
"java",
"atlas-data-lake"
]
| [
{
"code": "",
"text": "I am working on an appilcation that needs Archival of data wrt date created. I am planning to create two seprate MongoDB cluster with same collection names. So, is there any library or tech is there like Mongo Atlas Online archive/ data lake for free? to fecth/query from two different database at the single query? Mongo data lake having the unified interface to fetch data from two DBs with single query, which involves cost. Is there any free alternative to it?I am using Java& Sprinboot for my backend",
"username": "Imrankhan_M"
},
{
"code": "",
"text": "Hey @Imrankhan_M. While I’m not aware of any free alternative. I can say that a Federated Database Instance is very inexpensive. The way the service works is that we push down as much of the query as possible to the underlying cluster so we transfer as little data as possible between the underlying cluster and the federated database instance.",
"username": "Benjamin_Flast"
}
]
| Free Alternative to Mongo Atlas Data lake? | 2022-12-09T06:51:08.129Z | Free Alternative to Mongo Atlas Data lake? | 2,020 |
null | [
"aggregation",
"dot-net"
]
| [
{
"code": "MongoDB.Driver.MongoCommandException: Command aggregate failed: PlanExecutor error during aggregation :: caused by :: Remote error from mongot :: caused by :: Error connecting to localhost:28000 (127.0.0.1:28000) :: caused by :: Connection refused.\n",
"text": "Hi!\nCurrently im experiencing this error while trying to run an aggregation on my c# application:Can someone help me fix this error? I couldn’t find anything on the internet, or i don’t know how to look for a solution",
"username": "Henrique_Shoji"
},
{
"code": "",
"text": "Do you already have a search index created on your cluster?Similar issue",
"username": "John_Wiegert"
},
{
"code": "",
"text": "I have, i even created another one named “default” and tryed to remove the bson document that calls the index, but still getting the same error",
"username": "Henrique_Shoji"
},
{
"code": "",
"text": "Do you have documents in the collection with the index defined? Can you see any data/it existing in the Atlas UI under the Search tab? I would also try running a query directly from there",
"username": "John_Wiegert"
}
]
| C# aggregation failed to connect | 2022-12-09T13:01:44.580Z | C# aggregation failed to connect | 1,546 |
null | [
"queries"
]
| [
{
"code": "{\n \"_id\" : ObjectId(\"528f22140fe5e6467e58ae73\"),\n \"user_id\" : \"user1\", \n \"sex\" : \"Male\",\n \"age\" : 17,\n \"date_of_join\" : \"16/10/2010\",\n \"education\" : \"M.C.A.\",\n \"profession\" : \"CONSULTANT\",\n \"interest\" : \"MUSIC\"\n}\n{\n \"_id\" : ObjectId(\"528f222e0fe5e6467e58ae74\"),\n \"user_id\" : \"user2\",\n \"sex\" : null,\n \"age\" : 24,\n \"date_of_join\" : \"17/10/2009\",\n \"education\" : \"M.B.A.\",\n \"profession\" : \"MARKETING\",\n \"interest\" : null\n}\n{\n \"_id\" : ObjectId(\"528f22390fe5e6467e58ae75\"),\n \"user_id\" : \"user3\",\n \"sex\" : \"Female\",\n \"age\" : 19,\n \"date_of_join\" : \"16/10/2010\",\n \"education\" : \"M.C.A.\",\n \"profession\" : null,\n \"interest\" : \"ART\"\n}\n{\n \"_id\" : ObjectId(\"528f22430fe5e6467e58ae76\"),\n \"user_id\" : \"user4\",\n \"sex\" : \"Female\",\n \"age\" : 22,\n \"date_of_join\" : \"17/8/2009\",\n \"education\" : null,\n \"profession\" : \"DOCTOR\"\n}\n",
"text": "Hello Everyone,I would like to identify the null values in the collection without using the specific key. For example in the below mentioned, without using the keys profession/education/etc., i want to get the list of the keys which are having null values across the collection. Is it possible to get the details as I need?",
"username": "Amarendra_Krishna"
},
{
"code": "",
"text": "Have you solve your other issue:\nIf you did share the solution or mark one of the reply as the solution.This following post also needs closure:As mentioned in your other posts:Please read Formatting code and log snippets in posts and update your sample documents and pipeline so it is easier to understand and to cut-n-paste for experimentation.",
"username": "steevej"
},
{
"code": "",
"text": "Can any one provide the inputs for this query?",
"username": "Amarendra_Krishna"
},
{
"code": "",
"text": "I will give you a hint. Look at $objectToArray then $match on v : null.But that is all I will give you until you do your part to keep this forum useful. And your part is to supply closure for your other threads as requested in my previous post.",
"username": "steevej"
}
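A mongosh sketch of the hinted approach (the collection name users is assumed); it lists, per document, the top-level keys whose value is null:

```javascript
db.users.aggregate([
  { $project: {
      nullKeys: {
        $map: {
          input: {
            $filter: {
              input: { $objectToArray: "$$ROOT" },
              as: "kv",
              cond: { $eq: ["$$kv.v", null] }
            }
          },
          as: "kv",
          in: "$$kv.k"
        }
      }
  } },
  // keep only documents that actually contain at least one null field
  { $match: { "nullKeys.0": { $exists: true } } }
])
```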
]
| Identifying the null values in a Collection | 2022-12-05T10:05:24.913Z | Identifying the null values in a Collection | 2,158 |
null | [
"atlas-functions",
"atlas-triggers"
]
| [
{
"code": "{ fieldA: { \"$eq\": null }, fieldB: { \"$lt\": someDate } }{ fieldA: 1, fieldB: 1}{ fieldA: 1 }",
"text": "Hi everyone,In order to clean once per day one of my collection with a two-field query, I wanted to create a Scheduled Trigger on Atlas. I just need to run a “deleteMany” operation on 2 fields. This collection is pretty big, as there are more than 175 million documents, so I created a specific index for the query I need to run.For a mysterious reason, if I run my operation ({ fieldA: { \"$eq\": null }, fieldB: { \"$lt\": someDate } }), the selected index is not the one I created. Rather than selecting the compound index I created ({ fieldA: 1, fieldB: 1}), MongoDB selects the prefix ({ fieldA: 1 }). every time. My query runs a first IXSCAN with the “fieldA” index, and then runs a COLLSCAN inside that subset of documents. However, a subset of more than 80 millions documents is still huge, so the query runs for an eternity.The “hint” method resolves this issue, but I cannot make it work inside a Trigger Function. In the MongoDB API documentation of the “deleteMany” operation, hint does not appear, which is really blocking me right now.Am I missing something ? Is there no way to run a “deleteMany” operation with “hint” option inside a Trigger Function ?Thanks in advance ! PS : the Atlas Cluster I am targeting runs MongoDB v4.4, and no, I cannot make a TTL Index to cover my use case.",
"username": "Yannis_Pages"
},
{
"code": "delete",
"text": "Hi @Yannis_Pages,Unfortunately, it’s indeed the case that hints are not supported in App Services Functions, and not only for delete operations: feel free to add the suggestion to our feedback portal.As a general advice, though, ensure that the function connected to your trigger runs as System: running it as Application means that, for each document, permissions need to be checked, and that slows down things considerably.",
"username": "Paolo_Manna"
},
{
"code": "{ fieldA: 1, fieldB: 1}{ fieldA: 1 }{ fieldA: 1 }{ fieldA: 1 }",
"text": "If you have indexes:\n{ fieldA: 1, fieldB: 1}\n{ fieldA: 1 }Then the { fieldA: 1 } is redundant as the compound index will cover any queries that this one would.I would suggest you hide the { fieldA: 1 } index and eventually delete it after testing everything is well.",
"username": "chris"
},
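A mongosh sketch of the hide-then-decide approach (MongoDB 4.4 and later; the collection name is a placeholder). As confirmed later in the thread, a hidden TTL index still expires documents, so hiding alone may be enough:

```javascript
// The planner stops considering the index, but it is still maintained (and TTL expiry still runs).
db.myCollection.hideIndex({ fieldA: 1 });

// If it turns out to be needed after all, it can be made visible again without a rebuild.
db.myCollection.unhideIndex({ fieldA: 1 });
```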
{
"code": "",
"text": "I will, thank you for the link. I think this feature should be available in case other people encounter this kind of issue.",
"username": "Yannis_Pages"
},
{
"code": "",
"text": "I did not know that feature, that solves my issue, thanks !The index on fieldA is a TTL Index but Hidden TTL Indexes still delete the expired documents, so that’s perfect.",
"username": "Yannis_Pages"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to specify hint for a deleteMany operation | 2022-12-08T11:04:57.828Z | How to specify hint for a deleteMany operation | 2,469 |
null | []
| [
{
"code": "",
"text": "I have completed below course in my Developer Learning path. But now it’s not showing as completed in my path. I have proof of completion downloaded. Could you please someone check and advise?\nSeems the learning path page is recently upgraded/changed. I suspect that is the reason it’s not captured.\nCompleted Course Id: M201 and M220JThanks,\nUdhay",
"username": "udhaya_kumar_Bagavathiappan"
},
{
"code": "",
"text": "Hi @udhaya_kumar_Bagavathiappan,Welcome to the MongoDB University forums Please forward the issue to [email protected]. Our University Support team will be happy to help you.Thanks,\nKushagra Kesav",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Completed course not showing as completed in learning path | 2022-12-08T16:55:48.458Z | Completed course not showing as completed in learning path | 1,529 |
[
"replication"
]
| [
{
"code": " \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"100.130.10.149:41001\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 200,\n \"optime\" : {\n \"ts\" : Timestamp(1669006964, 1),\n \"t\" : NumberLong(2)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1669006964, 1),\n \"t\" : NumberLong(2)\n },\n \"optimeDate\" : ISODate(\"2022-11-21T05:02:44Z\"),\n \"optimeDurableDate\" : ISODate(\"2022-11-21T05:02:44Z\"),\n \"lastHeartbeat\" : ISODate(\"2022-11-21T05:02:50.799Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2022-11-21T05:02:49.782Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncSourceHost\" : \"100.130.9.150:41001\",\n \"syncSourceId\" : 1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 1,\n \"configTerm\" : 2\n },\n {\n \"_id\" : 1,\n \"name\" : \"100.130.9.150:41001\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 5797,\n \"optime\" : {\n \"ts\" : Timestamp(1669006964, 1),\n \"t\" : NumberLong(2)\n },\n \"optimeDate\" : ISODate(\"2022-11-21T05:02:44Z\"),\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1669005454, 1),\n \"electionDate\" : ISODate(\"2022-11-21T04:37:34Z\"),\n \"configVersion\" : 1,\n \"configTerm\" : 2,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 2,\n \"name\" : \"100.130.10.150:41001\",\n \"health\" : 1,\n \"state\" : 7,\n \"stateStr\" : \"ARBITER\",\n \"uptime\" : 5399,\n \"lastHeartbeat\" : ISODate(\"2022-11-21T05:02:50.151Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2022-11-21T05:02:50.159Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 1,\n \"configTerm\" : 2\n }\n ],\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1669006964, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1669006964, 1)\n",
"text": "from doc:\nw: “majority” Behavior\nStarting in MongoDB 4.4, replica set members in the STARTUP2 state do not participate in write majorities.but i test,i find that STARTUP2 state still participate in write majorities.\ni’m psa replication.when i add new node to it. the new node state is startup2.\nbut replication writableVotingMembersCount from 2 to 3.\nso i insert data with writeConcern that is hang or timeout.the follwoing is my test:\nshard1:PRIMARY> rs.status();\n{\n“set” : “shard1”,\n“date” : ISODate(“2022-11-21T05:02:51.701Z”),\n“myState” : 1,\n“term” : NumberLong(2),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“heartbeatIntervalMillis” : NumberLong(2000),\n“majorityVoteCount” : 2,\n“writeMajorityCount” : 2,\n“votingMembersCount” : 3,\n“writableVotingMembersCount” : 2,}shard1:PRIMARY> rs.printSlaveReplicationInfo()\nsource: 100.130.10.149:41001\nsyncedTo: Mon Nov 21 2022 13:03:54 GMT+0800 (CST)\n0 secs (0 hrs) behind the primaryshard1:PRIMARY> db.POCCOLL.insert({_id:1,name:“testWriteConcern”},{writeConcern:{w:“majority”,wtimeout:5000}})\nWriteResult({ “nInserted” : 1 })shard1:PRIMARY> rs.add(“100.130.9.149:41001”)\n{\n“ok” : 1,\n“$clusterTime” : {\n“clusterTime” : Timestamp(1669007243, 1),\n“signature” : {\n“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),\n“keyId” : NumberLong(0)\n}\n},\n“operationTime” : Timestamp(1669007243, 1)\n}\nshard1:PRIMARY> rs.printSlaveReplicationInfo()\nsource: 100.130.10.149:41001\nsyncedTo: Mon Nov 21 2022 13:07:23 GMT+0800 (CST)\n0 secs (0 hrs) behind the primary\nsource: 100.130.9.149:41001\nsyncedTo: Thu Jan 01 1970 08:00:00 GMT+0800 (CST)\n1669007243 secs (463613.12 hrs) behind the primaryshard1:PRIMARY> rs.status();\n{\n“set” : “shard1”,\n“date” : ISODate(“2022-11-21T05:07:36.186Z”),\n“myState” : 1,\n“term” : NumberLong(2),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“heartbeatIntervalMillis” : NumberLong(2000),\n“majorityVoteCount” : 3,\n“writeMajorityCount” : 3,\n“votingMembersCount” : 4,\n“writableVotingMembersCount” : 3,\n“members” : [\n{\n“_id” : 0,\n“name” : “100.130.10.149:41001”,\n“health” : 1,\n“state” : 2,\n“stateStr” : “SECONDARY”,\n“uptime” : 485,\n“optime” : {\n“ts” : Timestamp(1669007254, 1),\n“t” : NumberLong(2)\n},\n“optimeDurable” : {\n“ts” : Timestamp(1669007254, 1),\n“t” : NumberLong(2)\n},\n“optimeDate” : ISODate(“2022-11-21T05:07:34Z”),\n“optimeDurableDate” : ISODate(“2022-11-21T05:07:34Z”),\n“lastHeartbeat” : ISODate(“2022-11-21T05:07:35.802Z”),\n“lastHeartbeatRecv” : ISODate(“2022-11-21T05:07:35.807Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “”,\n“syncSourceHost” : “100.130.9.150:41001”,\n“syncSourceId” : 1,\n“infoMessage” : “”,\n“configVersion” : 2,\n“configTerm” : 2\n},\n{\n“_id” : 1,\n“name” : “100.130.9.150:41001”,\n“health” : 1,\n“state” : 1,\n“stateStr” : “PRIMARY”,\n“uptime” : 6082,\n“optime” : {\n“ts” : Timestamp(1669007254, 1),\n“t” : NumberLong(2)\n},\n“optimeDate” : ISODate(“2022-11-21T05:07:34Z”),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“infoMessage” : “”,\n“electionTime” : Timestamp(1669005454, 1),\n“electionDate” : ISODate(“2022-11-21T04:37:34Z”),\n“configVersion” : 2,\n“configTerm” : 2,\n“self” : true,\n“lastHeartbeatMessage” : “”\n},\n{\n“_id” : 2,\n“name” : “100.130.10.150:41001”,\n“health” : 1,\n“state” : 7,\n“stateStr” : “ARBITER”,\n“uptime” : 5684,\n“lastHeartbeat” : ISODate(“2022-11-21T05:07:35.802Z”),\n“lastHeartbeatRecv” : ISODate(“2022-11-21T05:07:35.805Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “”,\n“syncSourceHost” : “”,\n“syncSourceId” : 
-1,\n“infoMessage” : “”,\n“configVersion” : 2,\n“configTerm” : 2\n},\n{\n“_id” : 3,\n“name” : “100.130.9.149:41001”,\n“health” : 1,\n“state” : 5,\n“stateStr” : “STARTUP2”,\n“uptime” : 12,\n“optime” : {\n“ts” : Timestamp(0, 0),\n“t” : NumberLong(-1)\n},\n“optimeDurable” : {\n“ts” : Timestamp(0, 0),\n“t” : NumberLong(-1)\n},\n“optimeDate” : ISODate(“1970-01-01T00:00:00Z”),\n“optimeDurableDate” : ISODate(“1970-01-01T00:00:00Z”),\n“lastHeartbeat” : ISODate(“2022-11-21T05:07:35.813Z”),\n“lastHeartbeatRecv” : ISODate(“2022-11-21T05:07:35.327Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “”,\n“syncSourceHost” : “100.130.9.150:41001”,\n“syncSourceId” : 1,\n“infoMessage” : “”,\n“configVersion” : 2,\n“configTerm” : 2\n}\n],\n}so now i insert data again ,startup2 state participate in write majorities.\nso it is bug for it or other reason?shard1:PRIMARY> db.POCCOLL.insert({_id:3,name:“testWriteConcern”},{writeConcern:{w:“majority”,wtimeout:5000}})\nWriteResult({\n“nInserted” : 1,\n“writeConcernError” : {\n“code” : 64,\n“codeName” : “WriteConcernFailed”,\n“errmsg” : “waiting for replication timed out”,\n“errInfo” : {\n“wtimeout” : true,\n“writeConcern” : {\n“w” : “majority”,\n“wtimeout” : 5000,\n“provenance” : “clientSupplied”\n}\n}\n}\n})https://jira.mongodb.org/browse/SERVER-71509",
"username": "jing_xu"
},
{
"code": "",
"text": "Hi @jing_xu,Arbiters are evil. I have already explained this at length in multiple posts in this forum that you can dig out.From what I understand, you now have 4 nodes in your RS including one Arbiter which cannot acknowledge write operations but DO count as a voting member of the RS.To be able to write with w=majority, by definition, you need to be able to write to the majority of the voting members in this RS.With these 4 nodes, your majority is now at 3. (4 voting nodes / 2 +1 = 3 nodes). With the Arbiter you already lost your only chance to have a node down. And because your 4th node is still in STARTUP2 (initial sync I guess) you cannot write to this RS with majority > 2 as 2 of your nodes cannot acknowledge write operations at the moment.Replace the Arbiter with a normal and it will work just fine.Also don’t stay at 4 nodes. It’s either 3 or 5 but 4 isn’t optimal as your majority is at 3 with 4 nodes meaning that you can only afford to lose a single node (just like when you only have 3 nodes). With 5 nodes you can lose up to 2 machines as your majority is still at 3 but you have 5 nodes now.To sum up this once for all: You CANNOT use w=majority if you are running an arbiter. Else your RS is not Highly Available (HA) which is alarming because Replica Sets exist for this very reason: make your cluster HA.PSA => Shouldn’t try to write with more than W=1 (while majority stands at 2).\nPSSSA => W=2 maximum (while majority stands at 3).And this is already a trade off because you are trading HA vs 1 arbiter (=less cost). Atlas doesn’t propose arbiters in the configurations to keep things simple and HA.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Maximehi Maxime:\ni want to ask about:\nStarting in MongoDB 4.4, replica set members in the STARTUP2 state do not participate in write majorities\nfrom https://www.mongodb.com/docs/v4.4/reference/write-concern/#causally-consistent-sessions-and-write-concerns.i think it startup2 state do not participate in write majorities for replication. but i add startup2 state node to PSA. so it is still participate in write majorities.I want to say that this official document is not accurate about Starting in MongoDB 4.4, replica set members in the STARTUP2 state do not participate in write majorities.",
"username": "jing_xu"
},
{
"code": "",
"text": "Hi @jing_xu,And if you read just below your section in the next one:They explain what I was explaining above.STARTUP2 node do NOT participate in write majorities => They do not acknowledge write operations. You have to rely on other data-bearing nodes to do so as STARTUP2 nodes are still catching up with the cluster.BUT they do count as take part in the cluster and its configuration - including the calculation of the majority the write concern.The lesson here is that you shouldn’t use w=“majority” with a P-S-A cluster because if P or S fail, you can’t write anymore. You need a P-S-S for this so you can at least lose one node and keep everything running. Else you are not Highly Available and almost all the nodes are SPOF. With P-S-A it’s w=1 maximum to be somewhat HA.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "hi Maxime:\nThis is no different from the 4.2 and 4.4, it’s all participate write majorities.so I understand that 4.4 startup2 not participate in write majorities. It will not participate in the statistics until the status becomes secondary.\nwriteConcern997×640 42.2 KB\n",
"username": "jing_xu"
},
{
"code": "",
"text": "My understanding (meaning I could be wrong) is that in 4.2 and down, RS members in STARTUP2 were able to participate in write majorities (=they were able to acknowledge a write operation with w=majority).This doesn’t mean that STARTUP2 members don’t take part in the votes as they are part of the configuration.That being said, apparently there is an exception for “newly added” in the RS. Which makes sense in your case but I don’t know when this “newly added” status stops…",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "hi Maxime:\ni test 4.4 doc it is wrong. 5.0 startup2 node do NOT participate in write majorities 4.4 still\ndo participate in write majorities.so i think it is wrong.\nStarting in MongoDB 4.4, replica set members in the STARTUP2 state do not participate in write majorities\nfrom https://www.mongodb.com/docs/v4.4/reference/write-concern/#causally-consistent-sessions-and-write-concerns \nshard1 [direct: primary] test> db.version()\n4.4.18shard1 [direct: primary] test> rs.status()\n{\nset: ‘shard1’,\ndate: ISODate(“2022-12-09T08:26:29.625Z”),\nmyState: 1,\nterm: Long(“1”),\nsyncSourceHost: ‘’,\nsyncSourceId: -1,\nheartbeatIntervalMillis: Long(“2000”),\nmajorityVoteCount: 2,\nwriteMajorityCount: 2,\nvotingMembersCount: 2,\nwritableVotingMembersCount: 2,\noptimes: {\nlastCommittedOpTime: { ts: Timestamp({ t: 1670574382, i: 3656 }), t: Long(“1”) },\nlastCommittedWallTime: ISODate(“2022-12-09T08:26:22.030Z”),\nreadConcernMajorityOpTime: { ts: Timestamp({ t: 1670574382, i: 3656 }), t: Long(“1”) },\nreadConcernMajorityWallTime: ISODate(“2022-12-09T08:26:22.030Z”),\nappliedOpTime: { ts: Timestamp({ t: 1670574389, i: 29720 }), t: Long(“1”) },\ndurableOpTime: { ts: Timestamp({ t: 1670574389, i: 29720 }), t: Long(“1”) },\nlastAppliedWallTime: ISODate(“2022-12-09T08:26:29.200Z”),\nlastDurableWallTime: ISODate(“2022-12-09T08:26:29.200Z”)\n},\nlastStableRecoveryTimestamp: Timestamp({ t: 1670574339, i: 56832 }),\nelectionCandidateMetrics: {\nlastElectionReason: ‘electionTimeout’,\nlastElectionDate: ISODate(“2022-12-09T08:22:36.812Z”),\nelectionTerm: Long(“1”),\nlastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long(“-1”) },\nlastSeenOpTimeAtElection: { ts: Timestamp({ t: 1670574156, i: 1 }), t: Long(“-1”) },\nnumVotesNeeded: 1,\npriorityAtElection: 1,\nelectionTimeoutMillis: Long(“10000”),\nnewTermStartDate: ISODate(“2022-12-09T08:22:36.839Z”),\nwMajorityWriteAvailabilityDate: ISODate(“2022-12-09T08:22:36.862Z”)\n},\nmembers: [\n{\n_id: 0,\nname: ‘10.130.10.149:41001’,\nhealth: 1,\nstate: 1,\nstateStr: ‘PRIMARY’,\nuptime: 261,\noptime: { ts: Timestamp({ t: 1670574389, i: 29720 }), t: Long(“1”) },\noptimeDate: ISODate(“2022-12-09T08:26:29.000Z”),\nlastAppliedWallTime: ISODate(“2022-12-09T08:26:29.200Z”),\nlastDurableWallTime: ISODate(“2022-12-09T08:26:29.200Z”),\nsyncSourceHost: ‘’,\nsyncSourceId: -1,\ninfoMessage: ‘’,\nelectionTime: Timestamp({ t: 1670574156, i: 2 }),\nelectionDate: ISODate(“2022-12-09T08:22:36.000Z”),\nconfigVersion: 2,\nconfigTerm: 1,\nself: true,\nlastHeartbeatMessage: ‘’\n},\n{\n_id: 1,\nname: ‘10.130.9.149:41001’,\nhealth: 1,\nstate: 5,\nstateStr: ‘STARTUP2’,\nuptime: 7,\noptime: { ts: Timestamp({ t: 0, i: 0 }), t: Long(“-1”) },\noptimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long(“-1”) },\noptimeDate: ISODate(“1970-01-01T00:00:00.000Z”),\noptimeDurableDate: ISODate(“1970-01-01T00:00:00.000Z”),\nlastAppliedWallTime: ISODate(“1970-01-01T00:00:00.000Z”),\nlastDurableWallTime: ISODate(“1970-01-01T00:00:00.000Z”),\nlastHeartbeat: ISODate(“2022-12-09T08:26:28.090Z”),\nlastHeartbeatRecv: ISODate(“2022-12-09T08:26:27.711Z”),\npingMs: Long(“5”),\nlastHeartbeatMessage: ‘’,\nsyncSourceHost: ‘10.130.10.149:41001’,\nsyncSourceId: 0,\ninfoMessage: ‘’,\nconfigVersion: 2,\nconfigTerm: 1\n}\n],\nok: 1,\n‘$clusterTime’: {\nclusterTime: Timestamp({ t: 1670574389, i: 29720 }),\nsignature: {\nhash: Binary(Buffer.from(“0000000000000000000000000000000000000000”, “hex”), 0),\nkeyId: Long(“0”)\n}\n},\noperationTime: Timestamp({ t: 1670574389, i: 29720 })\n}shard1 [direct: 
primary] test> db.version()\n5.0.3\nshard1 [direct: primary] test> rs.status()\n{\nset: ‘shard1’,\ndate: ISODate(“2022-12-09T08:14:07.209Z”),\nmyState: 1,\nterm: Long(“1”),\nsyncSourceHost: ‘’,\nsyncSourceId: -1,\nheartbeatIntervalMillis: Long(“2000”),\nmajorityVoteCount: 1,\nwriteMajorityCount: 1,\nvotingMembersCount: 1,\nwritableVotingMembersCount: 1,\noptimes: {\nlastCommittedOpTime: { ts: Timestamp({ t: 1670573647, i: 19992 }), t: Long(“1”) },\nlastCommittedWallTime: ISODate(“2022-12-09T08:14:07.168Z”),\nreadConcernMajorityOpTime: { ts: Timestamp({ t: 1670573647, i: 19992 }), t: Long(“1”) },\nappliedOpTime: { ts: Timestamp({ t: 1670573647, i: 23064 }), t: Long(“1”) },\ndurableOpTime: { ts: Timestamp({ t: 1670573647, i: 19992 }), t: Long(“1”) },\nlastAppliedWallTime: ISODate(“2022-12-09T08:14:07.202Z”),\nlastDurableWallTime: ISODate(“2022-12-09T08:14:07.168Z”)\n},\nlastStableRecoveryTimestamp: Timestamp({ t: 1670573623, i: 45092 }),\nelectionCandidateMetrics: {\nlastElectionReason: ‘electionTimeout’,\nlastElectionDate: ISODate(“2022-12-09T08:08:38.675Z”),\nelectionTerm: Long(“1”),\nlastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long(“-1”) },\nlastSeenOpTimeAtElection: { ts: Timestamp({ t: 1670573318, i: 1 }), t: Long(“-1”) },\nnumVotesNeeded: 1,\npriorityAtElection: 1,\nelectionTimeoutMillis: Long(“10000”),\nnewTermStartDate: ISODate(“2022-12-09T08:08:38.691Z”),\nwMajorityWriteAvailabilityDate: ISODate(“2022-12-09T08:08:38.701Z”)\n},\nmembers: [\n{\n_id: 0,\nname: ‘10.130.10.149:51001’,\nhealth: 1,\nstate: 1,\nstateStr: ‘PRIMARY’,\nuptime: 336,\noptime: { ts: Timestamp({ t: 1670573647, i: 23064 }), t: Long(“1”) },\noptimeDate: ISODate(“2022-12-09T08:14:07.000Z”),\nsyncSourceHost: ‘’,\nsyncSourceId: -1,\ninfoMessage: ‘’,\nelectionTime: Timestamp({ t: 1670573318, i: 2 }),\nelectionDate: ISODate(“2022-12-09T08:08:38.000Z”),\nconfigVersion: 2,\nconfigTerm: 1,\nself: true,\nlastHeartbeatMessage: ‘’\n},\n{\n_id: 1,\nname: ‘10.130.9.149:51001’,\nhealth: 1,\nstate: 5,\nstateStr: ‘STARTUP2’,\nuptime: 6,\noptime: { ts: Timestamp({ t: 0, i: 0 }), t: Long(“-1”) },\noptimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long(“-1”) },\noptimeDate: ISODate(“1970-01-01T00:00:00.000Z”),\noptimeDurableDate: ISODate(“1970-01-01T00:00:00.000Z”),\nlastHeartbeat: ISODate(“2022-12-09T08:14:06.367Z”),\nlastHeartbeatRecv: ISODate(“2022-12-09T08:14:06.392Z”),\npingMs: Long(“3”),\nlastHeartbeatMessage: ‘’,\nsyncSourceHost: ‘10.130.10.149:51001’,\nsyncSourceId: 0,\ninfoMessage: ‘’,\nconfigVersion: 2,\nconfigTerm: 1\n}\n],\nok: 1,\n‘$clusterTime’: {\nclusterTime: Timestamp({ t: 1670573647, i: 23064 }),\nsignature: {\nhash: Binary(Buffer.from(“0000000000000000000000000000000000000000”, “hex”), 0),\nkeyId: Long(“0”)\n}\n},\noperationTime: Timestamp({ t: 1670573647, i: 23064 })\n}",
"username": "jing_xu"
}
]
| Mongodb 4.4 STARTUP2 state still participate in write majorities.--this is bug? | 2022-12-02T02:46:00.870Z | Mongodb 4.4 STARTUP2 state still participate in write majorities.–this is bug? | 2,181 |
|
null | [
"serverless"
]
| [
{
"code": "",
"text": "So the amount of data grows, and the cost per query grows as well, since the index increases…\nThe problem I’m facing is that even when I deleted 90% of the data, the total read units I’m been billed stayed the same…The index size didn’t decrease, so that might be the reason, however, I don see a way how to decrease it… (the main problem is with _id index, which I can’t even drop and recreate)I’m running a serverless instance, and a compact comment doesn’t seem to be available there:\n‘CMD_NOT_ALLOWED: compact’,What to do?",
"username": "Juraj_Bezdek"
},
{
"code": "",
"text": "Hi @Juraj_Bezdek,The problem I’m facing is that even when I deleted 90% of the data, the total read units I’m been billed stayed the same…Please contact the Atlas support team via the in-app chat regarding this issue as they would have insight into the particular cluster / project. Please provide them details including when the delete was performed (including the date & time + timezone).Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Deleting data doesn't reduce the cost in serverless instance | 2022-12-09T00:57:39.465Z | Deleting data doesn’t reduce the cost in serverless instance | 1,365 |
null | [
"aggregation"
]
| [
{
"code": "[{\n $match: {\n publication_id: 'egm'\n }\n}, {\n $lookup: {\n from: 'Pages',\n localField: '_id',\n foreignField: 'magazine_id',\n as: 'pages'\n }\n}, {\n $project: {\n issue: 1,\n page_count: 1,\n publication_id: 1,\n release_date: 1,\n language: 1,\n pages: {\n $filter: {\n input: '$pages',\n as: 'page',\n cond: {\n $eq: [\n '$$page.number',\n 0\n ]\n }\n }\n }\n }\n}]\n[{\n $match: {\n publication_id: 'egm'\n }\n}, {\n $group: {\n _id: {\n magazine_id: '$magazine_id',\n reviewed: '$reviewed'\n },\n count: {\n $count: {}\n },\n issue: {\n $first: '$issue_date'\n }\n }\n}, {\n $group: {\n _id: '$_id.magazine_id',\n issue: {\n $first: '$issue'\n },\n status: {\n $push: {\n state: '$_id.reviewed',\n count: '$count'\n }\n }\n }\n}, {\n $sort: {\n issue: 1\n }\n}]\n_id",
"text": "Hi folks, I currently have two aggregations that I execute in my collections. I was wondering if it would be possible to turn them into a single one. Currently I execute each, and then on my app I merge the results.There are two collections: Magazines and Pages, a Page has a “fk” to the Magazine _id.The goal is to run a query that count how many pages per magazine have been reviewed (a boolean flag) but at the same time I need to return the magazine along with the very first page (cover).So I first run the aggregation to get all magazines and its first page:Then I run the one to group the status of the reviewed pages:I then merge by the _id property on my code.Can this be simplified by merging into a single pipeline or this is the way to do it?Thank you",
"username": "Vinicius_Carvalho"
},
{
"code": "",
"text": "check this out: $facet (aggregation) — MongoDB Manualit is basically multiple pipeline results assigned to new fields. you can then continue processing them down the main pipeline.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "If I understand correctly what you mean bytwo aggregations that I execute in my collectionstwo collections: Magazines and Pagesget all magazines and its first pagestatus of the reviewed pagesyou run one pipeline on the collection names Magazines and the other on the collection named Pages.If it is the case then what you want is $unionWith rather than $facet.",
"username": "steevej"
}
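For completeness, here is one way the two original pipelines might fold into a single aggregation — a sketch only, assuming MongoDB 5.0+ (for $lookup combining localField/foreignField with a sub-pipeline) and the field names from the question:

```javascript
db.Magazines.aggregate([
  { $match: { publication_id: "egm" } },
  // cover: the page with number 0
  { $lookup: {
      from: "Pages",
      localField: "_id",
      foreignField: "magazine_id",
      pipeline: [ { $match: { number: 0 } } ],
      as: "cover"
  } },
  // reviewed / not-reviewed counts per magazine
  { $lookup: {
      from: "Pages",
      localField: "_id",
      foreignField: "magazine_id",
      pipeline: [ { $group: { _id: "$reviewed", count: { $count: {} } } } ],
      as: "status"
  } },
  { $sort: { issue: 1 } }
])
```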
]
| Working with two pipelines | 2022-12-08T22:20:49.404Z | Working with two pipelines | 1,269 |
null | [
"swift"
]
| [
{
"code": "func configuration<T: BSON>(partitionValue: T)let configuration = user.configuration(partitionValue: Constants.REALM_PARTITION)clientResetMode.manual.recoverUnsyncedChangeslet configuration = user.configuration(partitionValue: Constants.REALM_PARTITION, clientResetMode: ClientResetMode.manual())",
"text": "After upgrading Realm from 10.28.1 to 10.33.0, my app crashes whenever func configuration<T: BSON>(partitionValue: T) is called.\nThe error message only says, “RealmSwift/Sync.swift:314: Fatal error: what’s best in this case?”.What is “this case”??? Does anybody know what is happening?The strange thing is that it works fine on a simulator. It only crashes on a device.According to the Atlas log, Authentication → login is OK. But it crashes when I try to get a config for the user.let configuration = user.configuration(partitionValue: Constants.REALM_PARTITION)I know that the default clientResetMode is switched from .manual to .recoverUnsyncedChanges in 10.32.0.I tried to specify manual. But it didn’t help. I get the same fatal error.let configuration = user.configuration(partitionValue: Constants.REALM_PARTITION, clientResetMode: ClientResetMode.manual())",
"username": "lonesometraveler"
},
{
"code": "let configuration = user.configuration(partitionValue: Constants.REALM_PARTITION)Constants.REALM_PARTITION.manualuser",
"text": "Can you clarify the question a bit? The code provided doesn’t really tell us about what’s being passed in your functions. For example this could be an issuelet configuration = user.configuration(partitionValue: Constants.REALM_PARTITION)as we don’t know what Constants.REALM_PARTITION resolves to. Can you provide a bit more code and details?Also, when you tried .manual did you define a recovery handler?Are you verifying that user is a valid, authenticated user?",
"username": "Jay"
},
{
"code": "func openRealm(user: User) throws {\n print(\"user.id: \", user.id)\n let configuration = user.configuration(partitionValue: Constants.REALM_PARTITION)\n print(\"configuration: \", configuration)\n realm = try Realm(configuration: configuration)\n}\nConstants.REALM_PARTITION\"songbook=default\".manualuseruserapp.login(credentials: Credentials.anonymous)if let user = app.currentUserUserRealm.Configurationuser.id: 6391d6eb2012602a8e1ae433\nRealmSwift/Sync.swift:314: Fatal error: what's best in this case?\nuser.id: 63912efd8c154e44cbdb3149\nconfiguration: Realm.Configuration {\n\tfileURL = file:///Users/kentarookuda/Library/Developer/CoreSimulator/Devices/9B1CDF2B-B213-4D9A-8365-02209D270420/data/Containers/Data/Application/B6B111F0/Documents/mongodb-realm/realm-app-id/63912efd8c154e44cbdb3149/%2522songbook%253Ddefault%2522.realm;\n\tinMemoryIdentifier = (null);\n\tencryptionKey = (null);\n\treadOnly = 0;\n\tschemaVersion = 0;\n\tmigrationBlock = (null);\n\tdeleteRealmIfMigrationNeeded = 0;\n\tshouldCompactOnLaunch = (null);\n\tdynamic = 0;\n\tcustomSchema = (null);\n}\n",
"text": "Hi Jay. Thanks for your help.Constants.REALM_PARTITION is a partition value in String. I don’t know if this is helpful. But it is defined as \"songbook=default\".No, I didn’t define a recovery handler when I tried .manual.user is a valid user. I can print it and confirm the user id. user comes from app.login(credentials: Credentials.anonymous), or if let user = app.currentUser.This is what I get when I print User and Realm.Configuration.on Deviceon Simulator",
"username": "lonesometraveler"
},
{
"code": "Crashed: com.apple.main-thread\n0 libswiftCore.dylib 0x37d7c _assertionFailure(_:_:file:line:flags:) + 312\n1 RealmSwift 0x9e724 SyncConfiguration.init(config:) + 314 (Sync.swift:314)\n2 RealmSwift 0x79f04 static Realm.Configuration.fromRLMRealmConfiguration(_:) + 341 (RealmConfiguration.swift:341)\n3 RealmSwift 0x9f8c8 RLMUser.configuration<A>(partitionValue:) + 432 (<compiler-generated>:432)\n4 Duet SongBook 0x4d190 SyncRealmManager.openRealm(user:) + 87 (SyncRealmManager.swift:87)\nRealmSwift/Sync.swift:314: Fatal error: what's best in this case?",
"text": "Here is the stack trace.I rebuilt the app with 10.28.1. And I now get the same fatal error on a device. (It works on a simulator.) So, I guess it has nothing to do with ClientResetMode.If I reinstall the already published app (that uses 10.28.1), it works perfectly fine on the device.While upgrading to 10.33.0, I had to change some build settings for the latest XCode. Maybe that messed something up. But in any case, RealmSwift/Sync.swift:314: Fatal error: what's best in this case? is not a very helpful message. It appears that line 314 of Sync.swift is a comment. How can I troubleshoot from here?",
"username": "lonesometraveler"
},
{
"code": "",
"text": "Everything works now. It wasn’t the code or realm binary.I was using Carthage as a dependency manager. During the upgrade process, I did a framework migration. It turned out that the migration was incomplete. The project settings had a link to old frameworks.",
"username": "lonesometraveler"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Swift app crashes on a device when func configuration is called | 2022-12-08T13:34:20.674Z | Swift app crashes on a device when func configuration is called | 2,282 |
null | [
"database-tools",
"backup"
]
| [
{
"code": "",
"text": "I want to backup data using a Mongodb query into Google Cloud Storage. The data to be backed up is obtained by filtering out a specific collection in a database based on its “createdDate” field which is in ISO format.\nCan anyone tell me the query for the same?\n(Note:- Prefer to use mongodump because I need the data in BSON for backup)",
"username": "John_Hopkins"
},
{
"code": "",
"text": "The option for mongodump to specify a query is –query.There are a few examples in the link I provided.If you are new to MongoDB and really do not know about making a query on a field I suggest you start with taking some courses from university.mongodb.com.",
"username": "steevej"
}
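A minimal sketch of such a date-filtered dump pushed to Google Cloud Storage, assuming placeholder names for the URI, database, collection, date range, and bucket; --query expects Extended JSON, so the ISO dates go inside $date:

```sh
# Minimal sketch — the URI, database/collection names, dates, and bucket are placeholders.
# --query must be Extended JSON, hence the {"$date": ...} wrapper around the ISO timestamps.
mongodump \
  --uri="mongodb+srv://user:pass@cluster.example.mongodb.net" \
  --db=mydb --collection=mycollection \
  --query='{ "createdDate": { "$gte": { "$date": "2022-11-01T00:00:00Z" }, "$lt": { "$date": "2022-12-01T00:00:00Z" } } }' \
  --gzip --archive=mycollection-nov.gz

# One way (among others) to push the resulting BSON archive to Google Cloud Storage:
gsutil cp mycollection-nov.gz gs://my-backup-bucket/
```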
]
| About a mongodb query | 2022-12-08T13:51:35.802Z | About a mongodb query | 1,238 |
null | [
"queries",
"crud"
]
| [
{
"code": "",
"text": "To keep it somewhat simple:The data I’m dealing with is nested like this:“Location” : {\n“Id” : UUID(“8c8860c6-ddf9-45bf-a2b0-d0457e991d0a”),\n“Name” : “Lot A”,\n“Code” : “A”There are approx 2500 different instances that I need to update as each of these is tied to another item. What I’m wanting to do is run an updateMany to change the UUID to a New UUID with something like this:db.getCollection(‘Permits’).updateMany({\n“Location.Id.UUID”: {\n$in: [\n“8c8860c6-ddf9-45bf-a2b0-d0457e991d0a”\n]\n}\n},\n{\n$set: {\n“Location.Id.UUID”: “NEWUUID”\n}\n},\n{\nmulti: true\n})It doesn’t return errors but also doesn’t update anything. I’m very new to Mongo having come from SSMS an am struggling o this one.",
"username": "Chad_Fender"
},
{
"code": "",
"text": "doesn’t update anythingMost likely because{ “Location.Id.UUID”: { $in: [ “8c8860c6-ddf9-45bf-a2b0-d0457e991d0a” ] } }does not match any document.Most likely because “8c8860c6-ddf9-45bf-a2b0-d0457e991d0a” is a string while“Id” : UUID(“8c8860c6-ddf9-45bf-a2b0-d0457e991d0a”),is a UUID. They must have the same type.",
"username": "steevej"
}
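A minimal mongosh sketch of the corrected update, assuming the field is Location.Id holding a BSON UUID (UUID(...) is the value's type, not a subfield, so the path is "Location.Id" rather than "Location.Id.UUID"); the new UUID below is a placeholder:

```js
// Minimal sketch — the new UUID is a placeholder; substitute the real replacement value.
// Match and set BSON UUID values (not plain strings) on the "Location.Id" path.
db.getCollection('Permits').updateMany(
  { "Location.Id": UUID("8c8860c6-ddf9-45bf-a2b0-d0457e991d0a") },
  { $set: { "Location.Id": UUID("11111111-2222-3333-4444-555555555555") } }
)
```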
]
| Running Update for nested items | 2022-12-08T22:01:53.754Z | Running Update for nested items | 1,141 |
null | [
"production",
"golang"
]
| [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to release version 1.10.5 of the MongoDB Go Driver.This release contains a bugfix for heartbeat buildup with streaming protocol when the Go driver process is paused in an FAAS environment (e.g. AWS Lambda). For more information please see the 1.10.5 release notes.You can obtain the driver source from GitHub under the v1.10.5 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver Team",
"username": "benjirewis"
},
{
"code": "",
"text": "Hello,\nIs it also impact v1.11 branch?\nThank in advance,",
"username": "Jerome_LAFORGE"
},
{
"code": "release/1.11",
"text": "Hello, @Jerome_LAFORGE. The GODRIVER-2577 bug does affect the release/1.11 branch. The upcoming 1.11.1 version of the driver will contain the bugfix.",
"username": "benjirewis"
},
{
"code": "",
"text": "@Jerome_LAFORGE version 1.11.1 of the driver has been released.",
"username": "benjirewis"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Go Driver 1.10.5 Released | 2022-12-06T19:55:23.032Z | MongoDB Go Driver 1.10.5 Released | 2,160 |
null | [
"production",
"golang"
]
| [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to release version 1.11.1 of the MongoDB Go Driver.This release contains a bug fix for heartbeat buildup with streaming protocol when the Go driver process is paused in an FAAS environment (e.g. AWS Lambda). This release also includes a bug fix for handling sequential “NoWritesPerformed” labeled operation errors, in that they should still return the “previous indefinite error”. For more information please see the 1.11.1 release notes.You can obtain the driver source from GitHub under the v1.11.1 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,The Go Driver TeamP.S. We want to hear about how Go developers use MongoDB and the MongoDB Go Driver! If you haven’t already, please take the 2022 MongoDB Go Developer Survey.",
"username": "Preston_Vasquez"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB Go Driver 1.11.1 Released | 2022-12-08T20:49:40.540Z | MongoDB Go Driver 1.11.1 Released | 1,672 |
null | [
"data-modeling"
]
| [
{
"code": "",
"text": "Hello experts,I am designing a soft delete functionality for an application that could evolve to hundreds of millions of documents for each collection.\nWhat can perform better in terms of performance, and overall efficiency as the mongodb is hosted on Atlas:\n-Using a trash database for the deleted documents along with audit details.\n-Using field like, is_deleted, deletion_date,… Etc, and consider those fields in indexing as the retrieve apis have to consider them always.Thanks for your help",
"username": "Mohammad_Shawahneh"
},
{
"code": "",
"text": "Hello @Mohammad_Shawahneh\nI recommend to go with a seperate collection this will:Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Thanks Michael, I agree with your considerations ",
"username": "Mohammad_Shawahneh"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Optimal design for soft delete | 2022-12-08T20:19:42.487Z | Optimal design for soft delete | 4,697 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "HelloI have a dataset of movies and i want to show which is the year or years with most number of filmsCould you help me with the code?Thanks in advance",
"username": "Jose_jimenez1"
},
{
"code": "",
"text": "This looks like a course exercise of some sort.If not share some of your documents from the collections.Take a look at $group with the $count accumulator. The other things needed will be $sort and $limit.",
"username": "steevej"
}
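A minimal sketch of the pipeline hinted at above, assuming each movie document has a year field (adjust the collection and field names to your schema):

```js
// Count films per year, then keep the year with the highest count.
// With $limit: 1, ties are not returned; raise the limit or compare counts if you need all tied years.
db.movies.aggregate([
  { $group: { _id: "$year", films: { $count: {} } } }, // { $sum: 1 } also works on older servers
  { $sort: { films: -1 } },
  { $limit: 1 }
])
```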
]
| Help with aggregate function | 2022-12-08T17:55:58.395Z | Help with aggregate function | 1,020 |
null | [
"kotlin"
]
| [
{
"code": "{\n \"title\": \"Diary\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"ownerId\",\n \"mood\",\n \"title\",\n \"description\",\n \"images\",\n \"date\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"ownerId\": {\n \"bsonType\": \"string\"\n },\n \"mood\": {\n \"bsonType\": \"string\"\n },\n \"title\": {\n \"bsonType\": \"string\"\n },\n \"description\": {\n \"bsonType\": \"string\"\n },\n \"images\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"string\"\n }\n },\n \"date\": {\n \"bsonType\": \"date\"\n }\n }\n}\nopen class Diary : RealmObject {\n @PrimaryKey\n var _id: ObjectId = ObjectId.create()\n var ownerId: String = \"\"\n var mood: String = Mood.Neutral.name\n var title: String = \"\"\n var description: String = \"\"\n var images: RealmList<String> = realmListOf()\n var date: RealmInstant = RealmInstant.from(System.currentTimeMillis(), 0)\n}\n",
"text": "I’ve been trying to use a RealmList as a type of ‘images’ field in the schema. Been struggling for a while now, because I’ve been getting this error, which does not make any sense, since I think I’ve setup everything correctly. Please read down below.ending session with error: non-breaking schema change: adding schema for Realm table “Diary”, schema changes from clients are restricted when developer mode is disabled (ProtocolErrorCode=225)My schema:My model class:I don’t see anything incompatible here, the problem is with that ‘images’ field. Somehow the model class does not seem to be correct, based on the schema I’ve defined. And yes I’ve tried using a development mode to generate the schema automatically from the client. But with that I’ve generated a schema where that ‘images’ field was “NOT” a required field…and I don’t know why?Plus that error log above does not provide any useful information, about what should I do to fix the issue. It’s not even telling me the name of the field. I think that should also be fixed.",
"username": "111757"
},
{
"code": "",
"text": "Hi. So the issue you are running into is that lists cannot be required. We are working on better alerting / UX around this sort of thing but the idea is that the list is implicitly always initialized so it doesn’t matter for the purposed of the JSON Schema. Given that you are just developing I would delete this schema from the App Service UI and then re-upload your schema using dev mode and it should work fine.",
"username": "Tyler_Kaye"
},
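Concretely, that means removing images from the required array while leaving it in properties; the schema from the question would then look like this:

```json
{
  "title": "Diary",
  "bsonType": "object",
  "required": [
    "_id",
    "ownerId",
    "mood",
    "title",
    "description",
    "date"
  ],
  "properties": {
    "_id": { "bsonType": "objectId" },
    "ownerId": { "bsonType": "string" },
    "mood": { "bsonType": "string" },
    "title": { "bsonType": "string" },
    "description": { "bsonType": "string" },
    "images": {
      "bsonType": "array",
      "items": { "bsonType": "string" }
    },
    "date": { "bsonType": "date" }
  }
}
```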
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Realm Schema Confusion | 2022-12-03T07:59:47.835Z | Realm Schema Confusion | 1,861 |
null | []
| [
{
"code": "",
"text": "Hi folks, I have a dev cluster, and I was wondering if it would be possible to setup a way to replicate one of its databases into another database, ideally this database would contain the dev cluster backup -1 day.The ask is to allow changes in this database to be completely possible to be reverted.I know ideally I should have my own cluster, but this is a personal/small project and right now I have a prod cluster, and a dev cluster.Thanks",
"username": "Vinicius_Carvalho"
},
{
"code": "",
"text": "Hi @Vinicius_Carvalho,I don’t understand what you are trying to do precisely.If you are trying to recreate the prod on the dev cluster, you could restore your prod backup on the dev cluster every morning and each morning you would have a fresh cluster for dev with yesterday’s prod data. I’m pretty sure this can be automated with the Atlas API.If you are trying to create a copy of a MongoDB database in the same cluster like from “my_db” you want “my_db_yesterday”. I would take a mongodump of this DB yesterday and I would mongorestore it with a new name today. It can be easily done with a script.I hope this helps. \nIf not please help me understand your need.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
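A minimal sketch of the second option (copying a database under a new name inside the same cluster); the URI and database names are placeholders:

```sh
# Minimal sketch — the URI and database names are placeholders.
# Dump "my_db" to an archive on stdout and restore it as "my_db_yesterday".
mongodump --uri="mongodb+srv://user:pass@cluster.example.mongodb.net/my_db" --archive \
  | mongorestore --uri="mongodb+srv://user:pass@cluster.example.mongodb.net" --archive \
      --nsFrom="my_db.*" --nsTo="my_db_yesterday.*"
```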
{
"code": "",
"text": "@MaBeuLux88 Thanks, the API is indeed what I need, I will lookup the docs on how to invoke a dump and restore on another database.Cheers",
"username": "Vinicius_Carvalho"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Replicating Database | 2022-12-08T00:23:56.813Z | Replicating Database | 1,780 |
null | [
"aggregation",
"queries",
"node-js",
"transactions"
]
| [
{
"code": "if (role === ROLEV1.MS_TECH_SOFT) {\n let logs = await this.profileModel.aggregate([\n {\n $match: {\n bindedSuperAdmin: name,\n },\n },\n {\n $lookup: {\n from: 'tpes',\n localField: 'nameUser',\n foreignField: 'merchantName',\n as: 'tpesBySite',\n },\n },\n {\n $lookup: {\n from: 'logs',\n localField: 'tpesBySite.terminalId',\n foreignField: 'terminalId',\n as: 'logsByTpes',\n pipeline: [\n {\n $sort: {\n transactionDate: -1,\n // transactionDate: { $in: [startDate, endDate] },\n },\n },\n ],\n },\n },\n\n { $unwind: '$tpesBySite' },\n\n { $unwind: '$logsByTpes' },\n {\n $project: {\n // bindedSuperAdmin: '$bindedSuperAdmin',\n // bindedBanque: '$bindedBanque',\n // bindedClient: '$bindedClient',\n uniqueID: '$logsByTpes.uniqueID',\n sn: '$logsByTpes.sn',\n terminalId: '$logsByTpes.terminalId',\n transactionAmount: '$logsByTpes.transactionAmount',\n currencyCode: '$logsByTpes.currencyCode',\n transactionDate: '$logsByTpes.transactionDate',\n transactionTime: '$logsByTpes.transactionTime',\n transactionType: '$logsByTpes.transactionType',\n cardPAN_PCI: '$logsByTpes.cardPAN_PCI',\n onlineRetrievalReferenceNumber:\n '$logsByTpes.onlineRetrievalReferenceNumber',\n outcome: '$logsByTpes.outcome',\n },\n },\n \n ]);\n console.log(logs.length, ' length from ms prfile service');\n\n return logs;\n }\n",
"text": "lookup aggregation takes; time : 1174ms with size : 62.95kb is that normal ??\nHere is my function belowDoes this query could be better ? any help please",
"username": "skander_lassoued"
},
{
"code": "",
"text": "Were the answers given to you in previous threads useful? It will be nice to have closure on those before we invest time in your new problem.",
"username": "steevej"
},
{
"code": "",
"text": "I answered all my recent topics thank you @steevej for reminding me ",
"username": "skander_lassoued"
},
{
"code": "",
"text": "I answered all my recent topicsThe following still has no solution to it.It is funny because so far it is more or less the same and only thing I can say for this thread as the other one.Many $lookup and many $unwind are worry signs indicating a potential model or schema flaw. It is hard to tell without sample documents and some explication on the context. If the use-case is not frequent, trying to optimize might be useless.",
"username": "steevej"
}
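To actually measure where the time goes, explaining the same pipeline with execution stats is the usual first step; a minimal mongosh sketch, where "profiles" and the literal super-admin name stand in for the collection and variable used by the service code:

```js
// Minimal sketch — "profiles" and "<superAdminName>" are stand-ins for the real collection and value.
db.getCollection("profiles")
  .explain("executionStats")
  .aggregate([
    { $match: { bindedSuperAdmin: "<superAdminName>" } },
    { $lookup: { from: "tpes", localField: "nameUser", foreignField: "merchantName", as: "tpesBySite" } }
    // ...rest of the pipeline from the question
  ])
// Check totalDocsExamined vs nReturned and whether the $match field and both
// $lookup foreign fields (merchantName, terminalId) are backed by indexes.
```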
]
| How can test the performance of my query | 2022-12-04T18:49:04.282Z | How can test the performance of my query | 1,824 |