image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | []
| [
{
"code": "exports = async function(changeEvent) {\n \n let fullDocument = JSON.parse(JSON.stringify(changeEvent.fullDocument));\n fullDocument.clusterTime = changeEvent.clusterTime;\n \n const send = await context.http.post({\n url: \"some url\",\n body: fullDocument,\n encodeBodyAsJSON: true\n })\n \n return \"OK\"\n \n};\n{\"_id\":\"636856174a7b34feece637b6\",\"someKey\":\"blah\",\"NewKey\":\"abc\",\"someTime\":\"2010-01-01T05:11:00.000Z\",\"clusterTime\":{\"$timestamp\":{\"t\":1667868115,\"i\":19}}}\nlet clusterTime = changeEvent.clusterTime.$timestamp\nclusterTime",
"text": "Hoping to get some help with an Atlas / Realm Function that is a database trigger for a changed record.Here is the function that is called by the changed record trigger:This sends:How do I get it to send normal JSON for clusterTime?Even if I try to access the “t” or “i” like:I get back clusterTime: {\"$undefined\", true}My questions are:Many thanks!",
"username": "Christopher_Barber"
},
{
"code": " const timestamp = changeEvent.clusterTime.toJSON()[\"$timestamp\"];\n \n console.log(JSON.stringify(timestamp));\n \n const changeTime = new Date(timestamp.t * 1000 + timestamp.i); // Resolution of 1ms - i is just a progressive number\ni",
"text": "Hi @Christopher_Barber ,The following should do:The i part is a progressive number, so the time itself has a 1s resolution. Assuming you never have more than 1000 events in the same second, the above should however provide the proper sequence.",
"username": "Paolo_Manna"
},
{
"code": "const timestamp = changeEvent.clusterTime.toJSON()[\"$timestamp\"];const timestamp = changeEvent.clusterTime.toJSON()[\"$timestamp\"];\n\"timestamp\":{\"t\":{\"$numberInt\":\"1667923613\"},\"i\":{\"$numberInt\":\"29\"}}\n",
"text": "const timestamp = changeEvent.clusterTime.toJSON()[\"$timestamp\"];Many thanks!returnsThis is an improvement, for sure! However…I understand that this is the expanded JSON that mongo uses, but how do I completely get outside of that and get into standard JSON?",
"username": "Christopher_Barber"
},
{
"code": "const timestamp = changeEvent.clusterTime.toJSON()[\"$timestamp\"];\nlongtintiDate const send = await context.http.post({\n url: \"some url\",\n body: JSON.stringify(fullDocument),\n encodeBodyAsJSON: false\n });\n",
"text": "This still returns an object, so if you assign it directly to a field, it will retain its specific data types (long for t, int for i), and that’s fine if you want to store it in MongoDB properly. My example was using a conversion to Date type instead.In your specific example, if you want to retain the timestamp format, you can try the following:",
"username": "Paolo_Manna"
},
{
"code": "changeTime = new Date(timestamp.t * 1000 + timestamp.i)\n",
"text": "Thanks again. Is there some documentation you could point me to so I can understand bettertoJSON()[\"$timestamp\"];why this works even though the timestamp object contains $numberInt in it?",
"username": "Christopher_Barber"
},
{
"code": "$numberInt$xxconsole.log(`t is a ${typeof timestamp.t}`);t is a numberTimestamp",
"text": "Hi @Christopher_Barber,I’ll answer this first, as there’s a misconception here: $numberInt (or any other $xx property) is only added when you represent the timestamp in Extended JSON, it’s not part of what the underlying Javascript object really is in memory (or in the database)!In fact, when you do\nconsole.log(`t is a ${typeof timestamp.t}`);\nwhat you get is\nt is a number\nso it’s perfectly fine to use it in calculations.Per the reason outlined above, to extract the inner representation of the Timestamp as an object, we need to convert it, and this has that purpose: apologies, but I haven’t been able to find the exact documentation.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Thanks so much for pointing me in the right direction!",
"username": "Christopher_Barber"
}
]
| EJSON / JSON problem with clusterTime | 2022-11-08T00:51:34.952Z | EJSON / JSON problem with clusterTime | 2,130 |
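A consolidated sketch of the fix discussed in the thread above, assuming an App Services database trigger; the webhook URL and the choice to send an ISO-8601 string are placeholders/assumptions rather than anything specified in the thread:

```javascript
exports = async function (changeEvent) {
  const fullDocument = JSON.parse(JSON.stringify(changeEvent.fullDocument));

  // clusterTime is a BSON Timestamp; toJSON() exposes its plain {t, i} parts,
  // where t is seconds since the epoch and i is a per-second counter.
  const ts = changeEvent.clusterTime.toJSON()["$timestamp"];

  // Send a standard JSON value instead of Extended JSON. As noted in the reply,
  // adding i only preserves the ordering of events within the same second.
  fullDocument.clusterTime = new Date(ts.t * 1000 + ts.i).toISOString();

  await context.http.post({
    url: "https://example.com/webhook", // placeholder for the original "some url"
    body: fullDocument,
    encodeBodyAsJSON: true
  });

  return "OK";
};
```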
null | [
"python",
"change-streams",
"spark-connector"
]
| [
{
"code": "",
"text": "Browsing the Spark Connector Change Stream Configuration Docs and the source code on Github, I’ve been unable to figure out how to specify a resumeAfter/startAfter token when consuming a Mongo db or collection as a readStream the way I would using a Python client like Motor.Resuming consumption from a particular offset is a hard requirement for our use of the Spark Connector as we cannot guarantee 100% consumer uptime, yet need to be able to propagate 100% of the change feed to our sinks.Is resumeAfter/startAfter supported and I’m just missing the documentation? And if not, would it be possible to support this as a read configuration option?",
"username": "Blaize_Berry"
},
{
"code": "",
"text": "I am unable to find this option in the documentation too.\n@Robert_Walters Could you please confirm if this feature is available in version 10.0?\nThanks in Advance.",
"username": "rahul_gautam"
},
{
"code": "",
"text": "Currently it is not possible, I added https://jira.mongodb.org/browse/SPARK-380Can you add your use case to that ticket? If you don’t have a jira account, can you elaborate on what you expect to provide as a resume value? epoch time or Timestamp value ?",
"username": "Robert_Walters"
},
{
"code": "",
"text": "Is it possible right now to pass in the resume token to the spark connector?",
"username": "rahul_gautam"
},
{
"code": "",
"text": "@Robert_Walters I have been unable to locate the documentation for passing resume token to Spark connector.",
"username": "rahul_gautam"
},
{
"code": "",
"text": "Today it is not possible to pass the resume token. We created https://jira.mongodb.org/browse/SPARK-380 to add this functionality",
"username": "Robert_Walters"
}
]
| Support resumeAfter or startAfter in Spark Connector for readStreams | 2022-08-17T20:55:14.859Z | Support resumeAfter or startAfter in Spark Connector for readStreams | 3,259 |
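For context, a minimal sketch of the resume behavior this thread asks the Spark connector to expose, written against the plain Node.js driver (not Spark); the connection URI, namespace, and token storage are placeholders:

```javascript
const { MongoClient } = require("mongodb");

async function consume(uri, savedToken) {
  const client = await MongoClient.connect(uri);
  const events = client.db("mydb").collection("events");

  // Pass the last persisted token back as resumeAfter so no changes are missed
  // across consumer restarts.
  const options = savedToken ? { resumeAfter: savedToken } : {};

  for await (const change of events.watch([], options)) {
    // ... propagate the change to the sink ...
    savedToken = change._id; // the resume token for this event
    // persist savedToken durably so the next run can pick up from here
  }
}
```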
null | [
"queries",
"python",
"atlas-cluster"
]
| [
{
"code": "pymongo.errors.OperationFailure: user is not allowed to do action [bypassDocumentValidation] on [TicketSystem.done]\nmongodb+srv://<username>:<password>@<name_of_database>.<server_address>.mongodb.net/?retryWrites=true&w=majority\n",
"text": "Took a break from a project that uses Mongodb but now my group wants to use the project again. I turned everything back on and I was able to connect to mongodb from python easily. The only issue is that I can read/write to only one collection. I am authenticated as a maximum privilege user so I was surprised to see an error that saysNot sure what changed in the matter of 4 months but here we are.My connection settings are",
"username": "RenDev_N_A"
},
{
"code": "atlasAdminbypassDocumentValidationbypassDocumentValidationdbAdminrestoreatlasAdmindbAdminrestorebypassDocumentValidationbypassDocumentValidation",
"text": "Hi @RenDev_N_A - Welcome to the communityI am authenticated as a maximum privilege userFor the built-in database user roles in Atlas, the atlasAdmin role would have the most access so it would be odd if this error was generated with a user with this role.In terms of the error, it seems bypassDocumentValidation is attempting to be executed. As per the Bypass Schema Validation documentation:For deployments that have enabled access control, to bypass document validation, the authenticated user must have bypassDocumentValidation action. The built-in roles dbAdmin and restore provide this action.I assume if the associated database user has the atlasAdmin (or either the two roles mentioned above dbAdmin and/or restore), then this error should not occur.However, just to be sure, can you advise the following:For troubleshooting purposes and possible use in future, you can Configure Custom Database Roles with the bypassDocumentValidation action. There’s also an interesting example of this action for a particular user mentioned in the Considerations documentation for the custom database roles which may be of use.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks for your help. Seems that the admin account was “AdminReadWriteAnyDatabase” was not the highest privilege. Not sure why it changed 4 months ago but your solution worked. Thanks!",
"username": "RenDev_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Accessed Denied "bypassDocumentValidation" | 2022-11-07T05:11:15.807Z | Accessed Denied “bypassDocumentValidation” | 2,602 |
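As a rough illustration of the custom-role approach mentioned above, this is what the equivalent looks like in mongosh for a self-managed deployment (on Atlas the same role is defined through the Custom Database Roles UI/API); the role and user names are made up for the example:

```javascript
const admin = db.getSiblingDB("admin");

admin.createRole({
  role: "bypassValidationOnTicketSystem",
  privileges: [
    {
      resource: { db: "TicketSystem", collection: "" },
      actions: [ "bypassDocumentValidation" ]
    }
  ],
  roles: []
});

// grant it to an existing user alongside whatever roles they already have
admin.grantRolesToUser("appUser", [
  { role: "bypassValidationOnTicketSystem", db: "admin" }
]);
```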
null | [
"mongoose-odm"
]
| [
{
"code": "",
"text": "Hi!I’ve encountered a strange behavior that I can’t explain nor reproduce.\nThere are documents in my db that contain an array with either null element or element that does not have an _id.I event don’t know how to reproduce something like this. I see no way to insert a null element, remove _id from existing element or insert a new element without _id.How is this kind of data corruption possible?",
"username": "Seweryn_Panczyniak"
},
{
"code": "",
"text": "How is this kind of data corruption possible?No it is not, unless you uncover a major bug. But that is a very unlikely.Please share the documents that you think are not correct.Note that the only mandatory _id is the one at the top level, if you do not supply one, it is automatically added. It cannot be removed or modified.Are you using mongoose or schema validation?",
"username": "steevej"
},
{
"code": "{\"_id\":1234,\n\"skin\":[],\n\"items\":\n[\n{\"_id\":{\"$oid\":\"63643bdb46e71f0004f52dec\"},\"eid\":\"item1\",\"no\":5},\n{\"_id\":{\"$oid\":\"6367d77bc18e5f00045c1701\"},\"eid\":\"item2\",\"no\":1},\n{\"_id\":{\"$oid\":\"6368ef091c664400043a8135\"},\"eid\":\"item3\",\"no\":5},\n{\"_id\":{\"$oid\":\"63694b43af48220004db9976\"},\"eid\":\"item4\",\"no\":35},\n{\"no\":65},\n{\"_id\":{\"$oid\":\"63696bc11105d400048e70b7\"},\"eid\":\"item5\",\"no\":4},\n{\"_id\":{\"$oid\":\"63696bc71105d400048e70ce\"},\"eid\":\"item6\",\"no\":3},\n{\"_id\":{\"$oid\":\"63696bca1105d400048e70d5\"},\"eid\":\"item7\",\"no\":4}\n]\n}\n\nconst Schema = mongoose.Schema({\n _id: {\n type: Number,\n },\n items: [\n {\n eid: String,\n no: Number,\n exp: Date,\n },\n ],\n skin: [\n {\n type: String,\n },\n ],\n});\n",
"text": "It happened again!\nHere is one of such documents and a scheme for it.\nI did manage to produce a document with null entry in an array.\nYou can do this by finding a document, lean it, remove one array entry in the object and then update using this object.\nBut how to remove the _id ?",
"username": "Seweryn_Panczyniak"
},
{
"code": "",
"text": "In plain JSON your document is a valid.With mongoose, it is a different beast. I do not know mongoose. I avoid obsabstruction layers.I think I have seen a way to make a field mandatory in mongoose but I do not remember how. What is funny is that you have the field _id in your items but it is not part of your schema. May be your documents have been added directly in mongosh or Compass.I added the tag mongoose to your post in case someone with that specific knowledge can help you.",
"username": "steevej"
},
{
"code": "_id_id: 1234",
"text": "Hi @Seweryn_Panczyniak welcome to the community!Yes I agree with @steevej that this may be a mongoose-specific issue.However the _id field for the sub-documents in the array is different from the top-level _id: 1234. This looks like the content of the array was referencing some other collection. Could you maybe post a small example that you know can reproduce this? My goal is to be able to run your code example in my PC and recreate what you’re seeing here.Also, you might want to raise an issue in the official mongoose github page describing what happened.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Maybe it is mongoose fault. As I understand the _id in object of an array is mandatory and automatically added by driver. I was thinking that this is by default behavior of mongodb as there is no way to remove the _id from such element in MongoDB Compass.\nJust as I’ve wrote the above line, I’ve double check it in Robo 3T an it was able to remove the _id!\nNow that I know that it is possible to create such a document I will go thru my code again.\nThank you guys!",
"username": "Seweryn_Panczyniak"
},
{
"code": "",
"text": "I was really surprised to readthere is no way to remove the _id from such element in MongoDB Compassbecause the _id within the objects of the items array have no special signification except that by convention we (at least I do) use _id to make it clear I refer to a top level document.So I fired Compass and indeed, the GUI does not present the little X for delete when the field name is _id no matter where it is in the structure of the documents. We even cannot edit the value. It is the same with cloud.mongodb.com. I understand why it is not possible for the top _id. But why the others?Is that a bug or feature?Luckily, we can remove and modify inner fields named _id with $set.",
"username": "steevej"
},
{
"code": "",
"text": "I was thinking that this is by default behavior of mongodb as there is no way to remove the _id from such element in MongoDB Compass.Yes this is a known issue in Compass. This is the relevant ticket: COMPASS-6160Glad to see it all works out!Thanks @steevej for checking this out Best regards\nKevin",
"username": "kevinadi"
}
]
| Missing _id in one element of array | 2022-11-07T10:42:18.919Z | Missing _id in one element of array | 3,315 |
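A small mongosh sketch of the point made above, i.e. that an embedded array element's _id can still be removed or set with a normal update even though the Compass UI blocks it; the collection name and filter values are assumptions based on the document shown in the thread:

```javascript
// Remove the _id of the array element that was inserted without one
// (the { "no": 65 } entry in the document above):
db.inventories.updateOne(
  { _id: 1234 },
  { $unset: { "items.$[el]._id": "" } },
  { arrayFilters: [ { "el.no": 65 } ] }
)

// ...or set an _id on any element that is missing one:
db.inventories.updateOne(
  { _id: 1234 },
  { $set: { "items.$[el]._id": new ObjectId() } },
  { arrayFilters: [ { "el._id": { $exists: false } } ] }
)
```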
null | []
| [
{
"code": "const user = await realmapp.logIn(credentials);const loggedInNewUser = await realmapp.logIn(newUsercredentials);Error: Request failed (POST https://ap-southeast-1.aws.realm.mongodb.com/api/client/v2.0/app/<my-app>/auth/providers/\nlocal-userpass/login): invalid username/password (status 401 Unauthorized)\n\nerr.errorCode: \"InvalidPassword\"\n",
"text": "I’m having trouble authenticating users. Before today I have had no problems logging in users I created via my localhost app (in dev).\nWith slight variations of username I have consistently used the same password. I have tried different uname/password combinations that are all more than 6 chars long, but I’m now unable to get past:const user = await realmapp.logIn(credentials);with valid credentials. To test registration of a new user the code logs in to the application\nto avoid ‘pending’ status for new users with:const loggedInNewUser = await realmapp.logIn(newUsercredentials);The credentials must be valid otherwise this initial code based login wouldn’t succeed either, but it does.\nSo, I have same code, same credentials (checked in debugger immediately before sending), but\nfor a login (only) authentication fails with error:Nothing that I would expect to affect this functionality has changed in my code (anyway stashing latest changes and reverting to last commit, which was working, makes no difference).\nI therefore believe it is a database related issue.The failed login attempts are logged. But there is no associated information that gives a clue as to what is preventing the logins.Starting out I had trouble configuring Custom User Data and had to create a separateuseridfield, which was impractical from a design perspective and fundamentally confusing to work with. I removed that field so that I my custom User Id field is _id. This change is important to make the db usable and should not affect the ability to login, as it would not impact on login credentials(?).Is there a way to check passwords in Atlas/Realm? What else am I missing? thanks …",
"username": "freeross"
},
{
"code": "",
"text": "Update - I have re-added the userid custom data field with the original codebase and the ‘Invalid password’ problem remains …",
"username": "freeross"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Suddenly Getting 'Invalid Password' | 2022-11-08T06:35:22.006Z | Suddenly Getting ‘Invalid Password’ | 1,516 |
null | [
"aggregation",
"atlas-search",
"flexible-sync"
]
| [
{
"code": "$search$search$search \"Products\": [\n {\n \"name\": \"Belongs to company\",\n \"applyWhen\": {},\n \"read\": {\n \"company\": \"%%user.custom_data.company\"\n },\n \"write\": {\n \"company\": \"%%user.custom_data.company\"\n }\n }\n ],\ntruecompany%%user",
"text": "I’m running into a permissions issue when trying to use Atlas Search while Device Sync enabled. I am trying to run a $search aggregation however no documents are being returned. I’m running the aggregation from the web-sdk. All other aggregations work fine using these permissions, just not $search.I believe this is due to $search using system permissions, stripping away the user details to build rules against.According to the App Services documentation. $search is run with system level permissions. If I try to set a role based on a value in the user, the aggregation returns no documents as it seems it’s still using system permissions.My sync permissions are as such:If I set “read” to true, the search aggregation works fine, but returns any document from any company.I also cannot set this permission within “applyWhen” because the document is not available yet according to the documentation so you cannot us %%user to get the company field on the document.Has anyone figured out how to get $search aggregation working with device permissions setup?",
"username": "Tyler_Collins"
},
{
"code": "",
"text": "I am happy to go into more detail about the issue you are running into but the TLDR is that:Let me know how this works,\nTyler",
"username": "Tyler_Kaye"
},
{
"code": "$search",
"text": "@Tyler_Kaye appreciate the reply.Appreciate it!",
"username": "Tyler_Collins"
},
{
"code": "",
"text": "@Tyler_Kaye wanted to see if you wouldn’t mind taking a look at my questions in the thread above. Thank you!",
"username": "Tyler_Collins"
},
{
"code": "$search",
"text": "@Tyler_Kaye appreciate the reply.Sorry about the delay. This got lost somehow.",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Am I understanding this correctly?\nIf you have “Flexible Sync” enabled with user-based read/write role permissions, “Atlas Search” will not function (not return any documents).\nIs that right?",
"username": "Alex_Breen"
},
{
"code": "",
"text": "If you have flexible sync enabled, then those permissions in the sync page are used for all non-sync requests (functions included). Because sync permissions do not have a “search” field, it is functionally not possible to set search permissions for an App Services app while Sync is enabled (though we are in the process of changing this).",
"username": "Tyler_Kaye"
}
]
| Flexible Sync permissions issue with Atlas Search Aggregation | 2022-10-18T20:23:42.927Z | Flexible Sync permissions issue with Atlas Search Aggregation | 3,050 |
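A hedged sketch of the workaround outlined above: an App Services Function configured to run as System (so $search is permitted) that re-applies the per-company restriction itself before returning results. The linked service name, search index, database/collection names, and searched field are assumptions, not taken from the thread:

```javascript
exports = async function (searchTerm) {
  // the per-user restriction that system execution would otherwise bypass
  const company = context.user.custom_data.company;

  const products = context.services
    .get("mongodb-atlas")
    .db("store")
    .collection("Products");

  const pipeline = [
    {
      $search: {
        index: "default",
        text: { query: searchTerm, path: "name" }
      }
    },
    // post-filter so callers only ever see their own company's documents
    { $match: { company: company } },
    { $limit: 50 }
  ];

  return products.aggregate(pipeline).toArray();
};
```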
null | [
"aggregation",
"queries",
"crud",
"golang"
]
| [
{
"code": "filter := bson.M{\n\t\t\"job_id\": internalJobID,\n\t}\n\tupdate := bson.A{\n\t\tbson.M{\"$set\": bson.M{\n\t\t\t\"status\": bson.M{\n\t\t\t\t\"$cond\": bson.A{\n\t\t\t\t\tbson.D{{\"$eq\", bson.A{\"$status\", 0}}}, resultStatus, \"$status\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t\t},\n\t}\n\n\tsingleResult := ac.HistoryCollection(groupID).FindOneAndUpdate(context.Background(), filter, update)\n\tif err := singleResult.Err(); err != nil {\n\t\treturn nil, errors.Wrap(err, errString)\n\t}\nupdate := bson.M{\n\t\t\"$set\": bson.M{\n\t\t\t\"status\": bson.M{\n\t\t\t\t\"$cond\": bson.A{\n\t\t\t\t\tbson.D{{\"$eq\", bson.A{\"$status\", 0}}}, resultStatus, \"$status\",\n\t\t\t\t},\n\t\t\t},\n\t\t},\n\t}\n",
"text": "Hi, I tried to update a document with $set and $cond and countered behavior that i cannot understand.above code works but below update code is not working.The difference between two is only wrapping by bson.A (array) or not.\nWhen the below code runs, it write this kind of result, which seems $cond is not recognized as aggregation operator.\nAs all of my update codes are not wrapped by array and it works fine till this time, I wonder what makes difference in this case.Thanks!",
"username": "Damon_Lee"
},
{
"code": "update",
"text": "The update command takes as a second argument either a document which indicates this is traditional update modifier syntax, or an array which indicates this will be aggregation pipeline syntax. You are using agg syntax so I would expect it to fail without being wrapped in an array. Or in your case interpret it as regular $set modifier which only accepts subdocument containing field name and field value.See update command doc.Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Mongo go driver update with $set $cond aggregation operator works only when wrapped as array | 2022-11-09T05:15:32.164Z | Mongo go driver update with $set $cond aggregation operator works only when wrapped as array | 2,353 |
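To illustrate the rule from the reply above outside of Go: in mongosh, the same update only evaluates $cond when the second argument is wrapped in an array (aggregation-pipeline form). The collection name and the replacement status value are placeholders:

```javascript
// The array wrapper is what switches the update to aggregation-pipeline syntax,
// so $cond and field paths like "$status" are evaluated instead of being treated
// as plain values.
db.history.updateMany(
  { job_id: 1 },
  [
    {
      $set: {
        status: {
          $cond: [ { $eq: [ "$status", 0 ] }, 2, "$status" ]
        }
      }
    }
  ]
)
```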
null | [
"aggregation",
"server",
"release-candidate"
]
| [
{
"code": "",
"text": "MongoDB 4.4.18-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.17. The next stable release 4.4.18 will be a recommended upgrade for all 4.4 users.\nFixed in this release:",
"username": "Aaron_Morand"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB 4.4.18-rc0 is released | 2022-11-09T23:01:05.039Z | MongoDB 4.4.18-rc0 is released | 2,070 |
null | [
"production",
"ruby",
"mongoid-odm"
]
| [
{
"code": "",
"text": "This patch release in the 7.5 series adds the following minor improvements and bug fixes:",
"username": "Dmitry_Rybakov"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| Mongoid 7.5.2 released | 2022-11-09T16:34:42.544Z | Mongoid 7.5.2 released | 1,741 |
null | [
"python",
"crud"
]
| [
{
"code": "operations = []\nfor i in data:\n operations.append(updateMany({'location.zip':i['_id']}, {\"$set\":{'calculations.distance':i['distance']}}, upsert=True))\nprint(collection.bulk_write(operations, ordered=False))\n",
"text": "Hi there,I am looking to fulfil the following:But it seems to be deprecated. How can I go about doing this insert, without using UpdateOne, as I have approximately 3000K results, and only 4000 postals and distances that are spread over those?",
"username": "EA_K"
},
{
"code": "",
"text": "Hi @EA_K, nothing about your code snippert should be deprecated, is there a warning or documentation somewhere that says otherwise?",
"username": "Steve_Silvester"
}
]
| PyMongo Bulkwrite UpdateMany deprecated? | 2022-11-08T14:43:44.728Z | PyMongo Bulkwrite UpdateMany deprecated? | 1,232 |
null | [
"change-streams",
"atlas-triggers"
]
| [
{
"code": "",
"text": "I have created an on demand Materialized view in my database. Now, I need to update it after every change in the source collection. I found out that triggers are not supported for on premise MongoDB but are only available in Atlas. Since I do not want to indulge into atlas, what possible options do I have got?\nI am looking for workarounds to achieve this.\nSo far, i have found about oplogs usage in this purpose, can you make me understand this?\nAlso. can I update materialized view through some event?",
"username": "MWD_Wajih_N_A"
},
{
"code": "",
"text": "Check MongoDB Change Streams… they are native to MongoDB and does not need Atlas.",
"username": "shrey_batra"
},
{
"code": "$mergemongo",
"text": "Hi @MWD_Wajih_N_A,Atlas Triggers use the MongoDB Change Streams API which is a standard feature of modern MongoDB Server versions (3.6+). Change Streams use the replication oplog in their underlying implementation, but provide a stable API to subscribe and react to changes for a single collection, database, or an entire deployment (replica set or sharded cluster). You should use the Change Streams API rather than directly reading the oplog.Atlas triggers are provided via Atlas’ cluster management interface. You can implement similar logic in an on-premise deployment by creating a persistent application using change streams.The On-Demand Materialised Views documentation includes examples of creating and updating data using the $merge aggregation functionality in MongoDB 4.2+. The documentation examples use the mongo shell for illustration purposes, but you can translate those approaches into any of the supported drivers for MongoDB 4.2+.You can find examples of working with change streams in different drivers in the MongoDB manual. For example: Open a Change Stream.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "I have been exploring the change stream API and workarounds for my usecase.\nWhat I have understood is that I can detect any change in my collection via change stream “watch” function. I am trying to figure out that how can I read two or more fields from a collection and apply aggregation in the collection and get some result and merge result to another collection. All this based on every change stream event occurring.",
"username": "MWD_Wajih_N_A"
},
{
"code": "collection := client.Database(\"poc\").Collection(\"po_new_audit\")\n\n\tmatchStg := bson.D{{\"$match\", bson.D{{\"synced\", false}}}}\n\taddFieldStg1 := bson.D{{\"$addFields\", bson.D{{\"audit_type\", \"purchase_order\"}}}}\n\tlookupStg := bson.D{{\"$lookup\", bson.D{{\"from\", \"user\"}, {\"localField\", \"who.id\"}, {\"foreignField\", \"_id\"}, {\"as\", \"user_details\"}}}}\n\tunwindStg1 := bson.D{{\"$unwind\", \"$user_details\"}}\n\taddFieldStg2 := bson.D{{\"$addFields\", bson.D{{\"breadcrumbs\", bson.D{{\"po_id\", \"$entity_id\"}}}}}}\n\tprojectStg := bson.D{{\"$project\", bson.D{{\"_id\", \"1\"}, {\"what\", \"1\"}, {\"when\", \"1\"}, {\"user_details\", \"1\"}, {\"breadcrumbs\", \"1\"}, {\"audit_type\", \"1\"}}}}\n\tmergeStg := bson.D{{\"$merge\", bson.D{{\"into\", \"audit\"}}}}\n\tc, err := collection.Aggregate(context.TODO(), mongo.Pipeline{matchStg, addFieldStg1, lookupStg, unwindStg1, addFieldStg2, projectStg, mergeStg})\n\tif err != nil {\n\t\tpanic(err)\n\t}\n\n\tvar loaded []bson.M\n\tif err = c.All(context.TODO(), &loaded); err != nil {\n\t\tpanic(err)\n\t}\n\tfmt.Println(loaded)\n",
"text": "hi @Stennie_XI am new to mongo and was looking for ways to trigger the on-demand-materialised view in Go driver defined as a function in one of my mongo DB.I tried running the aggregation pipeline in Go with $merge. But seems like it’s not performing anything.The aggregation works till the projectStg and cursor gives the return values. But adding $merge doesn’t give an error but returns empty cursor. Also, the merge document doesn’t get updated.Any pointers will really help.Regards,\nSayan",
"username": "Sayan_Mitra"
}
]
| Triggers for OnPremise MongoDB | 2021-02-03T07:26:12.685Z | Triggers for OnPremise MongoDB | 5,944 |
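Putting the suggestions in this thread together, a rough Node.js sketch (any driver works the same way): watch the source collection with a change stream and re-run the $merge refresh after each change. A $merge pipeline writes into the target collection and returns an empty cursor by design, so an empty result alone does not mean the stages failed. The namespace names follow the Go snippet above; everything else is an assumption:

```javascript
const { MongoClient } = require("mongodb");

async function run(uri) {
  const client = await MongoClient.connect(uri);
  const source = client.db("poc").collection("po_new_audit");

  const refreshView = () =>
    source.aggregate([
      { $match: { synced: false } },
      { $merge: { into: "audit" } }
    ]).toArray(); // drains the (empty) cursor so the pipeline actually executes

  await refreshView(); // initial build of the materialized view

  for await (const change of source.watch()) {
    console.log("change detected:", change.operationType);
    await refreshView();
  }
}
```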
null | []
| [
{
"code": "",
"text": "Our important client’s 24/7 charting and monitoring system is down, and all because of a javascript bug that appears to be from mongodb charts platform and not from our companyPlease helpapparently the whole domain (charts.mongodb.com) is having the syntax error problem\nimage2432×1764 482 KB\n",
"username": "Hugo_Jerez"
},
{
"code": "",
"text": "https://charts.mongodb.com/ doesn’t work in any browser, even in incognito mode, or on phoneboth the main page of MongoDB Charts and the Charts that have been created do not work and everything appears blank due to a very simple javascript error\nimage2956×1934 324 KB\n",
"username": "Hugo_Jerez"
},
{
"code": "",
"text": "Hi @Hugo_Jerez,Apologies for the inconvenience. We had issues when we were planning a release couple of hours back. The issue was identified and fixed.This should not be happening now. Can you please confirm if you are still seeing the issue?",
"username": "Avinash_Prasad"
},
{
"code": "",
"text": "I’m seeing a blank page with\n",
"username": "Christian_Rorvik"
},
{
"code": "",
"text": "Charts is working now",
"username": "Christian_Rorvik"
},
{
"code": "",
"text": "We have fixed the issue. @Hugo_Jerez Can you please verify?",
"username": "Avinash_Prasad"
},
{
"code": "",
"text": "everything is working fine since hours ago, thank you very much!We had issues when we were planning a release couple of hours back. The issue was identified and fixed",
"username": "Hugo_Jerez"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| [URGENT] https://charts.mongodb.com/ is inaccesible | 2022-11-09T04:02:17.023Z | [URGENT] https://charts.mongodb.com/ is inaccesible | 1,692 |
null | [
"atlas-cluster",
"stitch"
]
| [
{
"code": "",
"text": "Hello,\nI have a free shared database running on Atlas.I am trying to set up MongoDB integration on Stitch to see if I can get the MongoDB data in Stitch where my data from other sources live.In the connection step on Stitch, if I provide the hostname from the connection string that I get from Atlas (xxx.4yz03lr.mongodb.net), I get the error:No address associated with hostnameAs an alternative, I tried providing the address of the primary instance (xxx-shard-00-00.4yz03lr.mongodb.net), and with that, I get the error:Exit status is: Discovery failed with code 1 and error message: “connection closed, Timeout: 30s, Topology Description: <TopologyDescription id: 636b89204afdfaf2c89a85cc, topology_type: Single, servers: [<ServerDescription (‘ac-v5vp8h9-shard-00-02.4yz03lr.mongodb.net’, 27017) server_type: Unknown, rtt: None, error=AutoReconnect(‘connection closed’,)>]>”.Is there something that I missing in order to make the connection?",
"username": "Prerak_Sola"
},
{
"code": " mongodb://...",
"text": "Hi @Prerak_Sola ,What is stitch exactly? Does it support latest MongoDB driver?It looks like you used an srv string type from atlas , can you try using the regular mongodb://... type?Ty\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": " mongodb://...mongodb://srv://ping meta-creator-data.4yz03lr.mongodb.net \nping: cannot resolve meta-creator-data.4yz03lr.mongodb.net: Unknown host\n\nnslookup meta-creator-data.4yz03lr.mongodb.net\nServer:\t\t1.1.1.1\nAddress:\t1.1.1.1#53\n\nNon-authoritative answer:\n*** Can't find meta-creator-data.4yz03lr.mongodb.net: No answer\n",
"text": "Hi Pavel,Thanks for getting back.What is stitch exactly?Stitch is a data warehouse platform (https://www.stitchdata.com/).Does it support latest MongoDB driver?From what I can find, they use PyMongo 3.12.3 (MongoDB Integration Changelog | Stitch Documentation)It looks like you used an srv string type from atlas , can you try using the regular mongodb://... type?Their connection page accepts only a hostname, and no protocol, so I cannot use mongodb:// or srv://.However, keeping Stitch aside, when I try to resolve the hostname that I get from Atlas from my terminal, it is not able to resolve it. It resolves the individual ones for the 3 instances.So, is there a way to get a common resolvable hostname/IP for the 3 instances?Thanks once again.\nPrerak",
"username": "Prerak_Sola"
},
{
"code": "",
"text": "Never mind, I just managed to make it work by providing the hostname of one of the secondary instances.\nAlso, was passing the wrong authentication database in the config.",
"username": "Prerak_Sola"
},
{
"code": "0.0.0.0/0",
"text": "You should be able to take the hoatnames and use them from the monitoring page.When you click your primary you should see that name.Have you whitelisted access for this service, I suspect you have to add 0.0.0.0/0 unfortunately.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
]
| MongoDB shared cluster connection to Stitch | 2022-11-09T11:08:10.778Z | MongoDB shared cluster connection to Stitch | 1,775 |
null | []
| [
{
"code": "",
"text": "I have a list of documents and each one creates a new, copied document (with some altered data). The new document should refer to the previous document. Should I add a second field (preDoc) with an ObjectID? Because ObjectIDs are working like a primary key, would it be enough to only save the $oid string? Which one performs better or is suited for MongoDB? Or are there even other possibilites to achieve my case?Curious to know more and thanks in advance!",
"username": "borison"
},
{
"code": "{\n_id : \"doc1\",\n text : \"...\", \n parent : \"group1\"\n...\n},\n{\n_id : \"doc2\",\n text : \"...\", \n parent : \"group1\"\n...\n}\ndb.collection.find({parent : \"group1\"}).sort({_id : 1});\n{ parent : 1, _id : 1}",
"text": "Hi @borison ,Interesting question. In that scenario I would not use a “link” list kind of implementation where each document points to the previous but I would tag the related document with one “parent” identifier.For example :Than to get all related documents in the group:Using the following index it can work efficiently : { parent : 1, _id : 1}Let me know if that is what you were looking for…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I will try it out!Thank you!",
"username": "borison"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Can I reference a document the id of another document and does it impact performance? | 2022-11-09T11:24:32.539Z | Can I reference a document the id of another document and does it impact performance? | 1,435 |
null | [
"connecting",
"atlas-cluster"
]
| [
{
"code": "",
"text": "Unable to connect: queryTxt ETIMEOUT\nclustertiendaap.7zy2ydf.mongodb.netFrom vs code extension",
"username": "Romario_Julio"
},
{
"code": ";QUESTION\nclustertiendaap.7zy2ydf.mongodb.net. IN ANY\n;ANSWER\nclustertiendaap.7zy2ydf.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-hmp2v1-shard-0\"\nclustertiendaap.7zy2ydf.mongodb.net. 60 IN SRV 0 0 27017 ac-iiwh38a-shard-00-00.7zy2ydf.mongodb.net.\nclustertiendaap.7zy2ydf.mongodb.net. 60 IN SRV 0 0 27017 ac-iiwh38a-shard-00-01.7zy2ydf.mongodb.net.\nclustertiendaap.7zy2ydf.mongodb.net. 60 IN SRV 0 0 27017 ac-iiwh38a-shard-00-02.7zy2ydf.mongodb.net.\n",
"text": "The issue is on your side. I gotMay be VSCode need special extension for SRV records. What is the exact connection string you are using? As you see above clusters do not have A records.Do you mean 8.8.8.8 and 8.8.4.4 rather thanDNS to 8888 and 8844",
"username": "steevej"
},
{
"code": "v0.9.",
"text": "Sure, I mean 8.8.8.8 and 8.8.4.4\nI am trying to connect for the first time mongo to visual studio code.\nright now MongoDB for VS Code v0.9.I had the same error connecting MongoDB compass",
"username": "Romario_Julio"
},
{
"code": ";QUESTION\nclustertiendaap.7zy2ydf.mongodb.net. IN ANY\n;ANSWER\n;AUTHORITY\nmongodb.net. 900 IN SOA ns-761.awsdns-31.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 60\n;ADDITIONAL\n",
"text": "It looks like you have terminated your cluster. I now get:",
"username": "steevej"
},
{
"code": "",
"text": "I solved the error my internet service provider has blocked port 27107 by wifi that’s the reason why it couldn’t reach the external server.",
"username": "Romario_Julio"
}
]
| Unable to connect: queryTxt ETIMEOUT | 2022-11-04T01:21:54.727Z | Unable to connect: queryTxt ETIMEOUT | 3,276 |
null | [
"queries",
"node-js",
"mongoose-odm"
]
| [
{
"code": "Resources: {\nGold: Number\n}\nWorkers: {\nEfficiency: \n{\n Mine: Number\n}\n}\n signUpTemplate.updateMany({},\n\n { $inc:\n\n {\"Resources.Gold\": \"Workers.Efficiency.Mine\"}\n\n }, function(err, response){\n\n if(err) console.log(err);\n\n else console.log(response);\n\n });\n",
"text": "Im trying to update object in schema by other object. somthing like this:Im getting a type cast error since the Workers.Efficienct.Mine is not getting recognized as an object so it count as String.",
"username": "Amit_Hadad"
},
{
"code": "",
"text": "Hello @Amit_Hadad ,Welcome to The MongoDB Community Forums! Could you please provide below details for me to understand your use-case further?Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "c.updateMany( {} ,\n [\n { \"$set\" : {\n \"Resources.Gold\" : { $add : [\"$Resources.Gold\",\"$Workers.Efficiency.Mine\"]}\n }}\n ]\n)\n",
"text": "I do not know if there is another way but you could use update with aggregation pipeline.Something like this untested code:",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Error when increasing value by other field | 2022-11-05T16:23:19.310Z | Error when increasing value by other field | 1,441 |
null | [
"queries"
]
| [
{
"code": "{\nmetadata:{\neventcode:100\n}\npower:on // this can be either on or off\ntime:1667984669//unix timestamp\n}\n",
"text": "My document looks something like this the power can be on or it can be off, given to and from time I have to calculate how many hours it was on, I am having trouble because in 1 day it can even have 100 on and 100 off values which means 200 documents, so how to calculate the number of operational hour(i.e time that the system was on) in mongodb query?",
"username": "Harsh_Bhudolia"
},
{
"code": "db.log.find()\n{ _id: ObjectId(\"636b789f50f3507ba44a1905\"),\n metadata: { eventcode: 100 },\n power: 'on',\n time: 1667984669 }\n{ _id: ObjectId(\"636b79302571c4c174a3b231\"),\n metadata: { eventcode: 100 },\n power: 'off',\n time: 1667987641 }\n{ _id: ObjectId(\"636b794d2571c4c174a3b232\"),\n metadata: { eventcode: 100 },\n power: 'on',\n time: 1667991241 }\n{ _id: ObjectId(\"636b79742571c4c174a3b233\"),\n metadata: { eventcode: 100 },\n power: 'off',\n time: 1667994841 }\ndb.log.aggregate([{\n $sort: {\n time: 1\n }\n}, {\n $facet: {\n on: [\n {\n $match: {\n power: 'on'\n }\n }\n ],\n off: [\n {\n $match: {\n power: 'off'\n }\n }\n ]\n }\n}, {\n $project: {\n powerHours: {\n $sum: {\n $map: {\n input: {\n $range: [\n 0,\n {\n $size: '$on'\n }\n ]\n },\n 'in': {\n $subtract: [\n {\n $arrayElemAt: [\n '$off.time',\n '$$this'\n ]\n },\n {\n $arrayElemAt: [\n '$on.time',\n '$$this'\n ]\n }\n ]\n }\n }\n }\n }\n }\n}])\n{ powerHours: 6572 }\n",
"text": "Hi @Harsh_Bhudolia ,Its a fairly complex query only managable by the aggregation framework.It sounds like the best way is to either:If you still want to go the aggregation way I took the challenge and created the following aggregation, considering those source documents:Aggregation:As you can see its fairly complex, it uses a facet to get all the “on” in one array and “off” in another sorted and subtract “off” from its “om” one from the other and then sums .Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks man, if I ever meet you I should treat you for just helping me in times",
"username": "Harsh_Bhudolia"
},
{
"code": "",
"text": "One Question, input should have an array but here you have given it a range and not an array, also you directly taken the difference of uix timestamp does it work?",
"username": "Harsh_Bhudolia"
},
{
"code": "on.time",
"text": "Hi @Harsh_Bhudolia ,Well on.time is an element that exists in each array element so it is array of times…As far as I know uix timestamp is the number of seconds since 1970 so the subtraction of 2 timestamps will be the number of seconds between them…",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "number of milliseconds is unix timestamp",
"username": "Harsh_Bhudolia"
},
{
"code": "",
"text": "after facet we get two arrays, on and off which are array of objects then we are mapping it and as an input you are only taking range from o to size of on array, till here it makes sense to me, then in operation for each element in mongodb you are subtracting that also makes sense to me,\nWhat I do not understand it how $off.time resolves to array of times, because if after facet if I try project to display off.time it displays nothing",
"username": "Harsh_Bhudolia"
},
{
"code": "",
"text": "so i checked again and yes $off.time and $on.time is resolving to an array, so thanks for the query",
"username": "Harsh_Bhudolia"
}
]
| MongoDb calculate the operational hours | 2022-11-09T09:33:30.255Z | MongoDb calculate the operational hours | 1,960 |
null | [
"field-encryption"
]
| [
{
"code": "{\"t\":{\"$date\":\"2022-11-08T23:05:45.391-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"-\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.392-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.402-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.405-08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.405-08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.405-08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.405-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.405-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":23030,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"Ishaans-MBP.hsd1.ca.comcast.net\"}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.405-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23352, \"ctx\":\"initandlisten\",\"msg\":\"Unable to resolve sysctl {sysctlName} (number) \",\"attr\":{\"sysctlName\":\"hw.cpufrequency\"}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.405-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23351, \"ctx\":\"initandlisten\",\"msg\":\"{sysctlName} unavailable\",\"attr\":{\"sysctlName\":\"machdep.cpu.features\"}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.405-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.405-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.6.0\"}}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.405-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.406-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast 
open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.407-08:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.407-08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL 
monitor\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-11-08T23:05:45.408-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\nIshaans-MBP:~ ishaanagrawal$ \n",
"text": "I’ve tried to install mongoDB via homebrew. I had already installed it one other time ~4 months ago, so I think I may have messed up a few things in the process.When I run mongod, I get this error:I’ve tried all fixes I’ve seen online with restarting / killing prior sockets and whatnot but I can’t get anything to work. Any help is really appreciated!",
"username": "Ishaan_Agrawal"
},
{
"code": "",
"text": "It says non existent dirpath /data/db\nYou have to make sure dir exists\nI think on Macos access to this path is removed\nSo you have to choose another dir where mongod can write",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thanks for your reply, I thought it may be something related to this. Do you know how I can choose that?",
"username": "Ishaan_Agrawal"
},
{
"code": "",
"text": "When you run mongod without any parameters it tries to bring up mongod on default port 27017 and default dbpath dir /data/db\nMacos doc suggest to use /System/volumes\nPlease search our forum threads\nor\nYou can use your home directory\nIn that case you have to start\nmongod --port xyz --dbpath your_hd_path --logpath --fork\nAgain check mongo documentation for exact syntax and various command line params",
"username": "Ramachandra_Tummala"
},
{
"code": "db\nIshaans-MBP:data ishaanagrawal$ cd db/\nIshaans-MBP:db ishaanagrawal$ ls\nWiredTiger diagnostic.data\nWiredTiger.lock index-1--5011663865657782431.wt\nWiredTiger.turtle index-3--5011663865657782431.wt\nWiredTiger.wt index-5--5011663865657782431.wt\nWiredTigerHS.wt index-6--5011663865657782431.wt\n_mdb_catalog.wt journal\ncollection-0--5011663865657782431.wt mongod.lock\ncollection-2--5011663865657782431.wt sizeStorer.wt\ncollection-4--5011663865657782431.wt storage.bson\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.627-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"thread1\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":17},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":17},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.628-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"thread1\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.631-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648602, \"ctx\":\"thread1\",\"msg\":\"Implicit TCP FastOpen in use.\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.637-08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.637-08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.637-08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"thread1\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardSplitDonorService\",\"namespace\":\"config.tenantSplitDonors\"}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.637-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"thread1\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.637-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":24715,\"port\":27017,\"dbPath\":\"/data/db\",\"architecture\":\"64-bit\",\"host\":\"Ishaans-MBP.hsd1.ca.comcast.net\"}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.637-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23352, \"ctx\":\"initandlisten\",\"msg\":\"Unable to resolve sysctl {sysctlName} (number) \",\"attr\":{\"sysctlName\":\"hw.cpufrequency\"}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.637-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23351, \"ctx\":\"initandlisten\",\"msg\":\"{sysctlName} unavailable\",\"attr\":{\"sysctlName\":\"machdep.cpu.features\"}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.637-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"6.0.1\",\"gitVersion\":\"32f0f9c88dc44a2c8073a5bd47cf779d4bfdee6b\",\"modules\":[],\"allocator\":\"system\",\"environment\":{\"distarch\":\"aarch64\",\"target_arch\":\"aarch64\"}}}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.637-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating 
System\",\"attr\":{\"os\":{\"name\":\"Mac OS X\",\"version\":\"21.6.0\"}}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.637-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{}}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.639-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":5693100, \"ctx\":\"initandlisten\",\"msg\":\"Asio socket.set_option failed with std::system_error\",\"attr\":{\"note\":\"acceptor TCP fast open\",\"option\":{\"level\":6,\"name\":261,\"data\":\"00 04 00 00\"},\"error\":{\"what\":\"set_option: Invalid argument\",\"message\":\"Invalid argument\",\"category\":\"asio.system\",\"value\":22}}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.640-08:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file.\"}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.640-08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4784900, \"ctx\":\"initandlisten\",\"msg\":\"Stepping down the ReplicationCoordinator for shutdown\",\"attr\":{\"waitTimeMillis\":15000}}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.641-08:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4794602, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to enter quiesce mode\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.641-08:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":6371601, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FLE Crud thread pool\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.641-08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784901, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MirrorMaestro\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.641-08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784902, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the WaitForMajorityService\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.641-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":20562, \"ctx\":\"initandlisten\",\"msg\":\"Shutdown: going to close listening sockets\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.641-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784905, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the global connection pool\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.641-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784906, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the FlowControlTicketholder\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.641-08:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20520, \"ctx\":\"initandlisten\",\"msg\":\"Stopping further Flow Control ticket acquisitions.\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.641-08:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4784918, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the ReplicaSetMonitor\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.641-08:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":4784921, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the MigrationUtilExecutor\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.642-08:00\"},\"s\":\"I\", \"c\":\"ASIO\", \"id\":22582, \"ctx\":\"MigrationUtil-TaskExecutor\",\"msg\":\"Killing all outstanding egress activity.\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.642-08:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":4784923, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the 
ServiceEntryPoint\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.642-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784925, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down free monitoring\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.642-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784927, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the HealthLog\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.642-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.642-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.642-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.642-08:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.642-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n{\"t\":{\"$date\":\"2022-11-09T02:20:12.642-08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\nIshaans-MBP:~ ishaanagrawal$ \n",
"text": "I created a path as such:~/data/db/And I ran mongod --dbpath /data/db/It seems to have worked the directory now contains:But when I run mongod, I still get this error:",
"username": "Ishaan_Agrawal"
},
{
"code": "",
"text": "You should run mongod only once\nWhat you need is mongo to connect to your mongod\nUse mongo to connect to your instance\nAlso you started mongod in the foreground.This terminal should be always active and you should open another terminal to connect to it\nIf you want to avoid this you should start mongod in background using --fork",
"username": "Ramachandra_Tummala"
},
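A minimal sketch of the suggestion above, assuming a macOS/Linux shell and a data directory that already exists (the log path is only an example; --fork requires --logpath or --syslog):

    # start mongod in the background so the terminal stays free
    mongod --dbpath ~/data/db --logpath ~/data/mongod.log --fork

    # then connect from any terminal
    mongosh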
{
"code": "Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.0\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\nIshaans-MBP:~ ishaanagrawal$ \n",
"text": "That makes sense. I tried opening another terminal and running mongosh, but it yielded this error:I think this is because the mongo isn’t was never running because it is still giving me an error?",
"username": "Ishaan_Agrawal"
},
{
"code": "mongod --dbpath data/db\nmongosh\n",
"text": "Nevermind, I got it working with:Thank you kindly for your help, I will pay it forward.",
"username": "Ishaan_Agrawal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Error when Running Mongod Command | 2022-11-09T07:10:25.603Z | Error when Running Mongod Command | 6,492 |
null | []
| [
{
"code": "",
"text": "I need to delete my duplicate documents in database. How can i delete those documents.",
"username": "Santhosh_V"
},
{
"code": "",
"text": "Share some examples of what the data looks like.",
"username": "Nanthakumar_DG"
},
{
"code": "{\n \"_id\": {\n \"$oid\": \"5fbff503a4d80a0008052b64\"\n },\n \"meta\": {\n \"origin_port\": \"INNSA\",\n \"destination_port\": \"DEHAM\",\n \"load_type\": \"20GP\",\n \"start_date\": {\n \"$date\": {\n \"$numberLong\": \"1604188800000\"\n }\n },\n \"leg_code\": \"l4_fcl\",\n \"id\": \"8188\",\n \"weight_upto\": null\n },\n \"data\": {\n \"schedule\": {\n \"url\": \"\",\n \"data\": []\n },\n \"analytics\": {\n \"graph\": {\n \"url\": null\n }\n },\n \"ltl_load_details\": null,\n \"is_special\": false,\n \"special_meta\": [],\n \"price_id\": null,\n \"sailing_date\": null,\n \"charges\": [\n {\n \"charge_id\": \"40001\",\n \"charge_name\": \"Destination Terminal Handling charges\",\n \"charge_description\": \"\",\n \"charge_basis\": \"per equipment\",\n \"per_unit_rate\": 130,\n \"total_units\": 1,\n \"charge_cost\": 130,\n \"charge_currency\": \"EUR\",\n \"minimum_charge_cost\": \"NA\",\n \"is_minimum\": false,\n \"container_type\": \"20GP\",\n \"global_charge_id\": \"FBDC2130\",\n \"customer_currency_cost\": 130,\n \"created_at\": \"2020-11-26T18:33:39.037Z\",\n \"updated_at\": \"2020-11-26T18:33:39.037Z\",\n \"leg_currency_cost\": 130,\n \"charge_source\": \"DB\",\n \"charge_type\": \"FIXED\",\n \"slab_type\": \"INCREMENTAL\",\n \"slab_rate\": null,\n \"slab\": \"NULL\",\n \"id\": {\n \"$oid\": \"5fbff503a4d80a0008052b5d\"\n }\n },\n {\n \"charge_id\": \"40060\",\n \"charge_name\": \"Manifest Fee\",\n \"charge_description\": \"\",\n \"charge_basis\": \"per equipment\",\n \"per_unit_rate\": 120,\n \"total_units\": 1,\n \"charge_cost\": 120,\n \"charge_currency\": \"EUR\",\n \"minimum_charge_cost\": \"NA\",\n \"is_minimum\": false,\n \"container_type\": \"20GP\",\n \"global_charge_id\": \"\",\n \"customer_currency_cost\": 120,\n \"created_at\": \"2020-11-26T18:33:39.037Z\",\n \"updated_at\": \"2020-11-26T18:33:39.037Z\",\n \"leg_currency_cost\": 120,\n \"charge_source\": \"DB\",\n \"charge_type\": \"FIXED\",\n \"slab_type\": \"INCREMENTAL\",\n \"slab_rate\": null,\n \"slab\": \"NULL\",\n \"id\": {\n \"$oid\": \"5fbff503a4d80a0008052b5e\"\n }\n },\n {\n \"charge_id\": \"40061\",\n \"charge_name\": \"Container Facilitation Charges\",\n \"charge_description\": \"\",\n \"charge_basis\": \"per container\",\n \"per_unit_rate\": 80,\n \"total_units\": 1,\n \"charge_cost\": 80,\n \"charge_currency\": \"EUR\",\n \"minimum_charge_cost\": \"NA\",\n \"is_minimum\": false,\n \"container_type\": \"20GP\",\n \"global_charge_id\": \"\",\n \"customer_currency_cost\": 80,\n \"created_at\": \"2020-11-26T18:33:39.037Z\",\n \"updated_at\": \"2020-11-26T18:33:39.037Z\",\n \"leg_currency_cost\": 80,\n \"charge_source\": \"DB\",\n \"charge_type\": \"FIXED\",\n \"slab_type\": \"INCREMENTAL\",\n \"slab_rate\": null,\n \"slab\": \"NULL\",\n \"id\": {\n \"$oid\": \"5fbff503a4d80a0008052b5f\"\n }\n },\n {\n \"charge_id\": \"40062\",\n \"charge_name\": \"Off Dock Charges\",\n \"charge_description\": \"\",\n \"charge_basis\": \"per container\",\n \"per_unit_rate\": 90,\n \"total_units\": 1,\n \"charge_cost\": 90,\n \"charge_currency\": \"EUR\",\n \"minimum_charge_cost\": \"NA\",\n \"is_minimum\": false,\n \"container_type\": \"20GP\",\n \"global_charge_id\": \"\",\n \"customer_currency_cost\": 90,\n \"created_at\": \"2020-11-26T18:33:39.037Z\",\n \"updated_at\": \"2020-11-26T18:33:39.037Z\",\n \"leg_currency_cost\": 90,\n \"charge_source\": \"DB\",\n \"charge_type\": \"FIXED\",\n \"slab_type\": \"INCREMENTAL\",\n \"slab_rate\": null,\n \"slab\": \"NULL\",\n \"id\": {\n \"$oid\": \"5fbff503a4d80a0008052b60\"\n 
}\n },\n {\n \"charge_id\": \"40110\",\n \"charge_name\": \"Dray\",\n \"charge_description\": null,\n \"charge_basis\": \"per container\",\n \"per_unit_rate\": 2,\n \"total_units\": 1,\n \"charge_cost\": 2,\n \"charge_currency\": \"EUR\",\n \"minimum_charge_cost\": \"NA\",\n \"is_minimum\": false,\n \"container_type\": \"20GP\",\n \"global_charge_id\": null,\n \"customer_currency_cost\": 2,\n \"created_at\": \"2020-11-26T18:33:39.037Z\",\n \"updated_at\": \"2020-11-26T18:33:39.037Z\",\n \"leg_currency_cost\": 2,\n \"charge_source\": \"DB\",\n \"charge_type\": \"FIXED\",\n \"slab_type\": \"INCREMENTAL\",\n \"slab_rate\": null,\n \"slab\": \"NULL\",\n \"id\": {\n \"$oid\": \"5fbff503a4d80a0008052b61\"\n }\n },\n {\n \"charge_id\": \"40124\",\n \"charge_name\": \"container movement doc stuffing\",\n \"charge_description\": null,\n \"charge_basis\": \"per equipment\",\n \"per_unit_rate\": 10,\n \"total_units\": 1,\n \"charge_cost\": 10,\n \"charge_currency\": \"EUR\",\n \"minimum_charge_cost\": \"NA\",\n \"is_minimum\": false,\n \"container_type\": \"20GP\",\n \"global_charge_id\": null,\n \"customer_currency_cost\": 10,\n \"created_at\": \"2020-11-26T18:33:39.037Z\",\n \"updated_at\": \"2020-11-26T18:33:39.037Z\",\n \"leg_currency_cost\": 10,\n \"charge_source\": \"DB\",\n \"charge_type\": \"FIXED\",\n \"slab_type\": \"INCREMENTAL\",\n \"slab_rate\": null,\n \"slab\": \"NULL\",\n \"id\": {\n \"$oid\": \"5fbff503a4d80a0008052b62\"\n }\n },\n {\n \"charge_id\": \"40135\",\n \"charge_name\": \"Charge\",\n \"charge_description\": null,\n \"charge_basis\": \"per equipment\",\n \"per_unit_rate\": 70,\n \"total_units\": 1,\n \"charge_cost\": 70,\n \"charge_currency\": \"EUR\",\n \"minimum_charge_cost\": \"NA\",\n \"is_minimum\": false,\n \"container_type\": \"20GP\",\n \"global_charge_id\": null,\n \"customer_currency_cost\": 70,\n \"created_at\": \"2020-11-26T18:33:39.037Z\",\n \"updated_at\": \"2020-11-26T18:33:39.037Z\",\n \"leg_currency_cost\": 70,\n \"charge_source\": \"DB\",\n \"charge_type\": \"FIXED\",\n \"slab_type\": \"INCREMENTAL\",\n \"slab_rate\": null,\n \"slab\": \"NULL\",\n \"id\": {\n \"$oid\": \"5fbff503a4d80a0008052b63\"\n }\n }\n ],\n \"manual\": 0,\n \"airline_code\": null,\n \"airline\": null,\n \"slab_currency\": null,\n \"slab\": null,\n \"rate_type\": \"msr_rate\",\n \"alternate_object\": null,\n \"leg_name\": \"Destination Charges\",\n \"leg_code\": \"l4_fcl\",\n \"vendor_id\": \"109\",\n \"agent_id\": \"\",\n \"sub_vendor_id\": \"2\",\n \"contract_number\": \"\",\n \"remarks\": \"\",\n \"inclusions\": \"\",\n \"expiry\": {\n \"$date\": {\n \"$numberLong\": \"1617148800000\"\n }\n },\n \"other_charges\": \"\",\n \"if_applicable_charges\": \"\",\n \"load_stat\": null,\n \"global_leg_id\": \"l4\",\n \"leg_total_cost\": 0,\n \"leg_total_currency\": null,\n \"leg_currency_cost\": 0,\n \"leg_currency\": null,\n \"is_master\": 0,\n \"batchcode\": \"0DEt0r3Xzrb2SSw50FGhJ36aG5h8RLY95sdhntWr\",\n \"vendor\": {\n \"vendor_name\": \"FWDBFWDB\"\n },\n \"sub_vendor\": {\n \"sv_id\": 2,\n \"sub_vendor_name\": \"CMA CGM\",\n \"carrier_code\": \"CMDU\",\n \"logo\": \"https://s3.amazonaws.com/storage-dir-fb-test%2Fsub_vendors%2Flogo%2Fsv-17a3304d88fc/sv-17a3304d88fc.PNG\"\n },\n \"agent\": null,\n \"card_id\": 0,\n \"tags\": [\n {\n \"tag_name\": null,\n \"tag_ribbon_url\": null,\n \"show_tag\": false,\n \"tag_type\": \"master\"\n },\n {\n \"tag_name\": \"Special Rate\",\n \"tag_ribbon_url\": \"\",\n \"show_tag\": false,\n \"tag_type\": \"special\"\n },\n {\n \"tag_name\": \"Rollable\",\n 
\"tag_ribbon_url\": \"\",\n \"show_tag\": false,\n \"tag_type\": \"rollable\"\n }\n ],\n \"promotions\": [],\n \"terms_and_condition\": {\n \"url\": null\n },\n \"demurrage\": null,\n \"detention\": null,\n \"via_pol\": \"\",\n \"haulage_available\": false,\n \"origin_haulage\": \"\",\n \"destination_haulage\": \"\",\n \"value_of_good\": {\n \"cost\": 0,\n \"currency\": \"\"\n },\n \"rollable_available\": false,\n \"penalty_details\": null,\n \"via_pod\": \"\",\n \"cfs_stuffing\": \"F\",\n \"rate_origin\": \"\",\n \"service_type\": \"\",\n \"cargo_type\": \"FAK\",\n \"cargo_type_data\": {\n \"name\": \"FAK\",\n \"code\": \"FAK\"\n },\n \"commodity\": \"\",\n \"commodity_data\": []\n },\n \"__v\": 0\n}\n",
"text": "",
"username": "Santhosh_V"
}
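No query was posted in this thread; one common approach is sketched here under the assumption that two documents count as duplicates when they share the same meta.id, and that the collection is called rates (both are assumptions, adjust to whatever actually defines a duplicate in your data):

    // collect the _ids of all but one document in each duplicate group
    const groups = db.rates.aggregate([
      { $group: { _id: "$meta.id", ids: { $push: "$_id" }, count: { $sum: 1 } } },
      { $match: { count: { $gt: 1 } } }
    ]).toArray();

    const toDelete = groups.flatMap(g => g.ids.slice(1)); // keep the first of each group
    // verify the list first, then:
    // db.rates.deleteMany({ _id: { $in: toDelete } });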
]
| How to Find and Delete Duplicate documents In MongoDB | 2022-11-09T05:01:11.931Z | How to Find and Delete Duplicate documents In MongoDB | 1,290 |
null | [
"queries",
"data-modeling",
"many-to-many-relationship"
]
| [
{
"code": "",
"text": "Hi Everyone,I’m new to Node JS and MongoDB and I’ve always had a burning question: How does the tagging work for blog posts?As I slowly learn about MongoDB, I am trying to comprehend how this tagging system work, but can’t seem to understand a few things.I understand that there would a many-to-many relationship between the blog posts and the tags, but from what I know about the MongoDB documents’ 16Mb limit, I can’t seem to understand how potentially millions of blog posts or their IDs could be placed under one tag document.As I said I am still new to MongoDB and stilling getting a handle of schemas and the process of embedding values from one collection to another, so if anyone knows about this topic, I would appreciate even a high-level explanation or an example schema that might highlight how the 16Mb limit isn’t exceed.Thanks for the help!",
"username": "Sanskar_Patel"
},
{
"code": "{\n_id : \"post1\",\ntitle : \"my post\",\ntags : [\"blogpost\", \"goodpost\"]\n...\n},\n{\n_id : \"post2\",\ntitle : \"new post\",\ntags : [\"blogpost\", \"news\"]\n...\n},\n\n{tags : 1}db.posts.find({tags : \"news\"})\n",
"text": "Hi @Sanskar_Patel ,Thats an interesting observation. You are correct that creating a one document that holds pointer (ids) to all the tagged posts with this tag is a bad design that can easily reach 16mb limit.However, MongoDB allows indexing arrays. This mean that each post can have an array of tags :Using this design we will be able to show tags per post easily, but also index {tags : 1} and query by tag:Hope that many to many relationship make sense.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks for the explanation @Pavel_Duchovny!However, for example, let’s say that there are thousands of blog posts with the tag ‘dog’ in them, how would querying for all the posts with the tag ‘dog’ be achieved?How would track all the posts with this tag?\nWould you have a separate collection for ‘posts’ and one for ‘tags’, and have a many to many relationship?",
"username": "Sanskar_Patel"
},
{
"code": "db.posts.find({\"tags\" : \"dog\"});\n({\"tags\" : \"dog\"})db.post.updateOne({_id : \"post1\"}, {\"$addToSet\" : { tags : \"dog\" }});\n",
"text": "Hi @Sanskar_Patel ,I will have one collection name posts and will have a field of “tags” this field will be an array.This means that it can hold one or many different tags.When I want to find all post documents that have a tag of “dog” I just need to search it in the “tags” field of each post:Since I can also index arrays I can index the “tags” field and each value will now be indexed therefore ({\"tags\" : \"dog\"}) will be fast even if it return 10k posts out of total of 10m blogs.I will maintain a “tags” collection only if the application has a defined set of tags the user needs to choose from. But I will then after showing the list of tags will duplicate each tag into the corresponding post document “tags” array as the user tags the post:See the following examples as well:Ty\nPavel",
"username": "Pavel_Duchovny"
}
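A small end-to-end sketch of the pattern described above, using the thread's example field names (the inserted post itself is made up):

    db.posts.createIndex({ tags: 1 });   // multikey index: every array value is indexed

    db.posts.insertOne({ _id: "post3", title: "walkies", tags: ["dog", "pets"] });

    db.posts.find({ tags: "dog" });      // stays fast even with millions of posts
    db.posts.updateOne({ _id: "post3" }, { $addToSet: { tags: "outdoors" } });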
]
| How do blog tags work? | 2022-11-09T07:59:18.402Z | How do blog tags work? | 2,470 |
null | [
"queries",
"node-js",
"crud",
"transactions"
]
| [
{
"code": "list = items.find(companyId)\nif(list.length === 0){\n defaultList = items.find({default: true});\n defaultList.map(di=>{\n //set new company id, remove default flag\n delete di._id\n })\n items.insertMany(defaultList)\n list = items.find(companyId)\n}\nreturn list\n",
"text": "Hello,I want to build some feature that initializes data for a “company” when a component/endpoint is accessed the first time without the risk of getting duplicates if 2 users or more access that endpoint at the same time.I have something like this, pseudocodeI hope that if I wrap everything in a transaction(multi-document transaction maybe) and use a session for all queries, I can avoid duplicates when 2 or more users access the endpoint simultaneously.Any examples out there to look at, thoughts on the validity of this approach?Thank you",
"username": "Stefan_Badea"
},
{
"code": "IsolationTransientTransactionErrorMongoServerError: E11000 duplicate key error",
"text": "Hello @Stefan_Badea ,Welcome to The MongoDB Community Forums! As per this blog on ACID Properties in MongoDBMongoDB supports multi-document ACID transactions for the use cases that require them. Developers appreciate the flexibility of being able to model their data in a way that does not typically require multi-document transactions but having multi-document transaction capabilities available in the event they do.This means that the Isolation property ensures that all transactions run in an isolated environment. That enables running transactions concurrently because transactions don’t interfere with each other.Now If multi document transactions try to update same document simultaneously, then only the transaction which acquired the lock first for that particular document will be executed and others will get TransientTransactionError. Another case is, if companyId is indexed and multiple transactions are trying to add same value for companyId simultaneously, then the first one to commit will be kept and others will get MongoServerError: E11000 duplicate key error. However I would recommend you to test this behaviour with your specific use case, to ensure that the application can handle different failure scenarios.To learn more about Multi-document transactions, please go through below linksRegards,\nTarun",
"username": "Tarun_Gaur"
},
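A sketch of the unique-index variant of this advice, run from the shell; the collection name items and the field names companyId/name/default are assumptions based on the pseudocode, and the error handling is intentionally minimal:

    db.items.createIndex({ companyId: 1, name: 1 }, { unique: true });

    const companyId = "company-123";   // hypothetical id
    const defaults = db.items.find({ default: true }, { _id: 0, default: 0 }).toArray();

    try {
      // if two requests race, the slower insert hits E11000 instead of creating duplicates
      db.items.insertMany(defaults.map(d => ({ ...d, companyId })), { ordered: false });
    } catch (e) {
      // duplicate key errors are expected here; anything else should be re-thrown in real code
    }

    const list = db.items.find({ companyId }).toArray();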
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Can find and insert multiple documents be done in a transaction without creating duplicates? | 2022-11-01T23:23:43.116Z | Can find and insert multiple documents be done in a transaction without creating duplicates? | 1,480 |
null | [
"aggregation",
"python"
]
| [
{
"code": "log: dict = json.loads(decoded_received_data_line)\nlog[\"createdAt\"] = datetime.now()\ncustomer = log.get(\"domainName\")\n\ndatabase = mongoclient.get_database(customer)\n#I create a Dump Collection for each client. I will later run aggregation on this Dump collection.\ncollection_name = \"Dump\" \ndatabase.get_collection(collection_name).insert_one(log)\n\nfrom pymongo import MongoClient\nimport os\nimport urllib\nfrom dotenv import load_env\n\nos.loadenv(\".env\")\n\nHOST = os.environ.get(\"MONGO_HOST\")\nPORT = int(os.environ.get(\"MONGO_PORT\"))\nMONGO_USER = os.environ.get(\"MONGO_USER\")\nMONGO_PWD = urllib.parse.quote_plus(os.environ.get(\"MONGO_PWD\"))\n\nclass MyMongoClient:\n def __init__(self):\n self.client: MongoClient = MongoClient(f\"mongodb://{MONGO_USER}:{MONGO_PWD}@{HOST}:{PORT}\")\n self.coll = \"Dump\"\n\n def get_client(self):\n return self.client\n\n def insert_data(self, log: dict):\n customer = log.get(\"domainName\").replace(\" \", \"\")\n database = self.client.get_database(customer)\n document_id = database.get_collection(self.coll).insert_one(log)\n print(f\"ID: {document_id.inserted_id}\")\n\nimport json\nimport socketserver\nimport threading\nfrom mongoclient import MyMongoClient\nfrom datetime import datetime\n\n\ndef handle_line(data: str, address: str, mongo_client: MyMongoClient):\n stripped_data = data.removeprefix(\"<01>- hostname \")\n log: dict = json.loads(stripped_data)\n log[\"createdAt\"] = datetime.now()\n log[\"eventProcessor\"] = address[0]\n mongo_client.insert_data(log)\n\n\nclass ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):\n daemon_threads = True\n allow_reuse_address = True\n\n\nclass ConnectionHandler(socketserver.StreamRequestHandler):\n def handle(self):\n mongo_client = MyMongoClient()\n client = f\"{self.client_address} on {threading.current_thread().name}\"\n print(f\"Connected: {client}\")\n while True:\n data = self.rfile.readline().decode(\"utf-8\")\n if not data:\n print(f\"DATA STREAM WAS EMPTY: {data}\")\n break\n else:\n handle_line(data, self.client_address[0], mongo_client)\n\nfrom connectionhandler import ThreadedTCPServer, ConnectionHandler\nfrom dotenv import load_env\nimport os\n\nloadenv(\".env\")\n\nSERVER_HOST = os.environ.get(\"SERVER_HOST\")\nSERVER_PORT = os.environ.get(\"SERVER_PORT\")\n\n\ndef main():\n with ThreadedTCPServer((SERVER_HOST, SERVER_PORT), ConnectionHandler) as server:\n server.serve_forever()\n\n\nif __name__ == '__main__':\n main()\n",
"text": "I have an open-ended question with a reasonably broad scope before I move on to a more specific question I have. I am fairly new to developing applications so please forgive any noobness.Context:\nI am writing a Python Socket Server to accept multiple connections from QRadar event processors. These endpoints forward JSON logs over TCP. I need to ingest each log into MongoDB (the database name depends on the domainName field in each log).\nI do something like this to insert data:Question 1: Is there a better way to achieve flawless data transfer between QRadar and MongoDB? Any useful middleware? MongoDB will be my data lake and is for internal use only. What does a continuously running socketserver look like in production? How to properly and professionally deploy it? Any suggestions, books, courses, or articles are welcome. I expect 15,000 logs per second each around 1Kb in size.The rest of my code is posted below.mongoclient.pyconnectionhandler.pymain.pyThis solution gets the data into MongoDB as required but I encounter TCP ZERO WINDOW according to Wireshark. I understand this means that my buffer fills up and my application cannot process the data fast enough. Unfortunately, I have not found any other way but to insert a single document at a time into MongoDB since I need to know which customer the data is for.I have tried an async version as well to do the same. But the performance is much lower (measuring performance by eyeballing the Ethernet Speed in Task Manager. With Multithreading it is around 14-20Mbps and with Async it is 5-12Mbps)Question 2: How can I make an Async Multithreaded Server to achieve my goal and professionally deploy it into production so that it can run forever? We don’t have devOps at all so no pipelines or anything. I am willing to setup something small just for my team if someone can guide me.",
"username": "Vikram_Tatke1"
},
{
"code": "",
"text": "Hi @Vikram_Tatke1 and welcome to the MongoDB community forum!!The best practice of inserting the data into the database depends on various combinations of the the hardware and the drivers you are using.Firstly, inserting data into database would depend on how the document structure would looks like.Secondly, in your use case, it also depends on the performance of the API layer/middleware you use. From the numbers you posted earlier, I believe it should be able to push about 15 MB per second of data.Another major factor this would depend is based on the type of deployment you have in your setup.However, to answer your questions,Is there a better way to achieve flawless data transfer between QRadar and MongoDB?If you need recommendations regarding middlewares that can connect QRadar to MongoDB, I believe you’ll get more opinions and experience in a programming-related sites such as StackOverflow or ServerFaultHow to properly and professionally deploy it?In terms of deploying a middleware/API layer on top of MongoDB, you can create your own like your Python code examples. However if you don’t have to use Python, a ready-made package like RestHeart which is Java based might be worth considering.Let us know if you have any further queries.Best Reagrds\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hello Aasawari,\nThank you so much for your informative and helpful suggestions. Today, I explored RESTheart with Docker (First time using Docker so took my time to understand how it works). I am yet to fully understand how RESTheart benefits from change streams and implements concurrency.\nIt seems like something I can use with Docker in production. It is quick and easy to setup and monitor.\nHowever, I can’t imagine how it will satisfy my use case, where I need to read data from a socket and ingest data on a per customer basis as mentioned in my post.QRadar just forwards data to hostip:port. It keeps forwarding forever and I am not required to send any form of request. I only need to open the port and create a socket that listens on itCan you guide me, if not through code, conceptually to how I may achieve this? It is a simple task but I can’t imagine how RESTheart will help me out here.",
"username": "Vikram_Tatke1"
},
{
"code": "",
"text": "Hi @Vikram_Tatke1Could you confirm if your use case matches with Adding Fowarding Destinations which makes it a good resource for what you are looking for.Based on the above documentation, QRadar appears to make a TCP connection to a specified IP and begin sending events, which can be configured to be in JSON format.I believe your assumption is right about having a middleware to receive the events and send them to MongoDB.Having said that, we don’t really have an expertise on QRadar’s product. Specifically, we cannot confirm what’s the form of this data and the method that forwarding takes when seen from the client side. Perhaps the people in Qradar forum might have a better idea on this?Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
}
]
| Python Async Multithreader TCP Server to receive data from QRadar and ingest into MongoDB | 2022-10-29T21:31:17.489Z | Python Async Multithreader TCP Server to receive data from QRadar and ingest into MongoDB | 2,286 |
null | []
| [
{
"code": "{created: {'$gte': new Date('2022-11-01'), '$lte': new Date('2022-11-03')}}\n",
"text": "I want to perform a query that allows me to find all the objects existing in the collection within certain dates including the boundary dates.Let’s assume that the date range is 2022-11-01 and 2022-11-03, the query would beBy doing the query this way, I only get results that are greater than or equal to “2022-11-01” and lower than “2022-11-03”. Meaning that, even if I’m performing an $lte query, I always get the $lt results.\nSo how do i proceed if i want even the boundary date",
"username": "sai_sankalp"
},
{
"code": "> db.test.find({created: {'$gte': new Date('2022-11-01'), '$lte': new Date('2022-11-03')}})\n[\n { _id: 0, created: ISODate(\"2022-11-01T00:00:00.000Z\") },\n { _id: 1, created: ISODate(\"2022-11-03T00:00:00.000Z\") }\n]\n2022-11-03T00:00:00.000Z2022-11-03T00:00:00.000Z",
"text": "Hi @sai_sankalpI did a quick test and it seems to work correctly in MongoDB 6.0.2:What I think happened is that your date needs to be exactly 2022-11-03T00:00:00.000Z for it to be included in the result. With real world data this might be tricky, especially if you’re recording the date right down to the millisecond. If you can test it yourself, you can perhaps insert a document with the exact value of 2022-11-03T00:00:00.000Z and see if it’s included in the result or not.If you’re still having issues with this, please provide the output you’re seeing, along with example documents that you think should be included in the output. Please also provide your MongoDB version.Best regards\nKevin",
"username": "kevinadi"
},
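A common follow-up to the explanation above (not taken from the thread): if the intent is to include every document dated 2022-11-03 regardless of its time of day, use an exclusive $lt bound on the following day instead of $lte on midnight:

    db.test.find({
      created: { $gte: new Date("2022-11-01"), $lt: new Date("2022-11-04") }
    });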
{
"code": "",
"text": "Thanks @kevinadi,let me check that and reply back",
"username": "sai_sankalp"
}
]
| $lte not working as expected | 2022-11-08T15:13:34.407Z | $lte not working as expected | 1,954 |
null | [
"time-series"
]
| [
{
"code": "db.createCollection(\"telemetry\", {\n timeseries: { timeField: \"ts\", metaField: \"meta\", granularity: \"minutes\" }\n})\n{\n\tts: ISODate(\"2022-01-01T02:28:43\"),\n\tmeta: {\n\t\tentity: ObjectId(\"5f4c605b4cf5037c9067ab22\"),\n\t\tdevice: ObjectId(\"5f4c605b4cf5037c9067ab31\"),\n\t\tname: \"temperature\"\n\t},\n\tvalue: 25.0\n});\ndb.getCollection(\"telemetry\").insert(\n{\n\tts: ISODate(\"2022-01-01T02:28:43\"),\n\tmeta: {\n\t\tentity: ObjectId(\"5f4c605b4cf5037c9067ab22\"),\n\t\tdevice: ObjectId(\"5f4c605b4cf5037c9067ab31\")\n\t},\n\ttemperature: 25.0\n});\n",
"text": "I am modelling a time series collection from an existing bucket style approach (~100GB worth of telemetry per year).Basic scalar telemetry enters from edge devices and is then saved with the following information:Should we add the name of the telemetry into the meta object?Eg:db.getCollection(“telemetry”).insert(Option 1:orOption 2:We current use the Option 1 approach in the hourly buckets for data (each bucket is a document).Thanks",
"username": "Jeremy_Carter"
},
{
"code": "",
"text": "Hi @Jeremy_Carter ,The metadata classifier is mainly used to distinguish the source or clasification of the time based metrics/data.So the way to form really depends on the application use case and requirements of queries…In your application do you store other attribute rather than temperature (eg. Wind , humidity etc…)?If so do you plot or present the values separately or in a group? For example all the values (wind , temp, humidity )for a specific hour or day? Or on the other hand you have one graph or interest in a specific aspect?I am asking this since the first method makes more sense if you query on a specific metric type since its already classified under the meta fields. If you need all data for a specific point I would go with approach 2.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Firstly,Thank you very much for your feedback. We have 40 types of sensors producing 10-50 values. We also support fully custom types which are user created. Because its not always “temperature” or “humidity” but rather infinite customisable metrics like “relay-1-status”, “relay-2-status” perhaps it is better to have the metric in the meta data.The use case is industrial IoT if that helps with your feedback.",
"username": "Jeremy_Carter"
},
{
"code": "",
"text": "Is there a secondly ?How is the data queried and presented?",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Data is queried by eitherand obviously a time window.",
"username": "Jeremy_Carter"
},
{
"code": "",
"text": "Hi @Jeremy_Carter ,According to the description option 1 seems as the better one.All predicates are in meta which when indexed will be optimal.I would recommend having an index of each of the 3 fields including time in each and the combination of the 2 that used in the compound query .Thanks\nPavel",
"username": "Pavel_Duchovny"
}
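A sketch of the indexing recommendation above for the telemetry collection defined earlier; whether all three compound indexes are worthwhile depends on which query shapes actually occur in the application:

    // secondary indexes on the metaField subfields plus the timeField
    db.telemetry.createIndex({ "meta.entity": 1, ts: 1 });
    db.telemetry.createIndex({ "meta.device": 1, ts: 1 });
    db.telemetry.createIndex({ "meta.entity": 1, "meta.device": 1, ts: 1 });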
]
| Modelling time series data | 2022-10-27T03:08:51.875Z | Modelling time series data | 2,306 |
null | [
"aggregation",
"queries",
"transactions"
]
| [
{
"code": "{\n \"_id\": \"TT3898\",\n \"ticketActivity\": [\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1649694600035\"\n }\n },\n \"author\": \"Nana Ama\",\n \"action\": \"logTicket\",\n \"text\": \"has created the ticket\",\n \"type\": \"TT3898\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1649751161650\"\n }\n },\n \"author\": \"vbokai\",\n \"action\": \"assignee\",\n \"text\": \"has changed the assignee to\",\n \"type\": \"vbokai\",\n \"note\": \"\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1649752358178\"\n }\n },\n \"author\": \"lokesh\",\n \"action\": \"comment\",\n \"text\": \"has commented\",\n \"type\": \"Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611.\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1649752372104\"\n }\n },\n \"author\": \"dinesh\",\n \"action\": \"status\",\n \"text\": \"updated the status to\",\n \"type\": \"resolved\",\n \"note\": \"Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611.\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1649752605548\"\n }\n },\n \"author\": \"nagesh\",\n \"action\": \"assignee\",\n \"text\": \"has sent sms to customer\",\n \"type\": \"Dear Valued Customer, Kindly contact ECG for help on your ECG payment. The payment went to E.C.G. So, cannot be reversed by MTN. You can visit your branch or call their Call Center on 0302611611. Thank you.\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1650130374339\"\n }\n },\n \"author\": \"vbokai\",\n \"action\": \"status\",\n \"text\": \"updated the status to\",\n \"type\": \"open\",\n \"note\": \"\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1651755688760\"\n }\n },\n \"author\": \"hwahab\",\n \"action\": \"assignee\",\n \"text\": \"has changed the assignee to\",\n \"type\": \"rmolstrevold\",\n \"note\": \"\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1655460430415\"\n }\n },\n \"author\": \"rmolstrevold\",\n \"action\": \"status\",\n \"text\": \"updated the status to\",\n \"type\": \"resolved\",\n \"note\": \"Transaction was successfully processed to ECG. Customer is advice to contact ecg for assistance.\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1655460564092\"\n }\n },\n \"author\": \"rmolstrevold\",\n \"action\": \"assignee\",\n \"text\": \"has sent sms to customer\",\n \"type\": \"Please be informed that the ECG transaction you made on the 08/02/22 with transaction id 15978305876 was successfully processed . Kindly contact ECG for any assistance on this transaction.\"\n }\n ]\n}\n{\n \"_id\": \"TT3898\",\n \"ticketActivity\": [\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1649694600035\"\n }\n },\n \"author\": \"Nana Ama\",\n\t \"assignedByUser\" : \"Nana Ama\",\n\t \"assignedToUser\" : \"vbokai\",\n \"action\": \"logTicket\",\n \"text\": \"has created the ticket\",\n \"type\": \"TT3898\"\n\t \"comments\": \"lokesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611. \n\t\t\t\t\t\"dinesh:Per instructions, SMS sent to customer to contact ECG for help. 
The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611\".\n\t\t\t\t\t\"nagesh: Dear Valued Customer, Kindly contact ECG for help on your ECG payment. The payment went to E.C.G. So, cannot be reversed by MTN. You can visit your branch or call their Call Center on 0302611611. Thank you.\"\n\t\t\t\t\t\"rmolstrevold:Transaction was successfully processed to ECG. Customer is advice to contact ecg for assistance.\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1649751161650\"\n }\n },\n \"author\": \"vbokai\",\n\t \"assignedByUser\" : \"vbokai\",\n\t \"assignedToUser\" : \"lokesh\",\n \"action\": \"assignee\",\n \"text\": \"has changed the assignee to\",\n \"type\": \"vbokai\",\n \"note\": \"\"\n\t \"comments\": \"lokesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611. \n\t\t\t\t\t\"dinesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611\".\n\t\t\t\t\t\"nagesh: Dear Valued Customer, Kindly contact ECG for help on your ECG payment. The payment went to E.C.G. So, cannot be reversed by MTN. You can visit your branch or call their Call Center on 0302611611. Thank you.\"\n\t\t\t\t\t\"rmolstrevold:Transaction was successfully processed to ECG. Customer is advice to contact ecg for assistance.\"\n\t\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1649752358178\"\n }\n },\n \"author\": \"lokesh\",\n\t \"assignedByUser\" : \"lokesh\",\n\t \"assignedToUser\" : \"dinesh\",\n \"action\": \"comment\",\n \"text\": \"has commented\",\n \"type\": \"Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611.\"\n\t \"comments\": \"lokesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611. \n\t\t\t\t\t\"dinesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611\".\n\t\t\t\t\t\"nagesh: Dear Valued Customer, Kindly contact ECG for help on your ECG payment. The payment went to E.C.G. So, cannot be reversed by MTN. You can visit your branch or call their Call Center on 0302611611. Thank you.\"\n\t\t\t\t\t\"rmolstrevold:Transaction was successfully processed to ECG. Customer is advice to contact ecg for assistance.\"\n\t},\n\t\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1649752372104\"\n }\n },\n \"author\": \"dinesh\",\n\t \"assignedByUser\" : \"dinesh\",\n\t \"assignedToUser\" : \"nagesh\",\n \"action\": \"status\",\n \"text\": \"updated the status to\",\n \"type\": \"resolved\",\n \"note\": \"Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611.\"\n\t \"comments\": \"lokesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. 
Customer can visit the Branch or call their Call Center on 0302611611. \n\t\t\t\t\t\"dinesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611\".\n\t\t\t\t\t\"nagesh: Dear Valued Customer, Kindly contact ECG for help on your ECG payment. The payment went to E.C.G. So, cannot be reversed by MTN. You can visit your branch or call their Call Center on 0302611611. Thank you.\"\n\t\t\t\t\t\"rmolstrevold:Transaction was successfully processed to ECG. Customer is advice to contact ecg for assistance.\"\n\t},\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1649752605548\"\n }\n },\n \"author\": \"nagesh\",\n\t \"assignedByUser\" : \"nagesh\",\n\t \"assignedToUser\" : \"vbokai\",\n \"action\": \"assignee\",\n \"text\": \"has sent sms to customer\",\n \"type\": \"Dear Valued Customer, Kindly contact ECG for help on your ECG payment. The payment went to E.C.G. So, cannot be reversed by MTN. You can visit your branch or call their Call Center on 0302611611. Thank you.\"\n\t\t\"comments\": \"lokesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611. \n\t\t\t\t\t\"dinesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611\".\n\t\t\t\t\t\"nagesh: Dear Valued Customer, Kindly contact ECG for help on your ECG payment. The payment went to E.C.G. So, cannot be reversed by MTN. You can visit your branch or call their Call Center on 0302611611. Thank you.\"\n\t\t\t\t\t\"rmolstrevold:Transaction was successfully processed to ECG. Customer is advice to contact ecg for assistance.\"\n\t},\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1650130374339\"\n }\n },\n \"author\": \"vbokai\",\n\t \"assignedByUser\" : \"vbokai\",\n\t \"assignedToUser\" : \"hwahab\",\n \"action\": \"status\",\n \"text\": \"updated the status to\",\n \"type\": \"open\",\n \"note\": \"\"\n\t \"comments\": \"lokesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611. \n\t\t\t\t\t\"dinesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611\".\n\t\t\t\t\t\"nagesh: Dear Valued Customer, Kindly contact ECG for help on your ECG payment. The payment went to E.C.G. So, cannot be reversed by MTN. You can visit your branch or call their Call Center on 0302611611. Thank you.\"\n\t\t\t\t\t\"rmolstrevold:Transaction was successfully processed to ECG. Customer is advice to contact ecg for assistance.\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1651755688760\"\n }\n },\n \"author\": \"hwahab\",\n\t \"assignedByUser\" : \"hwahab\",\n\t \"assignedToUser\" : \"rmolstrevold\",\n \"action\": \"assignee\",\n \"text\": \"has changed the assignee to\",\n \"type\": \"rmolstrevold\",\n \"note\": \"\"\n\t \"comments\": \"lokesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. 
Customer can visit the Branch or call their Call Center on 0302611611. \n\t\t\t\t\t\"dinesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611\".\n\t\t\t\t\t\"nagesh: Dear Valued Customer, Kindly contact ECG for help on your ECG payment. The payment went to E.C.G. So, cannot be reversed by MTN. You can visit your branch or call their Call Center on 0302611611. Thank you.\"\n\t\t\t\t\t\"rmolstrevold:Transaction was successfully processed to ECG. Customer is advice to contact ecg for assistance.\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1655460430415\"\n }\n },\n \"author\": \"rmolstrevold\",\n\t \"assignedByUser\" : \"rmolstrevold\",\n\t \"assignedToUser\" : \"rmolstrevold\",\n \"action\": \"status\",\n \"text\": \"updated the status to\",\n \"type\": \"resolved\",\n \"note\": \"Transaction was successfully processed to ECG. Customer is advice to contact ecg for assistance.\"\n\t \"comments\": \"lokesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611. \n\t\t\t\t\t\"dinesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611\".\n\t\t\t\t\t\"nagesh: Dear Valued Customer, Kindly contact ECG for help on your ECG payment. The payment went to E.C.G. So, cannot be reversed by MTN. You can visit your branch or call their Call Center on 0302611611. Thank you.\"\n\t\t\t\t\t\"rmolstrevold:Transaction was successfully processed to ECG. Customer is advice to contact ecg for assistance.\"\n },\n {\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1655460564092\"\n }\n },\n \"author\": \"rmolstrevold\",\n\t \"assignedByUser\" : \"rmolstrevold\",\n\t \"assignedToUser\" : \"\",\n \"action\": \"assignee\",\n \"text\": \"has sent sms to customer\",\n \"type\": \"Please be informed that the ECG transaction you made on the 08/02/22 with transaction id 15978305876 was successfully processed . Kindly contact ECG for any assistance on this transaction.\"\n\t \"comments\": \"lokesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611. \n\t\t\t\t\t\"dinesh:Per instructions, SMS sent to customer to contact ECG for help. The transactions reflected with ECG hence cannot be reversed by MTN. Customer can visit the Branch or call their Call Center on 0302611611\".\n\t\t\t\t\t\"nagesh: Dear Valued Customer, Kindly contact ECG for help on your ECG payment. The payment went to E.C.G. So, cannot be reversed by MTN. You can visit your branch or call their Call Center on 0302611611. Thank you.\"\n\t\t\t\t\t\"rmolstrevold:Transaction was successfully processed to ECG. Customer is advice to contact ecg for assistance.\"\n }\n ]\n}\n",
"text": "Hi Team,I need help from you regarding array object values,Here is sample document,from the above document there are too many objects in ticketActivity …In every object I need separate fields names AssignedByUser, AssignedToUser and comments.AssignedByUser will be the “ticketActivity.author” as mentioned in the same object,AssignedToUser will be the “ticketActivity.author” of the immediate next object .Note: In the last object assignedToUser will be null.Comments will be the “ticketActivity.note” with condition where note filed exists in the object concat of “ticketActivity.author”, “:”, “ticketActivity.note” , it shell be added in all the objects as same,The expected output of aggregate query as below:Please help me on this as soon as possible.",
"username": "Lokesh_Reddy1"
},
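No answer was posted in this thread; one way to derive assignedByUser/assignedToUser from consecutive array elements is sketched below (the collection name tickets is an assumption, and the comments concatenation is left out for brevity):

    db.tickets.aggregate([
      { $set: {
          ticketActivity: {
            $map: {
              input: { $range: [0, { $size: "$ticketActivity" }] },
              as: "i",
              in: {
                $mergeObjects: [
                  { $arrayElemAt: ["$ticketActivity", "$$i"] },
                  {
                    assignedByUser: { $arrayElemAt: ["$ticketActivity.author", "$$i"] },
                    // the out-of-range index on the last element simply leaves the field unset
                    assignedToUser: { $arrayElemAt: ["$ticketActivity.author", { $add: ["$$i", 1] }] }
                  }
                ]
              }
            }
          }
      } }
    ]);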
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How add field values from one nested array to another nested array | 2022-11-02T08:26:55.975Z | How add field values from one nested array to another nested array | 1,446 |
null | [
"java"
]
| [
{
"code": "{\n \"_id\": \"myId\",\n \"plannedMap\": {\n \"entryKey_1\": \"2022-10-10T20:32:20Z\",\n \"entryKey_2\": \"2022-10-11T20:32:20Z\"\n },\n\"executedMap\": {\n \"2022-10-11T20:32:20Z\": [\n {\n \"calendarEntryKey\": \"entryKey_2\",\n \"count\": 0,\n \"quantity\": 0,\n \"executed\": true\n }\n}\n",
"text": "Hello everyone.I’m quite new working with Mongo and I have a question that I wasn’t able to solve reading info and finding similar issues.In my application we had an initial bad design and we used Java maps to model our document. This wouldn’t be a problem if it was not because we put in the key a dynamic value instead of a fixed one and put inside the key and the value.I will clarify this with an example:What I need is to get each combination of entryKeyX - date - executedValue and save it in another collection to remove this one.The problem I have is how to get the key and value from plannedMap. With this, i could go to executedMap and retrieve the values.Or maybe it’s easier get them directly from executedMap, but i didn’t find a way to do it without knowing the key of the map.Thanks in advance.\nRegards.",
"username": "Pablo_Covarrubias"
},
{
"code": "c.aggregate( { \"$set\" : { \"plannedMap\" : { $objectToArray : \"$plannedMap\"}}})\n{ _id: 'myId',\n plannedMap: \n [ { k: 'entryKey_1', v: '2022-10-10T20:32:20Z' },\n { k: 'entryKey_2', v: '2022-10-11T20:32:20Z' } ],\n executedMap: \n { '2022-10-11T20:32:20Z': \n [ { calendarEntryKey: 'entryKey_2',\n count: 0,\n quantity: 0,\n executed: true } ] } }\n",
"text": "put in the key a dynamic value instead of a fixed one and put inside the key and the value.Indeed a bad design. I am pretty sure there is anti-pattern for that. The remedy is the attribute pattern. The flexible schema nature of MongoDB should allow us to migrate toward a better implementation easily.In the mean time, I think that $objectToArray might provide you a way to find the dynamic keys you are looking for. Using $objectToArray for plannedMap in you sample input document gives:",
"username": "steevej"
},
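A possible continuation of the $objectToArray idea toward the "save each combination in another collection" goal; the collection names plans and migrated are assumptions, and executions will simply be missing when a planned date has no entry in executedMap:

    db.plans.aggregate([
      { $set: {
          plannedMap:  { $objectToArray: "$plannedMap" },
          executedMap: { $objectToArray: "$executedMap" }
      } },
      { $unwind: "$plannedMap" },
      { $project: {
          _id: 0,
          sourceId: "$_id",
          entryKey: "$plannedMap.k",
          date: "$plannedMap.v",
          executions: {
            $first: {
              $map: {
                input: {
                  $filter: {
                    input: "$executedMap",
                    as: "e",
                    cond: { $eq: ["$$e.k", "$plannedMap.v"] }
                  }
                },
                as: "e",
                in: "$$e.v"
              }
            }
          }
      } },
      { $merge: { into: "migrated" } }
    ]);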
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to retrieve keys and values from a dynamic map | 2022-11-08T10:17:35.547Z | How to retrieve keys and values from a dynamic map | 2,840 |
null | [
"node-js",
"app-services-data-access"
]
| [
{
"code": "",
"text": "Can I create two MongoDB instances in the same Virtual Machine which will run of different port numbers and will have different versions of mongo?\nFor example lets us suppose a default Mongo DB instance will run as a service in the port number of 27017 with the Mongo version of 4.4.2, I wish to run Mongo version of 6.0.2 in port number 27020 in the same virtual machine as a service. Is it possible?\nIf possible, what is the load on the VM regarding the Huge flow of data, Loss of data and is it a sustainable method to run on a production level Virtual Machine?",
"username": "Ch_Sadvik"
},
{
"code": "",
"text": "Can I create two MongoDB instances in the same Virtual Machine which will run of different port numbers and will have different versions of mongo?\nFor example lets us suppose a default Mongo DB instance will run as a service in the port number of 27017 with the Mongo version of 4.4.2, I wish to run Mongo version of 6.0.2 in port number 27020 in the same virtual machine as a service. Is it possible?I am not Windows savvy, but you can definitively run more than one instances, each listening on different port. And if you can specify the full path name of the mongod executable when defining the service, the yes you may have different version. Note that instances have to store their data and logs in different directories.As forwhat is the load on the VM regarding the Huge flow of dataThe load will be proportionally huge.Loss of dataTotal lost of data if not running replica set.If the reason of running on a VM is to isolate the host OS from the OS running mongod, then it is fine toto run on a production level Virtual Machine?But if it is to share hardware between multiple instances running on multiple VM on the same hardware then it is not something I would do.",
"username": "steevej"
}
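A sketch of what two independent instances can look like on a Unix-like host; the ports, paths and install locations are only examples, and on Windows the same idea maps to two services pointing at different binaries, ports, dbPaths and log files:

    # instance 1: the default 4.4.2 binary on the standard port
    mongod --port 27017 --dbpath /srv/mongo44/data --logpath /srv/mongo44/mongod.log --fork

    # instance 2: a separately installed 6.0.2 binary with its own port and directories
    /opt/mongodb-6.0.2/bin/mongod --port 27020 --dbpath /srv/mongo60/data --logpath /srv/mongo60/mongod.log --fork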
]
| Creation of two MongoDB instances in the same server with different versions as a service | 2022-11-07T04:11:45.262Z | Creation of two MongoDB instances in the same server with different versions as a service | 4,137 |
null | [
"aggregation"
]
| [
{
"code": "[\n {\n \"$match\": {\n \"$and\": [\n {\n \"available\": {\n \"$in\": [\n null,\n false\n ]\n }\n },\n {\n \"newAcquisition\": {\n \"$in\": [\n null,\n false\n ]\n }\n },\n {\n \"$or\": [\n {\n \"product._id\": \"62d426209a80df0d3d9ca38e\"\n },\n {\n \"category._id\": \"6305339622cc505054e33353\"\n },\n {\n \"maker.company\": \"630fc3f4d28d4cead8db9054\"\n }\n ]\n }\n ]\n }\n },\n {\n \"$addFields\": {\n \"id\": {\n \"$toString\": \"$_id\"\n }\n }\n },\n {\n \"$sort\": {\n \"_id\": -1\n }\n },\n {\n \"$facet\": {\n \"data\": [\n {\n \"$count\": \"count\"\n },\n {\n \"$addFields\": {\n \"page\": 1\n }\n },\n {\n \"$addFields\": {\n \"limit\": 25\n }\n }\n ],\n \"docs\": [\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 25\n }\n ]\n }\n },\n {\n \"$project\": {\n \"docs\": 1,\n \"total\": {\n \"$arrayElemAt\": [\n \"$data.total\",\n 0\n ]\n },\n \"page\": {\n \"$arrayElemAt\": [\n \"$data.page\",\n 0\n ]\n },\n \"limit\": {\n \"$arrayElemAt\": [\n \"$data.limit\",\n 0\n ]\n }\n }\n }\n]\n\n",
"text": "Hello guys, I’ve been trying to create an index to the following query, but at the moment without any good results. I’ve tried to put the main properties as compound indexes and I’ve tried including the properties inside the $or operator as individuals together the main properties, it looks don’t work, so I hope you can give any clue, whichever info would be well received.",
"username": "PedroFumero"
},
{
"code": "",
"text": "Read https://www.mongodb.com/community/forums/t/update-nested-objects-array-to-string-array/197588/2?u=steevej about storing _id as string (like product._id, category._id, …) instead of ObjectId for references.Share the index you tried.About flags, like your fields available and newAcquisition, I usually try to find meaningful alternative to Boolean flags. In the your case, I would use available_date and acquisition_date. But when I stick with Boolean I like to use partial index using the Boolean in my expression. For example, if most use-cases involve available:true, they would use a much smaller index.You might gain by doing $sort:{_id:1} first because the $addFields might prevent the use of the _id:1 index, as in most case when documents are altered, no indexes can be used to $match or $sort.In your final $project, you access $data.total, but you do not have such field in your data $facet",
"username": "steevej"
}
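A sketch of the partial-index idea mentioned above, with field names taken from the posted pipeline and a hypothetical collection name; note that a query only uses a partial index when its filter is at least as restrictive as the partialFilterExpression, so the $in: [null, false] predicates would have to be narrowed to plain equality for this to apply:

    db.items.createIndex(
      { "product._id": 1, _id: -1 },
      { partialFilterExpression: { available: false, newAcquisition: false } }
    );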
]
| Create an index over query with aggregate pipeline | 2022-11-05T10:04:33.668Z | Create an index over query with aggregate pipeline | 2,071 |
null | [
"graphql",
"react-js"
]
| [
{
"code": "Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/application-0-emfej/graphql. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 204.import React, { useState, useEffect } from \"react\";\nimport \"./App.css\";\nimport { withAuthenticator } from \"@aws-amplify/ui-react\";\nimport Amplify from \"aws-amplify\";\nimport awsExports from \"./aws-exports\";\nimport '@aws-amplify/ui-react/styles.css';\nimport { ApolloClient, ApolloProvider, createHttpLink, InMemoryCache, gql } from '@apollo/client';\nimport { setContext } from '@apollo/client/link/context';\nimport CreateOrganisation from \"./CreateOrganisation\"\n\nAmplify.configure(awsExports);\n\n\nfunction App({ signOut, user }) {\n \n\nconst httpLink = createHttpLink({\n uri: 'https://eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/application-0-emfej/graphql',\n});\n\nconst authLink = setContext((_, { headers }) => {\n const token = user.signInUserSession.idToken.jwtToken;\n return {\n headers: {\n 'Access-Control-Allow-Origin': '*',\n jwtTokenString: token ? `${token}` : \"\",\n }\n }\n});\n\nconst client = new ApolloClient({\n link: authLink.concat(httpLink),\n cache: new InMemoryCache()\n});\n\n\n return (\n <ApolloProvider client={client}>\n <CreateOrganisation />\n </ApolloProvider>\n )\n}\nexport default withAuthenticator(App);\n",
"text": "Hello,I get a CORS error when trying to connect to the atlas graphql endpoint\nAccording to @Sumedha_Mehta1 , I would not get the CORS error if i was using the graphql : Mongo Data API and CORS - #4 by Stennie\nThe error :\nCross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/application-0-emfej/graphql. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 204.What are the other possibilities?\nhas someone got the same issue?My code :Best regards,",
"username": "cyril_moreau"
},
{
"code": "",
"text": "Rather than calling from React Client, you should try calling from Server side NodeJS\nor else add Proxy to your web container\nfor Nginx you can try like this devops-demo/default.conf at master · psramkumar/devops-demo · GitHub",
"username": "psram"
},
{
"code": "",
"text": "Thank you @psramI was looking for a built-in solution\nI have found in the “app settings” tab → Allowed Request Origins\n\nimage1903×556 81.2 KB\nI was wondering if this could be the built-in solution but i dont succeed to make it work.Any solution provided in Mongo atlas settings?Best regards,\nCyril",
"username": "cyril_moreau"
},
{
"code": "server {\n listen 80;\n\n location /graphql { \n proxy_ssl_session_reuse off;\n proxy_ssl_server_name on;\n proxy_http_version 1.1;\n proxy_set_header Origin http://localhost:3001; # the url of my react app\n proxy_hide_header Access-Control-Allow-Origin;\n add_header Access-Control-Allow-Origin $http_origin;\n add_header Access-Control-Allow-Headers *;\n proxy_pass https://eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/application-0-xxx/graphql; \n }\n\n error_page 500 502 503 504 /50x.html;\n location = /50x.html {\n root /usr/share/nginx/html;\n }\n}\n\n\nconst httpLink = createHttpLink({\n uri: 'http://localhost/graphql',\n});\n\nconst authLink = setContext((_, { headers }) => {\n const token = user.signInUserSession.idToken.jwtToken;\n console.log(\"token\",token)\n return {\n headers: {\n ...headers,\n jwtTokenString: token ? `${token}` : \"\",\n }\n }\n});\n\nconst client = new ApolloClient({\n link: authLink.concat(httpLink),\n cache: new InMemoryCache()\n});\n",
"text": "I have followed @psram suggestion and used his nginx docker container.\nJust for other people that may want to do the same thing i let my default.conf file here :\nparameters to change :As i am working in my laptop using docker. I launch 2 docker containers :In your react app code you would access the graphql endpoint this way :And you would access your react app through the browser at http://localhost:3001I hope this will help\nBest regards",
"username": "cyril_moreau"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| CORS error when connecting to MongoDB Atlas using Graphql | 2022-11-05T23:04:03.575Z | CORS error when connecting to MongoDB Atlas using Graphql | 3,832 |
null | [
"java"
]
| [
{
"code": "",
"text": "I need to build a desktop Java standalone app to process a data file. A requirement is that a file can have data with a size of up to 100 GB. It is not easy to divide the data into several portions. So, I need to process the data in one shot. To do so, I need to store data in various stages in a DB. That leads to 200 GB of data storage. I don’t know how much memory usage is required for the size of data in MongoDB with an assumption of an enouge hard drive disk. Is MongoDB suitable for this usage? Thanks for your advice.",
"username": "V_W"
},
{
"code": "mgeneratejsexplain()",
"text": "Welcome to the MongoDB Community @V_W !There is no prescriptive answer for how much RAM is required as this really depends on your use case, system resources (storage type and speed), workload (applications competing for the same resources), and performance expectations. I assume you are talking about 100GB of uncompressed data, which could be significantly less storage size if the data is reasonably compressible.I recommend testing your outcomes with some representative data. You may find a data generation tool like mgeneratejs helpful in this regard. If you have some existing test data, you can also extend this using a recipe like mongodb - duplicate a collection into itself - Stack Overflow.If you have specific scenarios that could perhaps be tuned (for example ingesting, modelling, or updating data) you could start a discussion with more details such as your specific MongoDB driver & server versions, a slow command or query (and associated explain() output), a snippet of code, and how you are measuring the execution time.I also recommend reviewing schema design patterns (and anti-patterns) that may apply to your use case:Building with Patterns: A Summary | MongoDB BlogA Summary of Schema Design Anti-Patterns and How to Spot Them | MongoDBRegards,\nStennie",
"username": "Stennie_X"
},
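A rough sketch of the test-data approach mentioned above, in case it helps: the template fields, URI, and document count below are made up, and it assumes mgeneratejs (npm) and mongoimport are installed.

```sh
# Generate 100k random documents and pipe them straight into a test collection.
# Template operators like $name / $age / $email come from the mgeneratejs docs;
# adjust the template so the shape and size roughly match your real data.
npm install -g mgeneratejs
mgeneratejs '{"name": "$name", "age": "$age", "email": "$email"}' -n 100000 \
  | mongoimport --uri="mongodb://localhost:27017/loadtest" --collection=samples
```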
{
"code": "",
"text": "Hi, Stennie,Thanks very much for your information.With the data samples I have, I haven’t noticed a big memory usage jump for 500 KB of byte array along with the processed data. I use MongoDB for temporary data storage with a plain byte array, and collection of data entities. So, I don’t need any complex patterns. With Spring Data MongoDB, I won’t notice any native MongoDB features for reading/writing data. Issues may raise when I have a much large data sample.",
"username": "V_W"
}
]
| Is MongoDB Suitable for 200 GB Data On A Desktop? | 2022-10-31T16:34:20.162Z | Is MongoDB Suitable for 200 GB Data On A Desktop? | 1,654 |
null | [
"dot-net"
]
| [
{
"code": "document: {\n object: [{\n field1: \"\"\n field2: \"\"\n }]\n}\n [MongoDB.Bson.Serialization.Attributes.BsonIgnoreExtraElements]\n public class Document\n {\n#nullable enable\n public Object? object {get;set;}\n#nullable disable\n }\n\n [MongoDB.Bson.Serialization.Attributes.BsonIgnoreExtraElements]\n public class Object\n {\n [BsonRepresentation(BsonType.String)]\n [BsonDefaultValue(\"\")]\n public string field1 { get; set; } = \"\";\n\n [BsonRepresentation(BsonType.String)]\n [BsonDefaultValue(\"\")]\n public string field2 { get; set; } = \"\";\n\n#nullable enable\n [BsonRepresentation(BsonType.String)]\n [BsonDefaultValue(\"\")]\n public string? field3 { get; set; } = \"\";\n#nullable disable\n }\n#nullable enable\n public string? field3 { get; set; } = \"\";\n#nullable disable\n",
"text": "I am hitting an issue where I have an added string element in my document inside an object that may not exist. When deserializing, All I get is “Object reference not set to an instance of an object.”. I’ve proven it’s the additional field by removing the field from my model and then it deserializes. I need a way to tell Mongo during deserialization that the element may be null or undefined and have it accept it. Is this possible?Example Model:C# Model Example:I’ve tried removing the BSON attributes as well so the field looked like this:That did not seem to work any better. Any direction would be appreciated.NOTE: I updated to MongoDB.Driver version 2.18.0 to make sure that would not resolve it.",
"username": "gvanriper"
},
{
"code": "[BsonIgnoreIfNull]using System;\nusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization.Attributes;\nusing MongoDB.Driver;\n\nvar client = new MongoClient();\nvar db = client.GetDatabase(\"test\");\nvar coll = db.GetCollection<Document>(\"sample_documents\");\n\ncoll.DeleteMany(Builders<Document>.Filter.Empty);\n\ncoll.InsertOne(new Document {Title = \"Brave New World\"});\ncoll.InsertOne(new Document {Title = \"Much Ado About Nothing\", Optional = \"The bard rocks!\"});\n\nvar query = coll.AsQueryable();\nforeach (var doc in query.ToList())\n{\n Console.WriteLine(doc);\n}\n\nclass Document\n{\n public ObjectId Id { get; set; }\n\n public string Title { get; set; }\n\n [BsonIgnoreIfNull]\n public string Optional { get; set; }\n\n public override string ToString() => $\"{Id}: {Title} ({Optional ?? \"empty\"})\";\n}\n63659e1a354ffbbfa202286b: Brave New World (empty)\n63659e1a354ffbbfa202286c: Much Ado About Nothing (The bard rocks!)\ntest> db.sample_documents.find()\n[\n {\n _id: ObjectId(\"63659e1a354ffbbfa202286b\"),\n Title: 'Brave New World'\n },\n {\n _id: ObjectId(\"63659e1a354ffbbfa202286c\"),\n Title: 'Much Ado About Nothing',\n Optional: 'The bard rocks!'\n }\n]\n",
"text": "Hi, @gvanriper,Welcome to the MongoDB Community Forums. I understand that you’re trying to deserialize a document into a POCO (aka C# object) that has an optional field. This is possible using [BsonIgnoreIfNull].The output of this program is:And the contents of the database is:Hopefully this answers your question.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Thanks, @James_Kovacs . Can confirm this worked. I had sworn I tried that but who knows? Brain fatigue. Thank you for the solution!",
"username": "gvanriper"
},
{
"code": "",
"text": "Glad I could help. Thanks for letting us know that the solution worked for you.",
"username": "James_Kovacs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| C# - Deserialize String That May Not Exist | 2022-11-04T15:21:14.049Z | C# - Deserialize String That May Not Exist | 2,581 |
null | [
"flutter"
]
| [
{
"code": "\"Match\": [\n {\n \"name\": \"owner\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": {\n \"$or\": [\n {\n \"user1Id\": {\n \"%stringToOid\": \"%%user.id\"\n }\n },\n {\n \"user2Id\": {\n \"%stringToOid\": \"%%user.id\"\n }\n }\n ]\n }\n }\n ],\n",
"text": "When using Flutter Realm I can see following error in my logs:using sync incompatible role “owner” for table “Match”. this role will default to deny all access. consider changing this role to be sync compatible (ProtocolErrorCode=201)I feel like I have not seen this error before and my codebase already grew too much to narrow the issue down.Does anyone experiences this before or know what this error means? My memory tells me, that I did not get this error in the past.This is the rule:",
"username": "Thomas_Anderl"
},
{
"code": "",
"text": "Hi,\nThanks for reporting. We have identified a bug and will be fixing it.",
"username": "Lyubomir_Blagoev"
},
{
"code": "",
"text": "I have started seeing this as well, using App Services with the Swift SDK.",
"username": "Chris_Wilson"
}
]
| Sync compatible rows | 2022-11-05T20:33:23.772Z | Sync compatible rows | 1,860 |
null | [
"dot-net"
]
| [
{
"code": "",
"text": "There are many columns. (3,000 columns)\nDoes it consume this much memory?\nRealm Studio behavior (scrolling, etc.) is seldom.ps.I tried to upload an image, but it doesn’t upload.\nsorry.",
"username": "lasidolasido"
},
{
"code": "",
"text": "Are you asking a question or making a statement?I am not sure what the limit of Realm Studio is, if any but 3000 properties in one model is a LOT of properties - that may need an object design change.Is Realm Studio crashing or having some kind of issue? Maybe if you update the question, what’s being asked will be more clear.",
"username": "Jay"
}
]
| Is it true? Realm Studio | 2022-11-08T03:32:14.812Z | Is it true? Realm Studio | 1,121 |
null | []
| [
{
"code": "libcrypto.so.1.1mongodb-shell-linux-x86_64-ubuntu2004-5.0.9/mongodb-linux-x86_64-ubuntu2004-5.0.9$ ./bin/mongo 192.168.1.20:27017\n./bin/mongo: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory\n",
"text": "I’m trying to use mongo shell on Ubuntu 22.04, but libcrypto.so.1.1 isn’t available.That library doesn’t exist in any package for Jammy:https://packages.ubuntu.com/search?suite=jammy&arch=any&mode=exactfilename&searchon=contents&keywords=libcrypto.so.1.1Is there a solution for this?",
"username": "John_Manko"
},
{
"code": "",
"text": "mongosh is available as a PPA for Ubuntu 20.04 (Focal) and Ubuntu 18.04 (Bionic).Wait for a package that supports 22.04.Run a container in the meantime ?",
"username": "chris"
},
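As a stop-gap on 22.04, one option (an illustrative sketch only, assuming Docker is installed and the target host is reachable from the container) is to run the shell from the official image instead of installing it locally:

```sh
# Both the legacy mongo shell and mongosh ship in the mongo:5.0 image;
# the IP below is the server from the original post.
docker run -it --rm mongo:5.0 mongo "mongodb://192.168.1.20:27017"
```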
{
"code": "",
"text": "Do you know when the package will release to support 22.04?",
"username": "Yavatan"
}
]
| Ubuntu 22.04 and mssing libcrypto.so.1.1 | 2022-06-06T19:50:25.579Z | Ubuntu 22.04 and mssing libcrypto.so.1.1 | 11,486 |
null | [
"dot-net"
]
| [
{
"code": "",
"text": "I am using MongoDB C# driver “2.13.0” with MongoDB.Bson version “2.13.0” to connect and insert and retrieve data from MongoDB.Now as per official website the document limit is 16 MB https://www.mongodb.com/docs/manual/reference/limits/But when I am trying to insert data which is in KB’s if I put it in Notepad but the code is complaining the size limit.System.FormatException: Size 16948171 is larger than MaxDocumentSize 16793600But when I am using file to insert from MongoDB shell using command it’s working fine. Means it’s not complaining for the size there. Written first delete and then insert query.",
"username": "shekhar_kumar2"
},
{
"code": "[Fact]\npublic void InsertOne_with_largest_possible_document_should_work()\n{\n var collection = CreateCollection(); // returns IMongoCollection<C>\n\n var documentWithEmptyString = new C { Id = 1, S = \"\" };\n var documentOverhead = documentWithEmptyString.ToBson().Length; // equals 22 bytes\n\n var maxDocumentSize = 16777216; // 16 MB\n var maxStringSize = maxDocumentSize - documentOverhead; // equals 16777194 chars\n var largestPossibleDocument = new C { Id = 1, S = new string('x', maxStringSize) };\n largestPossibleDocument.ToBson().Length.Should().Be(maxDocumentSize);\n\n collection.InsertOne(largestPossibleDocument);\n var roundTrippedDocument = collection.FindSync(\"{}\").Single();\n roundTrippedDocument.ToBson().Length.Should().Be(maxDocumentSize);\n\n var documentSizeOnServer = collection\n .Aggregate()\n .AppendStage<BsonDocument>(\"{ $project : { documentSize : { $bsonSize : '$$ROOT' }, _id : 0 } }\")\n .Single();\n documentSizeOnServer[\"documentSize\"].AsInt32.Should().Be(maxDocumentSize);\n}\nprivate class C\n{\n public int Id { get; set; }\n public string S { get; set; }\n}\n",
"text": "Thanks for reporting this. I attempted to reproduce it but was unable to. According to my attempt the calculation is spot on.This is the test I wrote:This is the class definition of C:",
"username": "Robert_Stam"
},
{
"code": "",
"text": "I did just notice that in your error message the MaxDocumentSize is 16793600, whereas I was expecting 16777216.So perhaps my test to reproduce does not match what you are doing.Can you provide more information? Ideally you could provide us with a small standalone program that minimally reproduces the problem.",
"username": "Robert_Stam"
}
]
| C# MongoDB driver calculating data size is wrong | 2022-11-08T09:55:49.141Z | C# MongoDB driver calculating data size is wrong | 1,713 |
null | [
"compass",
"php"
]
| [
{
"code": "try\n{\n$m = new MongoClient(\"mongodb://mydbservername:27017\", array(\"username\" => \"joe\", \"password\" => \"test\"));\n} \ncatch (MongoDBDriverExceptionException $e) \n{\n\t\techo 'Failed to connect to MongoDB, is the service intalled and running?<br /><br />';\n\t\techo $e->getMessage();\n\t\texit();\n}\nMongoDB\\ClientMongoDB\\Driver\\Manager",
"text": "Hi there,I have installed MongoBD Community Edition on Windows Server 2016 and enabled Remote Access as well as normal authentication i.e. User and Password. I have tested using Compass everything works fine.I would like to access it using PHP 8.0 which is installed on IIS Server on a separate server (Server 2016). I downloaded MongoDB dll extension file and pasted in ext folder also added on php.iniWhen running my php file with below code. I get the error Uncaught Error: Class ‘MongoClient’ not foundI also tried, MongoDB\\Client , MongoDB\\Driver\\Manager. Please help",
"username": "Wilbard_Mtei"
},
{
"code": "<?php\n phpinfo();\n?>\nphp --iniphp -m",
"text": "Hello @Wilbard_Mtei , welcome to the MongoDB Community!You can check if the MongoDB extension has been loaded and active in PHP, by creating a page withand look at the output. You should see a “mongodb” section with the extension version etc. If you see that, MongoDB is active and well in PHP. If not, check for loading errors of the mongodb PHP extension.phpinfo will also tell you which php.ini is currently active. Look for “Loaded Configuration File”If outputting phpinfo() in a webpage is not convenient, you can use the following command linesphp --iniWill show the curent php.ini pathphp -mShows all the loaded PHP extensions. You should see ‘mongo’ among them, and if you don’t it means the extension has not been loaded, perhaps due to a misconfiguration. You’ll have to check the logs.Next, it could be that the MongoDB PHP Library (PHPLIB) has not been been included in your PHP source file, or has a path issue. I’m not sure if you’re using Composer or are including files manually.Let us know,\nHubert",
"username": "Hubert_Nguyen1"
},
{
"code": "",
"text": "Thanks for your response @Hubert_Nguyen1 , I not using a compose I downloaded a DDL file from php.net . below is my php info.\nimage1476×798 77.2 KB\n",
"username": "Wilbard_Mtei"
},
{
"code": "",
"text": "@Wilbard_Mtei ,The MongoClient class has been deprecated:\nhttp://php.adamharvey.name/manual/en/class.mongoclient.phpUsing the most recent PHPLIB, the init code looks like this:\n$client = new \\MongoDB\\Client(CONNECTION_STRING );There’s a great PHP setup tutorial here\nhttps://www.mongodb.com/quickstart/php-setupAnd another one that shows how to perform CRUD operations:Getting Started with MongoDB and PHP - Part 2 - CRUD",
"username": "Hubert_Nguyen1"
},
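To tie the thread together, here is a minimal hedged sketch of the PHPLIB init code referred to above; the database/collection names are hypothetical and it assumes the library was installed with Composer (composer require mongodb/mongodb) on top of the mongodb extension:

```php
<?php
// Loads the MongoDB PHP Library (PHPLIB), which wraps the low-level mongodb extension.
require __DIR__ . '/vendor/autoload.php';

try {
    // Credentials/host taken from the original snippet; adjust as needed.
    $client = new MongoDB\Client('mongodb://joe:test@mydbservername:27017');
    $collection = $client->mydb->mycollection;   // hypothetical database/collection names

    $doc = $collection->findOne(['name' => 'example']);
    var_dump($doc);
} catch (MongoDB\Driver\Exception\Exception $e) {
    echo 'Failed to connect to MongoDB: ' . $e->getMessage();
}
```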
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| PHP Fatal error: Uncaught Error: Class 'MongoClient' not found | 2022-10-30T09:15:19.809Z | PHP Fatal error: Uncaught Error: Class ‘MongoClient’ not found | 9,493 |
null | [
"aggregation",
"java",
"spark-connector",
"scala"
]
| [
{
"code": "",
"text": "I have Spark Structure Streaming read data from mongodb change stream then send data to Kafka.I want to be able to resume the job when something happen. I know mongodb supports resumeAfter and startAfter using resumeToken.But I can’t find instruction on how to use it with Spark Structure Streaming (spark-connector). One way I try to do is use the read configuration, something like this:.option(“spark.mongodb.read.aggregation.pipeline”, “[{“createdAt”: {$gt: “2022-01-01”}}]”)but it returns errorCaused by: org.bson.BsonInvalidOperationException: Value expected to be of type DOCUMENT is of unexpected type NULL\nat org.bson.BsonValue.throwIfInvalidType(BsonValue.java:419)\nat org.bson.BsonValue.asDocument(BsonValue.java:47)\nat org.bson.BsonDocument.getDocument(BsonDocument.java:524)\nat com.mongodb.spark.sql.connector.read.MongoStreamPartitionReader.lambda$tryNext$8(MongoStreamPartitionReader.java:144)\nat com.mongodb.spark.sql.connector.read.MongoStreamPartitionReader.withCursor(MongoStreamPartitionReader.java:196)\nat com.mongodb.spark.sql.connector.read.MongoStreamPartitionReader.tryNext(MongoStreamPartitionReader.java:137)\nat com.mongodb.spark.sql.connector.read.MongoStreamPartitionReader.next(MongoStreamPartitionReader.java:112)\nat org.apache.spark.sql.execution.streaming.continuous.ContinuousQueuedDataReader$DataReaderThread.run(ContinuousQueuedDataReader.scala:150)Can someone point me to the tutorial for this?",
"username": "khang_pham"
},
{
"code": "",
"text": "I also tried.option(“spark.mongodb.read.resumeAfter”, “xxxsome real _id”)but it shows same error message.",
"username": "khang_pham"
},
{
"code": "",
"text": "Were you able to figure this one out?",
"username": "rahul_gautam"
}
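For what it's worth, a hedged sketch of the usual resume mechanism with the v10 connector: Structured Streaming persists its progress (including the change stream position) under checkpointLocation, so restarting the same query with the same checkpoint directory resumes where it left off. The URI, database/collection/topic names, and path below are placeholders, and a SparkSession named spark is assumed to exist.

```python
from pyspark.sql.functions import to_json, struct

stream = (spark.readStream.format("mongodb")
          .option("spark.mongodb.connection.uri", "mongodb://localhost:27017/")
          .option("spark.mongodb.database", "source_db")
          .option("spark.mongodb.collection", "source_coll")
          .option("spark.mongodb.change.stream.publish.full.document.only", "true")
          .load())

query = (stream.select(to_json(struct("*")).alias("value"))  # Kafka sink expects a 'value' column
         .writeStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("topic", "my_topic")
         # Reusing this directory across restarts is what lets the job resume.
         .option("checkpointLocation", "/tmp/mongo-to-kafka-checkpoint")
         .start())
query.awaitTermination()
```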
]
| How to use resumeAfter: spark structured streaming + mongodb change stream | 2022-07-13T17:20:21.723Z | How to use resumeAfter: spark structured streaming + mongodb change stream | 2,695 |
[]
| [
{
"code": "",
"text": "Hey, hope someone can point me in the right direction.I’m getting repeated “Replication Oplog Window has gone below 1 hours” alerts over the past few days, after an application change that resulted in lots of bulk writes. In Atlas cluster config, there’s an option to set a minimum oplog window - my understanding is Atlas will then auto-scale the oplog size to guarantee that minimum window.When I try to set this value, I keep getting the error “Additional permissions are required to access the requested resource”. I’m logged in as the account owner so I don’t think it’s an issue with my permissions?\nScreenshot 2022-11-08 at 10.42.501007×783 30.8 KB\nAny tips on getting past this?",
"username": "Sitati"
},
{
"code": "",
"text": "I left it for a few hours and tried again, and this time it worked. Strange!",
"username": "Sitati"
}
]
| I get an error when I try to adjust the minimum oplog window | 2022-11-08T07:49:11.269Z | I get an error when I try to adjust the minimum oplog window | 1,240 |
|
null | [
"queries",
"crud",
"performance"
]
| [
{
"code": "Metricas.updateOne(\n\n {\n\n $or: [\n\n {\n\n tipo_metrica_id,\n\n webinar_id,\n\n ip\n\n },\n\n {\n\n 'participante.participante_id': participante?.participante_id,\n\n tipo_metrica_id,\n\n webinar_id\n\n }\n\n ]\n\n },\n\n {\n\n $set: {\n\n participante,\n\n tipo_metrica_id,\n\n webinar_id,\n\n ip,\n\n criado_em: new Date()\n\n }\n\n },\n\n { upsert: true },\n\n );\n{\ntipo_metrica_id, \nwebinar_id,\nIP\n}\n{\n 'participante.participante_id': participante?.participante_id, \ntipo_metrica_id,\n webinar_id\n}\n",
"text": "Hey guys,I want to make the query createIfNotExists using the MongoDB but focused in performance because this query will be executed thousands of time in the small period of time.So, I tried to make this query using the updateOne, but it’s updating, and I don’t want to update, I just want to create If not existsSo, I want to create a register if these fieldsORDoesn’t exist in my database,Thanks guys!",
"username": "Matheus_Lopes"
},
{
"code": "",
"text": "I am not too sure but I think that the problem is the query part of your updateOne.The condition will be true if the condition is true, so only if a document exists, so an update is triggered. If you want the upsert the condition has to be negative. So if you want to create if it does not exist then you have negate the condition with $not.We could help better if you share sample documents and values for the variables you use in the code you shared.",
"username": "steevej"
}
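A possible alternative worth mentioning (not from the thread, just a hedged sketch reusing the variable names from the question): with $setOnInsert the update part is only applied when the upsert actually inserts, so an existing document is matched but left unmodified. With an upsert, the equality fields of the filter are copied into the newly inserted document automatically, so only the remaining fields need to go in $setOnInsert. The sketch uses only the simpler ip-based filter; the $or case may need separate handling.

```js
await Metricas.updateOne(
  { tipo_metrica_id, webinar_id, ip },
  {
    $setOnInsert: {
      participante,
      criado_em: new Date()
    }
  },
  { upsert: true }
);
```

A compound index on the filter fields would also keep the lookup cheap when this runs thousands of times.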
]
| Help with create if not exists and performance - Mongodb | 2022-11-03T14:24:38.625Z | Help with create if not exists and performance - Mongodb | 2,187 |
null | []
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"6272c580d4400d8cb10d5406\"\n },\n \"#CHROM\": 1,\n \"POS\": 286747,\n \"ID\": \"rs369556846\",\n \"REF\": \"A\",\n \"ALT\": \"G\",\n \"QUAL\": \".\",\n \"FILTER\": \".\",\n \"INFO\": [{\n \"RS\": 369556846,\n \"RSPOS\": 286747,\n \"dbSNPBuildID\": 138,\n \"SSR\": 0,\n \"SAO\": 0,\n \"VP\": \"0x050100000005150026000100\",\n \"WGT\": 1,\n \"VC\": \"SNV\",\n \"CAF\": [{\n \"$numberDecimal\": \"0.9381\"\n }, {\n \"$numberDecimal\": \"0.0619\"\n }],\n \"COMMON\": 1,\n \"TOPMED\": [{\n \"$numberDecimal\": \"0.88411856523955147\"\n }, {\n \"$numberDecimal\": \"0.11588143476044852\"\n }]\n },\n [\"SLO\", \"ASP\", \"VLD\", \"G5\", \"KGPhase3\"]\n ]\n}\n{'ID': {'$in': ['rs369556846', 'rs2185539', 'rs2519062', 'rs149363311', 'rs55745762', <...>]}}{'$or': [{'#CHROM': 1, 'POS': 1499125}, {'#CHROM': 1, 'POS': 1680158}, {'#CHROM': 1, 'POS': 1749174}, {'#CHROM': 1, 'POS': 3061224}, {'#CHROM': 1, 'POS': 3589337}, <...>]}{'$or': [{'ID': 'rs149434212', 'REF': 'C', 'ALT': 'T'}, {'ID': 'rs72901712', 'REF': 'G', 'ALT': 'A'}, {'ID': 'rs145474533', 'REF': 'G', 'ALT': 'C'}, {'ID': 'rs12096573', 'REF': 'G', 'ALT': 'T'}, {'ID': 'rs10909978', 'REF': 'G', 'ALT': 'A'}, <...>]}",
"text": "This is a copy of a proposal from the feedback portal, since the latter seems to be poorly visited.The documents to be selected look something like this:For a basic annotation scenario, we need such query:{'ID': {'$in': ['rs369556846', 'rs2185539', 'rs2519062', 'rs149363311', 'rs55745762', <...>]}}\n, where <…> means hundreds/thousands of values.The above-mentioned query is executed in a few seconds.More complex annotation queries:\n{'$or': [{'#CHROM': 1, 'POS': 1499125}, {'#CHROM': 1, 'POS': 1680158}, {'#CHROM': 1, 'POS': 1749174}, {'#CHROM': 1, 'POS': 3061224}, {'#CHROM': 1, 'POS': 3589337}, <...>]}{'$or': [{'ID': 'rs149434212', 'REF': 'C', 'ALT': 'T'}, {'ID': 'rs72901712', 'REF': 'G', 'ALT': 'A'}, {'ID': 'rs145474533', 'REF': 'G', 'ALT': 'C'}, {'ID': 'rs12096573', 'REF': 'G', 'ALT': 'T'}, {'ID': 'rs10909978', 'REF': 'G', 'ALT': 'A'}, <...>]}Despite the involvement of IXSCAN, they run many hours.Please test aforementioned queries thoroughly and improve the performance of their execution. This will help science!",
"username": "Platon_workaccount"
},
{
"code": "db.collection.stats()",
"text": "Hi @Platon_workaccountAnything we can do to help science would be great!Although MongoDB is by no means perfect, we do try to improve performance and features all the time so you’ll get to your data quicker. There are techniques to help you get faster results, and pretty much all of them involve creating indexes. There are many resources for this, such as:There are also other hardware-related methods such as increasing the amount of RAM for the deployment, deplyong larger instances, and scaling horizontally using sharding.Also, newer MongoDB versions usually contain performance improvements. If you’re not using the latest version (currently 5.0.9) it’s probably worth a try.However in the immediate term, I’m more interested in this:Despite the involvement of IXSCAN, they run many hours.There may be some analysis/recommendations that can be made. Could you provide more details:Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Here are some details. The file names include the approximate query execution time. Be warned that these are quite toy queries; real queries take hours or even days to execute.6_seconds_pretty.json (71.6 KB)\n6_seconds_plain.json (38.3 KB)\n6_seconds_explain.txt (264.3 KB)7_minutes_pretty.json (207.3 KB)\n7_minutes_plain.json (90.7 KB)\n7_minutes_explain.txt (2.4 MB)21_minutes_pretty.json (277.4 KB)\n21_minutes_plain.json (127.5 KB)\n21_minutes_explain.txt (2.8 MB)stats.txt (14.2 KB)",
"username": "Platon_workaccount"
},
{
"code": "{ID: {\n {$in: [\n \"rs149434212\"\n ... <2841 similar items>\n ]}}}\n \"executionSuccess\": true,\n \"nReturned\": 12,\n \"executionTimeMillis\": 83,\n \"totalKeysExamined\": 2743,\n \"totalDocsExamined\": 12,\n\"indexName\": \"ID_1_REF_1_ALT_1\",\n\"isMultiKey\": true,\n\"multiKeyPaths\": {\n \"ID\": [],\n \"REF\": [],\n \"ALT\": [\n \"ALT\"\n ]\n},\n$in$inIDID_1_REF_1_ALT_1{\"$or\": [\n {\"#CHROM\": 1, \"POS\": 1499125},\n ...<2841 similar items>\n]}\n \"executionSuccess\":true,\n \"nReturned\":12,\n \"executionTimeMillis\":63,\n \"totalKeysExamined\":12,\n \"totalDocsExamined\":12,\n\"indexName\":\"#CHROM_1_POS_1\",\n\"isMultiKey\":false,\n$or$or$or$in{$or: [\n {\"ID\": \"rs149434212\", \"REF\": \"C\", \"ALT\": \"T\"},\n ... <2841 similar items>\n]}\n \"executionSuccess\": true,\n \"nReturned\": 12,\n \"executionTimeMillis\": 109,\n \"totalKeysExamined\": 12,\n \"totalDocsExamined\": 12,\n\"indexName\": \"ID_1_REF_1_ALT_1\",\n\"isMultiKey\": true,\n\"multiKeyPaths\": {\n \"ID\": [],\n \"REF\": [],\n \"ALT\": [\n \"ALT\"\n ]\n},\n$or{a: 1, b: [array of 10 elements]}{a: 1, b: 1}$orexecutionTimeMillisALTALT",
"text": "Hi @Platon_workaccountI found some interesting things on the queries and the explain plain you posted:This query is of the form:Execution stats:Index used:Notes: the query is a very large $in query that examined 2743 keys, 12 documents, and returned 12 documents. In other words, it scanned a lot of index keys, but the number of documents scanned vs. documents returned is 1:1. My take is, other than the very large $in it’s actually not too bad. Since the ID field is a prefix of the index ID_1_REF_1_ALT_1 it can be used for this query, and definitely contributes to the speed.The query is of shape:Execution stats:Index use:Notes: now we’re seeing queries in the minutes range, and the big difference is the use of $or. Notably $or queries behave very differently from other queries, since basically they are multiple queries that can use different indexes for each $or term. In essence, this is like running ~2500 single queries all at once. For this query using $in might yield better result, and believe you’ll see the same output. See $or vs. $inThis query is of the form:Execution stats:Index use:Notes: this is the longest query, I think due to two things: the use of $or as in previous query, and the use of multikey index. Notably multikey index creates multiple index entries per array element, so if you’re indexing something like {a: 1, b: [array of 10 elements]}, it will create 10 index entries. In contrast, a non-multikey index of {a: 1, b: 1} will create one index entry. It’s 1 vs. 10 in this example. I believe this query is slow due to $or coupled with a lot more index entries.What is interesting in all the explain output is that the executionTimeMillis are quite fast, even in the last query there. I’m guessing that you have a bottleneck somewhere else, perhaps when the actual documents are being fetched from disk? How are you running these queries? Are you using the Mongo shell, or are you using other software that’s interfacing with MongoDB?Also, in your example document the field ALT doesn’t seem to be an array, so I wonder if you have other document examples where ALT is an array, and how many elements they usually contain.Moving on to check out the collection stats, it doesn’t look like we’re dealing with gigantic amount of data: it appears that the collection in question is ~1GB in size. Is this correct?So from the hardware spec, how much RAM are you setting the WiredTiger cache? What’s the overall spec of the machine the MongoDB server is running, and are you running MongoDB in the same machine as other resource-intensive processes such as another server?Best regards\nKevin",
"username": "kevinadi"
},
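To illustrate the $or-vs-$in point above with a hypothetical PyMongo sketch (the collection name and processing step are made up; the database name is taken from the log line in the question): send one $in on the indexed ID prefix and verify REF/ALT on the client, instead of a ~2800-branch $or.

```python
from pymongo import MongoClient

client = MongoClient()                      # adjust the URI for your deployment
coll = client["sief"]["variants"]           # "variants" is a placeholder collection name

# (ID, REF, ALT) triples we want to annotate - thousands of them in practice.
wanted = {
    ("rs149434212", "C", "T"),
    ("rs72901712", "G", "A"),
    ("rs145474533", "G", "C"),
}

ids = [t[0] for t in wanted]
for doc in coll.find({"ID": {"$in": ids}}):          # uses the ID_1_REF_1_ALT_1 prefix
    alts = doc["ALT"] if isinstance(doc["ALT"], list) else [doc["ALT"]]
    if any((doc["ID"], doc["REF"], alt) in wanted for alt in alts):
        pass  # process the matching document here
```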
{
"code": "$inALTALT{'$expr': {'$isArray': '$ALT'}}$or",
"text": "For this query using $in might yield better result, and believe you’ll see the same output. See $or vs. $inIt sounds like I can place into $in not only single values, but also subqueries. The documentation doesn’t mention that.I’m guessing that you have a bottleneck somewhere else, perhaps when the actual documents are being fetched from disk? How are you running these queries? Are you using the Mongo shell, or are you using other software that’s interfacing with MongoDB?To measure time (6s, 7m, 21m) I applied my high-perf-bio annotate program. To get the explain output I used Compass.Also, in your example document the field ALT doesn’t seem to be an array, so I wonder if you have other document examples where ALT is an array, and how many elements they usually contain.In the given example, the collection contains 1055709 documents. {'$expr': {'$isArray': '$ALT'}} returns 271401 documents.Moving on to check out the collection stats, it doesn’t look like we’re dealing with gigantic amount of data: it appears that the collection in question is ~1GB in size. Is this correct?I provided only toy examples of both queries and DB data. The real data is much larger. Consequently, with real $or-based queries and real collections, execution time skyrockets to many hours or days.So from the hardware spec, how much RAM are you setting the WiredTiger cache?I use the default low-level settings.What’s the overall spec of the machine the MongoDB server is running, and are you running MongoDB in the same machine as other resource-intensive processes such as another server?My hardware info:are you running MongoDB in the same machine as other resource-intensive processes such as another server?No.",
"username": "Platon_workaccount"
},
{
"code": "$or",
"text": "The real data is much larger. Consequently, with real $or -based queries and real collections, execution time skyrockets to many hours or days.From the link you posted, it seems that the collection size are about 25GB, and there are two of them. So worst case, total of a 50GB collection. Is this correct?My hardware info:If I understand correctly, the total collection size absolutely dwarfs the total available RAM in the machine. Unfortunately this makes it somewhat impossible to have good performance, given that there is not enough hardware to cater for the magnitude of the data you’re working with. This is something that is not MongoDB specific, by the way. I believe you would see similar performance issues using any other database products in the market.Depending on your requirements, you might be interested in checking out MongoDB Atlas, since it would allow you to scale your deployment up (or down) depending on your query needs. I understand that you may not need to have the query returned in seconds using the biggest deployment, but you may find it easier to get a good balance between price & performance using Atlas.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Let’s additionally discuss examples of highly truncated data. I think this small data is enough to demonstrate problems on the MongoDB side.I’ve explored the memory usage of the longest (~21m) query. It looks like the problem is not there. Here is the memory consumption before running the query:\n\nMongoDB_mem_before1920×1050 415 KB\nAfter about 15 minutes of calculations, the memory expense increased only slightly:\n\nMongoDB_mem_after1920×1050 355 KB\nI still ask the MongoDB developers to optimize the DBMS for typically bioinformatic queries.",
"username": "Platon_workaccount"
},
{
"code": "/etc/mongod.confmongod",
"text": "Can I ask a couple of things first:Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "/etc/mongod.conf# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 127.0.0.1\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\n#sharding:\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\nmongod",
"text": "Why are you running ~40 mongod processes in your laptop?Perhaps this is caused by the simultaneous use of Compass and PyMongo-based program.",
"username": "Platon_workaccount"
},
{
"code": "mongodb",
"text": "UPD. Immediately after rebooting the laptop I see 33 mongodb processes. I don’t know why that is.",
"username": "Platon_workaccount"
},
{
"code": "",
"text": "Is there any chance that the issue will be fixed in time for MongoDB 7.0 release?",
"username": "Platon_workaccount"
},
{
"code": "mongod",
"text": "Hi @Platon_workaccountI believe the last time we touched base on this, I was suggesting for you to try using a more powerful server hardware since you’re running all this work on a laptop; you have 16GB of RAM in the laptop, with ~50GB of data to process.The hardware of the laptop doesn’t look like it’s nearly enough to cover the workload, and you’re also running ~40 mongod processes in it. Have you been successful in sourcing a more powerful hardware in the meantime, and see if there’s any improvement in performance?On another note, if actual hardware is difficult to source, I also suggested to try using MongoDB Atlas and select a deployment with sufficient RAM size for the workload. Note that you can also pause an Atlas cluster if you’re not using it 24/7 to save some expense, and can unpause it when you need to.Unfortunately for such a big work, I don’t believe the laptop is up to the task Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "RAM: Kingston Fury Beast RGB DDR4-3600; 64 GB\nProcessor: 12th Gen Intel® Core™ i5-12400 × 12\nSSD: WD Black SN750; PCI-E 3.0; 1 TB\nOS: Ubuntu 22.04.1 LTS\nMongoDB version: 6.0.2\nmongod.conf change: cacheSizeGB: 56\nQueried items quantity: 2842\nCollection's docs quantity: 1055709\nFields indexes: #CHROM_1_POS_1; ID_1\n{'ID': {'$in': ['rs149434212', 'rs72901712', 'rs145474533', 'rs12096573', 'rs10909978', '...']}}00:02{'$or': [{'#CHROM': 1, 'POS': 1499125}, {'#CHROM': 1, 'POS': 1680158}, {'#CHROM': 1, 'POS': 1749174}, {'#CHROM': 1, 'POS': 3061224}, {'#CHROM': 1, 'POS': 3589337}, '...']}02:23{'$or': [{'ID': 'rs149434212', 'REF': 'C', 'ALT': 'T'}, {'ID': 'rs72901712', 'REF': 'G', 'ALT': 'A'}, {'ID': 'rs145474533', 'REF': 'G', 'ALT': 'C'}, {'ID': 'rs12096573', 'REF': 'G', 'ALT': 'T'}, {'ID': 'rs10909978', 'REF': 'G', 'ALT': 'A'}, '...']}07:47",
"text": "Now I did a fresh test on a new and relatively powerful computer. The difference between a single field query and a multiple fields query is still huge.Hardware specifications:Software specifications:Toy data used in the test:Abbreviated versions of the queries and the corresponding execution time (minutes:seconds):\n{'ID': {'$in': ['rs149434212', 'rs72901712', 'rs145474533', 'rs12096573', 'rs10909978', '...']}}\n00:02{'$or': [{'#CHROM': 1, 'POS': 1499125}, {'#CHROM': 1, 'POS': 1680158}, {'#CHROM': 1, 'POS': 1749174}, {'#CHROM': 1, 'POS': 3061224}, {'#CHROM': 1, 'POS': 3589337}, '...']}\n02:23{'$or': [{'ID': 'rs149434212', 'REF': 'C', 'ALT': 'T'}, {'ID': 'rs72901712', 'REF': 'G', 'ALT': 'A'}, {'ID': 'rs145474533', 'REF': 'G', 'ALT': 'C'}, {'ID': 'rs12096573', 'REF': 'G', 'ALT': 'T'}, {'ID': 'rs10909978', 'REF': 'G', 'ALT': 'A'}, '...']}\n07:47",
"username": "Platon_workaccount"
}
]
| [Proposal] Boost the performance of bioinformatic annotation queries | 2022-06-22T09:24:33.347Z | [Proposal] Boost the performance of bioinformatic annotation queries | 2,198 |
null | [
"dot-net",
"field-encryption"
]
| [
{
"code": "",
"text": "Hello Team,I am facing the timeout exception for the following steps,While analysis I found that the mongoDB port listening the loopback ipaddress 127.0.0.1 when restarting the system. At this point I am getting the timeout exception.After restarting the system, I tried restarting the MongoDB service also. It started listening with ipv4 Address and port. And communication works properly.My question here is, why the MongoDB is taking loopback ipAddress while restarting the system.Any helping hands??MongoDB version - 4.0.10MongoDB.Driver- 2.10.1.0\nMongoDB.Driver.Core- 2.10.1.0\nMongoDB.Bson -2.10.1.0\nMongoDB.Libmongocrypt - 1.0.0.0Best Regards,\nUsharani",
"username": "usharani_K"
},
{
"code": "",
"text": "Read about the configuration option net.bindip",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi Jack,\nThanks for response.\nYes I tried bindIpAll parameter and it solved the issue.But my doubt here is, why MongoDB service is taking loopback ipAddress while restarting PC? is it possible to clarify this?",
"username": "usharani_K"
},
{
"code": "bindIpAll: truebindIpAll: false\nbindIp: 123.123.123.123",
"text": "bindIpAll: true means “use all interfaces, including the loopback”.\nIf you just want the interface 123.123.123.123 then",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi Jack,\nSo everytime when the PC is restarted will MongoDB start using loopback address?",
"username": "usharani_K"
},
{
"code": "bindIpAll: yes",
"text": "If you say bindIpAll: yes, then it will.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi Jack,Coming back to my first question.I have created the service initially with the following parameter“C:\\Program Files\\MongoDB\\Server\\4.0\\bin\\mongod.exe” -f “C:\\Program Files\\MongoDB\\Server\\mongodb_config.yml” --bind_ip=hostname --port=9876 --serviceName “MongoDB” --serviceDisplayName “MongoDB” --serviceDescription “MongoDB Server” “–dbpath=C:\\data” “–logpath=C:\\data\\log.txt” --serviceHere I have used only --bindip paramater which has the value of fully qualified hostname.after service created, it started listening with ipv4 static address(ex: 123.87.98.187) with the port 9876.Now, I am restarting the PC, my expectation is that the port 9876 should listen in the same ipv4 static address(ex: 123.87.98.187). But it is listening with loopback ipAddress 127.0.0.1. Due to this communication is not happening and timeout exception is occuring.Why this difference happening in MongoDB component? or what should I change to make it works properly?",
"username": "usharani_K"
},
{
"code": "",
"text": "Most likely you have Windows default service active which brings up mongod on default port 27017 and localhost on reboot\nWhat you started is a service from command line but is it configured for autostart?\nYou can check your current mongod service in taskmanager.Does it match with your manually started service?",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I have removed the default service. And then I created the new service with this command line and set it as autostart by default.",
"username": "usharani_K"
},
{
"code": "",
"text": "any inputs for this query?",
"username": "usharani_K"
},
{
"code": "",
"text": "I thought your issue got resolved after making the new service as default\nYour bindIp could be the issue.It cannot be FQDN\nYou have to give your network interface IP",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I have tried bindip with network interface address as well. But after restarting the System, MongoDB service is not started automatically. Service is set to auto start only. So Manual restart is required.It is not solved my issue.Can you tell us, when MongoDB service will take loopback ipaddress or which kind of environment makes the MongoDB port to listen with loopback ipaddress.",
"username": "usharani_K"
},
{
"code": "",
"text": "If it is configured to start auto why manually mongod is being started?\nI think you have to review your service creation\nWhen you are passing config file your mongod why again command line params are being passed?\nFirst you have to create cfg file\nThen create service referring to above cfg file with install option\nThen configure it as auto start",
"username": "Ramachandra_Tummala"
}
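A rough sketch of the sequence described above (all paths, the interface IP, and the service name are placeholders; --install is the Windows-only mongod option for registering a service):

```yaml
# C:\mongodb\mongod.cfg  (hypothetical path)
systemLog:
  destination: file
  path: C:\data\log.txt
  logAppend: true
storage:
  dbPath: C:\data
net:
  port: 9876
  bindIp: 127.0.0.1,10.20.30.40   # loopback plus the machine's static interface IP, not an FQDN
```

```
"C:\Program Files\MongoDB\Server\4.0\bin\mongod.exe" --config "C:\mongodb\mongod.cfg" --install --serviceName MongoDB
```

After that, set the service's startup type to Automatic in services.msc (or with `sc config MongoDB start= auto`).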
]
| MongoDB port listening in loopback ip address when restarting the system | 2022-10-26T09:50:28.345Z | MongoDB port listening in loopback ip address when restarting the system | 3,952 |
[]
| [
{
"code": "\t\t},\n\t\t\"secondaryDelaySecs\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t},\n\t{\n\t\t\"_id\" : 1,\n\t\t\"host\" : \"mongoDB2:27017\",\n\t\t\"arbiterOnly\" : false,\n\t\t\"buildIndexes\" : true,\n\t\t\"hidden\" : false,\n\t\t\"priority\" : 2,\n\t\t\"tags\" : {\n\t\t\t\n\t\t},\n\t\t\"secondaryDelaySecs\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t},\n\t{\n\t\t\"_id\" : 3,\n\t\t\"host\" : \"mongoDB4:27017\",\n\t\t\"arbiterOnly\" : false,\n\t\t\"buildIndexes\" : true,\n\t\t\"hidden\" : false,\n\t\t\"priority\" : 1,\n\t\t\"tags\" : {\n\t\t\t\n\t\t},\n\t\t\"secondaryDelaySecs\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t},\n\t{\n\t\t\"_id\" : 4,\n\t\t\"host\" : \"mongoDB3:27017\",\n\t\t\"arbiterOnly\" : false,\n\t\t\"buildIndexes\" : true,\n\t\t\"hidden\" : false,\n\t\t\"priority\" : 1,\n\t\t\"tags\" : {\n\t\t\t\n\t\t},\n\t\t\"secondaryDelaySecs\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t},\n\t{\n\t\t\"_id\" : 5,\n\t\t\"host\" : \"Citus1:27017\",\n\t\t\"arbiterOnly\" : true,\n\t\t\"buildIndexes\" : true,\n\t\t\"hidden\" : false,\n\t\t\"priority\" : 0,\n\t\t\"tags\" : {\n\t\t\t\n\t\t},\n\t\t\"secondaryDelaySecs\" : NumberLong(0),\n\t\t\"votes\" : 1\n\t}\n],\n\"protocolVersion\" : NumberLong(1),\n\"writeConcernMajorityJournalDefault\" : true,\n\"settings\" : {\n\t\"chainingAllowed\" : true,\n\t\"heartbeatIntervalMillis\" : 2000,\n\t\"heartbeatTimeoutSecs\" : 10,\n\t\"electionTimeoutMillis\" : 10000,\n\t\"catchUpTimeoutMillis\" : -1,\n\t\"catchUpTakeoverDelayMillis\" : 30000,\n\t\"getLastErrorModes\" : {\n\t\t\n\t},\n\t\"getLastErrorDefaults\" : {\n\t\t\"w\" : 1,\n\t\t\"wtimeout\" : 0\n\t},\n\t\"replicaSetId\" : ObjectId(\"632c982ca07b1a8a7df1e397\")\n}\n",
"text": "I have 4 MongoDB servers and One Arbiter node. 4 out of 2 servers are down then remaining 2 are working as primary and secondary. At that time i supposed to run write operation on the Primary Node then it stuck And I have to re-login on server I can fetch a new data which was I push through in insert statement.\nWrite_Operation_query863×100 2.45 KB\nmongocluster:PRIMARY> rs.status()\n{\n“set” : “mongocluster”,\n“date” : ISODate(“2022-11-08T07:09:48.980Z”),\n“myState” : 1,\n“term” : NumberLong(74),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“heartbeatIntervalMillis” : NumberLong(2000),\n“majorityVoteCount” : 3,\n“writeMajorityCount” : 3,\n“votingMembersCount” : 5,\n“writableVotingMembersCount” : 4,\n“optimes” : {\n“lastCommittedOpTime” : {\n“ts” : Timestamp(1667887148, 2),\n“t” : NumberLong(67)\n},\n“lastCommittedWallTime” : ISODate(“2022-11-08T05:59:08.814Z”),\n“readConcernMajorityOpTime” : {\n“ts” : Timestamp(1667887148, 2),\n“t” : NumberLong(67)\n},\n“appliedOpTime” : {\n“ts” : Timestamp(1667891386, 1),\n“t” : NumberLong(74)\n},\n“durableOpTime” : {\n“ts” : Timestamp(1667891386, 1),\n“t” : NumberLong(74)\n},\n“lastAppliedWallTime” : ISODate(“2022-11-08T07:09:46.694Z”),\n“lastDurableWallTime” : ISODate(“2022-11-08T07:09:46.694Z”)\n},\n“lastStableRecoveryTimestamp” : Timestamp(1667887148, 2),\n“electionCandidateMetrics” : {\n“lastElectionReason” : “electionTimeout”,\n“lastElectionDate” : ISODate(“2022-11-08T07:06:46.673Z”),\n“electionTerm” : NumberLong(74),\n“lastCommittedOpTimeAtElection” : {\n“ts” : Timestamp(1667887148, 2),\n“t” : NumberLong(67)\n},\n“lastSeenOpTimeAtElection” : {\n“ts” : Timestamp(1667891193, 1),\n“t” : NumberLong(73)\n},\n“numVotesNeeded” : 3,\n“priorityAtElection” : 1,\n“electionTimeoutMillis” : NumberLong(10000),\n“numCatchUpOps” : NumberLong(0),\n“newTermStartDate” : ISODate(“2022-11-08T07:06:46.689Z”)\n},\n“electionParticipantMetrics” : {\n“votedForCandidate” : true,\n“electionTerm” : NumberLong(71),\n“lastVoteDate” : ISODate(“2022-11-08T07:01:40.459Z”),\n“electionCandidateMemberId” : 4,\n“voteReason” : “”,\n“lastAppliedOpTimeAtElection” : {\n“ts” : Timestamp(1667890885, 1),\n“t” : NumberLong(70)\n},\n“maxAppliedOpTimeInSet” : {\n“ts” : Timestamp(1667890885, 1),\n“t” : NumberLong(70)\n},\n“priorityAtElection” : 1\n},\n“members” : [\n{\n“_id” : 0,\n“name” : “mongoDB1:27017”,\n“health” : 0,\n“state” : 8,\n“stateStr” : “(not reachable/healthy)”,\n“uptime” : 0,\n“optime” : {\n“ts” : Timestamp(0, 0),\n“t” : NumberLong(-1)\n},\n“optimeDurable” : {\n“ts” : Timestamp(0, 0),\n“t” : NumberLong(-1)\n},\n“optimeDate” : ISODate(“1970-01-01T00:00:00Z”),\n“optimeDurableDate” : ISODate(“1970-01-01T00:00:00Z”),\n“lastAppliedWallTime” : ISODate(“1970-01-01T00:00:00Z”),\n“lastDurableWallTime” : ISODate(“1970-01-01T00:00:00Z”),\n“lastHeartbeat” : ISODate(“2022-11-08T07:09:47.032Z”),\n“lastHeartbeatRecv” : ISODate(“1970-01-01T00:00:00Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “Error connecting to mongoDB1:27017 (10.0.0.8:27017) :: caused by :: Connection refused”,\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“infoMessage” : “”,\n“configVersion” : -1,\n“configTerm” : -1\n},\n{\n“_id” : 1,\n“name” : “mongoDB2:27017”,\n“health” : 0,\n“state” : 8,\n“stateStr” : “(not reachable/healthy)”,\n“uptime” : 0,\n“optime” : {\n“ts” : Timestamp(0, 0),\n“t” : NumberLong(-1)\n},\n“optimeDurable” : {\n“ts” : Timestamp(0, 0),\n“t” : NumberLong(-1)\n},\n“optimeDate” : ISODate(“1970-01-01T00:00:00Z”),\n“optimeDurableDate” : 
ISODate(“1970-01-01T00:00:00Z”),\n“lastAppliedWallTime” : ISODate(“1970-01-01T00:00:00Z”),\n“lastDurableWallTime” : ISODate(“1970-01-01T00:00:00Z”),\n“lastHeartbeat” : ISODate(“2022-11-08T07:09:48.907Z”),\n“lastHeartbeatRecv” : ISODate(“1970-01-01T00:00:00Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “Error connecting to mongoDB2:27017 (10.0.0.11:27017) :: caused by :: Connection refused”,\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“infoMessage” : “”,\n“configVersion” : -1,\n“configTerm” : -1\n},\n{\n“_id” : 3,\n“name” : “mongoDB4:27017”,\n“health” : 1,\n“state” : 1,\n“stateStr” : “PRIMARY”,\n“uptime” : 497,\n“optime” : {\n“ts” : Timestamp(1667891386, 1),\n“t” : NumberLong(74)\n},\n“optimeDate” : ISODate(“2022-11-08T07:09:46Z”),\n“lastAppliedWallTime” : ISODate(“2022-11-08T07:09:46.694Z”),\n“lastDurableWallTime” : ISODate(“2022-11-08T07:09:46.694Z”),\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“infoMessage” : “”,\n“electionTime” : Timestamp(1667891206, 1),\n“electionDate” : ISODate(“2022-11-08T07:06:46Z”),\n“configVersion” : 35,\n“configTerm” : 74,\n“self” : true,\n“lastHeartbeatMessage” : “”\n},\n{\n“_id” : 4,\n“name” : “mongoDB3:27017”,\n“health” : 1,\n“state” : 2,\n“stateStr” : “SECONDARY”,\n“uptime” : 184,\n“optime” : {\n“ts” : Timestamp(1667891386, 1),\n“t” : NumberLong(74)\n},\n“optimeDurable” : {\n“ts” : Timestamp(1667891386, 1),\n“t” : NumberLong(74)\n},\n“optimeDate” : ISODate(“2022-11-08T07:09:46Z”),\n“optimeDurableDate” : ISODate(“2022-11-08T07:09:46Z”),\n“lastAppliedWallTime” : ISODate(“2022-11-08T07:09:46.694Z”),\n“lastDurableWallTime” : ISODate(“2022-11-08T07:09:46.694Z”),\n“lastHeartbeat” : ISODate(“2022-11-08T07:09:48.714Z”),\n“lastHeartbeatRecv” : ISODate(“2022-11-08T07:09:47.728Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “”,\n“syncSourceHost” : “mongoDB4:27017”,\n“syncSourceId” : 3,\n“infoMessage” : “”,\n“configVersion” : 35,\n“configTerm” : 74\n},\n{\n“_id” : 5,\n“name” : “Citus1:27017”,\n“health” : 1,\n“state” : 7,\n“stateStr” : “ARBITER”,\n“uptime” : 495,\n“lastHeartbeat” : ISODate(“2022-11-08T07:09:48.387Z”),\n“lastHeartbeatRecv” : ISODate(“2022-11-08T07:09:48.387Z”),\n“pingMs” : NumberLong(34),\n“lastHeartbeatMessage” : “”,\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“infoMessage” : “”,\n“configVersion” : 35,\n“configTerm” : 74\n}\n],\n“ok” : 1,\n“$clusterTime” : {\n“clusterTime” : Timestamp(1667891386, 1),\n“signature” : {\n“hash” : BinData(0,“l/B6dJM0LuraYaFRSSH9UwHg3eY=”),\n“keyId” : NumberLong(“7146254070720757764”)\n}\n},\n“operationTime” : Timestamp(1667891386, 1)\n}\nmongocluster:PRIMARY>mongocluster:PRIMARY>\nmongocluster:PRIMARY> rs.conf()\n{\n“_id” : “mongocluster”,\n“version” : 35,\n“term” : 76,\n“members” : [\n{\n“_id” : 0,\n“host” : “mongoDB1:27017”,\n“arbiterOnly” : false,\n“buildIndexes” : true,\n“hidden” : false,\n“priority” : 3,\n“tags” : {}\nmongocluster:PRIMARY>Kindly help to resole this issue why primary node not execute write operations.",
"username": "Sanjay_Soni"
},
{
"code": "",
"text": "I think your majority data bearing nodes should acknowledge the write but it is not happening so primary is waiting\nYou have majority but arbiter is non data bearing\nCheck this thread",
"username": "Ramachandra_Tummala"
}
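One way to check that hypothesis from the shell (a sketch with made-up database/collection names): if an insert with w: 1 returns immediately while the same insert with the default (majority) write concern hangs, the primary is indeed waiting for a majority of data-bearing members that is currently unavailable.

```js
// Run against the current primary.
db.getSiblingDB("probe").writes.insertOne(
  { checkedAt: new Date() },
  { writeConcern: { w: 1 } }
)
```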
]
| I have facing Write Operation issue on MongoDB Primary node | 2022-11-08T08:47:18.493Z | I have facing Write Operation issue on MongoDB Primary node | 1,039 |
|
[
"security"
]
| [
{
"code": "",
"text": "I’m following this documentationA high-level guide on how to securely connect MongoDB Atlas with the Kubernetes offerings from Amazon AWS, Google Cloud (GCP), and Microsoft Azure.which led to thisand I’m trying to do the belowCreate the following network traffic rule on your AWS security group attached to your resources that connect to Atlas:I do not understand what it means or how to do it",
"username": "Jason_Nwakaeze"
},
{
"code": "",
"text": "Apparently I didn’t need to specify as my security group allowed all connections so I’m good.",
"username": "Jason_Nwakaeze"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Creating outbound rule on aws to mongo | 2022-11-01T09:21:28.141Z | Creating outbound rule on aws to mongo | 2,286 |
|
null | []
| [
{
"code": "",
"text": "HiI just went from M0 to M2 because I had no space left (512Mb consumed)After migration, I have only 283Mb consumedAlthough i believe this is not done on purpose, I still feel a bit like I’ve been tricked.Can I now go back to my M0 plan?",
"username": "Martin_Ratinaud"
},
{
"code": "",
"text": "What method did you use to move your data?Are you sure all indexes have been recreated?",
"username": "steevej"
},
{
"code": "",
"text": "Thanks @steevejI just upgraded my cluster from the Atlas website. That’s why it’s a bit frustrating",
"username": "Martin_Ratinaud"
},
{
"code": "mongodumpmongorestore",
"text": "Hi @Martin_Ratinaud - Welcome to the community.Can I now go back to my M0 plan?Unfortunately shared tier clusters cannot be downgraded, and need to be migrated instead. You can use mongodump and mongorestore for this to migrate back to an M0 cluster as noted in the Backup & Restore considerations documentation.Although i believe this is not done on purpose, I still feel a bit like I’ve been tricked.I understand your frustrations here and it does sound a bit odd regarding the drop in logical size after the upgrade. I would highly recommend you contact the Atlas support team via the in-app chat to investigate this drop in logical size related to your Atlas account / project as they have further insight into the deployment(s).Regards,\nJason",
"username": "Jason_Tran"
},
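For completeness, the migration mentioned above is roughly the following two commands (the URIs and database name are placeholders; run them against your own clusters):

```sh
# Dump from the current M2 cluster...
mongodump --uri="mongodb+srv://user:password@m2-cluster.xxxxx.mongodb.net/mydb" --out=dump/

# ...then restore into the freshly created M0 cluster.
mongorestore --uri="mongodb+srv://user:password@m0-cluster.xxxxx.mongodb.net" dump/
```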
{
"code": "",
"text": "Thanks @Jason_Tran I’ve contacted them and here is their answerthe drop in the logical size you see is due to initial sync that ran during the time you upgraded your cluster.In MongoDB, document deletion does not decrease the size of the DB storage size.Atlas uses WiredTiger as the default storage engine and it will not release disk space created by deleting documents. However, following a checkpoint, WiredTiger will mark any space freed due to deletes or updates as being available for re-use.The initial sync freed up the unused disk space to make it available for reuse.it seems the “WiredTiger will mark any space freed due to deletes or updates as being available for re-use” part is not working that well",
"username": "Martin_Ratinaud"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Upgrade because of lack of space | 2022-11-07T05:35:21.332Z | Upgrade because of lack of space | 1,501 |
null | [
"serverless"
]
| [
{
"code": "findOne()",
"text": "Just a quick question regarding RPUs in a serverless instance.Say I run a findOne() against a collection, performing an index scan, and no documents are found. How many RPUs are consumed?Thanks!",
"username": "Jared_Lindsay1"
},
{
"code": "findOne()findOne()findOne()findOne()query",
"text": "Hi @Jared_Lindsay1,Say I run a findOne() against a collection, performing an index scan, and no documents are found. How many RPUs are consumed?Please find some details regarding the RPU calculation mentioned within the docs on my reply in the How many RPU MongoDB Atlas uses to find random records using aggregate? post.For testing purposes only, I performed some test runs using findOne() and recorded how many RPU’s were generated in the case that an index is used for both:RPU metrics for the above 2 findOne() commands:\nimage2712×702 45.1 KB\nBoth values are shown at ~0.03/s (RPU). The secondary spike appears to be relatively smaller although very similar and both were rounded to 0.03/s in the metrics view when hovering the mouse over the peaks of each.For comparison, I performed findOne() on the same collection using a field that did not contain an index in which no documents were returned (no documents matching the query) - This generated ~5.49K/s RPU.Please note that our results may vary as the document size and index sizes between our environments will most likely vary. In saying so, it is highly recommend you test the operations on your own cluster to get a general idea of how much RPU’s are generated.Regards,\nJason",
"username": "Jason_Tran"
},
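As a hypothetical illustration of the two cases above (field and collection names are made up):

```js
// With an index, a no-match findOne only touches a few index keys - the small-RPU case.
db.people.createIndex({ email: 1 })
db.people.findOne({ email: "does-not-exist@example.com" })

// Without an index on the queried field, the same no-match lookup scans the whole
// collection, which is what produces the thousands-of-RPU spike.
db.people.findOne({ nickname: "does-not-exist" })
```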
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Serverless: RPUs on index search when nothing found? | 2022-11-06T05:50:40.209Z | Serverless: RPUs on index search when nothing found? | 1,864 |
null | [
"replication",
"python"
]
| [
{
"code": "{\"t\":{\"$date\":\"2022-11-07T09:15:54.907+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn43774\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"XXXX\",\"command\":{\"find\":\"XXXXX\",\"filter\":{XXXXX},\"sort\":{XXXXX},\"limit\":1,\"batchSize\":1,\"singleBatch\":true,\"maxTimeMS\":1000,\"$readPreference\":{\"mode\":\"secondaryPreferred\"},\"readConcern\":{\"level\":\"local\"},\"$db\":\"sief\"},\"numYields\":66,\"queryHash\":\"16E6F9A1\",\"planCacheKey\":\"DC1D7D78\",\"ok\":0,\"errMsg\":\"error while multiplanner was selecting best plan :: caused by :: operation exceeded time limit\",\"errName\":\"MaxTimeMSExpired\",\"errCode\":50,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":67}},\"Global\":{\"acquireCount\":{\"r\":67}},\"Mutex\":{\"acquireCount\":{\"r\":1}}},\"readConcern\":{\"level\":\"local\",\"provenance\":\"clientSupplied\"},\"remote\":\"IP_OF_PRIMARY:34400\",\"protocol\":\"op_msg\",\"durationMillis\":1000}}\n",
"text": "Hello everyone,I have a 5.0.13 replicaset (community edition) with one primary / 2 secondaries, all client are using pymongo 4.1.1 and connect to the replicaset with readPreference “primaryPreferred”now I see quite a lot of log on the secondaries node log file looking like this (I scrubbed the find query for sensibility reason)so it seems to me that the PRIMARY node (“remote” in logs above) is sending queries to the secondaries during the “query plan selection” phase, with “secondaryPreferred” readPReference. I am understanding this correctly ? I do not see anything in the documentation about it https://www.mongodb.com/docs/manual/core/query-plans/. If yes is there a configuration option to increase the “maxTimeMS” parameter (1000ms is too low) or to disable this feature and have all query plan candidates run on the primary ?Thank you for you help !\nOlivier",
"username": "Olivier_Le_Rigoleur"
},
{
"code": "conn43774conn43774conn10{\"t\":{\"$date\":\"2022-11-08T16:47:19.858+11:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn10\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:49995\",\"client\":\"conn10\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"4.10.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"x64\",\"version\":\"20.6.0\"},\"platform\":\"Node.js v16.18.0, LE (unified)\",\"version\":\"4.10.0|1.6.0\",\"application\":{\"name\":\"mongosh 1.6.0\"}}}}primaryPreferredprimarysecondaryPreferred",
"text": "Hi @Olivier_Le_Rigoleur welcome to the community!I think the query’s originator is something marked conn43774 from the log line above. To see what connection is named as conn43774, you might be able to search for that string.For example, if I’m looking for conn10, I would see this log line:{\"t\":{\"$date\":\"2022-11-08T16:47:19.858+11:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn10\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:49995\",\"client\":\"conn10\",\"doc\":{\"driver\":{\"name\":\"nodejs|mongosh\",\"version\":\"4.10.0\"},\"os\":{\"type\":\"Darwin\",\"name\":\"darwin\",\"architecture\":\"x64\",\"version\":\"20.6.0\"},\"platform\":\"Node.js v16.18.0, LE (unified)\",\"version\":\"4.10.0|1.6.0\",\"application\":{\"name\":\"mongosh 1.6.0\"}}}}which shows what client and what IP it’s connecting from.disable this feature and have all query plan candidates run on the primary ?Do you mean to run all queries in the primary? Have you tried changing the readPreference of the application from primaryPreferred to primary instead (the default)? You might also want to double check that no application is accidentally using secondaryPreferred read preference.Best regards\nKevin",
"username": "kevinadi"
}
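A small PyMongo sketch of the suggestion above (the URI and names are placeholders): force reads to the primary either for the whole client or for a single collection.

```python
from pymongo import MongoClient, ReadPreference

# Whole client: "primary" is also the default if no read preference is given.
client = MongoClient(
    "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0",
    readPreference="primary",
)

# Or override just one collection without touching the rest of the application:
coll = client.mydb.get_collection("mycoll", read_preference=ReadPreference.PRIMARY)
doc = coll.find_one({"some_field": "some_value"}, max_time_ms=5000)
```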
]
| Multiplanner and 5.0.13 replicaset - primary planner sendind query plan candidate to secondary? | 2022-11-07T09:33:25.663Z | Multiplanner and 5.0.13 replicaset - primary planner sendind query plan candidate to secondary? | 1,557 |
null | [
"python",
"change-streams",
"spark-connector"
]
| [
{
"code": "spark = SparkSession. \\\n builder. \\\n appName (\"streamingExampleRead\"). \\\n config (\"spark.jars.packages\", \"org.mongodb.spark:mongo-spark-connector:10.0.5\"). \\\n getOrCreate ()\n\nquery = (spark.readStream.format (\"mongodb\")\n .option (\"spark.mongodb.connection.uri\", \"mongodb://mongo:27017/\")\n .option (\"spark.mongodb.database\",\"database_1\")\n .option (\"spark.mongodb.collection\", \"read_collection\")\n .option (\"spark.mongodb.change.stream.publish.full.document.only\", \"true\")\n .option (\"forceDeleteTempCheckpointLocation\", \"true\")\n .load ())\n\n query2 = (query.writeStream\n .format (\"mongodb\")\n .option (\"forceDeleteTempCheckpointLocation\", \"true\")\n .option (\"spark.mongodb.connection.uri\", \"mongodb://mongo:27017/\")\n .option (\"spark.mongodb.database\", \"database_2\")\n .option (\"spark.mongodb.collection\", \"write_collection\")\n .option (\"spark.mongodb.change.stream.publish.full.document.only\", \"true\")\n .option (\"forceDeleteTempCheckpointLocation\", \"true\")\n .outputMode (\"append\")\n .trigger(continuous=\"1 second\")\n .start().awaitTermination())\n",
"text": "Hello MongoDB Community,\nI am trying to read from a MongoDB database collection and write back to another MongoDB collection in another database using pyspark and structured streaming (v10.0.5).\nThis is the code I am following (without any processing added for now, I just want to test read & write stream)I keep getting the error Lost task 0.0 in stage 0.0 (TID 0) (172.18.0.4 executor 1): org.apache.spark.SparkException: Data read failed(Using Spark 3.3.0 & Mongo 6)Could anyone tell me if there is something wrong with the syntax or logic of this code in reading & writing data ?\nThank you",
"username": "Serine_Daouk"
},
{
"code": "query2 = (query.writeStream\n .format (\"mongodb\")\n .option (\"spark.mongodb.connection.uri\", \"<<connection string here>>\")\n .option (\"spark.mongodb.database\", \"database2\")\n .option (\"spark.mongodb.collection\", \"write_collection\")\n .option (\"spark.mongodb.change.stream.publish.full.document.only\", \"true\")\n .option (\"forceDeleteTempCheckpointLocation\", \"true\")\n .option(\"checkpointLocation\", \"/tmp/mycheckpoint\")\n .outputMode (\"append\")\n .trigger(continuous=\"1 second\").start().awaitTermination())\n",
"text": "try",
"username": "Robert_Walters"
},
{
"code": "",
"text": "Thank you for your reply, I’ve applied the changed to query2 but I am still getting the errors:\norg.apache.spark.SparkException: Data read failedCaused by: com.mongodb.spark.sql.connector.exceptions.MongoSparkException: Could not create the change stream cursor.Caused by: com.mongodb.MongoCommandException: Command failed with error 40573 (Location40573): ‘The $changeStream stage is only supported on replica sets’ on server mongo:27017. The full response is {“ok”: 0.0, “errmsg”: “The $changeStream stage is only supported on replica sets”, “code”: 40573, “codeName”: “Location40573”}Any possible solutions to these ?",
"username": "Serine_Daouk"
},
{
"code": "",
"text": "– Update,\nFrom what I understood is that since I am using readStream & writeStream, I will be utilizing the change stream which requires a replicateSet set up with at least 3 mongo nodes (I just had a single standalone mongo node in my docker-compose file).\nAfter setting up a proper mongo cluster (3 nodes now) and placing it in the same network as my spark application, the stream seems to be running with no crashes or error.However, I now keep getting empty data in return (check image below) - I decided to output to console just to view some results before outputting to mongo like the original plan.\nScreen Shot 2022-11-07 at 12.47.101436×1224 14.3 KB\nWhat could be the issue in this scenario ?I am passing this uri for my readStream: ‘mongodb://mongo1:27017,mongo2:27018,mongo3:27019/test_data.readCol?replicaSet=dbrs’\nMy database is: test_data\nMy collection is: readColThank you in advance for the help",
"username": "Serine_Daouk"
},
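A minimal mongosh sketch of the replica-set initiation step described in the update above; the replica set name and host names mirror the URI quoted in the thread, and the rest is an assumption about the docker-compose setup:

    // Run once against one of the mongod nodes started with --replSet dbrs.
    rs.initiate({
      _id: "dbrs",
      members: [
        { _id: 0, host: "mongo1:27017" },
        { _id: 1, host: "mongo2:27018" },
        { _id: 2, host: "mongo3:27019" }
      ]
    });
    rs.status(); // confirm a PRIMARY has been elected before starting the Spark job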
{
"code": "",
"text": "–Update part 2\nSo from what it seems, it won’t pick up data already in the database, but only any new data that gets added after the stream is launched will be detected by the stream and written into the sink. This, of course, is because we are in append mode.I guess to get everything you would have to use the Complete mode (which is not supported with readStream or writeStream from what I’ve seen – Check documentation, don’t take my literal word for it.)",
"username": "Serine_Daouk"
},
{
"code": "priceSchema = ( StructType()\n .add('Date', TimestampType())\n .add('Price', DoubleType())\n)\n...\n .option (\"spark.mongodb.change.stream.publish.full.document.only\", \"false\")\n .option (\"forceDeleteTempCheckpointLocation\", \"false\")\n **.schema(priceSchema)**\n .load ())\n",
"text": "Can you try adding a schema to the readstream ?something likethen add it to the readstream",
"username": "Robert_Walters"
},
{
"code": "spark.mongodb.change.stream.publish.full.document.only",
"text": "Hi Rober, thanks again for the reply.\nI’ve added the schema, however setting the spark.mongodb.change.stream.publish.full.document.only to false only returns the metadata without the rest, so I’ve kept it as True.I can now see any new incoming data in my database1 collection appear in my database2 collection. The real problem in my setup was that I didn’t have a replicaSet cluster configuration for my mongo cluster.Since readStream & writeStream require a change stream, element to follow the databases changes, which itself needs a replicaSet configuration to properly function.Small note: data already present in the database before the stream is launched will not be picked up. My guess is that it’s because I’m using outputMode append in my writeStream.Thanks again Robert !",
"username": "Serine_Daouk"
}
]
| Streaming from and back to MongoDB with Pyspark using Structured Streaming Connector V10.0.5 | 2022-11-06T10:10:30.794Z | Streaming from and back to MongoDB with Pyspark using Structured Streaming Connector V10.0.5 | 3,586 |
null | [
"node-js"
]
| [
{
"code": "let db: Db;\n\nclass Mongo {\n client: MongoClient;\n constructor() {\n this.client = new MongoClient(url);\n }\n\n async boot() {\n await this.client.connect();\n\n db = this.client.db();\n }\n}\n\nconst mongo = new Mongo();\n\nconst connectToMongo = async () => {\n if (!db) {\n console.log(\"Booting Mongo\");\n await mongo.boot();\n } else {\n console.log(\"No boot needed\");\n }\n\n return db;\n};\n",
"text": "I am currently using MongoDB for my front-end application. It appears that every time I am trying to get/post information to my database, I create multiple new connections (and sometimes I end up with 60+ connections after working for 30 minutes). Here is my current setup:I am storing the database variable that I get so that I can make multiple calls with it. When I read data from MongoDB, I always have to boot it initially and then I don’t have to after that (because I have stored the variable) but it seems like multiple connections are made every time I try to use mongo.Right now I am using Nextjs to build my application. What is the proper way to store the database so that I don’t continue to stack up connections?",
"username": "Hunt_Crypto"
},
{
"code": "",
"text": "May be, what you see is normal. See https://www.mongodb.com/docs/manual/administration/connection-pool-overview/.",
"username": "steevej"
}
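A minimal sketch (not the poster's code) of one common way to avoid piling up connections in a Next.js/Node app: cache the connect() promise at module scope so every import reuses the same client and pool. The environment variable name is an assumption:

    const { MongoClient } = require("mongodb");

    const url = process.env.MONGODB_URI; // assumed to be configured elsewhere
    let clientPromise;                   // cached across imports of this module

    function getClient() {
      if (!clientPromise) {
        clientPromise = new MongoClient(url).connect();
      }
      return clientPromise;
    }

    async function getDb() {
      const client = await getClient();
      return client.db();
    }

    module.exports = { getDb };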
]
| How to properly connect to MongoDB through front-end application? | 2022-11-06T04:31:27.720Z | How to properly connect to MongoDB through front-end application? | 5,039 |
null | []
| [
{
"code": "",
"text": "What could be the MongoDB version that can run in Intel® Celeron® J4105 Processor",
"username": "Ariel_Arenas"
},
{
"code": "",
"text": "I think you’re capped to 4.4 with the prebuilt binarires available. If you’d like to run 5.0+ you would have to compile your own binaries, this is not a light undertaking.Platform Requirements specify the minimum cpu generations, specifically AVX CPU feature is required for 5.0 and above.Your CPU is of the Gemini Lake generation which does not have this feature.",
"username": "chris"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @Ariel_Arenas !As @chris noted, your processor is unfortunately lower spec’d than the minimum microarchitecture required for MongoDB 5.0 and newer binary release packages.Compiling from source is definitely a bit of an undertaking.If you want to use newer versions of MongoDB server, an alternative approach would be to use a hosted service like MongoDB Atlas (there’s a free tier with 512MB of storage) or to install MongoDB in your own cloud or local environment with a supported O/S and CPU version.Regards,\nStennie",
"username": "Stennie_X"
}
]
| Mongo DB version for Intel® Celeron® J4105 Processor | 2022-11-03T22:04:27.585Z | Mongo DB version for Intel® Celeron® J4105 Processor | 2,058 |
null | []
| [
{
"code": "",
"text": "Hello… can you help me with a query that can fetch all data from two different collections in a single query.consider example:\nthere are two models carBookings and bikeBookings. which contain booking details. both have a common field customerId. I want bookings from carBookings and bikeBookings which satisfy condition same customerId in a single query.\nwaiting for your help.",
"username": "Phinahas_Philip"
},
{
"code": "",
"text": "Take a look at $unionWith.",
"username": "steevej"
}
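A hedged mongosh sketch of the $unionWith approach for this case; the collection and field names are taken from the question, and the customerId value is a placeholder:

    db.carBookings.aggregate([
      { $match: { customerId: "C123" } },               // placeholder customer id
      { $unionWith: {
          coll: "bikeBookings",
          pipeline: [ { $match: { customerId: "C123" } } ]
      } }
    ]);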
]
| Fetch all data from two different collection using single query | 2022-11-07T10:32:09.515Z | Fetch all data from two different collection using single query | 907 |
null | []
| [
{
"code": "",
"text": "So I wanted to try making like a blog website with Node js and MongoDB. I’m a beginner with it so it’s kinda a mess, but everything worked fine locally (on localhost 3000). I found Railway as a free alternative to Heroku and tried to deploy my project there.For some reason it failed to connect to the database, and this is what I got in the logs.Here is a small piece of code from app.jsWhat could be the problem? I will share more code if it is needed.",
"username": "zmarko"
},
{
"code": "0.0.0.0localhost",
"text": "Welcome to the MongoDB Community @zmarko !ECONNREFUSED 0.0.0.0:27017This error message indicates a local MongoDB server isn’t listening on 0.0.0.0 (all local IPv4 addresses on local machine), port 27017.By default, a running MongoDB server will only bind to localhost (‘127.0.0.1’), so you could try changing your connection string to use the localhost IP (or localhost hostname) instead.Alternatively, if your app server is running on a separate host from MongoDB, you could configure your MongoDB process to bind to all IPs. However, please review the MongoDB Security Checklist and configure appropriate security measures before allowing remote access.If you haven’t installed a MongoDB server yet, please follow the appropriate MongoDB Installation Tutorial or consider using a hosted service like MongoDB Atlas.Regards,\nStennie",
"username": "Stennie_X"
}
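A minimal sketch of the first suggestion above, assuming Mongoose and a locally running mongod; the database name is a placeholder:

    const mongoose = require("mongoose");

    mongoose
      .connect("mongodb://127.0.0.1:27017/blogDB") // localhost instead of 0.0.0.0
      .then(() => console.log("connected"))
      .catch((err) => console.error(err));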
]
| MongooseServerSelectionError: connect ECONNREFUSED 0.0.0.0:27017 | 2022-11-07T17:17:09.628Z | MongooseServerSelectionError: connect ECONNREFUSED 0.0.0.0:27017 | 4,792 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "",
"text": "Hello Everyone, For the below mentioned document, tagSerialNumbers array with string values. Now i want to update the same from datatype String to Int64. Please help in updating the data type.{\n“_id”: {\n“$oid”: “635286ea1c66064140400d3c”\n},\n“fulfill”: [\n{\n“itemName”: \" \",\n“fulfilledQty”: 360,\n“fullfilledDate”: “2014-10-15 21:16:42”,\n“fulfilledStatus”: {\n“statusId”: 3002,\n“status”: “RReceivedFull”\n},\n“retailer”: {\n“retailerId”: 10003753,\n“name”: “Serco”,\n“retailerType”: {\n“typeId”: 3002,\n“name”: “RegularMasterRetailer”\n}\n},\n“tagSerialNumbers”: [\n“137438957072”,\n“137438957071”,\n“137438957070”,\n“137438957069”,\n“137438957068”,\n“137438957067”,\n“137438957066”,\n“137438957065”,\n“137438957064”,\n“137438957063”,\n“137438957062”,\n“137438957061”,\n“137438957060”,\n“137438957059”,\n“137438957058”,\n“137438957057”,\n“137438957056”,\n“137438957055”,\n“137438957054”,\n“137438957053”\n]\n}\n]\n}",
"username": "Amarendra_Krishna"
},
{
"code": "",
"text": "That would be the same process as answered to you in",
"username": "steevej"
},
{
"code": "",
"text": "Hello,Thanks for your quick response, i tried updating with the below mentioned query:\ndb.RetailerRequest.updateMany\n(\n{“fulfill.tagSerialNumbers”:{$type:“string”}},\n[\n{$set:\n{fulfill:\n{$map:\n{input:\"$fulfill\",\nin:{$mergeObjects:\n[\"$this\",{tagSerialNumbers:\n{ $toLong: “$$this.tagSerialNumbers”}}]}}}}}] )But i am getting the below mentioned error:\nMongoServerError: Unsupported conversion from array to long in $convert with no onError valuePlease help me in updating the query.",
"username": "Amarendra_Krishna"
},
{
"code": "",
"text": "The field tagSerialNumbers in the document you shared is an array. That is why you getMongoServerError: Unsupported conversion from array to long in $convertYou have to $map tagSerialNumbers and convert with { $toLong : $$this }.",
"username": "steevej"
},
{
"code": "",
"text": "Hello, Thanks for quick response. I tried using the below mentioned query:\ndb.RetailerRequest.updateMany\n(\n{“fulfill.tagSerialNumbers”:{$type:“string”}},\n[\n{$set:\n{fulfill:\n{$map:\n{input:\"$fulfill\",\nin:{$mergeObjects:\n[\"$this\",{tagSerialNumbers:\n{ $toLong: “$$this”}}]}}}}}] )Now i am getting the error: MongoServerError: Unsupported conversion from object to long in $convert with no onError valuePlease let me know how to proceed further",
"username": "Amarendra_Krishna"
},
{
"code": "",
"text": "Sorry, but you published exactly the same badly formatted documents and code.Pleaseread Formatting code and log snippets in posts and update your sample documents and pipeline so it is easier to understand and to cut-n-paste for experimentation.",
"username": "steevej"
},
{
"code": "db.RetailerRequest.updateMany\n(\n {\"fulfill.tagSerialNumbers\":{$type:\"string\"}},\n [\n \t{$set:\n \t\t{fulfill:\n \t\t\t{$map:\n \t\t\t{input:\"$fulfill\",\n \t\t\tin:{$mergeObjects:\n \t\t\t[\"$this\",{tagSerialNumbers:\n \t\t\t{ $toLong: \"$$this\"}}]}}}\n \t\t}\n \t}\n ] \n)\n\nNow i am getting the error: MongoServerError: Unsupported conversion from object to long in $convert with no onError value\n\nPlease let me know how to proceed further",
"text": "",
"username": "Amarendra_Krishna"
},
{
"code": "",
"text": "Your sample document from the first post is still not formatted in a way we can cut-n-paste.",
"username": "steevej"
},
{
"code": "inputthis",
"text": "If you look at the $map documentation you will read the following about $$this:A name for the variable that represents each individual element of the input array. If no name is specified, the variable name defaults to this.In you sample document, the field fulfill is an array of objects. You $map using input:$fulfill and $toLong:$$this.tagSerialNumbers, but tagSerialNumbers is an array and this is why you get:MongoServerError: Unsupported conversion from array to long in $convert with no onError valueBut I already mentioned:The field tagSerialNumbers in the document you shared is an array.An hint to the solution was also mentioned:You have to $map tagSerialNumbers and convert with { $toLong : $$this }.",
"username": "steevej"
}
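A hedged sketch of the nested $map steevej describes, untested against the real collection: the outer $map walks the fulfill array and the inner $map converts each element of tagSerialNumbers:

    db.RetailerRequest.updateMany(
      { "fulfill.tagSerialNumbers": { $type: "string" } },
      [ { $set: {
          fulfill: { $map: {
            input: "$fulfill",
            as: "f",
            in: { $mergeObjects: [ "$$f", {
              tagSerialNumbers: { $map: {
                input: "$$f.tagSerialNumbers",
                as: "sn",
                in: { $toLong: "$$sn" }   // convert each serial number string
              } }
            } ] }
          } }
      } } ]
    );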
]
| Update the key data type from string to int64 | 2022-11-01T08:12:33.386Z | Update the key data type from string to int64 | 4,634 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "",
"text": "Hello Everyone, For the below mentioned document i would like to update the StatusDate from String to Date. Please help me in updating the DataType.{\n“_id”: {\n“$oid”: “635f7c5b2cc3505680eb3006”\n},\n“history”: [\n{\n“status”: {\n“statusId”: 1,\n“statusDate”: “2015-10-13 14:08:01”\n},\n“location”: {\n“locationId”: 21,\n“status”: “Shipped to Master”\n},\n“retailer”: {\n“retailerId”: 10003775,\n“retailerType”: {\n“typeId”: 3010,\n“name”: “SalesMasterRetailer”\n}\n}\n}\n]\n}",
"username": "Amarendra_Krishna"
},
{
"code": "",
"text": "That would be the same way as it was answered to you in",
"username": "steevej"
},
{
"code": "",
"text": "Hello,Thanks for quick response, i used the below mentioned query: db.InventoryHistory.update(\n{“history.status.statusDate”:{$type:“string”}},\n[{$set:{“history.status”:{$map:{\ninput:\"$history.status\",\nin:{$cond:{\nif:{$eq:[“string”,{$type:\"$$this.statusDate\"}]},\nthen:{$mergeObjects:[\"$$this\", {statusDate:{$toDate:\"$$this.statusDate\"}}]},\nelse:\"$$this\"\n}}\n}}}}])But post update, status changed from object to array. Please find the below mentioned document updated to, after updating the collection:\n{\n“_id”: {\n“$oid”: “636236bd2cc35012f45932de”\n},“hexTagId”: “918907048020000003E9”,\n“tagSerialNo”: “137438954473”,\n“history”: [\n{\n“status”: [\n{\n“statusId”: 1,\n“status”: “INVENTORYNEW”,\n“statusDate”: {\n“$date”: {\n“$numberLong”: “1444745265000”\n}\n}\n}\n]\n}\n]\n}Here i don’t want it to be convert from object to array. Please let me know your inputs.",
"username": "Amarendra_Krishna"
},
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and update your sample documents and pipeline so it is easier to understand and to cut-n-paste for experimentation.Please do that for your other thread too.",
"username": "steevej"
},
{
"code": "",
"text": "Please find the below mentioned query is used:\ndb.InventoryHistory.\nupdate(\n{“history.status.statusDate”:{$type:“string”}},\n[\n{$set:{“history.status”:{$map:{\ninput:\"$history.status\",\nin:{\n$cond:{\nif:{$eq:[“string”,{$type:\"$$this.statusDate\"}]},\nthen:{$mergeObjects:[\"$$this\", {statusDate:{$toDate:\"$$this.statusDate\"}}]},\nelse:\"$$this\"\n}\n}}}}\n}\n]\n)But post update, status changed from object to array. Please find the below mentioned document after updating the collection:{\n“_id”:{\n“$oid”:“636236bd2cc35012f45932de”\n},\n“hexTagId”:“918907048020000003E9”,\n“tagSerialNo”:“137438954473”,\n“history”:[\n{\n“status”:[\n{\n“statusId”:1,\n“status”:“INVENTORYNEW”,\n“statusDate”:{\n“$date”:{\n“$numberLong”:“1444745265000”\n}\n}\n}\n]\n}\n]\n}Please let me know how to update the collection, without changing into array.",
"username": "Amarendra_Krishna"
},
{
"code": "",
"text": "As mentioned in your other post.Sorry, but you published exactly the same badly formatted documents and code.Pleaseread Formatting code and log snippets in posts and update your sample documents and pipeline so it is easier to understand and to cut-n-paste for experimentation.",
"username": "steevej"
},
{
"code": "Please find the below mentioned query is used:\n\ndb.InventoryHistory.\nupdate(\n\t\t{“history.status.statusDate”:{$type:“string”}},\n\t\t\t[\n\t\t\t\t{$set:{“history.status”:{$map:{\n\t\t\t\t\tinput:\"$history.status\",\n\t\t\t\t\t\tin:{\n\t\t\t\t\t\t\t$cond:{\n\t\t\t\t\t\t\t\tif:{$eq:[“string”,{$type:\"$$this.statusDate\"}]},\n\t\t\t\t\t\t\t\tthen:{$mergeObjects:[\"$$this\", {statusDate:{$toDate:\"$$this.statusDate\"}}]},\n\t\t\t\t\t\t\t\telse:\"$$this\"\n\t\t\t\t\t\t\t\t }\n\t\t\t\t\t\t\t}}}}\n\t\t\t\t}\n\t\t\t]\n\t )\n\nBut post update, status changed from object to array. Please find the below mentioned document after updating the collection:\n\n{\n \"_id\":{\n \"$oid\":“636236bd2cc35012f45932de”\n },\n \"hexTagId\":“918907048020000003E9”,\n \"tagSerialNo\":“137438954473”,\n \"history\":[\n {\n \"status\":[\n {\n \"statusId\":1,\n \"status\":\"INVENTORYNEW\",\n \"statusDate\":{\n \"$date\":{\n \"$numberLong\":“1444745265000”\n }\n }\n }\n ]\n }\n ]\n}\n\nPlease let me know how to update the collection, without changing into array.",
"text": "",
"username": "Amarendra_Krishna"
},
{
"code": "$set:{“history.status”:{$map:{...status (is) changed from object to array",
"text": "If you look at the $map documentation, you will see itreturns an array with the applied resultsBy doing$set:{“history.status”:{$map:{...you explicitly ask to $set history.status to an array. So it is totally normal thatstatus (is) changed from object to arrayMost likely you do not $set/$map the appropriate field. But since your sample input document is not formatted correctly we are not to cut-n-paste it to experiment.",
"username": "steevej"
}
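A hedged sketch of the approach steevej hints at, untested against the real collection: $map over the history array and rebuild each element's status object (an object, not an array) with the converted date:

    db.InventoryHistory.updateMany(
      { "history.status.statusDate": { $type: "string" } },
      [ { $set: {
          history: { $map: {
            input: "$history",
            as: "h",
            in: { $mergeObjects: [ "$$h", {
              status: { $mergeObjects: [ "$$h.status", {
                statusDate: { $toDate: "$$h.status.statusDate" } // string -> Date
              } ] }
            } ] }
          } }
      } } ]
    );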
]
| Update the Key in the nested array | 2022-11-01T07:32:09.767Z | Update the Key in the nested array | 4,576 |
null | [
"node-js",
"mongoose-odm",
"serverless"
]
| [
{
"code": " {\n \"errorType\": \"Runtime.UnhandledPromiseRejection\",\n \"errorMessage\": \"MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\",\n \"reason\": {\n \"errorType\": \"MongooseServerSelectionError\",\n \"errorMessage\": \"Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\",\n \"message\": \"Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\",\n \"reason\": {\n \"type\": \"ReplicaSetNoPrimary\",\n \"setName\": null,\n \"maxSetVersion\": null,\n \"maxElectionId\": null,\n \"servers\": {},\n \"stale\": false,\n \"compatible\": true,\n \"compatibilityError\": null,\n \"logicalSessionTimeoutMinutes\": null,\n \"heartbeatFrequencyMS\": 10000,\n \"localThresholdMS\": 15,\n \"commonWireVersion\": null\n },\n \"stack\": [\n \"MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\",\n \" at NativeConnection.Connection.openUri (/var/task/node_modules/mongoose/lib/connection.js:846:32)\",\n \" at /var/task/node_modules/mongoose/lib/index.js:351:10\",\n \" at /var/task/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:5\",\n \" at new Promise (<anonymous>)\",\n \" at promiseOrCallback (/var/task/node_modules/mongoose/lib/helpers/promiseOrCallback.js:30:10)\",\n \" at Mongoose._promiseOrCallback (/var/task/node_modules/mongoose/lib/index.js:1149:10)\",\n \" at Mongoose.connect (/var/task/node_modules/mongoose/lib/index.js:350:20)\",\n \" at Object.<anonymous> (/var/task/build/app.js:172:40)\",\n \" at Module._compile (internal/modules/cjs/loader.js:999:30)\",\n \" at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10)\",\n \" at Module.load (internal/modules/cjs/loader.js:863:32)\",\n \" at Function.Module._load (internal/modules/cjs/loader.js:708:14)\",\n \" at Module.require (internal/modules/cjs/loader.js:887:19)\",\n \" at require (internal/modules/cjs/helpers.js:74:18)\",\n \" at _tryRequire (/var/runtime/UserFunction.js:75:12)\",\n \" at _loadUserApp (/var/runtime/UserFunction.js:95:12)\"\n ]\n },\n \"promise\": {},\n \"stack\": [\n \"Runtime.UnhandledPromiseRejection: MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. 
Make sure your current IP address is on your Atlas cluster's IP whitelist: https://docs.atlas.mongodb.com/security-whitelist/\",\n \" at process.<anonymous> (/var/runtime/index.js:35:15)\",\n \" at process.emit (events.js:314:20)\",\n \" at process.EventEmitter.emit (domain.js:483:12)\",\n \" at processPromiseRejections (internal/process/promises.js:209:33)\",\n \" at processTicksAndRejections (internal/process/task_queues.js:98:32)\"\n ]\n}\nimport express from \"express\";\nimport serverless from \"serverless-http\";\nimport mongoose, { Error } from \"mongoose\";\nimport { APIGatewayProxyHandler } from 'aws-lambda';\nimport { AppSettings } from \"./constants/constants\";\n\nclass App {\n\n public app: express.Application;\n public mongoUrl: string = AppSettings.MONGODBURL;\n\n constructor() {\n this.setupDBConn();\n }\n async setupDBConn() {\n this.app = express();\n }\n\n private async mongoSetup() {\n await mongoClient;\n console.log('MONGO connection successfully made...');\n }\n}\nconst mongoClient = mongoose.connect(AppSettings.MONGODBURL, { useNewUrlParser: true, useUnifiedTopology: true, useCreateIndex: true, useFindAndModify: false });\nexport const handler: APIGatewayProxyHandler = serverless(new App().app);\n",
"text": "I am using node lambda which is a api server that accepts request from API gateway. The lambda connects to mongo db atlas to pull data from the db and then returns it in the response. For the most part it works fine with no issues but randomly it runs into mongo atlas connection errors:My mongo DB atlas instance allows ip addresses from everywhere: 0.0.0.0.This is how I am making my mongo connection:I am also using mongoose version 5.12.8",
"username": "Shawn_Varughese"
},
{
"code": "",
"text": "Hi @Shawn_Varughese and welcome in the MongoDB Community !Are you creating a new connection (==mongoClient) for each Lambda or your MongoDB Connections are shared across multiple (all) lambdas?I give you a little hint, “yes” is a very bad answer !Check out this doc where they explain how to set up the connection correctly:Using a private endpoint would also be a plus (instead of 0.0.0.0/0).Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks for this and this is helpful I did go through this and we do have the connection outside the lambda function and the connection is shared in the code we have the mongoClient outside the App class and the serverless function just waits for the mongo connection and reuses it. Unfortunately we are still seeing this connection issue.Any other ideas on why we might be seeing this?",
"username": "Shawn_Varughese"
},
{
"code": "",
"text": "I don’t understand why you would get this error if the connection was already established and functional and if your cluster is healthy on the other hand. This error message makes me think that the connection is being created at that moment.If it’s really an occasional error, maybe this is happening when you have a maintenance on the cluster? Can you link this to an event in Atlas?\nBut in theory, it’s supposed to be transparent for the clients, unless something is misconfigured.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Interesting I am not sure that could be possible. I dont see any open alerts in Atlas is that what you were asking for? To give you more context our platform is microservices based so we have 8 microservices which are all separate lambda functions and each connect to atlas to read/write to the database.You dont see this as a problem right?",
"username": "Shawn_Varughese"
},
{
"code": "",
"text": "I dont see any open alerts in Atlas is that what you were asking for?Yes. Then it’s something else.You dont see this as a problem right?No, sounds like I would do exactly the same thing. Given that your connection pool is centralised and reused correctly by all the lambdas, maybe you could increase the size of the pool? If you have many lambdas running in parallel, I guess you need a connection pool large enough to accommodate all the queries.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "So are you all 8 microservices should share the same connection pool? How would i be able to do that if they are all separate lambda functionsOr each lambda function shares its own connection pool and just does not create a new connection each invocation?",
"username": "Shawn_Varughese"
},
{
"code": "",
"text": "Ado proposed an implementation here based on a cache:Learn how to write serverless functions with AWS Lambda and MongoDBIt’s fine if you use like 8 different pools as long as it’s a small number, it’s OK. What’s not OK is one connection to the cluster per lambda execution. That’s definitely a problem.Your cluster can only support a limited number of connections. For an M10 it’s 1500 per node for example.https://www.mongodb.com/docs/atlas/reference/atlas-limits/.Keep an eye on the monitoring to see if you are getting close to that limit.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
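A rough sketch of the cached-connection pattern referenced above (not the linked article's exact code): keep the client outside the handler so warm Lambda invocations reuse the existing pool. The URI variable, database and collection names are placeholders:

    const { MongoClient } = require("mongodb");

    let cachedClient = null; // survives across warm invocations of this Lambda

    async function connectToDatabase(uri) {
      if (cachedClient) {
        return cachedClient;
      }
      cachedClient = await new MongoClient(uri).connect();
      return cachedClient;
    }

    exports.handler = async (event, context) => {
      // do not keep the invocation open waiting for the pooled connection to close
      context.callbackWaitsForEmptyEventLoop = false;
      const client = await connectToDatabase(process.env.MONGODB_URI);
      const doc = await client.db("test").collection("items").findOne({});
      return { statusCode: 200, body: JSON.stringify(doc) };
    };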
{
"code": "ERROR\tUnhandled Promise Rejection \t{\n \"errorType\": \"Runtime.UnhandledPromiseRejection\",\n \"errorMessage\": \"MongoNetworkTimeoutError: connection timed out\",\n \"reason\": {\n \"errorType\": \"MongoNetworkTimeoutError\",\n \"errorMessage\": \"connection timed out\",\n \"name\": \"MongoNetworkTimeoutError\",\n \"stack\": [\n \"MongoNetworkTimeoutError: connection timed out\",\n \" at connectionFailureError (/var/task/node_modules/mongodb/lib/core/connection/connect.js:362:14)\",\n \" at TLSSocket.<anonymous> (/var/task/node_modules/mongodb/lib/core/connection/connect.js:330:16)\",\n \" at Object.onceWrapper (events.js:420:28)\",\n \" at TLSSocket.emit (events.js:314:20)\",\n \" at TLSSocket.EventEmitter.emit (domain.js:483:12)\",\n \" at TLSSocket.Socket._onTimeout (net.js:483:8)\",\n \" at listOnTimeout (internal/timers.js:554:17)\",\n \" at processTimers (internal/timers.js:497:7)\"\n ]\n },\n \"promise\": {},\n \"stack\": [\n \"Runtime.UnhandledPromiseRejection: MongoNetworkTimeoutError: connection timed out\",\n \" at /var/runtime/index.js:35:15\",\n \" at /opt/nodejs/node_modules/@lumigo/tracer/dist/tracer/tracer.js:265:37\",\n \" at processTicksAndRejections (internal/process/task_queues.js:97:5)\"\n ]\n}\n",
"text": "Thanks for this i was able to implementing the caching and it seems like it working well so far! Now after implementing caching i am facing a new error:This doesn’t happen all the time just randomly I assume its because the cached db connection has timed out. Any advise here?",
"username": "Shawn_Varughese"
},
{
"code": "",
"text": "@MaBeuLux88 any thoughts on this error i am getting now after implementing the cacheing?",
"username": "Shawn_Varughese"
},
{
"code": "connectTimeoutMS",
"text": "Hey @Shawn_Varughese,So sorry for the terrible delay to answer. I had a baby on May 1st so I was a little distracted.To be honest, I have no idea at all. If it’s happening “randomly”, is this happening maybe after your lambda wasn’t triggered for a long time?What timeout have your defined? Did you try to increase them a bit so maybe this gives more opportunity for this connection to land?Search for “timeout” in here and try to increase the relevant one maybe? I assume connectTimeoutMS here, no?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
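A hedged example of passing larger driver timeouts via the connection string; the host and the values are purely illustrative, not a recommendation:

    const { MongoClient } = require("mongodb");

    const uri =
      "mongodb+srv://user:pass@cluster0.example.mongodb.net/?retryWrites=true&w=majority" +
      "&connectTimeoutMS=30000&socketTimeoutMS=60000&serverSelectionTimeoutMS=30000";

    const client = new MongoClient(uri); // pool reuses these options for every socket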
{
"code": "",
"text": "@MaBeuLux88 ,Not a problem at all congratulations on your baby!!!Yeah I tried changing socketTimeout but that did not seem to help. Do you think the connectTimeout might be better? I am stumped on whats causing this.",
"username": "Shawn_Varughese"
},
{
"code": "connectTimeout",
"text": "Thanks !connectTimeout would only apply to the very first connection that is then cached for later use by the lambdas. So I think this isn’t the one you are looking for but… Give it a try?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "We’ve been suffering from the exact same problem - occasional timeouts on connection to Mongo from AWS Lambda. It feels like an event / issue occurring on the MongoDb / Atlas instance - but nothing in the Atlas event log to indicate as such.Our lambda timeout is being hit at 60 seconds - but basically Atlas is just not giving a connection - without error other than via connection timeout.We work in c# and are establishing our MongoClient connections via DI - ie outside of each method function call - just on initialisation.",
"username": "Stewart_Snow"
},
{
"code": "waitQueueTimeoutMS",
"text": "Hi @Stewart_Snow and welcome in the MongoDB Community !Did you try to add some options to the C# driver connection like increase the timeouts, etc ? Maybe waitQueueTimeoutMS can help?Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "https://www.mongodb.com/docs/manual/reference/connection-string/#connection-pool-optionsHey - we can try that - but it feels wrong. In this case the database that we’re connecting to (running in Atlas) is absolutely tiny - barely 50kb of data. At max there is 10 connections open to it. It’s just so small / lightweight that it really shouldn’t be behaving as it is - it makes no sense.I can understand needing big timeouts - some sort of heavily load scenario’s - but we’re dealing with such a low-load scenario it’s strange. Timing out at 60 seconds or so - just to get the connection to the DB seems very strange indeed.We’re having a go at trying a slightly different pattern for the initialization of our mongo connection from our Lambda function to see if that makes any difference.",
"username": "Stewart_Snow"
},
{
"code": "",
"text": "I’m wondering if there is a limit to keep the connection alive if it’s not used for a long time. Please let me know if you find a solution because I don’t have a test environment up to test it at the moment.Also you read that one right?",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Will do - and yes aware of that doc - cheers!",
"username": "Stewart_Snow"
},
{
"code": "1.FromAsyncCoreLogic(IAsyncResult iar, Func1 endAction, Task",
"text": "Okay - just following up here. We tweaked our initialisation code, so that we’re 100% inline with mongo / lambda recommendations - but it’s still producing the same issue on occasion.Each time it occurs we get two close errors:Error 1\nAn unhandled exception has occurred while executing the request.System.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = ReadPreferenceServerSelector{ ReadPreference = { Mode : Primary } }, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : “1”, ConnectionMode : “ReplicaSet”, Type : “ReplicaSet”, State : “Connected”, Servers : [{ ServerId: “{ ClusterId : 1, EndPoint : “Unspecified/XXXXXX” }”, EndPoint: “Unspecified/XXXXXXX”, ReasonChanged: “Heartbeat”, State: “Connected”, ServerVersion: 5.0.9, TopologyVersion: { “processId” : ObjectId(“62aa3b6c7c78a0c51037b843”), “counter” : NumberLong(4) }, Type: “ReplicaSetSecondary”, Tags: “{ region : US_EAST_1, provider : AWS, nodeType : ELECTABLE, workloadType : OPERATIONAL }”, WireVersionRange: “[0, 13]”, LastHeartbeatTimestamp: “2022-06-27T10:25:32.0295404Z”, LastUpdateTimestamp: “2022-06-27T10:25:32.0295416Z” }, { ServerId: “{ ClusterId : 1, EndPoint : “Unspecified/XXXXX” }”, EndPoint: “Unspecified/XXXXXX”, ReasonChanged: “Heartbeat”, State: “Disconnected”, ServerVersion: , TopologyVersion: , Type: “Unknown”, HeartbeatException: \"MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. —> System.TimeoutException: Timed out connecting to X.X.X.X. Timeout was 00:00:30. at MongoDB.Driver.Core.Connections.TcpStreamFactory.ConnectAsync(Socket socket, EndPoint endPoint, CancellationToken cancellationToken) at MongoDB.Driver…Error 2\n[Error] An unhandled exception has occurred while executing the request.MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server. —> System.IO.IOException: Unable to read data from the transport connection: Connection reset by peer. —> System.Net.Sockets.SocketException (104): Connection reset by peer — End of inner exception stack trace — at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken) at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.GetResult(Int16 token) at System.Net.FixedSizeReader.ReadPacketAsync(Stream transport, AsyncProtocolRequest request) at System.Net.Security.SslStream.InternalEndProcessAuthentication(LazyAsyncResult lazyResult) at System.Net.Security.SslStream.EndProcessAuthentication(IAsyncResult result) at System.Net.Security.SslStream.EndAuthenticateAsClient(IAsyncResult asyncResult) at System.Net.Security.SslStream.<>c.b__64_2(IAsyncResult iar) at System.Threading.Tasks.TaskFactory1.FromAsyncCoreLogic(IAsyncResult iar, Func2 endFunction, Action1 endAction, Task1 promise, Boolean requiresSynchronization)— End of stack trace from previous location where exception was thrownWe’re using mongo+srv style connection string plus the following options:\nretryWrites=true&w=majority…? We’re at a loss - makes no sense. Any suggestions??",
"username": "Stewart_Snow"
},
{
"code": "",
"text": "Could it be a “genuine” error like there was actually a connection problem between AWS and Atlas?Is this error maybe connected to a maintenance in Atlas ? (Scheduled maintenance, upgrade minor version, auto-scaling, etc)?How do you connect Lambdas to Atlas? Private Link? Peering? (not sure what are the solutions available to be honest).",
"username": "MaBeuLux88"
}
]
| Using MongoDB Atlas on Lambda throws Could not connect to any servers in MongoDB Atlas cluster error | 2022-04-19T01:35:25.931Z | Using MongoDB Atlas on Lambda throws Could not connect to any servers in MongoDB Atlas cluster error | 11,096 |
null | [
"queries",
"node-js",
"mongoose-odm"
]
| [
{
"code": "testmain().catch((err) => console.log(err));\n\nasync function main() {\n const conn = await connectDb('sample_airbnb');\n\n const ps = new mongoose.Schema({\n type: String,\n productNo: String,\n });\n\n const customerDocument = conn.model('customer_coll', ps);\n const customerObj = new customerDocument({ type: 'bbb', test: 'aaa' });\n\n await customerObj.validate().catch((error) => {\n console.log(error);\n });\n\n await customerObj.save();\n const t = await customerDocument.find({});\n console.log(JSON.stringify(t, null, 2));\n}\n]\n {\n \"_id\": \"636545a81cabf00b37f5a1cc\",\n \"type\": \"bbb\",\n \"__v\": 0\n }\n]\n",
"text": "Dear all =)In the below code, I try to write a key/value test that isn’t defined in the Mongoose schema. What I expected was to get a fatal error, but instead Mongoose just ignores that key/value. No errors thrown.QuestionHow can I get Mongosse to throw an error when data doesn’t match the schema?OutputHugs,\nSandra =)",
"username": "Sandra_Schlichting"
},
{
"code": "",
"text": "Found it =)\nhttps://mongoosejs.com/docs/guide.html#strict",
"username": "Sandra_Schlichting"
},
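A small sketch of the option Sandra found: with strict set to "throw" in the schema options, assigning a path that is not in the schema raises an error instead of being silently dropped. The field names reuse the ones from the question:

    const mongoose = require("mongoose");

    const ps = new mongoose.Schema(
      { type: String, productNo: String },
      { strict: "throw" } // unknown paths now throw instead of being ignored
    );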
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Can Mongoose throw an error when data doesn't match schema? | 2022-11-04T17:42:55.460Z | Can Mongoose throw an error when data doesn’t match schema? | 4,255 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "{ $addFields: {\n IncomeF: { $toInt: {$cond: { if: {$ne: [ \"$Income\", \"\" ] }, then: \"$Income\", else: 0 } } }\n",
"text": "I have an aggregation pipeline that successfully converts a string numeric values to Int:{ $addFields: { IncomeC: { $toInt: “$Income” }However once a condition is added it returns “Null” for all values:$cond without conversion returns correct numeric values so the problem must be with $toint.Would much appreciate any hints.",
"username": "Victor_Mudretsov"
},
{
"code": "{ \n $addFields: {\n IncomeF: { \n $cond: { \n if: { $ne: [ \"$Income\", \"\" ] },\n then: { $toInt: \"$Income\" }, \n else: 0 \n } \n } \n}\n",
"text": "Try this:",
"username": "NeNaD"
},
{
"code": "use('test');\n\n\ndb.foo.drop();\ndb.foo.insertOne({ Income: \"123\" });\ndb.foo.insertOne({ Income: \"\" });\ndb.foo.aggregate([\n { $addFields: {\n IncomeF: { $toInt: {$cond: { if: {$ne: [ \"$Income\", \"\" ] }, then: \"$Income\", else: 0 } } }\n }}\n]);\n// output\n[\n {\n \"_id\": {\n \"$oid\": \"6369591efeaec7258dbf1821\"\n },\n \"Income\": \"123\",\n \"IncomeF\": 123\n },\n {\n \"_id\": {\n \"$oid\": \"6369591efeaec7258dbf1822\"\n },\n \"Income\": \"\",\n \"IncomeF\": 0\n }\n]\n",
"text": "@Victor_Mudretsov what version of MongoDB are you using and how are you querying this data (shell, driver, other)? Can you share a sample document that is returning a correct value without the condition so I can test, because my tests appear to produce the desired result:",
"username": "alexbevi"
},
{
"code": "",
"text": "Thanks a lot for your responses, I have sorted it by moving things around a little.",
"username": "Victor_Mudretsov"
}
]
| String to Int conversion results in Null once condition is added | 2022-11-07T18:47:37.277Z | String to Int conversion results in Null once condition is added | 1,132 |
null | [
"dot-net",
"field-encryption"
]
| [
{
"code": "MongoDB.DriverMongoDB.Driver.CoreClientEncryptionMongoDB.Libmongocrypt.LibraryLoader+FunctionNotFoundException: mongocrypt_setopt_aes_256_ecb\n at MongoDB.Libmongocrypt.LibraryLoader.GetFunction[T](String name)\n at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)\n at System.Lazy`1.ExecutionAndPublication(LazyHelper executionAndPublication, Boolean useDefaultConstructor)\n at System.Lazy`1.CreateValue()\n at MongoDB.Libmongocrypt.CryptClientFactory.Create(CryptOptions options)\n at MongoDB.Driver.Encryption.ClientEncryption..ctor(ClientEncryptionOptions clientEncryptionOptions)\nClientEncryptionvar clientEncryptionOptions = new ClientEncryptionOptions(DataKeysMongoClient, _keyVaultNamespace, KmsProviders);\n\nreturn new ClientEncryption(clientEncryptionOptions);\nDataKeysMongoClientIMongoClient_keyVaultNamespaceKmsProviders2.15.12.18.0",
"text": "We are experiencing a degradation between version 2.15.1 and the latest 2.18.0 version.\nWe have upgraded both MongoDB.Driver and MongoDB.Driver.Core.After upgrade, the ClientEncryption constructor crashes with the following stack trace:We initiate the ClientEncryption the following way:Where DataKeysMongoClient is a IMongoClient, _keyVaultNamespace is the collection namespace and KmsProviders is dictionary to hold the encryption relevant data.Again, all of this worked properly in version 2.15.1 and started happening once we upgraded to the latest 2.18.0 version.\nAny help will be very appreciated.",
"username": "Ilia_Shkolyar"
},
{
"code": "mongocrypt_setopt_aes_256_ecbnm -g libmongocrypt.dylib | grep aes_256_ecb",
"text": "Hey @Ilia_Shkolyar , this error happens because the driver can’t find a method mongocrypt_setopt_aes_256_ecb in the embedded native C library called libmongocrypt.so/.dylib. This method was added in the 2.16 release and still presents in that library. So, it’s unclear why you see this exception. Can you please check that library (libmongocrypt.so - for linux, libmongocrypt.dylib - for macos) in a bin folder and do the following:\nnm -g libmongocrypt.dylib | grep aes_256_ecb ?\nBest regards, Dima",
"username": "Dmitry_Lukyanov"
}
]
| LibMongoCrypt fails to create client due to LibraryLoader exception | 2022-11-07T19:33:40.931Z | LibMongoCrypt fails to create client due to LibraryLoader exception | 1,611 |
null | []
| [
{
"code": "class Restaurant : RealmObject {\n @PrimaryKey\n var _id: String = \"\"\n var userID: String = \"\"\n var name: String = \"\"\n var adresa: String? = null\n var telefon: Long? = null\n var meniu: List<Product> = emptyList()\n}\n",
"text": "Hello. Can someone reproduce the Schema for my RealmObject? The schema that was produced when Developer Mode was ON didn’t got my field “meniu”. Also, what is the best way to have an array of RealmObjects as a field of one RealmObject?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Can you post a link to your applications logs to see why the meniu field did not make it (it should have)",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "\nimage1251×140 5.67 KB\n",
"username": "Ciprian_Gabor"
},
{
"code": "class Product : RealmObject {\n @PrimaryKey\n var _id: String = \"\"\n var name: String = \"\"\n var category: String = \"\"\n var description: String? = \"\"\n var price: Float = 0F\n var imagine: String? = null\n}\n\nclass Restaurant : RealmObject {\n @PrimaryKey\n var _id: String = \"\"\n var userID: String = \"\"\n var name: String = \"\"\n var adresa: String? = null\n var telefon: Long? = null\n var meniu: RealmList<Product>? = null\n}\n suspend fun AddRestaurantTest() {\n withContext(Dispatchers.Default) {\n realm.write {\n val restaurant = Restaurant().apply {\n this.userID=\"6360f373a6f3933e4c9cf6f3\"\n this.name = \"name\"\n this.adresa = \"location\"\n this.telefon = \"07500000\".toLong()\n val listt = realmListOf(\n Product().apply {\n this.name = \"cartofi\"\n this.category = \"categorie\"\n this.price = \"10.0\".toFloat()\n this.description = \"daaaaa\"\n },\n Product().apply {\n this.name=\"rata cu cartogi\"\n this.category=\"carne\"\n this.price=\"25.0\".toFloat()\n this.description=\"daaaaa\"\n } )\n this.meniu = listt\n }\n copyToRealm(restaurant)\n }\n }\n }\n",
"text": "this is my class declaration:got this error now:\n\nimage1254×100 18.4 KB\nProduct and Restaurant are empty collections",
"username": "Ciprian_Gabor"
},
{
"code": " suspend fun AddRestaurantTest() {\n withContext(Dispatchers.Default) {\n val ob1 = Product().apply {\n this.name = \"cartofi\"\n this.category = \"categorie\"\n this.price = \"10.0\".toFloat()\n this.description = \"daaaaa\"\n }\n val ob2 = Product().apply {\n this.name=\"rata cu cartogi\"\n this.category=\"carne\"\n this.price=\"25.0\".toFloat()\n this.description=\"daaaaa\"\n }\n realm.write {\n val restaurant = Restaurant().apply {\n this.userID=\"6360f373a6f3933e4c9cf6f3\"\n this.name = \"name\"\n this.adresa = \"location\"\n this.telefon = \"07500000\".toLong()\n }\n val myList=copyToRealm(restaurant)\n myList.meniu.add(ob1)\n myList.meniu.add(ob2)\n }\n }\n }\n",
"text": "Fixed with thisBut only one single Product was added to the collection.\nAlso the meniu array field is empty on the Restaurants collection\n",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Sorry, so have you resolved the issue? It sounds like your SDK model thought the meniu field was a list of links but the json schema in the UI only thought it was a list of strings. One part of it is that your previous schema for Product was incorrect but I get the sense you know that since you fixed it",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "the schema was produced by atlas with Developer mode ON. Why its a list of strings since I have a list of Products?",
"username": "Ciprian_Gabor"
},
{
"code": "class Product : EmbeddedRealmObject {\n var name: String = \"\"\n var category: String = \"\"\n var description: String? = \"\"\n var price: Float = 0F\n var imagine: String? = null\n}\n\nclass Restaurant : RealmObject {\n @PrimaryKey\n var _id: ObjectId = ObjectId.create()\n var userID: String = \"\"\n var name: String = \"\"\n var adresa: String? = null\n var telefon: Long? = null\n var meniu: RealmList<Product> = realmListOf()\n}\n",
"text": "Ok, so I have done some changes to the modelThe Restaurant schema is good but I dont understand why the Product Schema was not produced. Also, Product collection was not created, probably because is embedded and is created inside Meniu field. I am right? Also, the data modeling is done how is supposed to be?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hello @Ciprian_Gabor, When you have developer mode ON, you can create all your schemas from your client side. I infer you are creating Android App, so you can create schema within the app itself, and when you run the app, the schema will be populated on the cloud.All the fields may or may not show on the cloud if the data is not yet added for them. The schema of embedded classes will appear as and when you add data to those fields on the device.If your data is not showing, either the schema needs to change or the way you are adding data. Could you share the cloud realm app id?I look forward to your response.Cheers, \nHenna",
"username": "henna.s"
},
{
"code": "",
"text": "This is the appId: afterfoodsuceavaapp-soumaNeither the collection or schema of Product(EmbeddedRealmObject) was added. But I can add/remove/update Products objects inside Restaurant.",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "afterfoodsuceavaapp-soumaHey, if you look at your schema you will see that the schema for product is embedded within the schema for restaurant. In MongoDB / Realm there are two ways to model your data:Relationships: the two schema represent different collections. The primary keys (string in your case) are used to denote the relationship between the schemas for the field you define.Embedded: one schema “owns” the other and they both map to the same collection. Embedding documents is a powerful paradigm in MongoDB.I would recommend reading through:Learn the key points needed to effectively model your database schemas\nReading time: 6 min read\n",
"username": "Tyler_Kaye"
},
{
"code": "",
"text": "Thank you for the info!",
"username": "Ciprian_Gabor"
}
]
| Schema is not the same as RealmObject | 2022-11-01T18:26:57.179Z | Schema is not the same as RealmObject | 2,081 |
null | [
"dot-net"
]
| [
{
"code": "var filterScore = Builders<Movie>.Filter.Gt(p => p.Score, 5);\nvar filterTitle = Builders<Movie>.Filter.Regex(p => p.Title, \"Summer\");\nvar filterGenre = Builders<Movie>.Filter.Eq(p => p.Genre, Genre.Comedy);\n \n// MQL is dispalyed for each filter variable\nvar filterCombined = filterTitle | filterScore | filterGenre;\n\n// MQL for the combined filter is displayed\nmoviesCollection.Find(filterCombined);\n// MQL is displayed for the Fluent API methods\n_ = moviesCollection\n .Find(u => u.Producer.Contains(\"Nolan\"))\n .SortBy(u => u.Score)\n .ThenBy(u => u.Title);\n// MQL is displayed for LINQ expressions using query syntax \nvar queryable = from movie in moviesCollection\n group movie by movie.Genre into g\n select g;\n// IndexKeys builder is analyzed \n_ = Builders<User>.IndexKeys.Ascending(x => x.Age);\n_ = Builders<Shape>.IndexKeys.Geo2D(u => u.Point);\n// Projection builder is analyzed \n_ = Builders<User>.Projection.Include(u => u.Age);\n_ = Builders<Person>.Projection.Expression(u => u.Address);\n",
"text": "This is the general availability release for the 1.1.0 version of the analyzer.The main new features in 1.1.0 include:The full list of JIRA issues resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20VS%20AND%20fixVersion%20%3D%201.1.0",
"username": "Boris_Dogadov"
},
{
"code": "",
"text": "This topic was automatically closed after 90 days. New replies are no longer allowed.",
"username": "system"
}
]
| MongoDB .NET Analyzer 1.1.0 Released | 2022-11-07T18:18:51.408Z | MongoDB .NET Analyzer 1.1.0 Released | 1,413 |
null | [
"node-js",
"data-modeling",
"java",
"mongoose-odm",
"connecting"
]
| [
{
"code": "{\"t\":{\"$date\":\"2022-11-04T03:59:52.687+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"WTCheckpointThread\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1667534392:687011][1241:0x7fcbeeed4700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 635431, snapshot max: 635431 snapshot count: 0, oldest timestamp: (1667534386, 3) , meta checkpoint timestamp: (1667534391, 3) base write gen: 3910345\"}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.449+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5366\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359135}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.449+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5367\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359132}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.449+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5370\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359130}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.449+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5366\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:33762\",\"connectionId\":5366}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.449+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5366\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:33762\",\"connectionId\":5366,\"connectionCount\":67}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.449+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5367\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:34409\",\"connectionId\":5367}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.449+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5370\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:21592\",\"connectionId\":5370}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.449+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5367\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:34409\",\"connectionId\":5367,\"connectionCount\":66}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.449+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5370\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:21592\",\"connectionId\":5370,\"connectionCount\":65}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.961+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5373\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359133}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.961+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5373\",\"msg\":\"Error sending response to client. 
Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:49036\",\"connectionId\":5373}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:22.961+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5373\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:49036\",\"connectionId\":5373,\"connectionCount\":64}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5371\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359134}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5374\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359144}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5377\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359143}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5372\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359128}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5371\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:6226\",\"connectionId\":5371}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5371\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:6226\",\"connectionId\":5371,\"connectionCount\":63}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5372\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:53907\",\"connectionId\":5372}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5377\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:38244\",\"connectionId\":5377}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5374\",\"msg\":\"Error sending response to client. 
Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:35452\",\"connectionId\":5374}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5372\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:53907\",\"connectionId\":5372,\"connectionCount\":62}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5374\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:35452\",\"connectionId\":5374,\"connectionCount\":61}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.473+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5377\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:38244\",\"connectionId\":5377,\"connectionCount\":60}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.985+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5376\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359145}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.985+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5365\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359131}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.985+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5376\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:36276\",\"connectionId\":5376}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.985+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5365\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:20238\",\"connectionId\":5365}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.985+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5376\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:36276\",\"connectionId\":5376,\"connectionCount\":59}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:23.985+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5365\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:20238\",\"connectionId\":5365,\"connectionCount\":58}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:24.497+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5345\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359179}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:24.497+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5346\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359173}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:24.497+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5341\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359178}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:24.497+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5345\",\"msg\":\"Error sending response to client. 
Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:39462\",\"connectionId\":5345}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:24.497+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5345\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:39462\",\"connectionId\":5345,\"connectionCount\":57}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:24.497+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5346\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:17341\",\"connectionId\":5346}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:24.497+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5346\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:17341\",\"connectionId\":5346,\"connectionCount\":56}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:24.497+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5341\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:24126\",\"connectionId\":5341}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:24.497+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5341\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:24126\",\"connectionId\":5341,\"connectionCount\":55}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5339\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359185}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5343\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359181}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5375\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359148}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5329\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359190}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5334\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359180}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5369\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359140}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5368\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359138}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5344\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359189}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5340\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359175}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", 
\"c\":\"-\", \"id\":20883, \"ctx\":\"conn5336\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359177}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5335\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359186}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5339\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:41580\",\"connectionId\":5339}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5369\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:39653\",\"connectionId\":5369}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5339\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:41580\",\"connectionId\":5339,\"connectionCount\":54}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5369\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:39653\",\"connectionId\":5369,\"connectionCount\":53}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5375\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:11970\",\"connectionId\":5375}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5375\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:11970\",\"connectionId\":5375,\"connectionCount\":52}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5329\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:61645\",\"connectionId\":5329}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5329\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:61645\",\"connectionId\":5329,\"connectionCount\":51}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5334\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:8001\",\"connectionId\":5334}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5334\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:8001\",\"connectionId\":5334,\"connectionCount\":50}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5368\",\"msg\":\"Error sending response to client. 
Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:62304\",\"connectionId\":5368}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.009+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5368\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:62304\",\"connectionId\":5368,\"connectionCount\":49}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.010+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5344\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:50864\",\"connectionId\":5344}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.010+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5340\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:5189\",\"connectionId\":5340}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.010+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5344\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:50864\",\"connectionId\":5344,\"connectionCount\":48}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.010+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5343\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:33639\",\"connectionId\":5343}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.010+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5340\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:5189\",\"connectionId\":5340,\"connectionCount\":47}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.010+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5343\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:33639\",\"connectionId\":5343,\"connectionCount\":46}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.010+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5336\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:52618\",\"connectionId\":5336}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.010+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5336\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:52618\",\"connectionId\":5336,\"connectionCount\":45}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.010+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5335\",\"msg\":\"Error sending response to client. 
Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:27084\",\"connectionId\":5335}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.010+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5335\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:27084\",\"connectionId\":5335,\"connectionCount\":44}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5380\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359201}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5379\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359197}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5342\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359182}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5338\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359183}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5337\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359184}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5378\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359198}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5330\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359191}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5380\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:56096\",\"connectionId\":5380}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5379\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:44869\",\"connectionId\":5379}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5380\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:56096\",\"connectionId\":5380,\"connectionCount\":43}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5379\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:44869\",\"connectionId\":5379,\"connectionCount\":42}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5378\",\"msg\":\"Error sending response to client. 
Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:6000\",\"connectionId\":5378}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5378\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:6000\",\"connectionId\":5378,\"connectionCount\":41}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5337\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:37815\",\"connectionId\":5337}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5337\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:37815\",\"connectionId\":5337,\"connectionCount\":40}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5338\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:9350\",\"connectionId\":5338}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5338\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:9350\",\"connectionId\":5338,\"connectionCount\":39}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.521+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5342\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:52443\",\"connectionId\":5342}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.522+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5342\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:52443\",\"connectionId\":5342,\"connectionCount\":38}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.522+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5330\",\"msg\":\"Error sending response to client. 
Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:11195\",\"connectionId\":5330}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:25.522+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5330\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:11195\",\"connectionId\":5330,\"connectionCount\":37}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.033+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5333\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359192}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.033+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5332\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359174}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.033+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5331\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359188}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.033+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5333\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:27547\",\"connectionId\":5333}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.033+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5332\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:22094\",\"connectionId\":5332}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.033+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5333\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:27547\",\"connectionId\":5333,\"connectionCount\":36}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.033+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5331\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:30563\",\"connectionId\":5331}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.033+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5332\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:22094\",\"connectionId\":5332,\"connectionCount\":35}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.033+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5331\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:30563\",\"connectionId\":5331,\"connectionCount\":34}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.395+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22402, \"ctx\":\"OplogCapMaintainerThread-local.oplog.rs\",\"msg\":\"WiredTiger record store oplog truncation finished\",\"attr\":{\"durationMillis\":3}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.545+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn5381\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":21359199}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.545+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn5381\",\"msg\":\"Error sending response to client. 
Ending connection from remote\",\"attr\":{\"error\":{\"code\":9001,\"codeName\":\"SocketException\",\"errmsg\":\"Connection timed out\"},\"remote\":\"14.96.11.16:18055\",\"connectionId\":5381}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:26.545+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn5381\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"14.96.11.16:18055\",\"connectionId\":5381,\"connectionCount\":33}}\n{\"t\":{\"$date\":\"2022-11-04T04:00:52.931+00:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22430, \"ctx\":\"WTCheckpointThread\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":\"[1667534452:931854][1241:0x7fcbeeed4700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 635497, snapshot max: 635497 snapshot count: 0, oldest timestamp: (1667534446, 3) , meta checkpoint timestamp: (1667534451, 3) base write gen: 3910345\"}}\n\n\n{\"t\":{\"$date\":\"2022-11-01T07:37:07.629+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22989, \"ctx\":\"conn3405\",\"msg\":\"Error sending response to client. Ending connection from remote\",\"attr\":{\"error\":{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"Connection reset by peer\"},\"remote\":\"14.97.11.70:11641\",\"connectionId\":3405}}\n",
"text": "Hi team,We are getting some issues in DB, DB servers suddenly disconnects from application servers after some time reconnect again. From past 3 days we are facing these issues. Below are the error logsPlease help me on this,Thanks .",
"username": "Lokesh_Reddy1"
},
{
"code": "",
"text": "The mongod server seems to work correctly. It looks like your client code or application is terminating abnormally. Do you have logs or screenshot that shows what happen on the client or application side. The mongod logs do not show anything abnormal on its side.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Suddenly disconnects DB server from application servers | 2022-11-04T06:04:05.798Z | Suddenly disconnects DB server from application servers | 5,080 |
null | []
| [
{
"code": "mongod --keyFile /usr/share/mongodb/certs/authkeyfile.pem --dbpath /var/lib/mongodb version v5.0.9\nBuild Info: {\n \"version\": \"5.0.9\",\n \"gitVersion\": \"6f7dae919422dcd7f4892c10ff20cdc721ad00e6\",\n \"openSSLVersion\": \"OpenSSL 1.1.1k FIPS 25 Mar 2021\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"rhel80\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\nstorage:\n dbPath: /var/lib/mongo\n journal:\n enabled: true\n\nprocessManagement:\n fork: true \n pidFilePath: /var/run/mongodb/mongod.pid \n timeZoneInfo: /usr/share/zoneinfo\n\nnet:\n port: 27017\n bindIp: localhost,127.0.0.1 \n net.bindIpAll setting.\n bindIpAll: true\n\nsecurity:\n authorization: enabled\n keyFile: /usr/share/mongodb/certs/authkeyfile.pem\n\nreplication:\n replSetName: \"rs0\"\nsystemctl start mongod{\"t\":{\"$date\":\"2022-10-17T09:24:22.878+01:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20254, \"ctx\":\"main\",\"msg\":\"Read security file failed\",\"attr\":{\"error\":{\"code\":30,\"codeName\":\"InvalidPath\",\"errmsg\":\"Error reading file /usr/share/mongodb/certs/authkeyfile.pem: Permission denied\"}}}\nls -lahZ /usr/share/mongodb/certs/authkeyfile.pem\n-r--------. 1 mongod mongod system_u:object_r:mongod_var_lib_t:s0 1.0K Oct 15 20:35 /usr/share/mongodb/certs/authkeyfile.pem\nmongod --keyFile /usr/share/mongodb/certs/authkeyfile.pem --dbpath /var/lib/mongo\n",
"text": "I am trying to add security in /etc/mongo.conf. After I added keyFile in con, systemctl can’t start mongod anymore and generate permission denied error in the log. However, it works at commend line under root usermongod --keyFile /usr/share/mongodb/certs/authkeyfile.pem --dbpath /var/lib/mongoHere are some details,My question is why systemctl can’t work, since all keyfile permissions are in place.Thanks",
"username": "Tom_Tm"
},
{
"code": "ls -al /var/lib/mongo/\nls -al /var/run/mongodb/\nls -al /tmp/mongodb-*\nps -aef | grep [m]ongo\nss -tlnp\nnet:\n port: 27017\n bindIp: localhost,127.0.0.1 \n net.bindIpAll setting.\n bindIpAll: true\n",
"text": "Please share the log file.The extract you supplied is an informational message (marked as \"s\":\"I\") and probably not the reason why it does fail with systemctl.Since it works starting as root, I suspect that you might have other files/directories that are now owned by root and cannot be modified/deleted by mongod user.Share the output ofHow do you terminate mongod that was started as root?I suspect that the following might cause some errorThe line net.bindIfAll setting. seems out of place.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for replying Steeve, the net.BindIPAll settings is ok, I actually missed a comment line when I copy and paste here.I still don’t know where the problem is, and I end up with re-install the rocky linux system and did the same configuration again, and it works fine now.",
"username": "Tom_Tm"
}
]
| Systemctl start mongod keyFile Permission denied | 2022-10-17T08:36:30.128Z | Systemctl start mongod keyFile Permission denied | 3,051 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "Hi All,I have two collections, one being Companies and the others being Projects. I am trying to write an aggregation function that first grabs all Companies with the status of “Client”, then from there write a pipeline that will return all filtered Companies where the company._id === project.companyId, as an Array of Objects. An example of the shortened Collections are below:Companies\n{\n_id: ObjectId(‘2341908342’),\ncompanyName: “Meta”,\naddress: “123 Facebook Lane”,\n}Projects\n{\n_id: ObjectId(‘234123840’),\ncompanyId: ‘2341908342’,\nname: “Test Project”,\nprice: 97450,\n}\n{\n_id: ObjectId(‘23413456’),\ncompanyId: ‘2341908342’,\nname: “Test Project 2”,\nprice: 100000,\n}My desired outcome after the Aggregation:Companies and Projects\n{\n_id: ObjectId(‘2341908342’),\ncompanyName: “Meta”,\naddress: “123 Facebook Lane”,\nprojects: [ [Object1], [Object2],\n}The projects field does not currently exist on the Companies collection, so I imagine we would have to add it. I also begun writing a $match function to filter by clients, but I am not sure if this is correct. I am trying to use $lookup for this but can not figure out the pipeline. Can anyone help me?Where I’m currently stuck:try {\nconst allClientsWithProjects = await companyCollection\n.aggregate([\n{\n$match: {\norgId: {\n$in: [new ObjectId(req.user.orgId)],\n},\nstatus: { $in: [“Client”] },\n},\n},\n{\n$addFields: {\nprojects: [{}],\n},\n},\n{\n$lookup: { from: “projects”, (I am stuck here) },\n},\n])\n.toArray()Thank you for any help anyone can provide.",
"username": "Cameron_Wood"
},
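The pipeline the poster is stuck on is essentially a filtered $lookup. As a reference point only, here is a minimal sketch of such a stage; the collection names come from the question, and the $toString conversion is an assumption based on the sample documents, where project.companyId is a string while company._id is an ObjectId:

```javascript
// Sketch only — not a confirmed answer from this thread.
db.companies.aggregate([
  { $match: { status: "Client" } },                  // keep only clients
  { $lookup: {
      from: "projects",
      let: { companyIdStr: { $toString: "$_id" } },  // company._id as a string
      pipeline: [
        { $match: { $expr: { $eq: ["$companyId", "$$companyIdStr"] } } }
      ],
      as: "projects"                                 // attaches the matching projects array
  } }
])
```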
{
"code": "",
"text": "Please read Formatting code and log snippets in posts and update your documents and code accordingly.When formatted correctly we can cut-n-paste into our systems to investigate.",
"username": "steevej"
}
]
| How to Merge Two Collections with one being an Array of Objects in the Other | 2022-11-03T16:03:56.609Z | How to Merge Two Collections with one being an Array of Objects in the Other | 3,112 |
null | []
| [
{
"code": "loginViewModel.loginStatus.observeAsState(loginViewModel.loginStatus).apply {\n if (this.value == true) {\n loginViewModel.userInfo.observeAsState(null).apply {\n if (this.value!=null) {\n val isUser = loginViewModel.userInfo.value?.isUser\n showProgress = false\n if (isUser == true) {\n println(\"USERRRRR\")\n NavigateToUsers()\n } else {\n println(\"RESTAURAMT\")\n NavigateToRestaurants()\n }\n }\n }\n\n }\n if(this.value == false)\n {\n Toast.makeText(context, \"Can't join. Wrong password!\", Toast.LENGTH_LONG).show()\n }\n }\n",
"text": "This is my current Login. I have to check if the user type is User or Restaurant. Its kind of slow because of “observeAsState”. Could someone show me a better and efficient solution please?",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hello @Ciprian_Gabor ,Welcome to MongoDB Community From your description, it appears you are designing an app related to food. I would recommend looking at some of the MongoDB Mobile Documentation. Some resources are linked here below:Could you briefly explain the use case or the workflow of your application, to give you the help and support you need for implementing the feature?I hope the provided information is helpful.Cheers, \nHenna",
"username": "henna.s"
},
{
"code": " loginViewModel.loginStatus.observeAsState(loginViewModel.loginStatus).apply {\n if (this.value == true) {\n loginViewModel.userInfo.observeAsState(null).apply {\n if (this.value!=null) {\n val isUser = loginViewModel.userInfo.value?.isUser\n showProgress = false\n if (isUser == true) {\n println(\"USERRRRR\")\n NavigateToUsers()\n } else {\n println(\"RESTAURAMT\")\n NavigateToRestaurants()\n }\n }\n }\n\n }\n if(this.value == false)\n {\n Toast.makeText(context, \"Can't join. Wrong password!\", Toast.LENGTH_LONG).show()\n }\n }\n",
"text": "My app has 2 user types: User and Restaurant. The User collection has a field called: “isUser: Boolean” which determines if its user or restaurant.\nThe login function should check if its user or restaurant. Currently this is my login, but its kinda slow:",
"username": "Ciprian_Gabor"
}
]
| How to make login better | 2022-10-31T21:08:05.383Z | How to make login better | 969 |
null | []
| [
{
"code": " realm.write {\n val restaurant = Restaurant().apply {\n this.userID=\"6360f373a6f3933e4c9cf6f3\"\n this.name = \"name\"\n this.adresa = \"location\"\n this.telefon = \"07500000\".toLong()\n }\n copyToRealm(restaurant)\n }\n",
"text": "Why this code adds a Restaurant document with empty id? I have the @PrimaryKey on _id field.",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Issue solvedddddddddddd",
"username": "Ciprian_Gabor"
},
{
"code": "",
"text": "Hello @Ciprian_Gabor,Glad to know your issue got resolved. Could you please discuss a brief on your solution, to help our mobile community Many Thanks in advance.Cheers, \nHenna",
"username": "henna.s"
},
{
"code": "@PrimaryKey\nvar _id: ObjectId = ObjectId.create()\n",
"text": "Sure!_id field should be:",
"username": "Ciprian_Gabor"
}
]
| copyToRealm adds object with empty id | 2022-11-01T21:51:29.868Z | copyToRealm adds object with empty id | 1,270 |
null | [
"data-modeling",
"swift",
"kotlin",
"realm-web",
"realm-studio"
]
| [
{
"code": "repo.getUserProfile().watch(block: {items in self. my user= items as! UserInfo)}print(self.myUser)application.UserInfo@168695",
"text": "For my kmm project, I get the data into Swift with that well known CommonFlow.But how do i get my realm object properties?the way that I get the data from the shared repo into the self.myUser object:\nrepo.getUserProfile().watch(block: {items in self. my user= items as! UserInfo)}And now how do I get my properties from UserInfo object? Like the name, id, address, etc.When I try to print(self.myUser), this is what i get: application.UserInfo@168695For the android I can use self.myUser.name, but for kotlin not How do I get my object properties?",
"username": "AfterFood_Contact"
},
{
"code": "",
"text": "Hello,Sorry, I couldn’t exactly understand what you are asking for. But if you want to access object properties you can do something like this.",
"username": "Mohit_Sharma"
}
]
| Get realm object properties from KMM project on Switf | 2022-11-02T20:23:07.053Z | Get realm object properties from KMM project on Switf | 2,024 |
null | []
| [
{
"code": "",
"text": "hello guys,\nwhen ever i try to edit the schema i got the following error\nerror validating schema relationships: could not find schema associated with ref",
"username": "ahmed_abdalmged"
},
{
"code": "",
"text": "It sounds like you have a “relationship” in your schema that is mapping to a schema that doesnt exist. https://www.mongodb.com/docs/atlas/app-services/schemas/relationships/#to-oneAre you intending to have a relationship? If so, can you send me a link to your application in the realm console? If not, perhaps you want to remove the relationship (click the expand relationships toggle)",
"username": "Tyler_Kaye"
}
]
| Schema problem in mongo db realm | 2022-11-06T11:15:29.286Z | Schema problem in mongo db realm | 1,068 |
[
"aggregation",
"queries",
"dot-net"
]
| [
{
"code": "",
"text": "How can we group by week number ($week) in c# library linq or Fluent Builder? Upon checking your documentation here: Expressions there are no any example for using $week in c#.Please help we really want to use the $week for our group by.\nweek1415×892 61.5 KB\n",
"username": "Fire"
},
{
"code": "DateTimeWeekDateTimeProjectionDefinition<Document> projection = \"{Week: {$week: '$CreatedAtUtc'}}\";\nvar aggregation = coll.Aggregate().Project(projection);\nConsole.WriteLine(aggregation);\naggregate([{ \"$project\" : { \"Week\" : { \"$week\" : \"$CreatedAtUtc\" } } }])\n$week$week$week[{ $group : { _id : { $week : \"$d\" }, x : { $sum : \"$x\" } } }]$dateTruncTruncatevar linqQuery = coll.AsQueryable()\n .GroupBy(x => x.CreatedAtUtc.Truncate(DateTimeUnit.Week))\n .Select(x => new { Bucket = x.Key, FirstDocumentInBucket = x.First() });\nConsole.WriteLine(linqQuery);\ntest.groupings.Aggregate([{ \"$group\" : { \"_id\" : { \"$dateTrunc\" : { \"date\" : \"$CreatedAtUtc\", \"unit\" : \"week\" } }, \"__agg0\" : { \"$first\" : \"$$ROOT\" } } }, { \"$project\" : { \"Bucket\" : \"$_id\", \"FirstDocumentInBucket\" : \"$__agg0\", \"_id\" : 0 } }])\n",
"text": "Hi, @Fire,Welcome to the MongoDB Community Forums. I understand that you are trying to group by the week of a DateTime. Unfortunately .NET does not include a Week property on DateTime. You can however specify it as JSON:Output:While this allows you to use the $week operator, it lacks strong typing support. So use this technique with caution.Note that you cannot use $week in a grouping, only in a projection.CORRECTION: You can use $week in a grouping as [{ $group : { _id : { $week : \"$d\" }, x : { $sum : \"$x\" } } }]. When I first tried it, I had a syntax error in the MQL, which caused the server to report that it wasn’t supported.You can however use $dateTrunc which was introduced in MongoDB 5.0. (Note that support for Truncate was added in LINQ3, which you must opt into. See LINQ3 for more information.)Output:Hopefully this provides you some options and ideas.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "var linqQuery = coll.AsQueryable()\n .GroupBy(x => x.CreatedAtUtc.Truncate(DateTimeUnit.Week))\n .Select(x => new { Bucket = x.Key, FirstDocumentInBucket = x.First() });\nConsole.WriteLine(linqQuery);\n",
"text": "Wow thank you four response.We are more interested in the LINQ3 solutions, we are using MongoDB 6.0 and latest library c# (2.17). How can we achieve this?var linqQuery = coll.AsQueryable()\n.GroupBy(x => x.CreatedAtUtc.Truncate(DateTimeUnit.Week))\n.Select(x => new { Bucket = x.Key, FirstDocumentInBucket = x.First() });\nConsole.WriteLine(linqQuery);When we tried this one, it is causing some error that this is not supported.Lastly, we would like to request to mongoDB .net driver Team to create a natively $week operator in group by c#, perharps using your own syntax library without depending on the c# Week property. I dont think c# will add the Week property in the DateTime class.",
"username": "Fire"
},
{
"code": "",
"text": "@Fire I hope you missed the settings part to support the Linq3 version of the C# MongoDB driver.clientSettings.LinqProvider = LinqProvider.V3;Refer to the below link for more info https://mongodb.github.io/mongo-csharp-driver/2.14/reference/driver/crud/linq3/",
"username": "Sudhesh_Gnanasekaran"
},
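For context, a minimal sketch of where that setting lives; the connection string, database, collection, and MyDocument class are placeholders, not values from this thread:

```csharp
// Sketch only: opting into the LINQ3 provider (MongoDB.Driver 2.14+).
using MongoDB.Driver;
using MongoDB.Driver.Linq;

var settings = MongoClientSettings.FromConnectionString("mongodb://localhost:27017");
settings.LinqProvider = LinqProvider.V3;   // required for Truncate(DateTimeUnit.Week) support
var client = new MongoClient(settings);
var coll = client.GetDatabase("test").GetCollection<MyDocument>("groupings"); // MyDocument is hypothetical
```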
{
"code": "",
"text": "Lastly, we would like to request to mongoDB .net driver Team to create a natively $week operator in group by c#, perharps using your own syntax library without depending on the c# Week property. I dont think c# will add the Week property in the DateTime class.Please file a feature request in our issue tracker and we will be happy to consider it.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Thanks! Saw it!We will post a feature request regarding the $week group by in .net driver. Here is our feature request:\nhttps://jira.mongodb.org/browse/CSHARP-4405We really hope that this could be implemented",
"username": "Fire"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Group by $week in c# Library | 2022-11-03T14:34:53.954Z | Group by $week in c# Library | 2,515 |
|
null | [
"queries",
"scala"
]
| [
{
"code": " val mongoClient: MongoClient = MongoClient()\n val database: MongoDatabase = mongoClient.getDatabase(\"db\")\n val collection: MongoCollection[Document] = database.getCollection(\"small\")\n var observable: FindObservable[Document] = collection.find();\n observable.subscribe(new Observer[Document] {\n override def onNext(result: Document): Unit = println(result.toJson())\n\n override def onError(e: Throwable): Unit = println(\"Failed: \" + e.getMessage)\n\n override def onComplete(): Unit = println(\"Completed\")\n })\n",
"text": "hello there, i am new to scala and trying to make examples in the driver page work. but i cant seem to do very basic things like printing results from a query .\nthis is my code and it doesnt print anything to the screen.\nis there a tutorial that can bring me up to the speed ?",
"username": "Ali_ihsan_Erdem1"
},
{
"code": "",
"text": "Hi @Ali_ihsan_Erdem1 welcome to the community!Does this page: MongoDB Scala Driver Quick Start help?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "no it doesnt i couldnt make that examples work",
"username": "Ali_ihsan_Erdem1"
}
]
| How should use the scala driver? | 2022-11-04T20:31:12.829Z | How should use the scala driver? | 2,008 |
null | [
"sharding",
"mongodb-shell",
"transactions"
]
| [
{
"code": "",
"text": "After upgrading 4.4.17 to 5.0.13, the customer database was missing, i’m pretty sure that the config files is the original one.Running transaction\nUpdating : mongodb-org-database-tools-extra-5.0.13-1.el7.x86_64 1/16\nInstalling : mongodb-mongosh-1.6.0-1.el8.x86_64 2/16\nUpdating : mongodb-org-server-5.0.13-1.el7.x86_64 3/16\nUpdating : mongodb-org-shell-5.0.13-1.el7.x86_64 4/16\nUpdating : mongodb-database-tools-100.5.0-1.x86_64 5/16\nUpdating : mongodb-org-tools-5.0.13-1.el7.x86_64 6/16\nUpdating : mongodb-org-mongos-5.0.13-1.el7.x86_64 7/16\nInstalling : mongodb-org-database-5.0.13-1.el7.x86_64 8/16\nUpdating : mongodb-org-5.0.13-1.el7.x86_64 9/16\nCleanup : mongodb-org-4.4.17-1.el7.x86_64 10/16\nCleanup : mongodb-org-tools-4.4.17-1.el7.x86_64 11/16\nCleanup : mongodb-org-database-tools-extra-4.4.17-1.el7.x86_64 12/16\nCleanup : mongodb-org-server-4.4.17-1.el7.x86_64 13/16\nCleanup : mongodb-org-shell-4.4.17-1.el7.x86_64 14/16\nCleanup : mongodb-org-mongos-4.4.17-1.el7.x86_64 15/16\nCleanup : mongodb-database-tools-100.4.1-1.x86_64 16/16\nVerifying : mongodb-org-5.0.13-1.el7.x86_64 1/16\nVerifying : mongodb-org-mongos-5.0.13-1.el7.x86_64 2/16\nVerifying : mongodb-database-tools-100.5.0-1.x86_64 3/16\nVerifying : mongodb-org-tools-5.0.13-1.el7.x86_64 4/16\nVerifying : mongodb-org-shell-5.0.13-1.el7.x86_64 5/16\nVerifying : mongodb-org-database-5.0.13-1.el7.x86_64 6/16\nVerifying : mongodb-org-database-tools-extra-5.0.13-1.el7.x86_64 7/16\nVerifying : mongodb-org-server-5.0.13-1.el7.x86_64 8/16\nVerifying : mongodb-mongosh-1.6.0-1.el8.x86_64 9/16\nVerifying : mongodb-org-mongos-4.4.17-1.el7.x86_64 10/16\nVerifying : mongodb-org-tools-4.4.17-1.el7.x86_64 11/16\nVerifying : mongodb-org-server-4.4.17-1.el7.x86_64 12/16\nVerifying : mongodb-org-shell-4.4.17-1.el7.x86_64 13/16\nVerifying : mongodb-org-database-tools-extra-4.4.17-1.el7.x86_64 14/16\nVerifying : mongodb-org-4.4.17-1.el7.x86_64 15/16\nVerifying : mongodb-database-tools-100.4.1-1.x86_64 16/16Installed:\nmongodb-mongosh.x86_64 0:1.6.0-1.el8 mongodb-org-database.x86_64 0:5.0.13-1.el7Updated:\nmongodb-database-tools.x86_64 0:100.5.0-1 mongodb-org.x86_64 0:5.0.13-1.el7 mongodb-org-database-tools-extra.x86_64 0:5.0.13-1.el7 mongodb-org-mongos.x86_64 0:5.0.13-1.el7\nmongodb-org-server.x86_64 0:5.0.13-1.el7 mongodb-org-shell.x86_64 0:5.0.13-1.el7 mongodb-org-tools.x86_64 0:5.0.13-1.el7Complete!\n[ezhang@prod-mgdb-pos-shard1-data-101e MongoDB5.0.13]$ systemctl status mongod\n● mongod.service - MongoDB Database Server\nLoaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\nActive: active (running) since Thu 2022-11-03 01:26:12 UTC; 28s ago\nDocs: https://docs.mongodb.org/manual\nProcess: 27605 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=0/SUCCESS)\nProcess: 27602 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\nProcess: 27599 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\nProcess: 27598 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\nMain PID: 27611 (mongod)\nCGroup: /system.slice/mongod.service\n└─27611 /usr/bin/mongod -f /etc/mongod.conf\n[aaaaaaa@xxxxxxxxx MongoDB5.0.13]$ mongosh localhost\nCurrent Mongosh Log ID: 636318dcc319989d556c419a\nConnecting to: mongodb://127.0.0.1:27017/localhost?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.6.0\nUsing MongoDB: 5.0.13\nUsing Mongosh: 1.6.0For mongosh info see: https://docs.mongodb.com/mongodb-shell/To 
help improve our products, anonymous usage data is collected and sent to MongoDB periodically (Privacy Policy | MongoDB).\nYou can opt-out by running the disableTelemetry() command.Warning: Found ~/.mongorc.js, but not ~/.mongoshrc.js. ~/.mongorc.js will not be loaded.\nYou may want to copy or rename ~/.mongorc.js to ~/.mongoshrc.js.\nrs01 [direct: primary] localhost> db.getSiblingDB(“admin”).auth(“xxxxxxx”, “xxxxxxxx” )\n{ ok: 1 }\nrs01 [direct: primary] localhost> show dbs;\nadmin 156.00 KiB\nconfig 512.00 KiB\nlocal 147.47 MiBThere should be a customer database, but now we can not see it.even through I rollbacked the server to 4.4.17 with the data backup folder, we also can not see the customer database.",
"username": "Zhang_Eddie"
},
{
"code": "sh.status()mongoddbpath",
"text": "Hi @Zhang_Eddie welcome to the community!How did you perform the upgrade? Did you follow the recommendations in Upgrade a Sharded Cluster to 5.0? I ask because even though you mention that this is a sharded cluster, it looks like this is upgraded using some type of package manager?Could you elaborate on the OS, your deployment details (how many shards, the output of sh.status(), etc.), and where you are in the upgrade process?Also please provide the logs of the mongod in question, especially when it starts up, until it’s ready to accept connections.There should be a customer database, but now we can not see it.Typically this was due to the new server not using the old dbpath of the previous server. Could you double check if they’re using the same dbpath? If yes, could you provide the content of that folder?Best regards\nKevin",
"username": "kevinadi"
},
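As a small aid for the dbpath check suggested above, this is one way to ask a running node which data directory it was actually started with — a sketch only, assuming dbPath was set via the config file or command line (as in the config shown earlier in the thread):

```javascript
// Run in mongosh against the node in question.
db.adminCommand({ getCmdLineOpts: 1 }).parsed.storage.dbPath
```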
{
"code": "",
"text": "@kevinadi , many thanks for your help, we found the root cause, this is our new cluster which is not in use currently, and there is no shard collections in this customer DB. , but your suggestion is also precious for us, we checked this cluster according to your suggestion.",
"username": "Zhang_Eddie"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Shard Upgrade from 4.4.17 to 5.0.13, and then user database was missing | 2022-11-03T03:54:13.024Z | Shard Upgrade from 4.4.17 to 5.0.13, and then user database was missing | 1,856 |
null | [
"aggregation"
]
| [
{
"code": "{\n \"_id\" : ObjectId(\"62c3ff3114d4f445ef75d1b5\"),\n \"ver\" : \"1.0\",\n \"mac\" : \"123456789000243\",\n \"did\" : \"123456789000243\",\n \"dvt\" : \"water\",\n \"dvm\" : \"jvt1440\",\n \"tid\" : \"1ce9effd-b291-4f65-bbe6-ed06f1dca80c\",\n \"type\" : \"water_metering_data\",\n \"source\" : \"EV\",\n \"eventCode\" : \"332\",\n \"location\" : {\n \"_id\" : ObjectId(\"62c3ff3114d4f445ef75d1b6\"),\n \"accuracy\" : NumberInt(0),\n \"timestamp\" : ISODate(\"2022-07-05T08:00:13.000+0000\")\n },\n \"timezone\" : \"Asia/Calcutta\",\n \"meter\" : {\n \"ver\" : NumberInt(1),\n \"sgp\" : NumberInt(28),\n \"sgq\" : NumberInt(30),\n \"sgn\" : NumberInt(-10),\n \"tms\" : ISODate(\"2022-07-05T08:00:13.000+0000\"),\n \"sno\" : NumberInt(1472),\n \"waterConsumption\" : NumberInt(200),\n \"waterBatteryVoltage\" : 3.4,\n \"waterBatteryStatus\" : NumberInt(70),\n \"tid\" : NumberInt(263843937)\n },\n \"time\" : 1657008013000.0,\n \"meterId\" : \"123456789000243\",\n \"watchId\" : \"621f1a71d46261f4385e94e9\",\n \"groupId\" : null,\n \"parentId\" : \"61558575921c023a93f81362\",\n \"device\" : ObjectId(\"621f1a71d46261f4385e94e6\"),\n \"rootDat\" : \"61558575921c023a93f81362\",\n \"dat\" : \"61558575921c023a93f81362\",\n \"assetCode\" : \"water\",\n \"actionTaken\" : false,\n \"createdAt\" : ISODate(\"2022-07-05T09:06:57.130+0000\"),\n \"updatedAt\" : ISODate(\"2022-07-05T09:07:09.696+0000\"),\n \"__v\" : NumberInt(0),\n \"isWaterStat\" : NumberInt(1),\n \"migrated\" : true,\n \"isMeterStat\" : 0.0,\n \"tsFlag\" : true\n}\n $match: {\n dat: { $regex: \"^\" + eventStat.dat },\n time: {\n $gte: eventStat.time.from,\n $lte: eventStat.time.to,\n },\n },\n\n$sort: { time: 1 } \n",
"text": "This is how a document looks like, now I need to calculate some value for which I am using aggregation pipeline and I am using the match and sort operators first, what I am using is.So I am using this two opeartors in the pipeline first,Now Mongodb Document says that aggregation will always implement match first before sort but in some cases it performs sort first, I am not sure but I think that happens when there is a index on field key used in sort not present in match and Mongodb decides it better to sort first. Here I am using time in both match and sort so I want to know that is there still any case possible where sort might happen before match? If yes, I read that a dummy project operator can force it to match first but what exactly is a dummy project opeartor?",
"username": "Harsh_Bhudolia"
},
{
"code": "{time : 1, dat : 1} dat: /^eventStat.dat/ \n",
"text": "Hi @Harsh_Bhudolia ,If you have an index like {time : 1, dat : 1} then it is better for the query to use the index to sort first and then run a range scan for all the events starting with the string.This is according to the esr rule:Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Now in your specific syntax I am not sure the dynamic regex will be able to pickup the range correctly so I would try and write the match:Have you looked at the explain plan of this queryThanks\nPavel",
"username": "Pavel_Duchovny"
},
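To make the suggestion above concrete, here is a sketch of creating that index and checking the chosen plan; the collection name "events" is a placeholder and eventStat is assumed to be defined as in the question:

```javascript
// Sketch only.
db.events.createIndex({ time: 1, dat: 1 })

// Ask the server which plan the pipeline actually uses:
db.events.explain("executionStats").aggregate([
  { $match: {
      dat: { $regex: "^" + eventStat.dat },
      time: { $gte: eventStat.time.from, $lte: eventStat.time.to }
  } },
  { $sort: { time: 1 } }
])
```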
{
"code": "",
"text": "How do I force mongoDb to apply match first?",
"username": "Harsh_Bhudolia"
},
{
"code": "",
"text": "There is no way to force it. If the index is used it might be better to doe a sort + match together on the index.Can you provide an explain plan?",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Sort and Match Optimization | 2022-11-07T06:26:43.931Z | Sort and Match Optimization | 1,317 |
null | [
"node-js"
]
| [
{
"code": "{\n \"_id\": {\n \"$oid\": \"6363a10923887993e298351b\"\n },\n \"length\": 75454,\n \"chunkSize\": 261120,\n \"uploadDate\": {\n \"$date\": {\n \"$numberLong\": \"1667473674312\"\n }\n },\n \"filename\": \"d9f260cb12e32f05427bd26603e079a2.docx\",\n \"contentType\": \"application/vnd.openxmlformats-officedocument.wordprocessingml.document\"\n}\nsecond example\n{\n \"_id\": {\n \"$oid\": \"6363a19a23887993e298351d\"\n },\n \"length\": 414802,\n \"chunkSize\": 261120,\n \"uploadDate\": {\n \"$date\": {\n \"$numberLong\": \"1667473819023\"\n }\n },\n \"filename\": \"8d5ffa4b3ed864f8453444430adb4c9c.pdf\",\n \"contentType\": \"application/pdf\"\n",
"text": "Hi, I have some MS document or PDF on mongo db atlas.\nI have an application in nodejs that uploads these documents on my collection using gridfs.\nexample:I need to search a single or multiple words (string) in my MS documents or pdf.\nIn order to do that may I use Atlas Search? Is it possible to do that or I have to use elasticsearch on mongodb?\nthank you for your response.\nEmiliano",
"username": "emiliano_colatosti"
},
{
"code": "",
"text": "Hi @emiliano_colatosti welcome to the community!GridFS is a specification for storing and retrieving files that exceed the BSON-document size limit of 16 MB.Notably, this is a convention rather than a specific mechanism in the server itself.Also, specific to your question, MongoDB has no understanding of Office documents nor PDF file types, so it cannot index the contents of those files.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin,\nThank’s for your information.\nI have another doubt: is it possible to store the ms file as text with “String“ data type and not as “binary data”?\nIf I uploaded them as “string data” would I be able to do my full text search?THK for your kind reply\nEmiliano",
"username": "emiliano_colatosti"
},
{
"code": "{ _id: ...,\n title: \"some title\",\n text: \" ... text in the document ... \",\n author: \"document author\",\n metadata: <some other metadata> }\n",
"text": "is it possible to store the ms file as text with “String“ data type and not as “binary data”?It is possible, but you would need an additional pre-processing step before inserting the data into the database.For PDF, perhaps this topic on Super User How to extract text from pdf in script on Linux? might be useful as a starting point. For Word documents, perhaps Extract text from a Word document using VBScript will give some ideas. Note that the PDF solution is for Linux, but the Word solution is for Windows since I’m not sure what OS you are using, but I’m certain there are solutions for both OSes.Once the text is extracted, then you can put it in a MongoDB document, perhaps:Note that the schema design above is just a really oversimplified example. The actual schema that serves your needs best could look radically different. See 6 Rules of Thumb for MongoDB Schema Design and MongoDB Schema Design Best Practices for more information about this.One thing to note is that MongoDB document has a hard limit of 16MB. If you have extremely large documents (even in text form) then you’ll need to have some strategy on how to handle those specific cases.Hopefully this is useful as a starting point!Best regards\nKevin",
"username": "kevinadi"
},
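To illustrate the follow-on step, here is one possible way to search the extracted text once it is stored in a plain string field — a sketch that assumes a "documents" collection with a "text" field as in the example above; an Atlas Search index would be an alternative configured in the Atlas UI:

```javascript
// Classic text index on the extracted content (sketch only).
db.documents.createIndex({ text: "text" })

// Find documents containing any of these words:
db.documents.find({ $text: { $search: "invoice payment" } })
```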
{
"code": "",
"text": "thank you so much for your reply!",
"username": "emiliano_colatosti"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Search a word in ms document (doc, docx) or pdf file | 2022-11-03T15:44:41.171Z | Search a word in ms document (doc, docx) or pdf file | 1,735 |
null | [
"queries"
]
| [
{
"code": "db.adminCommand(\"listDatabases\").databases.forEach(function (d) {\n mdb = db.getSiblingDB(d.name);\n printjson(mdb.stats(1024*1024*1024).storageSize.sum);\n})\n",
"text": "Can someone help to get sum of sizes of all Databases sizes. Just need to tune below query to get SUM of mdb.stats(102410241024).storageSize . Kindly Help in resolving this.",
"username": "Krishna_Sai1"
},
{
"code": "db.adminCommand(\"listDatabases\").totalSizeMb;\nsizeOnDisk",
"text": "Hello,You could try to use just the below command.This should return the sum of all the sizeOnDisk fields, expressed in megabytes. as per this documentation link.Regards,\nMohamed Elshafey",
"username": "Mohamed_Elshafey"
},
{
"code": "",
"text": "Any possibility to get by using db.stats(). I need sum using that command. If you see , I am printing values. instead i need to sum all values.",
"username": "Krishna_Sai1"
},
{
"code": "let totalSize = 0\ndb.adminCommand(\"listDatabases\").databases.forEach(function (d) {\n mdb = db.getSiblingDB(d.name);\n totalSize += mdb.stats(1024*1024*1024).storageSize\n })\nprint(`total size: ${totalSize}`)\ndb.adminCommand(\"listDatabases\").databases.reduce(\n (b,a) => b + db.getSiblingDB(a.name).stats(1024*1024*1024).storageSize, 0\n)\n",
"text": "Hi @Krishna_Sai1How about replacing the print statement there with an addition operation?or, slightly less readable:Would this work? Note that this is very untested, so make sure this code is correct before relying on it Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thanks for the script this is working fine. can we exclude the 3 databases(admin,local,config) using the same query .db.adminCommand(“listDatabases”).databases.reduce(\n(b,a) => b + db.getSiblingDB(a.name).stats(102410241024).storageSize, 0\n)",
"username": "Krishna_Sai1"
},
{
"code": "let totalSize = 0\ndb.adminCommand(\"listDatabases\").databases.forEach(function (d) {\n mdb = db.getSiblingDB(d.name);\n if (!['admin', 'config', 'local'].includes(d.name)) {\n totalSize += mdb.stats(1024*1024*1024).storageSize\n }\n })\nprint(`total size: ${totalSize}`)\ndb.adminCommand(\"listDatabases\").databases.\nfilter(\n n => !['admin', 'config', 'local'].includes(n.name)).\nreduce(\n (b,a) => b + db.getSiblingDB(a.name).stats(1024*1024*1024).storageSize, 0)\n",
"text": "can we exclude the 3 databases(admin,local,config) using the same query .You can, although I think it may be better to use your original loop instead:but you also might find this readable:However I believe this is mainly a Javascript question by now, as this method is not limited to MongoDB, but general Javascript operation on any array of objects.I suggest if you’re having issues with Javascript coding, you might want to ask the relevant questions in a programming-oriented site such as StackOverflow.Best regards\nKevin",
"username": "kevinadi"
},
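As a possible server-side alternative to filtering in Javascript, listDatabases also accepts a filter document; the sketch below is untested here, and note that sizeOnDisk (reported in bytes) can differ slightly from db.stats().storageSize.

```js
// Sketch: exclude the system databases up front, then sum the reported sizes.
const res = db.adminCommand({
  listDatabases: 1,
  filter: { name: { $nin: ["admin", "config", "local"] } }
});
const totalBytes = res.databases.reduce((sum, d) => sum + d.sizeOnDisk, 0);
print(`total size: ${totalBytes / (1024 * 1024 * 1024)} GB`);
```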
{
"code": "",
"text": "Thanks a lot Script is working as expected. ",
"username": "Krishna_Sai1"
}
]
| Sum of Sizes of All MongDB Databases | 2022-11-02T00:35:40.749Z | Sum of Sizes of All MongDB Databases | 2,736 |
null | [
"queries"
]
| [
{
"code": "",
"text": "So in our app when a user deletes a project we go to each of the collections that have that specific project’s documents and delete them\nLike deleting all their data from each collectionSo I was wondering if its possible to delete all records from all collections based on a filter in one querydb.allCollections.deleteMany({id:123})Some thing like this where all collections is gonna go to each collection and remove documents",
"username": "AbdulRahman_Riyaz"
},
{
"code": "for x in [list of collection names]:\n db[x].deleteMany( <some condition> )\n",
"text": "Hi @AbdulRahman_RiyazSo I was wondering if its possible to delete all records from all collections based on a filter in one queryNo I don’t believe such a method exists today. You would have to go into each collection, and execute the delete command in each of them. Something like:However note that you’ll also need to cover the case of interruptions with the command (replica set elections, network issues, etc.) where there’s a possibility that not all collections were processed. This would lead to some orphaned documents that may need extra cleanup steps.I imagine if such a command exist, it will be a pretty dangerous command. Also depending on the number of collections you have, it could have a non-trivial running time that’s dfifficult to predict.Hope this helps!Best regards\nKevin",
"username": "kevinadi"
},
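A concrete mongosh version of the loop sketched above might look like this; the filter field (id) is the one from the question, and interruptions still need to be handled separately.

```js
// Iterate every collection in the current database and delete the project's documents.
const projectFilter = { id: 123 };
db.getCollectionNames().forEach(function (name) {
  const result = db.getCollection(name).deleteMany(projectFilter);
  print(`${name}: deleted ${result.deletedCount} document(s)`);
});
```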
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Remove documents from all collections in the db that have a specific Id | 2022-11-02T08:50:20.627Z | Remove documents from all collections in the db that have a specific Id | 2,254 |
null | []
| [
{
"code": "",
"text": "Hello everyone… is it possible to see the current progress (in percent) from MongoDB sharding processing for a specific collection?",
"username": "Luis_Alexandre_Rodrigues"
},
{
"code": "sh.statusdb.printShardingStatus()",
"text": "Hi @Luis_Alexandre_Rodrigues and welcome to the MongoDB community forum!!Unfortunately, there is no direct method to view the current progress in form of percentage as of today.However, sh.status and db.printShardingStatus() are two good ways to view details on the sharded cluster and chunk distribution inside the sharded cluster. They may act as a proxy to the information you seek.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
},
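One rough proxy (not a percentage) is to watch how the chunks of the collection are spread across shards via the config database; the sketch below uses a placeholder namespace (mydb.mycoll), and newer versions reference collections by uuid rather than ns.

```js
// Count chunks per shard for one sharded collection. Run against mongos.
const coll = db.getSiblingDB("config").collections.findOne({ _id: "mydb.mycoll" });
db.getSiblingDB("config").chunks.aggregate([
  { $match: { $or: [ { ns: "mydb.mycoll" }, { uuid: coll ? coll.uuid : null } ] } },
  { $group: { _id: "$shard", chunks: { $sum: 1 } } },
  { $sort: { chunks: -1 } }
]);
```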
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| How to see the MongoDB Sharding progress? | 2022-11-01T17:47:12.774Z | How to see the MongoDB Sharding progress? | 1,823 |
[
"aggregation",
"queries",
"python",
"time-series"
]
| [
{
"code": "\n def create_timeseries_collection(self, name: str, granularity: str) -> Collection:\n try:\n self.db.command(\n \"create\",\n name,\n timeseries={\n \"timeField\": \"date_utc\",\n \"metaField\": \"symbol\",\n \"granularity\": granularity,\n },\n )\n self.db.get_collection(name).create_index([(\"symbol\", ASCENDING), (\"date_utc\", DESCENDING)])\n except OperationFailure:\n logger.exception(f\"Collection {name} already exists. Skipping..\")\n coll = self.db.get_collection(name)\n \n return coll\n def _get_latest_historical_values(self, symbol, coll_name):\n \n cursor = self.db[coll_name].aggregate([\n {\n '$sort': {\n 'symbol': 1, \n 'date_utc': -1\n }\n }, {\n '$match': {\n 'symbol': symbol\n }\n }, {\n '$limit': 1\n }\n ])\n for el in cursor:\n latest_value = el\n cursor.close()\n \n return latest_value\n latest_value = self.db[coll_name].find({\"symbol\": symbol}).hint('symbol_1_date_utc_-1').sort({\"symbol\":1, \"date_utc\":-1}).limit(1)[0]\n",
"text": "Hi all,Thank you for reading my post! I really need your help.\nI have a timeseries collection to query.\nMy Atlas Cluster installs a MongoDB 6.0.2 Enterprise.My collection is created with the following command which leverages pymongo.I would need to get the last value of my timeseries.\nThese are stock market timeseries.These are the ways I have tried.Or:But I keep getting the alert (by email) every 5 minutes–because I run the query every 5 minutes.I’ve tried following Mongo’s recommended best practices…but I don’t understand what I’m doing wrong and especially if there is a better way to grab the last value of a collection by filtering on a metaField.Time series, IOTPlease Help Me!",
"username": "Vatemecum"
},
{
"code": "{$match: {date_utc: {$gte: <some recent date time>}}}db.collection.explain('executionStats').aggregate(...)",
"text": "Hi @VatemecumI believe a very similar question was answered by @Aasawari on TimeSeries last x documents please have a look at that thread and see if it helps your case.Basically due to the way the time series collection is working currently, I think adding a timestamp window should make this better. For example, if you add {$match: {date_utc: {$gte: <some recent date time>}}} as the first stage of the pipeline, it should perform better.If this doesn’t work, could you post the output of db.collection.explain('executionStats').aggregate(...) output, both before and after adding the timestamp window matching? Also some example documents will be helpful.Best regards\nKevin",
"username": "kevinadi"
}
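For illustration, the suggested timestamp window applied to the pipeline from the question could look like the sketch below; the collection name, the symbol and the 24-hour window are placeholders.

```js
const since = new Date(Date.now() - 24 * 60 * 60 * 1000);   // arbitrary 24h window
db.getCollection("prices").aggregate([
  { $match: { date_utc: { $gte: since }, symbol: "AAPL" } }, // prune buckets by time first
  { $sort: { date_utc: -1 } },
  { $limit: 1 }
]);
```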
]
| TimeSeries Collection - Query Targeting: Scanned Objects / Returned has gone above 1000 | 2022-11-07T01:19:46.669Z | TimeSeries Collection - Query Targeting: Scanned Objects / Returned has gone above 1000 | 1,792 |
|
null | [
"node-js",
"connecting",
"atlas-cluster"
]
| [
{
"code": "",
"text": "any help\nError: querySrv ETIMEOUT _mongodb._tcp.classcompanion.tjg97lq.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/callback_resolver:47:19) {\nerrno: undefined,\nsyscall: ‘querySrv’,\nhostname: ‘_mongodb._tcp.classcompanion.tjg97lq.mongodb.net’\n}",
"username": "Abdelmoudjib_CHIHOUB"
},
{
"code": "",
"text": "Hi @Abdelmoudjib_CHIHOUB - Welcome to the community.Regarding the error you’ve provided, please view the following Can’t connect to MongoDB Atlas - querySrv ENOTFOUND post. There are some suggestions, troubleshooting tools / steps you can try and explanation of possible causes of the error.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Error while trying to connect to atlas database | 2022-11-06T22:46:15.317Z | Error while trying to connect to atlas database | 1,505 |
null | []
| [
{
"code": "",
"text": "We want to have a “hot hot” setup where for example we’ll haveprod app/env running in us-east-1backup prod app/env running in us-east-2in case of an outage in us-east-1, we just need to re-route our DNS to the already running backup app in us-east-2In this case, I was thinking MongoDB atlas:2 nodes in us-east-12 nodes in us-east-21 node in us-west-1But, I wasn’t sure howe we connect to our private VPCRight now we just have 3 notes in us-east-1 and VPC peering set up, so a very simple setup. Not sure how this translates to multi region (especially since our app doesn’t run at all in us-west-1).Do we do 2 VPC peerings to the us-east-1 vpc and us-east-2 vpc? Or two private endpoints? Or somethign else entirely?",
"username": "Anthony_C"
},
{
"code": "us-east-1us-east-1",
"text": "Hi @Anthony_C,As per the Set Up a Network Peering Connection documentation regarding multi-region Atlas clusters:Atlas deployments in multiple regions must have a peering connection for each Atlas region.\nFor example: If you have a VPC in Sydney and Atlas deployments in Sydney and Singapore, create two peering connections.Right now we just have 3 notes in us-east-1 and VPC peering set up, so a very simple setup.Is this a single VPC peering connection from your prod app/env in us-east-1 to the 3 node atlas cluster in us-east-1? Or do you also mean to include the backup prod app/env as well? i.e. 2 peering connections to the Atlas cluster, 1 for each environment.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "us-east-1us-east-1",
"text": "Is this a single VPC peering connection from your prod app/env in us-east-1 to the 3 node atlas cluster in us-east-1 ? Or do you also mean to include the backup prod app/env as well?Correct. Right now we just have a nodejs app in a private us-east-1 VPC that connects to an atlas 3-node cluster that is also in us-east-1 via peering.We want to move to a 2:2:1 atlas setup with us-east-1, us-east-2, and us-west-2 respectivelyIn the above case, we’d have a hot-hot setup with:The “backup/disaster recovery” app in us-east-2 would always be running but wouldn’t be getting any traffic. Then, if us-east-1 were to ever go down, we can divert traffic from our us-east-1 app to our us-east-2 app via DNS.We’re hoping that having the 2:2:1 means both apps in each region will already be connected to the same mongodb atlas, so nothing needs to be done for the database during a us-east-1 outage – it can handle connections from our app in us-east-2 or us-east-1 without additional work.In this case, we need to have a VPC peering with us-west-2 even though we have no application running in a VPC there? I would think it’d be just two peerings between nodejs app in us-east-1 and us-east-2.This was the simplest (in terms of devops labor) setup we could come up with that also would let us keep RPO/RTO under an hour for a regional outage",
"username": "Anthony_C"
},
{
"code": "us-east-1us-east-2us-east-1us-east-2us-west-1us-west-1",
"text": "In this case, we need to have a VPC peering with us-west-2 even though we have no application running in a VPC there? I would think it’d be just two peerings between nodejs app in us-east-1 and us-east-2.I’m assuming your app connects to MongoDB using an official MongoDB driver but please correct me if i’m wrong here.In saying so, one of the main requirements for an official MongoDB driver is for it to be able to connect directly to each replica set member. The applications on your end in the us-east-1 and us-east-2 regions must be able to connect to all members of the replica set, in this particular case, the members that exist in Atlas VPC’s in us-east-1 , us-east-2 and us-west-1 .Additionally, the Atlas documentation advises that a peering connection must be made for each region that your cluster is deployed on. Based off this, you will need a peering connections with the us-west-1 VPC where the 1 node exists as well.Hope this helps.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "us-east-1us-east-2us-west-1",
"text": "Hey Jason, I’ve set up 3 peering connections to us-east-1 , us-east-2 and us-west-1 but no luckI’ve set up peering before for a single region cluster with the nodejs app in the same region, and it seems like I have the 3 connections set up right.Any docs on how to debug peering connections?",
"username": "Anthony_C"
},
{
"code": "us-east-1us-east-2us-west-1us-east-1us-east-2",
"text": "Hi Anthony,I’ve set up 3 peering connections to us-east-1 , us-east-2 and us-west-1 but no luckI presume you’ve set up the 3 above peering connections to/from your us-east-1 application first to test before replicating a similar set up for the us-east-2 backup prod app (or vice-versa) but correct me if i’m wrong here.Any docs on how to debug peering connections?Unfortunately there isn’t any specific AWS peering connection troubleshooting documentation to my knowledge on how to debug peering connections. However, in saying so, can you advise on the following:The following pages may help troubleshoot the issue:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
]
| Multi-region AWS setup private endpoints vs peering | 2022-10-27T02:58:45.205Z | Multi-region AWS setup private endpoints vs peering | 3,246 |
null | [
"replication"
]
| [
{
"code": "",
"text": "Hi,We have replica set cluster with 3 members(P-S-S) and priority set to 1 for all the 3 members. Could someone pls help me to understand 1) Is it ok to set priority 1 for all the 3 members 2) What will happen if primary goes down 3) What will happen when down node back.Basically I have to understand is it ok to set priority 1 for primary and secondary members or we have to set lesser value for secondary compare to primary.Thanks,\nVikas",
"username": "Vikas_Reddy"
},
{
"code": "",
"text": "All nodes having priority 1 is considered standard. Each node should be of equal capability and would be able to handle the production load. If you need the primary to shift back to a particular node/datacentre adjusting priority is the mechanism you would use.If the primary goes down or is partitioned from the other nodes an election for a new primary will occur and the winner will become the new primary.When the failed node comes back it will reconnect to the replica set, roll back any writes that did not replicate to the majority, catch up to the primary and become a secondary.",
"username": "chris"
},
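For reference, adjusting a member's priority is done with rs.reconfig(); the member index and priority value below are just examples.

```js
// Make member 0 the preferred primary by giving it a higher priority.
cfg = rs.conf();
cfg.members[0].priority = 2;
rs.reconfig(cfg);
```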
{
"code": "",
"text": "Thanks Chris for the detailed info…I have one more concern w.r.to replica set, believe it is altogether different. Could you pls help to understand\nIf I set “writeMajorityCount” : 2 for 3 member replica set(P-S-S), then write acknowledgement will get only when write happens on primary and in one of the secondary. In this case how secondary will be choose, Is it based on priority set or any other formula?Thanks,\nVikas",
"username": "Vikas_Reddy"
},
{
"code": "",
"text": "Hi @Vikas_ReddyWhichever secondary acknowledges the write first fulfills the write concern, various conditions could affect which one(s) that is .",
"username": "chris"
}
]
| Setting priority for replica set member | 2022-11-03T12:21:17.907Z | Setting priority for replica set member | 1,525 |
null | [
"atlas-cluster"
]
| [
{
"code": "host cluster0.xxxx.mongodb.net // No answer\nnslookup cluster0.xxxxx.mongodb.net // No answer\nping cluster0.xxxxx.mongodb.net // ping: cannot resolve cluster0.xxxxx.mongodb.net: Unknown host\n",
"text": "Is is possible to get the IP of a shared cluster in MongoDB Atlas?I need the IP so that I can whitelist it for bypassing my VPN, that at moment is blocking it.The domain of my cluster is “cluster0.xxxxx.mongodb.net”.\nI’ve tried the following but with no successThank you",
"username": "x81da"
},
{
"code": "",
"text": "Hi @x81da - Welcome to the community.I need the IP so that I can whitelist it for bypassing my VPN, that at moment is blocking it.Unfortunately it’s difficult to provide the actual IP of the nodes for a cluster in MongoDB Atlas since Atlas is a hosted service, and consequently the actual physical deployment details are constantly changing to provide you with the best service experience. Is there a particular reason the DNS is not resolving the IP’s? I ask this as the best way to connect is to use a hostname instead of IP.This appears to be an outgoing restriction implemented at the VPN level so I would suggest perhaps contacting your networking team to see if they’re able to assist you with this connection.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "I would like to add.In the commands you used, host, nslookup and ping, you assumed that a cluster is an host that has DNS records of type A.However, a cluster is not an host, it is a group of hosts, it has many IP addresses (that may change dynamically) as specified by its seed list. Read about SRV DNS at https://www.cloudflare.com/learning/dns/dns-records/dns-srv-record/ to see why your commands did not work and why you should use DNS rather than IP in your VPN setup.",
"username": "steevej"
},
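As an illustration of the seed list, once you can connect (for example from a network that is not behind the VPN) you can list the hosts behind the SRV name from mongosh; the hostnames shown are only an example of the typical format.

```js
// Show the replica set members the SRV name currently resolves to.
db.hello().hosts
// e.g. [ "cluster0-shard-00-00.xxxxx.mongodb.net:27017",
//        "cluster0-shard-00-01.xxxxx.mongodb.net:27017",
//        "cluster0-shard-00-02.xxxxx.mongodb.net:27017" ]
```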
{
"code": "",
"text": "Thanks for the clarification. Apparently the VPN I am using (ProtonVPN) is blocking all the traffic to well known port of databases, like, in this case, 27017.Do you know if it is possible to change the port for the cluster. I am guessing already the answer is no.",
"username": "x81da"
},
{
"code": "",
"text": "Hi @x81da,Do you know if it is possible to change the port for the cluster. I am guessing already the answer is no.It is not possible to change the port for the cluster. You may wish to also check out the following documentation regarding port ranges for different types of connections if you choose to configure these in the future:Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Get Mongo Atlas shared cluster IP address | 2022-10-29T05:38:37.119Z | Get Mongo Atlas shared cluster IP address | 4,022 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "Hi Team,\nI am trying to read data from databricks but I would like to filter the data with 2 conditions .Would like to add these 2 conditions in aggregation pipeline and need inputs.",
"username": "Sai_Saran_Gangisetty"
},
{
"code": "",
"text": "Hi @Sai_Saran_Gangisetty welcome to the community!I am trying to read data from databricks but I would like to filter the data with 2 conditionsCould you elaborate on what you need exactly? Databricks is a product that is not created by MongoDB, so I wonder if you’re in the right forum Would like to add these 2 conditions in aggregation pipeline and need inputs.If you need help with MongoDB queries specifically, could you post an example document and the desired output? Very helpful if you can also post what you have tried already.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thanks Kevin for the reply and my intention is like need to filter 2 fields in the same collection, 1 is string type and am able to successfully do it, but another filter is like current date to last 1 year.Filter:1\npipeline=[{’$match’: { ‘chem.code’:‘bbnehuyd’ }}]",
"username": "Sai_Saran_Gangisetty"
},
{
"code": "{$match: {'chem.code': <some condition>, <some date field>: <some date condition>}}",
"text": "pipeline=[{’$match’: { ‘chem.code’:‘bbnehuyd’ }}]This is a typical MongoDB aggregation pipeline for matching documents with a certain condition.another filter is like current date to last 1 year.With regard to your question, unfortunately I still don’t have all the information I need to be able to help you. Perhaps you’re looking for something like this?{$match: {'chem.code': <some condition>, <some date field>: <some date condition>}}Could you post the relevant example documents, and what’s the desired output?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Exactly and my date field in timestamp. Not sure how to filter with 365 days or 1 year like that.",
"username": "Sai_Saran_Gangisetty"
},
{
"code": "",
"text": "try $dayOfYear https://www.mongodb.com/docs/manual/reference/operator/aggregation/dayOfYear/#-dayofyear--aggregation-",
"username": "psram"
},
{
"code": "",
"text": "Thanks Ram for the help, but am not able to apply 2 filters at a time in aggregation pipelines. Like am getting nothing data from Mongo but am getting data with one filter.",
"username": "Sai_Saran_Gangisetty"
},
{
"code": "",
"text": "please share your query and some sample data.",
"username": "psram"
},
{
"code": "var start = new Date(new Date().setFullYear(new Date().getFullYear() - 1)); //Minus One Year \nstart.setHours(0,0,0,0); //Set to Start of the Day\n\nvar end = new Date();\nend.setHours(23,59,59,999); //Set to End of the Day\nprint(start, end)\n\ndb.collection.aggregate()\n.match({country: \"Australia\", created_date: {$gte: start, $lt: end}});\n",
"text": "need",
"username": "psram"
},
{
"code": "",
"text": "Exactly and my date field in timestamp. Not sure how to filter with 365 days or 1 year like that.Hi @Sai_Saran_Gangisetty I would reitrate @psram 's request for an example document that you’re working with.How a MongoDB query looks like would depend entirely on the document to be queries (structure, field names, datatypes on those fields, etc.). This is similar to how you would be required to know how an SQL relational schema look like and how each table connects to each other before you can create a query. Just as it’s impossible to create an SQL query without knowing the schema, it is impossible to create a MongoDB query without knowing the example document.So in order to help you further, please post an example document, and the desired result in as much details you can.Best regards\nKevin",
"username": "kevinadi"
}
]
| How to filter multiple conditions with single aggregation pipeline | 2022-11-03T10:00:21.918Z | How to filter multiple conditions with single aggregation pipeline | 9,458 |
null | [
"aggregation",
"queries"
]
| [
{
"code": "statusCANCELEDWONLOSTCANCELEDWONLOST[{\n \"_id\": {\n \"operatorBrandId\": \"0742589f-6e45-4f49-9a84-cbb74eba8320\",\n \"status\": \"CANCELED\"\n },\n \"betAmount\": 300,\n \"countBets\": 1\n},{\n \"_id\": {\n \"operatorBrandId\": \"0742589f-6e45-4f49-9a84-cbb74eba8320\",\n \"status\": \"LOST\"\n },\n \"betAmount\": 3450600,\n \"countBets\": 9430\n},{\n \"_id\": {\n \"operatorBrandId\": \"0742589f-6e45-4f49-9a84-cbb74eba8320\",\n \"status\": \"WON\"\n },\n \"betAmount\": 321900,\n \"countBets\": 691\n}]\n[{\n $match: {\n dateOfDemand: {\n $gte: ISODate('2022-10-01T00:00:00.000Z'),\n $lt: ISODate('2022-10-31T23:59:59.999Z')\n },\n operatorBrandId: {\n $in: [\n '0742589f-6e45-4f49-9a84-cbb74eba8320',\n '24ccd712-ad7a-47ca-b959-b64f7a5249be',\n '225fa4b2-ea62-4922-8e53-67d54d6e3a8c',\n 'a3a0993a-6cca-4b97-b722-278339ba7550',\n 'd3739f59-d652-45db-b2ee-13592982a1b3'\n ]\n }\n }\n}, {\n $group: {\n _id: {\n operatorBrandId: '$operatorBrandId',\n status: '$status'\n },\n betAmount: {\n $sum: '$amountTotal'\n },\n countBets: {\n $sum: 1\n }\n }\n}, {\n $sort: {\n '_id.operatorBrandId': 1,\n '_id.status': 1\n }\n}]\n",
"text": "Hi everyone,I’m pretty to new aggregation with MongoDB and I’m having some mental breakdown over something.I’m trying to aggregate some data and I’m 99% satisfied with I’ve been able to achieve so far.\nIv’e got a field named status that can have 3 different valuesI’m able to group my data according to these 3 status. But what I want, is to group my data that have a status CANCELED and group the rest of the data WON & LOST togetherFor example, here’s what I get right nowI would like to group WON & LOST together so I end up with a betAmount of 3 772 500 and a countBets of 10 121.Here’s my pipelineThanks for the help !",
"username": "Steven_Koralplay"
},
{
"code": "",
"text": "Hey @Steven_Koralplay, are you using Charts? (This is the Charts forum after all!). You can accomplish this by enabling the Binning feature when a string field is in a category channel.If you want to see the raw pipeline that this generated from your choices, you can see that in the View Aggregation Pipeline dialog. You can see we do this by creating a new field that has different values depending on the raw value.\nimage820×903 30.9 KB\nHTH\nTom",
"username": "tomhollander"
},
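Outside of Charts, the same effect can be sketched in a plain aggregation by deriving a grouping field first; the SETTLED label and the collection name bets are made up here, the other field names come from the pipeline in the question.

```js
db.bets.aggregate([
  { $addFields: {
      // Map WON and LOST into a single bucket, keep CANCELED separate.
      statusGroup: { $cond: [ { $eq: ["$status", "CANCELED"] }, "CANCELED", "SETTLED" ] }
  } },
  { $group: {
      _id: { operatorBrandId: "$operatorBrandId", status: "$statusGroup" },
      betAmount: { $sum: "$amountTotal" },
      countBets: { $sum: 1 }
  } },
  { $sort: { "_id.operatorBrandId": 1, "_id.status": 1 } }
]);
```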
{
"code": "",
"text": "HeyThank you for your reply.I wasn’t aware of such feature. I haven’t seen anything like that on Compass so far. I’ll have to dig into it and see if it can helps",
"username": "Steven_Koralplay"
},
{
"code": "",
"text": "This is in Charts, not in Compass.",
"username": "tomhollander"
}
]
| Group data according to field value instead of field name | 2022-11-05T15:53:33.193Z | Group data according to field value instead of field name | 1,883 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "How to enabled javascript engine in digital ocean",
"username": "jediiry_johnson"
},
{
"code": "db.eval()",
"text": "Welcome to the MongoDB community @jediiry_johnson !Per Digital Ocean’s MongoDB Limits documentation, they do not support server-side JavaScript:DigitalOcean Managed MongoDB does not support server-side Javascript. We support MongoDB’s more recent and secure Aggregation Pipeline framework.Aggregation Pipeline queries are the recommended approach for improved security, performance, and concurrency. The db.eval() command has been deprecated since MongoDB 3.0 (March 2015) and was removed in MongoDB 4.2 (August 2019).If you can share more details on what you are trying to do in server-side JavaScript, we can try to suggest an alternative approach.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Cannot run server-side javascript without the javascript engine enabled | 2022-11-05T23:19:03.579Z | Cannot run server-side javascript without the javascript engine enabled | 1,316 |
null | [
"dot-net"
]
| [
{
"code": "public IRealmCollection<FuCode> FuCodes { get; private set; }\nprivate Realm _realm;\n\npublic async Task SubscribeAsync(CancellationToken cancellationToken)\n {\n _realm = await Realm.GetInstanceAsync(cancellationToken: cancellationToken);\n FuCodes = _realm.All<FuCode>().AsRealmCollection();\n FuCodes.CollectionChanged += OnSubscribeCollectionChanged;\n\n SubscribeForPropertyChanged(FuCodes); //NO1\n }\n\n private void OnSubscribeCollectionChanged(object sender, NotifyCollectionChangedEventArgs e)\n {\n if (e.Action == NotifyCollectionChangedAction.Add)\n {\n SubscribeForPropertyChanged(e.NewItems.Cast<FuCode>()); //NO2\n }\n }\n\n private void SubscribeForPropertyChanged(IEnumerable<FuCode> fuCodes)\n {\n foreach (FuCode fuCode in fuCodes)\n {\n fuCode.PropertyChanged += OnSubscribePropertyChanged;\n }\n }\n\n private void OnSubscribePropertyChanged(object sender, PropertyChangedEventArgs e)\n {\n L.Trace($\"{e.PropertyName}\");\n }\n\n",
"text": "NO1 → not fire propertychagned!!!\nNO2 → fire propertychagned!!!why?",
"username": "lasidolasido"
},
{
"code": "PropertyChangedFuCodesPropertyChangedFuCodesFuCodes",
"text": "Hi @lasidolasido, thanks for your message.I think it’s a little difficult to understand what’s happening here without looking at exactly what you are doing and how you are modifying your collection. Nevertheless, what I suppose is happening is:I suppose you’re adding new FuCodes to the list and then changing their properties, and that’s why you get only notifications in the second case.",
"username": "papafe"
}
]
| Is this normal behavior? PropertyChanged | 2022-11-04T08:48:37.406Z | Is this normal behavior? PropertyChanged | 1,045 |
null | [
"python",
"indexes"
]
| [
{
"code": "",
"text": "Hello,\nIm having an issue while trying to create indexes that already exists.\nMy scenario is that i have a service that every time it resets, it trying to create indexes.\nFrom what i’ve read on MongoDB documentation, it should not recreate it. When i tried it, i saw that our Mongo servers got lots of “CreateIndexes” commands when i issued db.currentOp() and it impacted the performance of our server. I wonder why it still tries to create the indexes and if there is something i can do to solve it? We are using python with pymongo to trigger the CreateIndexes command.PS\nWhen i saw the commands when used db.currentOp(), i saw that all of the commands had $truncated before them, couldn’t find something about it, what does it mean?",
"username": "Ronen_Zolicha"
},
{
"code": "",
"text": "Hi @Ronen_Zolicha and welcome to the MongoDB community forum!!From what i’ve read on MongoDB documentation, it should not recreate it.Yes, you are right. Starting from MongoDB version 2.0, the db..createIndex() would not recreate the indexes provided the field name on the index are same.\nHowever please note that, the issuing the createIndex on the same field would not result with an error log message.if there is something i can do to solve it?To understand the issue in more detail, could you share the following information:My scenario is that i have a service that every time it resets,Could you also help in understanding, what does the reset operation does in your service?Let us know if you have any further queries.Best Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hey Aasawari,I’m using python 3.6\nMongo version is 4.2\nPymongo version 3.12.3The code I’m trying to do is Pymongo’s create_indexesThe response from db.currentOp() is showing the create_index commandWhen i’m calling to create_indexes, the call is waiting for response and it won’t get it till the mongo server will finish to scan the collection. I’m not sure why it scans the collection because this specific index is already exists.Im attaching one photo which contains all of the info since i cant attach more than one because i’m new here.\n",
"username": "Ronen_Zolicha"
},
{
"code": "",
"text": "Update:\ni looked into it more and more and found that one of the indexes passed with wrong name and everything seems fine. Thanks for the help ",
"username": "Ronen_Zolicha"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| createIndexes triggers operations while the indexes already exists | 2022-10-31T06:04:29.951Z | createIndexes triggers operations while the indexes already exists | 2,373 |
null | [
"aggregation"
]
| [
{
"code": "db.Collection_A.aggregate([\n{\n $match : {\n DELETED : { $ne: 'N'},\n }\n},{\n $group: {\n _id: {\n uderid : '$userid',\n email: '#email' \n },\n distinct: {$first: '$$ROOT'}\n }\n},{\n $replaceRoot: {newRoot: '$distinct'}\n},{\n $skip : 0\n},{\n $limit: 100\n}\n])\n\nThis query is\n",
"text": "Hi,\nI have a task to get data from collection which have millions of records and need to support pagination. THe issue with that is the collection also contains duplicate data and we don’t need that to come up in result. I can use group but that is making the query very slow. For example the limit is 2000 than using group after using limit always gives result less that 2000. and i I first use group than limit than the query is grouping all data in collection and than giving first 2000 records. The issue with this approach is currently I have more than million data in db so the group operation is very slow. can any one help.for example:-\nCollection_A\nall fields:-\n_id | firstname | lastname | userid | email | purchseOrder | purchaseUnit | contact number | …Data example_id | john | doe | johndoe | [email protected] | 12345 | 78 | +1 (123)- (123) - 7896 | …_id | john | doe | johndoe | [email protected] | 89076 | 800 | +1 (123)- (123) - 7896 | …So in this collection one user can have multiple purchase order , and I only want this user details only one time… This collection have more than million dataSO currently I am using this aggregation:-vey slow as I have million of records and I need to support pagination",
"username": "Aggregation_need_help"
},
{
"code": "",
"text": "Hi @Aggregation_need_help ,Yes this sounds like a sub optimal way.I suggest to create a materialized view of the distinct values using a $merge or $out periodically. This will allow you to index and query already a distinct set of values.In this materialised view you can also removed deleted users. We also recommend to use _id in the created collection for pegination:A look at how the Bucket Pattern can speed up paging for usersThis is a better way than skip and limit.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny I am new to this this topic could you help mw with the query",
"username": "Aggregation_need_help"
},
{
"code": "db.Collection_A.aggregate([\n{\n $match : {\n DELETED : { $ne: 'N'},\n }\n},{\n $group: {\n _id: {\n uderid : '$userid',\n email: '#email' \n },\n distinct: {$first: '$$ROOT'}\n }\n},{\n $replaceRoot: {newRoot: '$distinct'}\n},\n{\n $out : \"distinct_Collection_A\"\n} ])\ndb.distinct_Collection_A.find({}).sort({_id : 1}).limit(100);\ndb.distinct_Collection_A.find({_id : {$gt : <LAST_ID_ABOVE>}}).sort({_id : 1}).limit(100);\n...\n",
"text": "Hi @Aggregation_need_help ,First create the new collection by running:Now you can fetch the first round of documents:Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Get data from collection having millions of record and also contains duplicate data using limit() and skip() | 2022-11-06T10:11:33.905Z | Get data from collection having millions of record and also contains duplicate data using limit() and skip() | 2,229 |
null | [
"aggregation"
]
| [
{
"code": "",
"text": "I have two different collections: One is Twitter_accounts where i have the info about the accounts(name, language, url…) And in the other collection is tweets_Activity. I have in it 600 tweets per acount from the first collection. In this collection (tweets_Activity)I have a field named user in that field i have two fields that i want to export to the first collection(followers and following).I tried using aggregation but it wont update in the first collection. And i dont know how to use the pipline efficiently",
"username": "Mahmoud_Bouhorma"
},
{
"code": "twitter_accountstweets_activity",
"text": "Hi @Mahmoud_Bouhorma welcome to the community!Could you post example documents from the two collections? What I understand is, you wanted to update the documents in the collection twitter_accounts using the information in the collection tweets_activity. Is this correct? Could you please post an example of the desired end result?I tried using aggregation but it wont update in the first collection.What have you tried so far that didn’t work?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "{ $merge: {\n into: <collection> -or- { db: <db>, coll: <collection> },\n on: <identifier field> -or- [ <identifier field1>, ...], // Optional\n let: <variables>, // Optional\n whenMatched: <replace|keepExisting|merge|fail|pipeline>, // Optional\n whenNotMatched: <insert|discard|fail> // Optional\n} }\n",
"text": "Look at the $merge as it can be used to modify the origin collection:The pattern is $lookup from the foreign collection, $unwind , $project then the $merge",
"username": "Ilan_Toren"
},
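A server-side sketch of that idea, using the collection names from the question and field names that appear later in the thread (Twitter_handle, user.screen_name, user.followers_count); note that $merge on a non-_id field assumes a unique index on that field in the target collection.

```js
db.tweets_Activity.aggregate([
  // Collapse the many tweets per account down to one document per account.
  { $group: { _id: "$user.screen_name", followers: { $max: "$user.followers_count" } } },
  { $project: { _id: 0, Twitter_handle: "$_id", followers: 1 } },
  // Write the result into the accounts collection, updating matching documents only.
  { $merge: { into: "Twitter_accounts", on: "Twitter_handle",
              whenMatched: "merge", whenNotMatched: "discard" } }
]);
```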
{
"code": "_id = i[\"_id\"]\n\nfollowers = i[\"followers\"]\n\nquery = {\"Twitter_handle\" : _id}\n\naux = {\"$set\" : {\"followers\" : followers}}\n\naccounts.update_one(query, aux)",
"text": "I solved it, thank you. Here is the solution:result = tweets.aggregate([ {\"$group\" : {\"_id\": “$user.screen_name”, “followers”: {\"$max\" : “$user.followers_count”}}} ])for i in result:",
"username": "Mahmoud_Bouhorma"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Adding a specific field from a collection to an other collection, in the same DB | 2022-11-02T16:20:53.508Z | Adding a specific field from a collection to an other collection, in the same DB | 1,587 |
null | [
"sharding",
"field-encryption"
]
| [
{
"code": "",
"text": "Hi\nThe documentation mentions the following on creating a shard key on encrypted fields:\n“Specifying a shard key on encrypted fields or encrypting fields of an existing shard key may result in unexpected or incorrect sharding behavior.”What is the unexpected or incorrect behaviour? Is it documented anywhere? Is it referring to uneven distribution or the queries going to the wrong shard? Does it apply to both random and deterministic algorithms?Thank you",
"username": "Sason_Braha1"
},
{
"code": "",
"text": "Hi @Sason_Braha1 ,Since encryption messes with the values that are stored on the database side without the ability of the server to decrypt them we cannot say how sharding would behave as sharding strategies depends on values.Therefore, it is technically possible to do but unadvisable so consider doing so at your own risk.Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you @Pavel_Duchovny",
"username": "Sason_Braha1"
},
{
"code": "xxf(x, key)f(x, key)xstringf(x, key)binData",
"text": "Hi @Pavel_Duchovny , can you give an example of a scenario where because of encryption we can’t say how sharding would behave in client side field level encryption?As we are discussing CSFLE, if both encryption and decryption happens on the client side, why does it change mongo behavior in regards to sharding? Isn’t encrypted and unencrypted values for mongos are in both cases, just values? Why does the database side need to decrypt them? It’s done on the client side.Instead of inserting value x and querying by value x, we insert value f(x, key) (deterministic algorithm) and query by value f(x, key).x might just be a string, and f(x, key) is binData, is that the difference maker?",
"username": "tamir_guez2"
},
{
"code": "",
"text": "Hi @tamir_guez2 ,I don’t have a good answer for that simply because I don’t know all the internal processes that might be impacted.But from my familiarity with MongoDB documentation we usually write those disclaimer simply because we don’t test those scenarios. Untested scenario means it could yield all kind of unexpected behavior or bugs. The last thing you want with sharding is unexpected behavior.Therefore I would not go this route (personal opinion)",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I see, fair enough thank you",
"username": "tamir_guez2"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Shard key and CSFLE (encryption) | 2022-11-06T06:45:12.340Z | Shard key and CSFLE (encryption) | 2,739 |
[
"dot-net"
]
| [
{
"code": "System.ArgumentException: Unsupported filter: Invoke(x => ((Convert(x.FromDate, Nullable`1) >= 2020/6/8 7:12:07) AndAlso (Convert(x.FromDate, Nullable`1) <= 2020/6/8 8:12:07)), {document}).\n at MongoDB.Driver.Linq.Translators.PredicateTranslator.Translate(Expression node)\n at MongoDB.Driver.Linq.Translators.PredicateTranslator.TranslateAndAlso(BinaryExpression node)\n at MongoDB.Driver.Linq.Translators.PredicateTranslator.Translate(Expression node)\n at MongoDB.Driver.Linq.Translators.PredicateTranslator.TranslateAndAlso(BinaryExpression node)\n at MongoDB.Driver.Linq.Translators.PredicateTranslator.Translate(Expression node)\n at MongoDB.Driver.Linq.Translators.PredicateTranslator.Translate(Expression node, IBsonSerializerRegistry serializerRegistry)\n at MongoDB.Driver.Linq.Translators.PredicateTranslator.Translate[TDocument](Expression`1 predicate, IBsonSerializer`1 parameterSerializer, IBsonSerializerRegistry serializerRegistry)\n at MongoDB.Driver.ExpressionFilterDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry)\n at MongoDB.Driver.MongoCollectionImpl`1.CreateCountDocumentsOperation(FilterDefinition`1 filter, CountOptions options)\n at MongoDB.Driver.MongoCollectionImpl`1.CountDocumentsAsync(IClientSessionHandle session, FilterDefinition`1 filter, CountOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.<>c__DisplayClass33_0.<CountDocumentsAsync>b__0(IClientSessionHandle session) at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n/// <summary>\n/// True\n/// </summary>\n/// <typeparam name=\"T\"></typeparam>\n/// <returns></returns>\npublic static Expression<Func<T, bool>> True<T>() => parameter => true;\n\n/// <summary>\n/// False\n/// </summary>\n/// <typeparam name=\"T\"></typeparam>\n/// <returns></returns>\npublic static Expression<Func<T, bool>> False<T>() => parameter => false;\n\n/// <summary>\n/// Or\n/// </summary>\n/// <typeparam name=\"T\"></typeparam>\n/// <param name=\"this\"></param>\n/// <param name=\"other\"></param>\n/// <returns></returns>\npublic static Expression<Func<T, bool>> Or<T>(this Expression<Func<T, bool>> @this, Expression<Func<T, bool>> other)\n{\n var invokedExpr = Expression.Invoke(other, @this.Parameters.Cast<Expression>());\n return Expression.Lambda<Func<T, bool>>(Expression.OrElse(@this.Body, invokedExpr), @this.Parameters);\n}\n\n/// <summary>\n/// And\n/// </summary>\n/// <typeparam name=\"T\"></typeparam>\n/// <param name=\"this\"></param>\n/// <param name=\"other\"></param>\n/// <returns></returns>\npublic static Expression<Func<T, bool>> And<T>(this Expression<Func<T, bool>> @this, Expression<Func<T, bool>> other)\n{\n var invokedExpr = Expression.Invoke(other, @this.Parameters.Cast<Expression>());\n return Expression.Lambda<Func<T, bool>>(Expression.AndAlso(@this.Body, invokedExpr), @this.Parameters);\n}\n",
"text": "The MongoDB.Driver version : image494×559 21.3 KBWhen I use Expression<Func<TDocument, bool>> filter as following\nimage1129×213 8.8 KBThrow Exception :The Expression.True Extesions scource:",
"username": "zq1314"
},
{
"code": "",
"text": "Hi!\ni have this problem.Are You found the problem?",
"username": "Saeed_Safaee"
}
]
| MongoDB.Driver Resolve Expression Throw Exception | 2020-06-08T08:05:46.154Z | MongoDB.Driver Resolve Expression Throw Exception | 4,533 |
|
null | []
| [
{
"code": "",
"text": "Hi,\nI’m trying to create a unique complex index. [(‘COMPANY_NAME’, -1), (‘user_name’, 1)].\nI’m getting a 11000 error:\ndup key: { COMPANY_NAME: null, user_name: null } and it’s referring to a document that doesn’t have that feiald!\nWhat can be the reason for that?\nThank you",
"username": "Moshe_G"
},
{
"code": "",
"text": "Non existing fields default to null in terms of index value.You may use a partial index to explicit ignore null or non existing fields.",
"username": "steevej"
},
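For illustration, the partial index could look like the sketch below (db.collection is a placeholder):

```js
db.collection.createIndex(
  { COMPANY_NAME: -1, user_name: 1 },
  {
    unique: true,
    // Only index documents where both fields exist, so documents missing them
    // no longer collide on the implicit null values.
    partialFilterExpression: {
      COMPANY_NAME: { $exists: true },
      user_name: { $exists: true }
    }
  }
);
```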
{
"code": "",
"text": "Hi @Moshe_G welcome to the community!@steevej is correct. For completeness, here’s the relevant snippet regarding this in the Unique Index page:If a document does not have a value for the indexed field in a unique index, the index will store a null value for this document. Because of the unique constraint, MongoDB will only permit one document that lacks the indexed field.Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Thank you all for the helpful answer. Based on your response, I added {sparse=True} and it works just fine!",
"username": "Moshe_G"
},
{
"code": "",
"text": "@steevej, I see the “partial index” is better than {sparse=True}…",
"username": "Moshe_G"
}
]
| E11000 on field that is not duplicated | 2022-11-03T15:29:38.571Z | E11000 on field that is not duplicated | 1,519 |
null | [
"aggregation",
"queries",
"node-js",
"data-modeling"
]
| [
{
"code": "const obj = {\n _id: ObjectId('3432sdscscsc'),\n b: 'string'\n};\n",
"text": "Hi,I want to perform a $lookup, where the localField depends on the foreignField beign of type objectId or a string.\nExample:if parameter b is a string, the localField must be x; had been a objectId type, the localField whould be y.Someone knows how to handle the matter?\nThanks\nkm",
"username": "kevin_Morte_i_Piferrer"
},
{
"code": "pipeline = [ { /* any stage */ } ]\nlookup = {}\nif ( obj.b == 'string' )\n{\n lookup =\n { '$lookup' :\n { 'from' : YourCollectionName ,\n 'localField' : x ,\n 'foreignField' : ... ,\n 'as' : ...\n }\n }\n}\nelse if ( obj.b === 'objectId' )\n{\n lookup =\n { '$lookup' :\n { 'from' : YourCollectionName ,\n 'localField' : y ,\n 'foreignField' : ... ,\n 'as' : ...\n }\n }\n}\npipeline.push( lookup ) ; \n\n\n",
"text": "Please provide sample documents of both the source and looked up collections that represents the 2 situations.But, an aggregation pipeline is simply an array of stages. Nothing stops you from using normal control flow statement of the programming language you use.For example:",
"username": "steevej"
},
{
"code": "",
"text": "Hi Steevej,well, I should try this. If it works it is it.My problem is that I wanted to allow the user to upload data, for which some references were not passed along. Like: an invoice with a supplier, but without providing information from this supplier apart from the mere string of the name.\nBut now I thing I should just create an id for this supplier and simply not allow in the app entities without id.I don’t know if that’s the solution. (I do thing so, although I also thought the other way was the solution). But I noticed that managing the inherent complexity of working with both id and string is just huge and not worth it.Regarding your solution, I think I didn’t explained it well, but the obj.b, in my case, is not present in my app memory. obj is the mongo collection I’m looking it up. The “YourCollectionName”. The ideawas to provide a localField depending on one field in the retrieved object.",
"username": "kevin_Morte_i_Piferrer"
},
{
"code": "const obj = {\n _id: ObjectId('3432sdscscsc'),\n b: 'string'\n};\n",
"text": "That is why it is best to publish real documents from your collections. When I seeI see a constant object defined in JS code. Not a document in a collection. For this my solution is inadequate unless you want to pay the performance penalty of 2 database access.You will not be able to use the localField/foreingField version. You will need to use the version at https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/#join-conditions-and-subqueries-on-a-joined-collection.Also see Conditional $lookup",
"username": "steevej"
},
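For completeness, a sketch of the sub-query form of $lookup for this case: which local field is compared depends on the type of the foreign field b. The collection names (orders, suppliers) are illustrative only.

```js
db.orders.aggregate([
  { $lookup: {
      from: "suppliers",
      let: { x: "$x", y: "$y" },
      pipeline: [
        { $match: { $expr: {
            $cond: [
              { $eq: [ { $type: "$b" }, "string" ] }, // if b is a string...
              { $eq: [ "$b", "$$x" ] },               // ...compare it to local x
              { $eq: [ "$b", "$$y" ] }                // ...otherwise to local y
            ]
        } } }
      ],
      as: "matched"
  } }
]);
```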
{
"code": "",
"text": "Very true Steevej,\nI over supposed. I looked those links you send and I’m wondering if that could be the solution for another issue I have.\nCould you check this?\nI think It can’t because that solution has to do with $lookup and I would need to perform with a bulk operation.",
"username": "kevin_Morte_i_Piferrer"
},
{
"code": "",
"text": "I saw your other question and I have no solution.",
"username": "steevej"
},
{
"code": "db.history.aggregate()\n .project({ \"year\": { \"$year\": \"$created_date\" }, user_id: 1 })\n .match({ year: 2021, \"user_id\": 0 })\n .group({ _id: \"$user_id\", count: { $sum: 1 } })\n .lookup({\n \"from\": \"lookup_c\",\n \"let\": { \"id\": \"$_id\" },\n \"pipeline\": [\n { \"$project\": { \"_id\": 1, \"user_name\": 1 } },\n { \"$match\": { \"$expr\": { \"$and\": [{ \"$eq\": [\"$_id\", \"$$id\"] }] } } }\n ],\n \"as\": \"p\"\n })\n .unwind(\"$p\")\n .project({ _id: 1, count: 1, username: \"$p.user_name\" })\n .sort({ count: -1 });\n",
"text": "(this may help others to get answer)\nhere is one solution to One2One mapping using look up",
"username": "psram"
}
]
| $lookup with conditional localField | 2021-12-21T00:29:39.267Z | $lookup with conditional localField | 6,700 |
null | [
"sharding",
"upgrading"
]
| [
{
"code": "mongosmongodmongosmongos",
"text": "Hi, we currently run a mongodb 4.4 sharded cluster. The documentation saysThe mongos binary will crash when attempting to connect to mongod instances whose feature compatibility version (fCV) is greater than that of the mongos Does this mean that a mongos 6.0 instance should be able to connect to a mongodb 4.4 sharded cluster since the sharded cluster’s FCV (4.4) is less tha 6.0? Sorry if this is an obvious/dumb question.",
"username": "AmitG"
},
{
"code": "{\n \"t\": {\n \"$date\": \"2022-11-05T18:20:19.555+00:00\"\n },\n \"s\": \"I\",\n \"c\": \"NETWORK\",\n \"id\": 4712102,\n \"ctx\": \"ReplicaSetMonitor-TaskExecutor\",\n \"msg\": \"Host failed in replica set\",\n \"attr\": {\n \"replicaSet\": \"c0\",\n \"host\": \"mongo-c-a:27019\",\n \"error\": {\n \"code\": 188,\n \"codeName\": \"IncompatibleServerVersion\",\n \"errmsg\": \"remote host has incompatible wire version: Server min and max wire version (9,9) is incompatible with client min wire version (13,13).You (client) are attempting to connect to a node (server) with a binary version with which you (client) no longer accept connections. Please upgrade the server’s binary version.\"\n },\n \"action\": {\n \"dropConnections\": false,\n \"requestImmediateCheck\": false,\n \"outcome\": {\n \"host\": \"mongo-c-a:27019\",\n \"success\": false,\n \"errorMessage\": \"IncompatibleServerVersion: remote host has incompatible wire version: Server min and max wire version (9,9) is incompatible with client min wire version (13,13).You (client) are attempting to connect to a node (server) with a binary version with which you (client) no longer accept connections. Please upgrade the server’s binary version.\"\n }\n }\n }\n}\n",
"text": "In fact no, this is not the case. mongos need to be the same version as the mongod.mongos can be a major version below the rest of the sharded cluster, during a version upgrade mongos is the last component to be upgraded before upgrading the FCV.If you upgrade or use a mongos version higher than that of the cluster you’re likely to see mongos not connecting to the shard and config replica sets.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Can mongos 6.0 connect to mongodb 4.4? | 2022-11-05T06:32:34.330Z | Can mongos 6.0 connect to mongodb 4.4? | 2,059 |
null | [
"node-js",
"connecting",
"atlas-cluster"
]
| [
{
"code": "",
"text": "When I run my backend code It showed me this error: “queryTxt ETIMEOUT pwdbtest.swa3aiu.mongodb.net”.\nMy connection string is correct with the correct credentials. I also have updated my current IP address in my network security option, I couldn’t get why the error is persistent.\nIf anybody helps me to understand this issue.Thanks",
"username": "Iram_Barkat"
},
{
"code": "mongosh0.0.0.0",
"text": "Hi @Iram_Barkat, and welcome to the MongoDB Community forums! Can you state if you are able to connect from the machine running your Node.JS application with the mongosh shell?I would also double check that you’ve put the correct IP address in the allow list.As a final test you could temporarily add 0.0.0.0 to the allow list to see if you’re able to connect from your application, just make sure you remove that so unwanted visitors cannot access your database once you’re testing has been completed.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "My connection string is correct with the correct credentials.Could you share the exact form you use? You may redact the credentials for security reasons. The errorqueryTxt ETIMEOUT …seems to indicate a DNS issue. It might be caused byTo solve point 2 you may try to use 8.8.8.8 Google’s DNS.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you very much my problem is sorted out.",
"username": "Iram_Barkat"
},
{
"code": "",
"text": "Could you please elaborate on how it was solved?This would benefits all users of this forum.Ad Thanks vance",
"username": "steevej"
},
{
"code": "",
"text": "it was just IP address problem. IP address of my home network causing this problem.",
"username": "Iram_Barkat"
},
{
"code": "",
"text": "how did you change it?",
"username": "Romario_Julio"
},
{
"code": "curl ifconfig.me",
"text": "Make sure you have the correct external IP address. You can run a command such as curl ifconfig.me from a terminal window while connected to the machine you want to access MongoDB from. This will give you your machine’s external IP address. Once you have that, follow the instructions to add IP access list entries to your Atlas cluster.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Thank you I have applied this command and also I add this IP address to my atlas cluster but shows me same error whenever I tried to connect from my home network",
"username": "Iram_Barkat"
},
{
"code": "0.0.0.0/0",
"text": "You can try the following:",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "curl ifconfig.meS C:\\Users\\Alipser\\Jugando Con el Front> curl ifconfig.meStatusCode : 200\nStatusDescription : OK\nContent : 181.55.62.205\nRawContent : HTTP/1.1 200 OK\naccess-control-allow-origin: *\nx-envoy-upstream-service-time: 1\nstrict-transport-security: max-age=2592000; includeSubDomains\nContent-Length: 13\nContent-Type: text/plain; charset=…\nForms : {}\nHeaders : {[access-control-allow-origin, *], [x-envoy-upstream-service-time, 1], [strict-transport-security, max-age=2592000; includeSubDomains], [Content-Length, 13]…}\nImages : {}\nInputFields : {}\nLinks : {}\nParsedHtml : mshtml.HTMLDocumentClass\nRawContentLength : 13",
"username": "Romario_Julio"
},
{
"code": "",
"text": "I’m not sure what you’re asking here, but you would take the IP address (shown as Content in the results from PowerShell) and put that into the allow list in Atlas.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Doug, you are my best friend from now.I allowed the IP address and also I allowed the 0.0.0.0./0 IP for connection from everywhere.\nbut I realized that mongod.exe were not starting right now I fixed that bug by creating a folder in local disk C:data/db.\nNow it runs Mongod.exebut it says: {“t”:{\"$date\":“2022-11-04T15:03:52.754-05:00”},“s”:“I”, “c”:“NETWORK”, “id”:23016, “ctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}I think that there is a problem with 27017 port",
"username": "Romario_Julio"
},
{
"code": "mongoshmongod",
"text": "Doug, you are my best friend from now.I am honored to have you consider me a friend. but it says: {“t”:{\"$date\":“2022-11-04T15:03:52.754-05:00”},“s”:“I”, “c”:“NETWORK”, “id”:23016, “ctx”:“listener”,“msg”:“Waiting for connections”,“attr”:{“port”:27017,“ssl”:“off”}}I think that there is a problem with 27017 portThis is normal. As the message states, the server is Waiting for connections. This means that you can connect to the server on port 27017. For a test run mongosh from a command prompt and you should be able to connect.I do see that you’re running mongod locally on your your Windows machine. My instructions were for allowing connections to Atlas based instances. ",
"username": "Doug_Duncan"
},
{
"code": "mongosh",
"text": "This is normal. As the message states, the server is Waiting for connections . This means that you can connect to the server on port 27017. For a test run mongosh from a command prompt and you should be able to connect.Doug I figured out the problem is with my internet provider I changed the network I got connected",
"username": "Romario_Julio"
},
{
"code": "",
"text": "My firewall was off while doing testing. let me again check with “Allow access from anywhere” .\nyet While doing testing again, I conclude my nodemon is not working, it showing me an error or “connection fail”.It seems the server is not being connected with localhost. Can you please help me to understand why is this happening?",
"username": "Iram_Barkat"
},
{
"code": "",
"text": "This topic was automatically closed after 180 days. New replies are no longer allowed.",
"username": "system"
}
]
| Got error while connecting database | 2022-10-29T12:50:53.187Z | Got error while connecting database | 3,568 |
null | [
"database-tools",
"backup"
]
| [
{
"code": "replication:\n oplogSizeMB: 2048\n replSetName: rs0\nrs.initiate({ \"_id\" : \"rs0\", \"version\" : 1, \"members\" : [{ \"_id\" : 0, \"host\" : \"127.0.0.1:27017\" }]})\nmongodump.exe --oplog --archive=f:\\test.mongo_data.gz --gzip --username \"xxxx\" --password \"yyyy\"\nmongorestore.exe --oplogReplay --drop --preserveUUID --gzip --archive=E:\\data\\test.mongo_data.gz --dryRun --verbose\nusing write concern: &{majority false 0}\narchive prelude wdms.journal\narchive prelude config.image_collection\narchive prelude wdms.blobs\narchive prelude wdms.settings\narchive prelude wdms.nodes\narchive prelude wdms.clusterNodes\narchive prelude .oplog\narchive prelude admin.system.users\narchive prelude admin.system.version\npreparing collections to restore from\nfound collection wdms.journal bson to restore to wdms.journal\nfound collection metadata from wdms.journal to restore to wdms.journal\nfound collection wdms.blobs bson to restore to wdms.blobs\nfound collection metadata from wdms.blobs to restore to wdms.blobs\nfound collection wdms.settings bson to restore to wdms.settings\nfound collection metadata from wdms.settings to restore to wdms.settings\nfound collection wdms.nodes bson to restore to wdms.nodes\nfound collection metadata from wdms.nodes to restore to wdms.nodes\nfound collection wdms.clusterNodes bson to restore to wdms.clusterNodes\nfound collection metadata from wdms.clusterNodes to restore to wdms.clusterNodes\nfound collection config.image_collection bson to restore to config.image_collection\nfound collection metadata from config.image_collection to restore to config.image_collection\nfound collection .oplog bson to restore to .oplog\ndon't know what to do with subdirectory \"wdms\", skipping...\ndon't know what to do with subdirectory \"config\", skipping...\ndon't know what to do with subdirectory \"\", skipping...\ndon't know what to do with subdirectory \"admin\", skipping...\nfound collection admin.system.users bson to restore to admin.system.users\nfound collection metadata from admin.system.users to restore to admin.system.users\nfound collection admin.system.version bson to restore to admin.system.version\nfound collection metadata from admin.system.version to restore to admin.system.version\ndry run completed\n0 document(s) restored successfully. 0 document(s) failed to restore.\n",
"text": "I have a single mongo db 4.2 instance running on windows. I am trying to provide a ‘hot’ or live backup of the instance without stopping or shutting down. I am getting errors during a dry run of the restore and am unsure if I am doing something wrong.To ensure the backup is consistent, I am running with the --oplog option. To support the oplog, I attempted to configure a replication set with a single member, the server itself. The server configuration was modified to include:I the ran the follow command on the server:I restarted the server instance and attempted to create a backup as such:This appears to run correctly and produces the test.mongo_data.gz file.Now I want to restore this to another server to test the restore process. I have copied the file to the other server, which is configured identically. It has some existing data, but it is safe to remove it all. To simplify restore, I disable authorization and only allow connections from localhost. I attempt to restore with command:This results in:I am concerned with the “don’t know what to do with subdirectory …” messages. Are they indicative of an error that will affect the restore?",
"username": "Jesse_N_A"
},
{
"code": "",
"text": "You have to give --d or --db flag in restore command for older versions\nI think for 3.0 & above you have to give nsInclude flag\nCheck mongo documentation for details",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thanks, but the --oplogReplay flag does not allow --db or --collection options to be used. I don’t think that will work.",
"username": "Jesse_N_A"
},
{
"code": "",
"text": "Did you try nsInclude?\n–db is for older versions as I mentioned above",
"username": "Ramachandra_Tummala"
},
{
"code": "When using mongorestore with --oplogReplay to restore a replica set, you must restore a full dump of a replica set member created using mongodump --oplog.\n\nmongorestore with --oplogReplay fails if you use any of the following options to limit the data to be restored:\n --db\n --collection\n --nsInclude\n --nsExclude\n",
"text": "nsInclude is also unavailable: mongorestore — MongoDB Manual",
"username": "Jesse_N_A"
},
{
"code": "",
"text": "Cd to the directory where your dump file is residing and try to run or you may have to use oplogfile parameter\nKevin(Kevinadi) may be able to help you more on this as ihave seen a thread asking to use oplogfile",
"username": "Ramachandra_Tummala"
}
]
| Dump and Restore Live instance | 2022-11-04T16:26:12.962Z | Dump and Restore Live instance | 1,909 |
null | [
"atlas-cluster",
"database-tools",
"backup"
]
| [
{
"code": "",
"text": "I tried to dump database from atlas is done… but when i tried to restore the same database it failingfor restore database i used this command\nmongorestore --uri “mongodb+srv://:@.mongodb.net/sample_supplies” --drop dumperror :\nThe --db and --collection flags are deprecated for this use-case; please use --nsInclude instead, i.e. with --nsInclude=${DATABASE}.${COLLECTION}\n2022-08-19T14:39:22.401+0530 building a list of collections to restore from dump dir\n2022-08-19T14:39:22.404+0530 don’t know what to do with subdirectory “dump\\sample_guides”, skipping…\n2022-08-19T14:39:22.405+0530 don’t know what to do with subdirectory “dump\\sample_supplies”, skipping…\n2022-08-19T14:39:22.405+0530 0 document(s) restored successfully. 0 document(s) failed to restore.Help !!!",
"username": "Sanjay_Prasad"
},
{
"code": "",
"text": "Try with nsInclude as suggested in your error message\nWhich is the correct db name?\nsample_supplies vs sample_guides",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I had same issue (and more) with mongorestore to load the sample data in my VirtualBox machine.\nThe root cause was different. I created a user in mongodb with userAdminAnyDatabase role. That proved to be a mistake. Just create a user with root role, give your user’s credentials in mongorestore, with -u and -p options (in case that you didn’t start you instance with these options) and then everything will work.\nIt is a miss that mongodb user creation is not clearly referred in installation instuctions of the documentation and/or in m001 notes.",
"username": "John_Stavroulakis"
},
{
"code": "",
"text": "Root is a superuser with unlimited privileges\nThere are other buit in roles like backup & restore.Those should suffice",
"username": "Ramachandra_Tummala"
}
]
| Mongorestore is not working with --uri string | 2022-08-19T11:51:01.440Z | Mongorestore is not working with –uri string | 3,370 |
null | []
| [
{
"code": "",
"text": "I understood that the Balancer will rearrange chunks if one shard grows too big. It identifies the need by looking at the number of chunks per shard and decides accordinglyBut what if 2 servers have different disk capacities?What if S1 is 100 GB and the other server S2 is 200 GB? In this case, it’ll make more sense for it to have more chucks in S2 than in S1",
"username": "Shrinidhi_Rao"
},
{
"code": "",
"text": "Hi @Shrinidhi_Rao and welcome to the MongoDB community!!The chunk balancing between the shard servers depends on the number of chunks in each of the shard servers and is not dependent on the hardware of the machine. or the document or collection size of the chunks being migrated.The MongoDB cluster balancer deals with redistribution of the shards evenly among the other shards of the sharded collection.\nThe balancer migrates the chunks from shards with higher number of chunks to shards with lower number of chunks till there is an even distribution of chunks between shard servers.Please refer to the documentation of Sharded Cluster Balancer for further understanding.Let us know if you have any further questions.Best Regards\nAasawari",
"username": "Aasawari"
},
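A hedged mongosh sketch related to the answer above: regardless of hardware, you can inspect how data and chunks are currently spread across shards. The database and collection names below are placeholders.

// Run in mongosh against the sharded cluster (through a mongos)
use mydb                                   // placeholder database name
db.mycollection.getShardDistribution()     // per-shard data size, document count and chunk count
sh.status()                                // overall chunk distribution and balancer state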
{
"code": "",
"text": "Thanks!This means I need to make sure everyone is on same hardware specs ",
"username": "Shrinidhi_Rao"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Does the chunk balancing take the SSD size into consideration? | 2022-10-30T06:50:11.162Z | Does the chunk balancing take the SSD size into consideration? | 1,779 |
null | []
| [
{
"code": "{\n \"$and\":[\n {\n \"inquiryDate\":{\n \"$gte\":ISODate(\"2022-04-01\"),\n \"$lt\":ISODate(\"2022-05-01\")\n },\n \"reasonLost\":{\n \"$ne\":\"Existing Customer\"\n }\n }\n ]\n}\n",
"text": "Hi there,Is it possible to search dates within the mongodb Integromat/Make module?Here is the search query I’m trying to runThis is the error I’m getting in integromat:\n“Query: invalid JSON. Unexpected token I in JSON at position 59”Thank you!",
"username": "spencerm"
},
{
"code": "",
"text": "Hi Spencer, I hadn’t seen https://www.make.com/en/integrations/mongodb before to be honest with you: super interesting that they ave search called out. It’s unclear to me if this is using mongodb’s legacy text search indexes or Atlas Search: do you have a relationship with the Make folks to push to get to the bottom of this? we’d be happy to help/reach out to them separately",
"username": "Andrew_Davidson"
},
{
"code": "$searchreasonLostdateinquiryDatestringreasonLost$andmust$search$nemustNot{\n \"analyzer\": \"lucene.english\",\n \"searchAnalyzer\": \"lucene.english\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"reasonLost\": {\n \"type\": \"string\"\n },\n \"inquiryDate\": {\n \"type\": \"date\"\n }\n }\n }\n}\ndefault{\n index: 'default',\n compound: {\n must: [{\n range: {\n 'gte':ISODate('2022-04-01'),\n 'lt':ISODate('2022-05-01'),\n path: 'inquiryDate'\n }\n }],\n mustNot: [{\n string: {\n query: 'Existing Customer',\n path: 'reasonLost'\n }\n }]\n }\n}",
"text": "Unfortunately, this is not actually a $search query. The good thing about the question the question, though, is that the reasonLost field suggests that this query is a good fit for Atlas Search and would likely be lightning fast. I’ve put some details on how to solve this problem with Atlas Search below.It is possible to search by dates but you first need to create a search index with date type for inquiryDate and string for reasonLost. Then it will be relatively easy. As an FYI, $and becomes must in $search and $ne becomes mustNot in Atlas Search.An index definition that would satisfy this query is:And a query, assuming your index is named default that would satisfy the requirement would be:",
"username": "Marcus"
},
{
"code": "{\n\"stringField\": \"62c66186eb4ce3f6851bb798\",\n \"dateField\": {\n \"$lt\": \"2022-11-04T23:28:34.201Z\",\n \"$gt\": \"2022-10-05T23:28:34.201Z\"\n }\n}\n",
"text": "Hey Andrew, another frustrated Integromat/Make Mongo DB User here. I wanted to share what worked for me:The dates should be in ISO 8601 format. Make.com’s built-in {{now}} and date functions like {{formatDate()}} should work.Image Screen-Shot-2022-11-04-at-6-29-02-PM hosted in ImgBBImage Screen-Shot-2022-11-04-at-6-28-45-PM hosted in ImgBB",
"username": "Cristian_Ventura"
}
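For reference, a hedged mongosh equivalent of the Make.com filter shown above, assuming the field is stored as a BSON date; the field names are the same placeholders used in the post.

db.collection.find({
  stringField: '62c66186eb4ce3f6851bb798',
  dateField: {
    $gt: ISODate('2022-10-05T23:28:34.201Z'),
    $lt: ISODate('2022-11-04T23:28:34.201Z')
  }
})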
]
| Searching dates with Integromat | 2022-04-29T17:19:29.660Z | Searching dates with Integromat | 2,342 |
null | [
"dot-net",
"field-encryption"
]
| [
{
"code": "decodedWorkflow.CustomFields = (await clientEncryption.DecryptAsync(workflow.CustomFields)).AsBsonDocument;\nSystem.InvalidCastException: Specified cast is not valid.\n at MongoDB.Bson.BsonValue.System.IConvertible.ToType(Type conversionType, IFormatProvider provider)\n at Newtonsoft.Json.JsonWriter.ResolveConvertibleValue(IConvertible convertible, PrimitiveTypeCode& typeCode, Object& value)\n at Newtonsoft.Json.JsonWriter.WriteValue(JsonWriter writer, PrimitiveTypeCode typeCode, Object value)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeList(JsonWriter writer, IEnumerable values, JsonArrayContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeList(JsonWriter writer, IEnumerable values, JsonArrayContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)\n at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value, Type objectType)\n at Newtonsoft.Json.JsonSerializer.SerializeInternal(JsonWriter jsonWriter, Object value, Type objectType)\n at Microsoft.AspNetCore.Mvc.Formatters.NewtonsoftJsonOutputFormatter.WriteResponseBodyAsync(OutputFormatterWriteContext context, Encoding selectedEncoding)\n at Microsoft.AspNetCore.Mvc.Formatters.NewtonsoftJsonOutputFormatter.WriteResponseBodyAsync(OutputFormatterWriteContext context, Encoding selectedEncoding)\n at Microsoft.AspNetCore.Mvc.Formatters.NewtonsoftJsonOutputFormatter.WriteResponseBodyAsync(OutputFormatterWriteContext context, Encoding selectedEncoding)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResultFilterAsync>g__Awaited|30_0[TFilter,TFilterAsync](ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResultExecutedContextSealed context)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.ResultNext[TFilter,TFilterAsync](State& next, Scope& scope, Object& state, Boolean& isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.InvokeResultFilters()\nobject IConvertible.ToType(Type conversionType, IFormatProvider provider)\n {\n if (conversionType == typeof(object))\n {\n return this;\n }\n\n switch (BsonType)\n {\n case BsonType.Boolean: return Convert.ChangeType(this.AsBoolean, conversionType, provider);\n case BsonType.DateTime: return Convert.ChangeType(this.ToUniversalTime(), conversionType, provider);\n case BsonType.Decimal128: return Convert.ChangeType(this.AsDecimal128, conversionType, provider);\n case BsonType.Double: return Convert.ChangeType(this.AsDouble, conversionType, provider);\n case 
BsonType.Int32: return Convert.ChangeType(this.AsInt32, conversionType, provider);\n case BsonType.Int64: return Convert.ChangeType(this.AsInt64, conversionType, provider);\n case BsonType.ObjectId: return Convert.ChangeType(this.AsObjectId, conversionType, provider);\n case BsonType.String: return Convert.ChangeType(this.AsString, conversionType, provider);\n default: throw new InvalidCastException();\n }\n }\nBsonDocument.Parse((await clientEncryption.DecryptAsync(workflow.CustomFields)).ToString());```\n\nI also tried:\nAnd on that way it returns BsonDocument, but shouldnt it be sufficient just to use AsBsonDocument?",
"text": "Hi,\nwe are using explicit field level encryption and MongoDbDriver 2.17.1\nThis is how we decrypt the value that should become BsonDocument:All the casts that I try to do to get BsonDocument, instead I get RawBsonDocument. All is good until it comes to serialize to Newtonsoft.Json to return the data from the controller.\nIt throws this error:It is the method IConvertible.ToType of MongoDB.BSON package version 2.17.0 that throws the error for BsonNull field:Does this method needs to support Bson.Null type because this is regular Bson type?\nI have solved this by converting tostring and then parsing to BsonDocument, but I dont think it is the best solution.\nDid someone have the similar problem?BsonDocument.Parse(JsonConvert.SerializeObject(BsonTypeMapper.MapToDotNetValue(await clientEncryption.DecryptAsync(workflow.CustomFields))));",
"username": "Simona_Kamberi"
},
{
"code": "",
"text": "Hi, @Simona_Kamberi,Thank you for filing CSHARP-4398. We have reproduced the issue and will investigate how best to resolve it. In the meantime I provided a suggestion for a potential workaround on that ticket.Sincerely,\nJames",
"username": "James_Kovacs"
}
]
| Serialization of RawBsonDocument throws System.InvalidCastException: Specified cast is not valid. on BsonNull field | 2022-11-04T08:44:54.240Z | Serialization of RawBsonDocument throws System.InvalidCastException: Specified cast is not valid. on BsonNull field | 2,489 |
null | [
"react-native"
]
| [
{
"code": "",
"text": "Hello all,We are currently using AWS Cognito as our custom JWT authentication provider with our client Realm app. We have the whole flow working smoothly and have end-users interacting with our mobile app with no problems regarding authentication whatsoever. However, we have been trying to come up with a solution to creating backups of our infrastructure and data and getting stuck on one particular point, which is restoring user auth information.When a user submits their credentials to Cognito, an Id token is retrieved from the response that has all the fields and information that we expect to see. This includes the ‘sub’ value. The ‘sub’ value is generated to be universally unique for each user by Cognito while a user is being created and is immutable. When the Id token is passed onto the Realm app as JWT credentials, a new app user is created if none exist or retrieved if it exists. Realm uses the ‘sub’ value received from Cognito and we can observe this in the identities array of the Realm app user object.The catch, and the source of our problems, is that the ‘sub’ value generated by Cognito cannot be restored when restoring user data from backups. It is generated from scratch even if you are restoring a lost user account from an earlier backup and you know the previous ‘sub’ value. In such an event, since Realm keeps the old ‘sub’ value as the user’s identity, which is now lost, we have no means of linking this existing Realm user to their new ‘sub’ generated by Cognito.We have been in contact with AWS about this and learned that we have no power over the ‘sub’ value or the token itself. So, we were hoping that there is a way for us to manage this from Realm. Ideally, we would be able to use a custom value as the value of the ‘id’ field within the identities array of the user instead of the ‘sub’ value automatically. However, we couldn’t figure out how to achieve this. So, we were wondering if there is a way of doing this without having to go the Custom Function Authentication route. Lastly, if that is a must, how would we go about migrating our existing Realm app users to this new method. Would that be done via identity linking?Any help on this lengthy issue is very much appreciated.\n-Cagri",
"username": "CagriC"
},
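A minimal, hedged sketch of the Custom Function Authentication route mentioned above, written as an Atlas App Services JavaScript function. It assumes the client sends a stable identifier you control (here payload.email), that the linked data source is named mongodb-atlas, and that the app/users database and collection names are placeholders; JWT verification is omitted and would be required in a real handler.

exports = async function (payload) {
  // Look up (or create) a stable external ID that does not depend on Cognito's regenerated 'sub'
  const users = context.services.get('mongodb-atlas').db('app').collection('users');
  const existing = await users.findOne({ email: payload.email });
  if (existing) {
    return existing.externalId; // App Services resolves the same app user on every login
  }
  const externalId = new BSON.ObjectId().toString();
  await users.insertOne({ email: payload.email, externalId });
  return externalId;
};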
{
"code": "",
"text": "Hello @CagriC\nSorry i wont be able to help you about your issue but i would like to setup aws cognito with mongodb atlas and it seems like you have done it.\nI did not see any documentation about it and even in the configuration page of custom JWT authentication i dont see any field talking about authentication url where i could put my cognito information to validate the JWT token once someone is trying to use the graphql endpoint.So i would like to know how you did to setup the JWT authenticator for cognito?\nDid you use custom JWT authenticator? if yes , what parameter did you put?\nDid you use custom function authenticator? if yes, could you share the function?Thank you for your help",
"username": "cyril_moreau"
},
{
"code": "",
"text": "Hello @cyril_moreauFiguring out the JWT configuration between Realm and Cognito was a bit tricky when you’re trying to figure out the parameters, but very straightforward when you do to set up once you figure out the parameters. Here is the configuration screen we have:\nCleanShot 2022-11-04 at 07.07.52@2x3110×1492 354 KB\nThe JWK URI has the same pattern for all Cognito pools. Once you construct that URI using your own pool information, see if you can open it in a browser, you should be able to view a JSON response when you open that URI on a browser.We are not using GraphQL, though, so I can’t help you if there are additional steps required to configure GraphQL endpoints with JWT.",
"username": "CagriC"
},
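For completeness, the JWK URI pattern referenced above; to the best of my knowledge it is the same for every Cognito user pool, and the region and user pool ID below are placeholders.

const region = '<aws-region>';          // placeholder, e.g. eu-west-1
const userPoolId = '<user-pool-id>';    // placeholder
const jwksUri = `https://cognito-idp.${region}.amazonaws.com/${userPoolId}/.well-known/jwks.json`;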
{
"code": "",
"text": "Thank you for the information, it works now",
"username": "cyril_moreau"
}
]
| Issues with Identities provided by AWS Cognito as Custom JWT Provider | 2021-11-10T20:43:59.632Z | Issues with Identities provided by AWS Cognito as Custom JWT Provider | 4,166 |
null | [
"queries"
]
| [
{
"code": "",
"text": "Hello there,Developing my first big project using MongoDB, I have multiple issues with query targeting.To sum up my project. I have a DB of a huge amount of vehicles (+80k), and I want the customer to be able to query a specific vehicle with multiple filter that are all optionals (make, model, maxMileage, minMileage, minPrice, maxPrice, bodyColor, …)Currently, I am creating indexes with the Performance Advisor on Atlas, since I have a lot of critea to filter, it’s not working in 100% of cases.I read the documentation about indexes, but I find it very difficult to understand. Do you have a use-case of queries where you have multiple criterias to filter your queries and which are all optionals?Thanks for you help!",
"username": "Robin_J"
},
{
"code": "{\n _id: 0,\n make: 'xxx',\n model: 'yyy',\n mileage: 1234,\n ...\n}\n{\n _id: 0,\n attributes: [\n {key: 'make', value: 'xxx'},\n {key: 'model', value: 'yyy'},\n {key: 'mileage', value: 1234},\n ...\n ]\n}\nattribute",
"text": "Hi @Robin_J welcome to the community!Perhaps you’re looking for the Attribute pattern?The tldr is, instead of doing this:you do this:Then you can create an index on the attribute field and index all the contents of that field efficiently. Please see the linked page for more details. The polymorphic pattern is another that you may find interesting as well.Note that this is just an idea off the top of my head. It’s best to test this schema’s performance and usability according to your specific use case.Best regards\nKevin",
"username": "kevinadi"
},
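A hedged mongosh sketch to go with the attribute pattern above: a single compound index covers lookups on any attribute. The collection name is a placeholder and the field names follow the example in the post.

db.vehicles.createIndex({ 'attributes.key': 1, 'attributes.value': 1 })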
{
"code": "",
"text": "Hello @kevinadi,Thanks for your help, it was a great idea, it works very good!\nHowever, I have a problem to filter using $gte and $lte for example on key like “maxPrice”, “minPrice”.\nAll “value” are integer when key is “maxPrice” or “minPrice”, is not working because other values are not integer, for example “model” ?Thanks !",
"username": "Robin_J"
},
{
"code": "> db.test.find()\n[\n {\n _id: 0,\n attributes: [\n { k: 'make', v: 'cheapcar' },\n { k: 'price', v: 1000 },\n { k: 'mileage', v: 10000 }\n ]\n },\n {\n _id: 1,\n attributes: [\n { k: 'make', v: 'expensivecar' },\n { k: 'price', v: 10000 },\n { k: 'mileage', v: 12000 }\n ]\n }\n]\n> db.test.find({attributes: {$elemMatch: {k: 'price', v: {$gt:5000}}}})\n[\n {\n _id: 1,\n attributes: [\n { k: 'make', v: 'expensivecar' },\n { k: 'price', v: 10000 },\n { k: 'mileage', v: 12000 }\n ]\n }\n]\n",
"text": "However, I have a problem to filter using $gte and $lte for example on key like “maxPrice”, “minPrice”.If you need to search for something with a specific value range in an array of sub-documents, you should use $elemMatch.For example:Finding cars where the price is greater than 5000:There are more interesting examples in the page Query an Array of Embedded Documents.Is this the method you need? If not, could you please provide some example data & the desired output?Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "_id: 1,\n...\nattributes: [\n { key: 'totalPrice', value: 40600 },\n { key: 'mileage', value: 67953 },\n { key: 'firstRegistrationDate', value: 2019 },\n { key: 'make', value: 9 },\n { key: 'model', value: 19155 },\n { key: 'hp', value: 190 },\n { key: 'numberOfSeats', value: 5 },\n {\n key: 'modelVersionInput',\n value: '40 TDI Q SPORT LM20 eSITZE KAMERA AHK ACC '\n },\n { key: 'fuelCategory', value: 1 },\n { key: 'transmissionType', value: 1 },\n { key: 'numberOfDoors', value: 5 },\n { key: 'bodyType', value: 3 },\n { key: 'bodyColor', value: 10 },\n {\n key: 'equipments',\n value: [4, 50, 20, 130, 187, 140, 239, 38, 133, 158, 139, 6, 34, 221, 23, 124, 224, 157, 153, 11],\n }\n],\n{\n $and: [\n { \"attributes.key\": \"make\", \"attributes.value\": 9 },\n { \n $or: [\n { \"attributes.key\": \"model\", \"attributes.value\": 19155 },\n { \"attributes.key\": \"model\", \"attributes.value\": 19156 },\n { \"attributes.key\": \"model\", \"attributes.value\": 19157 },\n ] \n },\n { attributes: { key: \"totalPrice\", value: { $lte: 50000 } } },\n { \n $or: [\n { \"attributes.key\": \"bodyColor\", \"attributes.value\": 1 },\n { \"attributes.key\": \"bodyColor\", \"attributes.value\": 2 },\n { \"attributes.key\": \"bodyColor\", \"attributes.value\": 10 },\n ] \n },\n { \"attributes.key\": \"equipments\", \"attributes.value\": 4 },\n { \"attributes.key\": \"equipments\", \"attributes.value\": 50 },\n { \"attributes.key\": \"equipments\", \"attributes.value\": 20 },\n { \"attributes.key\": \"equipments\", \"attributes.value\": 130 },\n ]\n}\n",
"text": "Yes sure, this is an example of one of my document :Make, model, fuelCategory, transmissionType, bodyType, bodyColor, equipments values are ids FYI.The customer has to be able to query vehicles with as much criteria he wants. For example, something like this :Moreover, if possible I would like to sort these results, by totalPrice asc and desc, mileage, firstRegistrationDate…Regards,",
"username": "Robin_J"
},
{
"code": "",
"text": "Actually, this solution is working, but is very long to proceed sometimes (more than 10 sec, and I limit the result to 12 out of 80k)",
"username": "Robin_J"
},
{
"code": "db.collection.explain('executionStats').find(....)$or$or$or",
"text": "Hi @Robin_JAlthough the attribute pattern is a good pattern when you can’t guarantee that all documents follow a certain schema, it is not a silver bullet unfortunately.From Building with Attribute Pattern, the Attribute Pattern is particularly well suited when:The attribute pattern allows you to minimize the number of index you need to create, but if this approach is not performant for your use case, then you might need to combine this with other patterns. Here’s a good article: Building with Patterns: A SummaryThe key to better performance is to check the explain plan output that can show you how the query planner plans to answer the query, and if you run db.collection.explain('executionStats').find(....) it will execute the plan and show you how much time it spends on each stage of the plan, along with how many documents/index keys are scanned and returned. Ideally you want as few documents/index keys scanned vs. returned documents, which means that your query is very targeted, and your indexes are working well.This design step, however, is very use case specific, so the solutions are very customized for your use case. The same schema & index design would not necessarily work well with the same document structure but with different query patterns.There’s a MongoDB University course that specializes in this: M320 Data Modeling, so it may be useful for you.A note about the $or operator though: it works differently compared to other operators, and indexes may need to be optimized specifically with the use of $or in mind. Otherwise you can get a big hit in performance. Please see $or Clauses and Indexes.Best regards\nKevin",
"username": "kevinadi"
},
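A hedged mongosh sketch of the explain check described above, using the attribute-pattern fields from this thread; the collection name is a placeholder.

db.vehicles.find(
  { attributes: { $elemMatch: { key: 'make', value: 9 } } }
).explain('executionStats')
// In the output, compare executionStats.totalKeysExamined and totalDocsExamined
// with executionStats.nReturned: the closer they are, the more targeted the query.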
{
"code": "/* the whole collection */\nmongosh> ap.find()\n> { _id: 1,\n attributes: [ { k: 'make', v: 19155 }, { k: 'model', v: 9 } ] }\n> { _id: 2,\n attributes: [ { k: 'make', v: 9 }, { k: 'model', v: 19155 } ] }\n\n/* the wrong query that matches more documents that is intended */\nmongosh> ap.find( { \"attributes.k\" : \"make\" , \"attributes.v\" : 9 })\n> { _id: 1,\n attributes: [ { k: 'make', v: 19155 }, { k: 'model', v: 9 } ] }\n> { _id: 2,\n attributes: [ { k: 'make', v: 9 }, { k: 'model', v: 19155 } ] }\n\n/* the correct query that only matches the correct make */\nmongosh> ap.find( { \"attributes\" : { \"$elemMatch\" : { \"k\" : \"make\" , \"v\" : 9 }} })\n> { _id: 2,\n attributes: [ { k: 'make', v: 9 }, { k: 'model', v: 19155 } ] }\n",
"text": "With the sample query you shared, I want to emphasis the need to use $elemMatch as expressed by @kevinadi to make your query working correctly.Your query will match too many documents. All document with attributes.key:make and any of the values 9, 19155, 1, 2, 4 will also be matched, which is not what you intend to do.",
"username": "steevej"
},
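Building on the correction above, a hedged sketch of how the earlier multi-criteria query could be rewritten with $elemMatch (and $in in place of the $or lists); values are taken from the example document in this thread and the collection name is a placeholder.

db.vehicles.find({
  $and: [
    { attributes: { $elemMatch: { key: 'make', value: 9 } } },
    { attributes: { $elemMatch: { key: 'model', value: { $in: [19155, 19156, 19157] } } } },
    { attributes: { $elemMatch: { key: 'totalPrice', value: { $lte: 50000 } } } },
    { attributes: { $elemMatch: { key: 'bodyColor', value: { $in: [1, 2, 10] } } } },
    { attributes: { $elemMatch: { key: 'equipments', value: { $all: [4, 50, 20, 130] } } } }
  ]
})

Note that sorting by a specific attribute (e.g. totalPrice) is harder with this layout and may need an aggregation stage.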
{
"code": "",
"text": "Thank you @kevinadi and @steevej for your help.\nIt is much appreciated.I checked the $elemMatch solution, unfortunately the results were still very slow to come.The best solution, unfortunately, was to create a SQL DB only with the values I need for queries and the _id of MongoDB.\nWith the new approach, I have results in around 100ms against 10 to 20 seconds for the previous one.I kept MongoDB to store all other data since data structure is way better than with SQL databases.Thanks !",
"username": "Robin_J"
},
{
"code": "",
"text": "Hi @Robin_J glad you have found a workable solution!If you don’t mind describing your final solution, it will be greatly appreciated. I’m wondering if this is a use case that can be improved, either with a new design pattern, or something else.Thanks!\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
]
| Multiple criteria search / issue query targeting | 2022-10-31T09:54:36.955Z | Multiple criteria search / issue query targeting | 2,852 |
null | [
"node-js",
"containers"
]
| [
{
"code": "",
"text": "Hello community,\nMy data is deleted every day. The mongo server run under docker.\nOn the docker side:The mongo setup is set to persistant data\nvolumes: - mongodata:/data/dbThank you",
"username": "SAR_N_A"
},
{
"code": "",
"text": "do you use docker-compose ? Sounds like you are not persisting the volume. Volumes | Docker Documentation. I think if you do a “docker-compose down -v” when you want to stop the containers it will wipe away the volumes.",
"username": "Robert_Walters"
},
{
"code": "version:\"3.7\"\nservices:\n sar_data:\n image : mongo\n container_name: monogodb\n enviroment:\n - MONGO_INITDB_ROOT_DATABASE=sar-mongo\n volumes:\n - ./init-mngo.js:/doker-entrypoint-initdb.d/init-mongo-js:ro\n - ./mongodb/data:/.. /.. /data/db\n ports:\n - 27017:27017\n \nAnd my init-mongo.js file is :\ndb = dv.getSiblingDB('sar_mongo')\ndb.createUser)\n { \n user: 'root',\n pwd: 'admin',\n roles: [{role: 'readWrite', db : 'sar_mongo'}],\n}\n); \n",
"text": "Hello ,\nThank you for your reply, Yes, I am using docker compose as follows :\nunder the folder :/var/www/ and I havemongo ( folder )\ndocker-compose.yaml\ninit-mongo.js\nThe docker-compose .yml file is ",
"username": "SAR_N_A"
}
]
| Mongodb collection lost after successful build | 2022-11-01T06:58:30.178Z | Mongodb collection lost after successful build | 2,208 |