image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"queries",
"crud"
] | [
{
"code": "const InviteeListSchema = new Schema([\n {\n //add this part later on to link a list to the host\n // it should look up for host ID etc. Make sure it is INT/NUMBER\n // host_id: {},\n ownerId: { type: Schema.Types.ObjectId, ref: \"Customer_Host\", required: true },\n invitees: [\n {\n _id: { type: Schema.Types.ObjectId, required: true, auto: true },\n firstname: { type: String, required: true },\n lastname: { type: String, required: false },\n email: { type: String, required: true },\n //emailConfirm: { type: String, required: true },\n mobile: { type: String, required: false },\n createdAt: { type: Date, required: true, default: Date.now() },\n lastUpdatedAt: { type: Date, required: true, default: Date.now() },\n isDeleted: { type: Boolean, required: true, default: false }\n }\n ],\n createdAt: { type: Date, required: true, default: Date.now() },\n lastUpdatedAt: { type: Date, required: true, default: Date.now() },\n isDeleted: { type: Boolean, required: true, default: false }\n\n }]);\n",
"text": "I have a collection of list that contain invitee information as an array of objects like below:I want to be able to add/push new objects into the invitees array. However, I am having issues getting the newly created object within invitees array.I have tried findandupdate, but returns the entire object. tried updateOne, but doesn’t return the new objectID. I am wondering if there is a one step/elegant way to achieve this? Otherwise it requites at least 2 steps, where the second step would be a find() method to fetch the last array item from invitees.Keen to hear the experts thoughts on this.Thanks in advance",
"username": "illay_senner"
},
{
"code": "",
"text": "Are you passing the optional parameter to return the updated document as opposed to the document before the update?",
"username": "John_Sewell"
},
{
"code": "db.invitee_lists.updateOne({_id:ObjectId(\"64fa8e44cb08d093f33d0a08\")},{$push: {'invitees':{'firstname':'Ilay'}}},{new:true, returnNewDocument: true})\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\n",
"text": "This is what I get from each queries:",
"username": "illay_senner"
},
{
"code": "db.invitee_lists.findOneAndUpdate({_id:ObjectId(\"64fa8e44cb08d093f33d0a08\"),},{$push: {'invitees':{'firstname':'Ilay', \"_id\":new ObjectId()}}},{new:true, returnNewDocument:true})\n_id: ObjectId(\"64fa8e44cb08d093f33d0a08\"),\n ownerId: ObjectId(\"64f845f0f4e46cb7a0467053\"),\n invitees: [\n {\n _id: ObjectId(\"64fa8e44cb08d093f33d0a09\"),\n firstname: 'firstname',\n lastname: 'lastname',\n email: '[email protected]',\n mobile: '0400000111',\n createdAt: 2023-09-08T01:40:51.946Z,\n lastUpdatedAt: 2023-09-08T01:40:51.946Z,\n isDeleted: false\n },\n {\n _id: ObjectId(\"64fa8e44cb08d093f33d0a0a\"),\n firstname: 'Micky,\n lastname: 'Lee',\n email: '[email protected]',\n mobile: '+6140000000',\n createdAt: 2023-09-08T01:40:51.946Z,\n lastUpdatedAt: 2023-09-08T01:40:51.946Z,\n isDeleted: false\n },\n {\n firstname: 'Ilay',\n _id: ObjectId(\"65039d48b6a4eb9cb140d6d9\")\n }\n ],\n createdAt: 2023-09-08T01:40:51.946Z,\n lastUpdatedAt: 2023-09-08T01:40:51.946Z,\n isDeleted: false,\n __v: 0\n}\ntype or paste code here\n",
"text": "andtype or paste code here",
"username": "illay_senner"
},
{
"code": "",
"text": "by trial and error, used projection: { ‘invitees’: { $slice: -1 } } … using any other parameter/element in projection breaks it…thanks for the prompt response, appreciated.",
"username": "illay_senner"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to fetch the _id of the subdocument from array | 2023-09-14T16:21:11.120Z | How to fetch the _id of the subdocument from array | 284 |
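A minimal mongosh sketch of the single-step approach this thread converges on: pre-generate the subdocument _id, $push it, return the post-update document, and project only the last array element. Collection and field names come from the thread; the parent _id is a placeholder.

```javascript
// Generate the invitee _id up front so no second query is needed to learn it.
const inviteeId = new ObjectId();

const updated = db.invitee_lists.findOneAndUpdate(
  { _id: ObjectId("64fa8e44cb08d093f33d0a08") },                 // parent list
  { $push: { invitees: { _id: inviteeId, firstname: "Ilay" } } },
  {
    returnDocument: "after",                  // return the document after the update
    projection: { invitees: { $slice: -1 } }  // keep only the last (newly pushed) invitee
  }
);

// updated.invitees[0]._id equals inviteeId
```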
[] | [
{
"code": "\n if(context.token.custom_data.company){\n return { owner_id: context.token.custom_data.company };\n }\n return {};\n",
"text": "Hello,\nI’m having trouble creating user permissions for charts in my dashboard. I’ve created an authenticated dashboard with Atlas App Services as my Authentication provider. In creating an authentication provider, I’ve made sure that fetch data from app services is toggled on. My rules for my data source is as below:\n\nScreenshot 2023-09-12 at 5.00.51 PM1422×1022 71.4 KB\n\nDespite this, when I login as a user with specific permissions, I get the same output as a user with the no permissions. I’m not sure what I’m doing wrong here. Hope someone can help!Alternatively, I’ve tried to also use an Injected function with this code:But this still displays the chart header - just without the data included which is not what I want.Thank you in advance for your help!",
"username": "Sidhika_Tripathee"
},
{
"code": "return {}custom_data.company",
"text": "Hi @Sidhika_Tripathee -I’m not an expert in how App Services rules should be configured, but if the rules are correct (i.e. they filter data as expected when using the App Services SDK) then they should work for embedded charts too.The injected filter approach should also work. If you are seeing just the chart header and nothing else, that would imply that the function is falling through to the return {} code. Have you tried inspecting your JWT token in jwt.io to ensure it includes the expected value under the custom_data.company path?Tom",
"username": "tomhollander"
}
] | Change Chart Views according to User Roles | 2023-09-12T21:10:23.781Z | Change Chart Views according to User Roles | 345 |
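A hedged sketch of the injected-filter function discussed above, reusing the owner_id and custom_data.company names from the question. The match-nothing fallback (instead of return {}, which matches everything) is an assumption about the desired behaviour when the token carries no company claim.

```javascript
// Injected filter body for the embedded chart (same shape as the snippet in the question).
if (context.token && context.token.custom_data && context.token.custom_data.company) {
  return { owner_id: context.token.custom_data.company };
}
// Assumed fallback: a filter that can never match, so users without a company see no data.
return { _id: { $exists: false } };
```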
|
[
"replication",
"compass",
"containers"
] | [
{
"code": "",
"text": "Hi everyone! I’m new to kubernetes and as I was exploring what I can do with it I end up with this link here Deploying A MongoDB Cluster With Docker | MongoDB. I created this docker-compose.yml derived from the link, but I’m not sure if I’ve done it correctly.\n\nimage757×978 32.1 KB\n\n\nimage1638×451 39.5 KB\n\ncontainers are running just fine and I configured the replica set according to the blog post that I’m following. Everything seems okay, but I’m not sure why on the blog post, volumes was not mentioned at all, anyway as you can see at my docker-compose I added volumes. I tried connecting to port 27017 using mongodb compass and it works, I imported sample data and it seems okay. Now, I tried connecting to port 27018, I got in but I have no access to any database. Is that normal? and also I stopped member1 the default PRIMARY node, I check the rs.status() and member2 became the PRIMARY node. How do I connect to the cluster and automatically connect to the primary node? My experience with Mongodb atlas is not the same and I’m curious how they did ",
"username": "Console0811"
},
{
"code": "",
"text": "I read this https://www.mongodb.com/docs/manual/core/replica-set-members/ that solves one of my question \nMy major problem now is connecting into the primary, like what if port 27018 restarts? A secondary node will be assign as the new primary node. So how do I always connect into the primary node without thinking what port I should connect into?",
"username": "Console0811"
}
] | Using Docker to Deploy a MongoDB Cluster/Replica Set | 2023-09-14T16:33:08.448Z | Using Docker to Deploy a MongoDB Cluster/Replica Set | 396 |
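The usual way to "always hit the primary" is to let the driver do replica set discovery: list the members (or a seed subset) plus the replicaSet name in the connection string instead of connecting to a single port. A sketch with the Node.js driver, assuming the members are published on localhost ports 27017-27019 and the set is named rs0 (adjust to whatever name was used in rs.initiate()).

```javascript
const { MongoClient } = require("mongodb");

// The driver discovers the current primary from any reachable seed member and
// transparently re-routes writes after a failover, so you never pick a port yourself.
const uri =
  "mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0";

async function main() {
  const client = new MongoClient(uri);
  await client.connect();
  await client.db("test").collection("demo").insertOne({ hello: "world" }); // always sent to the primary
  await client.close();
}

main().catch(console.error);
```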
|
null | [
"replication"
] | [
{
"code": "RECOVERING",
"text": "In mongo document ,\nhttps:// www.mongodb.com/docs/manual/reference/replica-states/#mongodb-replstate-replstate.RECOVERINGIt mentions:\nWhen this happens, the member enters the RECOVERING state and requires manual intervention.My case:\nI have three node in cluster for mongo ,and one node is in recovering mode, may I know how to fix it by manual intervention ?now\n‘Thu Sep 14 2023 07:44:06 GMT+0000 (Coordinated Universal Time)’Thanks.",
"username": "Wong_Hung_Bun1"
},
{
"code": "RECOVERINGSECONDARYRECOVERING",
"text": "A member transitions from RECOVERING to SECONDARY after replicating enough data to guarantee a consistent view of the data for client reads.andDue to overload, a secondary may fall far enough behind the other members of the replica set such that it may need to resync with the rest of the set. When this happens, the member enters the RECOVERING state and requires manual intervention.so you need to understand which case this is. if it’s still catching up with primary, then no manual work needed. Otherwise it is already falling behind and the oplog entries on primary are already gone. In that case do the resync.",
"username": "Kobe_W"
}
] | RECOVERING state and requires manual intervention - three nodes | 2023-09-14T08:48:55.200Z | RECOVERING state and requires manual intervention - three nodes | 339 |
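A small mongosh sketch for telling the two cases apart before deciding on a manual resync; member names and the oplog window will differ in your deployment.

```javascript
// Run from mongosh connected to the primary (or any healthy member):

// 1. Current state of every member
rs.status().members.forEach(m =>
  print(`${m.name}  state=${m.stateStr}  optimeDate=${m.optimeDate}`)
);

// 2. How far each secondary is behind the primary
rs.printSecondaryReplicationInfo();

// 3. Oplog window on the primary. If the RECOVERING member's lag is larger than
//    this window, it can no longer catch up on its own; resync it (stop the member,
//    empty its dbPath, restart it so it performs an initial sync).
rs.printReplicationInfo();
```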
null | [
"database-tools",
"backup"
] | [
{
"code": "",
"text": "Hi,\nHave a 2TB MongoDB v4.4.3 running as stand-alone instance on Centos.\nNeed to migrate the databases to Ubuntu.My options are:Minimum downtime is critical.\nIs option 3 (copy dbPath ) supported?\nRegards,\[email protected]",
"username": "Shay_Salomon"
},
{
"code": "",
"text": "Is option 3 (copy dbPath ) supported?physical file copy requires compatibility on many things i believe, if there are some tricks that have been applied on centos but not supported on ubuntu, then it won’t work.That being said, it is very easy to just try it. Otherwise no.1 or 2 can be used.",
"username": "Kobe_W"
}
] | Migrating from Centos to Ubuntu | 2023-09-14T13:48:48.839Z | Migrating from Centos to Ubuntu | 477 |
null | [] | [
{
"code": "\n{\"t\":{\"$date\":\"2023-09-12T19:16:41.306+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22572, \"ctx\":\"MirrorMaestro\",\"msg\":\"Dropping all pooled connections\",\"attr\":{\"hostAndPort\":\"10.10.0.175:27017\",\"error\":\"ConnectionPoolExpired: Pool for 10.10.0.175:27017 has expired.\"}}\n{\"t\":{\"$date\":\"2023-09-12T19:16:44.242+00:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22572, \"ctx\":\"MirrorMaestro\",\"msg\":\"Dropping all pooled connections\",\"attr\":\n",
"text": "Hi mongo geeks,\nI hope you are doing well\nI have storage issue where mongodb opens a lot of connections then they all gets dropped suddenly and I found this log lines,PS. I’m running mongo 4.4\nseeking your usual support ",
"username": "Ahmed_Asim"
},
{
"code": "",
"text": "Why do you think it’s a problem? what bad things are you seeing?",
"username": "Kobe_W"
},
{
"code": "",
"text": "@Kobe_W not sure but is this graph looks normal to you?\nimage (14)1714×790 128 KB\n",
"username": "Ahmed_Asim"
},
{
"code": "",
"text": "this sudden drop in the connections is not something normal from my point of view\n\nimage1340×672 71.7 KB\n",
"username": "Ahmed_Asim"
},
{
"code": "",
"text": "I see similar question on stackoverflow, but no answers.You can try tuning connection pool settings like min or max value. This might also be an internal logic in mongoDb.\nAfter all, Mongodb drivers will take a good care of connection pooling for most use cases by default.I personally wouldn’t spend too much time on this, if i 'm not noticing any other issues from application level (e.g. high latency on requests).",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hi @Ahmed_Asim.Try to observe memory usage during spikes. There may be some correlation, if you could post the graphics it would be interesting.Best!",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "@Kobe_W t’s my question I believe, I posted on stackoverflow also ",
"username": "Ahmed_Asim"
},
{
"code": "",
"text": "anyone knows what are the tcp parameters for the host os that might affect the number of the connections ?\nI mean we might be hitting the limit of the TCP/IP connections",
"username": "Ahmed_Asim"
},
{
"code": "",
"text": "OS related parameters can be comprehensive to tune. e.g. this. You can try mongodb configuration first as that’s easier.",
"username": "Kobe_W"
},
{
"code": "net.maxIncomingConnectionscore file size (blocks, -c) unlimited\ndata seg size (kbytes, -d) unlimited\nscheduling priority (-e) 0\nfile size (blocks, -f) unlimited\npending signals (-i) 126920\nmax locked memory (kbytes, -l) 64\nmax memory size (kbytes, -m) unlimited\nopen files (-n) 1048576\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nreal-time priority (-r) 0\nstack size (kbytes, -s) 8192\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 32768\nvirtual memory (kbytes, -v) unlimited\nfile locks (-x) unlimited\n",
"text": "thanks @Kobe_W any idea which config could be related to this ?\nas I know the net.maxIncomingConnections to limited to the os limits which in my case are :",
"username": "Ahmed_Asim"
},
{
"code": "",
"text": "see this it reached 8.94K then sudden drop which is vey weird !!\n\nimage1688×570 39.3 KB\n",
"username": "Ahmed_Asim"
},
{
"code": "var idleConnections = db.serverStatus().connections.available;\nprint(\"Number of idle connections: \" + idleConnections);\nNumber of idle connections: 837677",
"text": "I suspected the idle connections and look what it found here :Number of idle connections: 837677not sure why all of this connections are idle and how to check it ?",
"username": "Ahmed_Asim"
},
{
"code": "",
"text": "if your deployment is big or you have many connections from those drivers, 837k might be fine. It only means at the time you run that query, those connections are idle.did you try setting a max connection number for the pools from driver side?\ntry using a very small number and then see if anything changes on the server side numbers.i have no experience with atlas, so i don’t know if those numbers are aggregated from all nodes or not.",
"username": "Kobe_W"
}
] | Sudden drop in mongodb connections as ConnectionPoolExpired | 2023-09-12T19:29:39.397Z | Sudden drop in mongodb connections as ConnectionPoolExpired | 525 |
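A sketch of the driver-side pool knobs mentioned above, shown for the Node.js driver (equivalent URI options exist for other drivers). The host and the numbers are placeholders to experiment with, not recommendations.

```javascript
const { MongoClient } = require("mongodb");

const client = new MongoClient("mongodb://db-host:27017", {
  maxPoolSize: 50,          // cap concurrent connections per host (driver default is 100)
  minPoolSize: 5,           // keep a few connections warm instead of churning
  maxIdleTimeMS: 60000,     // close connections idle for more than 60s
  waitQueueTimeoutMS: 5000  // fail fast instead of queueing checkout requests forever
});
```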
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "ValidationError: <MySchema> validation failed: name: Cast to String failed for value \"some-string\" (type string) at path \"name\" because of \"TypeError\"var MySchema = {\n ...\n name: { type: String, get: encryption.decrypt, set: encryption.encrypt },\n ...\n}\n{ typeKey: '$type' }var validateBeforeSave = process.env.NODEJS_ENV !== \"test-ci\";\nMySchema.set(\"validateBeforeSave\", validateBeforeSave);\n",
"text": "I’m getting this error: ValidationError: <MySchema> validation failed: name: Cast to String failed for value \"some-string\" (type string) at path \"name\" because of \"TypeError\" while running my unit tests in GitHub Actions. Obviously, that’s weird since it gets a string for casting so I really can’t see why it doesn’t work…\nSchema is along:This does not reproduce locally nor there’s this problem in our testing/production environments.I’ve tried many things, like adding the { typeKey: '$type' } but it’s not what I really want, since my key should be automatically used for type.\nI’ve worked around this by using:But then I miss this functionality in our tests.Mongoose version: 6.10.1\nMongo version: 5.0.6Thanks for any help!",
"username": "Moshe_Azaria"
},
{
"code": "",
"text": "Hi Moshe,As your problem is with mongoose I believe that the better place to you ask this question is directly on his issues. Here you will find a lot of people using mongoDB but probably using mongo driver.So I recommend you to take a look at mongoose GitHub issues.",
"username": "Jennysson_Junior"
}
] | Why does Mongoose throws ValidationError when running test in CI although the type declaration is OK? | 2023-09-14T08:24:05.072Z | Why does Mongoose throws ValidationError when running test in CI although the type declaration is OK? | 362 |
null | [
"app-services-user-auth"
] | [
{
"code": "const auth: RequestHandler = async function(req, res, next) {\n\tconst authorization = req.headers.authorization;\n\n\tif (!authorization) {\n\t\treturn res.status(401).json({ message: \"No authorization header\" });\n\t}\n\n\tconst app = new Realm.App({ id: process.env.REALM_APP_ID as string });\n\tconst credentials = Realm.Credentials.jwt(authorization);\n\n\ttry {\n\t\tconst user = await app.logIn(credentials);\n\t\treq.user = user;\n\t} catch (err: any) {\n\t\treturn res.status(401).json({ message: err.message });\n\t}\n\n\tnext();\n};\n",
"text": "I’m looking for a way to handle authentication on a websocket/express server with realm-web. What I’m thinking is using JWT authentication in a piece of middleware, but not sure if that is the best way to handle it. I could use stateless auth and create an app instance for each request, but that seems a bit heavy.\nI’m not very experienced with handling user authentication on a server like this, so any guidance is very much appreciated.Here’s my tentative code",
"username": "Jonathan_Roley"
},
{
"code": "",
"text": "Or is there possibly a more low-level setup where I can just verify the JWTs (user accessToken and refreshToken) on the server without even using the SDK?",
"username": "Jonathan_Roley"
}
] | Realm Web authentication on a server | 2023-09-11T02:11:57.202Z | Realm Web authentication on a server | 374 |
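If the token the client forwards is a custom JWT that you sign yourself (the custom JWT authentication provider case), it can be verified in middleware with an off-the-shelf JWT library instead of creating a Realm app per request. A hedged sketch using the jsonwebtoken package; the signing key and algorithm must match whatever the provider is configured with, and App Services-issued access/refresh tokens are a different case.

```javascript
const jwt = require("jsonwebtoken");

// Hypothetical shared secret; must be the same key used to sign the custom JWTs.
const JWT_SIGNING_KEY = process.env.JWT_SIGNING_KEY;

function auth(req, res, next) {
  const token = req.headers.authorization;
  if (!token) {
    return res.status(401).json({ message: "No authorization header" });
  }
  try {
    // Throws if the signature is invalid or the token has expired.
    req.user = jwt.verify(token, JWT_SIGNING_KEY);
    next();
  } catch (err) {
    res.status(401).json({ message: err.message });
  }
}
```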
null | [
"queries",
"android",
"kotlin"
] | [
{
"code": "object RealmManager {\n var realm: Realm? = null\n\n @OptIn(DelicateCoroutinesApi::class)\n fun initRealm(context: Context) {\n val config = RealmConfiguration.Builder(\n schema = setOf(TestItem::class, Test2Item::class))\n .name(\"database\")\n .schemaVersion(1) // Set the new schema version to 2\n .migration(AutomaticSchemaMigration {\n /*it.enumerate(className = \"Person\") { oldObject: DynamicRealmObject, newObject: DynamicMutableRealmObject? ->\n newObject?.run {\n // Change property type\n set(\n \"_id\",\n oldObject.getValue<ObjectId>(fieldName = \"_id\").toString()\n )\n // Merge properties\n set(\n \"fullName\",\n \"${oldObject.getValue<String>(fieldName = \"firstName\")} ${oldObject.getValue<String>(fieldName = \"lastName\")}\"\n )\n // Rename property\n set(\n \"yearsSinceBirth\",\n oldObject.getValue<String>(fieldName = \"age\")\n )\n }\n }*/\n })\n .build()\n realm = Realm.open(config)\n }\n\n @JvmStatic\n fun getRealmInstance(): Realm {\n return realm ?: throw IllegalStateException(\"Realm not initialized. Call initRealm() first.\")\n }\n\n fun getNextId(clazz: KClass<out TypedRealmObject>): Long {\n val maxId: RealmScalarNullableQuery<Long> = realm!!.query(clazz).max(\"id\")\n return maxId.find()?.plus(1) ?: 0 // Falls keine Einträge vorhanden sind, starte bei 0\n }\n\n @OptIn(DelicateCoroutinesApi::class)\n inline fun <reified T : RealmObject> addItem(item: T) {\n realm?.writeBlocking {\n val nextId = getNextId(T::class)\n if (item is TestItem) {\n item.id = nextId\n } else if (item is Test2Item) {\n item.id = nextId\n }\n try {\n copyToRealm(item)\n } catch (e: Exception) {\n println(\"error: \"+e.message)\n }\n }\n }\n\n @JvmStatic\n fun getTestItems(fType: Int): RealmResults<TestItem> {\n return realm!!.query<TestItem>(\"fType == $0\", fType).find()\n }\n\n\n @JvmStatic\n fun getTest2Items(fType: Int): RealmResults<Test2Item> {\n return realm!!.query<Test2Item>(\"fType == $0\", fType).find()\n }\n}\n\n........................................................................\n\nopen class TestItem() : RealmObject {\n @PrimaryKey\n var id: Long = 0\n var name: String = \"\"\n var owner_id: String = \"\"\n var fType: Int = 0\n constructor(ownerId: String = \"\") : this() {\n owner_id = ownerId\n }\n}\n..........................................................\n..........................................................\nvar testItems: RealmResults<TestItem>? = null\ntestItems = getTestItems(1)\nid(\"io.realm.kotlin\") version \"1.10.0\" apply false\n",
"text": "Hello everyone,I’m working on an Android app using Kotlin and Realm as my database. One of the issues I’m facing is that my RealmResults list is not auto-updating when the underlying data changes. I expected the list to be “live” and reflect any changes made to the database automatically.Here’s a simplified version of my code:Here is the variable that contains the list:Here is the Realm version I’m using:What could be the reason that my list is not updating automatically when a new entry is saved to the Realm database?Thank you in advance for the help. Regards.",
"username": "Fal_Ko"
},
{
"code": "",
"text": "Can’t anyone help me?\nI found realm-android much better and easier, but now I’m having problems integrating it into the new Android Studio, so I’m now using realm-kotlin.\nI simply can’t make any more progress and will be forced to switch back to SQLite if realm doesn’t work. ",
"username": "Fal_Ko"
},
{
"code": "",
"text": "What could be the reason that my list is not updating automaticallyHow do you know the list is not updating? If you’re not watching for changes, the data may change and you would never know it.I may be a good idea to review the Getting Started Guide React To Changes as that covers how to add an observer to the data so the app is notified of events when the underlying data changes - to which the UI can then be updated to reflect that change.Maybe you’ve implemented that but omitted it from the question?",
"username": "Jay"
},
{
"code": "val job = CoroutineScope(Dispatchers.Default).launch {\n val testFlow = DataHolder.testItems!!.asFlow()\n val testSubscription = testFlow.collect { changes: ResultsChange<TestItem> ->\n\n when (changes) {\n is UpdatedResults -> {\n if (changes.insertions.isNotEmpty()) {\n withContext(Dispatchers.Main) {\n DataHolder.testItems = changes.list\n adapter!!.updateItems(DataHolder.testItems!!)\n adapter?.let {\n it.notifyItemInserted(DataHolder.testItems!!.size-1)\n } ?: run {\n println(\"Adapter is null.\")\n }\n }\n }\n if (changes.deletions.isNotEmpty()) {\n withContext(Dispatchers.Main) {\n DataHolder.testItems = changes.list\n adapter!!.updateItems(DataHolder.testItems!!)\n val deletedIndexes = changes.deletions.sortedDescending()\n DataHolder.testItems = changes.list\n for (index in deletedIndexes) {\n adapter?.notifyItemRemoved(index)\n }\n }\n }\n }\n else -> {\n // types other than UpdatedResults are not changes -- ignore them\n }\n }\n }\n }\n",
"text": "Thank you for the message.\nI recently did it this way and accordingly updated my list:That already works with adding and deleting the items.\nHowever, I had to reassign the updated list to the adapter so that it always gets updated in the RecyclerView.Regards",
"username": "Fal_Ko"
}
] | RealmResults Not Auto-Updating in Kotlin Android App | 2023-09-07T13:38:01.715Z | RealmResults Not Auto-Updating in Kotlin Android App | 433 |
null | [
"swift"
] | [
{
"code": "import SwiftUI\nimport RealmSwift\n\nclass MyObject: Object, Identifiable {\n @Persisted var id = UUID().uuidString\n @Persisted var text:String = String(\"abcdefghijklmnopqrstuvwxyz\".randomElement()!)\n}\n\nstruct ContentView: View {\n let realm = try! Realm()\n @State var obj: MyObject? = nil\n @ObservedResults(MyObject.self) var items\n \n var body: some View {\n VStack {\n Text(\"Selected ID: \\(obj?.text ?? \"<none>\")\")\n Text(\"Add\")\n .onTapGesture {\n try! realm.write {\n let o = MyObject()\n realm.add(o)\n }\n }\n if items.count > 0 {\n TabView(selection: $obj) {\n ForEach(items, id: \\.self) { t in\n Text(\"\\(t.text)\")\n .tag(t as MyObject?)\n }\n }\n .tabViewStyle(.page(indexDisplayMode: .never))\n }\n }\n .frame(maxWidth: .infinity, maxHeight: .infinity)\n .background(.orange)\n }\n}\n",
"text": "Hi guys,having a really annoying issue right now. I have a SwiftUI TabView, with a selection, that gets reset if the realm changes. I made a tiny example:You can swipe right a few times, and then tap on “add”, which just adds another object to the collection. The TabView will reset to the first item of the list, while the selected object still is at the one you previously swiped to (see Text String at top).This does not happen when using Integer for a selection, and work around this with setting IDs as tag etc., which can make things a lot more complex in other situations obviously.Is there any way to avoid / conquer this behavior? I would REALLY like to avoid having to use Integers and Array-Indices here.Thanks",
"username": "Lila_Q"
},
{
"code": " .onTapGesture {\n try! realm.write {\n let o = MyObject()\n realm.add(o)\n obj = o //<<<<<<<<<\n } \n }",
"text": "Try to add:",
"username": "olegmoseyko1978"
}
] | TabView resets when Realm changes? | 2022-11-21T02:14:02.038Z | TabView resets when Realm changes? | 1,485 |
null | [
"aggregation",
"queries",
"atlas-cluster",
"atlas-search"
] | [
{
"code": " {\n \"_id\": \"artificialtree\",\n \"search_term\": \"artificial tree\",\n \"normalized_term\": \"aaceefiiilrrtt\",\n \"v1_score\": 0.544350665956381\n }\n [\n {'$search': {'index': 'default',\n 'autocomplete': {'query': 'Book', 'path': 'search_term', 'tokenOrder': \n 'sequential','fuzzy':{'maxEdits':2}}}},\n \n {\"$group\": {\n \"_id\": '$normalized_term',\n \"data\": {\n \"$max\": {\n \"score\": \"$v1_score\",\n \"search_term\": \"$search_term\"\n }\n }\n }},\n {'$sort':{'data.score': -1}},\n {'$limit':10},\n {'$project': {'_id':0, 'data.search_term' : 1}}\n ]\n\"normalized_term\"\"v1_score\"{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$_internalSearchMongotRemote\": {\n \"mongotQuery\": {\n \"index\": \"default\",\n \"autocomplete\": {\n \"query\": \"Book\",\n \"path\": \"search_term\",\n \"tokenOrder\": \"sequential\",\n \"fuzzy\": {\n \"maxEdits\": 2\n }\n }\n },\n \"explain\": {\n \"type\": \"BooleanQuery\",\n \"args\": {\n \"must\": [\n {\n \"type\": \"MultiTermQueryConstantScoreWrapper\",\n \"args\": {\n \"queries\": [\n {\n \"type\": \"DefaultQuery\",\n \"args\": {\n \"queryType\": \"AutomatonQuery\"\n },\n \"stats\": {\n \"context\": {\n \"nanosElapsed\": 0\n },\n \"match\": {\n \"nanosElapsed\": 0\n },\n \"score\": {\n \"nanosElapsed\": 0\n }\n }\n }\n ]\n },\n \"stats\": {\n \"context\": {\n \"nanosElapsed\": 5648467,\n \"invocationCounts\": {\n \"createWeight\": 1,\n \"createScorer\": 24\n }\n },\n \"match\": {\n \"nanosElapsed\": 42304,\n \"invocationCounts\": {\n \"nextDoc\": 1009\n }\n },\n \"score\": {\n \"nanosElapsed\": 38731,\n \"invocationCounts\": {\n \"setMinCompetitiveScore\": 8,\n \"score\": 1001\n }\n }\n }\n }\n ],\n \"mustNot\": [],\n \"should\": [\n {\n \"type\": \"TermQuery\",\n \"args\": {\n \"path\": \"search_term\",\n \"value\": \"book\"\n },\n \"stats\": {\n \"context\": {\n \"nanosElapsed\": 13145,\n \"invocationCounts\": {\n \"createWeight\": 1,\n \"createScorer\": 8\n }\n },\n \"match\": {\n \"nanosElapsed\": 0\n },\n \"score\": {\n \"nanosElapsed\": 0\n }\n }\n }\n ],\n \"filter\": [],\n \"minimumShouldMatch\": 0\n },\n \"stats\": {\n \"context\": {\n \"nanosElapsed\": 5702139,\n \"invocationCounts\": {\n \"createWeight\": 1,\n \"createScorer\": 16\n }\n },\n \"match\": {\n \"nanosElapsed\": 115013,\n \"invocationCounts\": {\n \"nextDoc\": 1009\n }\n },\n \"score\": {\n \"nanosElapsed\": 112075,\n \"invocationCounts\": {\n \"setMinCompetitiveScore\": 8,\n \"score\": 1001\n }\n }\n }\n }\n },\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 9\n },\n {\n \"$_internalSearchIdLookup\": {},\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 9\n },\n {\n \"$group\": {\n \"_id\": \"$normalized_term\",\n \"data\": {\n \"$max\": {\n \"score\": \"$v1_score\",\n \"search_term\": \"$search_term\"\n }\n }\n },\n \"maxAccumulatorMemoryUsageBytes\": {\n \"data\": 0\n },\n \"totalOutputDataSizeBytes\": 0,\n \"usedDisk\": false,\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 9\n },\n {\n \"$sort\": {\n \"sortKey\": {\n \"data.score\": -1\n },\n \"limit\": 10\n },\n \"totalDataSizeSortedBytesEstimate\": 0,\n \"usedDisk\": false,\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 9\n },\n {\n \"$project\": {\n \"data\": {\n \"search_term\": true\n },\n \"_id\": false\n },\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 9\n }\n ],\n \"serverInfo\": {\n \"host\": \"cluster0-shard-00-01.gpz1o.mongodb.net\",\n \"port\": 27017,\n \"version\": \"5.0.9\",\n \"gitVersion\": \"6f7dae919422dcd7f4892c10ff20cdc721ad00e6\"\n },\n 
\"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 16793600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 33554432,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"command\": {\n \"aggregate\": \"product\",\n \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"default\",\n \"autocomplete\": {\n \"query\": \"Book\",\n \"path\": \"search_term\",\n \"tokenOrder\": \"sequential\",\n \"fuzzy\": {\n \"maxEdits\": 2\n }\n }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$normalized_term\",\n \"data\": {\n \"$max\": {\n \"score\": \"$v1_score\",\n \"search_term\": \"$search_term\"\n }\n }\n }\n },\n {\n \"$sort\": {\n \"data.score\": -1\n }\n },\n {\n \"$limit\": 10\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"data.search_term\": 1\n }\n }\n ],\n \"cursor\": {},\n \"$db\": \"pro_data\"\n },\n \"ok\": 1,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1658657083,\n \"i\": 11\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"HpXa/pD3Fyy2UgOluvUFCKsBhpI=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": 7072407648573850000\n }\n },\n \"operationTime\": {\n \"$timestamp\": {\n \"t\": 1658657083,\n \"i\": 10\n }\n }\n}\n",
"text": "Hello, Guys, I am a beginner and I am trying to build an autocomplete feature.\nMy Document looks like this.And my aggregation pipeline is.Here I am trying to achieve is that all documents that are returned by the search stage I want to get the top ten documents where if the \"normalized_term\" of two documents matches then I want the document with maximum \"v1_score\"\nWhat are the improvements I can do I my Pipeline to increase the execution time ?\nHere are the executionStats.",
"username": "Manhar_N_A"
},
{
"code": "",
"text": "Hi!\nDid you find any solution to this?\nI am trying to figure out why there is no index information for my aggregate, and it looks like you had the same problem?Regards\nMats",
"username": "Mats_Rydgren"
}
] | Help me optimising/building a pipeline for best performance | 2022-07-24T10:07:54.294Z | Help me optimising/building a pipeline for best performance | 1,613 |
null | [
"queries"
] | [
{
"code": "",
"text": "Hello good morning.Could you please help me, I need to convert the values of a numeric field to a string, massively in a collection.Example:idEmpleado: NumberLong(22)\nto\nidEmpleado: “22”When idEmpleado has different valuesBeforehand thank you very much.",
"username": "Blanca_Edith_Tlacomulco_Moncada"
},
{
"code": "",
"text": "Update statement in aggregate form to use the $toString:Mongo playground: a simple sandbox to test and share MongoDB queries onlineYou may want to work in batches if you need to limit server usage while it’s running.",
"username": "John_Sewell"
}
] | Data conversion | 2023-09-14T14:46:04.780Z | Data conversion | 325 |
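A sketch of the pipeline-form update referred to above, written for mongosh; the collection name is an assumption.

```javascript
// Convert idEmpleado from a number (e.g. NumberLong(22)) to its string form, in place.
db.empleados.updateMany(
  { idEmpleado: { $type: "number" } },                          // only documents still holding a numeric value
  [ { $set: { idEmpleado: { $toString: "$idEmpleado" } } } ]    // pipeline form is what enables $toString
);
```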
null | [
"data-modeling"
] | [
{
"code": "",
"text": "I’m fairly new to MongoDB, so not sure if this easily fits into a data modeling pattern.We have a collection of contracts (essentially construction work to be performed), and for each contract, we need to, in our application, creating a cascading list of values.For example:I’m trying to figure out the best way to model the relationship between the potential values in each drop down, for each contract. Is there a modeling pattern, or other design pattern that others have used, that has worked well for this? To note, the possible list of values in the 3rd drop down, could be in the thousands - this is why we have the first and second drop down, to filter to that specific list.I couldn’t find a topic that seemed similar to this, but if this question has been posed before, happy to research the responses given there.thank you,",
"username": "Amy_Malecha-Kedrowski"
},
{
"code": "",
"text": "Hi, @Amy_Malecha-Kedrowski,\nThat doesn’t appear to be a significant issue for a database. Perhaps you could consider using an index on the region and grid field. This would allow for efficient filtering of the desired circuits.",
"username": "Jack_Yang1"
},
{
"code": "",
"text": "Thank you for your response! That makes sense, the thing I’m not sure of, is how to model the relationships in the database; so in the schema, somehow indicating that a particular circuit resides within a specific grid & region. There would be multiple circuits that reside in the same grid and region. I’m not sure if I create a document for every circuit, or there is a way to put multiple circuits in the same document, or if all of this can be embedded somehow into the contract collection.",
"username": "Amy_Malecha-Kedrowski"
},
{
"code": "",
"text": "For your task, I think both approaches will work well in MongoDB. You can add multiple circuits in one document by using an array field. Just make sure the document doesn’t go over the 16MB limit set by MongoDB.",
"username": "Jack_Yang1"
},
{
"code": "",
"text": "Perfect, thank you for your help.",
"username": "Amy_Malecha-Kedrowski"
}
] | Creating a hierarchy of associated values | 2023-09-12T22:22:05.369Z | Creating a hierarchy of associated values | 358 |
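A small sketch of the one-document-per-circuit approach discussed above: each circuit carries its region and grid, a compound index keeps the filters cheap, and distinct() feeds each drop-down. Collection and field names are assumptions based on the thread.

```javascript
// One document per circuit, tagged with where it lives:
db.circuits.insertOne({ region: "West", grid: "Grid-12", circuit: "CKT-004512" });

// Compound index so the cascading filters and distinct() calls are cheap:
db.circuits.createIndex({ region: 1, grid: 1, circuit: 1 });

// Drop-down 1: all regions
db.circuits.distinct("region");

// Drop-down 2: grids within the chosen region
db.circuits.distinct("grid", { region: "West" });

// Drop-down 3: circuits within the chosen region and grid
db.circuits.distinct("circuit", { region: "West", grid: "Grid-12" });
```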
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "I am building an real time application website that displaying the data from mongodb using Mongoose, I had a backend nodejs system to update the database frequently, and my current approach is to query the database every second, after doing some filtering and calculation, then display to the frontend.Is there any way to open an websocket or any feature in alias can help me reduce the amount of request while fetching the database frequently like every second, to get the update of the current values in Mongodb?Many thanks.",
"username": "WONG_TUNG_TUNG"
},
{
"code": "",
"text": "If I understand your Use-Case correctly you want to display Updates to Documents within your Database\nmade by your Node.js backend directly inside your Application.You could use the changeStream Feature for that. Inside your Application you can listen on multiple Collections for changes and display them accordingly.See here for more Information: https://www.mongodb.com/docs/manual/changeStreams/I hope that helps!",
"username": "NiklasB"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to extract real time data without requesting to Mongodb every second | 2023-09-14T04:47:56.669Z | How to extract real time data without requesting to Mongodb every second | 319 |
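A minimal Node.js sketch of the change stream approach suggested above; the server pushes every change to you, so there is no per-second polling. Database/collection names and the WebSocket broadcast are placeholders.

```javascript
const { MongoClient } = require("mongodb");

async function main() {
  const client = await MongoClient.connect(process.env.MONGODB_URI);
  const collection = client.db("mydb").collection("readings");

  // Requires a replica set (or Atlas); yields an event for every write to the collection.
  const changeStream = collection.watch([], { fullDocument: "updateLookup" });

  for await (const change of changeStream) {
    // change.operationType: "insert" | "update" | "delete" | ...
    // change.fullDocument: the document after the change (for inserts/updates)
    // e.g. push it to connected WebSocket clients here instead of logging.
    console.log(change.operationType, change.fullDocument);
  }
}

main().catch(console.error);
```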
null | [
"node-js",
"mongodb-shell"
] | [
{
"code": "",
"text": "Current Mongosh Log ID: 6502b9718661819c4a199cf2Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.3MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017for several times I am getting this error, and reinstalling mongodb, I am using the local mongodb version, which shows the error",
"username": "Foxiom_Devs"
},
{
"code": "",
"text": "What OS are you running and have you verified that the server is running locally?I’m assuming you installed the server package as well as the mongosh package?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Yes, actually the db worked fine till today and suddenly this happened",
"username": "Foxiom_Devs"
},
{
"code": "",
"text": "Ok, what do the server logs say then? Does it show a crash reason?",
"username": "John_Sewell"
}
] | Database Crashes | 2023-09-14T08:02:46.449Z | Database Crashes | 303 |
[
"manila-mug"
] | [
{
"code": "",
"text": "\nMUG-Manila1920×1080 197 KB\nMark your calendars for the first-ever MongoDB Meetup in Manila! Additional information about the sessions, speakers, and locations will be announced shortly.The session will talk more about utilizing MongoDB for your production workload, the experiences, and the good and the bad with regards to it.The meetup will be in the 15th Floor, AWS Office BGC (Arthaland Building).To get meetup updates and connect with other MongoDB users, join the newly created Facebook Group: https://bit.ly/manilamugRSVP to stay updated! To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.\nScreenshot 2023-07-28 at 7.30.41 AM1694×270 39.8 KB\n",
"username": "Matthew_Seaver_Choy"
},
{
"code": "",
"text": "Hi, this is Randy Bardaje from Cloud Users Philippines, we are a Social Media Community that helps IT Professionals, IT Educators and Students to level up their skills in Cloud Technology and Applications. I would like to inquire, how can we be a Community Partner for this upcoming MongoDB Inaugural Meetup in Manila? you may contact me via email [email protected] or via mobile phone +639473552227.",
"username": "Randy_Bardaje"
},
{
"code": "",
"text": "Hey @Randy_Bardaje!\nI will connect you with our community leaders in Manila over the email and then you can take up the conversation with them. ",
"username": "Harshit"
},
{
"code": "",
"text": "Hey everyone!Hi guys, I’m Seaver, currently the Associate Director of Engineering for edamama. We’ve utilized MongoDB as our main database for our e-commerce platform.We hope to see you there so we can discuss and collaborate on the best practices and use cases when it comes to MongoDB as a NoSQL Database.",
"username": "Matthew_Seaver_Choy"
},
{
"code": "",
"text": "Thank you for initiating a meetup event in Manila about MongoDB. I think a lot of developers in the Philippines need more exposure on different database engines other that the usual mysql. Looking forward to this event. Cheers, Juvar.",
"username": "juvarabrera"
},
{
"code": "",
"text": "Hi there, I am a Technical Trainer for a Full Stack Web Dev in one of the company in BGC. MongoDB and Atlas is part of the topics in my training. I am excited for this event to cross elbows and exchange ideas with the community. See you there! Cheers",
"username": "Ryan_Azur"
},
{
"code": "",
"text": "Hey everyone!We’re still looking for another speaker who would be willing to share more about their expertise in MongoDB.We’d love to have you there if you want to speak on the event. I’ll be speaking for the first part of the event.Send me an email, or send me a message on this platform. Thank you!",
"username": "Matthew_Seaver_Choy"
},
{
"code": "",
"text": "Hello everyone!Sharing with you all the updated poster with the speakers and topics! We’ll be posting more updates in the coming days!In the mean time, be sure to join the Manila MongoDB User Group Facebook Group via Manila MongoDB User Group | Facebook.\nSept 14 - Manila MUG Meetup1080×1080 188 KB\n",
"username": "haifacarina"
},
{
"code": "",
"text": "Should I have an email from you guys in order to totally received my slot :< Because atm I didn’t receive any emails.",
"username": "Francisco_Dwayne_Panganiban"
},
{
"code": "",
"text": "No need! You can register on the event itself.See you all!",
"username": "Matthew_Seaver_Choy"
}
] | Manila MUG: Inaugural Meetup - Sept 14th @AWS Manila | 2023-07-28T06:13:09.417Z | Manila MUG: Inaugural Meetup - Sept 14th @AWS Manila | 3,379 |
|
null | [
"sydney-mug"
] | [
{
"code": "",
"text": "Hi MongoDB Users,We’re excited to announce an in-person meetup session scheduled for Thursday 21st September 2023. This is a FREE event.The agenda for the evening would be:The talk sessions are still to be confirmed.Join us to learn & share more about MongoDB. Please RSVP if you plan to attend so we can plan for appropriate food & beverage supplies.If you’d like to present a talk or demo tech related to MongoDB (or suggest topics of interest) please contact the organisers.https://www.mongodb.com/community/forums/sydney-mugSee you on the event!",
"username": "wan"
},
{
"code": "",
"text": "Hi Sydney MongoDB User Groups,We have more details for the talks for the evening next Thursday 21st SeptemberThe first session talk is Effervescent Efficiency: IoT Solutions for Breweries and BeyondThis talk would be presented by Brad Rocheleau. He is a hands-on engineering leader with a focus on turning data from distributed sensor networks into elegant decision support solutions. Based in Sydney, New South Wales, Brad currently serves as the Head of Product and Engineering at Konvoy Kegs, where he leads the strategic direction and delivery of Konvoy Cloud, a B2B beverage logistics platform. Konvoy’s industry first solution connecting over 120,000 IoT equipped kegs helps optimise supply chains, increase asset utilisation, and unlocks previously unattainable insights for producers of all shapes and sizes.The second session talk is Advancing Data Visualisation with Atlas ChartsThis talk would be presented by @Avinash_Prasad , Senior Product Manager, MongoDB ChartsMongoDB Charts is a tool to create visual representations of your MongoDB data. Data visualisation is a key component to providing a clear understanding of your data, highlighting correlations between variables and making it easy to discern patterns and trends within your dataset. MongoDB Charts makes communicating your data a straightforward process by providing built-in tools to easily share and collaborate on visualisations.RSVP Now and see you there!Regards,\nWan",
"username": "wan"
}
] | Sydney MongoDB User Group Meetup September 2023 | 2023-08-14T22:20:02.737Z | Sydney MongoDB User Group Meetup September 2023 | 1,518 |
null | [
"dot-net"
] | [
{
"code": "BsonDefaults.GuidRepresentationMode = GuidRepresentationMode.V3;\nBsonSerializer.RegisterSerializer(new GuidSerializer(GuidRepresentation.Standard));\nvar filter = Builders<BsonDocument>.Filter.In(\"externalId\", new List<dynamic> { Guid.Parse(\"6cd6f392-8271-49bb-8564-e584ddf48890\"), Guid.Parse(\"c7b1ebaf-4ac1-4fe0-b066-1282e072585a\") });var filter = Builders<BsonDocument>.Filter.In(\"externalId\", new List<Guid> { Guid.Parse(\"6cd6f392-8271-49bb-8564-e584ddf48890\"), Guid.Parse(\"c7b1ebaf-4ac1-4fe0-b066-1282e072585a\") });",
"text": "I’m using MongoDB.Driver 2.21.0 in my .NET 7 API. I am trying to use reflection to dynamically return data. For the most part, it works great. I’ve been able to dynamically query on strings and ints without issue. But, when I query with Guids, I start getting errors.In my Program.cs, I have the following code:Although my code is doing things dynamically, I’ve hardcoded a filter in to narrow down the issue. If I use this filter:\nvar filter = Builders<BsonDocument>.Filter.In(\"externalId\", new List<dynamic> { Guid.Parse(\"6cd6f392-8271-49bb-8564-e584ddf48890\"), Guid.Parse(\"c7b1ebaf-4ac1-4fe0-b066-1282e072585a\") });I get this error:MongoDB.Bson.BsonSerializationException: GuidSerializer cannot serialize a Guid when GuidRepresentation is Unspecified.Even though it is specified in the Program.cs. But, if I make the List, it works perfectly fine and returns the expected data.var filter = Builders<BsonDocument>.Filter.In(\"externalId\", new List<Guid> { Guid.Parse(\"6cd6f392-8271-49bb-8564-e584ddf48890\"), Guid.Parse(\"c7b1ebaf-4ac1-4fe0-b066-1282e072585a\") });Since, in my code requires dynamic since I don’t know the type until runtime, is there a suggested work-around for this?",
"username": "Steven_Rothwell"
},
{
"code": "externalIddynamicdynamic",
"text": "Hey @Steven_Rothwell, it looks like you’re using externalId to store values of different data types, which is why you’re resorting to using dynamic. Feel free to correct me if I’m mistaken. If my deduction is correct, could you consider modifying the schema to have multiple arrays, each with a specific data type? This might help you avoid using the dynamic type.Thanks,\nMahi",
"username": "Mahi_Satyanarayana"
},
{
"code": "In(field, value)",
"text": "Hey @Mahi_Satyanarayana, sorry yeah, I didn’t make it clear. The field name sent to the In(field, value) is not hardcoded in the code either. It figures out the name of the property and that is sent as the field, which is a string. The value is a string, but is converted to whatever type the property is. So, that’s why value is a dynamic type. It works just fine for all other types, except Guids.",
"username": "Steven_Rothwell"
},
{
"code": "dynamic value = Guid.Parse(\"6cd6f392-8271-49bb-8564-e584ddf48890\")List<dynamic> value = new List<dynamic> { Guid.Parse(\"6cd6f392-8271-49bb-8564-e584ddf48890\"), Guid.Parse(\"c7b1ebaf-4ac1-4fe0-b066-1282e072585a\") }",
"text": "Basically, it boils down to, the C# driver can handle dynamic value = Guid.Parse(\"6cd6f392-8271-49bb-8564-e584ddf48890\"), but it cannot handle List<dynamic> value = new List<dynamic> { Guid.Parse(\"6cd6f392-8271-49bb-8564-e584ddf48890\"), Guid.Parse(\"c7b1ebaf-4ac1-4fe0-b066-1282e072585a\") }.",
"username": "Steven_Rothwell"
},
{
"code": "",
"text": "This issue is currently being looked into. There is a workaround here: https://jira.mongodb.org/browse/CSHARP-4784.",
"username": "Steven_Rothwell"
}
] | Dynamic variables are causing issues | 2023-09-04T02:24:31.913Z | Dynamic variables are causing issues | 519 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "I am trying to select every document in a list of collections in my database where it has a particular field value",
"username": "Precious_Affiah"
},
{
"code": "",
"text": "What have you tried so far?",
"username": "John_Sewell"
},
{
"code": "const collections = [\"Collection1\", \"Collection2\", \"Collection3\", \"Collection4\"];\nfor (const collectionName of collections) {\n const documents = await collectionName .deleteMany({itemId: itemId})\n}\n",
"text": "const itemId = “item_id”;but i do not want to use a loop because of performance, 50 users could be requesting the same thing at the same time",
"username": "Precious_Affiah"
},
{
"code": "",
"text": "I’m not sure that’s possible then, running a delete at a database level. How unique is itemID? I assume it’s indexed, have you done any testing to verify your scenario of if this will bottleneck if it’s called 50 times simultaniously?I assume you’re running node, why not call deletes on all collections at the same time and then wait for them all to complete?",
"username": "John_Sewell"
},
{
"code": "",
"text": "The time spent looping over the list of collections on your client code will be insignificant compared to the work that has to be done on the server during any major deleteMany.It might even help distribute over time the work that has to be done by the server. The side effect being less CPU spike and better response time for other less intensive use-cases.You are worrying too much. Early optimization is often a mistake. Implement the simplest algorithm first and optimize if is a frequent bottleneck.The code you share looks like a cleanup code when an account is delete or something like that. That is probably not a frequent use-case. You probably would want to throttle that and use some king of notification rather than have a user waiting for the operation to complete.",
"username": "steevej"
}
] | Selecting and deleting collection documents | 2023-09-13T14:03:53.798Z | Selecting and deleting collection documents | 338 |
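A sketch of the "issue all deletes at once and wait for them together" suggestion from the thread above, assuming a Node.js driver db handle and an index on itemId in each collection.

```javascript
const collections = ["Collection1", "Collection2", "Collection3", "Collection4"];

async function deleteEverywhere(db, itemId) {
  // Fire the deleteMany on every collection concurrently, then await them all once.
  const results = await Promise.all(
    collections.map((name) => db.collection(name).deleteMany({ itemId }))
  );
  return results.reduce((sum, r) => sum + r.deletedCount, 0); // total documents removed
}
```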
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "We conducted a performance test on MongoDB for one of our scenarios. We set up a test collection containing 100,000 records with fields like _id, userId, offerId, and createdAt. Specifically, for each unique offerId, we added 20,000 documents and created an index on the offerId field.Mongo DB instance config : 2 core CPU and 16 GB ram AWS instance.Our goal was to execute an aggregation pipeline with the following stages:During testing, we observed that when a particular offerId had 20,000 records and was subjected to a load test at approximately 90 requests per second (RPS), the average response time for the aggregation pipeline ranged from 200 to 300 milliseconds. This was despite having indexes on both offerId and userId, which were utilized in the aggregation pipeline.However, when we tested an offerId with only 1,000 records out of the total 100,000, the response time improved significantly to 20-30 milliseconds. Even when performing a simple find query, like find({ offerId: “123” }), for an offerId with 20,000 documents, the response time was around 40-50 milliseconds. The response time decreased when the offerId had only 1-2 thousand documents, despite having an index on offerId.Questions:",
"username": "ishan_khan1"
},
{
"code": "",
"text": "as it will be more performantAre you sure? Did you performed the same load testing? With the same server specs? If you did and you really found that the performance are better then you should consider it. If you did not load test as you did with a MongoDB implementation, then you should test to really determine what is more performant. And if performance is the only criteria then implement the most performant.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks @steevej for looking into this,\nYes, We did similar load test on Mysql Maria DB and we found better results,\nApart from Performance, could you please point out a few other points we should consider for choosing DB specially MongoDB vs MySQL/Postgres? As choosing the right DB will be very important for the future of our business-critical application.",
"username": "ishan_khan1"
},
{
"code": "",
"text": "Perhaps, your aggregation and/or your schema was sub-optimal. If you could share it we could see if it can be improved. Of course, sample input and result documents are required. Schema wise, for example, using a string offerId might not be the best since strings are compared character per character. Using a number or and ObjectId would be more efficient. Aggregation wise, using $group (a blocking stage) might not be the best, sometimes a self $lookup with a pipeline speeds things out and reduce the memory needed. The explain plan is also important to have.In your first post you mentionedperformance test on MongoDB for one of our scenariosWould that be the most frequent use-case? You should find out the most frequent use-case and perform performance tests on that scenario.One strong point in favour of MongoDB is the flexible schema where you do not have to write migration script using ALTER TABLE. For example, you may simply modify your code and start using a new attribute. I often spent more time writing and testing migration scripts compared The possibility to have documents with complete different schema within the same collection. I often use this in a config collection. As a Unix veteran, the aggregation pipeline feels so natural. And easier to work with compared to stored procedure. After all, an aggregation pipeline is simply an array of JSON documents it is thus very easy to manipulate the pipeline as any other document.",
"username": "steevej"
}
] | MongoDB aggregation pipeline performance | 2023-09-12T17:34:17.681Z | MongoDB aggregation pipeline performance | 398 |
null | [
"queries"
] | [
{
"code": "",
"text": "Is there a planned feature in charts to allow all/some chart queries to be routed to replica members other than the primary? Seems like this could be a common use case for larger data sets.",
"username": "andrew_morcomb"
},
{
"code": "",
"text": "Charts already uses the secondary read preference by default. You can change it per deployment (e.g. to use analytics nodes) on the Data Sources Page.Tom",
"username": "tomhollander"
}
] | Charts: Routing to a secondary in the replica set | 2023-09-13T18:03:33.414Z | Charts: Routing to a secondary in the replica set | 284 |
[
"atlas",
"conference",
"saopaulo-mug"
] | [
{
"code": "Arquiteto Sênior de Solução na MongoDBSão Paulo, MongoDB User Group Leader & MongoDB Champion / Head of NoSQL@Power TuningSão Paulo, MongoDB User Group Leader & MongoDB Enthusiast Senior Workload Performance Engineer @ NetApp",
"text": "\nScreenshot 2023-08-23 at 10.34.351854×1014 268 KB\n MUG SP - MongoDB User Group São Paulo Está curioso para saber mais sobre o que há de novo no MongoDB? Não perca esta chance! Convidamos todos os entusiastas, profissionais e amantes do MongoDB para um encontro imperdível no escritório oficial da MongoDB em São Paulo! Data: 19 de Setembro\n Horário: 18h\n Local: Escritório da MongoDB - São Paulo O que esperar?E tem mais!Além das palestras incríveis, o evento contará com: Espaço para Networking: Amplie sua rede de contatos, troque experiências e conheça outros profissionais da área. Pizza & Cerveja: Sabemos que ninguém vive só de código! Vamos alimentar o corpo e a alma com pizzas deliciosas e cerveja gelada. Brindes Exclusivos: Sortearemos brindes especiais para os participantes. Não perca a chance de levar para casa algo único!Vagas limitadas! Garanta já a sua inscrição e venha descobrir, aprender e se conectar com a comunidade MongoDB de São Paulo!Nos vemos lá! #MUGSP #MongoDBSPEvent Type: In-Person\nLocation: MongoDB Office São Paulo - Av. das Nações Unidas, 14261 - Vila Gertrudes, São Paulo - SP, 04730-090 - Sala 24FArquiteto Sênior de Solução na MongoDBQuer saber mais sobre o MongoDB Atlas? Lourenço Taborda, um dos maiores especialistas no tema, trará todas as novidades e características desta ferramenta que tem revolucionado a forma como trabalhamos com dados.São Paulo, MongoDB User Group Leader & MongoDB Champion / Head of NoSQL@Power TuningMergulhe nas funcionalidades e melhorias que a versão 7.0 do MongoDB traz ao mundo dos bancos de dados. Leandro Domingues, MongoDB Community Champion e uma referência no assunto, te guiará por uma jornada de descobertas, tirando todas as suas dúvidas!São Paulo, MongoDB User Group Leader & MongoDB Enthusiast Senior Workload Performance Engineer @ NetApp",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "Olá, @Leandro_Domingues e demais membros da comunidade MongoDB!Estou entusiasmado com a palestra sobre as novidades do MongoDB 7.0 e MongoDB Atlas. No post, foi mencionado para garantirmos a presena devemos realizar nossa inscrição. No entanto, não consegui encontrar o link ou o formulário de inscrição. Poderiam fornecer mais detalhes sobre como realizar essa inscrição?Muito obrigado e aguardo ansiosamente o evento!",
"username": "Carlos_Biagolini-Jr"
},
{
"code": "",
"text": "Olá @Carlos_Biagolini-Jr, aqui mesmo perto do título do post, tem um link RSVP. Por enquanto é só clicar nesse carinha.abraços e nos vemos lá!",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "Muito obrigado pela resposta rápida e direta. Localizei o link RSVP e já realizei minha inscrição.\nUm abraço e até lá!",
"username": "Carlos_Biagolini-Jr"
},
{
"code": "",
"text": "Olá, boa tarde Leandro, tudo bem?? Uma dúvida não estou achando o link para fazer a inscrição no evento, poderia coloca-lo aqui em baixo do comentário ??",
"username": "Marcio_Santos"
},
{
"code": "",
"text": "Marcio é só clicar nesse botão:\n",
"username": "Carlos_Biagolini-Jr"
}
] | MUG SP - Conheça as novidades do MongoDB 7.0 e MongoDB Atlas | 2023-08-23T13:16:26.558Z | MUG SP - Conheça as novidades do MongoDB 7.0 e MongoDB Atlas | 2,133 |
|
null | [
"containers"
] | [
{
"code": "",
"text": "I have a MongoDB image in a Kubernetes deployment and I’m trying to change the “dbpath” for my database. I’ve specified it in the /etc/mongod.conf file, but it doesn’t take the change:# Where and how to store data.\nstorage:\ndbPath: /tmp/mongo/db\njournal:\nenabled: trueWhen I check the paths, “/tmp/mongo/db” is empty, and /data/db remains the primary location. I also attempted to run “mongod --dbpath /tmp/mongo/db,” but it didn’t make any changes. I’ve noticed that one of the recommendations is to stop the mongod service to make this change, but I can’t do that in the Docker image.",
"username": "Kaedra_N_A"
},
{
"code": "",
"text": "Hello, welcome to the MongoDB community.Yes, it is necessary to restart the service. You can manually move the data to the future path so as not to lose the data. If you don’t need to keep them, you can just change the path and restart.You can restart the container.",
"username": "Samuel_84194"
}
] | Cant change dbpath | 2023-09-13T16:00:17.430Z | Cant change dbpath | 338 |
null | [] | [
{
"code": "",
"text": "Hi, with reference to this Multiple Vector Embeddings in one document? we can generate vector embeddings for the multiple fields but can we query on multiple fields as well, I mean finding search results on the basis of multiple vector embedded fields?\n@Aasawari",
"username": "Noman_Saleem"
},
{
"code": "",
"text": "Hi @Noman_Saleem -You can index multiple vector embedding fields within a single search index as that example you linked shows, but you can only search these fields one at a time.If you wanted to combine the results of multiple searches. you would have to do this with a $unionWith or $lookup",
"username": "Henry_Weller"
}
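A rough sketch of the $unionWith approach, assuming two vector fields (embedding1, embedding2) indexed with the knnBeta operator in the same search index; the collection name, index name, and query vectors are placeholders, and this assumes a server version that allows $search inside a $unionWith pipeline:

```javascript
// Placeholder query embeddings.
const queryVectorA = [ /* embedding for field 1 */ ];
const queryVectorB = [ /* embedding for field 2 */ ];

db.items.aggregate([
  {
    $search: {
      index: "default",
      knnBeta: { vector: queryVectorA, path: "embedding1", k: 10 }
    }
  },
  { $addFields: { score: { $meta: "searchScore" }, matchedOn: "embedding1" } },
  {
    // Union in the results of a second vector search on the other field.
    $unionWith: {
      coll: "items",
      pipeline: [
        {
          $search: {
            index: "default",
            knnBeta: { vector: queryVectorB, path: "embedding2", k: 10 }
          }
        },
        { $addFields: { score: { $meta: "searchScore" }, matchedOn: "embedding2" } }
      ]
    }
  }
])
```

Keep in mind that scores coming from two separate searches are not directly comparable, so any combined ranking needs its own normalization step.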
] | Can we query on multiple vector embedding fields in vector search? | 2023-09-12T13:41:23.061Z | Can we query on multiple vector embedding fields in vector search? | 299 |
null | [
"sharding"
] | [
{
"code": "",
"text": "Question regarding Zone sharding with Range key, referred doc: https://www.mongodb.com/docs/v5.0/tutorial/sharding-segmenting-data-by-location/data estimation: 1500GBHave a Collection with shard key like: country: 1, cityId: 1, userId: 1\ncountry has only 4 option(low cardinality): US, UK, IND, UE\ncityId and userId is UUID,I am planning to use Zone Sharding, 1 zone (4 shard).\nwith zone range:\nexample:\nsh.addTagRange(“db.collection”, { “country” : “US”, “city” : MinKey, “userId”: MinKey }, { “country” : “US”, “city” : MaxKey, “userId”: MaxKey }, “USA”);wanted to understand whether the shard key will do balanced distribution in zone cluster and how.(having low cardinality of first prefix value)",
"username": "Sudarshan_Bisht"
},
{
"code": "",
"text": "Hey Sudarshan, good question!MongoDB will direct all documents in the “db.collection” where the “country” value is “US”, regardless of the “cityId” and “userId” values (since MinKey and MaxKey represent the lower and upper bounds of possible values, respectively), to the shards associated with the “USA” tag.Documents with the same cityId are more likely to be on the same shard, but that’s not guaranteed. For example, it’s possible (although unlikely since nothing is hashed) that a document with country:US, cityId:1234, userId:0001 lives on shard0 and a document with country:US, cityId:1234, userId:0002 lives on shard1. But all the documents with “US” will live on one of the 4 shards tagged to the “USA” zone.",
"username": "Garaudy_Etienne"
},
{
"code": "",
"text": "should we look for Hash sharding with Zone cluster?Planning separate Zone clusters for 2 of them, the rest 2 will be in a single cluster.Yes.",
"username": "Sudarshan_Bisht"
},
{
"code": "userIduserIduserIdcountry:1,cityId:1,userId:\"hashed\"userIdcityId",
"text": "Hashed sharding will help ensure inserts go to all shards equally. But if userId is already randomly generated then you write distribution will be random/good enough. Is userId random or is it monotonically increasing? If it’s monotonic then hashing just the userId piece should be enough to avoid hot shards.So a shard key of country:1,cityId:1,userId:\"hashed\" should be good enough to ensure good write distribution if userId isn’t random. I would guess that there could be too many documents with the same cityId for some popular cities.",
"username": "Garaudy_Etienne"
},
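A sketch of what that could look like in mongosh; the namespace, shard name, and zone name are placeholders, and a compound shard key with a hashed component like this needs MongoDB 4.4 or newer:

```javascript
// Hash only the userId component so monotonically increasing IDs still
// spread across chunks within each country/city range.
sh.shardCollection("db.collection", { country: 1, cityId: 1, userId: "hashed" })

// Associate a shard with the zone, then pin the country's whole key range to it.
// The hashed field must be bounded with MinKey/MaxKey in the zone range.
sh.addShardToZone("shard0", "USA")
sh.updateZoneKeyRange(
  "db.collection",
  { country: "US", cityId: MinKey, userId: MinKey },
  { country: "US", cityId: MaxKey, userId: MaxKey },
  "USA"
)
```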
{
"code": "3 shards : s1, s2, s3 and 3 zones: z1, z2, z3shards: z1 (s1,s2,s3), z2 (s1,s2,s3), z3 (s1,s2,s3)country: 1, cityId: 1, userId: 11. sh.addTagRange(“db.collection”, { “country” : “US”, “city” : MinKey, “userId”: MinKey }, { “country” : “US”, “city” : MaxKey, “userId”: MaxKey }, “Z1”);\n2. sh.addTagRange(“db.collection”, { “country” : “IND”, “city” : MinKey, “userId”: MinKey }, { “country” : “IND”, “city” : MaxKey, “userId”: MaxKey }, “Z2”);\n3. sh.addTagRange(“db.collection”, { “country” : “UE”, “city” : MinKey, “userId”: MinKey }, { “country” : “UE”, “city” : MaxKey, “userId”: MaxKey }, “Z3”);\n(country: 1, cityId: 1, userId: 1)(cityId: 1, userId: 1) : because now the cardinality of the country is 1, and already used to find zone",
"text": "Sorry for the late response… was awayI have another question regarding the zone sharding.\nSuppose, I have 3 shards : s1, s2, s3 and 3 zones: z1, z2, z3\nall zone has all shards: z1 (s1,s2,s3), z2 (s1,s2,s3), z3 (s1,s2,s3)\nkey: country: 1, cityId: 1, userId: 1now if I would say:Then which approach data distribution will use, 1 or 2",
"username": "Sudarshan_Bisht"
}
] | Zone based sharding data balance issue | 2023-08-05T11:16:07.101Z | Zone based sharding data balance issue | 713 |
null | [] | [
{
"code": "",
"text": "I’ve created a dashboard with about a dozen charts. Many are of the ‘Number’ variety, just displaying a count of a data point I care about. I’ve scheduled the dashboard to be delivered as a report via email to myself in PDF form. The emails arrive with a PDF attachment and a link to the dashboard.What’s odd is that the PDF doesn’t match the numbers in the dashboard for several of the charts. And these are numbers that don’t vary greatly from day to day, so it’s not a timing issue. I’ve received the report several times through the week, and several of the charts in the PDF are consistently incorrect. If I manually save the dashboard as a PDF file, the numbers are displayed correctly, it’s just the attached file in the generated report email that is incorrect. This seems to be a bug.",
"username": "andrew_morcomb"
},
{
"code": "",
"text": "Hi @andrew_morcomb, By any chance, do you have a dashboard filter on the dashboard? If you have, you need to add any filters and save them and click “Apply Filters” and your pdf should reflect what is on the dashboard. Let me know if that works.\n\nimage589×1034 40.7 KB\n",
"username": "Avinash_Prasad"
},
{
"code": "",
"text": "good insight, it wasn’t it exactly this, but it pointed me in the right direction. in my case there were no dashboard wide filters in place. however, examining each individual chart, i saw the\nsetting this value to ‘Ignore’ for the charts that were problematic in the attached pdf solved the issue. thank you.",
"username": "andrew_morcomb"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Chat Reports: Emailed report does not match actual numbers in dashboard | 2023-08-24T18:35:06.523Z | Chat Reports: Emailed report does not match actual numbers in dashboard | 477 |
[
"dot-net",
"compass",
"mongodb-shell"
] | [
{
"code": "mongosh// Javascript helper functions for parsing and displaying UUIDs in the MongoDB shell.\n// This is a temporary solution until SERVER-3153 is implemented.\n// To create BinData values corresponding to the various driver encodings use:\n// var s = \"{00112233-4455-6677-8899-aabbccddeeff}\";\n// var uuid = UUID(s); // new Standard encoding\n// var juuid = JUUID(s); // JavaLegacy encoding\n// var csuuid = CSUUID(s); // CSharpLegacy encoding\n// var pyuuid = PYUUID(s); // PythonLegacy encoding\n// To convert the various BinData values back to human readable UUIDs use:\n// uuid.toUUID() => 'UUID(\"00112233-4455-6677-8899-aabbccddeeff\")'\n// juuid.ToJUUID() => 'JUUID(\"00112233-4455-6677-8899-aabbccddeeff\")'\n// csuuid.ToCSUUID() => 'CSUUID(\"00112233-4455-6677-8899-aabbccddeeff\")'\n// pyuuid.ToPYUUID() => 'PYUUID(\"00112233-4455-6677-8899-aabbccddeeff\")'\n// With any of the UUID variants you can use toHexUUID to echo the raw BinData with subtype and hex string:\n// uuid.toHexUUID() => 'HexData(4, \"00112233-4455-6677-8899-aabbccddeeff\")'\n// juuid.toHexUUID() => 'HexData(3, \"77665544-3322-1100-ffee-ddccbbaa9988\")'\n// csuuid.toHexUUID() => 'HexData(3, \"33221100-5544-7766-8899-aabbccddeeff\")'\n// pyuuid.toHexUUID() => 'HexData(3, \"00112233-4455-6677-8899-aabbccddeeff\")'\n\nfunction HexToBase64(hex) {\nthis.base64 is not a function.toCSUUID()CSUUID(xxxxx)",
"text": "Hi,we are currently working on a new project and use MongoDB as data store. In a lot of the tables we are using GUIDs as primary keys and within our data model. Everything works very well as long as we stay in C# and work directly with the driver.But it is very cumbersome working with the guids directly in the database i.e. through mongosh or mongodb compass. The GUIDs are in binary format v3.\nIn case we want to debug and directly lookup a GUID we couldn’t get it to work. We tried using the uuidhelper.js script which is in the mongodb c# driver repository but it does not seem to work with mongosh.\nWe get the following error this.base64 is not a function. when calling .toCSUUID().When using the methods to convert any GUID from the application via CSUUID(xxxxx) the system does not return data. I assume there is something wrong with the script and the format doesn’t fit.Is there any updated version of the script? Is there any way to use it with mongoDB compass.Thanks in advance!",
"username": "d_r"
},
{
"code": "mongoshuuidhelpers.jsmongomongoshGuidRepresentation.StandardGuidRepresentationMode.V3GuidRepresentation.StandardObjectIdGuidRepresentationmongoshuuidhelpers.jsBinData.prototype.base64 = function() { return this.buffer.base64Slice(); };\nBinData.prototype.subtype = function() { return this.sub_type; };\nGuidRepresentationMode",
"text": "Hi, @d_r,Welcome to the MongoDB Community Forums. I understand that you’re having some challenges using GUIDs for primary keys in your C# applications and how they are rendered by mongosh.Regarding the uuidhelpers.js script, those helper functions were written for the previous mongo shell and have not been updated for the new mongosh shell. Apologies for the confusion. Follow CSHARP-4786 for more information.The root cause of the problem is that legacy GUIDs (subtype 3) do not have a defined byte ordering. The byte ordering varies by language (C#, Python, and Java can use different byte ordering). We introduced GUID subtype 4 (aka GuidRepresentation.Standard), which does have a defined byte ordering. We couldn’t simply make this the default as it would be a backward breaking change.If you are writing a net new application, I would strongly recommend using GuidRepresentationMode.V3 and GuidRepresentation.Standard (subtype 4) for all your GUIDs. Another option is to use ObjectId instead as it has similar properties to GUIDs without the hassles around byte ordering inconsistencies.If you are working with existing data that contains subtype 3 GUIDs, you should configure your GuidRepresentation based on the application that generated the data so that your C# application interprets the byte order correctly. If you need to work with these GUIDs in mongosh, you can add the following functions to uuidhelpers.js so that the script works with {{mongosh}}.You can read more about the GuidRepresentationMode and how to set it in GUID Serialization in the .NET/C# Driver documentation. For an in-depth discussion of the various legacy GUID byte ordering issues, I would recommend Handling UUID Data in the Python Driver documentation. (We are adding this information to the .NET/C# docs, but haven’t completed it yet.)I hope this helps answer your questions.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "BsonSerializer.RegisterSerializer(new GuidSerializer(GuidRepresentation.Standard));\n#pragma warning disable CS0618 // Error in mongodb driver reference https://jira.mongodb.org/browse/CSHARP-3195\nBsonDefaults.GuidRepresentationMode = GuidRepresentationMode.V3;\n#pragma warning restore CS0618\n",
"text": "Hey @James_Kovacs ,thank you very much for the detailed answer. We are working on a new project so we will switch to V3 GuidRepresentation and RepresentationMode V3.I added the following in our Program.cs and it is working.Thanks!",
"username": "d_r"
},
{
"code": "",
"text": "Hi, @d_r,Glad I could be of assistance. Thank you for using MongoDB!Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | C# decode and encode guids in mongosh | 2023-09-12T14:06:16.288Z | C# decode and encode guids in mongosh | 605 |
|
null | [
"atlas-online-archive"
] | [
{
"code": "",
"text": "Hi,I was reviewing this document https://www.mongodb.com/docs/atlas/online-archive/manage-online-archive/ , which discusses about the online archiving possibilities of MongoDB Atlas.However this document only has AWS cloud object storage references, is it possible to setup online archiving to Azure storage account for an Atlas environment which has been deployed to Azure ?",
"username": "UB_K"
},
{
"code": "",
"text": "Hi @UB_K ,Yes, it is possible to setup Online Archive on Azure. However, we have released OA on Azure in Private preview and hence available to be setup in your non production environments.For more details, please see details and FAQs in the link present: https://www.mongodb.com/community/forums/t/invitation-to-participate-in-the-private-preview-program-of-online-archive-on-azure/233412If interested, you can contact me directly (email provided in the link) and I can help you with the next steps.Thanks,\nPrem",
"username": "Prem_PK_Krishna"
}
] | MongoDB Atlas Archiving in Azure deployment | 2023-09-13T01:16:01.833Z | MongoDB Atlas Archiving in Azure deployment | 305 |
null | [
"compass"
] | [
{
"code": "",
"text": "Hi community,I created a database project and then tried to connect it with Compass as instructed. It did not work. After writing the password after removing the <>. It said “bad auth”. Any clues how to fix this in 2023?\nOr any community question you can recommend where this problem was answered well?Many thanks",
"username": "Leah_Dunne"
},
{
"code": "",
"text": "Bad auth means wrong user id or password\nHave you created DB user?\nUse that user to login to your DB\nTry with mongosh shell if you can connect or not",
"username": "Ramachandra_Tummala"
}
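One way to rule Compass itself out is to test the same credentials with mongosh directly; the hostname and username below are placeholders, so use the connection string Atlas shows for your own cluster:

```javascript
// From a terminal (placeholders for host and user):
//   mongosh "mongodb+srv://cluster0.example.mongodb.net/" --username myAppUser

// Once connected, confirm which user you are authenticated as:
db.runCommand({ connectionStatus: 1 })
```

If mongosh also reports bad auth, the database user or password is wrong; create or fix the user under Database Access in Atlas and try again.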
] | Bad auth : authentication failed MongoDB Atlas | 2023-09-13T13:01:38.830Z | Bad auth : authentication failed MongoDB Atlas | 277 |
null | [
"python"
] | [
{
"code": "",
"text": "Hello MongoDB community!Pydantic v2 was recently released.I found this document (Getting Started with MongoDB and FastAPI | MongoDB) in the MongoDB quickstarts about using FastAPI, MongoDB and Pydantic, but as Pydantic v2 has several API changes and deprecations, wanted to ask if someone knows already which changes are necessary, or what is the best, correct way to create an ObjectID field in a Pydantic v2 schema.Thanks in advance!",
"username": "CarlosDC"
},
{
"code": "bump-pydantic --diff",
"text": "Hi @CarlosDC, I don’t believe any changes will be required, based on the FastAPI migration guide and running bump-pydantic --diff on the source code in the blog post.",
"username": "Steve_Silvester"
},
{
"code": "__modify_schema____get_pydantic_json_schema__",
"text": "I get an error message when I try to use the described “PyObjectId” class.“The __modify_schema__ method is not supported in Pydantic v2. Use __get_pydantic_json_schema__ instead.”To anyone searching for a solution to the problem, please look at this answer in S.O.\nIt worked for me https://stackoverflow.com/a/76837550",
"username": "CarlosDC"
},
{
"code": "__modify_schema____get_pydantic_json_schema___idstringObjectId('6501....')string",
"text": "Hello,\nI’ve also encountered this error and the solution of @CarlosDC is working but the behavior is different from the previous version of the guide. This causes also an issue with finding the correct DB entries to delete, because the ID’s are not the “same” anymore.When I’m using the exact code from the guide I also get the error: “pydantic.errors.PydanticUserError: The __modify_schema__ method is not supported in Pydantic v2. Use __get_pydantic_json_schema__ instead.”When I try the suggested solution on stackoverflow I’ve noticed that the _id attribute is no longer a string but it’s stored as an ObjectId('6501....'). When I try to use the ID as a string it no longer finds the DB entry.\nThis breaks the old way it was used and renders the guide incorrect.Any suggestions how to fix this to get the old behavior?Thanks in advance!",
"username": "rico_ski"
}
] | Pydantic v2 and ObjectID fields | 2023-08-30T08:29:10.621Z | Pydantic v2 and ObjectID fields | 1,738 |
null | [
"mongodb-shell"
] | [
{
"code": "tcmallocReleaseRatemongosh\nuse admin\ndb.adminCommand( { setParameter: 1, tcmallocReleaseRate: 5.0 } )\n**MongoServerError**: attempted to set unrecognized parameter [tcmallocReleaseRate], use help:true to see options",
"text": "Hello Team,We are using the folloiwng param for clearing up memory which is unused.tcmallocReleaseRatedoc ref : https://www.mongodb.com/docs/v6.0/reference/parameters/#mongodb-parameter-param.tcmallocReleaseRateBut we dont see this working. i have set this param in the following way.\nwe are using mongo DB community server 6.0.0And this is not working in my local mac machine, getting following error ==>**MongoServerError**: attempted to set unrecognized parameter [tcmallocReleaseRate], use help:true to see optionsIs am missing anything here ?thanks",
"username": "Gangadhar_M"
},
{
"code": "myrs [direct: primary] admin> db.adminCommand( { setParameter: 1, tcmallocReleaseRate: 5.0 } )\n\n{\n was: 1,\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1694601309, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"0000000000000000000000000000000000000000\", \"hex\"), 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1694601309, i: 1 })\n}\nmyrs [direct: primary] admin> db.version()\n6.0.9\n",
"text": "Hi @Gangadhar_M,\nAre you certain that the version of the instance you are currently using is correct? In my case, it is functioning properly. I am using MongoDB of version 6.0.9.",
"username": "Jack_Yang1"
},
{
"code": "**MongoServerError**: attempted to set unrecognized parameter [tcmallocReleaseRate], use help:true to see options",
"text": "Thanks Jack_Yang1 for quick reply.There are different issues in different ENVs for this issue. I am putting down things clear here.In my Test server (linux, Ubuntu 20.04.4 LTS), we are able to set the parameter, but i don’t see any clearing of memory happening. mongo db version here is 6.0.0 community edition. Here Replication in enabled with Primary and a single Secondary(with priority 0).in my local macOS (12.1), am getting following error. and no replication enabled in local. mongo version here is , 6.0.1 communitty ediition\n**MongoServerError**: attempted to set unrecognized parameter [tcmallocReleaseRate], use help:true to see options",
"username": "Gangadhar_M"
}
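One thing worth checking on the machine where the parameter is rejected is whether that particular mongod build exposes it at all; tcmallocReleaseRate is a tcmalloc-specific knob, and the assumption here (to be verified) is that not every platform build of mongod uses tcmalloc:

```javascript
const admin = db.getSiblingDB("admin");

// Read the current value; this errors if the running build doesn't know the parameter.
admin.runCommand({ getParameter: 1, tcmallocReleaseRate: 1 });

// List every parameter this mongod supports, to confirm whether any tcmalloc* ones exist.
admin.runCommand({ getParameter: "*" });
```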
] | MongoDB Server Param ==> tcmallocReleaseRate is not effective | 2023-09-13T09:55:41.587Z | MongoDB Server Param ==> tcmallocReleaseRate is not effective | 337 |
null | [
"serverless"
] | [
{
"code": "",
"text": "I’m having issues similar to this at my organization:If a request is made to the cluster after some time of inactivity, the request either takes a significantly more time, or in some cases times out.",
"username": "Ravi_Jain"
},
{
"code": "",
"text": "Hi Ravi,Thank you for reaching out to us. I will send you a direct message to get more details on your account so that we can debug from there.Thanks\nAnurag",
"username": "Anurag_Kadasne"
}
] | Cold start in serverless | 2023-09-13T04:48:05.259Z | Cold start in serverless | 398 |
null | [] | [
{
"code": "",
"text": "Hi everyone,I’m trying to add index creation to our devops pipeline and I have succeeded in creating one in all of our non prod environments using the atlas cli via bash script in the pipeline using the clusters indexes command described here - https://www.mongodb.com/docs/atlas/cli/stable/command/atlas-clusters-indexes-create/. The issue I have is that on repeated runs of this script, once the index has been created I get a 500 error and the pipeline fails. Is there a way to check if an index exists on the atlas cli before creating one or a different way of doing this which enables checking before creation?",
"username": "Steven_Wilson1"
},
{
"code": "",
"text": "Hello, welcome to the MongoDB community.I tried looking for an API for a GET of indexes and it currently doesn’t exist. What you can do is connect to the cluster and validate the existing indexes or try to handle the error. Maybe it would be a good idea to open an idea at https://feedback.mongodb.com/",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Hi Samuel,Thanks for the response, much appreciated, I suspected it didn’t exist I was just double checking myself. I think I’ll handle the error in the script and then open it up as a suggestion.",
"username": "Steven_Wilson1"
},
{
"code": "",
"text": "Okay, if it helped, mark it as solved, so other people will benefit when they search for the same topic. I’m available for you.",
"username": "Samuel_84194"
}
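A minimal sketch of the check-before-create step, run with mongosh from the pipeline before calling the Atlas CLI; the database, collection, and index name are placeholders:

```javascript
// e.g. mongosh "$CLUSTER_URI" --quiet --file check-index.js
const exists = db.getSiblingDB("mydb")
  .getCollection("mycoll")
  .getIndexes()
  .some((ix) => ix.name === "field_1_otherField_1");

// Print a flag the pipeline script can read to decide whether to call
// `atlas clusters indexes create` or skip it.
print(exists ? "EXISTS" : "MISSING");
```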
] | Atlas Cli - Check if rolling index has been created | 2023-09-12T09:58:41.788Z | Atlas Cli - Check if rolling index has been created | 233 |
[
"queries",
"node-js"
] | [
{
"code": "[root@birbank-mongodb02 ~]# mongostat \ninsert query update delete getmore command dirty used flushes vsize res qrw arw net_in net_out conn set repl time\n *22 149 *228 *0 0 261|0 0.7% 81.2% 0 35.3G 21.4G 0|0 11|0 179k 1.44m 665 birbankrs SEC Sep 12 21:02:51.274\n *43 106 *166 *0 0 214|0 0.7% 84.7% 0 35.3G 21.4G 0|0 8|0 139k 1.68m 665 birbankrs SEC Sep 12 21:02:52.277\n *130 253 *327 *0 0 242|0 0.7% 81.8% 0 35.3G 21.4G 0|0 12|0 194k 10.1m 666 birbankrs SEC Sep 12 21:02:53.275\n *38 156 *264 *0 0 217|0 0.8% 80.8% 0 35.3G 21.4G 0|0 11|0 161k 1.69m 668 birbankrs SEC Sep 12 21:02:54.276\n *7 125 *240 *0 0 204|0 0.8% 80.5% 0 35.3G 21.4G 0|0 11|0 147k 581k 670 birbankrs SEC Sep 12 21:02:55.275\n *2 123 *183 *0 0 190|0 0.8% 81.4% 0 35.3G 21.4G 0|0 12|0 137k 481k 671 birbankrs SEC Sep 12 21:02:56.276\n",
"text": "\nScreenshot 2023-09-12 at 21.00.40934×286 18.9 KB\n",
"username": "Murad_Samadov"
},
{
"code": "",
"text": "Queries or Explains? You mention slow queries in logs but dont show any.",
"username": "John_Sewell"
},
{
"code": " {\n type: 'op',\n host: 'mongoserver.com:27017',\n desc: 'conn207308',\n connectionId: 207308,\n client: 'client_ip:49188',\n clientMetadata: {\n driver: {\n name: 'mongo-java-driver|sync|spring-boot',\n version: '4.8.2'\n },\n os: {\n type: 'Linux',\n name: 'Linux',\n architecture: 'amd64',\n version: '4.18.0-305.el8.x86_64'\n },\n platform: 'Java/Eclipse Adoptium/17.0.5+8'\n },\n active: true,\n currentOpTime: '2023-09-12T21:42:59.983+04:00',\n threaded: true,\n opid: -1623456351,\n secs_running: Long(\"9\"),\n microsecs_running: Long(\"9807260\"),\n op: 'command',\n ns: 'admin.$cmd',\n command: {\n hello: 1,\n helloOk: true,\n topologyVersion: {\n processId: ObjectId(\"64de9980542caf5a1f6d18be\"),\n counter: Long(\"3\")\n },\n maxAwaitTimeMS: Long(\"10000\"),\n '$db': 'admin'\n },\n numYields: 0,\n waitingForLatch: {\n timestamp: ISODate(\"2023-09-12T17:42:50.276Z\"),\n captureName: 'AnonymousLockable'\n },\n locks: {},\n waitingForLock: false,\n lockStats: {},\n waitingForFlowControl: false,\n flowControlStats: {}\n },\n",
"text": "",
"username": "Murad_Samadov"
},
{
"code": "{\n \"t\": {\n \"$date\": \"2023-09-12T21:45:53.358+04:00\"\n },\n \"s\": \"I\",\n \"c\": \"COMMAND\",\n \"id\": 51803,\n \"ctx\": \"conn526866\",\n \"msg\": \"Slow query\",\n \"attr\": {\n \"type\": \"command\",\n \"ns\": \"mb_front_prod.notifications\",\n \"command\": {\n \"count\": \"notifications\",\n \"query\": {\n \"subscribers\": \"34EC52260C77403B992C62E8C3725E05\",\n \"state\": 2,\n \"action.read\": {\n \"$nin\": [\n \"34EC52260C77403B992C62E8C3725E05\"\n ]\n },\n \"action.delete\": {\n \"$nin\": [\n \"34EC52260C77403B992C62E8C3725E05\"\n ]\n },\n \"type\": {\n \"$nin\": [\n \"TRANSACTION\"\n ]\n }\n },\n \"lsid\": {\n \"id\": {\n \"$uuid\": \"2f2b831f-8867-4530-8e07-3df1ea0693e3\"\n }\n },\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1694540751,\n \"i\": 38\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\",\n \"subType\": \"0\"\n }\n },\n \"keyId\": 0\n }\n },\n \"$db\": \"mb_front_prod\",\n \"$readPreference\": {\n \"mode\": \"secondaryPreferred\"\n }\n },\n \"planSummary\": \"IXSCAN { subscribers: 1, state: 1 }\",\n \"keysExamined\": 3280,\n \"docsExamined\": 3280,\n \"fromMultiPlanner\": true,\n \"replanned\": true,\n \"replanReason\": \"cached plan was less efficient than expected: expected trial execution to take 156 works but it took at least 1560 works\",\n \"numYields\": 115,\n \"queryHash\": \"8798D071\",\n \"planCacheKey\": \"B688F8B7\",\n \"reslen\": 170,\n \"locks\": {\n \"FeatureCompatibilityVersion\": {\n \"acquireCount\": {\n \"r\": 116\n }\n },\n \"Global\": {\n \"acquireCount\": {\n \"r\": 116\n }\n },\n \"Mutex\": {\n \"acquireCount\": {\n \"r\": 1\n }\n }\n },\n \"readConcern\": {\n \"level\": \"local\",\n \"provenance\": \"implicitDefault\"\n },\n \"storage\": {\n \"data\": {\n \"bytesRead\": 205471880,\n \"timeReadingMicros\": 1039721\n }\n },\n \"remote\": \"remote_ip:2772\",\n \"protocol\": \"op_msg\",\n \"durationMillis\": 1334\n }\n}\n",
"text": "",
"username": "Murad_Samadov"
},
{
"code": "",
"text": "You are using the $nin operator, which is known not to be very selective. You can see the IXSCAN is matching a large number of documents, possibly because of this. Is there another way you can write the query to be more selective? See https://www.mongodb.com/docs/manual/core/query-optimization/#query-selectivity",
"username": "Peter_Hubbard"
},
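For reference, an index shaped after the full predicate could look like the sketch below; the field order is an assumption that should be validated with explain(), and the $nin conditions will still limit how much of the work the index can absorb:

```javascript
// Equality-matched fields first, then the less selective $nin fields.
db.notifications.createIndex({
  subscribers: 1,
  state: 1,
  type: 1,
  "action.read": 1,
  "action.delete": 1
})
```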
{
"code": "",
"text": "This problem was solved. We got queries which worked slowly. The developer indexed them.",
"username": "Murad_Samadov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb server working high cpu wait percentage and application code running on nodejs. Mongodb server logs appear 'slow query' logs, so server working on load | 2023-09-12T17:01:06.782Z | Mongodb server working high cpu wait percentage and application code running on nodejs. Mongodb server logs appear ‘slow query’ logs, so server working on load | 245 |
|
null | [
"aggregation",
"queries",
"atlas-search"
] | [
{
"code": "\t[\n\t\t { \n\t\t $search: { \n\t\t compound: { \n\t\t should: [ \n\t\t { \n\t\t text: { \n\t\t query: \"9628 California City Blvd\", \n\t\t path: \"street\", \n\t\t fuzzy: {}, \n\t\t }, \n\t\t }, \n\t\t { \n\t\t text: { \n\t\t query: \"93505\", \n\t\t path: \"zip\", \n\t\t fuzzy: {}, \n\t\t }, \n\t\t }, \n\t\t { \n\t\t text: { \n\t\t query: \"USA\", \n\t\t path: \"country\", \n\t\t fuzzy: {}, \n\t\t }, \n\t\t }, \n\t\t { \n\t\t text: { \n\t\t query: \"USA\", \n\t\t path: \"country\", \n\t\t synonyms: \"address_synonym_mapping\", \n\t\t }, \n\t\t }, \n\t\t { \n\t\t text: { \n\t\t query: \"CA\", \n\t\t path: \"state\", \n\t\t fuzzy: {}, \n\t\t }, \n\t\t }, \n\t\t { \n\t\t text: { \n\t\t query: \"CA\", \n\t\t path: \"state\", \n\t\t synonyms: \"address_synonym_mapping\", \n\t\t }, \n\t\t }, \n\t\t ], \n\t\t must: [ \n\t\t { \n\t\t text: { \n\t\t query: \"California City\", \n\t\t path: \"city\", \n\t\t fuzzy: {}, \n\t\t }, \n\t\t }, \n\t\t ], \n\t\t }, \n\t\t }, \n\t\t }, \n\t\t { \n\t\t $project: { \n\t\t state: 1, \n\t\t county: 1, \n\t\t city: 1, \n\t\t country: 1, \n\t\t street: 1, \n\t\t zip: 1, \n\t\t score: 1, \n\t\t }, \n\t\t }, \n\t\t { \n\t\t $addFields: { \n\t\t score: { $meta: \"searchScore\" }, \n\t\t }, \n\t\t }, \n\t\t ] \n\t{\n\t\t text: { \n\t\t query: \"California City City City City City\", \n\t\t path: \"city\", \n\t\t fuzzy: {}, \n\t\t }, \n\t\t },\n",
"text": "HI!\nI writing a service that look up for USA addresses in our db\ntwo of the requirements for looking up addresses were:we put city as a “must” because we want to make sure that the found address will be in the same city, and since state can be received as an abbreviation or with a typo, searching for the state in the “must” section can lead to the address not being found at all, so we keep this search in the “should” section.\nthis search query finds the given address document with a search score of: 1.1768810749053955\nwe noticed that in the “fuzzy” searches, having a part of the address repeated will bump up the searchScore\nexample, having the next city search will give the search score of: 1.699939489364624we want to receive the address that is the most accurate, but we want to put a threshold to filter out addresses that are not accurate enough.\nbecause we might get addresses with partial information, we want the the threshold to be dynamic\nwhat should the threshold searchScore be in each case? can we build it dynamically based on the input address we received? not only if the address is full or partial, but based on the address strings themselves?",
"username": "Yoav_Zinger"
},
{
"code": "",
"text": "You can use the score to create a max_score then filter by the percentage of the max_score.\nsee: https://www.mongodb.com/docs/atlas/atlas-search/score/normalize-score/",
"username": "Ilan_Toren"
},
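For reference, the normalization Ilan describes can be done inside the pipeline itself; this is only a sketch (the 0.75 cut-off is an arbitrary placeholder, and the $search stage is trimmed down to a single clause):

```javascript
db.addresses.aggregate([
  { $search: { compound: { must: [ { text: { query: "California City", path: "city", fuzzy: {} } } ] } } },
  { $addFields: { score: { $meta: "searchScore" } } },
  // Compute the maximum score across the whole result set.
  { $setWindowFields: { output: { maxScore: { $max: "$score" } } } },
  { $addFields: { normalizedScore: { $divide: ["$score", "$maxScore"] } } },
  // Keep only documents scoring at least 75% of the best match.
  { $match: { normalizedScore: { $gte: 0.75 } } }
])
```

As the next reply points out, the top document always normalizes to 1, so this filters results relative to the best match rather than giving an absolute quality threshold.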
{
"code": "",
"text": "thanks, but I don’t think this is the solution I’m looking for\nthe way they calculate “normalized score” is by taking the searchScorer and dividing it by the max searchScore of the query, so this way the “normalized score” of the fist document is always “1”.\nI just want the first document, and check if it’s search score is enough to be a reliable address, and if not, get the address from an external source.",
"username": "Yoav_Zinger"
}
] | Fine tuning atlas search searchScore filter | 2023-09-13T08:38:55.998Z | Fine tuning atlas search searchScore filter | 345 |
null | [] | [
{
"code": "",
"text": "Hello,\nI have to install a web service (usenet Nemo) that needs mongodb.\nMy distribution of choice is openSUSE Tumble weed.\nNone of the docs I found on the topic “mongodb suse” do works and the openSUSE repos are empty - I guesse it’s since the licence change.\nIs there a way to install from official mongodb repositories?\nIf it’s not possible, what is the distro of choice to run mongodb?\nThanks",
"username": "Jean-Daniel_Dodin"
},
{
"code": "",
"text": "As a first try, I installed mogodb and mongosh (I use aa64 arm processor) from tgz, but right now don’t yet have a working mongodb.\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017\nmongod --dbpath /var/lib/mongo --logpath /var/log/mongodb/mongod.log --fork\nIllegal instruction (core dumped)but right now this is run from root account, not ideal. I don’t know yet what user I will have to use in fine.\nthanks",
"username": "Jean-Daniel_Dodin"
},
{
"code": "",
"text": "Hi, as far as I can see openSuse is not supported, only SLES and then only when using x86_64 architecture.",
"username": "Peter_Hubbard"
},
{
"code": "",
"text": "yes, not officially supported, I know, but I guess it’s possible to install it manually, given some dependencies are met. I could also compile the source but if I can avoid it, the betterthanks",
"username": "Jean-Daniel_Dodin"
}
] | Installing mongodb on openSUSE Tumbleweed | 2023-09-12T10:40:04.748Z | Installing mongodb on openSUSE Tumbleweed | 411 |
null | [
"mongodb-shell"
] | [
{
"code": "mongodb-mongosh-1.10.6-1.el8.x86_64mongodb-mongosh-2.0.0-1.el8.x86_64mongosh: OpenSSL configuration error:\n001908B0DC7F0000:error:030000A9:digital envelope routines:alg_module_init:unknown option:../deps/openssl/openssl/crypto/evp/evp_cnf.c:61:name=rh-allow-sha1-signatures, value=yes\n",
"text": "Hello. I am getting an OpenSSL error after Mongosh package upgrade from mongodb-mongosh-1.10.6-1.el8.x86_64 to mongodb-mongosh-2.0.0-1.el8.x86_64. The error is exactly this:I have tried to uninstall and install MongoDB, but I am getting the same error.",
"username": "John_Doe8"
},
{
"code": "",
"text": "got the same error. started yesterday. I am on fedora 38 workstation. What I did was I uninstalled mongodb and the delete mongodb repo from /etc/yum.repos.d/. And then I manually installed the previous rpms version of mongosh shell which is 1.10.6: MongoDB Repositories.In addition, I reinstall the following:PS.\nI tried installing the mongodb-mongosh-2.0.0.x86_64.rpm from the same repo and got the same openssl config error.If you are going to follow this workaround, install this first mongodb-mongosh-1.10.6.x86_64.rpm. It will override the installation of other files otherwise.Once installed:\nsudo systemctl start mongod\nsudo systemctl enable mongod\n- if failed:\nsudo systemctl daemon-reload\n- then restart again…\nsudo systemctl status mongodafter that I was able to use mongo shell once again.\nHope that helps. If not, please submit your workaround.",
"username": "Joseph_Banaag"
},
{
"code": "mongodb-mongoshmongodb-mongosh-shared-openssl3mongodb-mongosh-shared-openssl11$ # This will do\n$ dnf install -qy mongodb-mongosh-shared-openssl3\n\nInstalled:\n mongodb-mongosh-shared-openssl3-2.0.0-1.el8.aarch64\n\n$ if `mongosh --help 1>/dev/null`; then echo 'OK'; else echo '!!!NG!!!'; fi\nOK\n$ # This WILL NOT do\n$ dnf install -qy mongodb-mongosh\n\nInstalled:\n mongodb-mongosh-2.0.0-1.el8.aarch64\n\n$ if `mongosh --help 1>/dev/null`; then echo 'OK'; else echo '!!!NG!!!'; fi\nmongosh: OpenSSL configuration error:\n20B0BFA4FFFF0000:error:030000A9:digital envelope routines:alg_module_init:unknown option:../deps/openssl/openssl/crypto/evp/evp_cnf.c:61:name=rh-allow-sha1-signatures, value=yes\n\n\n!!!NG!!!\n$ dnf erase -qy mongodb-mongosh\n\nRemoved:\n mongodb-mongosh-2.0.0-1.el8.aarch64\n",
"text": "Instead of mongodb-mongosh, try installing mongodb-mongosh-shared-openssl3\nor mongodb-mongosh-shared-openssl11 that fits for your running environment.For Amazon Linux 2023 where OpenSSL v3 is bundled with, it solved the problem:This is the same answer I posted at SO:",
"username": "Shigekazu_Fukui"
},
{
"code": "",
"text": "sam error occurred today.aws ec2 Linux 2023\nmongo 7.0.1I solved it as follows.",
"username": "_N_A97"
},
{
"code": "",
"text": "Thanks guys. I’ve done exactly this and it worked. The @_N_A97’s solution also works.",
"username": "John_Doe8"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | OpenSSL error when starting Mongosh | 2023-09-07T13:58:29.238Z | OpenSSL error when starting Mongosh | 3,065 |
null | [
"crud"
] | [
{
"code": "",
"text": "Hi ,I would like to know how to convert my date which is in string format (yyyymmdd) to date format yyyy-mm-dd. How this can be achieved .",
"username": "Arjun_Dasari"
},
{
"code": "",
"text": "Hey Arjun,If you mean querying and output the date in that format you can apply:type conversion, convert to date, date conversion, aggregationIf you mean to update all documents so that they all get the new format instead? In my opinion that’s better handled with a database migration outside of Mongo DB so that you can version control it.",
"username": "Raymundo_63313"
},
{
"code": "ISODate",
"text": "Welcome to the community, @Arjun_DasariCan you clarify what you mean by “convert”? Do you want to update all your documents to change how the date fields is stored now (something that can be done inside the database IMO) or do you just want to change how it’s returned when the document is returned to the client or do you want to convert it for purposes of comparison or something else?Also, dates in MongoDB are full date-time so when you say “yyyy-mm-dd” format that still looks like a string to me and not an ISODate type.Asya",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "Hi @Asya_Kamsky ,I am looking output format like yyyy-mm-dd . I am exporting the data into CSV format but I am having date format as yyyymmdd in mongoDB . So while exporting the data into CSV i am looking for yyyy-mm-dd format.",
"username": "Arjun_Dasari"
},
{
"code": "\"d\"{$concat:[ {$substr:[\"$d\", 0, 4]}, \"-\", {$substr:[\"$d\", 4, 2]}, \"-\", {$substr:[\"$d\", 6, 2]} ]}",
"text": "Ah, then it’s just a string format change. If you start with string \"d\" that’s “20211101” and want to end up with “2021-11-01” then this expression will do it:",
"username": "Asya_Kamsky"
},
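If a real date value is ever needed along the way (for sorting or range comparisons, say), a round-trip through $dateFromString and $dateToString produces the same output; a sketch, assuming the field is named d:

```javascript
db.coll.aggregate([
  {
    $addFields: {
      dFormatted: {
        $dateToString: {
          format: "%Y-%m-%d",
          date: { $dateFromString: { dateString: "$d", format: "%Y%m%d" } }
        }
      }
    }
  }
])
```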
{
"code": "",
"text": "Thank you.\nIt was informative, and I was searching for this information. Additionally, I found something even more exciting - a Date Converter. find the link . It will also be informative for anyone interested in Date Conversion.",
"username": "Believe.Islamic_N_A"
}
] | Date Conversion | 2021-07-15T07:18:27.204Z | Date Conversion | 8,103 |
null | [
"node-js"
] | [
{
"code": "",
"text": "I do some fairly long processing from Node and I regularly get an error message that says:\nName: PooleClearedOnNetworkError\nmessage: Connection to <…> interrupted due to server monitor timeoutI’ve seen these after i upgraded to an M10 instance. I am running Node on a local machine connecting through to AtlasI’ve seen some suggestions for different error messages for GCPAny advice?",
"username": "michael_hyman1"
},
{
"code": "PoolClearedError",
"text": "Hey @michael_hyman1,Name: PoolClearedOnNetworkError\nmessage: Connection to <…> interrupted due to server monitor timeoutThe PoolClearedError can occur due to Intermittent network outages that cause the driver to lose connectivity.Essentially, this error happens when the driver believes the server associated with a connection pool is no longer available. The pool gets cleared and closed so that operations waiting for a connection can retry on a different server.To address this:Let me know if you have any other questions!Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
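To follow the monitoring suggestion, the Node.js driver emits SDAM and connection-pool events directly on the MongoClient; a sketch (exact event payload fields can vary between driver versions):

```javascript
const { MongoClient } = require("mongodb");

const client = new MongoClient(process.env.MONGODB_URI, {
  retryWrites: true,
  retryReads: true,
});

// Server monitoring (SDAM) events.
client.on("serverHeartbeatFailed", (ev) =>
  console.warn("heartbeat failed:", ev.connectionId, ev.failure && ev.failure.message)
);
client.on("serverDescriptionChanged", (ev) =>
  console.warn("server description changed:", ev.address)
);

// Connection pool (CMAP) events.
client.on("connectionPoolCleared", (ev) =>
  console.warn("pool cleared for:", ev.address)
);
```

Correlating the timestamps of these events with the failed operations usually shows whether the network drop hits the monitoring connection, the operation connections, or both.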
{
"code": "",
"text": "thanks. I have put in the SDAM and connection level monitoring and will see what i find. retry is already on.\ni reduced the size of my bulk writes but that doesn’t seem to have had any impact on this. it happens when i’m doing very long runs (> 30 minutes), although i do a lot of reads and writes throughout that period. will see if the new logs show anything",
"username": "michael_hyman1"
},
{
"code": "",
"text": "did a bunch of refactoring of the connection pool and also added processing from some of these messages; just did a 90 minute run without trouble so i’m hoping the problem is behind me",
"username": "michael_hyman1"
},
{
"code": "",
"text": "still occurs. this is happening during reads of very long files. somewhere in I get a serverHeartbeatFailed for two connectionId, then a serverDescriptionChanged, mix of pool changes, then finally the PoolClearedOnNetworkError that halts everythingDoes this mean i need to check the connection status before every read?\nDo i need to switch to something like mongoose?",
"username": "michael_hyman1"
},
{
"code": "",
"text": "This happens when I am around 2M records or so into iterating through a table using for await (const row of cursor). I’m not sure how to recover from it, the behavior is inconsistent. It starts with a serverHeartbeatFailed and then a connectionPoolCleared. Since I’m in the midst of iterating through a collection, how do I recover? Do I need to artificially introduce a monotonically increasing page value to go through in chunks?",
"username": "michael_hyman1"
}
] | Interrupted due to server monitor timeout | 2023-09-05T02:37:54.960Z | Interrupted due to server monitor timeout | 682 |
null | [
"queries",
"mongodb-shell"
] | [
{
"code": "",
"text": "Hey friends,So, mongosh doesn’t like returning documents with very long lines but I have a large monitor and my text small. Does anyone know how to change it’s config so it can output longer lines on it’s query return rather than wrapping the document it returns onto multiple lines?Cheers",
"username": "Harrison_Morrow"
},
{
"code": "",
"text": "Welcome to the MongoDB communityI believe this helps you–eval 'DBQuery.shellBatchSize = 1000; db.getSiblingDB(“DATABASE”).getCollection(“COOL”).find({}, {“value”:1, _id:0}).sort({“lastUpdated”: -1}).limit(1000) ’",
"username": "Samuel_84194"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Samuel,Thank you for your effort. the limit method changes the number of results you get but I am trying to change t the wrapping behaviour of the output lines.Regards",
"username": "Harrison_Morrow"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
},
{
"code": "DBQuery.shellBatchSize = 1000",
"text": "Hi Harrison, I believe I expressed myself wrong. The item I commented was not about the limit but about the parameter DBQuery.shellBatchSize = 1000",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Hi Samuel,DBQuery.,shellBatchSize is the number of results that a cursor displays. I am looking to change the wrap behaviour of the shell.Regards",
"username": "Harrison_Morrow"
},
{
"code": "",
"text": "Hi @Samuel_84194, can you provide an example or screenshot? This will help us better understand your question. In my environment, the text will only wrap at the end of the line.",
"username": "Jack_Yang1"
}
] | Increasing mongosh line length output | 2023-09-10T02:01:34.247Z | Increasing mongosh line length output | 397 |
null | [
"atlas-device-sync"
] | [
{
"code": "FoldersFoldersconst config = {\n schema: [Schema.Folder, Schema.Note],\n schemaVersion: 2.2,\n path: `${app.getPath('userData')}/${this.app.currentUser.id}/default.realm`,\n sync: {\n user: this.app.currentUser,\n partitionValue: this.app.currentUser.id,\n existingRealmFileBehavior: {\n type: \"openImmediately\"\n },\n newRealmFileBehavior: {\n type: \"openImmediately\"\n }\n }\n }\nSchema.FolderSchema.Notes const Folder = {\n name: \"Folder\",\n primaryKey: \"_id\",\n _partition: \"string?\",\n properties: {\n _id: \"string\",\n name: { type: \"string\", default: \"Untitled Folder\" },\n folder_id: { type: \"string?\", default: \"\" },\n is_collapsed: { type: \"bool\", default: true },\n children: {\n type: \"list\",\n objectType: \"string\"\n },\n notes: \"Note[]\",\n created_at: \"date\",\n updated_at: \"date?\",\n deleted_at: \"date?\",\n }\n }\n\n const Note = {\n name: \"Note\",\n \"primaryKey\": \"_id\",\n _partition: \"string?\",\n properties: {\n _id: \"string\", // uuid\n body: { type: \"string?\", default: \"\" },\n preview: \"string?\",\n is_pinned: { type: \"bool\", default: false },\n folder: {\n type: \"linkingObjects\",\n objectType: \"Folder\",\n property: \"notes\"\n },\n created_at: \"date\",\n updated_at: \"date?\",\n deleted_at: \"date?\",\n margins: { type: \"bool\", default: false },\n }\n }\n",
"text": "My Realm app is failing to sync, and I’m getting the following error:ERROR: Connection[1]: Session[1]: Failed to transform received changeset: Schema mismatch: Property ‘children’ in class ‘Folder’ is nullable on one side and not on the other.In my app, Folders can have many, and belong to many Folders. I’m not sure what this error means by “nullable on one side and not on the other”?MORE CONTEXT:This is the config I’m using when opening a Realm connection:And as for the Schema.Folder and Schema.Notes it’s referencing, those are defined as this:",
"username": "Annie_Sexton"
},
{
"code": "",
"text": "Hi Annie,I’m not sure if you’re still having this problem as this was posted last year.This error likely means that the property in question has been made Required either in the client code or in the Sync cloud schema. Please compare the two and see if they match.If anyone sees this error in Kotlin SDK please be aware of the bug below and try the included workaround.#### Goal\nUse private fields to store custom types or enums.\nFor example: \n\n```\n…//Foo.kt\n\nenum class MyEnum {\n FOO\n}\n\nopen class Foo: RealmObject() {\n\n private var _enum: String = MyEnum.FOO.name\n var enum: MyEnum\n get() { return MyEnum.valueOf(_enum) }\n set(value) { _enum = value.name }\n \n\n var myString : String = \"myString\"\n}\n```\n\n#### Actual Results\nWhen the Kotlin code gets compiled into Java bytecode, the field ```_enum``{{ won't be annotated with }}``@NotNull```. As a consequence the field _enum in the database will be nullable.\n\n#### Steps & Code to Reproduce\nCreate the file Foo.kt in Android Studio and run your project. Afterwards you can copy your ```myRealm.realm``{{ file from }}``/data/data/com.example.myProject/files/myRealm.realm``{{ via the Device File Explorer to your computer and open the file via RealmStudio. In the class Foo the field }}``_enum``{{ will not be }}``String``{{, but }}``String?```\n\n#### Version of Realm and tooling\nRealm version(s): 6.1.0\n\nRealm Sync feature enabled: No\n\nKotlin version 1.3.72\n\nAndroid Studio version: 3.6.3\n\nWhich Android version and device: Any device\nminSdkVersion 21\ntargetSdkVersion 29\ncompileSdkVersion 29\n\nAndroid Build Tools version: 29.0.3\n\nGradle version: 5.6.4Regards",
"username": "Mansoor_Omar"
},
{
"code": "",
"text": "Hey There!In my case just by adding manually the field to the schema required array made it work.\nThanks",
"username": "Valak"
}
] | Schema mismatch: Property X in class Y is nullable on one side and not on the other | 2021-06-16T15:30:44.629Z | Schema mismatch: Property X in class Y is nullable on one side and not on the other | 3,688 |
null | [
"python",
"sharding"
] | [
{
"code": "maxIncomingConnections: 500ERROR:\npymongo.errors.OperationFailure: cannot add session into the cache\nWhen checked db.serverStatus().logicalSessionRecordCache\n\"activeSessionsCount\" is around 300 which is still less than max incoming connections. Also current connections count is 28, not sure why application is unable to add new session. \n\n db.serverStatus().connections;\n {\n\"current\" : 28,\n\"available\" : 472,\n\"totalCreated\" : 315382,\n\"active\" : 2\n }\nmaxIncomingConnections",
"text": "I have a non-sharded mongo replication cluster with maxIncomingConnections: 500. There were no application code changes but recently app is facing the below error. What is the reason for this error and how to resolve the issue.As a quick turn around when maxIncomingConnections was increased to 1000, the error no longer occurred. But what are the other options I can work on to resolve this situation.",
"username": "shirisha_medi"
},
{
"code": "db.system.sessions.count()\n",
"text": "Based on the error and the code, it sounds like you are maxing out your connections. You might have a session leak where you are starting a large amount of connections.Can you check",
"username": "tapiocaPENGUIN"
},
{
"code": "",
"text": "db.system.sessions.count() value is 2840, I don’t see anywhere near 500 connections present in netstat on any of the machines.Is there a way I find the session leaks or the culprit sessions/processes?\nWhat are the remediation steps to handle such scenario from database end as well as application end ?",
"username": "shirisha_medi"
},
{
"code": "with MongoClient(...) as client:\n run_app(client)\n# or:\nclient = MongoClient(...)\ntry:\n run_app(client)\nfinally:\n client.close()\n>>> client.admin.command('getParameter', maxSessions=1)['maxSessions']\n1000000\n",
"text": "Sessions automatically timeout on the server after 30 minutes of inactivity (see https://www.mongodb.com/docs/v7.0/reference/parameters/#mongodb-parameter-param.localLogicalSessionTimeoutMinutes). It’s possible your application could be unintentionally creating sessions without closing them efficiently if the app’s MongoClient is not closed. Could you try calling MongoClient.close() or use MongoClient in a with block and report back if this fixes the problem?:Could you also check the maxSessions server parameter? It defaults to 1000000 so 2840 should not be a problem unless the maxSessions param was lowered. For example:",
"username": "Shane"
}
] | pymongo.errors.OperationFailure: cannot add session into the cache | 2023-08-21T19:38:54.914Z | pymongo.errors.OperationFailure: cannot add session into the cache | 639 |
null | [] | [
{
"code": "",
"text": "Continuing the discussion from Mongodb connection url:",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "@Justin_Jaeger , @John_Sewell , @Kushagra_Kesav thanks I was able to reaolve it the screenshot will explain better, the ones I commented are where the issues are coming from, i.e I did wrong import\n\nScreenshot_20230911_234710_Gallery720×1600 58.7 KB\n\n\nScreenshot_20230911_234700_Gallery720×1600 53.7 KB\n\nThough I also have some issues that I will share this night(now)",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "\nScreenshot_20230912_003455_Gallery720×1600 60.3 KB\n\nSo this is where the issue came from after running this command in my terminal editor",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "ld > workbox-window > [email protected]: this package has been deprecated\nwarning react-scripts > workbox-webpack-plugin > workbox-build > [email protected]: this package has been deprecated \nwarning react-scripts > workbox-webpack-plugin > workbox-build > [email protected]: this package has been deprecated\nwarning react-scripts > workbox-webpack-plugin > workbox-build > workbox-recipes > [email protected]: this package has been deprecated\nwarning react-scripts > workbox-webpack-plugin > workbox-build > workbox-precaching > [email protected]: this package has been deprecated\nwarning react-scripts > workbox-webpack-plugin > workbox-build > workbox-precaching > [email protected]: this package has been deprecated\nwarning react-scripts > workbox-webpack-plugin > workbox-build > workbox-precaching > [email protected]: this package has been deprecated\nwarning react-scripts > workbox-webpack-plugin > workbox-build > [email protected]: this package has been deprecated\nwarning react-scripts > workbox-webpack-plugin > workbox-build > workbox-range-requests > [email protected]: this package has been deprecated\nwarning react-scripts > @svgr/webpack > @svgr/plugin-svgo > svgo > [email protected]: Modern JS already guarantees Array#sort() is a stable sort, so this library is deprecated. See the compatibility table on MDN: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort#browser_compatibility\nwarning react-scripts > css-minimizer-webpack-plugin > cssnano > cssnano-preset-default > postcss-svgo > svgo > [email protected]: Modern JS already guarantees Array#sort() is a stable sort, so this library is deprecated. See the compatibility table on MDN: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort#browser_compatibility\nwarning react-scripts > workbox-webpack-plugin > workbox-build > @surma/rollup-plugin-off-main-thread > magic-string > [email protected]: Please use @jridgewell/sourcemap-codec instead\nwarning react-scripts > jest > jest-cli > jest-config > jest-environment-jsdom > jsdom > [email protected]: Use your platform's native performance.now() and performance.timeOrigin.\n[2/4] Fetching packages...\n[3/4] Linking dependencies...\nwarning \"react-scripts > eslint-config-react-app > [email protected]\" has unmet peer dependency \"@babel/plugin-syntax-flow@^7.14.5\".\nwarning \"react-scripts > eslint-config-react-app > [email protected]\" has unmet peer dependency \"@babel/plugin-transform-react-jsx@^7.14.9\".\nwarning \"react-scripts > react-dev-utils > [email protected]\" has unmet peer dependency \"typescript@>= 2.7\".\nwarning \"react-scripts > eslint-config-react-app > @typescript-eslint/eslint-plugin > [email protected]\" has unmet peer dependency \"typescript@>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta\".\n\nDone in 505.52s.\nGit repo not initialized Error: Command failed: git --version\n at checkExecSyncError (node:child_process:887:11) \n at execSync (node:child_process:959:15)\n at tryGitInit (C:\\Users\\HP\\Documents\\FULLSTACK-MERN-MOVIE-2023\\client\\node_modules\\react-scripts\\scripts\\init.js:46:5)\n at module.exports (C:\\Users\\HP\\Documents\\FULLSTACK-MERN-MOVIE-2023\\client\\node_modules\\react-scripts\\scripts\\init.js:276:7)\n at [eval]:3:14\n at Script.runInThisContext (node:vm:122:12)\n at Object.runInThisContext (node:vm:298:38)\n at node:internal/process/execution:83:21\n at 
[eval]-wrapper:6:24 {\n status: 1,\n signal: null,\n output: [ null, null, null ],\n pid: 3852,\n stdout: null,\n stderr: null\n}\n\nInstalling template dependencies using yarnpkg...\nyarn add v1.22.19\n[1/4] Resolving packages...\n[2/4] Fetching packages...\n[3/4] Linking dependencies...\nwarning \"react-scripts > eslint-config-react-app > [email protected]\" has unmet peer dependency \"@babel/plugin-syntax-flow@^7.14.5\".\nwarning \"react-scripts > eslint-config-react-app > [email protected]\" has unmet peer dependency \"@babel/plugin-transform-react-jsx@^7.14.9\".\nwarning \"react-scripts > react-dev-utils > [email protected]\" has unmet peer dependency \"typescript@>= 2.7\".\nwarning \"react-scripts > eslint-config-react-app > @typescript-eslint/eslint-plugin > [email protected]\" has unmet peer dependency \"typescript@>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta\".\nwarning \" > @testing-library/[email protected]\" has unmet peer dependency \"@testing-library/dom@>=7.21.4\".\n\nDone in 41.58s.\nRemoving template package using yarnpkg...\n\nyarn remove v1.22.19\n[1/2] Removing module cra-template...\n[2/2] Regenerating lockfile and installing missing dependencies...\nwarning \" > @testing-library/[email protected]\" has unmet peer dependency \"@testing-library/dom@>=7.21.4\".\nwarning \"react-scripts > eslint-config-react-app > [email protected]\" has unmet peer dependency \"@babel/plugin-syntax-flow@^7.14.5\".\nwarning \"react-scripts > eslint-config-react-app > [email protected]\" has unmet peer dependency \"@babel/plugin-transform-react-jsx@^7.14.9\".\nwarning \"react-scripts > react-dev-utils > [email protected]\" has unmet peer dependency \"typescript@>= 2.7\".\nwarning \"react-scripts > eslint-config-react-app > @typescript-eslint/eslint-plugin > [email protected]\" has unmet peer dependency \"typescript@>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta\".\nsuccess Uninstalled packages.\nDone in 16.29s.\n\nSuccess! Created client at C:\\Users\\HP\\Documents\\FULLSTACK-MERN-MOVIE-2023\\client\nInside that directory, you can run several commands: \n\n yarn start\n Starts the development server.\n\n yarn build\n Bundles the app into static files for production. \n\n yarn test\n Starts the test runner.\n\n yarn eject\n Removes this tool and copies build dependencies, configuration files\n and scripts into the app directory. If you do this, you can’t gcan’t go back!\n\nWe suggest that you begin by typing:\n\n cd C:\\Users\\HP\\Documents\\FULLSTACK-MERN-MOVIE-2023\\client \n yarn start\n\nHappy hacking!\nDone in 572.12s.\nPS C:\\Users\\HP\\Documents\\FULLSTACK-MERN-MOVIE-2023\\client>\n",
"text": "",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Pls why the warnings, though I upgraded my tar to the lastedt version making use of the tar version sig so pls is there anyway to clear the warnings, I thought upgrading my tar will clear it",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "@Ramachandra_Tummala Pls what could be wrong",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "@Andy_Rich1 @Bruno_Lopes , @Ceebig_US , @Dachary_Carey , @Eduardo_Kraus_Nunes , @Fabio_Cunha @Gabriele_Cimato , @Fabio_Cunha @Hamilton_Vera , @Ismael_Guilherme\nPls guys sorry foe the tags pls can I continue with the projects and avoid the warnings, that I sent to u as snippet code, does it matter I mind it bcos I want to continue with the projects and avoid it\nPls I would be grateful if u can provide my ways to fix it",
"username": "Somtochukwu_Kelvin_Akuche"
}
] | Fixed my code errors but the screenshot will explain better | 2023-09-11T22:39:22.075Z | Fixed my code errors but the screenshot will explain better | 452 |
null | [] | [
{
"code": "",
"text": "Please either create a new dedicated Atlas account for your MongoDB University learnings or use ‘atlas config set project_id ’ in the terminal to switch the Atlas project with your Atlas cluster, myAtlasClusterEDU. Please visit ‘Getting Started with MongoDB Atlas’ MongoDB Courses and Trainings | MongoDB University, if you are unsure of the next steps. this is the rest of the message i achieved every step and logged in into my account but the terminal does not let me cross anymore idk why can someone help me i have to finish this course",
"username": "Nancy_Valdez"
},
{
"code": "myAtlasClusterEDUMDB_EDUMDB_EDU",
"text": "Hi Nancy,Thank you for reaching out.The myAtlasClusterEDU cluster is under the MDB_EDU project in you Atlas Account. The project can be selected from the dropdown menu on the left of the navigation bar (under the Atlas logo).If you can’t find the MDB_EDU project, please try the lab again. If the issue still persist please email the MongoDB University team at [email protected] as they will be in a better position to assist.Thanks!",
"username": "Davenson_Lombard"
},
{
"code": "",
"text": "yes i did selected once i get the authorization but i did it in the practice lab as it appeared me the option, anyway ill do it under my atlas account in my browser to see if i can solve it that way thanks for the early answer ",
"username": "Nancy_Valdez"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to locate Atlas cluster, myAtlasClusterEDU in the current Atlas project | 2023-09-11T01:56:53.284Z | Unable to locate Atlas cluster, myAtlasClusterEDU in the current Atlas project | 394 |
null | [
"aggregation",
"time-series"
] | [
{
"code": "{ ts: 1, x: 1 } // ts: timestamp, x: integer value\n{ ts: 2, x: 2 }\n{ ts: 3, x: 2 }\n{ ts: 4, x: 2 }\n{ ts: 5, x: 1 }\n{ ts: 6, x: 1 }\n{ ts: 7, x: 4 }\n{ ts: 8, x: 4 }\n{ ts: 9, x: 4 }\n{ ts: 10, x: 5 }\n{ ts: 12, x: 5 }\n{ ts: 13, x: 1 }\n{ ts: 14, x: 1 }\n{ ts: 15, x: 1 }\n{ ts: 16, x: 1 }\n{ ts: 17, x: 2 }\nx2 < ts < 16{ ts: 1, x: 1 } // filtered out\n{ ts: 2, x: 2 } // filtered out\n{ ts: 3, x: 2 } *\n{ ts: 4, x: 2 } // removed, because no change in `x`\n{ ts: 5, x: 1 } *\n{ ts: 6, x: 1 } // removed, because no change in `x`\n{ ts: 7, x: 4 } *\n{ ts: 8, x: 4 } // removed, because no change in `x`\n{ ts: 9, x: 4 } // removed, because no change in `x`\n{ ts: 10, x: 5 } *\n{ ts: 12, x: 5 } // removed, because no change in `x`\n{ ts: 13, x: 1 } *\n{ ts: 14, x: 1 } // removed, because no change in `x`\n{ ts: 15, x: 1 } * (kept to mark the end of the timeseries)\n{ ts: 16, x: 1 } // filtered out\n{ ts: 17, x: 2 } // filtered out\n{ ts: 3, x: 2 }\n{ ts: 5, x: 1 }\n{ ts: 7, x: 4 }\n{ ts: 10, x: 5 }\n{ ts: 13, x: 1 }\n{ ts: 15, x: 1 }\naggregationprevioustsaggregationaggregation$function",
"text": "I’m searching for an algorithm in MongoDB concerning the following problem. Assume the following data (simplified):I’m looking for an algorithm that compresses the data points based on x in the timeseries, essentially converting the timeseries into a series of change points rather than returning all individual data points.The output would be the following (assuming that I prefiltered the data for 2 < ts < 16):Final output (each data marks the first point of value change):My implementation ideas and attempts include the following:Using an aggregation for each data point, look up the previous data point in the collection and merge the previous ts with the current one. | 1M data points, 5 minutes of runtime, frequent OOMsUsing an aggregation group all data together, run a pairwise reduce, detect the changes, and then filter out elements without a change. | 1M data points, 5 minutes of runtime, frequent OOMs(in progress) Using an aggregation group all data together and run a custom $function that scans the data and detects the change points. | (in progress)Should MongoDB be used for these algorithms (it seems not), or am I missing something?",
"username": "Zoltan_Zvara"
},
{
"code": "",
"text": "This does seem similar to a post a while back, I came up with an abomination of a solution but the one on SO was performant.\nThe idea is kind of the same where you want to watch for groups of data that are sequential, i.e. next to each other…I’ve not played with time series data in Mongo so a similar type of approach may not be suitable…",
"username": "John_Sewell"
},
{
"code": "aggregationpreviousts",
"text": "It would be great if you could share the complete aggregations you tried. Especially forUsing an aggregation for each data point, look up the previous data point in the collection and merge the previous ts with the current one. | 1M data points, 5 minutes of runtime, frequent OOMsbecause I feel it is the way to go and very surprised that you get OOM.",
"username": "steevej"
},
{
"code": "$functions[\n {\n $sort: {\n ts: 1,\n }, // Sort the documents by ts in case they are not sorted\n },\n {\n $set: {\n prevValue: {\n $function: {\n body: function (\n currentValue,\n prevValue\n ) {\n return currentValue - prevValue;\n },\n args: [\"$x\", 0],\n // 0 for the initial previous value\n lang: \"js\",\n },\n },\n },\n },\n {\n $match: {\n $expr: {\n $ne: [\"$x\", \"prevValue\"],\n },\n },\n },\n {\n $project: {\n ts: 1,\n x: 1\n },\n },\n {\n $out:\n \"agg-out\",\n },\n]\n",
"text": "Using mainly the $functions operator, the pipeline would look something like this:",
"username": "taglas_tamas"
}
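For reference, a sketch of a different approach not taken in this thread: on MongoDB 5.0+ the change points can be detected in a single pass with $setWindowFields and $shift, avoiding both the self-$lookup and grouping the whole series into one document. Collection and field names follow the example above; keeping the final point of the series (ts: 15) would need an extra step.

```js
db.readings.aggregate([
  { $match: { ts: { $gt: 2, $lt: 16 } } },               // prefilter the window of interest
  { $setWindowFields: {
      sortBy: { ts: 1 },
      output: { prevX: { $shift: { output: "$x", by: -1, default: null } } }
  }},
  { $match: { $expr: { $ne: ["$x", "$prevX"] } } },       // keep only points where x changed
  { $project: { _id: 0, ts: 1, x: 1 } }
])
```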
] | How to run a compression aggregation on timeseries data? | 2023-09-11T17:22:19.812Z | How to run a compression aggregation on timeseries data? | 341 |
null | [
"aggregation",
"queries",
"data-modeling"
] | [
{
"code": "_id: ObjectId,\ntweets: [tweetObjectId1, tweetObjectId2, ...],\nfollowing: ['user1, 'user2, ...]\ndb.users.aggregate([\n <!-- Get document of our user -->\n { $match: { username: username}},\n <!-- Remove irrelevant information -->\n { $project: { tweets: 1, following: 1 }},\n <!-- Lookup to get tweets array from Following -->\n { $lookup: {\n from: \"users\",\n localField: \"following\",\n foreignField: \"username\",\n <!-- Keep only tweets array from following -->\n pipeline: [\n { $project: { tweets: 1 }}\n ],\n as: \"data\"\n }},\n <!-- Remove following from original user document now that we don't need it anymore -->\n { $project: { following: 0 }}\n])\n{\n _id: ObjectId(\"64f692b507474379dca9374d\"),\n tweets: [\n ObjectId(\"64fc126f0fb11976be60988a\"),\n ObjectId(\"64fc127d0fb11976be60988d\"),\n ObjectId(\"64fc12890fb11976be609890\")\n ],\n data: [\n {\n _id: ObjectId(\"64f982c2fc1555b92020cac7\"),\n tweets: [\n ObjectId(\"64fc120d0fb11976be609882\"),\n ObjectId(\"64fc121c0fb11976be609885\")\n ]\n },\n {\n _id: ObjectId(\"64f98ec10fb11976be609865\"),\n tweets: [\n ObjectId(\"64fc11c50fb11976be609879\"),\n ObjectId(\"64fc11e50fb11976be60987c\")\n ]\n }\n ]\n}\n$addToSet$unwind{ \n $group: { \n _id: null,\n owner: {\n $addToSet: '$tweets'\n },\n following: {\n $addToSet: '$data.tweets'\n }\n }\n },\n { $project: {\n allTweets: {\n $concatArrays: ['$owner', '$following']\n }\n }},\n { $unwind: '$allTweets' },\n { $unwind: '$allTweets' },\n { $unwind: '$allTweets' },\n {\n $group: {\n _id: '$allTweets'\n }\n }\nObejctId_id{\n _id: ObjectId(\"64fc121c0fb11976be609885\")\n}\n{\n _id: ObjectId(\"64fc121c0fb11976be609886\")\n}\n{\n _id: ObjectId(\"64fc121c0fb11976be609887\")\n}\n{\n _id: ObjectId(\"64fc121c0fb11976be609888\")\n}\n{\n _id: ObjectId(\"64fc121c0fb11976be609889\")\n}\n",
"text": "Hi! I’m working on a Twitter clone and having some trouble with my aggregation for fetching a user feed’s tweets. My DB has two collections, one for tweets and one for users.User Model (irrelevant info removed)For a start, I’m trying to get the user’s as well as their following’s tweets in one array. This is my first time using aggregate and this is what I have so far:Return Value:This is close but I’m having trouble flattening the arrays especially since they’re nested. I tried something with $addToSet for the arrays inside data and concatenating them but I ended up having to chain 3 $unwind’s to get something desirable which feels hacky in the worst way.Which returned individual documents containing just each tweet with its ObejctId as the _id key:Could someone guide me towards the proper way of doing this? Thanks!",
"username": "Nick_Teo"
},
{
"code": "",
"text": "Someone else may provide more specific/deep help but I wanted to flag Asya’s comment on Efficient Structure for Social Media Feeds (fan-out on write)? - #3 by Asya_Kamsky as it may include some helpful links / a reference architecture related to what you’re doing",
"username": "Andrew_Davidson"
}
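A sketch of one way to flatten the result without chaining $unwind stages: since data.tweets is an array of arrays (one per followed user), $reduce can flatten it and $concatArrays can append the user's own tweets. This is a continuation of the pipeline shown in the question, using the same field names.

```js
{ $project: {
    allTweets: {
      $concatArrays: [
        "$tweets",
        { $reduce: {
            input: "$data.tweets",                 // array of arrays, one per followed user
            initialValue: [],
            in: { $concatArrays: ["$$value", "$$this"] }
        }}
      ]
    }
}}
```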
] | Querying Twitter Styled Schemas | 2023-09-12T14:34:39.375Z | Querying Twitter Styled Schemas | 373 |
null | [] | [
{
"code": "",
"text": "Hello :Im trying to find if is it possible to have a mongo database running on a NAS .\nIf yes is there any tutorial so i can read it ?\nIm using community version .Best regards",
"username": "Bruno_Lopes"
},
{
"code": "",
"text": "It all depends on what NAS you have and what operating system it’s running. Some are locked down and some are more flexible, some have interfaces for running docker as well.For example:Learn how to install MongoDB on your Synology NAS with our step-by-step guide.To be honest just google your NAS and MongoDB and see what pops up to start with.",
"username": "John_Sewell"
},
{
"code": "",
"text": "i was thinking to get one asustor to work with nvme ssd … i will check if there is any tutorial for that modelThanks",
"username": "Bruno_Lopes"
},
{
"code": "",
"text": "Looks like they have an app download on their pages BUT it’s an ancient version:https://www.asustor.com/app_central/app_detail?id=1087&type=#:~:text=Description,License%20and%20the%20Apache%20License.Looks like those boxes support using Portainer to manage a local docker engine:Docker is a set of platform as a service products that uses OS-level virtualization to deliver software in packages called containers. Docker virtualizes an interface exactly the same as the underlying hardware functions, allowing you to quickly...So if you want to use a more modern install that may be way to go, Mongo have an official image from what I can see:https://hub.docker.com/_/mongo",
"username": "John_Sewell"
},
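For reference, a minimal sketch of running the official image under Docker on a NAS that supports it (the volume path and credentials are placeholders). One caveat worth noting: MongoDB 5.0+ images require a CPU with AVX support, which many NAS processors lack; on such hardware mongo:4.4 is the newest image that will start.

```bash
docker run -d --name mongodb \
  -p 27017:27017 \
  -v /volume1/docker/mongodb:/data/db \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=changeme \
  mongo:6.0
```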
{
"code": "",
"text": "I thought I recognised the form factor of some of those, LTT did a video of on them",
"username": "John_Sewell"
},
{
"code": "",
"text": "its exactly what i mean , just dont know if it works , i was in work and didnt make any browsing about it .Thanks",
"username": "Bruno_Lopes"
}
] | Mongo working in NAS | 2023-09-11T19:01:38.540Z | Mongo working in NAS | 545 |
null | [] | [
{
"code": "",
"text": "Please, who can help me, I need a query in mongodb to execute with a date of the last 4 days",
"username": "Alvaro_Gallardo"
},
{
"code": "",
"text": "something like that, but it extracts the last 3 days{\n$expr:{\n$eq:[\n‘$accountingDate’,\n{\n$dateToString:{\ndate:{\n$dateAdd:{\nstartDate:‘$$NOW’,\nunit:‘day’,\namount: -0\n}\n},\nformat:‘%Y-%m-%d’\n}\n}\n]\n}\n}",
"username": "Alvaro_Gallardo"
}
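For anyone landing here, a hedged sketch of a "last 4 days" filter. The field name is taken from the snippet above; the second variant assumes MongoDB 5.0+ (because of $dateSubtract) and that accountingDate is stored as a 'YYYY-MM-DD' string, as the $dateToString comparison suggests.

```js
// if the field is a real Date:
const cutoff = new Date();
cutoff.setDate(cutoff.getDate() - 4);
db.collection.find({ accountingDate: { $gte: cutoff } })

// if the field is a "YYYY-MM-DD" string, convert it before comparing:
db.collection.find({
  $expr: {
    $gte: [
      { $dateFromString: { dateString: "$accountingDate" } },
      { $dateSubtract: { startDate: "$$NOW", unit: "day", amount: 4 } }
    ]
  }
})
```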
] | Query MONGO fecha | 2023-09-12T16:31:19.274Z | Query MONGO fecha | 296 |
null | [] | [
{
"code": "",
"text": "I am almost 50. However, I have a desire to learn. I feel MongoDB is the way to go. Can anyone help me out with logic? BTW, I am not into fantasy dragon stuff. I prefer John Cage i-ching operations to teach.",
"username": "Marcus_W"
},
{
"code": "",
"text": "I have no problem learning MongoDB inside out if there is an instructor.",
"username": "Marcus_W"
}
] | Who is ready to take on this young padawan? | 2023-09-12T14:50:58.921Z | Who is ready to take on this young padawan? | 327 |
null | [] | [
{
"code": "",
"text": "“After finishing with your Quickstart security database”Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.Sorry, I am lost. Can anyone help?",
"username": "Marcus_W"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can someone triangulate this so I can understand it? | 2023-09-12T14:46:43.897Z | Can someone triangulate this so I can understand it? | 345 |
null | [] | [
{
"code": "",
"text": "Hello, I am trying to go over the videos for M103. I am finding the hand gestures rather distracting and would prefer more of a powerpoint or textbook format where I can go at my own pace. I am finding I have to keep stopping and rewinding to transcribe what is being said. Are there .pdf files I can download and read about this on a tablet or ebook reader? I would even accept a book if it wasn’t priced too high.",
"username": "Marcus_W"
},
{
"code": "",
"text": "Hi Marcus,Thank you for reaching out.Unfortunately, the course is not available in a different format. That said, I can recommend the different courses available on learn.mongodb.com as great reference do learn different part of the product suite. Additionally, the book MongoDB: The Definitive Guide: Powerful and Scalable Data Storage also covers the content of the M103 course. Keep in mind that the book was released in 2019. New features are therefore not covered.If you have more questions about MongoDB University or need assistance with one of the course, don’t hesitate to send an email to the team at [email protected] you,Davenson Lombard",
"username": "Davenson_Lombard"
},
{
"code": "",
"text": "Release a 2023 book without distracting fingers.",
"username": "Marcus_W"
},
{
"code": "",
"text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Online lecture material (videos) | 2023-09-11T17:46:39.073Z | Online lecture material (videos) | 372 |
null | [] | [
{
"code": "",
"text": "Hello,I need to call a REST API to another service from within an App Service’ s Function.\nIf anyone knows how to do this, could you please tell me how?Regards",
"username": "Enoooo"
},
{
"code": "context.httpnpmnode",
"text": "Hi @Enoooo,The easiest way is to use context.http, otherwise you may want to try 3rd party npm packages, however you’ll need to check which version is compatible, as the node version emulated by the Functions environment is v10.",
"username": "Paolo_Manna"
},
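A minimal sketch of the context.http approach mentioned above (the URL and field names are illustrative, not from this thread):

```js
exports = async function () {
  const response = await context.http.get({ url: "https://api.example.com/items" });
  // the response body is binary; decode it to text before parsing
  const items = JSON.parse(response.body.text());
  return items;
};
```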
{
"code": "",
"text": "Hi Paolo,Thank you for your reply.\nSince context.http is likely to be abolished in the future,\nI decided to install and use axios.Regards",
"username": "Enoooo"
}
] | How to call REST API from within a App Service's Function | 2023-01-06T10:53:53.783Z | How to call REST API from within a App Service’s Function | 1,434 |
[
"node-js"
] | [
{
"code": "Documents\\FULLSTACK-MERN-MOVIE-2022\\server\\src\\routes\\index.js\n at new NodeError (node:internal/errors:405:5)\n at finalizeResolution (node:internal/modules/esm/resolve:226:11)\n at moduleResolve (node:internal/modules/esm/resolve:838:10)\n at defaultResolve (node:internal/modules/esm/resolve:1036:11)\n at DefaultModuleLoader.resolve (node:internal/modules/esm/loader:251:12)\n at DefaultModuleLoader.getModuleJob (node:internal/modules/esm/loader:140:32)\n at ModuleWrap.<anonymous> (node:internal/modules/esm/module_job:76:33)\n at link (node:internal/modules/esm/module_job:75:36) {\n code: 'ERR_MODULE_NOT_FOUND'\n}\n\nNode.js v20.5.0\n[nodemon] app crashed - waiting for file changes before starting...\n",
"text": "Good day hope u are having a nice day and a blessed month\nMy issue is that after clicking ctrl+ s, to run my code and connect to MongoDB on port 5000 it showed some errors, the images below will be a guide to u\nBut it keeps on showing this error:\n001720×1600 83.5 KB\n\n002720×1600 122 KB\n\n003720×1600 114 KB\n\n\n003720×1600 108 KB\n\n004720×1600 115 KB\n\n005720×1600 90.6 KB\n\n006720×1600 125 KB\n",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "import express from \"express\";\nimport userRoute from \"./user.route.js\";\nimport mediaRoute from \"./media.route.js\";\nimport personRoute from \"./person.route.js\";\nimport reviewRoute from \"./review.route.js\";\n./user.route.jsnode_modulesnpm install",
"text": "Hey @Somtochukwu_Kelvin_Akuche,Welcome to the MongoDB Community!Thank you for sharing the error stack trace.I would recommend posting the code snippet as text instead of an image, as it will make it easier for the community to read and assist. However, from what I can see in the image, it looks like there is a relative path error in your import.Here are a few things you can try to resolve this:Adding more logging and verifying that the path is correct would be my recommended first step. Please feel free to post a follow-up with the specific error text if you are still stuck.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Okay though I reran the yarn start and it went smoothly again so I think I solved that but for the logging something pls can u tell me how þo go about it?\nPls how will I know the path is exported properly the user/…\nThough it exists user.route.js, I will still do more work on it I think Ibknow where the issues for and for the code snippets I will do that 11pm UTC and send to u",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "import express from \"express\"; \nimport userRoute from \"./user . route. js\";\nimport mediaRoute from \"./media.route. js\";\nimport personRoute from \"./person.route. js\";\nimport reviewRoute from \"./review. route. js\"; \nconst router = express. Router( ); \nrouter .use(\"/user\", userRoute);\nrouter .use(\"/person\", personRoute) ;\n\nrouter .use(\"/reviews\", reviewRoute);\n\nImport express from \"express\" \nimport [body) from\"express-validator\" \nimport favoriteController from \" .. /controllers/favorite. controller. js\"\nimport userController from \" .. /controllers/user. controller.js\" \nimport requestHandler from \" .. /handlers/request. handler.js\"\nimport userModel from \" .. /models/user.model.js\"\n\nimport tokenMiddleware from \" .. /middlewares/token.middleware. js\"\n const router = express.Router();\n\nrouter .post ( \n \"/signup\", \n body (\"username\") \n exists() .withMessage(\"username is required\")\n\n .isLength([ min: 8 }) .withMessage(\"username minimum 8 characters\")\n\n .custom(async value => { \n const user - await userModel.findone(( username: value });\n if (user) return Promise.reject(\"username already used\");\n });\n\n body(\"password\")\n\n .exists/1.withMessage(\"password is required\")\n .isLength({ min: 8 } ) .withMessage( \"password minimum 8 characters\"), \n body (\"confirmPassword\")\n\n .exists().withMessage(\"confirmPassword is required\")\n\n .isLength({ min: 8 }) .withMessage(\"confirmPassword minimum 8 characters\")\n\n .custom((value, { req }) -> { \n if (value ! == req.body .password) throw new Error(\"confirmPassword not match\"); \n return true;\n }),\n \n body(\"displayName\")\n\n .exists() .withMessage(\"displayName is required\")\n\n .isLength({ min: 8 }) .withMessage(\"displayName minimum 8 characters\"), \n requestHandler . validate,\n userController .signup\n),\n\nrouter .post (\n\n \"/signin\",\n body(\"username\")\n .exists() .withMessage(\"username is required\")\n\n .isLength({ min: 8 }) .withMessage(\"username minimum 8 characters\"), \n body (\"password\")\n\n .exists() .withMessage(\"password is required\")\n\n .isLength({ min: 8 }) . withMessage(\"password minimum 8 characters\"), \n requestHandler .validate, \n userController.signin\n);\nrouter.put(\n\n \"/update-password\",\n tokenMiddleware.auth,\n body(\"password\")\n\n .exists() .withMessage(\"password is required\")\n\n .isLength({ min: 8 }) . withMessage( \"password minimum 8 characters\"), \n body( \"newPassword\")\n\n .exists() .withMessage(\"newPassword is required\")\n\n .isLength({ min: 8 }) . withMessage(\"newPassword minimum 8 characters\"),\n body(\"confirmNewPassword\")\n\n .exists() .withMessage(\"confirmNewPassword is required\")\n\n .isLength({ min: 8 }) . withMessage(\"confirmNewPassword minimum 8 characters\") \n .custom((value, { req }) => { \n if (value ! == req. body . newPassword) throw new Error(\"confirmNewPassword not match\"); \n return true;\n }),\n requestHandler . validate, \n userController .updatePassword\n ), \n\nrouter .get( \n \"/info\",\n tokenMiddleware.auth, \n userController .getInfo,\n);\n\nrouter .get (\n\n \"/favorites\",\n tokenMiddleware.auth,\n favoriteController.getFavoritesofUser\n);\nrouter . 
post ( \n \"/favorites\", \n tokenMiddleware.auth, \n body(\"mediatype\")\n .exists() .withMessage(\"mediatype is required\")\n\n .custom(type => [\"movie\", \"tv\"].includes(type)) .withMessage(\"mediaType invalid\"), \n body(\"mediaId\")\n\n .exists().withMessage(\"mediaId is required\")\n\n .isLength(( min: 1 )) .withMessage(\"mediaId can not be empty\"),\n body ( \"mediaTitle\")\n\n .exists().withMessage(\"mediaTitle is required\"), \n body ( \"mediaPoster\")\n .exists( ) .withMessage( \"mediaPoster is required\").\n body ( \"mediaRate\")\n .exists() .withMessage(\"mediaRate is required\"), \n favoriteController.addFavorite\n);\nrouter .delete(\n \"/ favorites/ : favoriteId\", \n tokenMiddleware. auth, \n favoriteController.removeFavorite\n);\n\nexport default router;\n\n",
"text": "@Kushagra_Kesav below are the snipppet for all of them accordingly2)user.route.jsThere might be a little mistake maybe some wrong punctuation marks or the spacing even the brackets I used but from the photos I sent b4 should guide u, though I cross-checked b4 sending",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Hi pls is there anyone that can look into thia for me @Justina_Ackah , @Comma_Pump_Sound-Dev , @Jad_Bsaibes , @John_Sewell , @Prakul_Agarwal @Qdev , @Vishnu_Satis @WONG_TUNG_TUNG , @Xinying_Hu , @Xin_Wen_Yap, pla the person that answered pr replied to my lasst comments he hasn’t replied me for up to 5 to 6 days npw, I understand he is busy I wpuld appreciate it if u guys can help me or refer me to where I can get help, and I would be open to explain to u incase u don’t understand my errors",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Looks like an issue with inport only, can you post your project structure here, the folder structure, it will help in understanding the issue",
"username": "Vishnu_Satis"
},
{
"code": "",
"text": "As others have said looks like a reference error, either in referencing your other code or a library you depend on.If you’ve added all dependencies, or taken some and not yet restored them from the package with npm install, I’d try commenting out vast swathes of your code and getting the most basic code path working and then build up until you find the error.Or upload all your code to GIT and it would be simpler to take a peek.You should be able to comment out everything, and get it working, then introduce one route and gradually add more etc. until you find the line that’s causing the issue.",
"username": "John_Sewell"
},
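A sketch of that "comment out and add back" approach for the routes index, assuming ESM ("type": "module" in package.json) and the file layout shown earlier. Re-enable one import at a time; the one that brings back ERR_MODULE_NOT_FOUND points at the path that doesn't resolve.

```js
// src/routes/index.js -- minimal version to start from
import express from "express";
import userRoute from "./user.route.js";      // re-add the other route imports one by one
// import mediaRoute from "./media.route.js";
// import personRoute from "./person.route.js";
// import reviewRoute from "./review.route.js";

const router = express.Router();
router.use("/user", userRoute);

export default router;
```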
{
"code": "",
"text": "Okay I will do that, that should be the folders section in my editor right?",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "The whole thing, including the root as that has the config files in it.Be wary of any logons / passwords / server addresses and blank them out before you upload!",
"username": "John_Sewell"
},
{
"code": "",
"text": "Okay thanks , but when u said dependencies , though I was confused of the one to add so that it won’t affect anything, later when am running the codes, though I couldn’t use npm install in the beginning bcos it kept showing lock file found and no license field so I had to reconfigure some files and added the license and remove package. Lock.json file b4 the error was removed but for that dependencies I didn’t add much just a little to my package. Json file.\nI will try commenting the codes to see what’s wrong but I think it should be when I used import from a particular file that’s when the error came by, I will also upload my whole files in Github so a peek can be given",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Ooh okay u mean b4 I upload to github right?",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Ooh sorry to here(mongodb)",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "Yes, easiest is probably just copy your whole folder and then redact that copy so you don’t lose anything.",
"username": "John_Sewell"
},
{
"code": "",
"text": "Okay will do that, thanks for your time",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "\nPls this is the link to another post, I couldn’t post it here though bcos they said “topic has been closed by mongodb” the time I wanted to post here, my issue was the warning after running a command and I also stated what caused my issue b4 in this post I tagged u here though it was wrong import but the screenshot will explain more",
"username": "Somtochukwu_Kelvin_Akuche"
},
{
"code": "",
"text": "@Vishnu_Satis I tagged u on a post on continuity on the post",
"username": "Somtochukwu_Kelvin_Akuche"
}
] | How to fix import in Node.js | 2023-09-07T04:14:51.683Z | How to fix import in Node.js | 911 |
|
[
"crud"
] | [
{
"code": "db.getCollection(\"cs\").updateOne(\n {},\n [\n {\n $set: {\n \"forwardData.clients.$[].relayedEndpoints\": [{\n type: \"vehicle\",\n endpoint: \"$forwardData.vehicle.path\"\n }]\n }\n }\n ]\n);\n",
"text": "Hello guys,\nI’m struggling a bit trying to update my document on Mongo 4.4.Is pretty straightforward:Running the query above, i’m facing this error:\nInvalid $set :: caused by :: FieldPath field names may not start with ‘$’.After some research, i saw some forum answers to similar problems to remove the brackets from the second argument. It doesn’t throw any errors doing this, but the value are being saved as a string with the field name instead of his value.\nScreenshot 2023-09-12 at 08.23.32815×115 11 KB\nCould you guys tell me what i’m doing wrong? I tried with $push instead of $set too, and got the same result.",
"username": "Maykel_Esser"
},
{
"code": "db.getCollection(\"cs\").updateOne(\n {},\n [{\n $set: {\n \"forwardData.clients.relayedEndpoints\": {\n type: \"vehicleMakes\",\n endpoint: \"$forwardData.vehicleMakes.path\"\n }\n }\n }]\n);\n",
"text": "The problem was the array after the clients object…Solved with this",
"username": "Maykel_Esser"
},
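For the original goal (updating every element of the clients array, where the $[] positional operator isn't available inside a pipeline-style update), a hedged sketch using $map and $mergeObjects with the field names from the question:

```js
db.getCollection("cs").updateOne({}, [
  {
    $set: {
      "forwardData.clients": {
        $map: {
          input: "$forwardData.clients",
          as: "client",
          in: {
            $mergeObjects: [
              "$$client",
              { relayedEndpoints: [ { type: "vehicle", endpoint: "$forwardData.vehicle.path" } ] }
            ]
          }
        }
      }
    }
  }
])
```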
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Trying to update my collection with a new field containing the value of other field are being stored as the name field | 2023-09-12T11:24:00.850Z | Trying to update my collection with a new field containing the value of other field are being stored as the name field | 310 |
|
null | [
"serverless",
"api"
] | [
{
"code": "",
"text": "This my queryexports = function(arg) {\nconst mongodb = context.services.get(““);\nconst mycollection = mongodb.db(””).collection(“********”);\nreturn mycollection.findOne({user_name: arg },{_id:0, password: 1 });\n}When i test it in Testing Console it works good !\nbut when test it in postman it gets null !",
"username": "Pola_Khalil"
},
{
"code": "",
"text": "Getting the same.\nCan someone from mongodb please take a look?\nI can’t get any endpoint that requires a parameter (function argument) to work.\nEndpoints that don’t require a parameter work fine.",
"username": "Crypto_Nympho"
},
{
"code": "exports = function(request) {\nconst mongodb = context.services.get(““);\nconst mycollection = mongodb.db(””).collection(“********”);\nreturn mycollection.findOne({user_name: request.query.arg},{_id:0, password: 1 });\n}\n",
"text": "I think I figured it out.\nTry this:I noticed that linking a function to an https endpoint doesn’t mean that the api parameters are parsed automatically. The https endpoint send the request object to your function, which you need to access manually to find your api parameters.\nIn the code above, “arg” is the parameter name you use in the fetch link.",
"username": "Crypto_Nympho"
}
] | Can't use api parameter! | 2023-09-09T13:20:36.202Z | Can’t use api parameter! | 444 |
null | [
"node-js",
"schema-validation"
] | [
{
"code": "colors_availablecolors_available: {\n bsonType: 'array',\n items: [\n {\n bsonType: 'object',\n additionalProperties: false,\n uniqueItems: true,\n minItems: 1,\n required: [\n 'color',\n 'stock'\n ],\n properties: {\n color: {\n bsonType: 'string'\n },\n stock: {\n bsonType: 'int'\n }\n }\n }\n ]\n }\ncolors_available\"colors_available\": [\n {\n \"color\": \"Green\",\n \"stock\": 42\n },\n {\n \"color:\": \"Black\",\n \"stock\": 13\n },\n {\n \"color\": \"White\",\n \"stock\": 21\n }\n ]\n",
"text": "I had built an API that was working with NodeJS and express, ones I felt confident with it I learnt Mongo in order to get closer to the MERN stack. Everything is working smoothly and now, just for learning purposes, I’m trying to enforce a schema so that post and put methods are somewhat limited.My schema works fine except for the colors_available array where the criteria that should be filtering the whole, works only for the first item of the array: (please notice that ‘type’ has been replaced with ‘bsonType’ as it’s, again, working with MongoDB)As a reference, this is what a document’s colors_available field looks like: (the ‘colors_available’ array could be of any length >=1)I have tried removing the squared brackets at the items field but it just broke everything…Any suggestions are more than welcome! ",
"username": "Marco_Pagotto"
},
{
"code": "additionalProperties:false_id_iddb.createCollection(\"colors\", {\n validator: {\n $jsonSchema: {\n bsonType: \"object\",\n title: \"Colors Available\",\n required: [\"colors_available\"],\n properties: {\n \"colors_available\": {\n \"bsonType\": \"array\",\n \"minItems\": 1,\n \"uniqueItems\": true,\n \"items\": {\n \"bsonType\": \"object\",\n \"additionalProperties\": false,\n \"required\": [\n \"color\",\n \"stock\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"color\": {\n \"bsonType\": \"string\"\n },\n \"stock\": {\n \"bsonType\": \"int\"\n }\n }\n }\n }\n }\n }\n }\n})\n\"color:\"db.colors.insertOne({\n \"colors_available\": [\n {\n \"color\": \"Green\",\n \"stock\": 42\n },\n {\n \"color\": \"Black\",\n \"stock\": 13\n },\n {\n \"color\": \"White\",\n \"stock\": 21\n }\n ]\n \n })\n",
"text": "I also replied to your SO post.The array assertions are at the wrong level.One thing to note is the use of additionalProperties:false causes a validation error with MongoDB schemas if you don’t define _id in your schema definition. This is because all documents are inserted with _id by default if you omit this property from your document insertion.source: https://www.mongodb.com/docs/manual/core/schema-validation/specify-json-schema/json-schema-tips/#_id-field-and-additionalproperties--falseThe example you provided has an error in the second schema \"color:\". The colon should be outside of the quotes.A valid payload for this schema follows:",
"username": "jeremyfiel"
},
{
"code": "colors_available{\n $jsonSchema: {\n bsonType: 'object',\n additionalProperties: false,\n required: [\n '_id',\n 'brand',\n 'model',\n 'price',\n 'colors_available',\n 'autopilot'\n ],\n properties: {\n _id: {\n bsonType: 'objectId'\n },\n brand: {\n bsonType: 'string'\n },\n model: {\n bsonType: 'string'\n },\n price: {\n bsonType: 'decimal'\n },\n colors_available: {\n bsonType: 'array',\n items: {\n bsonType: 'object',\n additionalProperties: false,\n uniqueItems: true,\n minItems: 1,\n required: [\n 'color',\n 'stock'\n ],\n properties: {\n color: {\n bsonType: 'string'\n },\n stock: {\n bsonType: 'int'\n }\n }\n }\n },\n autopilot: {\n bsonType: 'bool'\n }\n }\n }\n}\ntypo{\n \"_id\": {\n \"$oid\": \"64f84820853dde3e3c6b10f2\"\n },\n \"brand\": \"Maserati\",\n \"model\": \"Granturismo\",\n \"price\": {\n \"$numberDecimal\": \"180000.00\"\n },\n \"colors_available\": [\n {\n \"color\": \"Blue\",\n \"stock\": 2\n },\n {\n \"stock\": 4,\n \"color\": \"White\"\n }\n ],\n \"autopilot\": false\n}\n",
"text": "Hey, thanks for taking time in answering my question \nThe schema provided includes just the colors_available property of a major schema (for clarity purposes), if anyone got back to the post later, here is the full schema in case they were interested in it:In my case, the error was caused by the typo that you have noticed, in fact, that’s the reason that made my schema break when I tried to remove the squared brackets.If anyone were interested, here is also a correct example of a document:",
"username": "Marco_Pagotto"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | jsonSchema working only for the first item of an array of objects | 2023-09-10T11:58:15.282Z | jsonSchema working only for the first item of an array of objects | 378 |
null | [
"node-js",
"transactions"
] | [
{
"code": "T1 reads x = 1\nT2 reads x = 1\nT1 writes x = 0\nT2 writes x = 0 \n",
"text": "I have a value called x in my database and two transactions, T1 & T2. Assume both transactions use the multi-document ACID API MongoDB provides.Each transaction reads the value of x and decrements it by 1. When reading, the code checks that the value of x is greater than 0.Now, the value of x is 1 and T1 & T2 are concurrent. Assume the following execution threadIn this situation will T2 detect that the value of x that it read has been changed and abort the transaction (presumably based on some internal timestamp) or will it continue with the write?I have tried looking through the MongoDB docs and I can’t find a satisfactory explanation to this.",
"username": "Zaid_Humayun"
},
{
"code": "",
"text": "Olá, bem-vindo à comunidade MongoDB. Acredito que este link pode ajudá-lo a entenderTransactions, a long awaited and most requested feature for many, has finally arrived in MongoDB v4.0\nReading time: 4 min read\n",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Thanks Samuel, that helped me figure out a way to replicate the issue locally and confirm that a WriteConflict does indeed occur.",
"username": "Zaid_Humayun"
},
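For reference, a sketch of a local reproduction with the Node.js driver (not the poster's script; it assumes a replica set, since transactions require one). The second transaction's write to the same document fails with a WriteConflict / TransientTransactionError instead of silently overwriting the first.

```js
const { MongoClient } = require("mongodb");

async function main() {
  const client = await MongoClient.connect("mongodb://localhost:27017/?replicaSet=rs0");
  const coll = client.db("test").collection("counters");
  await coll.updateOne({ _id: "x" }, { $set: { value: 1 } }, { upsert: true });

  const s1 = client.startSession();
  const s2 = client.startSession();
  s1.startTransaction();
  s2.startTransaction();

  await coll.findOne({ _id: "x" }, { session: s1 });                              // T1 reads x = 1
  await coll.findOne({ _id: "x" }, { session: s2 });                              // T2 reads x = 1
  await coll.updateOne({ _id: "x" }, { $inc: { value: -1 } }, { session: s1 });   // T1 writes

  try {
    await coll.updateOne({ _id: "x" }, { $inc: { value: -1 } }, { session: s2 }); // T2 writes
    await s2.commitTransaction();
  } catch (err) {
    console.log("T2 aborted:", err.codeName);                                     // WriteConflict
    await s2.abortTransaction();
  }

  await s1.commitTransaction();
  await client.close();
}

main();
```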
{
"code": "",
"text": "You’re welcome, if this solved your problem, leave the topic as resolved so that if other people have the same problem, look at this topic as a reference =D",
"username": "Samuel_84194"
},
{
"code": "",
"text": "I’ve marked your previous answer as the solution. Hope that’s enough!",
"username": "Zaid_Humayun"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Transactions Write Conflict | 2023-09-10T05:06:26.363Z | MongoDB Transactions Write Conflict | 397 |
null | [
"queries",
"data-modeling"
] | [
{
"code": "{\n \"date\":ISODate(\"2022-03-23T15:33:15.551Z\")\n}\n{\n \"date\":ISODate(\"2022-03-24T15:19:15.551Z\")\n}\n{\n \"date\":ISODate(\"2022-03-25T20:33:15.551Z\")\n}\n{\n \"date\":ISODate(\"2022-03-26T20:33:15.551Z\")\n}\n{\n \"date\":ISODate(\"2022-03-27T20:33:15.551Z\")\n}\n",
"text": "Hi,Here is smaple dataWill i be able to query date field just by date without time ?\nExample check if document with date 2022-03-26 matches ? If yes how can do that ?",
"username": "Manjunath_k_s"
},
{
"code": "myDate:{$gte:ISODate(\"2022-03-23T00:00:00.000Z\"), $lt:ISODate(\"2022-03-24T00:00:00.000Z\")}",
"text": "Most simple would be to use date ranges, greater than or equal the check date without a time component and less than the next day:myDate:{$gte:ISODate(\"2022-03-23T00:00:00.000Z\"), $lt:ISODate(\"2022-03-24T00:00:00.000Z\")}You could project the data field to remove the time and then compare to that, but that would remove any benefits of indexes as you would be checking a calculated value.",
"username": "John_Sewell"
},
{
"code": "{\n \"date\":ISODate(\"2022-03-27T20:33:15.551Z\")\n}\n",
"text": "Hi John,My focus is just store only date (yyyy-mm-dd). Unfortunately there is no way to store just date object in mongodb. Even if i store document with Date object with time.I am really looking to check if any way exists to check if incoming date in query matches the document date. Just date (yyyy-mm-dd) and ignoring time.I do not want to use range because i am looking for one specific date, my ask is very specific if Date (2023-08-12) if found do something. Any suggestions ?",
"username": "Manjunath_k_s"
},
{
"code": "",
"text": "Strip the times out before you save them then and then you can just query on a simple date object./Edit and to add to the above, a date range does not have to be multiple dates, it can be all times within ONE day, as per the example above.",
"username": "John_Sewell"
}
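A small sketch of that suggestion in mongosh (collection and field names are illustrative): truncate the value to midnight UTC on insert, after which an exact equality match works without a range.

```js
const day = new Date("2022-03-26T14:05:00Z");
day.setUTCHours(0, 0, 0, 0);                       // 2022-03-26T00:00:00.000Z
db.events.insertOne({ date: day });

// later, match on the bare date:
db.events.find({ date: new Date("2022-03-26T00:00:00Z") })
```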
] | Query on Date object | 2023-09-11T10:12:50.544Z | Query on Date object | 342 |
null | [] | [
{
"code": "",
"text": "What are the steps to remove an export bucket associated with a cluster?My situation:I’ve successfully created a cluster, back up schedule with export bucket to AWS S3 with Terraform.However when trying to destroy the created resources, deletion step for “mongodbatlas_cloud_backup_snapshot_export_bucket” resource hangs indefinitely until I SIGINT with ctrl+c the terraform destroy command.I’ve also tried to delete the export bucket by first finding the id of the bucket with atlas cli:\natlas backups exports buckets listAnd then attempted to delete it with:\natlas backups exports buckets delete --bucketId=<bucket_id_from_above_command_here>\nHowever, this resulted in an error:\nError: https://cloud.mongodb.com/api/atlas/v2/groups/<project_id>/backup/exportBuckets/<bucket_id> DELETE: HTTP 400 (Error code: “EXPORT_BUCKET_DELETE_FAILED”) Detail: Failed to delete export Bucket with ID <bucket_id>. Reason: Bad Request. Params: [<bucket_id>]I’ve tried authenticating to atlas cli with api keys with Organization Owner and Project Owner permissionsSo my question is:How do I actually remove an export bucket without destroying my cluster?",
"username": "Mika_Knuuttila"
},
{
"code": "{\n \"detail\": \"Failed to delete export Bucket with ID <bucket_id>.\",\n \"error\": 400,\n \"errorCode\": \"EXPORT_BUCKET_DELETE_FAILED\",\n \"parameters\": [\n \"<bucket_id>\"\n ],\n \"reason\": \"Bad Request\"\n}\n",
"text": "I’ve now also tried the Atlas Administration API by successfully authenticating with the auth digest header, fetching export buckets with this\nhttps://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Cloud-Backups/operation/listExportBucketsand tried to delete the export bucket association with\nhttps://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Cloud-Backups/operation/deleteExportBucketwhich gives me the following json error:I’m at a loss of what to do next, as destroying the cluster is not an option-",
"username": "Mika_Knuuttila"
},
{
"code": "\"_id\"\"iamRoleId\"results\"exportBucketId\"",
"text": "Hi Mika,I’ve now also tried the Atlas Administration API by successfully authenticating with the auth digest header, fetching export buckets with thisWhat was the output here? Redact the \"_id\" and \"iamRoleId\" values within the results array.and tried to delete the export bucket association withCan you provide the full command here used? Include a replacement / dummy value for \"exportBucketId\" or be sure to redact the actual value used before posting here.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "\"_id\"\"iamRoleId\"results{\n \"links\": [\n {\n \"href\": \"https://cloud.mongodb.com/api/atlas/v2/groups/<our_project_id>/backup/exportBuckets?pageNum=1&itemsPerPage=100\",\n \"rel\": \"self\"\n }\n ],\n \"results\": [\n {\n \"_id\": <bucket_id_1>\",\n \"bucketName\": \"<bucket_name_1>\",\n \"cloudProvider\": \"AWS\",\n \"iamRoleId\": \"<role_1_id>\"\n },\n {\n \"_id\": \"<bucket_id_2>\",\n \"bucketName\": \"taika3d-backups-mongodb-snapshots\",\n \"cloudProvider\": \"AWS\",\n \"iamRoleId\": \"<role_2_id>\"\n }\n ],\n \"totalCount\": 2\n}\natlas backup exports buckets delete --bucketId=<bucket_2_id>\n[https://cloud.mongodb.com/api/atlas/v2/groups/<our_project_id>/backup/exportBuckets/<bucket_2_id>\n{\n \"detail\": \"Failed to delete export Bucket with ID <bucket_2_id.\",\n \"error\": 400,\n \"errorCode\": \"EXPORT_BUCKET_DELETE_FAILED\",\n \"parameters\": [\n \"<bucket_2_id>\"\n ],\n \"reason\": \"Bad Request\"\n}\natlas backup exports buckets describe --bucketId=<bucket_2_id>\n",
"text": "What was the output here? Redact the \"_id\" and \"iamRoleId\" values within the results array.I have to redact more than your suggested redactions since this is a public forum and bucket names would expose information that I’m not allowed to share publiclyI used the “bucket_2_id” value from the API responseI also tried the admin API with HTTP DELETE methodWhich resulted in the following response:The atlas cli commandCorrectly returns the same information about the bucket as admin API GET request earlier in this post",
"username": "Mika_Knuuttila"
}
] | Steps to remove export bucket from a cluster | 2023-09-06T09:39:35.570Z | Steps to remove export bucket from a cluster | 319 |
null | [] | [
{
"code": "",
"text": "hi,\ni have a centos 7 box without gui that has mongodb 4.4 that im trying to upgrade to mongo 5.0\ncurrently, this is what i’ve been doing:\nedit /etc/yum.repos.d/mongodb-org-5.0.repo\nand the contents are:\n[mongodb-org-5.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/5.0/x86_64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-5.0.ascnow when i do a\nyum install mongodb*it fails with a bunch of conflicts. so instead, i did this:yum remove mongodb*yum clean allyum install mongodb-orgwhich looked like it installed fine but i cant find out if the data got wiped or not. how do i check this?\nalso, how should i be doing the upgrades if i get conflicts like these?\nthanks",
"username": "Stan_C"
},
{
"code": "",
"text": "Hi @Stan_C,\nFrom the documentation:Regards",
"username": "Fabio_Ramohitaj"
}
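A couple of quick checks, offered as a sketch. Removing and reinstalling the mongodb-org packages with yum does not delete the data files under the configured dbPath (typically /var/lib/mongo), so the data should still be there; these commands confirm it and verify the featureCompatibilityVersion required for the upgrade path.

```js
// in the mongo shell / mongosh against the upgraded mongod:
db.adminCommand({ listDatabases: 1 })                                   // confirm the databases are intact
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })    // must report "4.4" before moving binaries to 5.0
db.adminCommand({ setFeatureCompatibilityVersion: "5.0" })              // only after the 5.0 binaries run cleanly
```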
] | Help upgrading mongodb 4.4 to 5.0 in centos 7 terminal | 2023-09-12T04:33:09.322Z | Help upgrading mongodb 4.4 to 5.0 in centos 7 terminal | 304 |
null | [] | [
{
"code": "",
"text": "M40 General has 4 CPU, M40 SSD NVME only 2 CPUs.",
"username": "Hamilton_Vera"
},
{
"code": "",
"text": "Hi @Hamilton_Vera,M40 General has 4 CPU, M40 SSD NVME only 2 CPUs.This is looks to be expected and is not specific to M40’s from what I can see in the configuration screen for my test cluster.It’s recommended choosing the configuration best suited to your workload / use case but the Customize cluster storage documentation may be of use to you with specific regards to the storage options available.Regards,\nJason",
"username": "Jason_Tran"
}
] | M40 General vs NVMe SSD CPU count | 2023-09-12T05:04:50.092Z | M40 General vs NVMe SSD CPU count | 337 |
null | [
"monitoring"
] | [
{
"code": "",
"text": "Dear, i am using MongoDb atlas with M40 configuration.\nWhen use M40 General, CPU is stability ~70% with abount 1300-1500 connections\nbut when switch to M40 SSD NVMe, CPU is up to 100% with the same traffic.Anyone can support me about thís issue ???thanks a lot.",
"username": "Chuong_LA"
},
{
"code": "",
"text": "May be, just may be.Before disk I/O was a bottleneck. It is not anymore. Traffic is the same, but what about latency?",
"username": "steevej"
},
{
"code": "",
"text": "latencies were about 0.8 - 2 seconds",
"username": "Chuong_LA"
},
{
"code": "",
"text": "So, which choice is better, pls",
"username": "Chuong_LA"
},
{
"code": "",
"text": "one more thing i forgot, the Disk Util just about <= 10%, so i dont think it is disk I/O issue",
"username": "Chuong_LA"
},
{
"code": "",
"text": "smaller latency is better",
"username": "steevej"
},
{
"code": "",
"text": "10% in both cases? Or 10% in case of ssd? What was it with the old config?",
"username": "steevej"
},
{
"code": "",
"text": "10% in case of ssd, with general, old config, it was higher, about 30-40%",
"username": "Chuong_LA"
},
{
"code": "",
"text": "SoSSD => more CPU, less disk\nOld HD => less CPU, more diskWe need to compare the latency of both. You gave a number but it is not clear for which. The specific number is also useless. The comparative values are important. The comparative values at different traffic load can be useful.Are you maxed out in terms of traffic in any of the two cases?",
"username": "steevej"
},
{
"code": "",
"text": "I’ll will do more test and send you the more detail report, currently i did the test with CCU is 500, backend app with 20 threads, db is M40 5 nodesAs you said:\nSSD => more CPU, less disk\nOld HD => less CPU, more disk\nSo i understood a bit the issue.",
"username": "Chuong_LA"
},
{
"code": "",
"text": "A post was split to a new topic: M40 General vs NVMe SSD CPU count",
"username": "Jason_Tran"
},
{
"code": "",
"text": "",
"username": "Jason_Tran"
}
] | CPU 100% when using SSD NVME | 2020-11-30T17:06:29.292Z | CPU 100% when using SSD NVME | 3,458 |
null | [
"atlas-functions",
"app-services-user-auth",
"api"
] | [
{
"code": "app.emailPasswordAuth.registerUser({email, password})usersuserscontext",
"text": "Hello, I have the following use case: An admin user (without restrictions) goes off the grid and meets another person. The admin must create a new user for that person and also possibly create a few documents that include the new user id. Btw, I am only using the Email/password auth provider.My attempt: From my testing registering new users through the Realm SDK - app.emailPasswordAuth.registerUser({email, password}) requires a connection to the internet (makes sense). Let me know if I am wrong. I’ve read this topic.Different approach: I’ve set up my entire database so that any dependency on the user id is linked to the object id of the document inside my custom user data collection(collection name is users). My plan is while being offline, the admin will create a new document inside the synced users collection. Later when he comes back online the new document will sync, then it will trigger a function and inside the function, I create and confirm the actual app user.Question 1: Is there a way to do this using the function context or is using the admin API the only way?Question 2: Any recommendations for the use case or possible pitfalls that I am not noticing?",
"username": "Damian_Danev"
},
{
"code": "",
"text": "Hi @Damian_Danev,Did you ever figure out an effective solution for this use case? I have the exact same requirement to create a new user while offline.",
"username": "Phil_Seeman"
}
] | Offline App User Registraion | 2023-03-28T08:31:51.959Z | Offline App User Registraion | 871 |
[
"aggregation"
] | [
{
"code": "",
"text": "Hi everyone.\nI have a huge collection as below image. Currently, I want to reduce the size of this collection without deleting documents in the collection. Can someone give me some recommendations to resolve this problem??\n\nimage984×232 62.7 KB\n",
"username": "ducdn_N_A"
},
{
"code": "",
"text": "That storage to data ratio seems high…what are you storing and whats the server version and storage engine?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Hi @John_Sewell, I am storing the click events information which is tracked from websites. This is connection info\n",
"username": "ducdn_N_A"
},
{
"code": "",
"text": "Can you do db.collectionName.stats() to get the storage information about the collection?",
"username": "John_Sewell"
},
{
"code": "",
"text": "The only way to reduce the size of the collection would be to delete or archive documents, or store less data in each document. It looks like the average document size is 4kb, which is relatively large for a click event. What data are you storing?",
"username": "Peter_Hubbard"
},
{
"code": "",
"text": "@John_Sewell this is storage information about my collection\nclick_events_cac_stats.txt (30.5 KB)",
"username": "ducdn_N_A"
},
{
"code": "{\n \"collection\": \"click_events645878c061508eb06f341cac\",\n \"query\": \"insert\",\n \"data\": {\n \"events\": [\n {\n \"key\": \"[CLY]_action\",\n \"count\": 1,\n \"segmentation\": {\n \"type\": \"click\",\n \"x\": 1001,\n \"y\": 333,\n \"width\": 1920,\n \"height\": 931,\n \"view\": \"/xx/xx/xxx\",\n \"parent\": {\n \"x\": 0,\n \"y\": 0,\n \"width\": 0,\n \"height\": 0\n },\n \"domain\": \"xxxxxx.xx.x\"\n },\n \"timestamp\": 1694422960299,\n \"hour\": 16,\n \"dow\": 1\n }\n ],\n \"app_key\": \"f977476dde83086c0eb9a69d14f1a3ed52a937a7\",\n \"device_id\": \"d0940293-d24e-4632-baf7-fb3735089542\",\n \"sdk_name\": \"javascript_native_web\",\n \"sdk_version\": \"22.06.0\",\n \"t\": 1,\n \"timestamp\": 1694422960300,\n \"hour\": 16,\n \"dow\": 1,\n \"raw_html\": null,\n \"screen_size_type\": \"Desktop1920\",\n \"_id\": \"64fed7b0b92d6649b012cc3c\"\n }\n}\n",
"text": "@Peter_Hubbard, I store click events information. This is an event in my collection",
"username": "ducdn_N_A"
},
{
"code": "",
"text": "From the stats you’re using snappy as the compression routine, you could try making a new DB with different compression zlib etc and then copying a sample of the data into that and check what the compression rates you get are like.Obviously there are upsides and downsides to different compression engines so read around that, I’m surprised by the low compression of the data though in your collection, currently using snappy in prod we’re getting a compression ratio of about 7:1 so 14TB of data requires 2TB of storage.\nI know not all data is compressible but you’re basically getting no compression on your data.I’ve not played about enough with compression at that level to suggest much more I’m afraid, perhaps one of the Mongo team can see something amiss in the stats output.",
"username": "John_Sewell"
},
{
"code": "\"parent\": {\n \"x\": 0,\n \"y\": 0,\n \"width\": 0,\n \"height\": 0\n },\n\"parent\": [ 0, 0, 0, 0 ] ,\n",
"text": "You may try to see if the bucket pattern is applicable.You may also make your schema less verbose by transforming some of your x,y,width,height fields to an array such as making",
"username": "steevej"
},
{
"code": "",
"text": "Hii @Peter_Hubbard ,plz give solution to this Performance issue,I am facing… I am getting performance issue with this aggregation pipeline?This almost takes 20 sec to give response of 30000 records",
"username": "Umasankar_Swain"
},
{
"code": "",
"text": "Thanks for your help @John_Sewell, I will try with zlib",
"username": "ducdn_N_A"
}
] | How to reduce size of huge collection? | 2023-09-11T03:45:32.478Z | How to reduce size of huge collection? | 545 |
|
null | [
"backup"
] | [
{
"code": "",
"text": "I have hosted a website on AWS EC2 instance along with MongoDB database. But, taking backups of the database manually is time-consuming.So I wants to automate taking database Backup continuously without getting bogged down in different backup and failure scenarios in MongoDB database systems. So, we want to create effective Automatic\nMongoDB backup Method. Any help?",
"username": "Bhagyashri_Chaudhari"
},
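One common pattern, sketched here with placeholder paths and bucket names (it assumes mongodump, cron and the AWS CLI are installed on the EC2 instance):

```bash
#!/bin/bash
# /usr/local/bin/mongodb-backup.sh -- run nightly from cron, e.g.:
#   0 2 * * * root /usr/local/bin/mongodb-backup.sh
set -euo pipefail
STAMP=$(date +%F)
mongodump --uri "mongodb://localhost:27017" --gzip --archive="/tmp/backup-$STAMP.gz"
aws s3 cp "/tmp/backup-$STAMP.gz" "s3://my-backup-bucket/mongodb/backup-$STAMP.gz"
rm "/tmp/backup-$STAMP.gz"
```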
{
"code": "",
"text": "give me guides how to start the project",
"username": "Dhakshi_Vinny"
},
{
"code": "",
"text": "Try AWS EBS volume snapshot APIs.",
"username": "Kobe_W"
}
] | MongoDB backup automatically | 2023-08-10T07:42:28.815Z | MongoDB backup automatically | 565 |
null | [] | [
{
"code": "[nodemon] restarting due to changes...\n[nodemon] starting node server.js\nServidor escuchando en el puerto: 3001\nERROR en la conexión con la base de datos MongoServerError: bad auth : Authentication failed.\n at Connection.onMessage (/Users/carlos/Downloads/atlantico-proyecto/node_modules/mongodb/lib/cmap/connection.js:202:26)\n at MessageStream.<anonymous> (/Users/carlos/Downloads/atlantico-proyecto/node_modules/mongodb/lib/cmap/connection.js:61:60)\n at MessageStream.emit (node:events:511:28)\n at processIncomingData (/Users/carlos/Downloads/atlantico-proyecto/node_modules/mongodb/lib/cmap/message_stream.js:124:16)\n at MessageStream._write (/Users/carlos/Downloads/atlantico-proyecto/node_modules/mongodb/lib/cmap/message_stream.js:33:9)\n at writeOrBuffer (node:internal/streams/writable:399:12)\n at _write (node:internal/streams/writable:340:10)\n at Writable.write (node:internal/streams/writable:344:10)\n at TLSSocket.ondata (node:internal/streams/readable:774:22)\n at TLSSocket.emit (node:events:511:28) {\n ok: 0,\n code: 8000,\n codeName: 'AtlasError',\n connectionGeneration: 0,\n [Symbol(errorLabels)]: Set(2) { 'HandshakeError', 'ResetPool' }\n",
"text": "Hi there, Im was working on my PC with no problems, everything works. I want to work in my project on my laptop and copied the entire project folder. When I want to start (npm run dev) the console gave me this error. I dont know what to do, if anyone could help I will be greatful.",
"username": "Carlos_Castellano_Herrera"
},
{
"code": "MongoServerError: bad auth : Authentication failed.",
"text": "Hello @Carlos_Castellano_Herrera ,Welcome to The MongoDB Community Forums! I saw that you have not had a response to this topic yet, were you able to find a solution?MongoServerError: bad auth : Authentication failed.An authentication error typically indicates an incorrect password. If your password contains special characters, try wrapping them in quotes, or using URL encoding for special characters.You may also want to verify that you are using your MongoDB user password. Note that this may be different to the password you use for your Atlas login. You can see your MongoDB Users by clicking the Security tab on your MongoDB Atlas cluster.Regards,\nTarun",
"username": "Tarun_Gaur"
},
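A small illustration of the URL-encoding point (the values are placeholders):

```js
const user = "appUser";
const pass = encodeURIComponent("p@ss/w:rd");   // -> "p%40ss%2Fw%3Ard"
const uri = `mongodb+srv://${user}:${pass}@cluster0.example.mongodb.net/mydb`;
```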
{
"code": "",
"text": "I’m getting the same errors. On my PC, I have no errors with authentication using the same username, password, and database name. On my Mac, I keep getting authentication errors using the same .env file as on my PC.",
"username": "Steven_Xu"
}
] | MongoServerError: bad auth: Auth failed in other pc | 2023-06-29T10:29:22.060Z | MongoServerError: bad auth: Auth failed in other pc | 2,883 |
null | [
"aggregation"
] | [
{
"code": "// Posts table\n{\n \"_id\": {\n \"$oid\": \"64e04965f31b0b617fd1168a\"\n },\n \"_t\": \"Post\",\n \"PostText\": \"\",\n \"PostIncrementId\": \"mLYGr9\"\n...and more omitted objects\n}\n// Relationships table\n{\n \"_id\": {\n \"$oid\": \"64fc90ff4a40b8da206f94ce\"\n },\n \"_t\": \"SocialRelationships\",\n \"TargetId\": \"OUCPy0BU\", // PostIncrementId on Post model\n \"RelationshipType\": 1, // action type, 1 = like\n \"UserUniqueId\": \"sample-user-id\" // user who did the action\n}\n\n // aggregationPipeline is a List<BsonDocument> of aggregations \n // for the Posts table\n aggregationPipeline.Add(\n new BsonDocument\n {\n {\n \"$lookup\", new BsonDocument\n {\n { \"from\", \"relationships\" },\n { \"localField\", \"PostIncrementId\" },\n { \"foreignField\", \"TargetId\" },\n { \"as\", \"user_likes\" } // user_likes should be array\n }\n }\n }\n );\n\n aggregationPipeline.Add(new BsonDocument\n {\n {\n \"$match\", new BsonDocument\n {\n {\n \"$and\", new BsonArray\n {\n new BsonDocument\n {\n {\n \"user_likes\", new BsonDocument\n {\n {\n \"$elemMatch\", new BsonDocument\n {\n { \"UserUniqueId\", \"sample-user-id\" },\n { \"RelationshipType\", 1 }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n });\n",
"text": "Alright so I have two tables that I need to join with a condition, I have a massive aggregation that works flawlessly at around 20-30ms queries at 15M+ records, however I want to add another aggregation to match documents but it’s in another table, I’ve tried lookup aggregation but it’s really slow, my 20-30ms queries are now reaching 20+ seconds if I add this aggregation, below is a sample model of the Post collectionbelow is a sample model of the Relationships collection where I log users that liked a postnow, how do I get all Posts that has been liked by a user with the ID of sample-user-id using the Relationships table?\nthis is what my current aggregation looks like, this takes more than 20 seconds to run, I need this to run below 1 second or much better if below 500ms, I am writing in C# but any solution of any programming language will suffice",
"username": "Saylent_N_A"
},
{
"code": "",
"text": "I may be mis-understanding this but as opposed to joining likes to posts and then filtering down by likes from a user, find likes by a user and then join THAT onto the posts collection?With an appropriate index or two it should be very fast? I’m sure you have a load of other stages that are actually doing stuff that may prevent this or I’ve mis-read the code fragments.We can’t see the rest of the query or sample documents, but have you run an explain to see why it’s now taking so much longer?",
"username": "John_Sewell"
},
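A sketch of that reversed join, using the field names from the question (the suggested indexes are assumptions, they may already exist):

```js
db.relationships.aggregate([
  { $match: { UserUniqueId: "sample-user-id", RelationshipType: 1 } },
  { $lookup: {
      from: "posts",
      localField: "TargetId",
      foreignField: "PostIncrementId",
      as: "post"
  }},
  { $unwind: "$post" },
  { $replaceRoot: { newRoot: "$post" } }
])

// supporting indexes:
// db.relationships.createIndex({ UserUniqueId: 1, RelationshipType: 1 })
// db.posts.createIndex({ PostIncrementId: 1 })
```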
{
"code": "",
"text": "Perhaps you could provide the index information for the collections and the explain result.",
"username": "Jack_Yang1"
}
] | Need help with really slow aggregation | 2023-09-11T13:57:45.662Z | Need help with really slow aggregation | 232 |
[
"queries",
"node-js",
"atlas-cluster",
"next-js"
] | [
{
"code": "",
"text": "Hello community,\nI’m developing an web app and found a bottleneck in my app related to api calls today.I’m retrieving my app data from mongodb server through find() operation in next js app and found that it works perfectly fine when run in development mode but when I deployed my application and tried to retrieve data through https:// promptopia-theta-swart.vercel.app/api/prompt , its returning outdated data each and every time and I had searched all over internet but can’t find any suitable answers for that.please help me out!whole code: GitHub - meetgovindbajaj/Promptopia: Promptopia is an open-source AI prompting tool for modern world to discover create and share creative prompts.env variables you have to setup are for :\n->GOOGLE_ID\n->GOOGLE_SECRET\n->MONGODB_URI\n->NEXTAUTH_URL\n->NEXTAUTH_URL_INTERNAL\n->NEXTAUTH_SECRET\nin .env filesteps to get these are:if you set it up …you will find no issue in development phase i.e. while you are running it in localhost …when you host it on vercel (its perfect hosting platform for next js apps and really easy to upload and running in no time) then you will notice the issue herethe big one is showing 2 prompts in database\nand the one in middle showing only 1 single prompt in websiteimg1920×1920 103 KBi have tried creating several prompts … but to no effect… it fetches updated data in localhost and outdated data in website : https:// promptopia-theta-swart.vercel.app/api/promptnotice:\nif you decide to host it on vercel just follow these steps:\n → upload your project on github\n → login with same github account in vercel\n → create new app\n → select repo\n → set env variables manually (scroll down a bit and you will find the place to add them)\n → deploy\n → go to google cloud console and add your web app link in credentials → Authorised JavaScript origins and Authorised redirect URIs with same follow ups ahead of them as localhost (this step is required for google login)\n → update nextauth with new link in your env variables as welland at last please find the solution to this issue… its so annoying ",
"username": "Govind_Bajaj"
},
{
"code": "",
"text": "hello … i think that i found where the problem lies but still have 0 success rate of solving it .my api route /api/prompt is being treated as ISR ( Incremental static regeneration) which keeps page static instead of dynamicand the solution to this is via revalidating …but its not working either … \n\nimage1920×1080 200 KB\n",
"username": "Govind_Bajaj"
},
{
"code": "",
"text": "I am having the same problem. Did you find a solution for this",
"username": "Gaurav_Suvarna"
},
{
"code": "",
"text": "I am also having this problem.My app runs flawlessly on localhost. Adding, deleting, and updating the database is immediately updated on the page automatically.When I run on Vercel, it builds just fine and the connection to MongoDB atlas is good but I am not able to interact with the data.If I try to create a new document or delete it, nothing happens. If I delete from localhost, it seems there is a delay of several hours until the hosted Vercel version removes the data.",
"username": "Carla_H"
},
{
"code": "",
"text": "did you find any solution yet?",
"username": "vishwajeet_N_A"
},
{
"code": "import Prompt from \"@models/prompt\";\n\nimport { connectToDB } from \"@utils/database\";\n\nexport const revalidate = 0; // this is the new line added\n\nexport const GET = async (req) => { ... }\n",
"text": "I was able to fix the problem, this is not a Mongodb problem, the problem lies in the part that vercel takes the route ‘/api/prompt’ inside ISR which doesn’t update always. Our goal should be to move it out of ISR, for that we can add a ‘revalidate=0’ inside the ‘/api/prompt/route.js’ file in the following way:-This solved the problem for me, hope it solves yours problem too.\nThis is my github repo for any help:- GitHub - subhajit100/promptify_prompt_library",
"username": "Subhajit_Adhikary"
},
{
"code": "",
"text": "Thank you! I was having this issue too, no idea what it was!",
"username": "Adam_Romero"
}
] | Mongodb api returning outdated data in next js application when in production mode | 2023-06-07T00:52:38.999Z | Mongodb api returning outdated data in next js application when in production mode | 1,496 |
|
null | [
"php"
] | [
{
"code": "",
"text": "We’re getting ready for CentOS to expire next June - and working on setting up our webservers on a supported operating system. One of our applications REQUIRES old php version 7.0.33 - I tried a few operating systems and finally was able to get the correct version set up on Ubuntu 22.04. However, I need to get the php-mongo module installed. And that isn’t going so smoothly - can anyone point me to the repositories I need to accomplish this?The mongo software installed on the old CentOS webserver is:php-mongodb-1.0.4-1.el7.noarch\nphp70-php-pecl-mongodb-1.9.2-1.el7.remi.x86_64\nphp-pecl-mongodb-1.9.2-1.el7.remi.7.0.x86_64I need Ubuntu replacements that will interface with php 7.0.33Please bear with me. I’m not a mongo user - just the admin - so I don’t know all the ins and outs of the package or the best way to ask my questions. Any help is very much appreciated.",
"username": "Bruce_Clegg"
},
{
"code": "",
"text": "",
"username": "Jack_Woehr"
}
] | Install php-mongo on ubuntu | 2023-09-11T15:43:11.910Z | Install php-mongo on ubuntu | 339 |
null | [
"queries",
"node-js"
] | [
{
"code": "",
"text": "I have seen questions here which may look similar to this but they are not…I have a list of documents in my database and each document has an array of objects. I want to update a certain value of a certain object in the array but I have no possible solution at the moment…The solutions am seeing here take into assumption that the document has already been found but I want to first find the right document, then access its array, the access the right object and then finally update the object…I’ll be glad if I get a response as soon as possible",
"username": "iamodreck"
},
{
"code": "",
"text": "Good afternoon, welcome to the community.Can you give an example of your document and which field you are searching for and which field you want to update?",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Hello, thank you for taking time to reach out. …\nI’ve got a sample collection containing 3 documents… supposing I want to update the a value in the links array (the value is inside an object embedded in an array of objects)…\nYou can use any document for your explanation[{\n“_id”: {\n“$oid”: “64ee4ae8535221d4e56a0150”\n},\n“UserName”: “iamodreck”,\n“Email”: “Kigo”,\n“PhoneNumber”: “+2”,\n“Password”: “”,\n“Reg_Date”: “Tue Aug 29 2023 22:45:44 GMT+0300 (East Africa Time)”,\n“UserNumber”: 1,\n“Links”: [{“A”:1 , “B”:2} , {“A”:1 , “B”:2}]\n},\n{\n“_id”: {\n“$oid”: “64fb588b3c896e63ec553b0f”\n},\n“UserName”: “iam”,\n“Email”: “Kiom”,\n“PhoneNumber”: “+2”,\n“Password”: “Is”,\n“Reg_Date”: “Tue Aug 29 2023 22:45:44 GMT+0300 (East Africa Time)”,\n“UserNumber”: 1,\n“Links”: [{“A”:9 , “B”:2} , {“A”:5 , “B”:2}]\n},\n{\n“_id”: {\n“$oid”: “64fb589f3c896e63ec553b10”\n},\n“UserName”: “ieck”,\n“Email”: “Kigom”,\n“PhoneNumber”: “776421”,\n“Password”: “Isa”,\n“Reg_Date”: “Tue Aug 29 2023 22:45:44 GMT+0300 (East Africa Time)”,\n“UserNumber”: 1,\n“Links”: [{“A”:6 , “B”:2} , {“A”:7, “B”:2}]\n}]",
"username": "iamodreck"
},
{
"code": "db.cool.updateMany(\n { \"events.eventType\": \"SaleCredit\" },\n { $set: { \"events.$[elem].eventType\": \"Sale\" } },\n { arrayFilters: [{ \"elem.eventType\": \"SaleCredit\" }] }\n)\n",
"text": "If I understood correctly, here is an example of what would solve:You search the document by filter and only update the array fields that match your filter in arrayFilters. See if it makes sense. I’m available.",
"username": "Samuel_84194"
},
{
"code": "",
"text": "the parameters you’ve used have made it more confusing because i don’t have anything like “sales credit” in my documents\"",
"username": "iamodreck"
},
{
"code": "db.cool.updateMany(\n { \"Username\": \"ieck\" },\n { $set: { \"Links.$[elem].A\": \"10\" } },\n { arrayFilters: [{ \"elem.A\": \"6\" }] }\n)\n",
"text": "Sorry for confusing you, I’ll put it for your example, follow below:Where the user is ieck and your Links..A is 6, put 10. is any number in the array.",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Thanx… though am just wondering if I can implement this using node js because I’m building a node js service",
"username": "iamodreck"
},
{
"code": "",
"text": "Yes, you can.const result = await coll.updateMany(filter, update, options);These variables are the values, for example:const options = {[ arrayFilters: … ]}",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Thanx so much…\nI appreciate this help",
"username": "iamodreck"
},
{
"code": "",
"text": "Hello …hope your doing fine.\nIf I may ask more about this incident,\nIn the previous incident, I aimed to update a specific item in an object contained in an array of a specific user . …But now I have a counter item in each object contained in an array in every document.\nAnd I want to reset the counter to zero but I want this action to be performed on all documents at once.\nCould you be having an idea on how I can archive that?",
"username": "iamodreck"
},
{
"code": "",
"text": "I believe that if you want to update them all, you could put the arrayFilters as $exists: true and the set however you want, wouldn’t that work?",
"username": "Samuel_84194"
},
{
"code": "",
"text": "I get the idea. But my biggest problem here is how to organize the syntax.\nThat’s a big challenge to me so far and I think it’s because am a newbie to the database",
"username": "iamodreck"
},
{
"code": "db.cool.updateMany(\n { \"Username\": \"ieck\" },\n { $set: { \"Links.$[elem].A\": \"10\" } },\n { arrayFilters: [{ \"elem.counter\": {$exists: true} }] }\n)\n",
"text": "Ok, no problem.Test this:",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Am seeing in your syntax statement, you’ve specified the username ( which I think will only update one document) but in my case , I want to update all documents",
"username": "iamodreck"
},
{
"code": " {},\n { $set: { \"Links.$[elem].A\": \"10\" } },\n { arrayFilters: [{ \"elem.counter\": {$exists: true} }] }\n)\n",
"text": "Oh yes, true.db.cool.updateMany(",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Thank you for this…\nIt worked",
"username": "iamodreck"
},
{
"code": "",
"text": "I’m glad you got it resolved, I’m at your disposal.",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Hello Samuel,\nHope yo doing fine .\nI need some help on something, If possible*Incase I have a number of documents and each of those documents has a field containing email.\nHow do I check for the existence of a certain email in all the documents without having to return the document but only the response which I think it May either be true (if the email exists) and false if it doesn’t exist",
"username": "iamodreck"
},
{
"code": "db.coll.find({},{exists: \"true\", _id: 0}).limit(1)\n[ { existe: 'verdadeiro' } ]\n",
"text": "Bom dia! Você quer saber se esse e-mail existe em algum documento, isso? Tente usar uma projeção para retornar o que você precisa.If the solution helped you, put the item as solved, this way it helps other people with the same question ;D",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Ive just seen this now but I’ll try it tomorrow in the morning since it’s now late…\nAbout the projection, it failed to work in my code and am still trying to find out why",
"username": "iamodreck"
}
] | Updating an item value which is in an array of objects containing in a document which is in a group of documents | 2023-09-08T15:56:14.152Z | Updating an item value which is in an array of objects containing in a document which is in a group of documents | 540 |
null | [
"python"
] | [
{
"code": "{\n \"_id\": \"DOC1\",\n \"EL1\": \"my1\",\n \"EL2\": [\n \"my11\"\n ],\n \"EL3\": [\n \"my21\",\n \"my22\",\n \"my23\"\n ],\n ...\n}\n{\n \"_id\": \"DOC1\",\n \"DOC1\": [\n {\n\t \"EL1\": \"my1\",\n\t \"EL2\": [\n\t\t\"my11\"\n\t ],\n\t \"EL3\": [\n\t\t\"my21\",\n\t\t\"my22\",\n\t\t\"my23\"\n\t ]\n\t ...\n }\n ]\n}\nmy_collection.update(\n { \"_id\": \"DOC1\" },\n {\n \"$set\": {\n \"DOC1\": [\n {\n \"$objectToArray\": \"$$ROOT\"\n }\n ]\n }\n }\n)\nmy_collection.update_one(\n {\"_id\": \"DOC1\"},\n [\n {\"$set\": {\"DOC1\": [{\"$mergeObjects\": [\"$DOC1\", {}]}]}}\n ]\n)\n",
"text": "Hi,Using pymongo:self.db.command({‘buildInfo’:1})[‘version’]\n‘4.4.3’I would like to convert a doc like:into array in the same doc like:I try:andbut i give an error in each case:The dollar ($) prefixed field ‘$objectToArray’ in is not valid for storage.Is there a solution to update one directly without get all docs and set ?Thanx in advance.",
"username": "Gui_Tou"
},
{
"code": "",
"text": "Welcome to the MongoDB community.Did you look at this link node.js - MongoDB \"The dollar ($) prefixed field is not valid for storage.\" - Stack Overflow?",
"username": "Samuel_84194"
},
{
"code": "my_collection.update_one(\n {\"_id\": \"DOC1\"},\n [\n {\"$set\": {\"DOC1\": [{\"$mergeObjects\": [\"$DOC1\", {}]}]}}\n ]\n)\n",
"text": "but i give an error in each case:The dollar ($) prefixed field ‘$objectToArray’ in is not valid for storage.I am pretty sure the error message is different when you tryBecause $objectToArray is not used at all. The error must be different. Please share as we might have a better idea of what is happening. In my case it worked with an error.In the future, please avoid adding three little dots to your documents to show that you have more fields because we cannot cut-n-paste directly your documents.",
"username": "steevej"
},
{
"code": "db.getCollection(\"Test\").updateMany(\n{\n \"_id\": \"DOC1\"\n},\n[\n {\n $set:{\n newRoot:{\n _id:'$_id',\n \"DOC1\":['$$ROOT'],\n }\n } \n },\n {\n $replaceRoot:{\n newRoot:'$newRoot'\n }\n } \n])\n",
"text": "It’s not quite clear from your original post, but are you looking for a generic solution to update all at once or specific query you can apply to a document?So you could do something like this:Which obviously needs you to pass in the ID so that the array can be named.I was trying to find a generic solution where it would take a field value as the input to a field name, but could not work out a way of doing that…maybe someone else can.",
"username": "John_Sewell"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Merge all documents of collection into array | 2023-09-11T09:40:21.978Z | Merge all documents of collection into array | 342 |
null | [
"node-js",
"atlas-cluster"
] | [
{
"code": "",
"text": "error: Error: querySrv EREFUSED _mongodb._tcp.cluster0.xlbe3.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/callback_resolver:47:19)I am Getting this error even though my string is correct and i get this error mutliple times i think there is something wrong with my PC",
"username": "SA_mir"
},
{
"code": "",
"text": "Good afternoon, welcome to the MongoDB community.Have you allowed access to your project’s Access List to connect to your cluster?",
"username": "Samuel_84194"
},
{
"code": "",
"text": "yup0.0.0.0/0i use this one to have access from anywheresometimes it get connected to it but once i restart my project it shows above error this happens multiple times with me",
"username": "SA_mir"
},
{
"code": "",
"text": "Good morning!What would “restart the project” mean? I’m not sure I understand this terminology.Are you trying to access from your desktop? Is your computer’s IP not changing? Isn’t there a firewall in between that could be preventing the connection?",
"username": "Samuel_84194"
},
{
"code": "",
"text": "i am running my project in my desktop using vsCode.\nRestart means whenever i restart my computer and run my project again.\nYes i do have firewall in my computer i use windows 11i used to off the firewall then restart my project it get connected to mongoDB but now this hack is also not workingI just want to know thw cause of this error",
"username": "SA_mir"
},
{
"code": "",
"text": "Thanks for the answer. This behavior is strange. If you have 0.0.0./0 it shouldn’t be a problem in Atlas.Can you do some tests like ping and telnet to help with the resolution?Also, if you could tell me the error you get when trying to connect.",
"username": "Samuel_84194"
},
{
"code": "",
"text": "I am doing this way trying to connect to cluster and i am getting thisping cluster0.xxxxxxx.mongodb.net\nPing request could not find host cluster0.b1p7eko.mongodb.net. Please check the name and try again.",
"username": "SA_mir"
},
{
"code": "",
"text": "You are unable to resolve the cluster’s DNS. You don’t even get to Atlas. It is possibly a problem with your computer’s resolution. When you disable the firewall does it still not work?",
"username": "Samuel_84194"
},
{
"code": "id 54082\nopcode QUERY\nrcode NOERROR\nflags QR RD RA\n;QUESTION\ncluster0.b1p7eko.mongodb.net. IN ANY\n;ANSWER\ncluster0.b1p7eko.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-cyxfww-shard-0\"\ncluster0.b1p7eko.mongodb.net. 60 IN SRV 0 0 27017 ac-uk99ezp-shard-00-00.b1p7eko.mongodb.net.\ncluster0.b1p7eko.mongodb.net. 60 IN SRV 0 0 27017 ac-uk99ezp-shard-00-01.b1p7eko.mongodb.net.\ncluster0.b1p7eko.mongodb.net. 60 IN SRV 0 0 27017 ac-uk99ezp-shard-00-02.b1p7eko.mongodb.net.\n;AUTHORITY\n;ADDITIONAL\n",
"text": "You cannot ping a cluster because it is not an host with an IP address.What you may do is ping the hosts of your replica set. You may get the list of hosts of your replica set directly in Atlas. DNS for a cluster has 2 types of records. A TXT record and SRV records. For your cluster we getThe ac-*-b1p7eko.mongodb.net are the 3 hosts of your cluster.",
"username": "steevej"
}
] | Not Able to connect To MongoDB Atlas | 2023-09-09T14:48:38.191Z | Not Able to connect To MongoDB Atlas | 382 |
null | [] | [
{
"code": "",
"text": "Hello,I’ve been using MongoDB cloud free tier for my personal project. I’m also a Github Student (but I have yet to claim the offer). Question hereIf later on I want to claim the MongoDB Student Pack for the free $50 credits, does it require me to create new MongoDB account or can I link my current one that I’m using right now?Once claimed, is there expiry for the $50 Credits? For comparison, DigitalOcean also offers free credits for Github Student, but in 12 months, those credits will expire and gone regardless.Thanks",
"username": "godot"
},
{
"code": "",
"text": "Good afternoon, welcome to the MongoDB community.I believe that in this link you will find the answers. It is filtered for this area (sorry I can’t help much rsrsrs)This category is for students interested in, or participating in the MongoDB for Academia Student program including the MongoDB Student Pack and Student Spotlights.Regarding item 2, the answer can be found in this link (frequently asked questions - bottom of the page)MongoDB Student Pack",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Hi there! The link to the MongoDB for Students program page has already been posted but to answer your questions.No, you don’t need to create a new MongoDB account to claim your Atlas credit code. To claim your credit code, you have to sign-in at MongoDB Student Pack with your GitHub account details. You can apply those credits to your existing Atlas account.Yes! If unused, credit codes will expire 8-12 months after you claim them. If you were to claim your Atlas code today and not apply it to an Atlas account, it would expire July 31st, 2024. Once applied to your Atlas account, you have 12 months to use your $50 in credits. If your code expires either before you apply it or before you could use all of your credits, reach out to us and we can work with you to extend your code.",
"username": "Aiyana_McConnell"
},
{
"code": "",
"text": "Thank you! Crystal clear",
"username": "godot"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Do I need new account to claim my student pack? | 2023-09-09T08:53:58.723Z | Do I need new account to claim my student pack? | 466 |
null | [
"dot-net",
"containers",
"field-encryption"
] | [
{
"code": "mcr.microsoft.com/dotnet/aspnet:6.0-alpinemcr.microsoft.com/dotnet/aspnet:6.0mcr.microsoft.com/dotnet/aspnet:6.0-alpinemcr.microsoft.com/dotnet/sdk:6.0 > at MongoDB.Libmongocrypt.LibraryLoader.LinuxLibrary.dlopen(String filename, Int32 flags)\n > at MongoDB.Libmongocrypt.LibraryLoader.LinuxLibrary..ctor(String path)\n > at MongoDB.Libmongocrypt.LibraryLoader..ctor()\n > at MongoDB.Libmongocrypt.Library.<>c.<.cctor>b__0_61()\n > at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)\n > at System.Lazy`1.ExecutionAndPublication(LazyHelper executionAndPublication, Boolean useDefaultConstructor)\n > at System.Lazy`1.CreateValue()\n > at MongoDB.Libmongocrypt.Library.<>c.<.cctor>b__0_1()\n > at System.Lazy`1.ViaFactory(LazyThreadSafetyMode mode)\n > at System.Lazy`1.ExecutionAndPublication(LazyHelper executionAndPublication, Boolean useDefaultConstructor)\n > at System.Lazy`1.CreateValue()\n > at MongoDB.Libmongocrypt.CryptClientFactory.Create(CryptOptions options)\n > at MongoDB.Driver.Encryption.ClientEncryption..ctor(ClientEncryptionOptions clientEncryptionOptions)\n",
"text": "HiI get errors while debugging ASP.NET 6 app that uses MongoDriver with client encryption in Visual Studio 2022 using docker image.With these images I get the following error:Exception thrown: ‘System.DllNotFoundException’ in MongoDB.Libmongocrypt.dll: ‘Unable to load shared library ‘libdl’ or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: liblibdl: cannot open shared object file: No such file or directory’I added LD_DEBUG environment variable. I cannot add attachments as I am a new user - attaching a link to google drive MongoDB issue in docker - Google Drive.With the mcr.microsoft.com/dotnet/aspnet:6.0-alpine image I get other error:‘System.IO.FileNotFoundException’ in MongoDB.Libmongocrypt.dll: ‘/app/bin/Debug/net6.0/runtimes/linux/native/libmongocrypt.so’Thanks.",
"username": "Oleksii"
},
{
"code": "libmongocrypt.so",
"text": "Hey @Oleksii , thanks for your report.\nI’ve created a ticket to investigate this issue https://jira.mongodb.org/browse/CSHARP-4363. Please follow this ticket for updates and further discussions.\nMeanwhile, can you please check whether this folder \" /app/bin/Debug/net6.0/runtimes/linux/native/libmongocrypt.so\" contains libmongocrypt.so binary and if no what binaries are there?",
"username": "Dmitry_Lukyanov"
},
{
"code": "libmongocrypt.soroot@b566ee55750b:/app/bin/Debug/net6.0/runtimes/linux/native# ls -l\ntotal 3332\n-rwxrwxrwx 1 root root 1886064 Jul 29 20:51 libmongocrypt.so\n-rwxrwxrwx 1 root root 134080 Jul 29 23:24 libsnappy64.so\n-rwxrwxrwx 1 root root 1384960 Jul 29 23:24 libzstd.so\nlibdl.soroot@b566ee55750b:/# ls -l /lib/x86_64-linux-gnu/ | grep libdl\n-rw-r--r-- 1 root root 18688 Aug 26 21:32 libdl-2.31.so\nlrwxrwxrwx 1 root root 13 Aug 26 21:32 libdl.so.2 -> libdl-2.31.so\nroot@b566ee55750b:/# find / -type f -name \"*libdl*\"\n/lib/x86_64-linux-gnu/libdl-2.31.so\n",
"text": "Yes, the libmongocrypt.so is present.I tried to find libdl.so lib:",
"username": "Oleksii"
},
{
"code": "libc6-devDockerfileFROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base\nRUN apt-get update\nRUN apt-get install libc6-dev -y\n...\n",
"text": "As a temporary workaround, we installed libc6-dev in the container (see an extract from the Dockerfile).Did not check the final image size.",
"username": "Oleksii"
},
{
"code": "",
"text": "Hey!\nI have the same issue using Debian as a base image, but on an ARM Mac.\nis there any progress regarding this? there is nothing on the ticket",
"username": "Tomer_Amir"
}
] | MongoDB.Driver 2.17.1 with client ecryption in docker throws: "Unable to load shared library 'libdl' or one of its dependencies" | 2022-10-13T08:24:29.564Z | MongoDB.Driver 2.17.1 with client ecryption in docker throws: “Unable to load shared library ‘libdl’ or one of its dependencies” | 3,377 |
null | [] | [
{
"code": "",
"text": "If you want to try again, please restart the track.\nIs restarting the session but says (Your session expired) every time. Also tried to start the unit from the beggining did not work.\nHow can i fix this if anyone knows please help trying for MongoDB Database Administrator (DBA) Path",
"username": "Nurul_Amin1"
},
{
"code": "",
"text": "Hi Nurul and welcome to the forums!Please reach out to [email protected] with the issue you’re experiencing in MongoDB University. The more information you’re able to provide in your ticket, the more quickly they’ll be able to resolve your issue.",
"username": "Aiyana_McConnell"
}
] | Lab session expired | 2023-09-09T07:03:58.768Z | Lab session expired | 385 |
null | [
"kafka-connector"
] | [
{
"code": "value.subject.name.strategy = io.confluent.kafka.serializers.subject.TopicRecordNameStrategyorg.apache.kafka.common.errors.SerializationException: In configuration value.subject.name.strategy = io.confluent.kafka.serializers.subject.TopicRecordNameStrategy, the message value must only be a record schema",
"text": "In case if we want to add heartbeat to a connect that using schema registry and subject name strategy like:\nvalue.subject.name.strategy = io.confluent.kafka.serializers.subject.TopicRecordNameStrategy\nwe do get the following error:\norg.apache.kafka.common.errors.SerializationException: In configuration value.subject.name.strategy = io.confluent.kafka.serializers.subject.TopicRecordNameStrategy, the message value must only be a record schema\nIs there any workaround can be applied or we need to implement custom serializer that will skip heartbeat messages?",
"username": "Vlad_Goldman"
},
{
"code": "",
"text": "@Vlad_Goldman - have you found a resolution to this issue?",
"username": "Tasos_Zervos"
}
] | Heartbeat and schema registry | 2022-01-16T15:41:25.451Z | Heartbeat and schema registry | 2,495 |
[
"cxx"
] | [
{
"code": " // Creating the array of line items\n bsoncxx::builder::basic::array lineItemArr = bsoncxx::builder::basic::array{};\n for(LineItem *lineItem: getLineItems()) {\n lineItemArr.appendlineItem);\n }\n // Creating the array of line items\n bsoncxx::builder::basic::array lineItemArr = bsoncxx::builder::basic::array{};\n for(LineItem *lineItem: getLineItems()) {\n lineItemArr.append(bsoncxx::builder::basic::make_document(bsoncxx::builder::basic::kvp(\"description\", lineItem->getDescription()),\n bsoncxx::builder::basic::kvp(\"grouping\", lineItem->getGrouping()),\n bsoncxx::builder::basic::kvp(\"partNumber\", lineItem->getPartNumber()),\n bsoncxx::builder::basic::kvp(\"price\", lineItem->getPrice())));\n }\n",
"text": "I have an array within Mongo that I had populated, but I am unsure how to populate it using the C++ driver.The is what the array looks like.I tried the following,Where LineItem is an object that has the fields within the array. I am perplexed on how to get this, as when I run this I get the following errorC2338: append is disabled for non-char pointer typesI also tried this, and it crashed without giving an error.",
"username": "bigorca54"
},
{
"code": "",
"text": "Hi @bigorca54You can find examples that work with array with C++ driver here - Working with BSON\nThe second piece of code you shared shouldn’t crash unless the lineItem pointer is null/junk. Perhaps try adding a check to ensure the pointer you are using to get the data is valid (null-check?).",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "Yeah turns out it works fine (the last one)… I guess I just had some other small things wrong, I fixed a bunch of stuff that was not shown with the class. Thanks Rishabh, appreciate the link and feedback!",
"username": "bigorca54"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Inserting an array of objects into through C++ | 2023-09-08T19:22:11.607Z | Inserting an array of objects into through C++ | 337 |
|
null | [
"queries",
"crud"
] | [
{
"code": "var counter = 0\n\ndb.posts.find().forEach(function (doc) {\n\tdb.posts.updateOne(\n { _id: doc._id},\n { $set : {PostIncrementId: counter }}\n\t);\n\tcounter++\n})\n",
"text": "I have a problem with the following code below which I run on a 4M+ documents collection, what I wish to do is create a new autoincrementing field for all documents but this takes too long, I can see it change all documents but it’s running for around 30 minutes now, can we optimize this code more?",
"username": "Saylent_N_A"
},
{
"code": "postIncrementIdpostIncrementIdconst totalDocuments = db.posts.countDocuments();\n\n// documents to process in batch. \nconst limit = 10000; \n\n// decide, how many batch operations we will need \n// to update all documents\nconst nIterations = Math.ceil(totalDocuments / limit);\n\nfor (let i = 0; i++; i <= nIterations) {\n const offet = i * limit;\n // call function with aggregation\n // make sure this aggregation is called sequentially,\n // not in parallel to avoid conflicts\n batchIncrement(limit, offet);\n}\n\nfunction batchIncrement(limit, offset) {\n db.posts.aggregate([\n {\n // $sort is needed so $skip and $limit stages give \n // predictable results\n $sort: {\n _id: 1\n }\n },\n // match only documents, that do not have postIncrementId yet\n {\n $match: {\n postIncrementId: null,\n }\n }, \n {\n $skip: offset, // function variable is used here!\n },\n // limit number of documents per batch\n {\n $limit: limit, // function variable is used here!\n },\n // leave only _id of each post, \n // so each document could take less RAM \n // and more documents could fit in $group stage\n {\n $project: {\n _id: true,\n }\n },\n {\n // collect all selected posts into array field \n // for later use in $project + $reduce\n $group: {\n _id: null,\n posts: {\n $push: {\n _id: '$_id'\n }\n }\n }\n },\n {\n // calculate and assign postIncrementId for each post \n $project: {\n posts: {\n $reduce: {\n input: '$posts',\n initialValue: {\n i: offset + 1, // function variable is used here!\n incrementedPosts: [],\n },\n in: {\n i: {\n $add: ['$$value.i', 1],\n },\n incrementedPosts: {\n $concatArrays: ['$$value.incrementedPosts', [\n {\n _id: '$$this._id',\n postIncrementId: '$$value.i'\n }\n ]]\n }\n }\n }\n }\n }\n },\n // convert 'posts' array back to documents\n // with $unwind + $replaceWith stages\n {\n $unwind: '$posts.incrementedPosts',\n },\n {\n $replaceWith: '$posts.incrementedPosts'\n },\n // save documents into collection,\n // this will only add 'postIncrementId' field to each document\n // other fields will not be affected\n {\n $merge: {\n into: 'posts',\n on: '_id',\n whenMatched: 'merge',\n whenNotMatched: 'discard'\n }\n }\n ]);\n}\n[\n { _id: 'P1', title: 'P1-title' },\n { _id: 'P2', title: 'P2-title' },\n { _id: 'P3', title: 'P3-title' },\n { _id: 'P4', title: 'P4-title' },\n { _id: 'P5', title: 'P5-title' },\n { _id: 'P6', title: 'P6-title' },\n { _id: 'P7', title: 'P7-title' },\n { _id: 'P8', title: 'P8-title' },\n { _id: 'P9', title: 'P9-title' }\n]\n[\n { _id: 'P1', title: 'P1-title', postIncrementId: 1 },\n { _id: 'P2', title: 'P2-title', postIncrementId: 2 },\n { _id: 'P3', title: 'P3-title', postIncrementId: 3 },\n { _id: 'P4', title: 'P4-title', postIncrementId: 4 },\n { _id: 'P5', title: 'P5-title', postIncrementId: 5 },\n { _id: 'P6', title: 'P6-title', postIncrementId: 6 },\n { _id: 'P7', title: 'P7-title', postIncrementId: 7 },\n { _id: 'P8', title: 'P8-title', postIncrementId: 8 },\n { _id: 'P9', title: 'P9-title', postIncrementId: 9 }\n]\npostIncrementIdpostIncrementId",
"text": "Hello, @Saylent_N_A ! The main issue with your approach is that you update each document with separate request. That means, if you have 4M+ documents, your code will make 4M+ requests to your database server and that is a lot of time and lots network interaction!Try to send update commands in batch. I can suggest two ways of how it can be done:Solution 1. Using bulkWrite()Solution 2. Aggregation pipeline with $merge\nThe steps are similar to the ones in previous solution, but postIncrementId calculation and document selection is done on the database-server’s side.Pseudo code (use it just as an example):Tested this aggregation pipeline on these sample documents:Documents after batchIncrement() function execution with limit=10, offset=0:Note: you may want to create index on postIncrementId field, so there won’t be two documents, that have same value for postIncrementId field.",
"username": "slava"
},
{
"code": "",
"text": "i was able to fix it by running the command on the server, i didnt know that Compass was fetching and sending data",
"username": "Saylent_N_A"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Adding a single autoincrementing field on all documents taking too long | 2023-08-26T14:53:32.253Z | Adding a single autoincrementing field on all documents taking too long | 456 |
null | [
"serverless"
] | [
{
"code": "",
"text": "Is there a MongoDB product road-map? I am particularly curious as to when MongoDB serverless instances will support triggers.",
"username": "Perminus_Gaita"
},
{
"code": "",
"text": "Hi Perminus_GaitaWe do not have a public roadmap for serverless instances. Please see your direct messages for more details.Thank you,\nAnurag Kadasne",
"username": "Anurag_Kadasne"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is there a MongoDB product roadmap? | 2023-09-08T16:19:00.611Z | Is there a MongoDB product roadmap? | 506 |
null | [
"queries",
"crud"
] | [
{
"code": "await collectionName.updateMany({\n user: userId,\n}, records, {upsert: true});\n\n// records here are array of object belonging to the userId which need to be updated / created\n// records does have userId in each of the document \nError: Invalid update pipeline operator: “chain”\n",
"text": "QueryThe below error is thrownThis error also occurs with updateOne and update",
"username": "rajat_hongal"
},
{
"code": "db.getCollection('Test').updateMany(\n{\n user:'John'\n},\n[\n {\n $set:{\n 'Name is John':true\n }\n }\n]\n)\n",
"text": "Looks like the format of your query is a bit off, see:So the second part “records” should be the actions you want to take on documents that match user:userID.So for example if I wanted to add a field, called “Name is John” to documents, I could do this:What exactly does your update statement looks like, as well as the document before what you expect it to look like?",
"username": "John_Sewell"
}
] | UpdateMany throws error for update multiple documents | 2023-09-11T10:39:18.490Z | UpdateMany throws error for update multiple documents | 325 |
null | [
"compass"
] | [
{
"code": "",
"text": "when trying to connect my database through Mongodb compass i am getting this error , any idea how to resolve it ?2281984:error:10000438:SSL routines:OPENSSL_internal:TLSV1_ALERT_INTERNAL_ERROR:…..\\third_party\\boringssl\\src\\ssl\\tls_record.cc:592:SSL alert number 80",
"username": "Kartik_Patekar"
},
{
"code": "",
"text": "I’m having the same issue4998787232:error:10000438:SSL routines:OPENSSL_internal:TLSV1_ALERT_INTERNAL_ERROR:…/…/third_party/boringssl/src/ssl/tls_record.cc:592:SSL alert number 80I got around it going to Network access > adding a new IP and clicking ALLOW ACCESS FROM ANYWHERE\n0.0.0.0/0 (includes your current IP address)",
"username": "Constanza_Mallea_Riveros"
},
{
"code": "",
"text": "You need to pick the IP address to connect\n\nerror981×477 19.3 KB\nAs you can see it’s asking to add the current IP address You have to pick that one, I had the same issue for the past few days, but now it’s fixed",
"username": "RISHIK_KUMAR_B"
},
{
"code": "",
"text": "Thank you so much, it was the solution.",
"username": "Miguel_Nino"
},
{
"code": "",
"text": "But it keeps showing me to add current ip address.\n\nimage1382×531 24.4 KB\n",
"username": "Ibrahim_Ahmed1"
},
{
"code": "",
"text": "Yes I’m encountering the same issue whenever I close the compose It seems like some technical issue",
"username": "RISHIK_KUMAR_B"
},
{
"code": "",
"text": "I’m glad that i could help you.",
"username": "RISHIK_KUMAR_B"
},
{
"code": "",
"text": "Thankyou soo much brother i really gave up trying the solution was simple . thankyou",
"username": "Kartik_Patekar"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error while connecting my database through Mongodb compass | 2023-09-07T13:30:54.044Z | Error while connecting my database through Mongodb compass | 8,115 |
null | [
"queries"
] | [
{
"code": "",
"text": "I set the webhook of GoogleChat with Mongodb Atlas to send the alerts, but its not working because it does not support with GoogleChat.\nNow it wanted to know is their any API to get the CPU utilization. OR mongodb query for the same.Thanks in Advance.",
"username": "Shivam_Tiwari2"
},
{
"code": "",
"text": "Hello, welcome to the community.Have you tried using this API?https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Monitoring-and-Logs",
"username": "Samuel_84194"
},
{
"code": "",
"text": "@Samuel_84194 Yes I have tried, But No API give CPU Utilization.\nIf you Know the exact API, Please share.Thanks in Advance.",
"username": "Shivam_Tiwari2"
},
{
"code": "m",
"text": "Yes I have tried, But No API give CPU Utilization.@Shivam_Tiwari2 - I believe this should be what you’re after. Please view the possible values for m.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongodb Atlas API for CPU Utilization? | 2023-09-08T06:52:39.912Z | Mongodb Atlas API for CPU Utilization? | 273 |
null | [
"replication"
] | [
{
"code": "",
"text": "Good day! Quite new to developing with MongoDB Atlas and I just provisioned a multi-region replicaset for my atlas cluster, with:I’m able to connect to my cluster just fine from a server located in eu-west-1 but when I try to connect with the same mongodb+srv and same credentials from a server in us-east-1, I’m getting the following error:\nError: Could not find host matching read preference { mode: “primary”, tags: [ {} ] } for set atlas-xxxApologies if I’m missing something very obvious here. Thank you!",
"username": "Jose_Jimenez3"
},
{
"code": "us-east-1",
"text": "Hi @Jose_Jimenez3 - Welcome to the community!I’m able to connect to my cluster just fine from a server located in eu-west-1 but when I try to connect with the same mongodb+srv and same credentials from a server in us-east-1all regions with AWS PrivateLink configuredCurious to know more details about how you have your AWS PrivateLink configured at a high level - Could you describe this? I’m wondering if it’s due to (as per the limitations section of the private endpoint documentation):I.e. Do you have a VPC peering connection from your VPC in us-east-1 (Client side) to your VPC in `eu-west-1) (Client side)?Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "us-east-1us-east-1eu-west-1",
"text": "Sorry Jose - I misread and confused myself with private endpoints Just clarifying you only have AWS VPC peering configured from your environment to the Atlas cluster is this correct?In this case, are you able to ping the 1 node from each region from the us-east-1 host to see if the IP addresses returned are private addresses (should be addresses that exist within the configured Atlas CIDR when setting up the VPC Peering connections in Atlas).Additionally, when you state “all regions with AWS PrivateLink configured” is this all the Atlas regions to both the us-east-1 and eu-west-1 VPC you have on your AWS end?Regards,\nJason",
"username": "Jason_Tran"
}
] | Cannot connect to a multi-region replicaset from secondary region | 2023-09-08T04:54:12.029Z | Cannot connect to a multi-region replicaset from secondary region | 371 |
null | [
"aggregation",
"mongodb-shell",
"atlas-triggers"
] | [
{
"code": "",
"text": "I have a trigger that deletes old data and but keeps latest 10 entries for each player for particular time. Hence, all the data should be deleted except that 10 entries. My code looks something like this.",
"username": "Bhakti_Vyas"
},
{
"code": "",
"text": "Hi @Bhakti_Vyas - Welcome to the community.My code looks something like this.Unfortunately i’m not able to see any code provided. Perhaps it was not pasted into the post?Note: Please redact any personal or sensitive information before posting here.Regards,\nJason",
"username": "Jason_Tran"
}
] | MongoDB Atlas trigger 50k records limitation | 2023-09-08T05:10:26.444Z | MongoDB Atlas trigger 50k records limitation | 349 |
null | [
"security"
] | [
{
"code": "",
"text": "Context\nI want to only use biometric keys as 2FA method.Issue\nI can only enroll 1 biometric key at Atlas.Pain\nWhen using Security Keys, it’s very common to be able to enroll more than 1 key, so if I lose one (damage or theft), I can use the back up one in my safety vault.Definition of Ready\nYou already support adding 1 key, I’d assume all it takes is either display the option to add it again.",
"username": "Angelo_Reale1"
},
{
"code": "[bug]",
"text": "Hi @Angelo_Reale1,Thanks for those details. I note that you’ve put the title to include [bug] but I do not believe this to be the case. This appears to be more of a feature request in which case I would recommend posting this type of feedback on the MongoDB feedback engine in which yourself and others can vote for.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi Jason.Thanks for your reply.I appreciate your perception that this does not qualify as a bug, as it intersects with a feature request.I think the more fundamental question we need to ask ourselves is: what is our definition of bug?I personally like that of Rubin J. and Dana Chisnell take on the user perspective to validate software functionally, which could help us understand that a bug can be literally any aspect the user believes to not be working as intended or according to expectations.Objectively, people use Biometric/U2F/FIDO for many reasons, including, but not limited to: reduced MITM, SIM swapping, phishing, or even mobile theft + takeover risks.Many people feel safer using only this method for 2FA.If I factually need to add a secondary authentication method, e.g. OTP, I’m usually adding a point of vulnerability to my data platform authentication mechanism. This means that if my OTP provider or setup code are compromised, “Mallory” can win control over mine - and my client’s data.I prefer to use a physical authentication mechanism because I trust it better. Products like Google, Twitter, Apple, Okta, 1password and Github, &c. support FIDO authentication and the addition of multiple security keys - they understand this need.While some people might trust SMS/Email/OTP better, and that’s OK, it’s their preference, I believe there is a bug in the way that the security keys authentication method offered in Cloud’s IAM is incomplete as it does not yet meet industry standards for the actual use-case described above.Back to the original question, is this a bug (or not)?As the user who have my intent frustrated - yes.\nAs the feature that is incomplete / does not meet quality standards - yes.Can this be a feature request instead of a bug? Also yes.I personally don’t see a difference in priority between a bug and a feature by the very taxonomy - but rather on the impact it provides by either addressing or not addressing it.Have a nice day!",
"username": "Angelo_Reale1"
}
] | [bug] [security] Add more than 1 Biometric Key as 2FA method | 2023-09-10T14:42:09.352Z | [bug] [security] Add more than 1 Biometric Key as 2FA method | 288 |
null | [
"queries",
"node-js",
"crud"
] | [
{
"code": "osumieEnemyModel.updateMany({}, { $unset: { atk_mult: '',\n\n hp_mult: '',\n\n maxHp_mult: '',\n\n minHp_mult: '',\n\n maxAtk_mult: '',\n\n minAtk_mult: '',\n\n } })\n",
"text": "Hi, I can’t remove some fields with a updateOne query or updateMany, these fields are not present anymore in the schema, but I still can’t remove them.\nI’m usingI don’t use these fields anymore as they changed name.\nEDIT: When I say I can’t remove them, running this query doesn’t change anything to the documents.",
"username": "Simon_N_A"
},
{
"code": "1test:PRIMARY> db.coll.findOne()\n{\n\t\"_id\" : ObjectId(\"6391ece76b130c4eb3fc8982\"),\n\t\"name\" : \"Max\",\n\t\"atk_mult\" : \"a\",\n\t\"hp_mult\" : \"b\",\n\t\"maxHp_mult\" : \"c\",\n\t\"minHp_mult\" : \"d\",\n\t\"maxAtk_mult\" : \"e\",\n\t\"minAtk_mult\" : \"f\"\n}\ntest:PRIMARY> db.coll.updateMany({},{$unset: {atk_mult: 1, hp_mult: 1, maxHp_mult: 1, minHp_mult: 1, maxAtk_mult: 1, minAtk_mult:1}})\n{ \"acknowledged\" : true, \"matchedCount\" : 1, \"modifiedCount\" : 1 }\ntest:PRIMARY> db.coll.findOne()\n{ \"_id\" : ObjectId(\"6391ece76b130c4eb3fc8982\"), \"name\" : \"Max\" }\n",
"text": "Hi @Simon_N_A,You need a 1 instead of an empty string.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "$unset\"\"new mongoose.Schema({\n\n name: { type: String, required: true },\n\n image: { type: String, required: true },\n\n min_hp: { type: Number, required: true },\n\n max_hp: { type: Number, required: true },\n\n atkMin_mult: {\n\n type: mongoose.Schema.Types.Decimal128, required: true, get: (v: any) => {\n\n const num = +v.toString()\n\n return num\n\n }\n\n },\n\n atkMax_mult: {\n\n type: mongoose.Schema.Types.Decimal128, required: true, get: (v: any) => {\n\n const num = +v.toString();\n\n return num;\n\n }\n\n },\n\n hpMin_mult: {\n\n type: mongoose.Schema.Types.Decimal128, required: true, get: (v: any) => {\n\n const num = +v.toString();\n\n return num;\n\n }\n\n },\n\n hpMax_mult: {\n\n type: mongoose.Schema.Types.Decimal128, required: true, get: (v: any) => {\n\n const num = +v.toString();\n\n return num;\n\n }\n\n },\n\n stun_chance: {\n\n type: mongoose.Schema.Types.Decimal128, default: 0.05, required: true, get: (v: any) => {\n\n const num = +v.toString();\n\n return num\n\n }\n\n }\n\n})\n",
"text": "Hi, thank you for replying, but changing these to 1 didn’t work. (I also saw on the docs that what we put after the fields in the $unset doesn’t change anythingThe specified value in the $unset expression (i.e. \"\" ) does not impact the operation.Here’s my schema:(I needed to .toString() because I was getting a Decimal128 Object, when I only wanted the value, if you have a cleaner way to do that, I’d be thankful too)",
"username": "Simon_N_A"
},
{
"code": "",
"text": "Hi @Simon_N_A,didn’t workWhat’s the error message? What do you get? Can you reproduce in a small script like me?I also saw on the docs that what we put after the fields in the $unset doesn’t change anythingOops my bad! \nYou are totally right but this was the only suspicious thing to me that I could find in your question and I didn’t think of it twice as I made it work in my little script.I needed to .toString() because I was getting a Decimal128 Object, when I only wanted the value, if you have a cleaner way to do that, I’d be thankful tooNo idea for this one, I’m not the best in JS. \nI stopped at “Java”. I couldn’t handle the “script”. Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I dont get any error, the data is just not modified.\nIs there an option that would prevent me from doing operations on fields that are not present in the schema ?",
"username": "Simon_N_A"
},
{
"code": "",
"text": "Hi again, apparently your solution worked when using it in mongosh, in compass.\nBut, how is it possible that the same query didn’t work using the mongoose library ?\nI’m on MongoDB 5.0.8 Community",
"username": "Simon_N_A"
},
{
"code": "",
"text": "No idea, I never used Mongoose and I generally don’t really recommend using an extra layer between the back-end code and the JS Driver.My guess is that something could be wrong in your schema or the query isn’t translated correctly and thus isn’t doing what you think it’s doing when it’s executed.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Add back the fields you want to delete to the Schema definition. Try again",
"username": "Kreshel_N_A"
},
{
"code": "",
"text": "OMG! this works like magic! Why does this work? thank you I spend the morning trying to delete this column.",
"username": "Adeline_Graciani"
}
] | Can't unset some values | 2022-12-07T20:34:35.927Z | Can’t unset some values | 3,189 |
null | [
"data-modeling",
"java"
] | [
{
"code": "",
"text": "MongoDB’s semantics of null is not the same as the semantics of null in SQL databases. As a consequent, unique indexes in MongoDB will raise uniqueness violation error when more than one document has null for the field that has unique index. Furthermore, there appears to be no way to create a unique index where nulls are ignored.MongoDB seems to have adopted the null value semantics of Java where null is a literal value assigned to variables of reference type to indicate that it has no referent. Thus, in Java null == null is true.However in the database world, null could have many meanings including indicating that the value of an attribute is unknown. Since two unknowns cannot be compared, null == null should be null, i.e., it is unknown if two unknowns are the same. With this semantics, two null values in a unique index are never equal in that null==null is not true. Thus unique indexes will allow multiple null values.This is an issue in migrating SQL database to MongoDB.",
"username": "Rafiul_Ahad"
},
{
"code": "",
"text": "Hi @Rafiul_Ahad ,You should have a look at sparse indexes and, for even more flexibility, partial indexes.Regards,\nJeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "I did. And nothing worked so far.",
"username": "Rafiul_Ahad"
},
{
"code": "",
"text": "From indexing - Unique index in mongoDB 3.2 ignoring null values - Stack Overflow, the following works for unique index for field of type String with nulls\nIndexOptions indexOptions = new IndexOptions().unique(true).partialFilterExpression(Filters.eq(“fieldName”,Filters.eq(“$type”,“string”)));\nbut the note says:\n\"Just a note from my testing for anyone else looking for the same solution: although this worked as a mechanism for enforcing uniqueness while allowing nulls, Mongo didn’t use the index for lookup queries using the indexed field and using hint() to force it resulted in very slow performance– Pete S \"Apr 20, 2018 at 10:25",
"username": "Rafiul_Ahad"
},
{
"code": "nullnullnullnullpartialFilterExpressionfieldNamestring",
"text": "Welcome to the MongoDB Community @Rafiul_Ahad!MongoDB’s semantics of null is not the same as the semantics of null in SQL databases.Null values are a placeholder for unknown values in both cases, but semantics for comparison and indexing may differ depending on the databases you are comparing. For example, SQL Server used to (and may still) disallow multiple nulls in a unique index by default – you have to create a filtered unique index (analogous to a partial index in MongoDB).I think the most straightforward way to ignore null values for the purposes of a unique index would be to avoid serialising null property values. For example, if you are using the MongoDB Java driver’s POJO support this happens by default (see Serialization Customization).For more information on null semantics in MongoDB, please refer to Query for Null or Missing Fields.I did. And nothing worked so far.It seems like you may have found an acceptable solution for your use case.If that isn’t the case or you are looking for further advice on this topic, please provide some more information on your environment:Per the answer you referenced from Stack Overflow, you can also use a partialFilterExpression matching on type to index a subset of documents based on field type (eg those where the fieldName value is a string ). However, as mentioned in the Partial Index Query Coverage documentation a partial index filter will not be chosen by the query planner unless a query includes the same filters:MongoDB will not use the partial index for a query or sort operation if using the index results in an incomplete result set.To use the partial index, a query must contain the filter expression (or a modified filter expression that specifies a subset of the filter expression) as part of its query condition.Regards.\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie,\nThank your for your response.\nMy current workaround is just that, a workaround to be able to create unique indexes on fields that have nulls in them. Since it comes with a performance penalty, it is not acceptable.\nLet me get on the soapbox for a moment. Indexes are physical database constructs. They should never be seen by the application. Asking an application to include the filter used in the index in their query is not acceptable.\nWithout divulging much about what I am doing, let’s say that an application is programmatically generating data for insertion into MongoDB or MySQL database. It is also translating queries from a high level language to database query language. For MongoDB, this application will have to deal with this MongoDB idiosyncrasy adding to the complexity of the code.\nTo answer your questions, I am using MongoDB server version 6.0.0 and the application is using MongoDB Java Sync driver version 4.7.1. Initially the unique indexes were created using the following option:\nIndexOptions indexOptions = new IndexOptions().unique(true);\nBut that caused uniqueness violation exception so I included the partialFilterExpression that I mentioned in this thread.\nThanks.\n-Rafiul",
"username": "Rafiul_Ahad"
},
{
"code": "",
"text": "I agree, this is not ideal. We should be able to enforce uniqueness when not null, it seems like such a basic functionality is missing. Yes, you can implement it on an application level, but indexes should be DB level. I see it as a safeguard from devs doing something wrong on the application level that can mess up the DB.",
"username": "Mark_Chang1"
}
] | Can't create a unique index that ignores nulls in MongoDB | 2022-11-13T23:27:14.807Z | Can’t create a unique index that ignores nulls in MongoDB | 10,082 |
null | [
"charts"
] | [
{
"code": "\"use client\"\n\nimport { useState, useEffect } from \"react\";\nimport axios from \"axios\";\nimport { toast } from \"react-hot-toast\";\nimport { useClerk } from \"@clerk/clerk-react\";\nimport { Pencil, Trash2 } from \"lucide-react\";\nimport Link from \"next/link\";\n\n\nconst Weight = () => {\n const [weight, setWeight] = useState(\"\");\n const [canSubmit, setCanSubmit] = useState(true);\n const [loading, setLoading] = useState(false);\n const [realProduct, setRealProduct] = useState([]);\n const clerk = useClerk();\n const userId = clerk.user ? clerk.user.id : null;\n\n\n useEffect(() => {\n // Načítanie existujúcich váh používateľa\n axios\n .get(\"/api/weight\")\n .then((response) => {\n const data = response.data;\n const dataWeights = data.Weights;\n\n const matchingEntries = dataWeights.filter(\n (entry) => entry.userId === userId\n );\n\n setRealProduct(matchingEntries);\n\n // Kontrola stavu canSubmit\n if (matchingEntries.length === 0) {\n setCanSubmit(true); // Ak používateľ nemá žiadny existujúci produkt, môže pridávať váhu\n } else {\n setCanSubmit(false); // Ak používateľ už má produkt, nemôže pridávať váhu\n }\n })\n .catch((error) => {\n console.error(\"Chyba při načítaní váh:\", error);\n });\n }, [userId]);\n\n const getCurrentDate = () => {\n const today = new Date();\n const year = today.getFullYear();\n const month = String(today.getMonth() + 1).padStart(2, \"0\");\n const day = String(today.getDate()).padStart(2, \"0\");\n return `${year}-${month}-${day}`;\n };\n\n const handleSubmit = async (e) => {\n e.preventDefault();\n\n if (!weight) {\n return;\n }\n\n try {\n const data = {\n weight,\n date: getCurrentDate(),\n userId,\n };\n setLoading(true);\n\n // Kontrola, či používateľ nemá žiadny existujúci produkt\n if (canSubmit) {\n // Odeslání dat na server\n const res = await axios.post(\"/api/weight\", data);\n\n const refreshPage = () => {\n setTimeout(() => {\n window.location.reload();\n }, 500); // Zmena na 500 ms (pol sekundy)\n };\n\n if (res) {\n toast.success(\"Saved!\");\n setWeight(\"\");\n refreshPage();\n setLoading(false);\n setCanSubmit(false);\n } else {\n throw new Error(\"Error\");\n }\n } else {\n toast.error(\"You already submitted your weight today.\");\n setLoading(false);\n }\n } catch (error) {\n console.error(error);\n }\n };\n\n return (\n <div className=\"flex flex-col justify-center items-center mt-5 gap-5\">\n <form onSubmit={handleSubmit}>\n <h1 className=\"text-center text-3xl font-bold mb-4\">\n Enter your weight\n </h1>\n <input\n onChange={(e) => setWeight(e.target.value)}\n value={weight}\n className=\"border border-teal-500 m-1 pl-1 p-1 rounded-lg w-56\"\n type=\"number\"\n step=\"0.1\"\n placeholder=\"Your Weight today (kg/lbs)\"\n min={1}\n max={150}\n />\n <button\n className=\"bg-black p-1 rounded-lg text-white\"\n disabled={!canSubmit}\n >\n Submit\n </button>\n </form>\n\n {canSubmit ? 
(\n <p>You can submit your weight now.</p>\n ) : (\n <p>You already submitted your weight today.</p>\n )}\n\n {loading === false &&\n realProduct.map((e) => (\n <div key={e._id} className=\"static\">\n <p className=\"m-0 p-0 static flex gap-3\">\n {e.date}: <span className=\"font-bold\">{e.weight}kg</span>{\" \"}\n <Link href={\"/edit/\" + e._id} className=\"hover:cursor-pointer\">\n <Pencil />\n </Link>\n <Link href={\"/delete/\" + e._id} className=\"hover:cursor-pointer\">\n <Trash2 color=\"#ff0000\" />\n </Link>\n </p>\n </div>\n ))}\n\n {loading === true && <div>Loading...</div>}\n\n <iframe\n className=\" shadow-lg mb-2 \"\n width=\"640\"\n height=\"480\"\n src=\"https://charts.mongodb.com/charts-healthity-hfgnz/embed/charts?id=64f48268-cd93-4215-89ef-198706220d4b&maxDataAge=60&theme=light&autoRefresh=true\"\n ></iframe>\n </div>\n );\n};\n",
"text": "I have a mongodb database and in it I have the userId from the clerk, the user who is currently logged in and his id.I compare it with the id of the users who entered their weight - their id … if it is the same, then the id created by the users for their UserId is filtered out from the clerk and show the right weights for specific user on website . well, I need a dynamic mongodb chart that would be different for each user. I made the chart and it works according to the data from the database, but I need a filter there that will show only that user’s data, that is, to the user who is logged in, only those that he sent there, not others. I’ll send the code … is it even possible to make a dynamic chart in next 13 ?export default Weight;",
"username": "JaXo_N_A"
},
{
"code": "",
"text": "Yes you can do this using Injected Filters, see Filter Embedded Charts — MongoDB Charts",
"username": "tomhollander"
}
] | Can you make a dynamic mongoDB chart based on login user ? | 2023-09-10T22:27:58.955Z | Can you make a dynamic mongoDB chart based on login user ? | 364 |
null | [] | [
{
"code": "",
"text": "Hello :Im new to mongoDb but last month i have been working with one big database and my disk is almost full , now im just asking if i copy and paste in new disk it will work normal or i need to use mongo tools to dump and restore ?Best regards",
"username": "Bruno_Lopes"
},
{
"code": "",
"text": "Hi Bruno, welcome to the community forums and I hope your journey with MongoDB is great! About the copy, I don’t see any problems with it, as long as the service is stopped. This way you will have a consistent image of the data and you will be able to copy them at will. Let me know if everything went well, or if you have any other questions!Best!",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "Hello, welcome to the MongoDB community.Make sure MongoDB is stopped for this to be done to ensure it has no data in memory. After that, you can move data from one disk to another.Any questions, count me in.",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Great , tomorrow the new disk will arrive , im going to make it tomrrow and let you know how it went .\nThanks for the help",
"username": "Bruno_Lopes"
},
{
"code": "",
"text": "Update :Mongo is runnig without problems after cloning to a bigger disk .Thank you",
"username": "Bruno_Lopes"
},
{
"code": "",
"text": "Nice! If you can mark it as solved, it helps to better index the results for others who have the same question.Best!",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Clone database! | 2023-09-08T16:27:45.175Z | Clone database! | 461 |
[
"replication"
] | [
{
"code": "cfg = rs.conf() \ncfg.members[1].priority = 0 \ncfg.members[1].hidden = true \ncfg.members[1].secondaryDelaySecs = 180 \nrs.reconfig(cfg)\n",
"text": "Hi, I am trying to setup a Replica set with 1 Secondary to have some delay from Primary.\nEven though rs.config() shows correct desired configurations, there is no delay when I make changes in Primary. The Delayed one is instantly reflecting changes. Not sure why configurations look fine but there is no delay.It will be really helpful if someone can give some suggestions to fix this or experienced something similar.\nOr if you know some guide other below doc that can help.I followed below steps to make it delay by following these docs: https://www.mongodb.com/docs/manual/tutorial/configure-a-delayed-replica-set-member/#exampleAlso attaching screenshot of rs.conf() for reference.\n\nCapture794×907 33 KB\n",
"username": "Naman_Saxena1"
},
{
"code": "",
"text": "Hi @Naman_Saxena1,\nThe configuration is correct!\nDo you read the data from the hidden node?\nYou should do it in 3 minutes.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Hi @Fabio_Ramohitaj ,\nso after inserting a new document in Primary, I instantly opened Secondary Delayed node in MongoDb Compass and it was showing latest document. I basically wanted to test if delay is working properly.\nMy understanding was MongoDb Compass will NOT show newly inserted document before 3 mins or am I missing some information?",
"username": "Naman_Saxena1"
},
{
"code": "",
"text": "Hi @Naman_Saxena1,\nAre you sure you logged in to the correct instance? Because I see that you created the replica set on the same machine, with 3 different ports, so you will have to connect to the instance with port 27012.\nTry to see it by going up directly to the server via the mongoshell.\nI created a replica set for test and it works for me.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "@Fabio_Ramohitaj yeah I am sure I am accessing correct instance.\nHmm, I will setup new replica set from scratch, maybe I made some mistake that is why delay is not happening. ",
"username": "Naman_Saxena1"
},
{
"code": "",
"text": "@Naman_Saxena1, That seems pretty strange.\nWhat version of MongoDB are you using?\nTry recreating and let me know!Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "cd C:\\Program Files\\MongoDB\\Server\\6.0\\bin\nmongod -f node1.cfg\t-> Runs on 127.0.0.1:27011\nmongod -f node2.cfg\t-> Runs on 127.0.0.1:27012\nmongod -f node3.cfg\t-> Runs on 127.0.0.1:27013\n\nmongosh --port 27011\nuse admin\nrs.initiate()\ndb.createUser({user:\"m103-admin\",pwd: \"m103-pass\",roles:[{role:\"root\",db:\"admin\"}]})\nexit\nmongosh --host \"m103-example/127.0.0.1:27011\" -u \"m103-admin\" -p \"m103-pass\" --authenticationDatabase \"admin\"\n\n10) Create config variable: rsconfig = { _id: \"m103-example\", members: [ { _id: 0, host: \"localhost:27011\" }, { _id: 1, host: \"localhost:27012\" }, { _id: 2, host: \"localhost:27013\" }] }\n11) Initiate replica: rs.reconfig(rsconfig)\n\n// To make 2nd Node Delayed\ncfg = rs.conf()\ncfg.members[1].hidden = true\ncfg.members[1].priority = 0\ncfg.members[1].secondaryDelaySecs = 600\nrs.reconfig(cfg)\n\nexit\n\nRestart node1.cfg, node2.cfg and node3.cfg\n\n// Connect to 27011\nmongosh --host \"m103-example/127.0.0.1:27011\" -u \"m103-admin\" -p \"m103-pass\" --authenticationDatabase \"admin\"\nuse practicedb\t\t\t\t\t\ndb.users.insertOne({name:'User1'})\t\nexit\n\n\nmongosh --host \"m103-example/127.0.0.1:27012\" -u \"m103-admin\" -p \"m103-pass\" --authenticationDatabase \"admin\"\nshow databases\t\t\t\t-> Showing practicedb before 5 mins (No Delay)\ndb.users.find({})\t\t\t\t-> Showing newly inserted document before 5 mins (No Delay)\n",
"text": "@Fabio_Ramohitaj I am using version 6.0Even tried creating a new replica set and still there is no delay. I am sharing steps I am following, maybe you will be able notice if I am making some mistake.Or let me know if you have some guide that I can follow.Thanks a lot for helping!",
"username": "Naman_Saxena1"
},
{
"code": "",
"text": "Hi @Naman_Saxena1,\nReally strange.\nIf I can, I will do more tests on my machine when I get home tonight.Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "m103-example/127.0.0.1:27012127.0.0.1:27012\nm103-example/127.0.0.1:27012\n",
"text": "When you connect withm103-example/127.0.0.1:27012you are not reading from the secondary because you are connecting to the replica set. The first thing the shell does is to read the rs config and reconnect and read from the primary. To connect and read to the secondary you have to remove the replica set name from the connection. this means you need to connect withrather than",
"username": "steevej"
},
{
"code": "",
"text": "@steevej You are right!\nConnected using 127.0.0.1:27012 and then ran db.getMongo().setReadPref(“secondary”) for testing.\nIt is working. Thanks a lot! ",
"username": "Naman_Saxena1"
},
{
"code": "",
"text": "Hi @steevej,\nHonestly, I didn’t know it connected to the cluster, I suspected it, but I wanted to test before I said nonsense.Best Rergards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Issue with MongoDb Delay Replica | 2023-09-09T15:36:36.009Z | Issue with MongoDb Delay Replica | 439 |
|
null | [] | [
{
"code": "valueOf()exports(\n EJSON.parse('{\"$numberDouble\":\"2\"}'),\n EJSON.parse('{\"$numberInt\":\"2\"}'),\n)\nexports = async function(a, b) {\n console.log(a)\n console.log(a.valueOf())\n console.log(a.toString())\n console.log(b)\n console.log(b.valueOf())\n console.log(b.toString())\n console.log(a)\n console.log(a.valueOf())\n console.log(b)\n console.log(b.valueOf())\n console.log(a == b)\n console.log(a.valueOf() == b)\n console.log(a == b.valueOf())\n console.log(a.valueOf() == b.valueOf())\n console.log(a === b)\n console.log(a.valueOf() === b)\n console.log(a === b.valueOf())\n console.log(a.valueOf() === b.valueOf())\n console.log(typeof a)\n console.log(typeof b)\n console.log(typeof a.valueOf())\n console.log(typeof b.valueOf())\n console.log(Object.prototype.toString.call(a))\n console.log(Object.prototype.toString.call(b))\n console.log(Object.prototype.toString.call(a.valueOf()))\n console.log(Object.prototype.toString.call(b.valueOf()))\n console.log(EJSON.stringify(a))\n console.log(EJSON.stringify(b))\n console.log(EJSON.stringify(a.valueOf()))\n console.log(EJSON.stringify(b.valueOf()))\n};\n2\n2\n2\n2\n2\n2\n2\n2\n2\n2\ntrue\ntrue\nfalse\nfalse\ntrue\ntrue\nfalse\nfalse\nnumber\nnumber\nnumber\nnumber\n[object Number]\n[object Number]\n[object Number]\n[object Number]\n{\"$numberDouble\":\"2\"}\n{\"$numberInt\":\"2\"}\n{\"$numberDouble\":\"2\"}\n{\"$numberLong\":\"2\"}\n",
"text": "It looks like a comparison is broken between Double and Long numbers in Realm functions. Moreover, it returns Long when calling valueOf() on the Int type.Parameters:Code:Output:It would be good if someone has an explanation for this magic behavior",
"username": "Anton_P"
},
{
"code": "",
"text": "Hi @Anton_P, thanks for surfacing this issue! We filed a ticket internally to investigate the behavior, will provide an update once available.",
"username": "Laura_Zhukas1"
},
{
"code": "",
"text": "Hi @Laura_Zhukas1, thanks ",
"username": "Anton_P"
},
{
"code": "context.http.getJSONvalueOf()const response = await context.http.get({ url: 'https://www.mongodb.com/wtf' });\nconsole.log(response.statusCode); // 404\nconsole.log(JSON.stringify(response.statusCode)); // 404\nconsole.log(EJSON.stringify(response.statusCode)); // {\"$numberInt\":\"404\"}\nconsole.log(response.statusCode.valueOf()); // 404\nconsole.log(JSON.stringify(response.statusCode.valueOf())); // undefined - WTF O_o\nconsole.log(JSON.stringify({ statusCode: response.statusCode.valueOf() })); // {} - WTF O_o\nconsole.log(EJSON.stringify(response.statusCode.valueOf())); // {\"$numberLong\":\"404\"}\n",
"text": "@Laura_Zhukas1 I will just post all magic behaviors here.context.http.get response status code is some weird number that JSON is unable to stringify if valueOf() is called",
"username": "Anton_P"
}
] | Realm Functions magic behavior | 2023-08-19T15:36:20.689Z | Realm Functions magic behavior | 567 |
null | [
"python"
] | [
{
"code": "",
"text": "Need to work with MongoDB from MATLAB but don’t have access to the Database Toolbox? Work with MongoDB in MATLAB Using the pymongo Python Module shows one way to do it.I’m far from a MongoDB expert so if anyone has tips on improving either my explanations or the MongoDB code examples, send them in.\n– Al",
"username": "Al_Danial"
},
{
"code": "",
"text": "Nice, but you’re showing without any authentication, which is the default state of a MongoDB installation, but hardly a practical usage model.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "I do show how the authentication setup looks like for a real connection at Work with MongoDB in MATLAB Using the pymongo Python Module · TechThought but you’re right, the individual code examples don’t include this.",
"username": "Al_Danial"
},
{
"code": "",
"text": "Your exercise is good material.",
"username": "Jack_Woehr"
},
{
"code": "pip install pymongo\nimport pymongo\nfrom pymongo import MongoClient\nclient = MongoClient('mongodb://localhost:27017/')\nfind()db = client['myDatabase']\ncollection = db['myCollection']\nresult = collection.find({})\n",
"text": "Copy codepythonCopy codepythonCopy codepythonCopy codeTo view more information about PyMongo and its functionalities, refer to the official documentation here.If you have any specific queries or need more assistance, please let me know.",
"username": "mojot1_Lisa"
},
{
"code": "",
"text": "PyMongo is a Python distribution containing tools for working with MongoDB, and is the recommended way to work with MongoDB from Python. This documentation attempts to explain everything you need to know to use PyMongo. Installing / Upgrading. Instructions on how to get the distribution.\njitter speed test",
"username": "junaid_Shah"
}
] | MongoDB + MATLAB via pymongo how-to | 2022-07-23T16:10:13.303Z | MongoDB + MATLAB via pymongo how-to | 3,011 |
null | [] | [
{
"code": "",
"text": "HI Team,\nAfter restarting the mongodb server we can observe that the status of mongo db server showing as Active:failed after running the command systemctl status mongod. Pls help on issues.Regards,",
"username": "anjaneya_prasad"
},
{
"code": "",
"text": "Hi @anjaneya_prasad and welcome to the MongoDB Community forums. You would need to look at the log files to figure out why the process failed. Without seeing the logs we can’t help point you in a direction to resolve the issue.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @anjaneya_prasad,In addition to an excerpt of the MongoDB server log it would be helpful to include:As @Doug_Duncan mentioned, if a MongoDB process fails to start or exits unexpectedly the server log will usually include some error messages relating to the reason for shutting down the process.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "HI Team,I’m sharing my mongodb logs kindly check and resolve the issue. and below details.-O/S version - NAME=Red Hat Enterprise Linux Server VERSION= 7.9\n-MongoDB server version - MongoDB shell version v5.0.9\n-Type of deployment - replica set\n-This is a existing installation2022-09-19T18:57:11.247+0530 E STORAGE [initandlisten] Failed to set up listener: SocketException: Cannot assign requested address\n2022-09-19T18:57:11.248+0530 I CONTROL [initandlisten] now exiting\n2022-09-19T18:57:11.248+0530 I CONTROL [initandlisten] shutting down with code:48\n2022-09-20T11:28:41.904+0530 I CONTROL [main] ***** SERVER RESTARTED *****\n2022-09-20T11:28:41.909+0530 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] MongoDB starting : pid=230912 port=27017 dbpath=/var/lib/mongo 64-bit host=dctestapi1\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] db version v4.2.0\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] git version: a4b751dcf51dd249c5865812b390cfd1c0129c30\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] allocator: tcmalloc\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] modules: none\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] build environment:\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] distmod: rhel70\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] distarch: x86_64\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] target_arch: x86_64\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] options: { config: “/etc/mongod.conf”, net: { bindIp: “10.1.4.33”, port: 27017 }, processManagement: { fork: true, pidFilePath: “/var/run/mongodb/mongod.pid”, timeZoneInfo: “/usr/share/zoneinfo” }, replication: { replSetName: “tykrs” }, storage: { dbPath: “/var/lib/mongo”, journal: { enabled: true } }, systemLog: { destination: “file”, logAppend: true, path: “/var/log/mongodb/mongod.log” } }\n2022-09-20T11:28:42.599+0530 E STORAGE [initandlisten] Failed to set up listener: SocketException: Cannot assign requested address\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] now exiting\n2022-09-20T11:28:42.599+0530 I CONTROL [initandlisten] shutting down with code:48\n2022-09-20T11:28:49.094+0530 I CONTROL [main] ***** SERVER RESTARTED *****\n2022-09-20T11:28:49.099+0530 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] MongoDB starting : pid=230944 port=27017 dbpath=/var/lib/mongo 64-bit host=dctestapi1\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] db version v4.2.0\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] git version: a4b751dcf51dd249c5865812b390cfd1c0129c30\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] allocator: tcmalloc\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] modules: none\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] build environment:\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] distmod: rhel70\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] distarch: x86_64\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] target_arch: x86_64\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] options: { 
config: “/etc/mongod.conf”, net: { bindIp: “10.1.4.33”, port: 27017 }, processManagement: { fork: true, pidFilePath: “/var/run/mongodb/mongod.pid”, timeZoneInfo: “/usr/share/zoneinfo” }, replication: { replSetName: “tykrs” }, storage: { dbPath: “/var/lib/mongo”, journal: { enabled: true } }, systemLog: { destination: “file”, logAppend: true, path: “/var/log/mongodb/mongod.log” } }\n2022-09-20T11:28:49.110+0530 E STORAGE [initandlisten] Failed to set up listener: SocketException: Cannot assign requested address\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] now exiting\ncally disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] MongoDB starting : pid=230944 port=27017 dbpath=/var/lib/mongo 64-bit host=dctestapi1\n2022-09-20T11:28:49.110+0530 I CONTROL [initandlisten] shutting down with code:48",
"username": "anjaneya_prasad"
},
{
"code": "",
"text": "2022-09-20T11:28:49.110+0530 E STORAGE [initandlisten] Failed to set up listener: SocketException: Cannot assign requested addressYou are trying to bind to an IP address that does not belong to this machine.",
"username": "steevej"
},
{
"code": "",
"text": "Hello communityI have the same problem… Any solution please?Best regards.",
"username": "Jose_Manuel"
},
{
"code": "",
"text": "By same problem you mean sysctl error or listener setup error?\nIs your bindIp value correct?\nAre you connecting remotely?Can you connect locally",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hello misterI am going to do all again and we´ll see what happens. Because I getout to trash the virtual machine so…\nI will informe about that as soon as possible.Best regards.",
"username": "Jose_Manuel"
},
{
"code": "",
"text": "The problem is that MONGODB version 6 needs virtual hardware to run so, I installed a old version of mongodb.Best regards.",
"username": "Jose_Manuel"
},
{
"code": "",
"text": "I’m unable to connect mongoDB with redhat, below is the log{“t”:{“$date”:“2023-09-10T06:06:20.630+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:20698, “ctx”:“main”,“msg”:“***** SERVER RESTARTED **“}\n{“t”:{”$date\":“2023-09-10T06:06:20.647+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“main”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:21},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:21},“outgoing”:{“minWireVersion”:6,“maxWireVersion”:21},“isInternalClient”:true}}}\n{“t”:{“$date”:“2023-09-10T06:06:20.649+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“main”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{“$date”:“2023-09-10T06:06:20.649+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{“$date”:“2023-09-10T06:06:20.797+05:30”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“namespace”:“config.tenantMigrationDonors”}}\n{“t”:{“$date”:“2023-09-10T06:06:20.797+05:30”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“namespace”:“config.tenantMigrationRecipients”}}\n{“t”:{“$date”:“2023-09-10T06:06:20.797+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{“$date”:“2023-09-10T06:06:20.797+05:30”},“s”:“I”, “c”:“TENANT_M”, “id”:7091600, “ctx”:“main”,“msg”:“Starting TenantMigrationAccessBlockerRegistry”}\n{“t”:{“$date”:“2023-09-10T06:06:20.797+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:39591,“port”:27017,“dbPath”:“/var/lib/mongo”,“architecture”:“64-bit”,“host”:“MongoVM”}}\n{“t”:{“$date”:“2023-09-10T06:06:20.797+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“7.0.1”,“gitVersion”:“425a0454d12f2664f9e31002bbe4a386a25345b5”,“openSSLVersion”:“OpenSSL 1.1.1g FIPS 21 Apr 2020”,“modules”:[“enterprise”],“allocator”:“tcmalloc”,“environment”:{“distmod”:“rhel80”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}\n{“t”:{“$date”:“2023-09-10T06:06:20.797+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:51765, “ctx”:“initandlisten”,“msg”:“Operating System”,“attr”:{“os”:{“name”:“CentOS Linux release 8.3.2011”,“version”:“Kernel 4.18.0-240.1.1.el8_3.x86_64”}}}\n{“t”:{“$date”:“2023-09-10T06:06:20.797+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:21951, “ctx”:“initandlisten”,“msg”:“Options set by command line”,“attr”:{“options”:{“config”:“/etc/mongod.conf”,“net”:{“bindIp”:“127.0.0.1”,“port”:27017},“processManagement”:{“timeZoneInfo”:“/usr/share/zoneinfo”},“storage”:{“dbPath”:“/var/lib/mongo”},“systemLog”:{“destination”:“file”,“logAppend”:true,“path”:“/var/log/mongodb/mongod.log”}}}}\n{“t”:{“$date”:“2023-09-10T06:06:20.799+05:30”},“s”:“E”, “c”:“NETWORK”, “id”:23024, “ctx”:“initandlisten”,“msg”:“Failed to unlink socket file”,“attr”:{“path”:“/tmp/mongodb-27017.sock”,“error”:“Operation not permitted”}}\n{“t”:{“$date”:“2023-09-10T06:06:20.799+05:30”},“s”:“F”, “c”:“ASSERT”, “id”:23091, “ctx”:“initandlisten”,“msg”:“Fatal 
assertion”,“attr”:{“msgid”:40486,“file”:“src/mongo/transport/asio/asio_transport_layer.cpp”,“line”:1202}}\n{“t”:{“$date”:“2023-09-10T06:06:20.799+05:30”},“s”:“F”, “c”:“ASSERT”, “id”:23092, “ctx”:“initandlisten”,“msg”:\"\\n\\naborting after fassert() failure\\n\\n”}{“t”:{“$date”:“2023-09-10T06:09:17.619+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:20698, “ctx”:“main”,“msg”:“***** SERVER RESTARTED *****”}\n{“t”:{“$date”:“2023-09-10T06:09:17.621+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23285, “ctx”:“main”,“msg”:“Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’”}\n{“t”:{“$date”:“2023-09-10T06:09:17.628+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4915701, “ctx”:“main”,“msg”:“Initialized wire specification”,“attr”:{“spec”:{“incomingExternalClient”:{“minWireVersion”:0,“maxWireVersion”:21},“incomingInternalClient”:{“minWireVersion”:0,“maxWireVersion”:21},“outgoing”:{“minWireVersion”:6,“maxWireVersion”:21},“isInternalClient”:true}}}\n{“t”:{“$date”:“2023-09-10T06:09:17.628+05:30”},“s”:“I”, “c”:“NETWORK”, “id”:4648601, “ctx”:“main”,“msg”:“Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.”}\n{“t”:{“$date”:“2023-09-10T06:09:17.643+05:30”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationDonorService”,“namespace”:“config.tenantMigrationDonors”}}\n{“t”:{“$date”:“2023-09-10T06:09:17.643+05:30”},“s”:“I”, “c”:“REPL”, “id”:5123008, “ctx”:“main”,“msg”:“Successfully registered PrimaryOnlyService”,“attr”:{“service”:“TenantMigrationRecipientService”,“namespace”:“config.tenantMigrationRecipients”}}\n{“t”:{“$date”:“2023-09-10T06:09:17.644+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:5945603, “ctx”:“main”,“msg”:“Multi threading initialized”}\n{“t”:{“$date”:“2023-09-10T06:09:17.644+05:30”},“s”:“I”, “c”:“TENANT_M”, “id”:7091600, “ctx”:“main”,“msg”:“Starting TenantMigrationAccessBlockerRegistry”}\n{“t”:{“$date”:“2023-09-10T06:09:17.644+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:4615611, “ctx”:“initandlisten”,“msg”:“MongoDB starting”,“attr”:{“pid”:39681,“port”:27017,“dbPath”:“/var/lib/mongo”,“architecture”:“64-bit”,“host”:“MongoVM”}}\n{“t”:{“$date”:“2023-09-10T06:09:17.645+05:30”},“s”:“I”, “c”:“CONTROL”, “id”:23403, “ctx”:“initandlisten”,“msg”:“Build Info”,“attr”:{“buildInfo”:{“version”:“7.0.1”,“gitVersion”:“425a0454d12f2664f9e31002bbe4a386a25345b5”,“openSSLVersion”:“OpenSSL 1.1.1g FIPS 21 Apr 2020”,“modules”:[“enterprise”],“allocator”:“tcmalloc”,“environment”:{“distmod”:“rhel80”,“distarch”:“x86_64”,“target_arch”:“x86_64”}}}}",
"username": "Saurabh_verma2"
}
] | Active:failed after running the command systemctl status mongod | 2022-09-19T12:11:42.743Z | Active:failed after running the command systemctl status mongod | 4,168 |
null | [] | [
{
"code": "/var/lib/mongodb",
"text": "Hi there.I blindly updated mongo to 7.0 without any backups and without setting the feature compatibility command, and now I’m getting error 62 when trying to start.Is there any way to fix this without deleting the /var/lib/mongodb directory (without losing the data)?Thanks.",
"username": "Juanpi_N_A"
},
{
"code": "",
"text": "Good afternoon, welcome to the MongoDB community.You can downgrade the cluster and try starting to see what the problem is. Also, don’t you have replicas in your cluster?",
"username": "Samuel_84194"
},
{
"code": "featureCompatibilityVersionstorage.journal.enabled",
"text": "I just tried that after making the post. Started all over again following the update guide and everything worked fine.Seems the issue was caused because of featureCompatibilityVersion set to 5.0 instead of 6.0 before upgrading. Because when I deleted the removed storage.journal.enabled config after updating to 7.0 I got the error 62.This is just a dev server I have working as standalone, so no replicas.Thanks anyway ",
"username": "Juanpi_N_A"
},
{
"code": "",
"text": "I’m glad you got it resolved, I’m at your disposal!",
"username": "Samuel_84194"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Any way to recovery from error 62 without losing data? | 2023-09-09T19:34:33.633Z | Any way to recovery from error 62 without losing data? | 332 |
null | [
"queries",
"node-js"
] | [
{
"code": "",
"text": "Dear MongoDB Support Team,I hope this message finds you well. I am reaching out to report a significant issue I am experiencing with my MongoDB cluster, which is unfortunately hindering my project’s progress. Despite having subscribed to a paid plan and maintaining a relatively low number of requests, I am encountering noticeably slow performance. This has been a persistent issue, and I am eager to find a resolution as quickly as possible.Here are the specifics of my current situation:I am quite concerned as this issue is affecting the performance and usability of my application. I would greatly appreciate it if someone could assist me in identifying the root cause and suggesting possible solutions to mitigate this problem.Thank you for your attention to this matter. I look forward to hearing from you soon with insights or troubleshooting steps to help resolve this issue.",
"username": "Manish_Kukreja"
},
{
"code": "",
"text": "Good afternoon, welcome to the MongoDB community.Have you tried opening support on Atlas, since you have the paid plan?I didn’t find the attachments, could you go to the thread? Also, a look at the collection and which queries you are noticing are having degraded performance. In addition to the M10 cluster having 0.5vCPU, I have already run several productive workloads and had no degradation problems. I am available to try to help.",
"username": "Samuel_84194"
}
] | Urgent Assistance Required: Slow Cluster Performance Despite Minimal Requests on a Paid Plan | 2023-09-09T06:40:37.237Z | Urgent Assistance Required: Slow Cluster Performance Despite Minimal Requests on a Paid Plan | 328 |
null | [
"queries",
"data-modeling"
] | [
{
"code": "{\n name : \"A\",\n\touterArr : [\n\t\t{\n\t\t\touterId : 1,\n\t\t\tinnerArr : [\n\t\t\t\t{\n\t\t\t\t\t { date: '2023-08-1', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-2', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-3', type: 'Normal'}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\touterId : 2,\n\t\t\tinnerArr : [\n\t\t\t\t{\n\t\t\t\t\t { date: '2023-08-1', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-2', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-3', type: 'Normal'}\n\t\t\t\t}\n\t\t\t]\n\t\t},\n\t\t{\n\t\t\touterId : 3,\n\t\t\tinnerArr : [\n\t\t\t\t{\n\t\t\t\t\t { date: '2023-08-1', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-2', type: 'Normal'},\n\t\t\t\t\t { date: '2023-08-3', type: 'Normal'}\n\t\t\t\t}\n\t\t\t]\n\t\t}\n\t]\n}\ndb.collection.find(\n { name: \"abc\", \"outerArr.outerId\": 1, \"outerArr.innerArr.date\": \"2023-08-1\" }\n)\n",
"text": "Hi,I have a schema like thisI want to find a document with inputs\nname = “A” and outerId=1 and date= “2023-08-1”Here is my query for the sameHowever, my intention is to get/retrieve only innerArr element matching date{ date: ‘2023-08-1’, type: ‘Normal’}But i am getting complete data since i do not have any filter or projection.Please help in writing projection field to get only{ date: ‘2023-08-1’, type: ‘Normal’}",
"username": "Manjunath_k_s"
},
{
"code": "",
"text": "A aggregation with $match, $unwind and repalce root operation could be the solution here.Build the aggregation pipeline stage by stage so you can track whats going on.You may also want to think about changing the date to a date object as opposed to string…",
"username": "John_Sewell"
},
{
"code": "",
"text": "$replaceRootBeautiful!\nThanks a lot John. Enjoyed the process of building pipeline and observing outcome in each stage. Great to see its working.One question",
"username": "Manjunath_k_s"
},
{
"code": "",
"text": "Should not in this case as the primary match should identify the actual document and then you are just manipulating the details of one document. Of course the alternative is to do the extraction in code as opposed to on the server, which probably is more of an overhead.Date are actually stored as numbers, so the storage needs are much smaller than a string. Its also much faster to compare to numbers than two strings so you get a boost there.",
"username": "John_Sewell"
}
] | How to write projection to get nested object only from collection schema | 2023-09-09T08:09:22.716Z | How to write projection to get nested object only from collection schema | 352 |
null | [] | [
{
"code": "",
"text": "I am trying to connect to Mongo shell, this is the error am receiving (“MongoNetworkError: Client network socket disconnected before secure TLS connection was established”)",
"username": "Ibrahim_Ahmed1"
},
{
"code": "",
"text": "Hello @Ibrahim_Ahmed1 ,Welcome to The MongoDB Community Forums! Can you please confirm if you are trying to connect to MongoDB Atlas?\nIf yes, then please confirm if you have added your IP address to IP Access List in Network Access tab of Atlas menu?\nAlso, please make sure that any firewall is not causing this network issue.\nOnce that is done, please try to connect again.Let me know if this works! Cheers!\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "Am trying to connect to mongo compass, but it keeps showing me to add current IP address.\n\nimage1382×531 24.4 KB\n",
"username": "Ibrahim_Ahmed1"
}
] | Mongo Network Error | 2023-09-04T17:40:32.312Z | Mongo Network Error | 598 |
[] | [
{
"code": "",
"text": "why Date need ‘new’ keyword to create Date object and ObjectId or BinData etc. do not need ‘new’ keyword? I was hoping for consistency.",
"username": "ajinkya_shidhore1"
},
{
"code": "DateDatenewDateDatenew Date()Date()ObjectIdBinDatanew",
"text": "Hey @ajinkya_shidhore1,Welcome to the MongoDB Community!why Date need ‘new’ keyword to create Date object and ObjectId or BinData etc. do not need ‘new’ keyword? I was hoping for consistency.The Date object is a built-in constructor in JavaScript, and when you create a Date object, you use the “new” keyword to call the constructor.The Date constructor function requires the use of the “new” keyword because it creates an instance of the “Date” class.On the other hand, ObjectId and BinData are not classes but rather objects created by MongoDB. They have their own set of methods and properties, and they can be used directly without requiring the creation of instances through a constructor. Therefore, there is no need to use the new keyword when working with these objects.I hope this clarifies your doubts.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "\nimage709×148 2.27 KB\nI think adding or not adding ‘new’ keyword does not make any difference. I tried with different data types. It creates more confusion. Also, IMO mongodb should have it’s own rules and should not mix itself with javascript.",
"username": "ajinkya_shidhore1"
}
] | Why Date need 'new' keyword to create Date object and ObjectId do not need any? | 2023-09-07T05:51:42.479Z | Why Date need ‘new’ keyword to create Date object and ObjectId do not need any? | 223 |
|
null | [
"aggregation",
"queries",
"node-js",
"performance",
"react-js"
] | [
{
"code": "createdAt: 1\n businessUnitId: 1\n commodityId: 1\n commodityVariantId: 1\n createdBy: 1\n isDeleted: 1\nisSLCMQcInspection: 1\ncommodityDetail.CIDNumber_text\nconst result = await this.qcInspectionModel.aggregate([],{ explain: true })async getQCResultSLcm(\n filters: GetAllQcInspectionWithFilterSlcmDto,\n businessUnitId: string,\n ) {\n let startDateQuery = {};\n let endDateQuery = {};\n let commoditySearchQuery = {};\n let variantSearchQuery = {};\n const statusQuery = {};\n let cidNumberSearchQuery = {};\n let lotNoSearchQuery = {};\n const businessUnitFilterQuery = {};\n let generalSearchQuery = {};\n\n if (filters.startDate) {\n startDateQuery = {\n $expr: {\n $gte: [\n '$createdAt',\n {\n $dateFromString: {\n dateString: filters.startDate,\n timezone: '+05:30',\n format: '%m-%d-%Y',\n },\n },\n ],\n },\n };\n }\n\n if (filters.endDate) {\n endDateQuery = {\n $expr: {\n $lt: [\n '$createdAt',\n {\n $dateAdd: {\n startDate: {\n $dateFromString: {\n dateString: filters.endDate,\n timezone: '+05:30',\n format: '%m-%d-%Y',\n },\n },\n unit: 'day',\n amount: 1,\n },\n },\n ],\n },\n };\n }\n\n // if (filters.startDate) {\n // startDateQuery = { createdAt: { $gte: DateTime.fromFormat(filters.startDate, \"MM-dd-yyyy\").setZone(\"+05:30\").toJSDate() } }\n // }\n // if (filters.endDate) {\n // endDateQuery = { createdAt: { $lt: DateTime.fromFormat(filters.startDate, \"MM-dd-yyyy\").plus({ days: 1 }).setZone(\"+05:30\").toJSDate() } }\n // }\n\n if (filters.searchByCommodity) {\n commoditySearchQuery = {\n 'commodityData.name': {\n $regex: `${filters.searchByCommodity}`,\n $options: 'i',\n },\n };\n }\n if (filters.searchByVariant) {\n variantSearchQuery = {\n 'commodityVariantData.name': {\n $regex: `${filters.searchByVariant}`,\n $options: 'i',\n },\n };\n }\n\n if (filters.searchByStatus) {\n statusQuery['status'] = filters.searchByStatus;\n }\n\n if (filters.searchByCIDNumber) {\n cidNumberSearchQuery = {\n $or: [\n {\n 'commodityDetail.CIDNumber': {\n $regex: `${filters.searchByCIDNumber}`,\n $options: 'i',\n },\n },\n // {\n // 'businessUnitData.name': {\n // $regex: `${filters.searchByCIDNumber}`,\n // $options: 'i',\n // },\n // },\n // {\n // 'commodityDetail.LOTNumber': {\n // $regex: `${filters.searchByLotNo}`,\n // $options: 'i',\n // },\n // },\n // {\n // 'qcId': {\n // $regex: `${filters.searchByCIDNumber}`,\n // $options: 'i',\n // },\n // },\n ],\n };\n }\n if (filters.searchByLotNo) {\n lotNoSearchQuery = {\n $or: [\n {\n 'commodityDetail.LOTNumber': {\n $regex: `${filters.searchByLotNo}`,\n $options: 'i',\n },\n },\n ],\n };\n }\n if (filters.searchByGeneralSearch) {\n generalSearchQuery = {\n $or: [\n {\n qcId: {\n $regex: `${filters.searchByGeneralSearch}`,\n $options: 'i',\n },\n },\n {\n 'businessUnitData.name': {\n $regex: `${filters.searchByGeneralSearch}`,\n $options: 'i',\n },\n },\n {\n 'userData.name': {\n $regex: `${filters.searchByGeneralSearch}`,\n $options: 'i',\n },\n },\n ],\n };\n }\n\n if (businessUnitId) {\n businessUnitFilterQuery['businessUnitId'] = new mongoose.Types.ObjectId(\n businessUnitId,\n );\n }\n // const startTime = Date.now();\n const result = await this.qcInspectionModel.aggregate([\n {\n $match: {\n $and: [\n startDateQuery,\n endDateQuery,\n statusQuery,\n businessUnitFilterQuery,\n { isDeleted: false },\n { isSLCMQcInspection: true },\n ],\n },\n },\n {\n $lookup: {\n from: 'mastercommodities',\n localField: 'commodityId',\n pipeline: [\n {\n $project: {\n name: 1,\n },\n },\n ],\n foreignField: '_id',\n as: 'commodityData',\n 
},\n },\n {\n $unwind: '$commodityData',\n },\n {\n $lookup: {\n from: 'commodityvariants',\n localField: 'commodityVariantId',\n pipeline: [\n {\n $project: {\n name: 1,\n },\n },\n ],\n foreignField: '_id',\n as: 'commodityVariantData',\n },\n },\n {\n $unwind: '$commodityVariantData',\n },\n {\n $lookup: {\n from: 'businessunits',\n localField: 'businessUnitId',\n pipeline: [\n {\n $lookup: {\n from: 'businesses',\n localField: 'businessId',\n foreignField: '_id',\n as: 'businessClientName',\n },\n },\n {\n $unwind: '$businessClientName',\n },\n {\n $project: {\n name: 1,\n businessClientName: '$businessClientName.displayName',\n },\n },\n ],\n foreignField: '_id',\n as: 'businessUnitData',\n },\n },\n {\n $unwind: {\n path: '$businessUnitData',\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $lookup: {\n from: 'users',\n localField: 'createdBy',\n foreignField: '_id',\n as: 'userData',\n pipeline: [\n {\n $project: {\n firstName: 1,\n lastName: 1,\n _id: 0,\n name: { $concat: ['$firstName', ' ', '$lastName'] },\n },\n },\n ],\n },\n },\n {\n $unwind: {\n path: '$userData',\n preserveNullAndEmptyArrays: true,\n },\n },\n {\n $match: {\n $and: [\n commoditySearchQuery,\n variantSearchQuery,\n generalSearchQuery,\n cidNumberSearchQuery,\n lotNoSearchQuery\n ],\n },\n },\n {\n $sort: {\n createdAt:\n filters.sortOrder && filters.sortOrder != SortOrder.Ascending\n ? SortOrder.Descending\n : SortOrder.Ascending,\n },\n },\n {\n $project: {\n _id: 1,\n status: 1,\n commodityData: 1,\n commodityDetail: 1,\n commodityVariantData: 1,\n createdAt: 1,\n qcId: 1,\n sampleName: 1,\n businessUnitData: 1,\n userData: 1,\n location: 1,\n middlewareStatus: 1,\n },\n },\n {\n $facet: {\n records: [\n { $skip: (filters.pageNumber - 1) * filters.count },\n { $limit: filters.count * 1 },\n ],\n total: [{ $count: 'count' }],\n },\n }\n ]);\n // const endTime = Date.now();\n // const executionTimeMs = endTime - startTime;\n // console.log('Execution time:', executionTimeMs, 'ms');\n return result;\n }\n",
"text": "Hii, The aggregation pipeline taking almost 20 to 25 sec to execute and give response.But inspite of creating indexes it is still taking time 20 to 22 sec.I guess these lookups takes more time ,Byt why and how I can solve this issue?\nNote:It fetches mostly 30400 records. My MongoDB version is 5.0.20…I have created Indexes,And I am strange whenever I am using explain() with this pipeline getting “MongoInvalidArgumentError: Option “explain” cannot be used on an aggregate call with writeConcern” error.\nMy syntax for explain() was:const result = await this.qcInspectionModel.aggregate([],{ explain: true })Aggregation pipeline:",
"username": "Umasankar_Swain"
},
{
"code": "",
"text": "I’m not sure putting “Solve ASAP” on your message title is going to get people running to try and resolve the issue.Have you tried commenting out pipeline stages and trying to debug what’s causing the timeouts yourself?There are a load of filters you’re passing into the aggregate query that are not documented so we can’t see what’s going on, you have also not included example documents if someone wanted to try and reproduce the issue.Have you tried running the query in compass or the shell to get around the explain issue you’re having?",
"username": "John_Sewell"
},
{
"code": "",
"text": "Hii @John_Sewell ,thanks for your replay,Yes I have tried by commenting out pipeline stages .The result I got most of the lookups taking time 5 sec to 7 sec for execution.Now I have updated my code ,you can find filters there.But why I am unable to use explain() ,getting ‘MongoInvalidArgumentError: Option “explain” cannot be used on an aggregate call with writeConcern’ error.",
"username": "Umasankar_Swain"
},
{
"code": "",
"text": "Does you node code set the default write concern at a connection level? In code or in the connection string?",
"username": "John_Sewell"
},
{
"code": "",
"text": "No,there is no code which sets the writeConcern.",
"username": "Umasankar_Swain"
},
{
"code": "",
"text": "What’s your full connection string, with redacted username / password?",
"username": "John_Sewell"
},
{
"code": "",
"text": "mongodb+srv://user:password.mongodb.net/test?retryWrites=true&w=majority",
"username": "Umasankar_Swain"
},
{
"code": "",
"text": "Remove the write concern from the connection string and try again",
"username": "John_Sewell"
},
{
"code": "{\n explainVersion: '1',\n stages: [\n {\n '$cursor': [Object],\n nReturned: 30104,\n executionTimeMillisEstimate: 4925\n },\n {\n '$lookup': [Object],\n totalDocsExamined: 30103,\n totalKeysExamined: 30103,\n collectionScans: 0,\n indexesUsed: [Array],\n nReturned: 30103,\n executionTimeMillisEstimate: 8556\n },\n {\n '$lookup': [Object],\n totalDocsExamined: 30103,\n totalKeysExamined: 30103,\n collectionScans: 0,\n indexesUsed: [Array],\n nReturned: 30103,\n executionTimeMillisEstimate: 11726\n },\n {\n '$lookup': [Object],\n totalDocsExamined: 30103,\n totalKeysExamined: 30103,\n collectionScans: 0,\n indexesUsed: [Array],\n nReturned: 30103,\n executionTimeMillisEstimate: 18457\n },\n {\n '$lookup': [Object],\n totalDocsExamined: 30103,\n totalKeysExamined: 30103,\n collectionScans: 0,\n indexesUsed: [Array],\n nReturned: 30103,\n executionTimeMillisEstimate: 23009\n },\n {\n '$sort': [Object],\n totalDataSizeSortedBytesEstimate: 55157785,\n usedDisk: false,\n nReturned: 30103,\n executionTimeMillisEstimate: 23014\n },\n {\n '$project': [Object],\n nReturned: 30103,\n executionTimeMillisEstimate: 23154\n },\n {\n '$facet': [Object],\n nReturned: 1,\n executionTimeMillisEstimate: 23232\n }\n ],\n serverInfo: {\n host: '********',\n port: 27017,\n version: '5.0.20',\n gitVersion: '2cd626d8148120319d7dca5824e760fe220cb0de'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n command: {\n aggregate: 'qcinspections',\n pipeline: [\n [Object], [Object],\n [Object], [Object],\n [Object], [Object],\n [Object], [Object],\n [Object], [Object],\n [Object], [Object],\n [Object]\n ],\n cursor: {},\n '$db': 'agriReach-test'\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: new Timestamp({ t: 1694197055, i: 1 }),\n signature: {\n hash: new Binary(Buffer.from(\"e6895df3583ae070a1f20e275eb9a3f094981121\", \"hex\"), 0),\n keyId: new Long(\"7226035368871591938\")\n }\n },\n operationTime: new Timestamp({ t: 1694197055, i: 1 })\n}\n",
"text": "Yes,Now getting ,",
"username": "Umasankar_Swain"
},
{
"code": "",
"text": "Some of those stages are taking a long time, I don’t have time to take apart and re-work the logic of the pipeline but you may want to think about reworking relationships and storage as opposed to storing things in multiple collections.\nThings to think about:If this is a well used query then re-shape the data to match it, if it’s not then don’t and take the performance hit.The original subject said it was slow with 30,000 records, but it’s not really, it’s slow to process 30,000 records and ALL the data that you’re linking in (and unwinding, sorting etc).We have queries that are massive, basically end of month rec type reports and we either dealt with time for them to run, after all they ran once a month, or for more regular queries we kept the data up-to-date on writes so it was in a fast format to query. There is no point re-calculating data for 30,000 records every time, when only 3 of them change every few days etc.Sorry not been more helpful, but I think that’s your next approach, take a step back and work out what you really want this query to do and how you can optimise the process as opposed to query.Maybe someone else has an idea of what to change easily that I’ve missed.",
"username": "John_Sewell"
},
{
"code": "",
"text": "okay,Thank you @John_Sewell for your time…",
"username": "Umasankar_Swain"
}
] | I am getting performance issue with this aggregation pipeline?This almost takes 20 sec to give response of 30000 records | 2023-09-08T09:16:15.138Z | I am getting performance issue with this aggregation pipeline?This almost takes 20 sec to give response of 30000 records | 571 |
null | [] | [
{
"code": "",
"text": "Is Atlas’s cloud backup data compressed?",
"username": "Kevin_An"
},
{
"code": "",
"text": "Hi @Kevin_An and welcome to MongoDB community forums!!The backup on MongoDB uses snapshots which are stored on the same regions as your cluster or could also be distributed between the regions automatically.\nPlease refer to the documentation that mentions Atlas uses the native snapshot capabilities of your cloud provider to support full-copy snapshots and localised snapshot storage.\nAtlas supports Cloud Backups on:Since the backups are managed by the underlying cloud providers, the snapshot management is also taken care by the deployed providers.\nIn saying so, the data on the MongoDB deployment, on Atlas as well on server are compressed by default using the Wired Tiger which uses block compression with the snappy compression library for all collections and prefix compression for all indexes.If your questions are still unanswered, could you please help me understand on what specific information you are looking for regarding the snapshots compression.Warm Regards\nAasawari",
"username": "Aasawari"
}
] | Is Atlas's cloud backup data compressed? | 2023-09-02T16:15:13.031Z | Is Atlas’s cloud backup data compressed? | 284 |