Dataset columns
image_url: string, 113-131 chars
tags: sequence
discussion: list
title: string, 8-254 chars
created_at: string, 24 chars
fancy_title: string, 8-396 chars
views: int64, 73-422k
null
[ "compass", "mongodb-shell" ]
[ { "code": "db.aa.countDocuments({\"borough\":\"Bronx\"})\n", "text": "Hello\nI dont understand why when I run this query in mogosh, I have the count result which is displayed and why when I run it in DB Compass nothing happen\nWhat is wrong please?", "username": "jip31" }, { "code": "MongoDB compassmongosh", "text": "Hello @jip31,why when I run it in DB Compass nothing happensCould you please share the screenshot of the MongoDB compass and mongosh?Also, share the log file snippet from the MongoDB Compass after executing this query to better understand the issue. You can find the log file from Help → Open Log FileBest,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi\nHere is the screenshotsLe service des pièces jointes, CJoint.com est un service de partage de fichier gratuit pour partager vos documents dans vos courriels, sur les forums ou dans vos petites annonces.Le service des pièces jointes, CJoint.com est un service de partage de fichier gratuit pour partager vos documents dans vos courriels, sur les forums ou dans vos petites annonces.Le service des pièces jointes, CJoint.com est un service de partage de fichier gratuit pour partager vos documents dans vos courriels, sur les forums ou dans vos petites annonces.Thanks", "username": "jip31" }, { "code": "{\"borough\":\"Bronx\"}\n", "text": "Hi @jip31,Thanks for sharing the screenshot.In MongoDB Compass, when you run a query you see the number of results without the need for a count.For example, input the following queryand you will see the total count of documents as shown in the screenshot.\nimage1392×892 169 KB\nI hope it answers your question.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thanks a lot\nSorry I work on Mongo DB since 2 days and I try to understand ", "username": "jip31" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Basic question on mongosh vs MongoDB Compass
2023-03-14T15:02:59.051Z
Basic question on mongosh vs MongoDB Compass
853
null
[ "swift", "migration" ]
[ { "code": "", "text": "Hello.We are having a lot of issues with a known bug in the Realm iOS SDK that has been around for years but hasn’t been fixed and it doesn’t appear Realm has any plans to address this issue: Keeping file locks for database in shared app group after write violates Apple's recommendation and causes the system kill the app with 0xdead10cc · Issue #8017 · realm/realm-swift · GitHubWe have millions of users, and we’ve been storing the realm db instead an app group container. Realm locks a file, the app is suspended, and the watchdog terminates the app.This only seems to happen if the db is stored in the app group container; we no longer have need for the db to be in a container, so we want to migrate the location to the app itself.What do I need to know in order to do this without data loss or breaking anything?", "username": "Vincent_Frascello" }, { "code": "", "text": "That’s an interesting bug. In reading the response from Realm, it statesOn iOS (and watchOS) we do not hold a file lock the entire time the Realm file is open. We acquire one when opening the file, but release it before the initializer returns. Write transactions also hold a lock for the duration of the write, but release it once the write is complete.which would indicate they do not have a file lock after write. We have not experienced that either.Is it possible there’s something else in the app that’s doing a perpetual write, or potentially moving a function to say, a background thread, causing a persistent lock?Just curious.", "username": "Jay" }, { "code": "", "text": "Hello Jay,This has been a known problem as least as far back as 2017 - if you search the GitHub issues for Realm-swift for ‘deadlock’ you’ll see it’s never really been addressed.I do not believe the file locked but rather, writes are not properly managed from the Realm SDK so that they can potentially be locking the file during a suspension that happens while a write operation is occurring.Of course, this should be managed from within Realm and not an implementation detail for consumers to manage (although if there were documentation from Realm on specifically how to do so safely, that would be a great compromise).Now there are some workarounds and fixes mentioned on GitHub, but our team is planning to move the realm DB from app group container to app container.I am simply looking for guidance on how to do so without impacting users or losing data. Are there any FAQs, or discussions related to doing this?", "username": "Vincent_Frascello" }, { "code": "autoreleasepool {\n let realm = try! Realm(...)\n try! realm.write {\n ...\n }\n}\n// realm will be released along with file lock after this block\n.initialwriteupdateDog/updateNames", "text": "An issue like this would have far reaching affects and needs to be addressed. I am just searching for an actual bug here - one that can be duplicated so I know what not to do in our own code.Once that’s established, actually answering the question would be possible.I am simply looking for guidance on how to (Migrating database file location from App Group Container to App) without impacting users or losing dataSo I read through all of those “deadlock” links. The issue is that “deadlock” describes different things to different people, so many of the bug reports were not on topic… for this topic.What I am seeing over and over is this:@tkafka we do release the file lock on the Realm file. You need to make sure the Realm is not in use otherwise you app will continue to hold the lock. 
If you want to release the file lock after a write consider the following approach:andHi @alexeichhorn,\nThat’s expected. You should not run blocking code in .initial and that’s exactly what write does.\nYou may asynchronously dispatch calls for updateDog/updateNames .andThread 2 (a background thread) is holding the write lock and attempting to synchronously dispatch work to the main thread, but the main thread is already waiting for the write lock and so things deadlock.and then other threads that have ‘deadlock’’ issues caused by infinite loops or setting up objects that perform recursive initialization.We’ve tried multiple gyrations to duplicate the issue and even with a massive file 2Gb worth of data, we simply can’t make it happen so I don’t know what the answer would be. We’ve never lost any data with Realm, regardless of the file structure or location. Is this maybe something related to the OS?Do you have a minimal, reproducible example of how to make this bug appear?", "username": "Jay" } ]
Migrating database file location from App Group Container to App
2023-03-09T04:14:45.331Z
Migrating database file location from App Group Container to App
1,148
null
[ "replication", "containers" ]
[ { "code": "{\"log\":\"{\\\"t\\\":{\\\"$date\\\":\\\"2023-02-27T16:35:02.652+00:00\\\"},\\\"s\\\":\\\"W\\\", \\\"c\\\":\\\"QUERY\\\", \\\"id\\\":23798, \\\"ctx\\\":\\\"conn13376\\\",\\\"msg\\\":\\\"Plan executor error during find command\\\",\\\"attr\\\":{\\\"error\\\":{\\\"code\\\":5642403,\\\"codeName\\\":\\\"Location5642403\\\",\\\"errmsg\\\":\\\"Error writing to file /data/db/_tmp/extsort-sort-executor.480: errno:28 No space left on device\\\"},\\\"stats\\\":{\\\"stage\\\":\\\"SORT\\\",\\\"nReturned\\\":0,\\\"works\\\":162854,\\\"advanced\\\":0,\\\"needTime\\\":162853,\\\"needYield\\\":0,\\\"saveState\\\":174,\\\"restoreState\\\":174,\\\"failed\\\":true,\\\"isEOF\\\":0,\\\"sortPattern\\\":{\\\"-$natural\\\":1},\\\"memLimit\\\":104857600,\\\"type\\\":\\\"simple\\\",\\\"totalDataSizeSorted\\\":0,\\\"usedDisk\\\":false,\\\"spills\\\":0,\\\"inputStage\\\":{\\\"stage\\\":\\\"COLLSCAN\\\",\\\"nReturned\\\":162853,\\\"works\\\":162854,\\\"advanced\\\":162853,\\\"needTime\\\":1,\\\"needYield\\\":0,\\\"saveState\\\":174,\\\"restoreState\\\":174,\\\"isEOF\\\":0,\\\"direction\\\":\\\"forward\\\",\\\"docsExamined\\\":162853}},\\\"cmd\\\":{\\\"find\\\":\\\"oplog.rs\\\",\\\"filter\\\":{},\\\"sort\\\":{\\\"-$natural\\\":1},\\\"lsid\\\":{\\\"id\\\":{\\\"$uuid\\\":\\\"d77e409f-c760-4a20-9d06-2354573a53aa\\\"}},\\\"$clusterTime\\\":{\\\"clusterTime\\\":{\\\"$timestamp\\\":{\\\"t\\\":1677515699,\\\"i\\\":2}},\\\"signature\\\":{\\\"hash\\\":{\\\"$binary\\\":{\\\"base64\\\":\\\"K2Zxssj9aVxFx7AUshry/eug8Os=\\\",\\\"subType\\\":\\\"0\\\"}},\\\"keyId\\\":7144359882769039361}},\\\"$db\\\":\\\"local\\\",\\\"$readPreference\\\":{\\\"mode\\\":\\\"primaryPreferred\\\"}}}}\\r\\n\",\"stream\":\"stdout\",\"time\":\"2023-02-27T16:35:02.653055582Z\"}\n{\"log\":\"{\\\"t\\\":{\\\"$date\\\":\\\"2023-02-27T16:35:02.678+00:00\\\"},\\\"s\\\":\\\"I\\\", \\\"c\\\":\\\"COMMAND\\\", \\\"id\\\":51803, \\\"ctx\\\":\\\"conn13376\\\",\\\"msg\\\":\\\"Slow query\\\",\\\"attr\\\":{\\\"type\\\":\\\"command\\\",\\\"ns\\\":\\\"local.oplog.rs\\\",\\\"command\\\":{\\\"find\\\":\\\"oplog.rs\\\",\\\"filter\\\":{},\\\"sort\\\":{\\\"-$natural\\\":1},\\\"lsid\\\":{\\\"id\\\":{\\\"$uuid\\\":\\\"d77e409f-c760-4a20-9d06-2354573a53aa\\\"}},\\\"$clusterTime\\\":{\\\"clusterTime\\\":{\\\"$timestamp\\\":\n{\\\"t\\\":1677515699,\\\"i\\\":2}},\\\"signature\\\":{\\\"hash\\\":{\\\"$binary\\\":{\\\"base64\\\":\\\"K2Zxssj9aVxFx7AUshry/eug8Os=\\\",\\\"subType\\\":\\\"0\\\"}},\\\"keyId\\\":7144359882769039361}},\\\"$db\\\":\\\"local\\\",\\\"$readPreference\\\":{\\\"mode\\\":\\\"primaryPreferred\\\"}},\\\"planSummary\\\":\\\"COLLSCAN\\\",\\\"numYields\\\":174,\\\"queryHash\\\":\\\"35E1175D\\\",\\\"planCacheKey\\\":\\\"35E1175D\\\",\\\"queryFramework\\\":\\\"classic\\\",\\\"ok\\\":0,\\\"errMsg\\\":\\\"Executor error during find command :: caused by :: Error writing to file /data/db/_tmp/extsort-sort-executor.480: errno:28 No space left on 
device\\\",\\\"errName\\\":\\\"Location5642403\\\",\\\"errCode\\\":5642403,\\\"reslen\\\":362,\\\"locks\\\":{\\\"FeatureCompatibilityVersion\\\":{\\\"acquireCount\\\":{\\\"r\\\":175}},\\\"Global\\\":{\\\"acquireCount\\\":{\\\"r\\\":175}},\\\"Mutex\\\":{\\\"acquireCount\\\":{\\\"r\\\":1}}},\\\"readConcern\\\":{\\\"level\\\":\\\"local\\\",\\\"provenance\\\":\\\"implicitDefault\\\"},\\\"storage\\\":{\\\"data\\\":{\\\"bytesRead\\\":42052646,\\\"timeReadingMicros\\\":584398}},\\\"remote\\\":\\\"172.18.0.1:49438\\\",\\\"protocol\\\":\\\"op_msg\\\",\\\"durationMillis\\\":1931}}\\r\\n\",\"stream\":\"stdout\",\"time\":\"2023-02-27T16:35:02.678270963Z\"}\n\"cmd\\\":{\\\"find\\\":\\\"oplog.rs\\\",\\\"filter\\\":{},\\\"sort\\\":{\\\"-$natural\\\":1}", "text": "Hi,We have a replicaset with three nodes in the docker container hosted on the same host on a 32GB Filesystem that supports three mongodb data nodes.\nWe’ve got a problem after upgrading mongodb RS from 4.4.8 version to 5.0 and after 6.0.4 after two hours in Mongo 6.0.4 we see that the file system is filling up with temporary files caused by a Mongo request.\nOn Mongo node 1 we see in log after mongo has completely filled the filesystem this message:This message is caused by the Mongo query which executes the command :\"cmd\\\":{\\\"find\\\":\\\"oplog.rs\\\",\\\"filter\\\":{},\\\"sort\\\":{\\\"-$natural\\\":1}Also the file system is not immediately filled , because mongodb performs at some time the deletion of these temporary files but what is the rule who define that is there configurable?I don’t understand why mongodb executes this command in a loop and what is the reason for executing this command.\nIs it possible to restrict the size of temporary files written to the drive by Mongo?We have restarted the migration process from 4.4.8 to 6.0.4 and we do not have the same issue. However, when we manually execute this command in the replica set, we have the same problem and the file system is filling up.", "username": "R_HU" }, { "code": ",\\\"errmsg\\\":\\\"Error writing to file /data/db/_tmp/extsort-sort-executor.480: errno:28 No space left on device\\\"}rs.conf()rs.status()", "text": "Hi @R_HU and welcome to the MongoDB Community forum!!,\\\"errmsg\\\":\\\"Error writing to file /data/db/_tmp/extsort-sort-executor.480: errno:28 No space left on device\\\"}From the above error message shared in the error logs, it seems while upgrading, the disk ran out of space.We have a replicaset with three nodes in the docker container hosted on the same host on a 32GBAs mentioned in the docker documentation quotes:It is generally recommended that you separate areas of concern by using one service per containeras it many result into resource contention.Mongo 6.0.4 we see that the file system is filling up with temporary files caused by a Mongo request.Is this specific to to the version mentioned or this is seen in other versions too.We have restarted the migration process from 4.4.8 to 6.0.4 and we do not have the same issue. However, when we manually execute this command in the replica set, we have the same problem and the file system is filling up.After the successful upgrade of the deployment, do you see the similar issues. 
And, could you help me with the upgrade process and the error message received.\nAlso, please help us understand why you are executing the command.Finally, please share the output for rs.conf() and rs.status()Let us know if you have any queries.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "{\"log\":\"{\\\"t\\\":{\\\"$date\\\":\\\"2023-03-04T16:22:54.348+00:00\\\"},\\\"s\\\":\\\"I\\\", \\\"c\\\":\\\"COMMAND\\\", \\\"id\\\":51803, \\\"ctx\\\":\\\"conn125400\\\",\\\"msg\\\":\\\"Slow query\\\",\\\"attr\\\":{\\\"type\\\":\\\"command\\\",\\\"ns\\\":\\\"local.oplog.rs\\\",\\\"command\\\":{\\\"find\\\":\\\"oplog.rs\\\",\\\"filter\\\":{},\\\"sort\\\":{\\\"-$natural\\\":1},\\\"lsid\\\":{\\\"id\\\":{\\\"$uuid\\\":\\\"9a02cb13-57ee-4793-a27d-ae51ef6774af\\\"}},\\\"$clusterTime\\\":{\\\"clusterTime\\\":{\\\"$timestamp\\\":{\\\"t\\\":1677946973,\\\"i\\\":2}},\\\"signature\\\":{\\\"hash\\\":{\\\"$binary\\\":{\\\"base64\\\":\\\"EBMoYzvf/BJE1tf1ghz6ZkQ+E2Y=\\\",\\\"subType\\\":\\\"0\\\"}},\\\"keyId\\\":7206450133318238212}},\\\"$db\\\":\\\"local\\\",\\\"$readPreference\\\":{\\\"mode\\\":\\\"primaryPreferred\\\"}},\\\"planSummary\\\":\\\"COLLSCAN\\\",\\\"numYields\\\":71,\\\"queryHash\\\":\\\"A11B6D23\\\",\\\"planCacheKey\\\":\\\"A11B6D23\\\",\\\"ok\\\":0,\\\"errMsg\\\":\\\"Executor error during find command :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.\\\",\\\"errName\\\":\\\"QueryExceededMemoryLimitNoDiskUseAllowed\\\",\\\"errCode\\\":292,\\\"reslen\\\":378,\\\"locks\\\":{\\\"ReplicationStateTransition\\\":{\\\"acquireCount\\\":{\\\"w\\\":72}},\\\"Global\\\":{\\\"acquireCount\\\":{\\\"r\\\":72}},\\\"Database\\\":{\\\"acquireCount\\\":{\\\"r\\\":72}},\\\"Mutex\\\":{\\\"acquireCount\\\":{\\\"r\\\":1}},\\\"oplog\\\":{\\\"acquireCount\\\":{\\\"r\\\":72}}},\\\"storage\\\":{},\\\"protocol\\\":\\\"op_msg\\\",\\\"durationMillis\\\":105}}\\r\\n\",\"stream\":\"stdout\",\"time\":\"2023-03-04T16:22:54.349177116Z\"}\nrs.status()\n{\n\t\"set\" : \"ds-rs\",\n\t\"date\" : ISODate(\"2023-03-04T20:28:31.952Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(3),\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 2,\n\t\"writeMajorityCount\" : 2,\n\t\"votingMembersCount\" : 3,\n\t\"writableVotingMembersCount\" : 3,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1677961707, 41),\n\t\t\t\"t\" : NumberLong(3)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"2023-03-04T20:28:27.492Z\"),\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1677961707, 41),\n\t\t\t\"t\" : NumberLong(3)\n\t\t},\n\t\t\"readConcernMajorityWallTime\" : ISODate(\"2023-03-04T20:28:27.492Z\"),\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1677961707, 41),\n\t\t\t\"t\" : NumberLong(3)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1677961707, 41),\n\t\t\t\"t\" : NumberLong(3)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2023-03-04T20:28:27.492Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2023-03-04T20:28:27.492Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1677961703, 1),\n\t\"electionCandidateMetrics\" : {\n\t\t\"lastElectionReason\" : \"electionTimeout\",\n\t\t\"lastElectionDate\" : ISODate(\"2023-03-04T20:23:43.468Z\"),\n\t\t\"electionTerm\" : NumberLong(3),\n\t\t\"lastCommittedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\"t\" : NumberLong(-1)\n\t\t},\n\t\t\"lastSeenOpTimeAtElection\" : 
{\n\t\t\t\"ts\" : Timestamp(1677959280, 1),\n\t\t\t\"t\" : NumberLong(2)\n\t\t},\n\t\t\"numVotesNeeded\" : 2,\n\t\t\"priorityAtElection\" : 1,\n\t\t\"electionTimeoutMillis\" : NumberLong(10000),\n\t\t\"numCatchUpOps\" : NumberLong(0),\n\t\t\"newTermStartDate\" : ISODate(\"2023-03-04T20:23:43.564Z\"),\n\t\t\"wMajorityWriteAvailabilityDate\" : ISODate(\"2023-03-04T20:23:44.805Z\")\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"11.0.50.9:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 290,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1677961707, 41),\n\t\t\t\t\"t\" : NumberLong(3)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1677961707, 41),\n\t\t\t\t\"t\" : NumberLong(3)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-03-04T20:28:27Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-03-04T20:28:27Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-03-04T20:28:31.779Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-03-04T20:28:31.022Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"11.0.50.11:27017\",\n\t\t\t\"syncSourceId\" : 3,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 1,\n\t\t\t\"configTerm\" : 3\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"name\" : \"11.0.50.10:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 298,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1677961707, 41),\n\t\t\t\t\"t\" : NumberLong(3)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1677961707, 41),\n\t\t\t\t\"t\" : NumberLong(3)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-03-04T20:28:27Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-03-04T20:28:27Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-03-04T20:28:31.781Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-03-04T20:28:30.026Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"11.0.50.9:27017\",\n\t\t\t\"syncSourceId\" : 1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 1,\n\t\t\t\"configTerm\" : 3\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 3,\n\t\t\t\"name\" : \"11.0.50.11:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 311,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1677961707, 41),\n\t\t\t\t\"t\" : NumberLong(3)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-03-04T20:28:27Z\"),\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1677961423, 1),\n\t\t\t\"electionDate\" : ISODate(\"2023-03-04T20:23:43Z\"),\n\t\t\t\"configVersion\" : 1,\n\t\t\t\"configTerm\" : 3,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1677961707, 41),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"p7TYQ+teX1adsMmdcQUirXqfXIE=\"),\n\t\t\t\"keyId\" : NumberLong(\"7206450133318238212\")\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1677961707, 41)\n}\nrs.status()\n{\n\t\"set\" : \"ds-rs\",\n\t\"date\" : ISODate(\"2023-03-04T20:35:46.296Z\"),\n\t\"myState\" : 1,\n\t\"term\" : NumberLong(4),\n\t\"syncSourceHost\" : \"\",\n\t\"syncSourceId\" : -1,\n\t\"heartbeatIntervalMillis\" : NumberLong(2000),\n\t\"majorityVoteCount\" : 2,\n\t\"writeMajorityCount\" : 2,\n\t\"votingMembersCount\" : 
3,\n\t\"writableVotingMembersCount\" : 3,\n\t\"optimes\" : {\n\t\t\"lastCommittedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1677962136, 1),\n\t\t\t\"t\" : NumberLong(4)\n\t\t},\n\t\t\"lastCommittedWallTime\" : ISODate(\"2023-03-04T20:35:36.996Z\"),\n\t\t\"readConcernMajorityOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1677962136, 1),\n\t\t\t\"t\" : NumberLong(4)\n\t\t},\n\t\t\"appliedOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1677962136, 1),\n\t\t\t\"t\" : NumberLong(4)\n\t\t},\n\t\t\"durableOpTime\" : {\n\t\t\t\"ts\" : Timestamp(1677962136, 1),\n\t\t\t\"t\" : NumberLong(4)\n\t\t},\n\t\t\"lastAppliedWallTime\" : ISODate(\"2023-03-04T20:35:36.996Z\"),\n\t\t\"lastDurableWallTime\" : ISODate(\"2023-03-04T20:35:36.996Z\")\n\t},\n\t\"lastStableRecoveryTimestamp\" : Timestamp(1677962126, 1),\n\t\"electionCandidateMetrics\" : {\n\t\t\"lastElectionReason\" : \"electionTimeout\",\n\t\t\"lastElectionDate\" : ISODate(\"2023-03-04T20:34:56.969Z\"),\n\t\t\"electionTerm\" : NumberLong(4),\n\t\t\"lastCommittedOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(0, 0),\n\t\t\t\"t\" : NumberLong(-1)\n\t\t},\n\t\t\"lastSeenOpTimeAtElection\" : {\n\t\t\t\"ts\" : Timestamp(1677961933, 1),\n\t\t\t\"t\" : NumberLong(3)\n\t\t},\n\t\t\"numVotesNeeded\" : 2,\n\t\t\"priorityAtElection\" : 1,\n\t\t\"electionTimeoutMillis\" : NumberLong(10000),\n\t\t\"numCatchUpOps\" : NumberLong(0),\n\t\t\"newTermStartDate\" : ISODate(\"2023-03-04T20:34:56.991Z\"),\n\t\t\"wMajorityWriteAvailabilityDate\" : ISODate(\"2023-03-04T20:34:57.032Z\")\n\t},\n\t\"members\" : [\n\t\t{\n\t\t\t\"_id\" : 1,\n\t\t\t\"name\" : \"11.0.50.9:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 47,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1677962136, 1),\n\t\t\t\t\"t\" : NumberLong(4)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1677962136, 1),\n\t\t\t\t\"t\" : NumberLong(4)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-03-04T20:35:36Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-03-04T20:35:36Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-03-04T20:35:36.996Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-03-04T20:35:36.996Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-03-04T20:35:45.018Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-03-04T20:35:44.896Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"11.0.50.11:27017\",\n\t\t\t\"syncSourceId\" : 3,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 1,\n\t\t\t\"configTerm\" : 4\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 2,\n\t\t\t\"name\" : \"11.0.50.10:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 2,\n\t\t\t\"stateStr\" : \"SECONDARY\",\n\t\t\t\"uptime\" : 58,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1677962136, 1),\n\t\t\t\t\"t\" : NumberLong(4)\n\t\t\t},\n\t\t\t\"optimeDurable\" : {\n\t\t\t\t\"ts\" : Timestamp(1677962136, 1),\n\t\t\t\t\"t\" : NumberLong(4)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-03-04T20:35:36Z\"),\n\t\t\t\"optimeDurableDate\" : ISODate(\"2023-03-04T20:35:36Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-03-04T20:35:36.996Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-03-04T20:35:36.996Z\"),\n\t\t\t\"lastHeartbeat\" : ISODate(\"2023-03-04T20:35:45.029Z\"),\n\t\t\t\"lastHeartbeatRecv\" : ISODate(\"2023-03-04T20:35:45.559Z\"),\n\t\t\t\"pingMs\" : NumberLong(0),\n\t\t\t\"lastHeartbeatMessage\" : \"\",\n\t\t\t\"syncSourceHost\" : \"11.0.50.11:27017\",\n\t\t\t\"syncSourceId\" : 
3,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"configVersion\" : 1,\n\t\t\t\"configTerm\" : 4\n\t\t},\n\t\t{\n\t\t\t\"_id\" : 3,\n\t\t\t\"name\" : \"11.0.50.11:27017\",\n\t\t\t\"health\" : 1,\n\t\t\t\"state\" : 1,\n\t\t\t\"stateStr\" : \"PRIMARY\",\n\t\t\t\"uptime\" : 76,\n\t\t\t\"optime\" : {\n\t\t\t\t\"ts\" : Timestamp(1677962136, 1),\n\t\t\t\t\"t\" : NumberLong(4)\n\t\t\t},\n\t\t\t\"optimeDate\" : ISODate(\"2023-03-04T20:35:36Z\"),\n\t\t\t\"lastAppliedWallTime\" : ISODate(\"2023-03-04T20:35:36.996Z\"),\n\t\t\t\"lastDurableWallTime\" : ISODate(\"2023-03-04T20:35:36.996Z\"),\n\t\t\t\"syncSourceHost\" : \"\",\n\t\t\t\"syncSourceId\" : -1,\n\t\t\t\"infoMessage\" : \"\",\n\t\t\t\"electionTime\" : Timestamp(1677962096, 1),\n\t\t\t\"electionDate\" : ISODate(\"2023-03-04T20:34:56Z\"),\n\t\t\t\"configVersion\" : 1,\n\t\t\t\"configTerm\" : 4,\n\t\t\t\"self\" : true,\n\t\t\t\"lastHeartbeatMessage\" : \"\"\n\t\t}\n\t],\n\t\"ok\" : 1,\n\t\"$clusterTime\" : {\n\t\t\"clusterTime\" : Timestamp(1677962136, 1),\n\t\t\"signature\" : {\n\t\t\t\"hash\" : BinData(0,\"4zdWQKCg1WIw8twSfdxRx49MlsU=\"),\n\t\t\t\"keyId\" : NumberLong(\"7206450133318238212\")\n\t\t}\n\t},\n\t\"operationTime\" : Timestamp(1677962136, 1)\n}\nrs.status()\n{\n set: 'ds-rs',\n date: ISODate(\"2023-03-04T20:41:13.192Z\"),\n myState: 1,\n term: Long(\"5\"),\n syncSourceHost: '',\n syncSourceId: -1,\n heartbeatIntervalMillis: Long(\"2000\"),\n majorityVoteCount: 2,\n writeMajorityCount: 2,\n votingMembersCount: 3,\n writableVotingMembersCount: 3,\n optimes: {\n lastCommittedOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long(\"5\") },\n lastCommittedWallTime: ISODate(\"2023-03-04T20:41:11.565Z\"),\n readConcernMajorityOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long(\"5\") },\n appliedOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long(\"5\") },\n durableOpTime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long(\"5\") },\n lastAppliedWallTime: ISODate(\"2023-03-04T20:41:11.565Z\"),\n lastDurableWallTime: ISODate(\"2023-03-04T20:41:11.565Z\")\n },\n lastStableRecoveryTimestamp: Timestamp({ t: 1677962337, i: 1 }),\n electionCandidateMetrics: {\n lastElectionReason: 'electionTimeout',\n lastElectionDate: ISODate(\"2023-03-04T20:40:50.481Z\"),\n electionTerm: Long(\"5\"),\n lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1677962347, i: 1 }), t: Long(\"4\") },\n numVotesNeeded: 2,\n priorityAtElection: 1,\n electionTimeoutMillis: Long(\"10000\"),\n numCatchUpOps: Long(\"0\"),\n newTermStartDate: ISODate(\"2023-03-04T20:40:51.551Z\"),\n wMajorityWriteAvailabilityDate: ISODate(\"2023-03-04T20:40:56.028Z\")\n },\n members: [\n {\n _id: 1,\n name: '11.0.50.9:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 11,\n optime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long(\"5\") },\n optimeDurable: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long(\"5\") },\n optimeDate: ISODate(\"2023-03-04T20:41:11.000Z\"),\n optimeDurableDate: ISODate(\"2023-03-04T20:41:11.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-03-04T20:41:11.565Z\"),\n lastDurableWallTime: ISODate(\"2023-03-04T20:41:11.565Z\"),\n lastHeartbeat: ISODate(\"2023-03-04T20:41:11.593Z\"),\n lastHeartbeatRecv: ISODate(\"2023-03-04T20:41:11.896Z\"),\n pingMs: Long(\"3\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '11.0.50.11:27017',\n syncSourceId: 3,\n infoMessage: '',\n configVersion: 2,\n configTerm: 5\n },\n {\n _id: 2,\n name: '11.0.50.10:27017',\n health: 1,\n state: 
2,\n stateStr: 'SECONDARY',\n uptime: 22,\n optime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long(\"5\") },\n optimeDurable: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long(\"5\") },\n optimeDate: ISODate(\"2023-03-04T20:41:11.000Z\"),\n optimeDurableDate: ISODate(\"2023-03-04T20:41:11.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-03-04T20:41:11.565Z\"),\n lastDurableWallTime: ISODate(\"2023-03-04T20:41:11.565Z\"),\n lastHeartbeat: ISODate(\"2023-03-04T20:41:11.574Z\"),\n lastHeartbeatRecv: ISODate(\"2023-03-04T20:41:11.586Z\"),\n pingMs: Long(\"1\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '11.0.50.11:27017',\n syncSourceId: 3,\n infoMessage: '',\n configVersion: 2,\n configTerm: 5\n },\n {\n _id: 3,\n name: '11.0.50.11:27017',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 38,\n optime: { ts: Timestamp({ t: 1677962471, i: 1 }), t: Long(\"5\") },\n optimeDate: ISODate(\"2023-03-04T20:41:11.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-03-04T20:41:11.565Z\"),\n lastDurableWallTime: ISODate(\"2023-03-04T20:41:11.565Z\"),\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1677962450, i: 1 }),\n electionDate: ISODate(\"2023-03-04T20:40:50.000Z\"),\n configVersion: 2,\n configTerm: 5,\n self: true,\n lastHeartbeatMessage: ''\n }\n ],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1677962471, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"a93b92c3b1b03f506d12a906e8b1b2aa853d7ccc\", \"hex\"), 0),\n keyId: Long(\"7206450133318238212\")\n }\n },\n operationTime: Timestamp({ t: 1677962471, i: 1 })\n}\nrs.status()\n{\n set: 'ds-rs',\n date: ISODate(\"2023-03-04T20:42:30.793Z\"),\n myState: 1,\n term: Long(\"5\"),\n syncSourceHost: '',\n syncSourceId: -1,\n heartbeatIntervalMillis: Long(\"2000\"),\n majorityVoteCount: 2,\n writeMajorityCount: 2,\n votingMembersCount: 3,\n writableVotingMembersCount: 3,\n optimes: {\n lastCommittedOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long(\"5\") },\n lastCommittedWallTime: ISODate(\"2023-03-04T20:42:18.702Z\"),\n readConcernMajorityOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long(\"5\") },\n appliedOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long(\"5\") },\n durableOpTime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long(\"5\") },\n lastAppliedWallTime: ISODate(\"2023-03-04T20:42:18.702Z\"),\n lastDurableWallTime: ISODate(\"2023-03-04T20:42:18.702Z\")\n },\n lastStableRecoveryTimestamp: Timestamp({ t: 1677962491, i: 1 }),\n electionCandidateMetrics: {\n lastElectionReason: 'electionTimeout',\n lastElectionDate: ISODate(\"2023-03-04T20:40:50.481Z\"),\n electionTerm: Long(\"5\"),\n lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 0, i: 0 }), t: Long(\"-1\") },\n lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1677962347, i: 1 }), t: Long(\"4\") },\n numVotesNeeded: 2,\n priorityAtElection: 1,\n electionTimeoutMillis: Long(\"10000\"),\n numCatchUpOps: Long(\"0\"),\n newTermStartDate: ISODate(\"2023-03-04T20:40:51.551Z\"),\n wMajorityWriteAvailabilityDate: ISODate(\"2023-03-04T20:40:56.028Z\")\n },\n members: [\n {\n _id: 1,\n name: '11.0.50.9:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 89,\n optime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long(\"5\") },\n optimeDurable: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long(\"5\") },\n optimeDate: ISODate(\"2023-03-04T20:42:18.000Z\"),\n optimeDurableDate: ISODate(\"2023-03-04T20:42:18.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-03-04T20:42:18.702Z\"),\n lastDurableWallTime: 
ISODate(\"2023-03-04T20:42:18.702Z\"),\n lastHeartbeat: ISODate(\"2023-03-04T20:42:29.673Z\"),\n lastHeartbeatRecv: ISODate(\"2023-03-04T20:42:29.985Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '11.0.50.11:27017',\n syncSourceId: 3,\n infoMessage: '',\n configVersion: 2,\n configTerm: 5\n },\n {\n _id: 2,\n name: '11.0.50.10:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 100,\n optime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long(\"5\") },\n optimeDurable: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long(\"5\") },\n optimeDate: ISODate(\"2023-03-04T20:42:18.000Z\"),\n optimeDurableDate: ISODate(\"2023-03-04T20:42:18.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-03-04T20:42:18.702Z\"),\n lastDurableWallTime: ISODate(\"2023-03-04T20:42:18.702Z\"),\n lastHeartbeat: ISODate(\"2023-03-04T20:42:29.674Z\"),\n lastHeartbeatRecv: ISODate(\"2023-03-04T20:42:29.675Z\"),\n pingMs: Long(\"1\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '11.0.50.11:27017',\n syncSourceId: 3,\n infoMessage: '',\n configVersion: 2,\n configTerm: 5\n },\n {\n _id: 3,\n name: '11.0.50.11:27017',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 115,\n optime: { ts: Timestamp({ t: 1677962538, i: 4 }), t: Long(\"5\") },\n optimeDate: ISODate(\"2023-03-04T20:42:18.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-03-04T20:42:18.702Z\"),\n lastDurableWallTime: ISODate(\"2023-03-04T20:42:18.702Z\"),\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1677962450, i: 1 }),\n electionDate: ISODate(\"2023-03-04T20:40:50.000Z\"),\n configVersion: 2,\n configTerm: 5,\n self: true,\n lastHeartbeatMessage: ''\n }\n ],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1677962538, i: 4 }),\n signature: {\n hash: Binary(Buffer.from(\"13338366b839f370e3acf785eb93e20276953006\", \"hex\"), 0),\n keyId: Long(\"7206450133318238212\")\n }\n },\n operationTime: Timestamp({ t: 1677962538, i: 4 })\n}\nrs.printReplicationInfo()\nconfigured oplog size: 6481.01123046875MB\nlog length start to end: 77846secs (21.62hrs)\noplog first event time: Fri Mar 03 2023 22:48:47 GMT+0000 (UTC)\noplog last event time: Sat Mar 04 2023 20:26:13 GMT+0000 (UTC)\nnow: Sat Mar 04 2023 20:26:17 GMT+0000 (UTC)\nrs.printReplicationInfo()\nconfigured oplog size: 6481.01123046875MB\nlog length start to end: 78370secs (21.77hrs)\noplog first event time: Fri Mar 03 2023 22:48:47 GMT+0000 (UTC)\noplog last event time: Sat Mar 04 2023 20:34:57 GMT+0000 (UTC)\nnow: Sat Mar 04 2023 20:35:14 GMT+0000 (UTC)\nrs.printReplicationInfo()\nactual oplog size\n'6481.01123046875 MB'\n---\nconfigured oplog size\n'6481.01123046875 MB'\n---\nlog length start to end\n'78843.99999809265 secs (21.9 hrs)'\n---\noplog first event time\n'Fri Mar 03 2023 22:48:47 GMT+0000 (Coordinated Universal Time)'\n---\noplog last event time\n'Sat Mar 04 2023 20:42:51 GMT+0000 (Coordinated Universal Time)'\n---\nnow\n'Sat Mar 04 2023 20:43:00 GMT+0000 (Coordinated Universal Time)'\nrs.conf()\n{\n \"_id\" : \"ds-rs\",\n \"version\" : 1,\n \"term\" : 1,\n \"protocolVersion\" : NumberLong(1),\n \"writeConcernMajorityJournalDefault\" : true,\n \"members\" : [\n {\n \"_id\" : 1,\n \"host\" : \"11.0.50.9:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 2,\n \"host\" : \"11.0.50.10:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : 
false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 3,\n \"host\" : \"11.0.50.11:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"6403ba5704b0564c053d341b\")\n }\n}\n", "text": "hello,I will explain again from 0 because we have encountered the problem on another cluster:The cluster is composed of 3 Machines with 128Gb of disk, 7Gb of memory and 2Vcpu.\nOur mongo services are containerised on each machine with a limit of 5Gb of memory and 1,5 CPU, mongodb wiredTiger cacheSizeGB is define to 2.5 for each node.The upgrade process followed is that of the documentation provided here:After investigation in the logs of the different versions I see that this error is present in version 4.4.8 and 5.0 :this error refers to the command : \"command\":{\"find\":\"oplog.rs\",\"filter\":{},\"sort\":{\"-$natural\":1}\nIt fails because the result is too large to be stored in memory and writing to disk is disabled by default in version 4.4.8 and 5.0 which is different from version 6.0\nIn my case it’s exactly this command which causes the filling of my disk in version 6.0.\nI do not understand which event causes the execution of this command.I suppose this event is due to out of sync of the replication or election process which causes the OPLOG reading for data recovering on every node, but this process is it a normal process of mongodb?Must I set up a specific configuration for this process to run normally without having to serialize on disk?\nIs it possible to restrict the size of temporary files written to the drive by Mongo?\nIs my mongodb cluster wrongly sized ?Here is the output of the rs.status() command for each versionVersion 4.4.8Version 5.0Version 6.0.4 Before Before setting FeatureCompatibilityVersion \"6.0Version 6.0.4 After setting FeatureCompatibilityVersion \"6.0Here is the output of the rs.printReplicationInfo() command for each versionVersion 4.4.8Version 5.0Version 6.0.4 After setting FeatureCompatibilityVersion \"6.0Here is the output of the rs.conf() command for version 4.4.8", "username": "R_HU" }, { "code": "// Licensed to Elasticsearch B.V. under one or more contributor\n// license agreements. See the NOTICE file distributed with\n// this work for additional information regarding copyright\n// ownership. Elasticsearch B.V. licenses this file to you under\n// the Apache License, Version 2.0 (the \"License\"); you may\n// not use this file except in compliance with the License.\n// You may obtain a copy of the License at\n//\n// http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied. 
See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npackage replstatus\n\nimport (\n", "text": "Problem solved, these requests were due to my monitoring solution metricbeat and its module mongodb, more precisely the module replstatus.Problem Statement:\n\nCustomer is setting up beats with the mongodb module and i…n replstatus module is showing the below error in logs.\n\nERROR:\n`{\"log.level\":\"error\",\"@timestamp\":\"2022-11-03T12:05:26.360Z\",\"log.origin\":{\"file.name\":\"module/wrapper.go\",\"file.line\":256},\"message\":\"Error fetching data for metricset mongodb.replstatus: error getting replication info: could not get last operation timestamp in op log: could not get cursor on collection 'oplog.rs': (QueryExceededMemoryLimitNoDiskUseAllowed) Executor error during find command :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.\",\"service.name\":\"metricbeat\",\"ecs.version\":\"1.6.0\"}`", "username": "R_HU" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
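The failing query above sorts the entire oplog on a literal field named "-$natural", which appears to force a collection scan plus a blocking sort that spills temporary extsort files to disk. Below is a minimal mongosh sketch of the usual way to read the newest oplog entry without that sort; it illustrates the query shape only and is not the metricbeat fix itself.

```javascript
// Read the most recent oplog entry by walking the capped collection in reverse
// natural (insertion) order, limited to one document; no blocking SORT stage.
db.getSiblingDB("local").getCollection("oplog.rs")
  .find()
  .sort({ $natural: -1 })
  .limit(1);

// The monitoring query instead sent { "-$natural": 1 } as the sort document, so the
// server treated it as an ordinary (non-existent) field and had to scan and sort the
// whole oplog, writing temporary files under /data/db/_tmp.
```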
Issue when upgrading MongoDB 4.4.8 to 6.0.4
2023-02-28T14:27:10.228Z
Issue when upgrading MongoDB 4.4.8 to 6.0.4
1,513
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 6.0.5 is out and is ready for production deployment. This release contains only fixes since 6.0.4, and is a recommended upgrade for all 6.0 users.\nFixed in this release:", "username": "James_Hippler" }, { "code": "", "text": "Also, it’s important to read the notes about remove “fork: true” from your /etc/mongod.conf in certain scenarios. Particularly if you install from repos.", "username": "AmitG" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 6.0.5 is released
2023-03-14T15:14:22.169Z
MongoDB 6.0.5 is released
1,973
null
[ "crud" ]
[ { "code": "updateOne(\n {\n _id: propertyId,\n 'spaces.items.id': itemId\n }, \n {\n $set: {\n 'spaces.$[].items.$[item].<property>: value'\n }\n }\n)\n", "text": "I’m having a discussion with one of my developers.Here’s a useful snippet of relations:\nCollection: ‘Property’\nProperty hasMany Spaces\nProperty hasMany Items\nSpaces hasMany ItemsNote that an Item subdocument may exist in a Property, and it may exist within a Space in a Property.When this query is made, it is known that the item exists in a space, and the logic to determine that also easily provides the unique identifier for the Space.My developer has written:Can I know for certain that this query will look for one item, and after it finds it, stop looking? It seems to me that the query should go:Find the property\nLook in each space for an item with the given idBut I guess my developer thinks it would go:Find the property\ni = 0\nLook in space[i].\nLook in items\nIf item with itemId, update then return\nElse i++Thanks for any help.Edit to add:\nI notice that $ is actually called the “all positional operator” and:indicates that the update operator should modify all elements in the specified array field.\nhttps://www.mongodb.com/docs/manual/reference/operator/update/positional-all/#:~:text=Definition,-%24[]&text=The%20all%20positional%20operator%20%24[],in%20the%20specified%20array%20field.&text=For%20an%20example%2C%20see%20Update%20All%20Elements%20in%20an%20Array.So in a battle of wills with my junior developer, would it be fair to say that AT BEST, his use of $ is a misuse since the intended usage of $ is to modify all elements in the specified array field?", "username": "Michael_Jay2" }, { "code": "$[]arrayFiltersarrayFilters", "text": "So in a battle of willsWhy battle?The documentation is clear about $[]The all positional operator $[] indicates that the update operator should modify all elements in the specified array field.Ditto for $[item]Use in conjunction with the arrayFilters option to update all elements that match the arrayFilters conditions in the document or documents that match the query conditions.All examples in both documentation pages clearly show that all matching elements are updated.But why battle?Simply test the code. If you do you might get an error because your are using $[item] and I don’t see a corresponding arrayFilters option.", "username": "steevej" }, { "code": "$$[]$[name]", "text": "Following up on my previous answer.Thanks to @Aasawari’s reply in another thread, I realized that the confusion might come from the fact there is another array update positional operator to only update the first element that match the query.$ - to update the first matching element\n$[] - to update all the matching element\n$[name] - to update all the matching element based on arrayFilters.", "username": "steevej" } ]
What behavior would this Mongo query have - double nested documents
2023-03-11T18:48:46.871Z
What behavior would this Mongo query have - double nested documents
830
null
[ "aggregation", "queries", "data-modeling" ]
[ { "code": "", "text": "I have a test database, where 5:100 cardinality exist between Parent:Child entity.\nI have used Parent document embedding in child document. A child document may associated with more than one parent document.To search non-duplicated set of child documents by parent’s properties, I have used $group as mentioned below in query:\ncollection.aggregate([{“$match”:{“p_field”:{“$gte”:ranges1,“$lte”:ranges2}}},{“$unwind”:“$cs”},{“$group”:{“_id”:{“c_id”:“$cs.c_id”,“cfield”:“$cs.c_field”}}}])Database size is approx 1.2GB., and wiredTigerCache is configured 11G. Even after that, I am getting Error :Exceed Memory for $group operation.Can any one help me to resolve this error?", "username": "Monika_Shah" }, { "code": "", "text": "Is this correct? I don’t have a good test-case in front of me but I’m curious and the link doesn’t really seem to describe what you posted… Are you saying that DISTINCT would only select one row with “USA” if I had two columns with countries listed (say one for a Supplier and one for a Destination)?", "username": "gaya_gaya" }, { "code": "{ \"$group\" : {\n \"_id\" : \"$cs.c_id\" ,\n \"cfield\" : { \"$addToSet\" : \"$cs.c_field\" }\n} }\n", "text": "What are the specification of the machine running this? RAM, CPU, disk, dedicated to mongod or shared?Note that $group blocks until all incoming documents are processed. It is possible that you too many unique pair of c_id and c_field.One idea is to reduce the total size used by the documents of the $group result set.One way to do it could be like:You will of course to do another $unwind to get one top level document per unique pair like you have right now.", "username": "steevej" }, { "code": "{ \"$group\" : {\n \"_id\" : \"$cs.c_id\" ,\n \"cfield\" : { \"$first\" : \"$cs.c_field\" }\n} }\n", "text": "Thank you steevej for your prompt response.\nc_id is id of child document, and cfield is property of child document.\nSo, for every c_id, there is only one cfield associated.Therefore, I have also tried", "username": "Monika_Shah" }, { "code": "", "text": "don’t have a good test-case in front of me but I’m curious and the link doesn’t really seem to describe what you posted… Are you saying that DISTINCT would only select one row with “USA” if I haYes sir. It is analogous to supply relationship between supplier and product, where same supplier can supply multiple product , and same product can be supplied by multiple supplier.\nAnd query something like find products supplied by supplier having rating between 2 to 5 .\nSo, query would be\ncollection.aggregate( [\n{“$match”: {“rating”: { “$gte”: 2 ,“$lte”: 5 }}},\n{“$unwind”:“$products”},\n{“$group”:\n{ “_id” : { “$ products.sku”, “products.name” } }\n}])", "username": "Monika_Shah" }, { "code": "{ \"$group\" : {\n \"_id\" : \"$cs.c_id\" ,\n \"cfield\" : { \"$first\" : \"$cs.c_field\" }\n} }\n", "text": "Yes,\nI am already using $first to reduce size. For duplicated records, $first is sufficient.", "username": "Monika_Shah" } ]
How to query to get Distinct data from embedded documents
2023-03-13T13:26:25.379Z
How to query to get Distinct data from embedded documents
770
null
[ "graphql" ]
[ { "code": "", "text": "We have two insert triggers for a collection; one which sets an auto incrementing field using a function and a second trigger which fires an EventBridge event for processing within AWS. Individually these work fine, but the order in which they are executed and logged is not consistent. Ideally, the function trigger fires first and the eventbridge second (in our case the auto incremented field is useful on the AWS processing side). We can work around this with GraphQL, but seems like it could be a common use case. Event Ordering is enabled, but I dont think is relevant here. Is there some way to guarantee the order that the triggers are fired?", "username": "James_Kucharski" }, { "code": "", "text": "Hi @James_Kucharski welcome to the community!This is an interesting question. I have a couple of questions though:One idea that comes to mind is how about combining the two triggers into one when you need strict ordering of events?If you need further help on this, perhaps you could post the requirements and the current scenario, along with some examples so we can understand the use case better?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Yes, in our case the order is strictly important. The modifications to the document in the first trigger are essentially relied upon by the event bus integration.Because there is a setting to guarantee sequential execution of triggers on multiple documents, it seemed reasonable to be able to configure the order that multiple triggers are fired on a single document.Certainly there are work-arounds, like firing the event bus trigger on an update made by the first trigger, but this seems less elegant for a number of reasons.Thanks.", "username": "James_Kucharski" } ]
Order of execution for multiple triggers on same collection
2023-03-11T06:02:38.674Z
Order of execution for multiple triggers on same collection
1,237
https://www.mongodb.com/…_2_1024x571.jpeg
[ "singapore-mug" ]
[ { "code": "MongoDB Solutions ArchitectMongoDB Solutions Architect", "text": "\nbanner1345×750 123 KB\nWelcome back to the second community gathering. We wanted to do this in December last year but decided to postpone to March this year because of all the holidays and festivities that were going on. We are looking forward to seeing everyone again!The theme of this gathering is focused on MongoDB document model and query language. To make things fun, we will be playing an escape room game to test your MQL skills (do not worry if you are not familiar, the purpose is for us all to learn together). As usual, there will be food and prizes!There are limited seats so please sign in/sign up to RSVP using the button above. You will need an account so please join the community forums so that you can get updates on upcoming events.Event Type: In-Person\nLocation: Workshop@Lavender, Aperia Mall, 12 Kallang Ave, #01-56, Singapore 339511MongoDB Solutions Architect\nMongoDB Solutions Architect\n", "username": "DerrickChua" }, { "code": "", "text": "Hi everyone, here are some directions to help you with getting to our meetup place in case you are not sure. Looking forward to meeting everyone!\n\nimage1056×613 121 KB\n", "username": "DerrickChua" }, { "code": "", "text": "Hi everyone,thanks again for joining in the second SG MUG gathering and making it a vibrant community. Here are some photos we took at the gathering.\nimage1920×1440 179 KB\n\nimage1600×1200 173 KB\nHere is also the slide deck I presented as some of you have requested.Hope to see everyone again 3 months later!", "username": "DerrickChua" } ]
Singapore MUG: Escape Game Edition
2023-01-25T04:39:49.387Z
Singapore MUG: Escape Game Edition
2,752
null
[ "dot-net" ]
[ { "code": "var tag = new Tag { Id: 1, many many fields};\n\n var f = Builders<Tag>.Filter.Eq(t => t.id, i.id);\n var u = Builders<Tag>.Update.Set( tag ) // ???\n return new UpdateOneModel<Tag>(f, u) { IsUpsert = true };\n", "text": "hello.\nI work on bulk insert method ‘Add if not exists’;How i can set full object to set ? i have big object , dont want enumarable all fields with method ‘Set’", "username": "alexov_inbox" }, { "code": "ReplaceOneModel<T>", "text": "Hi, I’m commenting as I am interested in the same question.I think this should be done with ReplaceOneModel<T> instead?", "username": "Vedran_Mandic" } ]
C# Update with Upsert Set object (not fields)
2022-01-31T10:44:55.886Z
C# Update with Upsert Set object (not fields)
2,695
null
[ "java", "crud" ]
[ { "code": "{\n _id: 'test1',\n arr: [ '111', '222', '333', '333' ]\n},\n{\n _id: 'test2',\n arr: [ '111', '222', '333' ]\n}\nBson filter = Filters.and(Filters.exists(\"arr\"), Filters.all(\"arr\", \"333\"));\nBson bson = Updates.set(\"arr.$\", \"999\");\nUpdateResult result = collection.updateMany(filter, bson);\nBson filter = Filters.all(\"arr\", \"333\");\nBson bson = Updates.set(\"arr.$\", \"999\");\nUpdateResult result = collection.updateMany(filter, bson);\n{\n _id: 'test1',\n arr: [ '999', '222', '333', '333' ]\n},\n{\n _id: 'test2',\n arr: [ '999', '222', '333' ]\n}\n{\n _id: 'test1',\n arr: [ '111', '222', '999', '333' ]\n},\n{\n _id: 'test2',\n arr: [ '111', '222', '999' ]\n}\n", "text": "Hi, I currently working with java and mongodb.original documents :executed java code No.1 :executed java code No.2 :Result of No.1 :Result of No.2 :What I intended was No.2, and by deleting “and” and “exists”, it worked the way I wanted.\nbut at the same time, new questions arose.Isn’t a filter that checks if a field exists a “redundant” condition?\nso I thought the result would be the same even if I deleted it.I don’t know why these two filters give different results…", "username": "gantodagee_N_A" }, { "code": "Atlas atlas-b8d6l3-shard-0 [primary] test> db.AllAnd.updateMany( { arr: { $exists: true, $all: [ 333]}}, { $set: { \"arr.$\": 999} })\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 2,\n modifiedCount: 2,\n upsertedCount: 0\n}\nAtlas atlas-b8d6l3-shard-0 [primary] test> db.AllAnd.find()\n[\n { _id: 'test1', arr: [ 999, 222, 333, 333 ] },\n { _id: 'test2', arr: [ 999, 222, 333 ] }\n]\nAtlas atlas-b8d6l3-shard-0 [primary] test> db.AllAnd.updateMany( { \"arr\": 333}, { $set: { \"arr.$\": 999}})\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 2,\n modifiedCount: 2,\n upsertedCount: 0\n}\nAtlas atlas-b8d6l3-shard-0 [primary] test> db.AllAnd.find()\n[\n { _id: 'test1', arr: [ 111, 222, 999, 333 ] },\n { _id: 'test2', arr: [ 111, 222, 999 ] }\n]\nBson filter = Filters.eq(\"arr\", 333);\nBson update = Updates.set(\"arr.$\", 999);\nUpdateResult result = collection.updateMany(filter, update);\n\n\n{\"_id\": \"test1\", \"arr\": [111, 222, 333, 333]}\n{\"_id\": \"test2\", \"arr\": [111, 222, 333]}\nMatched documents: 2\nModified documents: 2\n{\"_id\": \"test1\", \"arr\": [111, 222, 999, 333]}\n{\"_id\": \"test2\", \"arr\": [111, 222, 999]}\n", "text": "Hi @gantodagee_N_A and welcome to the MongoDB community.As mentioned in the MongoDB documentation for $(update) operator, for multiple array matches, the $ operator might behave ambiguously.Since for the query 1, both the conditions inside the AND operator are true, I think the $ becomes unreliable and points to the first element for both the documents match.However, in the query 2, the $ can point to the exact element according to the filter criteria and set the 999 value at the required position.If for your case, the second query performs the exact operation as expected, the recommendation would be use the same in order to avoid confusions in the future.I tried to replicate the above in my local environment and below are the results for the following:\nQuery1:Query 2:We tried to perform the query 2 in the Java sync driver version 4.9, and to get the desired output as mentioned, we modified the query as:Let us know if you have any further queries.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
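To make the distinction above concrete, here are the shell equivalents (using the AllAnd collection from the reply): arr.$ touches only the first matching element, while a filtered positional operator with arrayFilters touches every element that matches.

```javascript
// Updates only the FIRST element of arr equal to 333 in each matched document:
db.AllAnd.updateMany({ arr: 333 }, { $set: { "arr.$": 999 } });

// Updates EVERY element equal to 333, via the filtered positional operator:
db.AllAnd.updateMany(
  { arr: 333 },
  { $set: { "arr.$[el]": 999 } },
  { arrayFilters: [ { el: 333 } ] }
);
```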
I don't know why these two results are different
2023-03-06T05:30:47.618Z
I don&rsquo;t know why these two results are different
867
https://www.mongodb.com/…b_2_1023x553.png
[]
[ { "code": "", "text": "i have created a regular index with two fields status and createdAt and i want to sort my result with createdAt but it gives me memory sort, why not sorting using index?\nimage1609×870 57.3 KB\n\n\nimage1369×286 13 KB\n", "username": "Ahmed_Naser1" }, { "code": "{ createdAt: 1 }{ status: { $ne: \"DELETED\" } }.createIndex({ createdAt: 1, status: 1 })\n", "text": "Hello @Ahmed_Naser1, Welcome back to the MongoDB community forum,You need to understand the ESR (Equality, Sort, Range) rule of indexing, this is really the best technique to satisfy the compound index, refer to the documentation for more details:In your case:On the basis of the ESR (Equality, Sort, Range) rule, your index should be as below and it will definitely satisfy the query’s sort operation.", "username": "turivishal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Memory sort even though the sort key is part of an index
2023-03-13T19:43:24.951Z
Memory sort even though the sort key is part of an index
701
null
[ "aggregation", "data-modeling" ]
[ { "code": "", "text": "Basically what i’m trying to design is spreadsheets of data with MongoDB, or lets call it tabular data, where user can define arbitrary number of columns of rows for a table.Names of the columns itself, their data types that they will hold and metadata like is column unique, default value for it, order in which column should appear, and similar, is not known in advance.\nTable itself belongs to organisation/company entity (lets call it tenants)Lets imagine for the sake of simplicity, we have user defined table 2 x 2, that looks something like following. Table name is “Parking spots” and table description is “List of our parking spots”.Over the time i could expect not that much created columns, but i can expect thousand(s) of rows. Lets say that above table from 2x2 could become 20 * 2000. Further more i could expect a lot of updates when it comes to updating specific cell of a table.Being not that experienced with mongodb, im looking for some optimal solution how to design this having in mind query performances supporting regular operation what end user can due with tables (like updating individual cells, sorting rows based on some columns, etc)My first guess would be to have single collection “tables” with embedded array of “columns” which itself would have embedded “cells” array. In above example of user defined table with parking data, with everything embedded, data in database would look like following: http://json-parser.com/96f5929e/1My guess for embedding is based on that, that “cells” and “columns” can not exists on their own, and there is no use case for end-user for fetching cell without column, or fetching column without other table data.\nBut again, in that approach, i would end up with possibility of having thousands and thousands of embedded documents.\nStill i would maintain objectId for each cell and each column, for being able to faster find/update individual cell. (Not even sure is it possible having unique _id per each nested document within the parent)Another approach would be to use referencing instead of embedding, to perhaps have separate collection for columns and separate collection for cells. Or to use embedding for table data and columns, while having separated collection for individual cells. But somehow to me, with embedding everything seems like better educated guess.", "username": "Srdjan_Cengic" }, { "code": "things that are queried together should stay together", "text": "Hey @Srdjan_Cengic,Welcome to the MongoDB Community Forums! Your initial approach of embedding columns and cells within a single collection of “tables” is a reasonable one, given that columns and cells are tightly coupled to the table and have no meaning outside of it. However, as you pointed out correctly, this approach could lead to performance issues as the number of cells and columns grows. Also, MongoDB has a limit of 16MB on the BSON document size.Still i would maintain objectId for each cell and each column, for being able to faster find/update individual cell. (Not even sure is it possible having unique _id per each nested document within the parent)Yes, you can have different ids in each embedded document for referencing easily.Additionally, I would like to point out one thing about data modeling in MongoDB. A general thumb rule to follow while schema designing in MongoDB is things that are queried together should stay together. 
Thus, it may be beneficial to work from the required queries first and let the schema design follow the query pattern.\nUltimately, the best approach depends on your specific use case and performance requirements. You may want to consider testing out different approaches and measuring their performance to determine which one works best for your needs. You can use mgeneratejs to create sample documents quickly in any number, so the design can be tested easily.Additionally, since you’re new to MongoDB, I’m linking some more useful resources that you can explore:\nData Modelling Course\nMongoDB IndexingPlease let us know if you have any additional questions. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "tables: [{\n _id: …,\n tenantId: 1,\n name: “Parking spots”,\n description: “List of our parking spots”.\n columns: [{\n _id: …,\n columnName: “parkingType”,\n order: 1\n }, {\n _id: …,\n columnName: “numberOfSpots”,\n order: 2\n }],\n …\n]\ntable_cells: [{\n _id: …,\n columnId: first column _id in “tables” collection,\n rowIndex: 1,\n value: “garage”\n }, {\n _id: …,\n columnId: first column _id in “tables” collection,\n rowIndex: 2,\n value: “public”\n }, {\n _id: …,\n columnId: second column _id in “tables” collection,\n rowIndex: 1,\n value: 10\n }, {\n _id: …,\n columnId: second column _id in “tables” collection,\n rowIndex: 2,\n value: 100\n }\n]\n", "text": "@Satyam thank you very much for your answer. Still im trying to find some approach to start with, that would, at least, be a solid candidate to start with.I gave up from my solution, described in my previous answer, cause according to some mongodb articles, one should avoid unbounded arrays. That would definitelly happen in my solution with “cells” array.Would be great if you could share any kind of approach you think it would be good to start with for described issue.Besides regular operation that user can do with normal spreadsheet/table, very important requirement would be to sort data based on whatever columns, like in example above, if you sort by “numberOfSpots” rows would change order.I was reading a lot about “attribute” pattern with mongodb, somehow im trying now two have 2 different collections.One collection “tables” to hold meta data about table columns such as:And then to have separated collection “table_cells” to hold value of individual table cells.Maybe something like:With this at least i wouldn’t have unbounded arrays like in first solution. Now i need to think that this table can grow to like 20(columns)x 2000(rows) and to think how exactly to add indexes on design like this.Third approach would be to somehow maintain “table_rows” collection instead of “table_cells”, where each document in “table_rows” would represent one row of the table. But i dont know how then to create schema for that cause meta data about columns (including column name) is created by user.I would really really appreciate any opinion or solution you could advise.", "username": "Srdjan_Cengic" } ]
Database design for tabular data (user defined columns with potentially a lot of rows of data to maintain)
2023-03-13T01:12:45.647Z
Database design for tabular data (user defined columns with potentially a lot of rows of data to maintain)
924
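A minimal mongosh sketch of the second approach discussed in this thread (one document per cell in a separate table_cells collection). The collection names, the column _id, and the field values are placeholders rather than a definitive design; the point is that a compound index on { columnId, value } lets the row order for a user-chosen sort column be derived without scanning the whole table.

// Indexes for the hypothetical "table_cells" collection (one document per cell).
db.table_cells.createIndex({ columnId: 1, value: 1 });    // sort rows by a column's values
db.table_cells.createIndex({ columnId: 1, rowIndex: 1 }); // fetch or update a single cell quickly

// Sort the table's rows by the "numberOfSpots" column:
const numberOfSpotsColId = ObjectId("640000000000000000000001"); // placeholder _id taken from "tables"
const rowOrder = db.table_cells
  .find({ columnId: numberOfSpotsColId }, { _id: 0, rowIndex: 1 })
  .sort({ value: 1 })
  .toArray()
  .map(cell => cell.rowIndex); // row indexes in the order the sorted column dictates

// Update one cell (the frequent single-cell edit mentioned above).
db.table_cells.updateOne(
  { columnId: numberOfSpotsColId, rowIndex: 42 },
  { $set: { value: 11 } }
);

At the sizes mentioned (roughly 20 columns by 2,000 rows) this stays around 40,000 small documents per table, which such indexes can serve comfortably while avoiding the unbounded-array concern of the fully embedded design.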
null
[ "app-services-user-auth", "realm-web" ]
[ { "code": "app.logIn(Realm.Credentials.google(authCode));{error: \"error exchanging access code with OAuth2 provider\", error_code: \"AuthError\",…}Client ID for Web application\nAuthorized JavaScript origins\nURI: [https://realm.mongodb.com]\nAuthorized redirect URIs: [\nhttps://realm.mongodb.com/api/client/v2.0/auth/callback, \nhttps://realm.mongodb.com/api/client/v2.0/auth/callback, \nhttps://us-west-2.aws.realm.mongodb.com/api/client/v2.0/auth/callback, \nhttps://eu-west-1.aws.realm.mongodb.com/api/client/v2.0/auth/callback, \nhttps://ap-southeast-2.aws.realm.mongodb.com/api/client/v2.0/auth/callback, \nhttps://stitch.mongodb.com/api/client/v2.0/auth/callback]\noauth2-google.json{\n \"id\": \"5fc81536e620d067d2edcfac\",\n \"name\": \"oauth2-google\",\n \"type\": \"oauth2-google\",\n \"config\": {\n \"clientId\": \"10571797xxxx-xxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com\"\n },\n \"secret_config\": {\n \"clientSecret\": \"google_ouath_client_secret\"\n },\n \"disabled\": false\n}\n <GoogleLogin\n clientId=\"10571797xxxx-xxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com\"\n buttonText=\"Login\"\n responseType=\"code\"\n onSuccess={(response) => {\n if (response.code) {\n loginWithGoogle(response.code);\n }\n }}\n onFailure={(response) => {/*omited*/}}\n />\n\n const loginWithGoogle = async (authCode: string) => {\n try {\n await app.logIn(Realm.Credentials.google(authCode));\n } catch (e) {\n console.error(e)\n }\n };\nresponseType=\"code\"app.loginRequest URL: https://stitch.mongodb.com/api/client/v2.0/app/xxxxxxx-app-pozwq/auth/providers/oauth2-google/login\nRequest Method: POST\nStatus Code: 401 \nRemote Address: 52.16.113.157:443\nReferrer Policy: strict-origin-when-cross-origin\n\n{\"authCode\":\"4/0AY0e-g6OJPnXe4KLQYWOYSkm2b6aWxxxxxxxxxxxxxxxxxxxxxxxx\",\"options\":{\"device\":{\"sdkVersion\":\"1.0.0\",\"platform\":\"chrome\",\"platformVersion\":\"86.0.4240\",\"deviceId\":{\"$oid\":\"5fc802b2723axxxxx\"}}}}\n{\"error\":\"error exchanging access code with OAuth2 provider\",\"error_code\":\"AuthError\",\"link\":\"https://realm.mongodb.com/groups/5f71b53f1bbd91xxxxxxxxxxxxxxxxxxxxxxxxx\"}\n", "text": "Hi, I’m trying to implement Google Auth using realm-web but I’m getting error during exchanging authCode for accessToken with stitch service\napp.logIn(Realm.Credentials.google(authCode));stitch respond with{error: \"error exchanging access code with OAuth2 provider\", error_code: \"AuthError\",…}My Google OAuth2 Clientmy oauth2-google.jsonclient appYes, I’m using responseType=\"code\", and I successfully receive authCode from Google.\nBut, HTTP call of app.login looks like thisResponseWhat am I doing wrong ?", "username": "Stanislaw_Baranski" }, { "code": "", "text": "Hi @Stanislaw_Baranski,The Google auth api has changed and now you need to get the Auth code from the google sdk and not from our endpoint\nhttps://docs.mongodb.com/realm/web/authenticate#google-oauthThe redirect will work but only for the first 100 google users then it will block. Its for development purpose only and is not recommended way.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "@Pavel_Duchovny, I know, and this is exactly what I’m doing, and it doesn’t work, that’s why I’m writing here.", "username": "Stanislaw_Baranski" }, { "code": "authCodetoken\"authCode\":\"4/0AY0e-g6OJPnXe4KLQYWOYSkm2b6aWxxxxxxxxxxxxxxxxxxxxxxxx\"", "text": "I’m the only one facing this problem? I’m trying to solve it for a week now. I checked everything. @Pavel_Duchovny have I done something wrong ? 
I’m getting the auth code from google sdk, not from your endpoint. I’m sending you the authCode and not a token.\"authCode\":\"4/0AY0e-g6OJPnXe4KLQYWOYSkm2b6aWxxxxxxxxxxxxxxxxxxxxxxxx\"", "username": "Stanislaw_Baranski" }, { "code": "react-native-app-auth@react-native-community/google-signin", "text": "@Stanislaw_Baranski I ran into this issue a ton while setting up Google auth in a React Native project and never really found a true solution. I was originally using the library react-native-app-auth and nothing seemed to be working even though I’m pretty certain everything was set up correctly. I then tried using a different library, @react-native-community/google-signin, and everything just started working. I still don’t know why the first library didn’t work with Realm, but I guess my suggestion is to try using a different library to handle the oauth and see if that does anything.", "username": "Peter_Stakoun" }, { "code": "react-google-loginauthorizeopenIdauth_providers/oauth2-google.json/config/openId", "text": "Thank you! Although you didn’t solve my problem, you gave me a clue that the problem may lie on my side.\nAfter digging in, I found a possible cause.\nRealm auth uses non-standard gapi.auth2.authorize authentication flow, while the library I use react-google-login uses default gapi.auth2.init/sign-in. I rewrite everything to use standard google sdk, and recommended authorize flow, but it still does not work, I get the same error. Could you please share your configuration? What scopes do you use? Do they match exactly on client side and realm config? Do you enable openId parameter in auth_providers/oauth2-google.json/config/openId (there is no documentation about it, so I assume they added it recently)?", "username": "Stanislaw_Baranski" }, { "code": "GoogleSignin.configure({\n webClientId: GOOGLE_WEB_CLIENT_ID,\n iosClientId: GOOGLE_IOS_CLIENT_ID,\n offlineAccess: true,\n})\nprofileopenId", "text": "The only configuration I really needed for the flow in React Native wasOn the Google dev console side, when I download the json for the credentials here’s what I have:{“web”: {\n“client_id”:“xxxxx.apps.googleusercontent.com”,\n“project_id”:“xxxxx”,\n“auth_uri”:“Sign in - Google Accounts”,\n“token_uri”:“https://oauth2.googleapis.com/token”,\n“auth_provider_x509_cert_url”:“https://www.googleapis.com/oauth2/v1/certs”,\n“client_secret”:“xxxxx”,\n“redirect_uris”:[\n“https://realm.mongodb.com/api/client/v2.0/auth/callback”,\n“https://us-east-1.aws.realm.mongodb.com/api/client/v2.0/auth/callback”\n],\n“javascript_origins”:[“https://realm.mongodb.com”]\n}}For scopes, I have the profile scope and openId.", "username": "Peter_Stakoun" }, { "code": "import * as Realm from \"realm-web\";\nimport googleOneTap from 'google-one-tap';\nconst app = new Realm.App(\"<Your Realm App ID>\");\nconst client_id = \"<Your Google Client ID>\";\n// Open the Google One Tap menu\ngoogleOneTap({ client_id }, async (response) => {\n // Upon successful Google authentication, log in to Realm with the user's credential\n const credentials = Realm.Credentials.google(response.credential)\n const user = await app.logIn(credentials);\n console.log(`Logged in with id: ${user.id}`);\n});\nUnhandled Rejection (Error): Request failed (POST https://stitch.mongodb.com/api/client/v2.0/app/tasktracker-msbya/auth/providers/oauth2-google/login): error exchanging access code with OAuth2 provider (status 401){\"authCode\":\"eyJhbGciOiJSUzI1....", "text": "@Pavel_Duchovny Can you help us to fix this thanks.I cant use this 
example:Every time i try i gotUnhandled Rejection (Error): Request failed (POST https://stitch.mongodb.com/api/client/v2.0/app/tasktracker-msbya/auth/providers/oauth2-google/login): error exchanging access code with OAuth2 provider (status 401)\nAnd body posted was:\n{\"authCode\":\"eyJhbGciOiJSUzI1....Thanks for your help.", "username": "Jonathan_Gautier" }, { "code": "", "text": "I believe you need to have OpenID Connect enabled if you want to use Google One Tap. Check your Google provider configuration to see if it’s on - if it’s not try turning it on and see if you still get errors.Note that OpenID Connect doesn’t include metadata fields, so if your app needs those it won’t fit your use case.", "username": "nlarew" }, { "code": "", "text": "@nlarew Already Turn On I didn’t need metadata fields dont worrie ! Just need to have this Google Tab Menu Working image963×240 19.4 KB", "username": "Jonathan_Gautier" }, { "code": "", "text": "Can you verify that you’re using Realm Web v1.1.0 or newer?", "username": "kraenhansen" }, { "code": "const credentials = Realm.Credentials.google(response.credential)\nconst user = await app.logIn(credentials);\n", "text": "@kraenhansen Now i am using 1.2.1 ( before 0.8.1 ).And got new error or code logic, when i login with googleOneTap and use this code:realm app create new user just with name data and without email address.image1326×194 12.1 KBI have checked email address, email verify etc was in JWT token given by googleonetap.Any idea ? Thanks for your help.", "username": "Jonathan_Gautier" }, { "code": "", "text": "JWT given\nimage1031×335 24.5 KB", "username": "Jonathan_Gautier" }, { "code": "", "text": "@kraenhansen any idea ? ", "username": "Jonathan_Gautier" }, { "code": "", "text": "This seems like a configuration / potential server-side issue, which is slightly out of my domain of expertise. Since this is also a bit off topic, in respects to the original post, my best suggestion is to create a demo app that displays this behavior and create a new post on the forum referencing it, then I’m sure someone will follow up and help you resolve this.", "username": "kraenhansen" }, { "code": "", "text": "What you need is the “id_token” property from the signIn response from google. I’m using “vue-google-oauth2” package and the “await this.$gAuth.signIn()” method. Unfortunately, this method throws error that I’m closing the page, so I picked the “id_token” from the browser network request to test.", "username": "AbdulGafar_Ajao" }, { "code": "", "text": "Facing the same problem, anyway to solve this?\nI’m getting the JWT with all data, my provider is configured without openid connect, and I’m getting the same error: “error fetching info from OAuth2 provider”\nAny help will be appreciated!\nThank you\nIdan", "username": "idan_stark" }, { "code": "", "text": "It seems this just got resolved today (at least for me). See link below for the realm-js issue:### How frequently does the bug occur?\n\nAll the time\n\n### Description\n\nUncaught …(in promise) Error: `google(<tokenString>)` has been deprecated. Please use `google(<authCodeObject>)`.\nI have received this error. All tough I was following the original docs both for google one tap connect and realm. Did some digging and found that this error shows when creating realm credential for google. I am using the `realm web sdk with react`. This error is generating from `bundle.dom.es.js `at line 2386 in method `derivePayload`. 
If there is any solution or some has faced this issue before it will be grate if you can share the solution. The same issue can be recreated by using the code provided in the `realm web sdk google authentication code sample`. \n\nhttps://www.mongodb.com/docs/realm/web/authenticate/#std-label-web-google-onetap\n\n### Stacktrace & log output\n\n_No response_\n\n### Can you reproduce the bug?\n\nYes, always\n\n### Reproduction Steps\n\n_No response_\n\n### Version\n\n2.0.0\n\n### What SDK flavour are you using?\n\nAtlas App Services (auth, functions, etc.)\n\n### Are you using encryption?\n\nNo, not using encryption\n\n### Platform OS and version(s)\n\nMac\n\n### Build environment\n\nReact\n\n\n### Cocoapods version\n\n_No response_And here is the code sandbox with the solution you will find in the above link:realm-web-google-auth-vanilla by kraenhansen using parcel-bundlerI hope this helps. I can’t tell you how many hours I spent trying to figure this out. Friday the 13th isn’t so unlucky.", "username": "thecyrusj13_N_A" }, { "code": "", "text": "Did you have to have OpenID Connect enabled for this to work?We are using @react-oauth/google which just wraps the google gsi client and I followed the example you posted and couldn’t get it to work. The only thing I haven’t tried is enabling OpenId Connect.", "username": "Matt_Tew" }, { "code": "", "text": "Yes, I have OpenID Connect enabled for it to work.", "username": "thecyrusj13_N_A" } ]
Realm-web Google Auth error exchanging access code with OAuth2 provider
2020-12-03T02:16:15.622Z
Realm-web Google Auth error exchanging access code with OAuth2 provider
8,413
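For reference, a minimal sketch of the One Tap flow that the linked GitHub issue and sandbox converge on, assuming a recent realm-web release and a Google provider with OpenID Connect enabled. The app ID and client ID are placeholders, and the exact credential option name should be confirmed against the SDK docs for your version.

import * as Realm from "realm-web";
import googleOneTap from "google-one-tap";

const app = new Realm.App({ id: "<your-app-id>" });

googleOneTap({ client_id: "<your-google-client-id>" }, async (response) => {
  // Newer realm-web versions reject a bare token string; pass an object instead.
  // With One Tap / OpenID Connect, Google returns an ID token in response.credential.
  const credentials = Realm.Credentials.google({ idToken: response.credential });
  const user = await app.logIn(credentials);
  console.log(`Logged in with id: ${user.id}`);
});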
null
[ "student-developer-pack" ]
[ { "code": "", "text": "hey @Lieke_Booni have completed my learning path .how to get 100% coupon.", "username": "Bhukya_Venkata_sai_naik" }, { "code": "", "text": "", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Certification exam coupon
2023-03-13T13:38:36.911Z
Certification exam coupon
1,628
null
[]
[ { "code": "", "text": "i have completed my learning path.How do i get 100% free certification coupon?", "username": "Bhukya_Venkata_sai_naik" }, { "code": "", "text": "Hi @Bhukya_Venkata_sai_naik,Welcome to the MongoDB Community forums In order to obtain the free certification coupon, please send an email to [email protected] with all the details such as the registered email with the Github Student pack and proof of completion certification. The team will provide further assistance.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Certification exam
2023-03-13T13:29:12.196Z
Certification exam
1,631
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi guys,We are a start-up releasing a new white label product and want to create a new database for each client in Mongodb for obvious reasons of privacy, security, ownership.Our clients have their clients using the white label platform we provide. Is there a way to programmatically create a new Atlas Authentication Service for each client in MongoBD when we do their on-boarding?Thanks!", "username": "cris" }, { "code": "how to create a database using MongoDB Shell (mongosh)", "text": "Hello @cris ,Could you please confirm, when a new customer is on-boarding, do you want to create a new database in your cluster programatically or do you want to add new users to your cluster programatically?MongoDB creates a database when you first store data. This step-by-step guide will walk you through how to deploy your own database.https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Database-Users/operation/createDatabaseUserNote: This MongoDB Cloud supports a maximum of 100 database users per project. If you require more than 100 database users on a project, contact Support.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Creating multiple ATLAS databases for the same account
2023-03-07T15:00:45.779Z
Creating multiple ATLAS databases for the same account
673
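A small Node.js sketch of the onboarding step described above, assuming the database-per-tenant layout; the connection string, naming convention, and seed document are placeholders. MongoDB creates the database and collection lazily on the first write, so "creating" a tenant database is just the first insert — per-tenant database users would then be added separately through the Atlas Admin API createDatabaseUser endpoint referenced in the answer.

const { MongoClient } = require("mongodb");

const client = new MongoClient(process.env.ATLAS_URI); // placeholder connection string

async function onboardTenant(tenantId) {
  // The database and collection come into existence on this first write.
  const db = client.db(`tenant_${tenantId}`);
  await db.collection("settings").insertOne({ tenantId, createdAt: new Date() });
}

// Example usage during onboarding:
// await client.connect();
// await onboardTenant("acme-corp");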
https://www.mongodb.com/…afacfe7f63a0.png
[ "atlas-functions" ]
[ { "code": "import moment from \"moment\";\nexports = function ({ query, headers, body }, response) { \n\treturn {\n\t\tstatus: \"ok\",\n\t\tbodyy:body,\n\t\tquery:query,\n\t\tresponse:response\n\t};\t\n};\n{\n \"status\": \"ok\",\n \"bodyy\": {\n \"Subtype\": 0,\n \"Data\": \"ew0KICAgICJwYXJ0eV9uYW1lIjoiUmFodWwgVlMgUGlua2kiDQp9\"\n },\n \"query\": {},\n \"response\": {}\n}\n", "text": "I have created a function in Atlas function and created an HTTPS Endpoint to insert some data,\nbut how to get the body in the post request. I am getting query parameter in the request but body is not there.This is the response I am getting when hitting Post request with body.This is the screenshot of my postman post request with simple body and response I am getting.\n1652×509 22.7 KB\nRead operation is working fine. But with post request body is having subtype and data with encoded string.Kindly help on this as I googled so much but not getting any help.Regards\nZubair", "username": "Zubair_Rajput" }, { "code": "bodyyCanonical{ \"$binary\":\n {\n \"base64\": \"<payload>\",\n \"subType\": \"<t>\"\n }\n}\n\"<payload>\"\"<t>\" return{\n\t\tstatus: \"ok\",\n\t\tbodyy:body.text(),\n\t\tquery:query,\n\t\tresponse:response\n\t};\n", "text": "Hello @Zubair_Rajput ,Please confirm if my understanding of your use-case is correct? You are sending a Post request to your function in Atlas and it has a return statement where the body of the request should be returned but instead you are getting a base64 encoded value of the parameter in body.If yes, then, can you please confirm if you want to just insert data or you also want to read inserted data from the Post request?Additionally, the response you shared in the bodyy. It consists of binary datatype whose Canonical representation is as belowWhere the values are as follows:If you change the return statement to below, you will be able to see the body sent in your POST requestRegards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to get body in post request to HTTPS Endspoint in atlas function?
2023-03-03T14:49:54.321Z
How to get body in post request to HTTPS Endspoint in atlas function?
1,825
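A minimal sketch of the accepted approach for an Atlas Function behind an HTTPS endpoint: the POST body arrives as BSON binary, so decode it with body.text() before reading fields. The field name party_name comes from the sample request above; error handling is omitted.

exports = function ({ query, headers, body }, response) {
  // body is a BSON.Binary; body.text() returns the raw JSON string that was POSTed.
  const payload = JSON.parse(body.text()); // EJSON.parse(body.text()) also works for extended JSON
  return {
    status: "ok",
    partyName: payload.party_name,
    query: query
  };
};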
null
[ "python" ]
[ { "code": "", "text": "I am facing difficulty in creating arrayfield in mongo db with django… I am using djongo for mongodb and django connection… Can’t we simply create array field without creating model container…", "username": "Swapnil_Joshi" }, { "code": "", "text": "Hi @Swapnil_Joshi,Welcome to the MongoDB Community forums I am facing difficulty in creating arrayfield in mongo db with djangoCould you share what particular error message you are getting while doing this? Can you share the code snippet for the same?I am using Djongo for MongoDB and Djongo connection… Can’t we simply create an array field without creating a model container…You can refer to the Djongo documentation on how to create an array field in MongoDB.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
How to create array field in mongo db and django using djongo?
2023-03-13T05:33:33.646Z
How to create array field in mongo db and django using djongo?
1,203
null
[]
[ { "code": "", "text": "Hi All, I’m Eoin . I joined MongoDB in late 2013 in Dublin, Ireland. I was part of the team that supported our customers and last year I moved to the Education team where I write, teach, and deliver training to our customers and to our own engineers.I’m one of the co-authors of “MongoDB: The Definitive Guide, 3rd Ed” and help run the MongoDB Dublin MongoDB Meetup group.I am also involved in running MongoDB World as the technical programme chair. I help select and mentor the technical speakers to ensure we deal the best ever MongoDB World. If you haven’t checked it out, then please do so as it’s our flagship event for our community.Eoin", "username": "Eoin_Brazil" }, { "code": "", "text": "Hi @Eoin_Brazil,Hello from the Mojave Desert in Southern California!! I’ve been a fan and student of MongoDB on and off for the last couple of years, Thank you for all that you do and thank you for all that you have done to educate others about MongoDB. My mom is from Ireland (originally) and I’ve been hoping to make it over there sometime in the future. It would be awesome to attend a MongoDB meetup in Dublin someday!! I look forward to diving into “MongoDB: The Definitive Guide, 3rd Ed. Thank you for posting the link here!Cheers:-)", "username": "Juliette_Tworsey" }, { "code": "", "text": "Thanks @Juliette_Tworsey!Wow, that’s a world away from Ireland. If you ever need suggestions for Ireland holidays, let me know. We’ve a nice list compiled here in the office as we often get that as a request for visitors.Hope you enjoy the book as well!Eoin", "username": "Eoin_Brazil" }, { "code": "", "text": "Hi, have you Garretstown down south of Cork at Old Head on your list?\nJust a great spot for surfing, not much more \nFaily recreational when you have a customer in Cork…", "username": "michael_hoeller" }, { "code": "", "text": "Hi @Eoin_Brazil,For sure, where I live is a world away from Ireland. Sometimes it feels like I’m living on another planet up here in the High Desert with all of the Joshua trees dotting the horizon.I will be sure to hit you up for some holiday suggestions if/when I get over to Ireland.Thanks!Juliette", "username": "Juliette_Tworsey" }, { "code": "", "text": "Hi everyone, I a newbie here and I am student, I have a question from all,\nWhat courses or educational resources are available through MongoDB Education and how can they benefit individuals interested in learning about MongoDB and its applications?", "username": "James_Robert" }, { "code": "", "text": "I got my answer. “MongoDB Education offers a range of courses and resources for individuals interested in learning about MongoDB and its applications. These include free online courses, paid certification programs, and on-demand training resources. Through MongoDB Education, learners can gain knowledge and skills in topics such as data modeling, query optimization, and application development with MongoDB. These resources can benefit individuals by providing them with the tools and expertise needed to work with MongoDB effectively and efficiently.”", "username": "James_Robert" } ]
🌱 Hi, I'm Eoin from MongoDB Education
2020-02-10T09:42:25.180Z
:seedling: Hi, I’m Eoin from MongoDB Education
3,087
null
[ "dot-net", "atlas-device-sync", "flexible-sync", "unity" ]
[ { "code": "realm.All<Foo>()bar Object\n a: \"aaa\"\n b: \"bbb\"\n c: \"ccc\"\n{\n \"title\": \"Foo\",\n \"properties\": {\n \"_id\": { \"bsonType\": \"string\" },\n \"bar\": {\n \"bsonType\": \"object\",\n \"additionalProperties\": { \"bsonType\": \"string\" }\n }\n}\npublic class Foo : RealmObject\n{\n [MapTo(\"_id\")]\n [PrimaryKey]\n [Required]\n public string Id { get; set; }\n\n [MapTo(\"bar\")]\n [Required]\n public IDictionary<string, string> Bar { get; }\n}\nRealmDictionary<string>", "text": "Hey there! I’m trying to sync data from the atlas. Here are some problems I collided with:Sample object in the collection:Here’s my schema:and c# class:I also tried to use RealmDictionary<string> but the result was the same.Thanks!", "username": "Daniil_T" }, { "code": "", "text": "Just a guess, but do you maybe have FlexibleSync enabled instead of the old partition strategy? New apps default to FlexibleSync, and you probably want to be using that in your code too because it’s slick.", "username": "Jonathan_Czeck" } ]
Incompatible property
2023-02-28T14:52:57.798Z
Incompatible property
1,142
null
[ "sharding" ]
[ { "code": "", "text": "Dear Experts,I have designed an architecture for our up coming project so I am looking recommendation from your sideThere are total 3 ServersServer01dtnode1 rep01\ndtnode2 rep02\ncfgnode1 cfg01\nmongosServer02dtnode1 rep01\ndtnode2 rep02\ncfgnode1 cfg01\nmongosServer03dtnode1(Arbiter) rep01\ndtnode2(Arbiter) rep02\ncfgnode1 cfg01Shard01 → rep01\nShard02 → rep02Data should be for 1 year is around 7 TB and Data velocity is around on every 5 seconds so please share your feedback about this architecture will work well or any type of problem we could face", "username": "RD_Burman" }, { "code": "", "text": "Too bad a reply needs to be 20 characters. Because the answer is simply DO NOT.You only have 3 machines, do not make them suffer by having to do context switch between 2 instances of mongod.", "username": "steevej" } ]
Recommendation About Architecture of 2 SHARD with Total 3 Servers only
2023-03-13T17:41:28.075Z
Recommendation About Architecture of 2 SHARD with Total 3 Servers only
680
null
[ "golang" ]
[ { "code": "", "text": "I want to understand more about MongoDB. I have used it in many projects before but don’t know much about the internals. So, contributing to the go-driver sounds like an excellent way to start.But since I haven’t contributed to open source before, I’m a little lost here. I went to the Jira board of the driver and tried to find good-first-issues but there weren’t any. I couldn’t figure out which task to pick and where to start.I’m quite proficient with Go and have about four years of experience developing/debugging distributed systems.Any help is appreciated.", "username": "Shivansh_Kuchchal" }, { "code": "", "text": "Hi @Shivansh_Kuchchal welcome to the community!Glad to have you in the community, and thanks for your interest in contributing.There’s a contributing guideline for the Go driver in the Github repo for the Go driver that should serve as a starting point.Hope this helps!Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "@Shivansh_Kuchchal I’m glad to hear you’re interested in contributing to the Go driver!Adding on to @kevinadi’s answer, a really helpful way to contribute to the Go driver is to add testable examples. I noticed that the go.mongodb.org/mongo-driver/bson package has a shortage of examples and would benefit from more, especially for the bson.Marshal and bson.Unmarshal functions.", "username": "Matt_Dale" } ]
How to start contributing to the go-driver?
2023-02-20T06:06:21.215Z
How to start contributing to the go-driver?
900
null
[ "swift", "atlas-device-sync" ]
[ { "code": "", "text": "I develop a Mac app that uses the Realm Swift SDK and Device Sync. The backend is an M20 cluster. The data model for this app has 8 collections.I am trying to determine how many users can simultaneously run this app and still sync to/from the backend cluster. I’ve read the limitations page here: https://www.mongodb.com/docs/atlas/app-services/reference/service-limitations/, which says the M20 cluster has 100 maximum “change streams” and that one stream is opened for each collection.So, 100/8 = 12.5.But I already have more than 12 users running my app simultaneously and sync appears to function just fine. Does the Realm SDK open and close sync connections automatically as-needed? Or, once I open the Realm when my app launches, is a persistent sync session (and therefore 8 change streams) held forever, until the Realm is closed?I cannot find a simple answer to a very basic question: how many simultaneous users of the Realm Device Sync SDK can the M20, M30, and M50 tiers support?And a related question: what happens if the number of users grows and there is contention for “change streams”? Will all users eventually have their sync requests filled?", "username": "Bryan_Jones" }, { "code": "", "text": "Those limits are for database connections; Realm Sync doesn’t connect directly to your cluster but rather goes through an Atlas App Services instance which in turn connects to the Atlas cluster via a pool of connections. So there’s no hard limit on the number of users active, but there are of course some hardware limits (I.e. the hardware itself can’t support trillions of concurrent connections).And yes, in the unlikely scenario that you hit the limit the cluster connection limits, eventually all sync requests will be handled.", "username": "nirinchev" }, { "code": "", "text": "Thanks! Do you have any idea how many simultaneous sync clients an M20 can handle, roughly? I don’t need an exact number, just a ballpark so I know approximately when I’ll need to upgrade to the next tier.", "username": "Bryan_Jones" }, { "code": "", "text": "@Bryan_Jones long ago, I had a similar situation, here was my learning, but let me know if this helps ?Unfortunately, there is no straightforward answer to this question as the number of simultaneous sync clients an M20 cluster can handle depends on various factors such as the size and complexity of the data being synced, the network bandwidth, and the hardware capacity of the client devices.\nHowever, MongoDB Atlas provides a feature called Performance Advisor that can help estimate the capacity of your cluster for your workload by analyzing the cluster usage and providing recommendations. Additionally, you can consider load testing your app and measuring the performance to get an idea of how many simultaneous sync clients your M20 cluster can handle.\nIt’s important to note that tuning the sync settings such as batch size, sync frequency, and conflict resolution policy can also improve the sync performance and capacity of your sync clients.", "username": "Deepak_Kumar16" }, { "code": "", "text": "Yup, I can follow up here. We are planning on trying to release some formal numbers around performance on various cluster sizes, but it is difficult since there are so many factors at play including:So we will follow up when our work on trying to formalize some numbers is completed but in the meantime testing it yourself is sometimes the best way to do this. 
We find that in general the best way to do this is to use the NodeJS driver since it can be easier to implement a realistic load test.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Yea, I get that there will be variation. But having even SOME guidance would help a lot because what I’m trying to do is estimate costs for a client. I just need a ballpark estimate. Can an M20 cluster handle 100 sync users? 200? 1000? 10000?Just having an order-of-magnitude answer would be enough to project rough backend costs, which is what I’m trying to do.", "username": "Bryan_Jones" }, { "code": "", "text": "Thats fair. For some ballpark calculations from some of our existing customers. I suspect an M20 should be able to handle somewhere between 1,000-10,000 clients at the same time. We have some customers that push those bounds, but generally, they increase their cluster tier due to throughput as they hit the 10,000’s and then they can push through that to higher numbers.Additionally, we have some customers that are at around 1 million daily active users (but note that does not mean 1 million clients active at the same time, just over the course of a day)", "username": "Tyler_Kaye" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Max Concurrent Users from Realm Swift SDK For Device Sync
2023-03-10T00:16:57.144Z
Max Concurrent Users from Realm Swift SDK For Device Sync
1,377
null
[]
[ { "code": "", "text": "Hello,\nDoes Atlas allow partial searches in datetime fields? e.g. Searching for “june” in a given field should return all documents with “June” in the field.Thanks,\nPrasad", "username": "Prasad_Kini" }, { "code": "", "text": "Hello,Is this possible?", "username": "Prasad_Kini" }, { "code": "", "text": "Hello,\nDoes Atlas allow partial searches in datetime fields? e.g. Searching for “june” in a given field should return all documents with “June” in the field.Thanks,\nPrasad@Prasad_Kini Yes, Atlas Search allows partial searches in datetime fields. One way to achieve this is by using the $regex operator in the search query. For example, the query { $search: “june”, “dateField”: { $regex: “.June.” } } will return all documents with “June” in the “dateField” field. However, note that this approach may not be efficient for large datasets and it may be better to use a full-text search engine instead.HTH", "username": "Deepak_Kumar16" }, { "code": "{ \"regex\": { \"path\": \"created_on\" , \"query\": \".*july.*\", \"allowAnalyzedField\": true } }\n", "text": "Hi @Deepak_Kumar16,\nThanks for the reply. The regex operator in $search doesn’t seem to support regex for dates - sample clause below. This clause filters out all the documents even when there are matching ones.Are you suggesting using $regex in the $match stage? If yes, this is not an option for us due to serious performance issues.Thanks,\nPrasad", "username": "Prasad_Kini" } ]
Partial searches of datetime fields in Atlas
2023-03-08T20:41:47.401Z
Partial searches of datetime fields in Atlas
694
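Atlas Search's regex operator works on string fields, so "june" cannot be matched against a real date value there. Two server-side alternatives are sketched below with placeholder collection and field names: an $expr/$month match for "June of any year", and an index-friendly range query when the year is known (the Atlas Search range operator similarly accepts date bounds).

// June of any year — cannot use an index on created_on, so reserve this for smaller result sets
db.coll.aggregate([
  { $match: { $expr: { $eq: [ { $month: "$created_on" }, 6 ] } } }
]);

// June of a known year — a plain range query that can use an index on created_on
db.coll.find({
  created_on: { $gte: ISODate("2023-06-01"), $lt: ISODate("2023-07-01") }
});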
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "I have used a aggregate query with $group operation.\nThe database tested is of around 674MB. And, I have configured 11GB as wiredTigerCache.\nThen, Why am I getting Exceed Memory error for $group operation on this database?\nHow can I avoid it?", "username": "Monika_Shah" }, { "code": "", "text": "Please share the aggregation pipeline that causes the issue.There is no way we can tell if there is an issue with your code if we do not see your code.", "username": "steevej" } ]
Exceed Memory error for $group operation on very small database
2023-03-13T13:09:15.330Z
Exceed Memory error for $group operation on very small database
448
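The error comes from the roughly 100 MB per-stage memory limit on blocking stages such as $group, which is separate from the WiredTiger cache size, so a large cache does not prevent it. A hedged sketch with a placeholder pipeline — allowDiskUse lets the stage spill to temporary files instead of failing:

db.coll.aggregate(
  [
    { $group: { _id: "$someField", count: { $sum: 1 } } } // placeholder pipeline
  ],
  { allowDiskUse: true } // spill $group/$sort working memory to disk instead of erroring
);

Filtering and projecting with $match/$project before the $group, or grouping on fewer and smaller fields, also reduces how much state the stage must hold.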
null
[ "dot-net", "transactions", "cxx", "field-encryption" ]
[ { "code": "///\n/// Enum representing the various error types that can occur during driver usage.\n///\nenum class error_code : std::int32_t {\n /// More than one mongocxx::instance has been created.\n k_cannot_recreate_instance = 1,\n\n /// A default-constructed or moved-from mongocxx::client object has been used.\n k_invalid_client_object,\n\n /// A default-constructed or moved-from mongocxx::collection object has been used.\n k_invalid_collection_object,\n\n /// A default-constructed or moved-from mongocxx::database object has been used.\n k_invalid_database_object,\n\n /// An invalid or out-of-bounds parameter was provided.\n k_invalid_parameter,\n\n /// An SSL operation was used without SSL support being built.\n k_ssl_not_supported,\n\n /// An unknown read concern level was set.\n k_unknown_read_concern,\n\n /// An unknown write concern level was set.\n k_unknown_write_concern,\n\n /// The server returned a malformed response.\n k_server_response_malformed,\n\n /// An invalid MongoDB URI was provided.\n k_invalid_uri,\n\n /// A default-constructed or moved-from mongocxx::gridfs::bucket object has been used.\n k_invalid_gridfs_bucket_object,\n\n /// A default-constructed or moved-from mongocxx::gridfs::uploader object has been used.\n k_invalid_gridfs_uploader_object,\n\n /// A default-constructed or moved-from mongocxx::gridfs::downloader object has been used.\n k_invalid_gridfs_downloader_object,\n\n /// A mongocxx::gridfs::uploader object was not open for writing, or a\n /// mongocxx::gridfs::downloader object was not open for reading.\n k_gridfs_stream_not_open,\n\n /// A mongocxx::gridfs::uploader object has exceeded the maximum number of allowable GridFS\n /// chunks when attempting to upload the requested file.\n k_gridfs_upload_requires_too_many_chunks,\n\n /// The requested GridFS file was not found.\n k_gridfs_file_not_found,\n\n /// A GridFS file being operated on was discovered to be corrupted.\n k_gridfs_file_corrupted,\n\n /// The mongocxx::instance has been destroyed.\n k_instance_destroyed,\n\n /// mongocxx::client.create_session failed to create a mongocxx::client_session.\n k_cannot_create_session,\n\n /// A failure attempting to pass a mongocxx::client_session to a method.\n k_invalid_session,\n\n /// A moved-from mongocxx::options::transaction object has been used.\n k_invalid_transaction_options_object,\n\n // A resource (server API handle, etc.) 
could not be created:\n k_create_resource_fail,\n\n // Add new constant string message to error_code.cpp as well!\n};\nnamespace bsoncxx {\nBSONCXX_INLINE_NAMESPACE_BEGIN\n\n///\n/// Enum representing the various error types that can occur while operating on BSON values.\n///\nenum class error_code : std::int32_t {\n /// A new key was appended while building a subarray.\n k_cannot_append_key_in_sub_array = 1,\n\n /// A subarray was closed while building a subdocument.\n k_cannot_close_array_in_sub_document,\n\n /// A subdocument was closed while building a subarray.\n k_cannot_close_document_in_sub_array,\n\n /// An array operation was performed while building a document.\n k_cannot_perform_array_operation_on_document,\n\n /// A document operation was performed while building an array.\n k_cannot_perform_document_operation_on_array,\n#define BSONCXX_ENUM(name, value) k_need_element_type_k_##name,\n#include <bsoncxx/enums/type.hpp>\n#undef BSONCXX_ENUM\n /// No key was provided when one was needed.\n k_need_key,\n\n /// An array was closed while no array was open.\n k_no_array_to_close,\n\n /// A document was closed while no document was open.\n k_no_document_to_close,\n\n // Attempted to view or extract a document when a key was still awaiting a matching value.\n k_unmatched_key_in_builder,\n\n /// An empty element was accessed.\n k_unset_element,\n\n /// A JSON document failed to parse.\n k_json_parse_failure,\n\n /// An Object ID string failed to parse.\n k_invalid_oid,\n\n // This type is unused and deprecated.\n k_failed_converting_bson_to_json,\n\n /// A Decimal128 string failed to parse.\n k_invalid_decimal128,\n\n /// BSON data could not be processed, but no specific reason was available.\n k_internal_error,\n\n /// Failed to begin appending an array to a BSON document or array.\n k_cannot_begin_appending_array,\n\n /// Failed to begin appending a BSON document to a BSON document or array.\n k_cannot_begin_appending_document,\n\n /// Failed to complete appending an array to a BSON document or array.\n k_cannot_end_appending_array,\n\n /// Failed to complete appending a BSON document to a BSON document or array.\n k_cannot_end_appending_document,\n\n /// Invalid binary subtype.\n k_invalid_binary_subtype,\n\n /// Invalid type.\n k_invalid_bson_type_id,\n\n/// A value failed to append.\n#define BSONCXX_ENUM(name, value) k_cannot_append_##name,\n#include <bsoncxx/enums/type.hpp>\n#undef BSONCXX_ENUM\n k_cannot_append_utf8 = k_cannot_append_string,\n k_need_element_type_k_utf8 = k_need_element_type_k_string,\n // Add new constant string message to error_code.cpp as well!\n};\n\n///\n/// Get the error_category for exceptions originating from the bsoncxx library.\n///\n/// @return The bsoncxx error_category\n///\nBSONCXX_API const std::error_category& BSONCXX_CALL error_category();\n\n///\n/// Translate a bsoncxx::error_code into a std::error_code.\n///\n/// @param error An error from bsoncxx\n/// @return An error_code\n///\nBSONCXX_INLINE std::error_code make_error_code(error_code error) {\n return {static_cast<int>(error), error_category()};\n}\n\nBSONCXX_INLINE_NAMESPACE_END\n} // namespace bsoncxx\nBSON_BEGIN_DECLS\n\n\ntypedef enum {\n MONGOC_ERROR_CLIENT = 1,\n MONGOC_ERROR_STREAM,\n MONGOC_ERROR_PROTOCOL,\n MONGOC_ERROR_CURSOR,\n MONGOC_ERROR_QUERY,\n MONGOC_ERROR_INSERT,\n MONGOC_ERROR_SASL,\n MONGOC_ERROR_BSON,\n MONGOC_ERROR_MATCHER,\n MONGOC_ERROR_NAMESPACE,\n MONGOC_ERROR_COMMAND,\n MONGOC_ERROR_COLLECTION,\n MONGOC_ERROR_GRIDFS,\n MONGOC_ERROR_SCRAM,\n 
MONGOC_ERROR_SERVER_SELECTION,\n MONGOC_ERROR_WRITE_CONCERN,\n MONGOC_ERROR_SERVER, /* Error API Version 2 only */\n MONGOC_ERROR_TRANSACTION,\n MONGOC_ERROR_CLIENT_SIDE_ENCRYPTION, /* An error coming from libmongocrypt */\n MONGOC_ERROR_POOL\n} mongoc_error_domain_t;\n\n\ntypedef enum {\n MONGOC_ERROR_STREAM_INVALID_TYPE = 1,\n MONGOC_ERROR_STREAM_INVALID_STATE,\n MONGOC_ERROR_STREAM_NAME_RESOLUTION,\n MONGOC_ERROR_STREAM_SOCKET,\n MONGOC_ERROR_STREAM_CONNECT,\n MONGOC_ERROR_STREAM_NOT_ESTABLISHED,\n\n MONGOC_ERROR_CLIENT_NOT_READY,\n MONGOC_ERROR_CLIENT_TOO_BIG,\n MONGOC_ERROR_CLIENT_TOO_SMALL,\n MONGOC_ERROR_CLIENT_GETNONCE,\n MONGOC_ERROR_CLIENT_AUTHENTICATE,\n MONGOC_ERROR_CLIENT_NO_ACCEPTABLE_PEER,\n MONGOC_ERROR_CLIENT_IN_EXHAUST,\n\n MONGOC_ERROR_PROTOCOL_INVALID_REPLY,\n MONGOC_ERROR_PROTOCOL_BAD_WIRE_VERSION,\n\n MONGOC_ERROR_CURSOR_INVALID_CURSOR,\n\n MONGOC_ERROR_QUERY_FAILURE,\n\n MONGOC_ERROR_BSON_INVALID,\n\n MONGOC_ERROR_MATCHER_INVALID,\n\n MONGOC_ERROR_NAMESPACE_INVALID,\n MONGOC_ERROR_NAMESPACE_INVALID_FILTER_TYPE,\n\n MONGOC_ERROR_COMMAND_INVALID_ARG,\n\n MONGOC_ERROR_COLLECTION_INSERT_FAILED,\n MONGOC_ERROR_COLLECTION_UPDATE_FAILED,\n MONGOC_ERROR_COLLECTION_DELETE_FAILED,\n MONGOC_ERROR_COLLECTION_DOES_NOT_EXIST = 26,\n\n MONGOC_ERROR_GRIDFS_INVALID_FILENAME,\n\n MONGOC_ERROR_SCRAM_NOT_DONE,\n MONGOC_ERROR_SCRAM_PROTOCOL_ERROR,\n\n MONGOC_ERROR_QUERY_COMMAND_NOT_FOUND = 59,\n MONGOC_ERROR_QUERY_NOT_TAILABLE = 13051,\n\n MONGOC_ERROR_SERVER_SELECTION_BAD_WIRE_VERSION,\n MONGOC_ERROR_SERVER_SELECTION_FAILURE,\n MONGOC_ERROR_SERVER_SELECTION_INVALID_ID,\n\n MONGOC_ERROR_GRIDFS_CHUNK_MISSING,\n MONGOC_ERROR_GRIDFS_PROTOCOL_ERROR,\n\n /* Dup with query failure. */\n MONGOC_ERROR_PROTOCOL_ERROR = 17,\n\n MONGOC_ERROR_WRITE_CONCERN_ERROR = 64,\n\n MONGOC_ERROR_DUPLICATE_KEY = 11000,\n\n MONGOC_ERROR_MAX_TIME_MS_EXPIRED = 50,\n\n MONGOC_ERROR_CHANGE_STREAM_NO_RESUME_TOKEN,\n MONGOC_ERROR_CLIENT_SESSION_FAILURE,\n MONGOC_ERROR_TRANSACTION_INVALID_STATE,\n MONGOC_ERROR_GRIDFS_CORRUPT,\n MONGOC_ERROR_GRIDFS_BUCKET_FILE_NOT_FOUND,\n MONGOC_ERROR_GRIDFS_BUCKET_STREAM,\n\n /* An error related to initializing client side encryption. */\n MONGOC_ERROR_CLIENT_INVALID_ENCRYPTION_STATE,\n\n MONGOC_ERROR_CLIENT_INVALID_ENCRYPTION_ARG,\n\n\n /* An error related to server version api */\n MONGOC_ERROR_CLIENT_API_ALREADY_SET,\n MONGOC_ERROR_CLIENT_API_FROM_POOL,\n MONGOC_ERROR_POOL_API_ALREADY_SET,\n MONGOC_ERROR_POOL_API_TOO_LATE,\n\n MONGOC_ERROR_CLIENT_INVALID_LOAD_BALANCER,\n} mongoc_error_code_t;\n\nMONGOC_EXPORT (bool)\nmongoc_error_has_label (const bson_t *reply, const char *label);\n\nBSON_END_DECLS\n", "text": "I need to test application responses to possible errors to ensure the error handling and information to user is correct at application level. 
Is there a library to force errors to test how robust an application is to these?Ideally I want c++ source code or a tutorial for doing this.\nI have read these:-\nAtlas: Handle Errors (for java)\nError Handling (for .NET)\ngit example: exception.cpp\nThe core application is most likely to be accessing an Atlas database.At this point I would probably be targeting the most common errors, but it would be much better if there was a comprehensive test suite I could just link in.These files seem to containe the potential error codes (extracts are just for general reference at this point as I don’t anticipate testing them all).mongocxx\\exception\\error_code.hpp\nbsoncxx\\exception\\error_code.hpp\nlibmongoc-1.0\\mongoc\\mongoc-error.h\nlibbson-1.0\\bson\\bson-error.h", "username": "david_d" }, { "code": "mongod--setParameter enableTestCommands=1", "text": "Hi @david_d,\nThe MongoDB server provides failpoints to simulate failures from the server.\nHere is an example in the C++ driver test code that uses a failpoint: mongo-cxx-driver/collection.cpp at c2e6eb42c626620b43ea308db2e21e6888bfa94e · mongodb/mongo-cxx-driver · GitHub\nUsing failpoints requires setting the mongod parameter --setParameter enableTestCommands=1 , and I expect it is not possible to set failpoints on an Atlas cluster. However, you can test with a local mongod for this scenario.", "username": "Rishabh_Bisht" } ]
Is there a source code example or library for testing basic Application-level Error Handling?
2023-03-11T10:29:35.647Z
Is there a source code example or library for testing basic Application-level Error Handling?
1,232
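To make the failpoint suggestion concrete, here is a mongosh sketch to run against a local test mongod started with --setParameter enableTestCommands=1 (never production). The failCommand failpoint below forces the next insert to return error code 91 so the C++ application's error-handling path can be exercised; the command list and error code are just examples.

// Arm the failpoint: the next "insert" command returns error code 91 (ShutdownInProgress).
db.adminCommand({
  configureFailPoint: "failCommand",
  mode: { times: 1 },
  data: { failCommands: ["insert"], errorCode: 91 }
});
// Using data: { failCommands: ["insert"], closeConnection: true } instead simulates a network error.

// ...run the application code under test and observe how it surfaces the failure...

// Disarm the failpoint afterwards.
db.adminCommand({ configureFailPoint: "failCommand", mode: "off" });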
null
[ "dot-net", "field-encryption", "schema-validation" ]
[ { "code": "", "text": "Hey MongoDB C# Driver Team,\nI tried out the field encryption feature with the MongoDB C# driver. I would like to be able to easily declare via my C# data classes which fields should be encrypted (Attribute based). Without first having to create a JSON schema that would bloat the code considerably.I would also like to be able to work without a BsonDocument object. Unfortunately only the examples can be found in your documentation. Also the work with the integration of KMS with own complex dictionaries and magic strings.Using mongocrypt.dll is also unreasonable. For this, an additional library simply has to be provided via NuGet, which is automatically available to me for the cross-platform client, without extensive setup. That is what I want.Please please change this complexity. Currently, an own small solution without field encryption is the simplest variant. But then you lose the automatic comfort at all.I haven’t ventured into Queryable Encryption yet, but I think it was also made so complex.If you want to pick up and inspire someone with this feature set, please pay attention to a significantly better developer experience.@Community what is your opinion on this?", "username": "Gregor_Biswanger" }, { "code": "mongocrypt.dllmongocryptd", "text": "Hey @Gregor_Biswanger ,which fields should be encrypted (Attribute based)I would also like to be able to work without a BsonDocument object. Unfortunately only the examples can be found in your documentation.At this point it’s the only way, but we have an improvement ticket to allow better user’s experience in this case. Please follow to this ticket for updates.Using mongocrypt.dll is also unreasonable. For this, an additional library simply has to be provided via NuGet,mongocrypt.dll (which is c++ library) is already part of additional nuget called MongoDB.Libmongocrypt. You don’t need any additional steps to work with it, it’s already part of the driver. Do you mean configuring mongocryptd daemon (which is a different binary)? If so, then if you can’t use a default configuration (mainly I mean default binary path and mongocryptd port which is 27020), you should configure it explicitly, however Queryable Encryption provides a way called Shared library that fully supersedes it and allows much easier configuring it, see here for details.Also the work with the integration of KMS with own complex dictionaries and magic strings.can you elaborate? It doesn’t require any dictionaries. It only requires specifying data (explicitly or via env variables) required by KMS itself.Best regards, Dima", "username": "Dmitry_Lukyanov" } ]
Bad developer experience for field encryption with .NET in C#
2023-03-11T12:42:13.674Z
Bad developer experience for field encryption with .NET in C#
1,017
null
[ "server", "installation" ]
[ { "code": "", "text": "Hello everyone,I am looking to start a discussion about the possibility of supporting ARM packages for Debian 11? I see from the release page that you support the major linux distros.The reason behind my request is that the Bitnami charts currently do not support MongoDB on ARM due to the Debian dependency. Here is the GitHub thread for more info.Thank you!", "username": "Zaid_Albirawi" }, { "code": "", "text": "Hi @Zaid_AlbirawiTry creating a new idea at https://feedback.mongodb.com/ and link it in this topic so others can vote for it.", "username": "chris" }, { "code": "", "text": "Well do! Thank you @chris!", "username": "Zaid_Albirawi" }, { "code": "", "text": "Please vote here if you need this", "username": "Zaid_Albirawi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can we please have Debian 11 ARM packages for the MongoDB Community Server?
2023-02-24T15:35:47.869Z
Can we please have Debian 11 ARM packages for the MongoDB Community Server?
1,371
null
[ "queries" ]
[ { "code": "", "text": "I would like to monitor performance of queries.But,cache will not give true performance.\ntherefore, I would like to clear cache before executing similar query.I have clear PlanCache usingdb.collection.getPlanCache.clear()\nand verified null cache using\ndb.collection.getPlanCache().listQueryShapes()Even after, why similar query consumes very less execution time?", "username": "Monika_Shah" }, { "code": "", "text": "Hi @Monika_Shah,Welcome to the MongoDB Community I would like to clear the cache before executing a similar query.Please note that clearing the plan cache does not remove documents and data from other caches such as the filesystem and WT. Instead, it simply removes existing entries from the cache. Upon running new queries, new entries may be added. This is because the PlanCache serves as a cache for the query planner which picks all candidate indexes and then runs queries using them to score which is most efficient and then adds its choice of the best index to the PlanCache again.Even after, why similar query consumes very less execution time?Could you please provide more details on what you mean by a “similar query” in this context? Are you asking about why a query is running faster instead of slow, also what metric are you trying to obtain?Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "I’m guessing that the plan caching is just a small contributor of the overall execution time. the major deciding factor might be if the working set is entirely in RAM and how many disk blocks are to be read and seek and blabla…So deleting the plan cache doesn’t mean a lot.", "username": "Kobe_W" }, { "code": "", "text": "ing is just a small contributor of the overall execution time. the major deciding factor might be if the working set is entirely in RAM andCache memory is double the Database size.", "username": "Monika_Shah" } ]
Why does a similar query take less execution time even after PlanCache().clear
2023-02-07T08:04:47.456Z
Why does a similar query take less execution time even after PlanCache().clear
591
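Since clearing the plan cache does not evict documents from the WiredTiger or filesystem caches, wall-clock time alone is a poor "cold" measurement. A sketch of a more stable comparison using explain — the filter is a placeholder:

db.coll.getPlanCache().clear(); // only removes cached query plans, not cached data

const stats = db.coll
  .find({ someField: "someValue" }) // placeholder query
  .explain("executionStats")
  .executionStats;

printjson({
  executionTimeMillis: stats.executionTimeMillis,
  totalKeysExamined: stats.totalKeysExamined,
  totalDocsExamined: stats.totalDocsExamined
});

The keysExamined/docsExamined counters stay comparable across runs even when a repeat run is served from RAM; a truly cold-cache timing requires restarting mongod (and dropping the OS page cache).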
null
[ "aggregation" ]
[ { "code": "", "text": "Hello,\nSensor sends packets in a 15-30 second interval which is stored in the mongodb collection.\nFor example\n1-{18:56:23}\n2-{18:56:56}\n3-{18:57:23}\n4-{18:57:45}\n.\n.\n.Now I have to do calculation for which I need to take out the difference between each such consecutive data packets time difference,\nHow can I find the difference between each consecutive packet time difference", "username": "Harsh_Bhudolia" }, { "code": "", "text": "Please provide sample documents in JSON format so that we can cut-n-paste directly into our system.Please define how a packet is consecutive one or it is related to something else. For example, do you have packets from multiple devices in the same collection.If this is a frequent use-case you might want to consider storing the time of the previous or next packet or the time interval permanently when you insert a new packet. Finding the time interval or the time of the previous / next packet will probably involve $lookup, so you either do it once at insertion time or every time you run the query.Please close your redundant post How to calculate time difference between two consecutive events.", "username": "steevej" }, { "code": "{\nmetadata:{\neventcode:100\n}\npower:on // this can be either on or off\ntime:1667984669//unix timestamp\ntimestamp: ISODate(\"2020-03-02T01:11:18.965Z\")\n}\n", "text": "Sorry for not making it more clear\nthe doc will look likeNow the packets come every 15-30 sec, and I need the difference of every consecutive document timestamp for lets say 1 day.\nFor a day there will be 2000-3000 documents and I need time difference for every consecutive document, time difference calculated using timestamp", "username": "Harsh_Bhudolia" }, { "code": "", "text": "The plant has sensor, which sends data packets every 15-30 seconds, those packets are stored in a timeseries collection", "username": "Harsh_Bhudolia" }, { "code": "SyntaxError: Unexpected token, expected \",\" (5:0)\n\n 3 | eventcode:100\n 4 | }\n> 5 | power:on // this can be either on or off\n | ^\n 6 | time:1667984669//unix timestamp\n 7 | timestamp: ISODate(\"2020-03-02T01:11:18.965Z\")\n 8 | }\n", "text": "A single document is not enough to let us experiment easily on your use-case. Especially when the use-case is about calculating value between 2 documents.And we want realdocuments in JSON format so that we can cut-n-paste directly into our systemIf I cut-n-paste in mongosh the only document that you shared, I getAnd pleasePlease close your redundant post How to calculate time difference between two consecutive events.this way we do not have to look at 2 places to see if someone else provided a solution or not.", "username": "steevej" } ]
How to calculate time difference between two consecutive events in a collection
2023-03-10T10:29:24.871Z
How to calculate time difference between two consecutive events in a collection
1,064
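On MongoDB 5.0+ the gap between consecutive packets can be computed in one aggregation with $setWindowFields, without storing the previous timestamp at insert time. Field names follow the sample document above; the collection name and the one-day bounds are placeholders.

db.packets.aggregate([
  // Restrict to one day's worth of packets (placeholder bounds).
  { $match: { timestamp: { $gte: ISODate("2023-03-10"), $lt: ISODate("2023-03-11") } } },
  { $setWindowFields: {
      // add partitionBy: "$metadata.eventcode" if several sensors share the collection
      sortBy: { timestamp: 1 },
      output: {
        prevTimestamp: { $shift: { output: "$timestamp", by: -1, default: null } }
      }
  } },
  // Difference to the previous packet, in seconds (null for the first packet of the window).
  { $addFields: {
      gapSeconds: {
        $cond: [
          { $eq: ["$prevTimestamp", null] },
          null,
          { $dateDiff: { startDate: "$prevTimestamp", endDate: "$timestamp", unit: "second" } }
        ]
      }
  } }
]);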
null
[ "c-driver" ]
[ { "code": "serverSelectionTryOnce", "text": "Hello,I am working on integrating my app with mongodb. Coming straight from the tutorial at mongoc.org and getting this error:No suitable servers found (serverSelectionTryOnce set): [connection refused calling hello on ‘localhost:27017’]Using WSL2 Ubuntu on windows.\nNot a newbie to mongo though. I have been using it extensively for webapps. I have checked if mongo is running usingC:\\WINDOWS\\system32> mongodand it is running just fine. So I think the issue is with mongoc driver. The installation is also correct. No other errors.Kindly help me out. I want to do just simple CRUD operations.", "username": "Z.O.E_N_A" }, { "code": "", "text": "Hello @Z.O.E_N_A ,Welcome to The MongoDB Community Forums!I notice you haven’t had a response to this topic yet - were you able to find a solution?\nIf not, could you please try below?Also please provide version of MongoDB and driver version used.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Yes, I changed from localhost to 127.0.0.1", "username": "Z.O.E_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB C Driver refusing connection
2022-10-28T13:46:54.728Z
MongoDB C Driver refusing connection
2,045
null
[ "node-js" ]
[ { "code": "MongoDB Atlastech stackMongodb Atlas", "text": "I have roughly studied the MongoDB Atlas, and I feel it’s very handy for starting a new project. as a front-end developer, I am not experienced in back-end tech stack I have to use Node.js Express or Koa as a middle layer to process and fetch data. is that a common way to interact with Mongodb Atlas in the browser end?", "username": "Yoha_Li" }, { "code": "Mongodb Atlas", "text": "Hello @Yoha_Li, Welcome to the MongoDB community forum,is that a common way to interact with Mongodb Atlas in the browser end?It is up to your requirement for a particular project, There are so many languages has official driver libraries for MongoDB, you can follow the link,Recent and trading feature of the Atlas is Data API and Custom Data API, they don’t require any server to deploy your code, you can just integrate your JS code with functions, triggers and do operations in your DB, and connect with your custom APIs, so you can call those APIs in your frontend, refer the link for more details:A serverless, secure API for accessing your Atlas data. The Data API makes it easy to query Atlas over HTTPS without the operational overhead.You can follow the tutorials and demos in the developer center,Code, content, tutorials, programs and community to enable developers of all skill levels on the MongoDB Data Platform. Join or follow us here to learn more!", "username": "turivishal" }, { "code": "", "text": "That means Mongodb Altas has provided a set of out-of-box features as a back-end infrastructure, so we do not only need Express.js(self-deployment), Altas has covered this server layer for us?", "username": "Yoha_Li" }, { "code": "", "text": "Altas has covered this server layer for us?That is correct, and it’s a serverless API ", "username": "turivishal" } ]
What is best practice to fetch and process data with AltasCloudDB in Browser?
2023-03-12T10:07:48.338Z
What is best practice to fetch and process data with AltasCloudDB in Browser?
802
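
To make the Data API suggestion in the reply above concrete, here is a minimal sketch of calling a Data API find action with fetch. The app ID, API key, data source, database and collection names are all placeholders, and the URL shape follows the v1 Data API endpoints:

```javascript
// Sketch: querying Atlas through the Data API instead of running an Express/Koa layer.
// APP_ID, API_KEY, dataSource, database and collection are placeholders.
const APP_ID = "<data-api-app-id>";
const API_KEY = "<api-key>";

async function findRecentMovies() {
  const res = await fetch(
    `https://data.mongodb-api.com/app/${APP_ID}/endpoint/data/v1/action/find`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "api-key": API_KEY
      },
      body: JSON.stringify({
        dataSource: "Cluster0",
        database: "sample_mflix",
        collection: "movies",
        filter: { year: { $gte: 2000 } },
        limit: 5
      })
    }
  );
  const { documents } = await res.json();
  return documents;
}
```

An API key should not be shipped to the browser as-is; in practice the call is made from a custom HTTPS endpoint or behind user authentication, which is what the App Services functions mentioned in the reply provide.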
null
[ "aggregation", "queries", "java", "mongodb-shell" ]
[ { "code": "", "text": "I am facing timeout error when i am fetching same collection simultaneously. Could you help me fix this issue ? i have mentioned the error belowTimed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@b586e86. Client view of cluster state is {type=REPLICA_SET, servers={, type=UNKNOWN, state=CONNECTING}, {address=, type=UNKNOWN, state=CONNECTING}, {address=](), type=UNKNOWN, state=CONNECTING}]", "username": "Madhan_Kumar" }, { "code": "fetching the same collection simultaneouslysimultaneously", "text": "Hello @Madhan_Kumar ,To understand your use case better, could you please share more details such as:I am facing timeout error when i am fetching same collection simultaneously.Timed out after 30000 ms while waiting for a server that matches com.mongodb.client.internal.MongoClientDelegate$1@b586e86. Client view of cluster state is {type=REPLICA_SET, servers={, type=UNKNOWN, state=CONNECTING}, {address=, type=UNKNOWN, state=CONNECTING}, {address=](), type=UNKNOWN, state=CONNECTING}]This error message indicates that your MongoDB driver was not able to find a server that can handle your request within the timeout period. This can happen for a number of reasons, including network issues, or improperly configured client or server settings.Here are a few suggestions for troubleshooting which might help you in resolving this issue:Ensure that the servers in your replica set are reachable from the client and that there are no firewalls or other network restrictions preventing the client from connecting.Check the server logs to see if there are any errors or warnings that could be related to this issue. You may need to adjust or upgrade your server to handle the load, or to optimize performance. Make sure that your MongoDB servers have enough resources (CPU, memory, disk space) to handle the load. If the servers are overloaded, they might not be able to respond to incoming requests, resulting in timeouts.If you are using an older version of the MongoDB driver, it may be worth upgrading to a newer version to see if that resolves the issue. Newer versions may have better error handling and recovery mechanisms, or may be more optimized for your use case.From a developer’s perspective, if your application is making simultaneous requests to the same collection, it’s possible that the servers are becoming overloaded and unable to handle the load. Implementing retry logic can help alleviate this issue by automatically retrying failed requests after a short delay.Please refer below documentation to learn further about troubleshooting replica setRegards,\nTarun", "username": "Tarun_Gaur" } ]
MongoDB timeout error
2023-03-06T11:56:46.861Z
MongoDB timeout error
10,166
null
[ "swift", "atlas-device-sync" ]
[ { "code": "", "text": "When I tried to deploy a new version of my production app, I got the following error:Increasing Minimum Required Protocol Version\nThe pending changes use newer data types which are not be supported on client apps built with older SDK versions. Client connections from SDKs that do not support these data types will be rejected upon connection. Are you sure you want to save changes?minimum required protocol version increase change not allowed: will add new schema “…”, updating the sync schema compatibility version from 1 to 3, please ensure that you have updated to the most recent version of the client SDKI understand, so to evaluate the impact of pushing this, I’d like to know which versions of my app will be broken, and so which SDK versions.How to know from which SDK version the sync schema compatibility version is at least 3? I’m interested in Swift SDK specifically.", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Hi, apologies, it definitely can be a bit hard to find. They were released so long ago that it might be worth considering removing the restriction. You can see here for the answer (Swift version 10.8.0) : https://www.mongodb.com/docs/realm/sdk/swift/model-data/define-model/supported-types/#mutable-setThanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Perfect thanks a lot!", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to know which "sync schema compatibility version" is linked to which SDK version?
2023-03-10T07:53:16.322Z
How to know which “sync schema compatibility version” is linked to which SDK version?
1,090
null
[ "node-js" ]
[ { "code": "getServerSidePropsreturn", "text": "How do you connect next js application to mongodb? I have followed the docs in setting up, but when i try to getseversideprops it says “Your getServerSideProps function did not return an object. Did you forget to add a return?” would really appreciate if someone could go over the steps with me. Thank you!", "username": "Matthew_Sheppard" }, { "code": "npm install mongodb\ndb.jsMongoClientimport { MongoClient } from 'mongodb'\nconst uri = 'mongodb+srv://<username>:<password>@<cluster>.mongodb.net/<database>?retryWrites=true&w=majority'\nconst client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true })\nexport default async function connectToDatabase() {\n if (!client.isConnected()) await client.connect()\n return client.db('mydatabase')\n}\n<username><password><cluster><database>connectToDatabasedb.jsimport connectToDatabase from '../db'\nexport async function getServerSideProps(context) {\n const db = await connectToDatabase()\n const data = await db.collection('mycollection').find({}).toArray()\n return {\n props: {\n data: JSON.parse(JSON.stringify(data))\n }\n }\n}\nmycollectionpropsJSON.stringifyJSON.parsegetServerSidePropsreturngetServerSidePropsreturnprops", "text": "@Matthew_Sheppard To connect a Next.js application to MongoDB, you can follow these steps:Replace <username>, <password>, <cluster>, and <database> with your own values. You can find these values in the MongoDB Atlas dashboard.\n3. In your Next.js pages, you can import the connectToDatabase function from db.js to connect to the database and retrieve data from it:This example retrieves all documents from the mycollection collection and returns them as a JSON object in the props of the page. Note that you need to use the JSON.stringify and JSON.parse functions to convert the MongoDB objects to plain JavaScript objects.\n4. If you encounter the error message “Your getServerSideProps function did not return an object. Did you forget to add a return?” it means that your getServerSideProps function is not returning anything. Make sure that you include a return statement with an object that contains the props you want to pass to the page.\nI hope this helps!", "username": "Deepak_Kumar16" }, { "code": "", "text": "Thank you for the response, i have implemented the steps and have this error\n\nerror1319×694 26.7 KB\n", "username": "Matthew_Sheppard" }, { "code": "", "text": "This is the code\n\nerror21825×991 69.3 KB\n", "username": "Matthew_Sheppard" }, { "code": "", "text": "I got rid of the if statement to check for other problems and i now have this\n\nmongoserver1012×806 33.9 KB\n", "username": "Matthew_Sheppard" } ]
Connecting next js to mongodb using node driver
2023-03-10T15:58:13.350Z
Connecting next js to mongodb using node driver
972
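
A likely cause of the follow-up errors above is that the posted helper targets the 3.x driver: client.isConnected() was removed in the 4.x Node.js driver, and the useNewUrlParser/useUnifiedTopology options are no longer needed. A minimal db.js sketch for driver 4.x, assuming the URI comes from an environment variable and the database name is a placeholder:

```javascript
// db.js - sketch for the MongoDB Node.js driver 4.x (no isConnected()).
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI);

// Cache the connection promise so hot reloads and repeated
// getServerSideProps calls reuse one connection.
let clientPromise;

export default async function connectToDatabase() {
  if (!clientPromise) {
    clientPromise = client.connect();
  }
  await clientPromise;
  return client.db("mydatabase");
}
```

The getServerSideProps shown earlier then works unchanged, since it only awaits connectToDatabase() and runs a find.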
null
[ "aggregation", "views" ]
[ { "code": "db.collection1.aggregate([\n {\n $project:\n {\n _id:0, \n <fieldsList>:1\n }\n },\n {\n $merge : { into: \"collection2\", on: \"<unique_field>\", whenMatched: \"keepExisting\", whenNotMatched: \"insert\" \n }\n }\n])\n{ \n \"operationTime\" : Timestamp(1677514685, 51), \n \"ok\" : 0.0, \n \"errmsg\" : \"BSONObj size: 20141031 (0x13353E7) is invalid. Size must be between 0 and 16793600(16MB) First element: update: \\\"collection2\\\"\", \n \"code\" : NumberInt(10334), \n \"codeName\" : \"BSONObjectTooLarge\", \n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1677514685, 51), \n \"signature\" : {\n \"hash\" : BinData(0, \"7xcBrDcNZ+/bCEJ2gSFuIojEkjI=\"), \n \"keyId\" : NumberLong(7174701554677579780)\n }\n }\n}\n", "text": "I am merging one collection documents into another collection using mongodb merge aggregation. I am trying to merge 10 fields of basic details i.e. name, age, dob etc. into another collection using a unique key. My mongodb query is given below :The query gives me an error given below :The full response is:Total documents to be merged around 6 million.This makes me wonder that why am i getting document size error when no single document that will be merged is more than even 512 kb. Let me know if i am understanding the merge query wrong.I expect that the query should work fine because no single document is more than 16 MB.", "username": "jony_chawla" }, { "code": "", "text": "Hi @jony_chawla and welcome to the MongoDB community forum!!Does this happen using readPreference secondary (and not reproducible with readPreference primary)? If yes, I believe you may have experienced SERVER-66289 which was resolved in MongoDB 4.4.18, 5.0.14, and 6.0.3.\nIf you’re not using the latest version 4.4.19, 5.0.15 and 6.0.4, could you please upgrade to the latest version and let us know if the issue still persists?Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi @Aasawari,\nThe issue was already resolved on the same day, following the upgrade process from 4.4.17 to 4.4.19.Thanks and Regards,\nJony", "username": "jony_chawla" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb $merge not working as it should
2023-03-11T08:52:02.846Z
Mongodb $merge not working as it should
1,358
null
[]
[ { "code": "Starting in 3.6, MongoDB drivers associate all operations with a server session, with the exception of unacknowledged writes", "text": "here it says:Starting in 3.6, MongoDB drivers associate all operations with a server session, with the exception of unacknowledged writesAs i understand, it means all operations initiated from a client are associated with a session.then here it says cursors may or may not be opened within a session.Doesn’t these two statements conflict with each other? when are server sessions created? when are client sessions created?Sometimes the doc just says a session, not mentioning explicitly it’s a server session or client session which is very confusing.", "username": "Kobe_W" }, { "code": "client.start_session()bulk_writefindinsert_onesessionsession", "text": "Hi @Kobe_WServer session represents a group of sequential operations performed by a single client against a standalone mongod, replica set, or sharded cluster. When a client connects to the cluster, it will request a new globally unique session id from mongos or mongod. The client may continue to use this session id across different connections to different mongos (in a sharded cluster scenario).Client session is described more thoroughly in the driver session spec, implemented by drivers in the client side to allow for causal consistency, transactions, and snapshot reads. Perhaps Pymongo’s client_session documentation is a good read in terms of explaining client sessions.The sentence “cursors may or may not be opened within a session” refers to these client (driver) sessions. For example in Pymongo, you can execute client.start_session() to open a client session and execute collection-level operations such as bulk_write, find, insert_one, etc. all with optional session parameter to tie the operation to a specific client session (e.g. in a transaction). You can also not use the session parameter, and the command will not be tied to a specific session (i.e. executed outside of a currently running transaction).In short, server sessions and client sessions are tools to allow for advanced capabilities such as transactions or causal consistency. In practice you’ll only need to use client sessions for this.Hope this helps.Best regards\nKevin", "username": "kevinadi" }, { "code": "session", "text": "You can also not use the session parameter, and the command will not be tied to a specific session (i.e. executed outside of a currently running transaction).Thx for the answer. But regarding this, i read the manual somewhere and this doc both say if clients are not explicitly starting a client session, a client session will be implicitly created for it. I also took a look at java driver source code, and this statement seems to be true (though the creation is in a if-else branch).So does that mean all operations are actually always associated with a client session, and only difference is explicit or implicit?", "username": "Kobe_W" }, { "code": "", "text": "So does that mean all operations are actually always associated with a client session, and only difference is explicit or implicit?Yes I think the ultimate goal is to have everything in a session. In the Explicit vs implicit sessions heading in the above driver spec page:The motivation for starting an implicit session for all methods that don’t take an explicit session parameter is to make sure that all commands that are sent to the server are tagged with a session ID. 
This improves the ability of an operations team to monitor (and kill if necessary) long running operations. Tagging an operation with a session ID is specially useful if a deployment wide operation needs to be killed.Basically always using a session in the client side and associating it with a session on the server side allows better manageability as well. Hope this clears it up.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "thx, this is clear now. The specification repo is really a valuable source.", "username": "Kobe_W" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help me understand how sessions are used on client side and/or server side
2023-03-12T04:58:57.878Z
Help me understand how sessions are used on client side and/or server side
1,046
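
For readers who want to see what an explicit client session looks like in practice, a minimal Node.js sketch is below; it assumes a replica set (required for transactions), and the connection string, database and collection names are placeholders:

```javascript
// Sketch: explicit client session plus a transaction in the Node.js driver.
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017/?replicaSet=rs0");
await client.connect();

const orders = client.db("shop").collection("orders");
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    // Passing { session } ties each operation to this client/server session.
    await orders.insertOne({ item: "abc", qty: 1 }, { session });
    await orders.updateOne({ item: "abc" }, { $inc: { qty: 1 } }, { session });
  });
} finally {
  await session.endSession();
  await client.close();
}
```

Operations issued without { session } still run in an implicit session created by the driver, as discussed in the replies above.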
https://www.mongodb.com/…2_2_1024x359.png
[ "queries", "dot-net" ]
[ { "code": "", "text": "I was using collection.find() method in my c# solution and it was working fine.\nSyntax:\nvar doc = collection.find(filter).first()\nvar doc = collection.find(filter).ToList()\ncurrently we are moving to azure cloud and I am using csx Azure function app for development. None of the extension methods for find is working while trying it in function app. Attaching the error below. I tried adding class template based on the mongodb documentation too. But no use. Please help in finding the issue in code.\nimage1481×520 67.8 KB\n", "username": "Krishnadas_Narayanapillai" }, { "code": "FirstFindFindSyncFindAsyncFirst<T>IAsyncCursor<T>IFindFluent<T,T>MongoDB.Driver.Core", "text": "Hi, @Krishnadas_Narayanapillai,Welcome to the MongoDB Community Forums. I understand that you’re receiving an error when attempting to use First with Find, FindSync, or FindAsync in a csx Azure Function.First<T> is defined as an extension method on both IAsyncCursor<T> and IFindFluent<T,T> in MongoDB.Driver.Core. It is unexpected that the compiler would be unable to find this extension method. To troubleshoot further, try including the code in a C# console project which includes the MongoDB .NET/C# Driver NuGet package and examine the resulting compilation errors.If you are unsuccessful in locating the source of the problem, you can file a CSHARP bug along with a self-contained repro of the problem so that we can investigate further.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "FirstFindFindSyncFindAsyncFirst<T>IAsyncCursor<T>IFindFluent<T,T>MongoDB.Driver.Core", "text": "Welcome to the MongoDB Community Forums. I understand that you’re receiving an error when attempting to use First with Find, FindSync, or FindAsync in a csx Azure Function.First<T> is defined as an extension method on both IAsyncCursor<T> and IFindFluent<T,T> in MongoDB.Driver.Core. It is unexpected that the compiler would be unable to find this extension method. To troubleshoot further, try including the code in a C# console project which includes the MongoDB .NET/C# Driver NuGet package and examine the resulting compilation errors.If you are unsuccessful in locating the source of the problem, you can file a CSHARP bug along with a self-contained repro of the problem so that we can investigate further.Sincerely,\nJamesThanks @James_Kovacs my problem was solved by using the method you provided in this reply. Thanks.", "username": "Brain_Jhon" }, { "code": "", "text": "Thank you James. Issue resolved for me in a trial and error.\nSolution:\nError message displayed in the Azure function app is not accurate. In this case, First was returning exception when its value was null and Function app returns a non related error. It was resolved by changing the function from First() to FirstorDefault() which returns null in case of null value.", "username": "Krishnadas_Narayanapillai" } ]
Unable to use the methods of Find\FindAsync\FindSync in Azure Function App
2023-03-02T11:27:11.099Z
Unable to use the methods of Find\FindAsync\FindSync in Azure Function App
1,181
null
[ "aggregation", "queries" ]
[ { "code": "test:SECONDARY> db.dataflowdatahistories.find({\"_id\":ObjectId(\"63bcc384f70dce0013cfcf0c\")})\n{ \"_id\" : ObjectId(\"63bcc384f70dce0013cfcf0c\"), \"status\" : \"PENDING\", \"analysisResult\" : \"PENDING\", \"e2eTimeCost\" : 0, \"failedMilestoneKeys\" : [ ], \"failedMilestoneData\" : [ ], \"parties\" : [ ], \"dataFlowId\" : ObjectId(\"61b806a83c4b660016af6767\"), \"traceId\" : \"test-123\", \"projectId\" : ObjectId(\"61b8057a3c4b660016af6651\"), \"aggregateTime\" : { \"year\" : 2023, \"month\" : 1, \"week\" : 2, \"day\" : 10, \"hour\" : 1, \"minute\" : 46 }, \"createdAt\" : ISODate(\"2023-01-10T01:46:44.399Z\"), \"updatedAt\" : ISODate(\"2023-02-08T07:08:05.273Z\"), \"__v\" : 0, \"messageType\" : \"tt\", \"succeedAt\" : \"tt\", \"businessKey\" : null, \"failedCause\" : null, \"failedCode\" : null, \"tt\" : null, \"senderCode\" : null }\n\ntest:SECONDARY> db.dataflowdatahistories.find({\"_id\":ObjectId(\"639978c679921f6d022e814e\")})\n[Object]\n\nopc:SECONDARY> var test = db.dataflowdatahistories.find({\"_id\":ObjectId(\"639978c679921f6d022e814e\")});\nopc:SECONDARY> print(test[0])\n\n2023-03-10T11:23:55.791+0800 E QUERY [js] Error: TypeError: can't convert test[0] to string :\n@(shell):1:1\nopc:SECONDARY> printjson(test[0])\n[Object]\n", "text": "MongoDB server version: 4.2.15seems other data is nomally show the json format detail .but only this data , got the [Object]I want to know this ‘[Object]’ data how to insert into the collection? and how can I got the [Object] detail ?Thank you for your browsing and help , Have a nice day.", "username": "harz_wang" }, { "code": "mongomongoshmongoexport", "text": "Are you using mongo or mongosh and what version?Can you give an example of a document that creates this output? Obviously another tool like Compass or mongoexport might be needed.", "username": "chris" }, { "code": " ~]$ mongo --version\nMongoDB shell version v4.2.15\ngit version: d7fd78dead621a539c20791a93abec34bb1be385\nOpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013\nallocator: tcmalloc\nmodules: none\nbuild environment:\n distmod: rhel70\n distarch: x86_64\n target_arch: x86_64\n$ mongoexport --version\nmongoexport version: r4.2.15\ngit version: d7fd78dead621a539c20791a93abec34bb1be385\nGo version: go1.12.17\n os: linux\n arch: amd64\n compiler: gc\n$ mongoexport --authenticationDatabase admin --port 27017 -u xxx -p xxx -d dbname -c dataflowdatahistories --type=json -o dataflowdatahistories.json --query='{\"_id\":{\"$oid\":\"639978c679921f6d022e814e\"}}'\n2023-03-13T10:17:19.887+0800 connected to: mongodb://localhost:27017/\n2023-03-13T10:17:19.905+0800 exported 1 record\n$ cat dataflowdatahistories.json\n", "text": "Hi @chris\nThank you for your reply and help.\nimage1089×653 181 KB\ntry to export data.cat the export data.export.json (115.9 KB)Thanks again for your help. 
Have a nice day.", "username": "harz_wang" }, { "code": "test:PRIMARY> db.test_h.insert(db)\ntest:PRIMARY> db.test_h.find()\n{ \"_id\" : ObjectId(\"640e944beca0dd84c5bc153a\"), \"_mongo\" : { \"slaveOk\" : false, \"host\" : \"127.0.0.1:27017\", \"defaultDB\" : \"admin\", \"_defaultSession\" : [Object], \"_causalConsistency\" : false, \"_clusterTime\" : { \"clusterTime\" : Timestamp(1678677059, 57), \"signature\" : { \"hash\" : BinData(0,\"5kBzUeXFPank5uJ2eTrtcu+kmJE=\"), \"keyId\" : NumberLong(\"7171362007151542467\") } }, \"_readMode\" : \"commands\", \"promptPrefix\" : \"\", \"authStatus\" : { \"replSetGetStatus\" : true, \"isMaster\" : true }, \"_writeMode\" : \"commands\" }, \"_name\" : \"opc_dft\", \"_session\" : [Object], \"_attachReadPreferenceToCommand\" : { \"code\" : \"function(cmdObj, readPref) {\\n \\\"use strict\\\";\\n // if the user has not set a readpref, return the original cmdObj\\n if ((readPref === null) || typeof (readPref) !== \\\"object\\\") {\\n return cmdObj;\\n }\\n\\n // if user specifies $readPreference manually, then don't change it\\n if (cmdObj.hasOwnProperty(\\\"$readPreference\\\")) {\\n return cmdObj;\\n }\\n\\n // copy object so we don't mutate the original\\n var clonedCmdObj = Object.extend({}, cmdObj);\\n // The server selection spec mandates that the key is '$query', but\\n // the shell has historically used 'query'. The server accepts both,\\n // so we maintain the existing behavior\\n var cmdObjWithReadPref = {query: clonedCmdObj, $readPreference: readPref};\\n return cmdObjWithReadPref;\\n}\" }, \"_mergeCommandOptions\" : { \"code\" : \"function(obj, extraKeys) {\\n \\\"use strict\\\";\\n\\n if (typeof (obj) === \\\"object\\\") {\\n if (Object.keys(extraKeys || {}).length > 0) {\\n throw Error(\\\"Unexpected second argument to DB.runCommand(): (type: \\\" +\\n typeof (extraKeys) + \\\"): \\\" + tojson(extraKeys));\\n }\\n return obj;\\n } else if (typeof (obj) !== \\\"string\\\") {\\n throw Error(\\\"First argument to DB.runCommand() must be either an object or a string: \\\" +\\n \\\"(type: \\\" + typeof (obj) + \\\"): \\\" + tojson(obj));\\n }\\n\\n var commandName = obj;\\n var mergedCmdObj = {};\\n mergedCmdObj[commandName] = 1;\\n\\n if (!extraKeys) {\\n return mergedCmdObj;\\n } else if (typeof (extraKeys) === \\\"object\\\") {\\n // this will traverse the prototype chain of extra, but keeping\\n // to maintain legacy behavior\\n for (var key in extraKeys) {\\n mergedCmdObj[key] = extraKeys[key];\\n }\\n } else {\\n throw Error(\\\"Second argument to DB.runCommand(\\\" + commandName +\\n \\\") must be an object: (type: \\\" + typeof (extraKeys) +\\n \\\"): \\\" + tojson(extraKeys));\\n }\\n\\n return mergedCmdObj;\\n}\" }, \"_runCommandImpl\" : { \"code\" : \"function(name, obj, options) {\\n const session = this.getSession();\\n return session._getSessionAwareClient().runCommand(session, name, obj, options);\\n}\" }, \"_dbCommand\" : { \"code\" : \"function(obj, extra, queryOptions) {\\n \\\"use strict\\\";\\n\\n // Support users who call this function with a string commandName, e.g.\\n // db.runCommand(\\\"commandName\\\", {arg1: \\\"value\\\", arg2: \\\"value\\\"}).\\n var mergedObj = this._mergeCommandOptions(obj, extra);\\n\\n // if options were passed (i.e. because they were overridden on a collection), use them.\\n // Otherwise use getQueryOptions.\\n var options = (typeof (queryOptions) !== \\\"undefined\\\") ? 
queryOptions : this.getQueryOptions();\\n\\n try {\\n return this._runCommandImpl(this._name, mergedObj, options);\\n } catch (ex) {\\n // When runCommand flowed through query, a connection error resulted in the message\\n // \\\"error doing query: failed\\\". Even though this message is arguably incorrect\\n // for a command failing due to a connection failure, we preserve it for backwards\\n // compatibility. See SERVER-18334 for details.\\n if (ex.message.indexOf(\\\"network error\\\") >= 0) {\\n throw new Error(\\\"error doing query: failed: \\\" + ex.message);\\n }\\n throw ex;\\n }\\n}\" }, \"_dbReadCommand\" : { \"code\" : \"function(obj, extra, queryOptions) {\\n \\\"use strict\\\";\\n\\n // Support users who call this function with a string commandName, e.g.\\n // db.runReadCommand(\\\"commandName\\\", {arg1: \\\"value\\\", arg2: \\\"value\\\"}).\\n obj = this._mergeCommandOptions(obj, extra);\\n queryOptions = queryOptions !== undefined ? queryOptions : this.getQueryOptions();\\n\\n {\\n const session = this.getSession();\\n\\n const readPreference = session._getSessionAwareClient().getReadPreference(session);\\n if (readPreference !== null) {\\n obj = this._attachReadPreferenceToCommand(obj, readPreference);\\n\\n if (readPreference.mode !== \\\"primary\\\") {\\n // Set slaveOk if readPrefMode has been explicitly set with a readPreference\\n // other than primary.\\n queryOptions |= 4;\\n }\\n }\\n }\\n\\n // The 'extra' parameter is not used as we have already created a merged command object.\\n return this.runCommand(obj, null, queryOptions);\\n}\" }, \"_adminCommand\" : { \"code\" : \"function(obj, extra) {\\n if (this._name == \\\"admin\\\")\\n return this.runCommand(obj, extra);\\n return this.getSiblingDB(\\\"admin\\\").runCommand(obj, extra);\\n}\" }, \"_runAggregate\" : { \"code\" : \"function(cmdObj, aggregateOptions) {\\n assert(cmdObj.pipeline instanceof Array, \\\"cmdObj must contain a 'pipeline' array\\\");\\n assert(cmdObj.aggregate !== undefined, \\\"cmdObj must contain 'aggregate' field\\\");\\n assert(aggregateOptions === undefined || aggregateOptions instanceof Object,\\n \\\"'aggregateOptions' argument must be an object\\\");\\n\\n // Make a copy of the initial command object, i.e. {aggregate: x, pipeline: [...]}.\\n cmdObj = Object.extend({}, cmdObj);\\n\\n // Make a copy of the aggregation options.\\n let optcpy = Object.extend({}, (aggregateOptions || {}));\\n\\n if ('batchSize' in optcpy) {\\n if (optcpy.cursor == null) {\\n optcpy.cursor = {};\\n }\\n\\n optcpy.cursor.batchSize = optcpy['batchSize'];\\n delete optcpy['batchSize'];\\n } else if ('useCursor' in optcpy) {\\n if (optcpy.cursor == null) {\\n optcpy.cursor = {};\\n }\\n\\n delete optcpy['useCursor'];\\n }\\n\\n const maxAwaitTimeMS = optcpy.maxAwaitTimeMS;\\n delete optcpy.maxAwaitTimeMS;\\n\\n // Reassign the cleaned-up options.\\n aggregateOptions = optcpy;\\n\\n // Add the options to the command object.\\n Object.extend(cmdObj, aggregateOptions);\\n\\n if (!('cursor' in cmdObj)) {\\n cmdObj.cursor = {};\\n }\\n\\n const pipeline = cmdObj.pipeline;\\n\\n // Check whether the pipeline has a stage which performs writes like $out. 
If not, we may\\n // run on a Secondary and should attach a readPreference.\\n const hasWritingStage = (function() {\\n if (pipeline.length == 0) {\\n return false;\\n }\\n const lastStage = pipeline[pipeline.length - 1];\\n return lastStage.hasOwnProperty(\\\"$out\\\") || lastStage.hasOwnProperty(\\\"$merge\\\");\\n }());\\n\\n const doAgg = function(cmdObj) {\\n return hasWritingStage ? this.runCommand(cmdObj) : this.runReadCommand(cmdObj);\\n }.bind(this);\\n\\n const res = doAgg(cmdObj);\\n\\n if (!res.ok && (res.code == 17020 || res.errmsg == \\\"unrecognized field \\\\\\\"cursor\\\") &&\\n !(\\\"cursor\\\" in aggregateOptions)) {\\n // If the command failed because cursors aren't supported and the user didn't explicitly\\n // request a cursor, try again without requesting a cursor.\\n delete cmdObj.cursor;\\n\\n res = doAgg(cmdObj);\\n\\n if ('result' in res && !(\\\"cursor\\\" in res)) {\\n // convert old-style output to cursor-style output\\n res.cursor = {ns: '', id: NumberLong(0)};\\n res.cursor.firstBatch = res.result;\\n delete res.result;\\n }\\n }\\n\\n assert.commandWorked(res, \\\"aggregate failed\\\");\\n\\n if (\\\"cursor\\\" in res) {\\n let batchSizeValue = undefined;\\n\\n if (cmdObj[\\\"cursor\\\"][\\\"batchSize\\\"] > 0) {\\n batchSizeValue = cmdObj[\\\"cursor\\\"][\\\"batchSize\\\"];\\n }\\n\\n return new DBCommandCursor(this, res, batchSizeValue, maxAwaitTimeMS);\\n }\\n\\n return res;\\n}\" }, \"_groupFixParms\" : { \"code\" : \"function(parmsObj) {\\n var parms = Object.extend({}, parmsObj);\\n\\n if (parms.reduce) {\\n parms.$reduce = parms.reduce; // must have $ to pass to db\\n delete parms.reduce;\\n }\\n\\n if (parms.keyf) {\\n parms.$keyf = parms.keyf;\\n delete parms.keyf;\\n }\\n\\n return parms;\\n}\" }, \"_getCollectionInfosCommand\" : { \"code\" : \"function(\\n filter, nameOnly = false, authorizedCollections = false, options = {}) {\\n filter = filter || {};\\n const cmd = {\\n listCollections: 1,\\n filter: filter,\\n nameOnly: nameOnly,\\n authorizedCollections: authorizedCollections\\n };\\n\\n const res = this.runCommand(Object.merge(cmd, options));\\n if (!res.ok) {\\n throw _getErrorWithCode(res, \\\"listCollections failed: \\\" + tojson(res));\\n }\\n\\n return new DBCommandCursor(this, res).toArray().sort(compareOn(\\\"name\\\"));\\n}\" }, \"_getCollectionInfosFromPrivileges\" : { \"code\" : \"function() {\\n let ret = this.runCommand({connectionStatus: 1, showPrivileges: 1});\\n if (!ret.ok) {\\n throw _getErrorWithCode(res, \\\"Failed to acquire collection information from privileges\\\");\\n }\\n\\n // Parse apart collection information.\\n let result = [];\\n\\n let privileges = ret.authInfo.authenticatedUserPrivileges;\\n if (privileges === undefined) {\\n return result;\\n }\\n\\n privileges.forEach(privilege => {\\n let resource = privilege.resource;\\n if (resource === undefined) {\\n return;\\n }\\n let db = resource.db;\\n if (db === undefined || db !== this.getName()) {\\n return;\\n }\\n let collection = resource.collection;\\n if (collection === undefined || typeof collection !== \\\"string\\\" || collection === \\\"\\\") {\\n return;\\n }\\n\\n result.push({name: collection});\\n });\\n\\n return result.sort(compareOn(\\\"name\\\"));\\n}\" }, \"_getCollectionNamesInternal\" : { \"code\" : \"function(options) {\\n return this._getCollectionInfosCommand({}, true, true, options).map(function(infoObj) {\\n return infoObj.name;\\n });\\n}\" }, \"_modifyCommandToDigestPasswordIfNecessary\" : { \"code\" : \"function(cmdObj, username) 
{\\n if (!cmdObj[\\\"pwd\\\"]) {\\n return;\\n }\\n if (cmdObj.hasOwnProperty(\\\"digestPassword\\\")) {\\n throw Error(\\\"Cannot specify 'digestPassword' through the user management shell helpers, \\\" +\\n \\\"use 'passwordDigestor' instead\\\");\\n }\\n var passwordDigestor = cmdObj[\\\"passwordDigestor\\\"] ? cmdObj[\\\"passwordDigestor\\\"] : \\\"server\\\";\\n if (passwordDigestor == \\\"server\\\") {\\n cmdObj[\\\"digestPassword\\\"] = true;\\n } else if (passwordDigestor == \\\"client\\\") {\\n cmdObj[\\\"pwd\\\"] = _hashPassword(username, cmdObj[\\\"pwd\\\"]);\\n cmdObj[\\\"digestPassword\\\"] = false;\\n } else {\\n throw Error(\\\"'passwordDigestor' must be either 'server' or 'client', got: '\\\" +\\n passwordDigestor + \\\"'\\\");\\n }\\n delete cmdObj[\\\"passwordDigestor\\\"];\\n}\" }, \"_updateUserV1\" : { \"code\" : \"function(name, updateObject, writeConcern) {\\n var setObj = {};\\n if (updateObject.pwd) {\\n setObj[\\\"pwd\\\"] = _hashPassword(name, updateObject.pwd);\\n }\\n if (updateObject.extraData) {\\n setObj[\\\"extraData\\\"] = updateObject.extraData;\\n }\\n if (updateObject.roles) {\\n setObj[\\\"roles\\\"] = updateObject.roles;\\n }\\n\\n this.system.users.update({user: name, userSource: null}, {$set: setObj});\\n var errObj = this.getLastErrorObj(writeConcern['w'], writeConcern['wtimeout']);\\n if (errObj.err) {\\n throw _getErrorWithCode(errObj, \\\"Updating user failed: \\\" + errObj.err);\\n }\\n}\" }, \"_removeUserV1\" : { \"code\" : \"function(username, writeConcern) {\\n this.getCollection(\\\"system.users\\\").remove({user: username});\\n\\n var le = this.getLastErrorObj(writeConcern['w'], writeConcern['wtimeout']);\\n\\n if (le.err) {\\n throw _getErrorWithCode(le, \\\"Couldn't remove user: \\\" + le.err);\\n }\\n\\n if (le.n == 1) {\\n return true;\\n } else {\\n return false;\\n }\\n}\" }, \"__pwHash\" : { \"code\" : \"function(nonce, username, pass) {\\n return hex_md5(nonce + username + _hashPassword(username, pass));\\n}\" }, \"_defaultAuthenticationMechanism\" : null, \"_getDefaultAuthenticationMechanism\" : { \"code\" : \"function(username, database) {\\n if (username !== undefined) {\\n const userid = database + \\\".\\\" + username;\\n const result = this.runCommand({isMaster: 1, saslSupportedMechs: userid});\\n if (result.ok && (result.saslSupportedMechs !== undefined)) {\\n const mechs = result.saslSupportedMechs;\\n if (!Array.isArray(mechs)) {\\n throw Error(\\\"Server replied with invalid saslSupportedMechs response\\\");\\n }\\n\\n if ((this._defaultAuthenticationMechanism != null) &&\\n mechs.includes(this._defaultAuthenticationMechanism)) {\\n return this._defaultAuthenticationMechanism;\\n }\\n\\n // Never include PLAIN in auto-negotiation.\\n const priority = [\\\"GSSAPI\\\", \\\"SCRAM-SHA-256\\\", \\\"SCRAM-SHA-1\\\"];\\n for (var i = 0; i < priority.length; ++i) {\\n if (mechs.includes(priority[i])) {\\n return priority[i];\\n }\\n }\\n }\\n // If isMaster doesn't support saslSupportedMechs,\\n // or if we couldn't agree on a mechanism,\\n // then fallthrough to configured default or SCRAM-SHA-1.\\n }\\n\\n // Use the default auth mechanism if set on the command line.\\n if (this._defaultAuthenticationMechanism != null)\\n return this._defaultAuthenticationMechanism;\\n\\n return \\\"SCRAM-SHA-1\\\";\\n}\" }, \"_defaultGssapiServiceName\" : \"mongodb\", \"_authOrThrow\" : { \"code\" : \"function() {\\n var params;\\n if (arguments.length == 2) {\\n params = {user: arguments[0], pwd: arguments[1]};\\n } else if (arguments.length 
== 1) {\\n if (typeof (arguments[0]) != \\\"object\\\")\\n throw Error(\\\"Single-argument form of auth expects a parameter object\\\");\\n params = Object.extend({}, arguments[0]);\\n } else {\\n throw Error(\\n \\\"auth expects either (username, password) or ({ user: username, pwd: password })\\\");\\n }\\n\\n if (params.mechanism === undefined) {\\n params.mechanism = this._getDefaultAuthenticationMechanism(params.user, this.getName());\\n }\\n\\n if (params.db !== undefined) {\\n throw Error(\\\"Do not override db field on db.auth(). Use getMongo().auth(), instead.\\\");\\n }\\n\\n if (params.mechanism == \\\"GSSAPI\\\" && params.serviceName == null &&\\n this._defaultGssapiServiceName != null) {\\n params.serviceName = this._defaultGssapiServiceName;\\n }\\n\\n // Logging in doesn't require a session since it manipulates connection state.\\n params.db = this.getName();\\n var good = this.getMongo().auth(params);\\n if (good) {\\n // auth enabled, and should try to use isMaster and replSetGetStatus to build prompt\\n this.getMongo().authStatus = {authRequired: true, isMaster: true, replSetGetStatus: true};\\n }\\n\\n return good;\\n}\" } }\n\n", "text": "Thanks, Chris.\nMaybe I find out the root cause.I’m trying to use this command to save data. Then got the same result, but find the result did not output the [Object]. Look at the following commands.There is another puzzle. Why the find command output is json format not the [Object]? Thanks", "username": "harz_wang" } ]
Find the collection result is [Object] ,How can show the result '[Object]' detail?
2023-03-10T03:30:02.809Z
Find the collection result is [Object] ,How can show the result ‘[Object]’ detail?
979
null
[ "aggregation" ]
[ { "code": " $lookup: {\n from: 'messages',\n let: { \"iId\": \"$_id\" },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n { $eq: [\"$interactionId\", '$$iId'] },\n { $eq: [\"$author.role\", 'agent'] }\n ]\n }\n }\n }\n ],\n as: \"messages\"\n }\n },\n {\n $lookup: {\n from: 'messages',\n pipeline: [\n {\n $match: {\n \"interactionId\": \"$_id\",\n \"author.role\": \"agent\"\n }\n },\n ],\n as: \"messages\"\n }\n },\n", "text": "I’ve been trying to get data from collection without using $expr inside $match pipeline stage because it’s less performant and compared to simple $matchHere’s a piece of code I’ve been working on:I am expecting to change this condition to simply this:Secondly, I am expecting pipeline should return count of documents instead of complete document, for this I’ve added a $count stage but that return count based on _id in a new array object\nPlease let me know how’s this possible", "username": "Ali_Awan" }, { "code": " let: { \"iId\": \"$_id\" }, { $eq: [\"$interactionId\", '$iId'] }, { $eq: [\"$author.role\", 'agent'] }\"$lookup\" : {\n \"from\" : \"messages\" ,\n \"localField\" : \"_id\" ,\n \"foreignField\" : \"interactionId\" \n \"pipeline\" : [\n {\n \"$match\" : { \"author.role\" : 'agent' }\n }\n ] ,\n \"as\" : \"messages\"\n", "text": "Is there any links you can share that confirms that$expr inside $match pipeline stage because it’s less performant and compared to simple $matchStarting with 5.0 you may use consise syntax to use localField:_id and foreignField:interactionId rather than let: { \"iId\": \"$_id\" },with { $eq: [\"$interactionId\", '$iId'] },Since { $eq: [\"$author.role\", 'agent'] }does not use any of the let variable you could already to the $match you want.Using the concise syntax your $lookup could be", "username": "steevej" } ]
Why $lookup requires $expr if I add pipeline and make a variable?
2023-03-10T12:42:52.318Z
Why $lookup requires $expr if I add pipeline and make a variable?
542
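
The second part of the question above, returning a count instead of the full joined documents, can be done by ending the $lookup sub-pipeline with $count. A minimal sketch using the concise localField/foreignField-plus-pipeline form from the reply (MongoDB 5.0+); the outer collection name is a placeholder:

```javascript
// Sketch: count agent messages per interaction instead of embedding them.
db.interactions.aggregate([
  { $lookup: {
      from: "messages",
      localField: "_id",
      foreignField: "interactionId",
      pipeline: [
        { $match: { "author.role": "agent" } },
        { $count: "n" }
      ],
      as: "agentMessageCount"
  } },
  // $count yields at most one document, so unwrap the array into a plain number.
  { $addFields: {
      agentMessageCount: { $ifNull: [ { $first: "$agentMessageCount.n" }, 0 ] }
  } }
])
```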
null
[ "configuration", "storage" ]
[ { "code": "storage.wiredTiger.engineConfig.cacheSizeGB$ service mongod status\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Drop-In: /etc/systemd/system/mongod.service.d\n └─always_restart.conf\n Active: active (running) since Mon 2023-03-06 12:57:35 EST; 3min 26s ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 13318 (mongod)\n Memory: 6.4G\n CGroup: /system.slice/mongod.service\n └─13318 /usr/bin/mongod --config /etc/mongod.conf\n\nMar 06 12:57:35 chat systemd[1]: Stopped MongoDB Database Server.\nMar 06 12:57:35 chat systemd[1]: Started MongoDB Database Server.\n$ free -h\n total used free shared buff/cache available\nMem: 7.8Gi 7.2Gi 129Mi 4.0Mi 435Mi 309Mi\nSwap: 511Mi 511Mi 0B\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n engine: wiredTiger\n wiredTiger:\n engineConfig:\n cacheSizeGB: 2\nMar 6 13:19:13 host kernel: [ 5638.313813] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/mongod.service,task=mongod,pid=13997,uid=116\nMar 6 13:19:13 host kernel: [ 5638.314025] Out of memory: Killed process 13997 (mongod) total-vm:8773352kB, anon-rss:6775648kB, file-rss:0kB, shmem-rss:0kB, UID:116 pgtables:14084kB oom_score_adj:0\ncacheSizeGB", "text": "I’m having an issue with MongoDB continually consuming all available RAM and then getting killed by oom. I’ve read a few questions on the stackexchange network that suggest setting storage.wiredTiger.engineConfig.cacheSizeGB can resolve the issue but it is not helping.Right at this moment, here is the situation:As you can see I have 8GB of RAM and even now Mongo is on the verge of consuming most of it. By the time I posted this OOM had already intervened:I realize cacheSizeGB may not be the only factor to consider here, but what else should I be looking at?", "username": "billy_noah" }, { "code": "", "text": "What’s mongod doing? Serving requests? Just idling?\nWhat version? What OS release? etc.\nMore info, please.", "username": "Jack_Woehr" }, { "code": "", "text": "Did you check this? The default value shouldn’t be that high. 
Do you have any index building in progress?https://www.mongodb.com/docs/manual/reference/configuration-options/#mongodb-setting-storage.wiredTiger.engineConfig.cacheSizeGB", "username": "Kobe_W" }, { "code": "db.currentOp()db.currentOp()rid\n type: 'op',\n host: 'chat:27017',\n desc: 'conn41',\n connectionId: 41,\n client: '127.0.0.1:53152',\n clientMetadata: {\n driver: { name: 'nodejs', version: '4.3.1' },\n os: {\n type: 'Linux',\n name: 'linux',\n architecture: 'x64',\n version: '5.4.0-144-generic'\n },\n platform: 'Node.js v14.19.3, LE (unified)|Node.js v14.19.3, LE (unified)'\n },\n active: true,\n currentOpTime: '2023-03-07T09:41:07.276-05:00',\n effectiveUsers: [ { user: 'rocketchat', db: 'admin' } ],\n threaded: true,\n opid: 13676,\n lsid: {\n id: new UUID(\"bcbe2f99-cdf9-4c09-bca3-c8f3d175a374\"),\n uid: Binary(Buffer.from(\"56bb3afa50a12c12bd55cb8cc97243c1cc61c311a559ffab86114c472d35e7d4\", \"hex\"), 0)\n },\n secs_running: Long(\"536\"),\n microsecs_running: Long(\"536208026\"),\n op: 'query',\n ns: 'rocketchat.rocketchat_message',\n command: {\n find: 'rocketchat_message',\n filter: {\n '$text': {\n '$search': 'https://example.com/path/index.php?type=test'\n },\n t: { '$ne': 'rm' },\n _hidden: { '$ne': true },\n rid: 'LkgTmX2dCncp5Rxtcx2Hj2YYiyyK49zj9i'\n },\n sort: { ts: -1 },\n projection: { score: { '$meta': 'textScore' } },\n skip: 0,\n limit: 10,\n lsid: { id: new UUID(\"bcbe2f99-cdf9-4c09-bca3-c8f3d175a374\") },\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1678199528, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"222888b20decf0073dbf33332c7f1236a7473034\", \"hex\"), 0),\n keyId: Long(\"7176963756902055940\")\n }\n },\n '$db': 'rocketchat',\n '$readPreference': { mode: 'secondaryPreferred' }\n },\n planSummary: 'IXSCAN { _fts: \"text\", _ftsx: 1 }, IXSCAN { _fts: \"text\", _ftsx: 1 }, IXSCAN { _fts: \"text\", _ftsx: 1 }, IXSCAN { _fts: \"text\", _ftsx: 1 }, IXSCAN { _fts: \"text\", _ftsx: 1 }, IXSCAN { _fts: \"text\", _ftsx: 1 }, IXSCAN { _fts: \"text\", _ftsx: 1 }, IXSCAN { _fts: \"text\", _ftsx: 1 }, IXSCAN { _fts: \"text\", _ftsx: 1 }',\n numYields: 27991,\n locks: { FeatureCompatibilityVersion: 'r', Global: 'r' },\n waitingForLock: false,\n lockStats: {\n FeatureCompatibilityVersion: { acquireCount: { r: Long(\"27993\") } },\n ReplicationStateTransition: { acquireCount: { w: Long(\"1\") } },\n Global: { acquireCount: { r: Long(\"27993\") } },\n Database: { acquireCount: { r: Long(\"1\") } },\n Collection: { acquireCount: { r: Long(\"1\") } },\n Mutex: { acquireCount: { r: Long(\"2\") } }\n },\n waitingForFlowControl: false,\n flowControlStats: {}\n }\n", "text": "What’s mongod doing?I am just learning how to debug mongod server issues. I know now I can use db.currentOp() to check this in the future.What version? What OS release?mongod v5.0.14\nUbuntu 20.04.5 LTSAs the issue is still happening I examined the output of db.currentOp() and I think the following query is probably the culprit. Looking at the rid reveals that there are only a few hundred records which match but for some reason the dbms doesn’t use that index, but instead uses only the full text search? I might be misreading the output here. Is there a simple change I can make to get this query to perform normally? I also don’t understand why the plansummary seems to include the same index over and over again.", "username": "billy_noah" }, { "code": "cacheSizeGB2", "text": "Did you check this?Yes.The default value shouldn’t be that high.Specifically which default value are you referring to? 
I already posted the cacheSizeGB setting I am using which is 2 - I think this is considerably restricting the default value which is “50% of (RAM - 1 GB)”, i.e. 3GB.Do you have any index building in progress?No.", "username": "billy_noah" }, { "code": "", "text": "I think you might try to simplify the problem query or factor it in some fashion and do some testing on a test partition to validate your assumption and perhaps find a way of making it more performant.", "username": "Jack_Woehr" }, { "code": "rid", "text": "That’s pretty vague Jack. I can see the plan is breaking the search phrase into individual words, but even enclosing it in quotes does not help. I’ve also tried rebuilding the entire index to no avail. I think Mongo should be smart enough to know that if there are 150 messages with rid we don’t need to do FTS on 300,000 messages - and even so, why is it so slow?Even when I remove all other criteria it is very slow.The bigger picture issue is that I am not the author of Rocketchat so I essentially have no real control over how it builds queries. I can only say that Mongo doesn’t seem to be properly using the FTS index in this case - or I need to adjust my config to get it to perform.I’m seeking real concrete advice. If you have experience in this area and want to DM me, we are willing to offer compensation for direct support.", "username": "billy_noah" }, { "code": "riddb.rocketchat_messages.createIndex( { \"msg\" : \"text\", \"rid\" : 1 } )\nridrid", "text": "I made some progress on this by dropping the text index and creating a new one with rid included:Does this look correct? Queries seem to perform a bit better now when a rid is included but still generally very slow and without rid it’s unusable. I’m very interested in understanding what I can do to make this index perform. I have a MySQL db with a fulltext search on something like 3 million rows and it’s very fast - often less than 1 second for results. 
If I could get Mongo to behave anything like that it would be a dream.", "username": "billy_noah" }, { "code": "", "text": "See:", "username": "Jack_Woehr" }, { "code": "storage.wiredTiger.engineConfig.cacheSizeGB$ service mongod status\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Drop-In: /etc/systemd/system/mongod.service.d\n └─always_restart.conf\n Active: active (running) since Mon 2023-03-06 12:57:35 EST; 3min 26s ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 13318 (mongod)\n Memory: 6.4G\n CGroup: /system.slice/mongod.service\n └─13318 /usr/bin/mongod --config /etc/mongod.conf\n\nMar 06 12:57:35 chat systemd[1]: Stopped MongoDB Database Server.\nMar 06 12:57:35 chat systemd[1]: Started MongoDB Database Server.\n$ free -h\n total used free shared buff/cache available\nMem: 7.8Gi 7.2Gi 129Mi 4.0Mi 435Mi 309Mi\nSwap: 511Mi 511Mi 0B\n# Where and how to store data.\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n engine: wiredTiger\n wiredTiger:\n engineConfig:\n cacheSizeGB: 2\nMar 6 13:19:13 host kernel: [ 5638.313813] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/mongod.service,task=mongod,pid=13997,uid=116\nMar 6 13:19:13 host kernel: [ 5638.314025] Out of memory: Killed process 13997 (mongod) total-vm:8773352kB, anon-rss:6775648kB, file-rss:0kB, shmem-rss:0kB, UID:116 pgtables:14084kB oom_score_adj:0\ncacheSizeGBcacheSizeGB", "text": "I’m having an issue with MongoDB continually consuming all available RAM and then getting killed by oom. I’ve read a few questions on the stackexchange network that suggest setting storage.wiredTiger.engineConfig.cacheSizeGB can resolve the issue but it is not helping.Right at this moment, here is the situation:As you can see I have 8GB of RAM and even now Mongo is on the verge of consuming most of it. By the time I posted this OOM had already intervened:I realize cacheSizeGB may not be the only factor to consider here, but what else should I be looking at?@billy_noah not sure, if you are still stuck … but here a few things I can suggestTo address the issue of MongoDB consuming all available RAM, apart from cacheSizeGB, one can consider other factors such as the size of the database and the number of connections to the server. It is also worth checking if there are any poorly written queries that are not optimized and are causing the database to use a lot of memory. Another possible issue could be that the hardware resources are insufficient for the workload.\nTo further diagnose the issue, one can check the MongoDB logs for any warnings or errors related to memory usage. It is also recommended to monitor the memory usage of the server and the mongod process over time to understand how the consumption is changing. Additionally, it is worth considering provisioning a swap space to prevent the mongod process from being killed by the OOM killer.", "username": "Deepak_Kumar16" }, { "code": "TEXTexplain()find()", "text": "It’s not possible to identify the specific issues without further information or error messages. However, some possible reasons for poor query performance are:", "username": "Deepak_Kumar16" }, { "code": "", "text": "@Deepak_Kumar16, another answer that looks like ChatGPT.What do you quote the whole message? 
It looks like you use the same cut-n-paste in ChatGPT to get the answer.", "username": "steevej" }, { "code": "ridrid$text", "text": "I’ve learned quite a lot about MongoDB over the last week or so and have the following to offer future readers:Mongo only uses one index at a time and key order is important. I was able to rebuild my text index and add rid as the first key. This filters records by rid first then searches by $text. I still am at a loss to explain the very poor performance of my text index. The total index size was around 2GB but Mongo was consuming over 8Gb of RAM on a long running query involving this index.", "username": "billy_noah" } ]
MongoDB consuming all available memory
2023-03-06T18:35:49.065Z
MongoDB consuming all available memory
3,265
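
The index change described in the last post above corresponds to a compound text index with an equality prefix: putting rid before the text component means the text search only runs within one room, at the cost of every $text query having to supply an equality match on rid. A sketch with the thread's collection and field names:

```javascript
// Sketch: text index scoped by an equality prefix on rid.
// Only one text index is allowed per collection, so any existing one
// has to be dropped first (as was done in the thread).
db.rocketchat_message.createIndex({ rid: 1, msg: "text" });

// Queries must include an equality condition on rid to use this index:
db.rocketchat_message.find({
  rid: "LkgTmX2dCncp5Rxtcx2Hj2YYiyyK49zj9i",
  $text: { $search: "\"index.php?type=test\"" }
}).limit(10);
```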
null
[ "atlas-functions" ]
[ { "code": "", "text": "I am creating a Mongodb atlas function for insert operation but I don’t know how to get the body in the altas function. I am using the HTTPS Endspoint in Atlas App Services for the API.\nKindly help", "username": "Zubair_Rajput" }, { "code": "", "text": "I believe what you are looking for is under context -", "username": "Ian_Ward" } ]
How to get the body of post atlas function
2023-03-03T13:23:16.536Z
How to get the body of post atlas function
1,133
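
To make the pointer to context in the reply above concrete: in an HTTPS endpoint function the POSTed body arrives as binary and has to be decoded before use. A minimal sketch of an insert handler; the linked data source, database and collection names are placeholders:

```javascript
// Sketch: Atlas App Services HTTPS endpoint function that inserts the POSTed JSON body.
exports = async function ({ query, headers, body }, response) {
  // The body is delivered as binary; decode it to text, then parse the JSON.
  const payload = JSON.parse(body.text());

  const collection = context.services
    .get("mongodb-atlas")      // name of the linked cluster (placeholder)
    .db("mydb")
    .collection("items");

  const result = await collection.insertOne(payload);

  response.setStatusCode(201);
  return { insertedId: result.insertedId };
};
```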
null
[ "aggregation" ]
[ { "code": "db.mycollection.aggregate([\n{\"$match\":{\n \"create_date_audit\":{\n $gte: ISODate('2022-07-25T18:27:56.084+00:00'),\n $lte: ISODate('2022-07-26T20:15:50.561+00:00')\n }\n}},\n{\"$sort\":{\n _id: -1\n}},\n{\"$group\":{\n _id: {\n notification_id: '$notifId',\n empId: '$empId',\n date: '$date'\n },\n dups: {\n $push: '$_id'\n },\n creationTimestamp: {\n $push: '$create_date'\n },\n count: {\n $sum: 1\n }\n}},\n{\"$match\":{\n _id: {\n $ne: null\n },\n count: {\n $gt: 1\n }\n}},\n{\"$sort\":{\n create_date: -1\n}},\n], { allowDiskUse: true }).forEach(function(doc) { \n db.mycollection.deleteMany({_id : {doc.dups[0]}); \n})```", "text": "Hi Team,We’re trying to find duplicates methods in our collection with about 2 or 3 millions of documents, we have tried this way but this seems slow and cannot be ideal when deploying to production, do you know what’s the best and fastest way to delete duplicate records with million of data.Sample code we have:", "username": "Paul_N_A1" }, { "code": "", "text": "Dealing with duplicates is always a problem. So please do not post duplicate post about the same issue.", "username": "steevej" }, { "code": "", "text": "Hi, I am facing the same issue. how to manage it?", "username": "Sam_parker" }, { "code": "", "text": "Removing duplicates from millions of records can be a challenging task. Here are some best ways to remove duplicates for millions of records:", "username": "Barry_Stark" } ]
Best way to remove duplicates for millions of records
2022-08-01T16:37:33.106Z
Best way to remove duplicates for millions of records
2,607
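
For reference, the delete call in the script above has a syntax error and removes only one document per duplicate group; a corrected sketch that keeps one _id per group and deletes the rest in unordered batches (field names as in the thread) is:

```javascript
// Sketch: group duplicates by the business key, keep one _id per group,
// and bulk-delete the rest. Test on a backup or staging copy first.
let ops = [];
db.mycollection.aggregate([
  { $group: {
      _id: { notification_id: "$notifId", empId: "$empId", date: "$date" },
      dups: { $push: "$_id" },
      count: { $sum: 1 }
  } },
  { $match: { count: { $gt: 1 } } }
], { allowDiskUse: true }).forEach(doc => {
  const [keep, ...remove] = doc.dups;   // keep the first _id in each group
  ops.push({ deleteMany: { filter: { _id: { $in: remove } } } });
  if (ops.length >= 500) {
    db.mycollection.bulkWrite(ops, { ordered: false });
    ops = [];
  }
});
if (ops.length) {
  db.mycollection.bulkWrite(ops, { ordered: false });
}
```

A unique index on the same key afterwards prevents the duplicates from reappearing.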
https://www.mongodb.com/…_2_731x1024.jpeg
[ "cxx" ]
[ { "code": "ExternalProject_Add( \n\tmongo_c_driver \n\tGIT_REPOSITORY https://github.com/mongodb/mongo-c-driver.git \n\tGIT_TAG 1.17.2 \n\tSOURCE_DIR \"${PROJECT_SOURCE_DIR}/mongo-c-driver\" \n\tBINARY_DIR \"${PROJECT_SOURCE_DIR}/mongo-c-driver/build\" \n\tBUILD_ALWAYS 1 \n\tUPDATE_COMMAND \"\" \n\tCMAKE_CACHE_ARGS \n\t\t-DCMAKE_BUILD_TYPE:STRING=Release \n\t\t-DENABLE_AUTOMATIC_INIT_AND_CLEANUP:BOOL=OFF\n\t\t-DCMAKE_INSTALL_PREFIX:STRING=${PROJECT_SOURCE_DIR}/bin \n\t\t)\n\nExternalProject_Add( \n\tmongo_cxx_driver \n\tGIT_REPOSITORY https://github.com/mongodb/mongo-cxx-driver.git \n\tGIT_TAG c315129c7b70c304d894ea60b7df71d1f3a71acf \n\tSOURCE_DIR \"${PROJECT_SOURCE_DIR}/mongo-cxx-driver\" \n\tBINARY_DIR \"${PROJECT_SOURCE_DIR}/mongo-cxx-driver/build\" \n\tBUILD_ALWAYS 1 \n\tUPDATE_COMMAND \"\" \n\tCMAKE_CACHE_ARGS \n\t\t-DBUILD_SHARED_AND_STATIC_LIBS:BOOL=ON \n\t\t-DMONGOCXX_ENABLE_SSL:BOOL=ON \n\t\t-DBSONCXX_POLY_USE_MNMLSTC:BOOL=ON \n\t\t-DCMAKE_BUILD_TYPE:STRING=Release \n\t\t-DCMAKE_INSTALL_PREFIX:STRING=${PROJECT_SOURCE_DIR}/bin \n\tDEPENDS mongo_c_driver \n\t)\nset(LIBBSONCXX_STATIC_VERSION_MAJOR 3)\nset(LIBBSONCXX_STATIC_VERSION_MINOR 6)\nset(LIBBSONCXX_STATIC_VERSION_PATCH 1)\nset(LIBBSONCXX_STATIC_PACKAGE_VERSION 3.6.1-pre)\n\n# We need to pull in the libbson-static-* library to read the BSON_STATIC_LIBRARIES variable. We\n# can ignore the other variables exported by that package (e.g. BSON_STATIC_INCLUDE_DIRS,\n# BSON_STATIC_DEFINITIONS), since bsoncxx hides the existence of libbson from the user through\n# abstraction. bsoncxx users generally should not need to include libbson headers directly.\nfind_package(libbson-static-1.0 1.13.0 REQUIRED)\n", "text": "Hi,first off all, sorry, if I miss something, or my problem is due to the lack of my CMake knowledge.\nAs I googled the problem, I found a similar issue, however my problem remains unsolved. The related link:Issue description:\nI need to compile mongo-c-driver (1.17.2) and mongo-cxx-driver (3.6.2) for my project and link statically. It is done in a superbuild CMake configuration in our product. The related cmake file:At this point, everything seems to be perfect, the target installation folder structure:\n\nimage776×1087 204 KB\nThe problem is when I want to use the result, libbsoxx-static wants to load libbison-static 1.13.0, however I have build the 1.17.2 of it. The related section of libbsoncxx-static-config.cmakeCan anybody help me, what did I wrong please?", "username": "norbert_NNN" }, { "code": "find_package()find_package()mongo::bsoncxx_staticbsoncxx-config.cmakeexamples/projects/bsoncxx/cmake/static/CMakeLists.txt", "text": "@norbert_NNN could you explain in a bit more detail what went wrong from your perspective? Was there an error message? The comment above the find_package() command explains why it is there. It is needed and should not hinder your ability to use Additionally, the way that find_package() works, since you have built C driver 1.17.2, it will satisfy the dependency requirement of version 1.13.0.As a side note, you are likely to find integration easier by using the mongo::bsoncxx_static target available from the bsoncxx-config.cmake package script. 
You can see an example of this in examples/projects/bsoncxx/cmake/static/CMakeLists.txt.", "username": "Roberto_Sanchez" }, { "code": "list(APPEND CMAKE_MODULE_PATH \"${PROJECT_SOURCE_DIR}/bin\")\n", "text": "Hi,\nit seems, the problem is solved by adding the line to my CMakeList.txt:after that, I could use mongocxx successfully.\nThank you.I will check the example for the target", "username": "norbert_NNN" }, { "code": "", "text": "hello norbert @norbert_NNN\ncan you tell me please what Integrated development environment (c++) are using\nand my email is:[email protected]", "username": "Khalil_Toubia" } ]
Build mongo cxx driver
2021-03-01T08:59:42.694Z
Build mongo cxx driver
3,384
https://www.mongodb.com/…4_2_1024x512.png
[ "queries" ]
[ { "code": "wtimeoutwtimeout", "text": "Hello,So while reading the page on wtimeout I found this paragraph, where the first line seem to disagrees with the second line.wtimeout causes write operations to return with an error after the specified limit, even if the required write concern will eventually succeed. When these write operations return, MongoDB does not undo successful data modifications performed before the write concern exceeded the wtimeout time limit.So, the driver will throw an error if wtimeout is exceeded, however the write will potentially still happen.The next line says the MongoDB doesn’t undo a successful data modification before the wtimeout time limit is exceeded, which implies that it does if it is exceeded??Cheers. I appreciate that I have read it incorrectly.", "username": "NeilM" }, { "code": "", "text": "Hi @NeilM,If the Wtimeout exceed the write might succeed , retrayble writes could help in some secnarios .However a Wtimeout is something application logic should address , potentially with unique indexes data Will fail reinserting or with update/upsert logic the action can be redone…Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "If we use the premise of catching the wtimeout issue in the appplication code, and retrying with an upsert using a unique key.The problem that caused the wtimeout, may still be prevalant, at which point we may have written the data, or may not, but because the application doesn’t know even after a retry.The application, can report to the user, that things are not working (Please try again later), but we have this piece of data which is potentially incomplete, from a processing point of view.How would one manage this orphan data? Just accept, it is incomplete, and move on? Coding around the issue of incomplete data?From my previous background, the process would have been one complete transaction, committed in it’s entirety at the end of the process (Depending on size of the transaction of course), using a before image if we had to roll back.", "username": "NeilM" }, { "code": "", "text": "@NeilM I have a similar question, what happens to the replicas to which data is not written by the time a timeout occurs, can it still be written after the timeout? Have you found an answer to this?", "username": "Reddington" }, { "code": "", "text": "Due to the nature of network, even if you receive a timeout error, it simply means the the servers fail to respond within that time and nothing else. The writes may or may not succeed at that time, and may eventually succeed.The application will not know that unless some certain conditions are met (e.g. unique index in place).To make sure all happen or nothing happens, you have to use a transaction. With transaction semantics, once you get an abort error, you know the changes will be eventually rolled back, and if you successfully commit it, you know the changes will be eventually there.", "username": "Kobe_W" }, { "code": "", "text": "Hi @Kobe_W ,MongoDB offers other mechanisms than transactions to secure partial failed writes to be completed safely from the application perspective.For example retryable writes:To be honest I will recommend to not use transaction just for the sake of retries of potential failed write operations.Transactions should be used only when you 100% need to secure cross documents ACID guarantees.Thanks\nPavel", "username": "Pavel_Duchovny" } ]
Wtimeout - What happens if it is exceeded, to the data written?
2021-08-30T17:16:47.806Z
Wtimeout - What happens if it is exceeded, to the data written?
2,203
null
[ "queries", "replication" ]
[ { "code": "", "text": "Does Mongo have any option by which update/insert operations can be timed out? Is MaxTimeMS only supported by query operations or write operations as well?Can the wTimeout option be used for this? If write concern is set to the majority and a timeout occurs, does mongo roll back the changes? Even after a timeout, can it be written to other replicas?If wtimeout with write concern majority guarantees that it is not written to the majority, can reading the database using read concern majority guarantee that records with a timeout are not read?My use case is that I want to write some data within a certain interval and return a response with a guarantee on whether the data is written or not.", "username": "Reddington" }, { "code": "", "text": "Answered in Wtimeout - What happens if it is exceeded, to the data written? - #5 by Kobe_W", "username": "Kobe_W" }, { "code": "", "text": "A timeout on read can be different from a timeout on write, and a timeout on the client side can be different from a timeout on the server, so understanding the exact meaning is sometimes important.For example:timeout on client for write:\n → I will only wait for this amount of time for the server’s response, and in that case I don’t know if the write actually succeeds or not.timeout on server for write:\n → I will only allocate this amount of time for this write task, and if it times out, I will try to cancel it (some databases do this, some don’t) if possible. But the write may still succeed eventually (cancelling an async operation is difficult).", "username": "Kobe_W" } ]
Mongo Operation Timeout for Insert/Update operations
2023-03-11T09:39:19.573Z
Mongo Operation Timeout for Insert/Update operations
1,083
null
[ "react-native" ]
[ { "code": "", "text": "Been using Realm with React Native for quite some time, and we are looking at it as our offline-first DB.\nHave a few queries on the same -", "username": "Adithya_Sundar" }, { "code": "", "text": "Redux is for local state management of the application, whereas MongoDB Realm is the offline-first database. Yes, you can use Redux and Realm together and create a very good application. I have built a moderately complex app using both, and trust me, Realm is so good. I used partition sync because at that time flexible sync was not recommended for complex apps, but now the Realm team is recommending flexible sync and I am thinking of creating an app using flexible sync; I am pretty sure it would be great.", "username": "Zubair_Rajput" } ]
Realm react native Best practices
2023-03-11T14:07:26.908Z
Realm react native Best practices
989
null
[ "golang" ]
[ { "code": "", "text": "How can I enable logging or set a logger in go driver?", "username": "Chaoxing_Xian" }, { "code": "package main\n\nimport (\n\t\"context\"\n\t\"log\"\n\t\"os\"\n\t\"time\"\n\n\t\"go.mongodb.org/mongo-driver/mongo\"\n\t\"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\nvar (\n\tWarningLogger *log.Logger\n\tInfoLogger *log.Logger\n\tErrorLogger *log.Logger\n)\n\nfunc init() {\n\tfile, err := os.OpenFile(\"logs.txt\", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0666)\n\tif err != nil {\n\t\tlog.Fatal(err)\n\t}\n\n\tInfoLogger = log.New(file, \"INFO: \", log.Ldate|log.Ltime|log.Lshortfile)\n\tWarningLogger = log.New(file, \"WARNING: \", log.Ldate|log.Ltime|log.Lshortfile)\n\tErrorLogger = log.New(file, \"ERROR: \", log.Ldate|log.Ltime|log.Lshortfile)\n}\n\nfunc main() {\n\t// Set up MongoDB client options\n\tclientOptions := options.Client().ApplyURI(\"mongodb://localhost:27017\")\n\n\t// Connect to MongoDB\n\tctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)\n\tdefer cancel()\n\tclient, err := mongo.Connect(ctx, clientOptions)\n\tif err != nil {\n\t\tErrorLogger.Println(err)\n\t}\n\tdefer func() {\n\t\tif err = client.Disconnect(ctx); err != nil {\n\t\t\tErrorLogger.Println(err)\n\t\t}\n\t}()\n\n\t// Ping the MongoDB server to verify that the connection is established\n\tif err = client.Ping(ctx, nil); err != nil {\n\t\tErrorLogger.Println(err)\n\t} else {\n\t\tInfoLogger.Println(\"Connected to MongoDB!\")\n\t}\n\n\t// Access a database and a collection\n\tdb := client.Database(\"mydatabase\")\n\tcollection := db.Collection(\"mycollection\")\n\n\t// Insert a document into the collection\n\tres, err := collection.InsertOne(ctx, map[string]string{\"name\": \"John Doe\", \"email\": \"[email protected]\"})\n\tif err != nil {\n\t\tErrorLogger.Println(err)\n\t} else {\n\t\tInfoLogger.Printf(\"Inserted document with ID %v\\n\", res.InsertedID)\n\t}\n\n\t// Find a document in the collection\n\tvar result map[string]string\n\tfilter := map[string]string{\"name\": \"John Doe\"}\n\terr = collection.FindOne(ctx, filter).Decode(&result)\n\tif err != nil {\n\t\tErrorLogger.Println(err)\n\t} else {\n\t\tInfoLogger.Printf(\"Found document: %v\\n\", result)\n\t}\n\n\t// Disconnect from MongoDB\n\tif err = client.Disconnect(ctx); err != nil {\n\t\tErrorLogger.Println(err)\n\t} else {\n\t\tInfoLogger.Println(\"Disconnected from MongoDB\")\n\t}\n}\nINFO: 2023/03/07 11:46:20 hello.go:51: Connected to MongoDB!\nINFO: 2023/03/07 11:46:20 hello.go:63: Inserted document with ID ObjectID(\"6406d6b4b8e2aafb98416f08\")\nINFO: 2023/03/07 11:46:20 hello.go:73: Found document: map[_id:6406d6b4b8e2aafb98416f08 email:[email protected] name:John Doe]\nINFO: 2023/03/07 11:46:20 hello.go:88: Disconnected from MongoDB\nERROR: 2023/03/07 11:46:20 hello.go:43: client is disconnected\nlogs.txt", "text": "Hi @Chaoxing_Xian,Welcome to the MongoDB Community forums Here is the code snippet which connects to a MongoDB instance using the MongoDB Golang driver and logs all the steps in a text file:It outputs the following:This uses MongoDB’s official Go driver to connect to a local MongoDB instance. It creates a context with a timeout of 10 seconds to manage the connection and disconnection. 
It uses the log package to log all the steps to a text file named logs.txt .I hope it helps!Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "package main\n\nimport (\n\t\"context\"\n\t\"go.mongodb.org/mongo-driver/mongo\"\n\t\"go.mongodb.org/mongo-driver/mongo/options\"\n\t\"log\"\n\t\"os\"\n)\n\ntype customLogger struct {}\n\nfunc (l *customLogger) Logf(level mongo.Level, format string, args ...interface{}) {\n\tlog.Printf(\"MongoDB %s: %s\\n\", level, fmt.Sprintf(format, args...))\n}\n\nfunc main() {\n\t// Create a new client with the logger set\n\tclient, err := mongo.NewClient(options.Client().ApplyURI(\"mongodb://localhost:27017\").SetAppName(\"my-app\").SetLogger(&customLogger{}))\n\tif err != nil {\n\t\tlog.Fatalf(\"Failed to create client: %v\", err)\n\t}\n\t\n\t// Connect to MongoDB\n\terr = client.Connect(context.Background())\n\tif err != nil {\n\t\tlog.Fatalf(\"Failed to connect to MongoDB: %v\", err)\n\t}\n\tdefer client.Disconnect(context.Background())\n\t\n\t// Use the client to perform operations on the database\n\t// ...\n}\n\n", "text": "Appreciate your apply, actually I tried to ask ChatGPT and got the below answer:I initially believed that I had found the solution I needed, but it appears that the ‘SetLogger’ method suggested by ChatGPT is no longer available. So my new question is: Does the Go driver not generate any logs at all?", "username": "Chaoxing_Xian" }, { "code": "", "text": "Hi @Chaoxing_Xian,So my new question is: Does the Go driver not generate any logs at all?From my understanding, the MongoDB Go Driver has the capability to generate logs. Here is the documentation link where you can learn about various error logs that can be generated by the MongoDB Go driver.In general, the drivers generate the logs to notify the user of any issues that may arise within the development. However, if you want to customize the logs according to your specific needs or preferences, you can use the log package of Golang, as I’ve shared in the above response.Let us know if you have any further questions!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "I think I’ve got my answer, thanks~", "username": "Chaoxing_Xian" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb go driver
2023-03-04T01:55:54.786Z
Mongodb go driver
1,454
null
[ "backup" ]
[ { "code": "curl --user \"<public-key>:<private-key>\" --digest \\\n --header \"Accept: application/json\" \\\n --header \"Content-Type: application/json\" \\\n --include \\\n --request POST \"https://cloud.mongodb.com/api/atlas/v1.0/groups/<GROUP-ID>/clusters/<CLUSTER-NAME>/backup/exports/\" \\\n --data '{\n \"snapshotId\" : \"<snapshotId>\",\n \"exportBucketId\" : \"<exportBucketId>\"\n }'\n", "text": "I’m trying to export mongoDB atlas snapshot to S3, but not able to apply a export policy for automating the export but if i did for a single snapshot via API and I found that the export was so granular that i have a json.gz for each database and for each collection in the database, which i think its way too complex to restore.\nIs there something i’m doing wrong with the export or it is expected to be this granular only ?\nI’m using the following API to export a single snapshot to S3", "username": "Vineet_Jain" }, { "code": "", "text": "+1\nStill cannot see any options to apply an export policy. Were you able to figure out something?", "username": "Ojaswa_Sharma" }, { "code": "autoExportEnabledexportBucketIdcurl --user \"<public-key>:<private-key>\" --digest \\\n --header \"Accept: application/json\" \\\n --header \"Content-Type: application/json\" \\\n --include \\\n --request PATCH \"https://cloud.mongodb.com/api/atlas/v1.0/groups/<GROUP-ID>/clusters/<CLUSTER-NAME>/backup/schedule/\" \\\n --data '{\n \"autoExportEnabled\" : true,\n \"export\" : {\n \"exportBucketId\" : \"<exportBucketId>\",\n \"frequencyType\": \"monthly\"\n }\n }'\n", "text": "Hope it can help\nAs I understand, you need to patch the backup schedule and set autoExportEnabled to true and exportBucketIdSee https://www.mongodb.com/docs/atlas/backup/cloud-backup/export/", "username": "Xavier_Arques" } ]
Export MongoDB Atlas snapshot to S3
2022-01-19T11:00:02.709Z
Export MongoDB Atlas snapshot to S3
3,452
null
[ "queries", "flutter" ]
[ { "code": "@RealmModel()\nclass _MainObj {\n @PrimaryKey()\n @MapTo('_id')\n late ObjectId id;\n @MapTo('name')\n late String name;\n @MapTo('description')\n late String? description;\n @MapTo('embedded_obj'')\n late List<_EmbeddedObj> embeddedObj;\n}\n@RealmModel(ObjectType.embeddedObject)\nclass _EmbeddedObj {\n @MapTo('embedded_obj2'')\n late _EmbeddedObj2 embeddedObj2;\n @MapTo('id_in_another_collection'')\n late ObjectId anotherCollectionId;\n}\n\n@RealmModel(ObjectType.embeddedObject)\nclass _EmbeddedObj2 {\n @MapTo('name')\n late String name;\n}\n\n@RealmModel()\nclass _AnotherCollection {\n @PrimaryKey()\n @MapTo('_id')\n late ObjectId id;\n @MapTo('name')\n late String name;\n @MapTo('embedded_obj2')\n late _EmbeddedObj2? embeddedObj2;\n}\n @override\n Widget build(BuildContext context) {\n final realmServices = Provider.of<RealmServices>(context);\n return Stack(\n children: [\n Column(\n children: [\n Expanded(\n child: Padding(\n padding: const EdgeInsets.fromLTRB(16, 0, 16, 0),\n child: StreamBuilder<RealmResultsChanges<MainObj>>(\n stream: realmServices.realm.all<MainObj>().changes,\n builder: (context, snapshot) {\n final data = snapshot.data;\n\n if (data == null) return waitingIndicator();\n\n final results = data.results;\n return ListView.builder(\n shrinkWrap: true,\n itemCount: results.realm.isClosed ? 0 : results.length,\n itemBuilder: (context, index) => results[index].isValid\n ? Container(\n margin: const EdgeInsets.symmetric(\n horizontal: 12.0,\n vertical: 4.0,\n ),\n decoration: BoxDecoration(\n border: Border.all(),\n borderRadius: BorderRadius.circular(12.0),\n shape: BoxShape.rectangle,\n ),\n child: MainObjItem(results[index]))\n : Container());\n },\n ),\n ),\n ),\n ],\n ),\n realmServices.isWaiting ? waitingIndicator() : Container(),\n ],\n );\n }\nclass MainObjItem extends StatelessWidget {\n final MainObj _mainObj;\n const MainObjItem(this. _mainObj, {Key? key}) : super(key: key);\n\n @override\n Widget build(BuildContext context) {\n return ListTile(\n title: Text(_mainObj.name),\n subtitle: Text(\n _getAnotherCollectionName(context, _mainObj.embedded[0].anotherCollectionId)), // this checks out fine, there is always a property here\n onTap: () => {\n infoMessageSnackBar(context, \"Not yet implemented\").show(context),\n },\n );\n }\n\n String _getAnotherCollectionName(BuildContext context, ObjectId id) =>\n Provider.of<AnotherCollectionModel>(context, listen: false). getAnotherCollectionName(id);\n}\nclass AnotherCollectionModel extends ChangeNotifier {\n Realm _realm;\n\n AnotherCollectionModel(this._realm);\n\n String getAnotherCollectionName(ObjectId id) {\n var queryStr = '_id == oid(${id.toString()})';\n var anotherCollection = _realm.query<AnotherCollection>(queryStr);\n return anotherCollection.length > 0 ? anotherCollection.name : \"NOT FOUND\";\n }\n}\n`realm.query( '_id == $0, [id])';`\n", "text": "my schema.dart looks like this (simplified):I have a top level StatefulWidget doing a realm.all for MainObj and doing a ListView.builder() on the results, which is working fine (based on the to_do flutter flexible template)…and the ListView.builder invokes a Stateless Widget build (seems to be ok as well)…and finally, a ChangeNotifier that queries the realm by id to find the document in the other collection…This, finally, is where my query is failing. 
I’ve confirmed the id coming in is correct, exists in the collection I’m querying, but both realm.query() and realm.find() don’t return any results for me.\nI actually tested this last AnotherCollectionModel successfully using an id from an object that isn’t from an embedded object, it fails when the id is from an embedded object (To be precise, an id contained in an embedded object in a list held by the main object).I’ve tried realm.query() with id as a parameterized query, iealong with the way I show above, and using realm.find(). None of them seem to work.\nI’m not understanding why this fails, or a good way to troubleshoot it.Thanks", "username": "Josh_Whitehouse" }, { "code": "realm.query<AnotherCollection>( r'_id == $0', [id]);\n", "text": "Did you mean:?Can you see the expected objects in the database with Realm Studio?Also, are you using flexible sync? If so, how does your subscriptions look?", "username": "Kasper_Nielsen1" }, { "code": "realm.query<AnotherCollection>( r'_id == $0', [id]);RealmServices(this.app) {\n if (app.currentUser != null || currentUser != app.currentUser) {\n currentUser ??= app.currentUser;\n realm = Realm(Configuration.flexibleSync(\n currentUser!,\n [\n MainObj.schema,\n EmbeddedObj.schema,\n EmbeddedObj2.schema,\n AnotherCollection.schema,\n ],\n clientResetHandler: DiscardUnsyncedChangesHandler(\n onAfterReset: _onAfterReset,\n onBeforeReset: _onBeforeReset,\n onManualResetFallback: _onManualReset)));\n if (realm.subscriptions.isEmpty) {\n updateSubscriptions();\n }\n }\n Future<void> updateSubscriptions() async {\n if (realm.isClosed) {\n return;\n }\n realm.subscriptions.update((mutableSubscriptions) {\n mutableSubscriptions.clear();\n mutableSubscriptions.add(realm.all<MainObj>(), name: 'queryMainObj' );\n mutableSubscriptions.add(realm.query<AnotherCollection>(r'_id == $0', []),\n name: 'queryAnotherCollection);\n });\n await realm.subscriptions.waitForSynchronization();\n }\n", "text": "Hi Kasper,I can see my objects in the realm using MongoDB Compass. I can confirm they exist in the collection, and the only rules in the database applying to the collections are readWriteAll (the top level default role).I tried switching my query to your version, realm.query<AnotherCollection>( r'_id == $0', [id]);\nand it didn’t work (I had tried this before). I didn’t have a query subscription to the AnotherCollection in my realm configuration. I went ahead and added it, but it’s not working for me, and I’m sure I have it wrong. Here’s what it currently looks like. I can’t seem to find any docs to help me set up a parameterized query subscription in flutter, not sure how it’s declared, it might be what I’m needing here…I’m including my flexible sync configuration here, along with the subscriptions, but I’m pretty sure I’m not doing subscriptions for needed parameterized query correctly, and your help will get this all working for me.Flexible Sync configuration:Subscriptions:Thanks for the help!\nBest,Josh", "username": "Josh_Whitehouse" }, { "code": "mutableSubscriptions.add(realm.all<AnotherCollection>(), 'anotherCollectionSubscription');", "text": "I added a mutableSubscriptions.add(realm.all<AnotherCollection>(), 'anotherCollectionSubscription'); line to my subscriptions code, and it’s working now! Thanks for the help, it all got me pointed in the right direction!Best, Josh", "username": "Josh_Whitehouse" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Flutter atlas realm querying a collection using ObjectId contained in an embedded object from another collection failing
2023-03-11T16:18:28.403Z
Flutter atlas realm querying a collection using ObjectId contained in an embedded object from another collection failing
1,550
null
[ "node-js", "mongoose-odm", "connecting" ]
[ { "code": "Code : const mongoose = require(\"mongoose\");\nmongoose.set('strictQuery', false);\nmongoose.connect(\"mongodb://localhost:27017/newDataSet\", { useNewUrlParser: true })\n.then(() => {\n console.log(\"Connected to MongoDB!\");\n })\n.catch((error) => {\n console.error(\"Error connecting to MongoDB: \", error);\n});\n\nError connecting to MongoDB: MongooseServerSelectionError: connect ECONNREFUSED ::1:27017\n at Connection.openUri (e:\\Web\\Mongooes\\node_modules\\mongoose\\lib\\connection.js:825:32)\n at e:\\Web\\Mongooes\\node_modules\\mongoose\\lib\\index.js:411:10\n at e:\\Web\\Mongooes\\node_modules\\mongoose\\lib\\helpers\\promiseOrCallback.js:41:5\n at new Promise (<anonymous>)\n at promiseOrCallback (e:\\Web\\Mongooes\\node_modules\\mongoose\\lib\\helpers\\promiseOrCallback.js:40:10)\n at Mongoose._promiseOrCallback (e:\\Web\\Mongooes\\node_modules\\mongoose\\lib\\index.js:1285:10)\n at Mongoose.connect (e:\\Web\\Mongooes\\node_modules\\mongoose\\lib\\index.js:410:20)\n at Object.<anonymous> (e:\\Web\\Mongooes\\source\\app.js:3:10)\n at Module._compile (node:internal/modules/cjs/loader:1226:14)\n at Module._extensions..js (node:internal/modules/cjs/loader:1280:10) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) { 'localhost:27017' => [ServerDescription] },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined\n}\n", "text": "Please give me code of this.", "username": "Abcdefg_Hijklmnop" }, { "code": "", "text": "Try with 127.0.0.1 instead of localhost in your code\nPlease search our forum threads for IPv6 for more details", "username": "Ramachandra_Tummala" }, { "code": "", "text": "127.0.0.1I have same error with 127.0.0.1 without localhost. I don’t know how resolve this error.\nThis error like this javascript - Error establishing MongoDB connection on localhost - Stack Overflow\nwithout resolve yet", "username": "Max_Max1" } ]
Error Please help me in this
2023-02-23T08:59:52.999Z
Error Please help me in this
836
null
[ "aggregation", "queries" ]
[ { "code": "a1[\"attr1\", \"attr2\", \"attr3\"]\nd1{\n \"_id\": ObjectId(\"640b0e3629cb7001946137a3\")\n \"field1\": {\n \"attr1\": { /some data},\n \"attr2\": { /some data},\n \"attr3\": { /some data},\n \"attr4\": { /some data},\n \"attr5\": { /some data},\n },\n \"field2\": {\n \"val\": \"some_val\"\n }\n}\na1d1.field1d1.field2.val2db.coll.update_one(\n {\n \"_id\": ObjectId('640b0e3629cb7001946137a3')\n },\n [\n {\n \"$set\": {\n \"temp_f1\": {\"$objectToArray\": \"$fiedl1\"}\n }\n },\n {\n \"$set\": {\n \"field2.val\": {\"$cond\": [{\"temp_f1.k\": {\"$all\": a1}}, \"UPDATED!\", \"\"]}\n }\n },\n {\n \"$unset\": \"temp_f1\"\n },\n ]\n )\n", "text": "Assuming that I have the following array a1:and document d1:How can I validate that each value in a1 has a field key in d1.field1 and based on the validation result update d1.field2.val2 in a single update query?I tried the following query but it’s not working:I’m new to MongoDB and any help is greatly appreciated.", "username": "loay_khateeb" }, { "code": "", "text": "Hello @loay_khateeb, Welcome to the MongoDB community forum,Can you show how the document will look after the update? and a little bit of explanation would be easier to understand.", "username": "turivishal" }, { "code": "a1field1field2.valUpdated{\n \"_id\": ObjectId(\"640b0e3629cb7001946137a3\")\n \"field1\": {\n \"attr1\": { /some data},\n \"attr2\": { /some data},\n \"attr3\": { /some data},\n \"attr4\": { /some data},\n \"attr5\": { /some data},\n },\n \"field2\": {\n \"val\": \"Updated\"\n }\n}\nUpdateResult.modified_count", "text": "Hi @turivishal,I want the document to be updated based on the following condition:if all the values of a1 are included in the field1 keys then the value of of field2.val would be updated to Updated. a successful result would look like this:Otherwise, no changes should be made to the document.I’m using the latest version of MongoDB (6.0.0) and the latest version of motor driver (3.1.1).\nI would also like the UpdateResult.modified_count to be 0 in case the condition wasn’t met, if possible.Thank you.", "username": "loay_khateeb" }, { "code": "a1d1.field1d1.field2.val2a1d1.field1d1.field2.val2db.coll.updateOne(\n { \"_id\": ObjectId(\"640b0e3629cb7001946137a3\") },\n [\n {\n \"$addFields\": {\n \"temp_f1\": { \"$objectToArray\": \"$field1\" }\n }\n },\n {\n \"$set\": {\n \"field2.val\": {\n \"$cond\": [\n {\n \"$allElementsTrue\": {\n \"$map\": {\n \"input\": [\"attr1\", \"attr2\", \"attr3\"],\n \"in\": {\n \"$in\": [\n \"$$this\",\n \"$temp_f1.k\"\n ]\n }\n }\n }\n },\n \"UPDATED!\",\n \"\"\n ]\n }\n }\n },\n {\n \"$unset\": \"temp_f1\"\n }\n ]\n)\n$objectToArrayfield1field2.val$conda1temp_f1.kfield1field2.val\"UPDATED!\"$unsettemp_f1", "text": "@loay_khateeb The relevant issue is to validate each value in a1 against the keys in d1.field1 and update d1.field2.val2 based on the validation result in a single update query. The provided query is not working, so an alternative solution is needed.\nOne solution can be to iterate over the values in a1 and check if they exist as keys in d1.field1. Then, based on the validation result, update d1.field2.val2. This can be achieved using the following query:The query first uses $objectToArray to convert field1 into an array of key-value pairs. Then, it creates a new field field2.val using $cond that checks if all elements in a1 are present in temp_f1.k, which is an array of keys in field1. 
If the condition is true, it sets the value of field2.val to \"UPDATED!\", otherwise, it sets it to an empty string. Finally, it uses $unset to remove the temp_f1 field.le me know if this worked ?", "username": "Deepak_Kumar16" }, { "code": "$objectToArrayfield1field2.val$conda1temp_f1.kfield1field2.val\"UPDATED!\"$unsettemp_f1", "text": "@Deepak_Kumar16, the followingThe query first uses $objectToArray to convert field1 into an array of key-value pairs. Then, it creates a new field field2.val using $cond that checks if all elements in a1 are present in temp_f1.k, which is an array of keys in field1. If the condition is true, it sets the value of field2.val to \"UPDATED!\", otherwise, it sets it to an empty string. Finally, it uses $unset to remove the temp_f1 field.is exactly the same as the original post, except that you use $addFields in the first stage rather than $set. But they are equivalent.", "username": "steevej" }, { "code": "$existstruemodified_countdb.coll.update_one({\n \"_id\": ObjectId(\"640b0e3629cb7001946137a3\"),\n \"field1.attr1\": { $exists: true },\n \"field1.attr2\": { $exists: true },\n \"field1.attr3\": { $exists: true }\n},\n{\n $set: {\n \"field2.val\": \"UPDATED!\"\n }\n})\n", "text": "As per your last post, I think you can do it in by normal update query, by checking are properties exist or not,", "username": "turivishal" }, { "code": "a1", "text": "Thank you @turivishal, this is exactly what I needed.I ended up using a for loop in python to prepare the second part of the filter query since the array a1 is dynamic but your query works just like I needed.", "username": "loay_khateeb" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Compare array with object field keys
2023-03-10T11:04:59.728Z
Compare array with object field keys
568
null
[]
[ { "code": "time_start: 1990-01-02T00:00:00.000+00:00\ntime_end: 2023-03-06T00:00:00.000+00:00\nlast_updated: 2023-03-06T21:16:58.096+00:00\n{'datetime': {'$gte': '2018-01-01', '$lte': '2023-03-09'}}\n", "text": "Hi, I have an issue with date and query.I have a MongoDB collection with records containing fields with data:So when I wish to find records with ‘find’ I am using:however it is telling me - TypeError: ‘str’ object is not a mappingI am not sure if I store the data properly - it is ISO 8601 I think, but I checked on StackOverflow that such a filter with $gte / $lte should work.Could you please help me? Thank you.", "username": "Jakub_Polec" }, { "code": "'$gte': '2018-01-01''$gte': new Date('2018-01-01'", "text": "If your dates are stored as the date data type you must use the date data type in your query.Rather than'$gte': '2018-01-01'try with'$gte': new Date('2018-01-01')The field name in your query must also match a field name from your documents. If your documents have the fields time_start, time_end and last_updated, a query with the field name datetime will not produce any result.", "username": "steevej" }, { "code": "\"last_updated\": datetime.now(pytz.utc)\n'time_end': datetime.datetime(2023, 3, 7, 0, 0)\n", "text": "Thanks. Yes, I’m using time_start or time_end for the query.Tried to do it with new Date(‘’) however it still shows me the same error with str.I have checked the way I add / update records and it is as follows:so it should be ISO 8601 properly added. And even if I find the record and then do the to_list(length=None) function it shows as:So a bit confused. Probably something simple, but I don’t see it yet. Thanks for help.", "username": "Jakub_Polec" }, { "code": "", "text": "It might be helpful if you could share the whole function where you insert and the whole function where you query.I suspect the error is in the context of the query rather than the query itself.Also specify the programming language. I think it is python by the use of datetime.now() but a confirmation would be nice. You could tag the post with the language.It would also help if you could run an aggregation that uses $type on your date fields to confirm the data type.", "username": "steevej" } ]
Issue with date and query
2023-03-09T09:04:39.356Z
Issue with date and query
564
null
[]
[ { "code": "processor\t: 0\nvendor_id\t: GenuineIntel\ncpu family\t: 6\nmodel\t\t: 26\nmodel name\t: Intel(R) Xeon(R) CPU W3520 @ 2.67GHz\nstepping\t: 5\nmicrocode\t: 0x1d\ncpu MHz\t\t: 1599.968\ncache size\t: 8192 KB\nphysical id\t: 0\nsiblings\t: 8\ncore id\t\t: 0\ncpu cores\t: 4\napicid\t\t: 0\ninitial apicid\t: 0\nfpu\t\t: yes\nfpu_exception\t: yes\ncpuid level\t: 11\nwp\t\t: yes\nflags\t\t: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida flush_l1d\nvmx flags\t: vnmi preemption_timer invvpid ept_x_only flexpriority tsc_offset vtpr mtf vapic ept vpid\nbugs\t\t: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown\nbogomips\t: 5333.21\nclflush size\t: 64\ncache_alignment\t: 64\naddress sizes\t: 36 bits physical, 48 bits virtual\npower management:\nmongodbmongodbmongodbmongodb 5.0mongodb 5.0ubuntu 226.0.3 version", "text": "My issue is very similar with setting-up-mongodb-v5-0-on-ubuntu-20-core-dump-status-4-ill, except my cpu has plenty of flags:I don’t know why, but when I install or upgrade mongodb I started feeling like in the early 90’s: every time I bought a new game it came up that I need to upgrade my pc in order to be able to play it…How can we know what CPU instructions are needed in order to be able to install a specific version of mongodb? I ask this because as a software developer I need to use the latest version of mongodb too, and when I buy a dedicated server I need to know if would be able to run it! Of course, in the same time I don’t want to buy the most expensive server just to be sure that will fit… In absence of precise specifications this task would be a gamble. For example, can anyone tell me if this particular CPU will be able to run mongodb 5.0 ? 
Or do I just need to spend more time testing myself by installing one lower version at a time?NB: of course, installing mongodb 5.0 on ubuntu 22 would be another problem, since the repo only contains the 6.0.3 version…", "username": "Sorin_GFS" }, { "code": "mongodb <6.0ubuntu 22OpenSSL 1.x.xubuntu 22OpenSSL 3.0mongodbubuntu 22mongodbubuntu20.04mongodb 5.0", "text": "Update:Installing mongodb <6.0 on ubuntu 22 is not possible since it requires OpenSSL 1.x.x while ubuntu 22 has OpenSSL 3.0. Compiling mongodb from source would simply be too much.In other words, this pc is able to install ubuntu 22 and tons of other stuff… but isn’t good enough for… mongodb!In the absence of some miracle idea it looks like the only option I have is to downgrade ubuntu to 20.04, pray to Manitu to chase the bad spirits, fingers crossed not to see the same “core-dump: STATUS 4/ILL” and try to install mongodb 5.0…Tomorrow is just another day!", "username": "Sorin_GFS" }, { "code": "mongodb 5.0ubuntu 20illmongodb", "text": "Update 2:Installing mongodb 5.0 on ubuntu 20 is also not possible; as I suspected earlier, it gives the same ill status.At this point I’m sure that this is a mongodb strategy… A dirty one!And it confirms one of my oldest claims: what comes for free… you’ll end up paying tens of times more than the most expensive item in that category!", "username": "Sorin_GFS" }, { "code": "", "text": "If you read the thread you already linked and the resources referenced you would be aware of the requirements and the options available to you if you don’t meet them.Thanks for visiting.", "username": "chris" }, { "code": "mongodb", "text": "If you read the thread you already linked and the resources referenced you would be aware of the requirements and the options available to you if you don’t meet them.I missed that part in my first read, and after that I was too busy finding a solution and I forgot to update my post. My bad, sorry, but the problem still remains: the oldest intel CPU supported starting with 5.0 is 11 years old, and this age decreases with 6.0. As I said, this policy is forcing clients to upgrade machines in order to benefit from the latest versions. On the other hand, it is impossible to avoid using the latest versions since there are tons and tons of major or minor changes at short periods of time, therefore it would be impossible to catch up if too much time has passed since the last upgrade…I don’t know any other major software behaving like mongodb in this matter!", "username": "Sorin_GFS" }, { "code": "", "text": "Nothing especially bad about this post imho ( it is hidden atm. just helping to bring it back to life. )", "username": "santimir" }, { "code": "", "text": "Disinformation and FUD.", "username": "chris" }, { "code": "", "text": "@Sorin_GFS I’ve just gone through this painful process of debugging what could be wrong.Unfortunately my PC doesn’t have the AVX instruction set", "username": "Moses_J" }, { "code": "", "text": "I can’t downgrade back to version 4.x.x without downgrading Ubuntu version.Exactly! MongoDB 4.4 is not supported on ubuntu 22.04, and the same goes for 5.0.I still couldn’t find a solution for that machine since I learned this lesson the hard way after I installed ubuntu 22.04, and now I simply can’t find the motivation to downgrade since I’ll lose all the improvements I upgraded for, mainly for security. It’s been almost a year since ubuntu 22.04 was released, and they don’t even say whether mongoDb 4.4 will be supported or not. 
See here.Looks like we are being driven to;\nuse MongoDB cloud or upgrade hardware or find an alternative DatabaseAgain you’re right. And for developers this constraint is even worse since their users will run into the same problem and they will lose their credibility. So it’s not this particular situation that concerns me, but the policy! High-end hardware can be used for a maximum of 10 years, which is nothing for the lifetime of a database.", "username": "Sorin_GFS" } ]
Setting up MongoDB v6.0 on Ubuntu 22: “core-dump: STATUS 4/ILL”
2022-12-19T18:58:02.251Z
Setting up MongoDB v6.0 on Ubuntu 22: “core-dump: STATUS 4/ILL”
4,780
null
[]
[ { "code": "", "text": "Hi, I’m fairly new to mongodb, and I’m building a small chat project. I’m trying to think of the best way to retrieve a users message history.I will save the messages with the _id of the sender and receiverSo in the first scenario, I will save the _id of the message and place it in an array in both the sender and receiver documents. and when I fetch the messages, I will use that array to look for the messages.In the second scenario, I will not save the _id to the sender and receiver, and when it comes time to fetch the messages, I will search for the messages by using the sender or receiver _id.Which do you suppose will be faster in a large dataset. For small ones, like the one i’m building, i supposed it doesn’t really matter, but say for example the message documents grow to a couple of million.And I would add that the senderId and receiverId will be indexed as wellI know there are better ways, but this are the only two I can think of at the moment.", "username": "adonis_avance" }, { "code": "", "text": "WithSo in the first scenario, I will save the _id of the message and place it in an array in both the sender and receiver documents. and when I fetch the messages, I will use that array to look for the messages.you may end up with the Massive Array anti-pattern for very popular sender or receiver.", "username": "steevej" }, { "code": "", "text": "Hi, I’m fairly new to mongodb, and I’m building a small chat project. I’m trying to think of the best way to retrieve a users message history.I will save the messages with the _id of the sender and receiverSo in the first scenario, I will save the _id of the message and place it in an array in both the sender and receiver documents. and when I fetch the messages, I will use that array to look for the messages.In the second scenario, I will not save the _id to the sender and receiver, and when it comes time to fetch the messages, I will search for the messages by using the sender or receiver _id.Which do you suppose will be faster in a large dataset. For small ones, like the one i’m building, i supposed it doesn’t really matter, but say for example the message documents grow to a couple of million.And I would add that the senderId and receiverId will be indexed as wellI know there are better ways, but this are the only two I can think of at the moment.@adonis_avance\nIn a large dataset, the first scenario where the _id of the message is saved in an array in both the sender and receiver documents is likely to be faster. This is because when fetching messages, the array can be used to quickly find the relevant messages, without the need for an index lookup. In the second scenario, a search needs to be performed on the sender or receiver _id, which would require an index lookup and could be slower.\nIt’s worth noting that there may be other ways to structure the data for even faster retrieval, such as using a separate collection for messages or using a combination of indexing and query optimization. However, for the two scenarios proposed in the question, the first one is likely to be faster in a large dataset.hope this was helpful", "username": "Deepak_Kumar16" }, { "code": "", "text": "@Deepak_Kumar16, your answer seems straight out of what ChatGPT would produce.Why do you quote the whole original post?How storing the _id of the message in one collection make fetching the message in another collection faster. You get the _id fast but not the message. 
Contrary to what you wrote, you will need an index lookup in the message collection to get the message.It’s worth noting that there may be other ways to structure the data for even faster retrieval, such as using a separate collection for messagesSuch as a separate collection for messages, really, if you only store the _id of messages in the sender and receiver, where do you store the messages if not in a separate collection.combination of indexing and query optimizationHow, please be specific.However, for the two scenarios proposed in the question, the first one is likely to be faster in a large dataset.Can you point to some documentation that substantiate what you wrote.In the second scenario, a search needs to be performed on the sender or receiver _id, which would require an index lookupIn both scenario there is lookup, you either lookup the messages in the messages collection by _id stored in the receiver/sender array or you lookup the messages in the messages collection by the receiver/sender _id.hope this was helpfulNot really because it is wrong and lacks details.", "username": "steevej" } ]
Search for documents using an array of _id vs search using an indexed value inside those documents
2023-03-03T14:07:08.693Z
Search for documents using an array of _id vs search using an indexed value inside those documents
492
null
[ "aggregation", "queries", "dot-net", "views" ]
[ { "code": "{\n \"_id\": \"2010-0001\"\n \"ArrayOfGuids\": [ Guid1, Guid2, Guid8]\n ..other fields not important\n},\n{\n \"_id\": \"2010-0002\"\n \"ArrayOfGuids\": [ Guid7, Guid10, Guid5]\n ..other fields not important\n},\n{\n \"_id\": \"2010-0003\"\n \"ArrayOfGuids\": [ Guid14, Guid11, Guid3]\n ..other fields not important\n}\n{\n \"_id\": \"SomeId\",\n \"name\": \"SomeName\"\n \"modified\": DateTime\n \"ArrayOfGuids\": [ Guid1, Guid2, Guid3]\n ..other fields not important\n}\n{\n \"_id\": \"SomeId\",\n \"name\": \"SomeName\"\n \"modified\": DateTime\n \"mappedOtherIds\": [\"2010-0001\",\"2010-0003\"]\n}\n", "text": "Hello there, I am trying to evaluate whether I can create a Mongo view which joins two collections and inspects two arrays of Guids. Ideally, I would like to use a view, since I have a C# API layer which is leveraging IQueryable and expressions through the MongoDB C# driver. If need be, I could also probably use the aggregation pipeline and aggregate().In a nutshell here is an example of the two collections:FirstCollection: (probably can have 500,000-600,000 documents max or so)SecondCollection: (probably will not contain more then 1000 documents)The “intersection” I need to do is between the 2 collections on the “ArrayOfGuids” properties. For every document in the SecondCollection, if the ArrayOfGuids has any match in FirstCollection, return a projection of:SecondCollection._id,\nSecondCollection.name,\nSecondCollection.modified,\nFirstCollection._id (as array)From the data sample above, one projection returned would beSo, since FirstCollection _id “2010-0001” and _id “2010-0003” contain a matching Guid in its corresponding array, it is included in the result. The smaller collection is more or less the primary data to return, but I need to project the array of ids from the FirstCollection where an array item match exists.The larger FirstCollection does have an index on the “ArrayOfGuids” field and its _id field. Unfortunately, the smaller, second collection does not have an index on the “ArrayOfGuids” column, but the addition of another index is potentially possible.I have looked and $unwind and flattening both arrays but that didn’t seem to lead me to a proper intersection query. I am also looking to create a view which is as optimal as possible, given I don’t want to revert to doing this in memory at the app tier and using auxiliary caches and such. If this query would be too intensive and slow, I probably have to abandon the idea. Also, restructuring these collections, minus adding an index or so, is not possible at the moment.If anyone can provide a sample or advice to nudge me in the right direction, that would be great. Thanks in advance.", "username": "Martin_Koslof" }, { "code": "db.createView(\"myView\", \"collection1\", [\n {\n $lookup: {\n from: \"collection2\",\n localField: \"commonField\",\n foreignField: \"commonField\",\n as: \"joinedData\"\n }\n },\n {\n $unwind: {\n path: \"$joinedData\",\n preserveNullAndEmptyArrays: true\n }\n }\n])\n", "text": "I am trying to evaluate whether I can create a Mongo view which joins two collections and inspects two arrays of Guids. Ideally, I would like to use a view, since I have a C# API layer which is leveraging IQueryable and expressions through the MongoDB C# driver. If need be, I could also probably use the aggregation pipeline and aggregate().@Martin_Koslof I recommend using the “aggregate” method with the “$lookup” stage to join the collections based on a matching field. 
You can then use other stages such as “$project” or “$group” to shape the output of the view as needed. Here is an example of how to create a view from two collections:In the above example, the “myView” view is being created based on “collection1”. The “$lookup” stage is used to join “collection1” with “collection2” based on a common field called “commonField”. The resulting documents are then flattened using “$unwind” to create one document per matching pair of documents from the two collections.\nNote that in order to create a view in MongoDB, you must have the “createView” privilege on the database.Hope this was helpful ", "username": "Deepak_Kumar16" } ]
Best way to create a View from these two collections
2023-03-09T23:06:57.779Z
Best way to create a View from these two collections
1,438
null
[ "queries", "node-js", "crud" ]
[ { "code": "", "text": "I have a JS application where I have a mongodb driver. After connecting to it I try to run ‘const result = await db.command(eval(commandString));’ in order to run diverse types of commands against my database from a string, for example: db.collection(“users”).updateOne( { name: “Rodrigo” }, { $set: { age: 24 } }). When it triggers, the query is executed correctly in the database and the document is updated, but the Mongo Server returns ‘MongoServerError: command not found’. I want to get rid of this error message and just have the query being executed successfully. Thanks in advance ", "username": "Jorge_Nava" }, { "code": "", "text": "@Jorge_Nava The error message “MongoServerError: command not found” indicates that the MongoDB server was unable to find the command that was executed. This could be due to a number of reasons, such as incorrect syntax or an invalid command. One possible reason is that the command being executed is not supported by the version of MongoDB that you are using.\nTo troubleshoot the issue, you can check the version of MongoDB that you are using and compare it to the documentation to ensure that the command is supported. You should also validate the syntax of the command and ensure that it is correct.\nAnother thing to consider is that the error message suggests that the command is being executed as a top-level command, rather than being executed in the context of a specific database. You may want to try running the command within a specific database by specifying the database name in the command.\nFinally, you can try using a different approach to executing the command, such as using the MongoDB shell or a different driver method. For example, instead of using db.command(eval(commandString)), you could try using db.eval(commandString) or a different driver method that is better suited for the type of command that you are executing.\nOverall, the key is to validate the syntax of the command, ensure that it is supported by the version of MongoDB that you are using, and consider using a different approach to executing the command if necessary.", "username": "Deepak_Kumar16" } ]
Running 'db.command(eval(commandString));' gets me 'MongoServerError: command not found'
2023-03-10T21:58:23.062Z
Running &lsquo;db.command(eval(commandString));&rsquo; gets me &lsquo;MongoServerError: command not found&rsquo;
859
null
[ "node-js", "replication", "mongoose-odm", "containers" ]
[ { "code": "MongooseServerSelectionError: Server selection timed out after 30000 ms\n at Function.Model.$wrapCallback (/usr/share/backend-server/node_modules/mongoose/lib/model.js:5192:32)\n at /usr/share/backend-server/node_modules/mongoose/lib/query.js:4901:21\n at /usr/share/backend-server/node_modules/mongoose/lib/helpers/promiseOrCallback.js:41:5\n at new Promise (<anonymous>)\n at promiseOrCallback (/usr/share/backend-server/node_modules/mongoose/lib/helpers/promiseOrCallback.js:40:10)\n at model.Query.exec (/usr/share/backend-server/node_modules/mongoose/lib/query.js:4900:10)\n at model.Query.Query.then (/usr/share/backend-server/node_modules/mongoose/lib/query.js:4983:15) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'rs1:27017' => [ServerDescription],\n 'rs2:27017' => [ServerDescription],\n 'rs3:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'rs0',\n maxElectionId: new ObjectId(\"7fffffff0000000000000053\"),\n maxSetVersion: 1,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: 30\n },\n code: undefined\n}\nversion: '3.3'\nsecrets:\n mongo_cluster_key:\n external: true\nservices:\n rs1:\n image: mongodb-custom:v1.0.0\n command: mongod --keyFile /run/secrets/mongo_cluster_key --replSet \"rs0\"\n networks:\n - mongo\n ports:\n - 27017:27017\n secrets:\n - source: mongo_cluster_key\n target: mongo_cluster_key\n uid: '999'\n gid: '999'\n mode: 0400\n environment:\n - MONGO_INITDB_ROOT_USERNAME=admin \n - MONGO_INITDB_ROOT_PASSWORD=password\n - MONGO_INITDB_DATABASE=admin\n - MAIN_MONGO_DB_NAME=testing\n - MAIN_MONGO_DB_USERNAME=test\n - MAIN_MONGO_DB_PASSWORD=password\n - MAIN_MONGO_DB_ROLE=readWrite\n deploy:\n replicas: 1\n volumes:\n - rs1:/data/db \n - rs1:/data/configdb\n rs2:\n image: mongodb-custom:v1.0.0\n command: mongod --keyFile /run/secrets/mongo_cluster_key --replSet \"rs0\"\n networks:\n - mongo\n secrets:\n - source: mongo_cluster_key\n target: mongo_cluster_key\n uid: '999'\n gid: '999'\n mode: 0400\n environment:\n - MONGO_INITDB_ROOT_USERNAME=admin \n - MONGO_INITDB_ROOT_PASSWORD=password\n - MONGO_INITDB_DATABASE=admin\n - MAIN_MONGO_DB_NAME=testing\n - MAIN_MONGO_DB_USERNAME=test\n - MAIN_MONGO_DB_PASSWORD=password\n - MAIN_MONGO_DB_ROLE=readWrite\n deploy:\n replicas: 1\n volumes:\n - rs2:/data/db \n - rs2:/data/configdb\n rs3:\n image: mongodb-custom:v1.0.0\n command: mongod --keyFile /run/secrets/mongo_cluster_key --replSet \"rs0\"\n networks:\n - mongo\n secrets:\n - source: mongo_cluster_key\n target: mongo_cluster_key\n uid: '999'\n gid: '999'\n mode: 0400\n environment:\n - MONGO_INITDB_ROOT_USERNAME=admin \n - MONGO_INITDB_ROOT_PASSWORD=password\n - MONGO_INITDB_DATABASE=admin\n - MAIN_MONGO_DB_NAME=testing\n - MAIN_MONGO_DB_USERNAME=test\n - MAIN_MONGO_DB_PASSWORD=password\n - MAIN_MONGO_DB_ROLE=readWrite\n deploy:\n replicas: 1\n volumes:\n - rs3:/data/db \n - rs3:/data/configdb\n rs:\n image: mongodb-custom:v1.0.0\n command: /usr/local/bin/replica-init.sh\n networks:\n - mongo\n secrets:\n - source: mongo_cluster_key\n target: mongo_cluster_key\n uid: '999'\n gid: '999'\n mode: 0400\n environment:\n - MONGO_INITDB_ROOT_USERNAME=admin \n - MONGO_INITDB_ROOT_PASSWORD=password\n - MONGO_INITDB_DATABASE=admin\n - MAIN_MONGO_DB_NAME=testing\n - MAIN_MONGO_DB_USERNAME=test\n - MAIN_MONGO_DB_PASSWORD=password\n - MAIN_MONGO_DB_ROLE=readWrite\n deploy:\n restart_policy:\n condition: on-failure\n delay: 5s\n max_attempts: 
10\nvolumes:\n rs1:\n driver: local\n rs2:\n driver: local\n rs3:\n driver: local\nnetworks:\n mongo:\n driver: overlay\n driver_opts:\n encrypted: \"true\"\n internal: true\n attachable: true\n#!/bin/bash\n# Make sure 3 replicas available\nfor rs in rs1 rs2 rs3;do\n mongo --host $rs --eval 'db'\n if [ $? -ne 0 ]; then\n exit 1\n fi\ndone\nMONGO_INITDB_ROOT_USERNAME=\"$(< $MONGO_INITDB_ROOT_USERNAME_FILE)\"\nMONGO_INITDB_ROOT_PASSWORD=\"$(< $MONGO_INITDB_ROOT_PASSWORD_FILE)\"\n# Connect to rs1 and configure replica set if not done\nstatus=$(mongo --host rs1 --quiet --eval 'rs.status().members.length')\nif [ $? -ne 0 ]; then\n # Replicaset not yet configured\n mongo --username $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --host rs1 --eval 'rs.initiate({ _id: \"rs0\", version: 1, members: [ { _id: 0, host : \"rs1\", priority: 100 }, { _id: 1, host : \"rs2\", priority: 2 }, { _id: 2, host : \"rs3\", priority: 2 } ] })';\nfi\nconst mongoose = require('mongoose');\n// MongoDB Connection Class\nclass MongoDB {\n constructor() {\n mongoose\n .connect('mongodb://test:password@rs1:27017,rs2:27017,rs3:27017/testing?replicaSet=rs0')\n .then(() => {\n console.log('Connected to MongoDB');\n })\n .catch((err) => {\n console.error('MongoDB Error: ', err.message);\n });\n // Add error handler while connected\n mongoose.connection.on('error', (err) => {\n console.log('MongoDB Error: ', err);\n });\n mongoose.pluralize(null);\n }\n}\nmodule.exports = new MongoDB();\n", "text": "Mongo: 4.4.5\nMongoose: 6.6.0\nNode: 14.20.1 (bullseye-slim)\nDocker: 20.10.8We have a docker swarm with multiple servers and a mongo stack with 3 replicas each on different servers.\nOur rs1(primary) replica goes down due to a Docker restart. When this happens an election occurs and rs3 is selected as the new primary. After rs1 is re-elected a few seconds later, some of our backend client replicas receive the following error when making queries:Upon checking the status of the replicas, rs1 is the primary and most of our backend clients fulfill queries correctly. We have seen this issue occur on two separate occasions and we are unsure why some Mongoose clients are unable to find the primary replica after the election. We have been unable to recreate the error intentionally.The image being used is for Mongodb 4.4.5We have 2500 connections, between 60 clients, so using the directConnection flag may be too slow.We verified that the authentication and connection strings are all setup correctly. When this error occurs, restarting the backend server docker containers fixes the issue. We are not receiving any connection errors in our backend server logs. Errors only appear when querying the database.", "username": "Austin_Conry" }, { "code": "", "text": "Wouldn’t a restart of the Docker environment also bring down the Docker network as well? The TCP/IP connection made by your client prior to the Docker restart would have been interrupted by the Docker restart. 
It could be that the driver in your app attempts to reestablish the connection but the Docker network is down at the time during its restart.", "username": "Steve_Hand1" }, { "code": "", "text": "@Austin_Conry … not sure, if you were able to resolve … but here is some help from my limited knowledgeThe error message “MongooseServerSelectionError: Server selection timed out after 30000 ms” indicates that the Mongoose client was unable to connect to a MongoDB server within the specified timeout period.\nThis error occurred after the primary replica (rs1) went down due to a Docker restart and resulted in an election that selected rs3 as the new primary. After rs1 was re-elected a few seconds later, some backend client replicas could not find the primary replica, resulting in the error.\nThe relevant issue seems to be related to the Mongoose client’s inability to communicate with the primary replica. It is unclear why some replicas are experiencing this issue while others are not. The error could be a result of a network issue or a bug in the MongoDB NodeJS driver.\nTo resolve this issue, you can try the following steps:", "username": "Deepak_Kumar16" } ]
Mongoose client not reconnecting to primary Mongo replica after replica election
2022-12-30T20:16:41.855Z
Mongoose client not reconnecting to primary Mongo replica after replica election
1,649
null
[]
[ { "code": "", "text": "I want to read my Realm Objects, concatenate the three properties in order to load each calculated String element, and load the results as a single element in an array to be used in a SwiftUI Picker. I can feed a hardcoded array with strings (year-yearType-yearComment) but cannot get realm results to load into the array. I am not updating the array, just selecting values to save in a different string realm db property. I am working with local realms and no sync or update of the source from this view.", "username": "Michael_Granberry" }, { "code": "", "text": "It’s always helpful to us if code is included in the question that shows what’s being attempted. Describing the issue helps, but the actual issue is unclear. What’s preventing the realm results from being added to the array? What’s in the array? Do you have separate Swift objects that contain the concatenated realm data or is it just a string?Please update the question with code and clarification and we’ll take a look.", "username": "Jay" } ]
Local Realm feeding an array for input into SwiftUI Picker
2023-03-10T16:39:06.048Z
Local Realm feeding an array for input into SwiftUI Picker
645
null
[ "java" ]
[ { "code": "", "text": "Hi,I am using PojoCodecRegistry (BSON library) and need to map the the \"_id \" field of the Mongo DB document to user defined POJO field.However, the “_id” field is getting mapped to null. I have a property named as :I am using mongo java 4.0 driver and JAVA 8", "username": "ADITYA_RATRA" }, { "code": "_id_id", "text": "Hello Aditya,Can you provide details about how you are mapping the _id field - please include your Pojo class with the _id field mapping.However, the “_id” field is getting mapped to null. I have a property named as :Also, please include code you had tried to build the Pojo object and insert into the collection.", "username": "Prasad_Saya" }, { "code": "@BsonProperty(\"_id\") \n@JsonProperty(\"_id\")\nprivate ObjectId _id;\n\nprivate String name;\nprivate int age;\nprivate int courseCount;\nprivate String email;\nprivate boolean isVerified;\nprivate ArrayList<String> hobbies;\nprivate ContactDetails contactDetails;\nprivate ArrayList<Marks> marks;\n\n// Getters and Setters\n\n// toString() method\n", "text": "Thanks So very much for acknowledging the concern.\nThe POJO class is something like below ::package org.mongo;//imports@JsonIgnoreProperties(ignoreUnknown = true)\n@JsonInclude(JsonInclude.Include.NON_NULL)\npublic class Student {\npublic Student() {\t\n}}Basically this error is occurring when the BSON is deserialized to Java Object. _id field is mapped to null. Rest all other fields are mapped properly.Please let me know how to solve this problem.", "username": "ADITYA_RATRA" }, { "code": "ObjectIdnew Student()", "text": "Basically this error is occurring when the BSON is deserialized to Java Object. _id field is mapped to null.The Pojo class is good.You can create Pojo objects and insert into a MongoDB collection as documents and retrieve them - without problem. All this using the default driver created ObjectId.You can post the code related with creating the object (new Student(), setting the properties, etc.,), how you are inserting and retrieving.Also, see code examples at: MongoDB Java Driver - POJOs.", "username": "Prasad_Saya" }, { "code": "", "text": "Is there any solution for the above issue I have the same problem too. Thanks", "username": "Prashant_Abbigeri" }, { "code": "", "text": "Please use @BsonId and ObjectId along with fieldName ‘id’ you will get the objectIdsome thing like@BsonId\nprivate ObjectId id;reason is by default Bson will be using ‘_’ as word saparator of database feilds like below\nlets take an example\nDB field user_id\nPojo class userId then it will map to it.It is working for me , hope will help you as well.", "username": "ameer_Y" } ]
"_id" field mapped to null || PojoCodecRegistry
2020-06-22T20:32:21.740Z
"_id" field mapped to null || PojoCodecRegistry
11,292
null
[ "python", "spark-connector", "scala" ]
[ { "code": " spark = SparkSession \\\n .builder \\\n .config(\"spark.jars.packages\", \"org.mongodb.spark:mongo-spark-connector_2.13:10.1.1\") \\\n .getOrCreate()\n streaming_df = spark \\\n .readStream \\\n .format(\"mongodb\") \\\n .option(\"spark.mongodb.connection.uri\", \"mongodb://localhost:27017\") \\\n .option(\"spark.mongodb.database\", \"database_name\") \\\n .option(\"spark.mongodb.collection\", \"collection_name\") \\\n .schema(data_schema) \\\n .load()\n...\n# operations, among which there is a groupby\n...\n source_aggregation \\\n .writeStream \\\n .format(\"mongodb\") \\\n .option(\"spark.mongodb.connection.uri\", \"mongodb://localhost:27017\") \\\n .option(\"spark.mongodb.database\", \"database_name\") \\\n .option(\"spark.mongodb.collection\", \"other_collection_name\") \\\n .option('replacedocument', 'true') \\\n .option(\"checkpointLocation\", os.path.join(\"checkpoint\", \"mongodb_checkpoint\")) \\\n .start()\\\n .awaitTermination()\n", "text": "Hi, I’m trying to read data from a MongoDB collection using Spark Structured Streaming (Python version). On these data I need to apply some operations, such as groupby, which are not supported in the continuous mode. For this reason I’m trying to use the microbatch mode instead.\nIn order to do so I have tryied to install the version 10.1 of the Spark Connector, which should support the microbatch mode (as said in this post), but it still does not seem to work.\nWhat I’ve been doing is essentialy one of the following two things:Here instead a sample of the code that I want to run:Some informations about my working environment:Do you have any suggestions? Thank you in advance", "username": "andcan" }, { "code": "", "text": "Can you paste the logs? What errors/warnings are you getting?", "username": "Robert_Walters" }, { "code": "pyspark.sql.utils.StreamingQueryException: Query Sources writing [id = 2a173727-93d0-41e0-8e22-5ff7e636d5be, runId = 6255dbf8-9010-44f0-aa90-cd6c3d0ca869] terminated with exception: Data source mongodb does not support microbatch processing.\npyspark.sql.utils.StreamingQueryException: Query Sources writing [id = 2a173727-93d0-41e0-8e22-5ff7e636d5be, runId = 73c29cef-2cd4-4134-b78a-a7f34018ac9a] terminated with exception: org.apache.spark.sql.types.StructType.toAttributes()Lscala/collection/immutable/Seq;\n", "text": "Thank you for the reply. I get two different kinds of errors based on the way I try to use version 10.1.\nIn the case of the modified Spark Session I get this:About this error, it may be useful to point out that I’ve been (and I still am) using the version 10.0.5 without any problems. Because of this I interpreted the error as if version 10.1 is not being actually used at all with this configuration.If instead I insert the downloaded jar (the one named “mongo-spark-connector_2.13-10.1.1.jar” at this link) in the Spark jars folder I get this error:which I have more problems understanding.Thank you again in advance", "username": "andcan" } ]
Problems with the spark-connector in version 10.1.1
2023-03-09T00:36:02.865Z
Problems with the spark-connector in version 10.1.1
1,497
null
[ "transactions" ]
[ { "code": "List {\n....\n}.onDelete { offset in\ntransactionModel.deleteTransaction(expense: expensesList[offset.first!])\n}\nif let localRealm = realm {\n if let toDelete = localRealm.object(ofType: ExpenseData.self, forPrimaryKey: expense.uuid) {\n try! localRealm.write {\n localRealm.delete(toDelete)\n }\n }\n}\nRLMException', reason: 'Object has been deleted or invalidated.'\n", "text": "SwifUI viewTransactionViewModelError:@Mohit_SharmaThanks", "username": "Eman_Nollase" }, { "code": "", "text": "Some debugging may be in order. Add a breakpoint to your code and step through it line by line, inspecting the vars and code execution along the way. When you spot something unexpected, update your question with that info and also include the specific line that’s crashing.Also, the code is a bit incomplete so we don’t really know what’s being called or in what order.", "username": "Jay" }, { "code": "", "text": "Thanks Jay, after further investigation, i just figure out the cause of the issue. The issue comes when i try to access the expense object after i deleted it. So I just assign the properties i needed before deleting the object. thanks", "username": "Eman_Nollase" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Crashing on SwiftUI onDelete
2023-03-09T17:21:44.005Z
Crashing on SwiftUI onDelete
849
https://www.mongodb.com/…d_2_1024x430.png
[]
[ { "code": "", "text": "\nimage1920×807 147 KB\nI did as it was written, I copied and pasted the code, but it gives this error continually. What am I doing wrong please?", "username": "Ayomide_Bankole" }, { "code": "mongoshmongosh", "text": "You have typed that into mongosh that command is intended for the command line on the host.Exit mongosh and try the command again.", "username": "chris" } ]
SyntaxError on the CLI
2023-03-09T06:39:00.920Z
SyntaxError on the CLI
600
null
[]
[ { "code": "", "text": "I followed every tutorial in the internet also the docs of how to install SSL/TLS the correct way because if i didn’t do this , my database always getting hacked and i am tired of this , please i need the updated version of how to install SSL/TLS inside ubuntu because the docs is outdated and it doesn’t explain anything useful , please help", "username": "TIA" }, { "code": "", "text": "my database always getting hacked and i am tired of thisThis has less to do with TLS and more to do with access control, authentication and authorization. TLS protects data in transit, it won’t protect the server from a ‘hack’, you could set up TLS perfectly in its most secure form and still leave the server wide open for anyone to access.I Recommend:\nRestrict access to the database by only allowing certain IP addresses access, block all others.\nEnable authentication with strong passwords/passphrases.\nUse existing built-in roles or create your own to restrict users only to the operations and databases required.https://www.mongodb.com/docs/manual/core/authentication/\nhttps://www.mongodb.com/docs/manual/core/authorization/\nhttps://www.mongodb.com/docs/manual/core/security-hardening/#network-hardeningplease i need the updated version of how to install SSL/TLS inside ubuntu because the docs is outdated and it doesn’t explain anything usefulThe tutorial (https://www.mongodb.com/docs/manual/tutorial/configure-ssl/) is accurate, what particular issue are you having?", "username": "chris" } ]
Having trouble installing SSL inside Ubuntu server
2023-03-09T22:25:14.252Z
Having trouble installing SSL inside Ubuntu server
513
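A minimal mongod.conf sketch that combines the TLS setup from the linked tutorial with the hardening steps recommended in the thread above; the certificate paths and the second bind address are placeholders, not values taken from the thread:

net:
  port: 27017
  bindIp: 127.0.0.1,10.0.0.5                   # placeholder: list only the interfaces that must reach mongod
  tls:
    mode: requireTLS
    certificateKeyFile: /etc/ssl/mongodb.pem   # placeholder path: server certificate plus private key
    CAFile: /etc/ssl/ca.pem                    # placeholder path: CA that signed the certificates
security:
  authorization: enabled                       # require authentication and role-based access control

With authorization enabled, create users with the least-privileged built-in roles that still cover their workload, as the authorization page linked above describes.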
null
[]
[ { "code": "", "text": "Hi,I found a message while checking the Mongos log.Can you tell me what this message means?log:\n2020-07-23T05:35:47 W NETWORK [listener] Error accepting new connection SocketException: remote_endpoint: Transport endpoint is not connectedMongo Version: 4.2.6\nconfiguration(5 Servers):\n4 shards (replica set)\n5 mognos\n3 config", "username": "minjeong_ban" }, { "code": "", "text": "Hey minjeong,\nI can see this is an old topic but i see the exactly same message all over my mongos logs.Could you find the reason and fixed it?My setup is nearly the same but different numbers.\n4 config,4 mongos, 3 replica set", "username": "Emre_Tombaloglu" } ]
Warning Message (SocketException)
2020-07-27T05:42:30.064Z
Warning Message (SocketException)
2,254
null
[ "dot-net", "transactions" ]
[ { "code": "An exception occurred while receiving a message from the server.\nAttempted to read past the end of the stream.\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(Int32 responseTo, CancellationToken cancellationToken)\n--- End of stack trace from previous location ---\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBuffer(Int32 responseTo, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.PooledConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.ConnectionPools.ExclusiveConnectionPool.AcquiredConnection.ReceiveMessage(Int32 responseTo, IMessageEncoderSelector encoderSelector, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingCommandMessageWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.WireProtocol.CommandWireProtocol`1.Execute(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocol[TResult](IWireProtocol`1 protocol, ICoreSession session, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.Command[TResult](ICoreSession session, ReadPreference readPreference, DatabaseNamespace databaseNamespace, BsonDocument command, IEnumerable`1 commandPayloads, IElementNameValidator commandValidator, BsonDocument additionalOptions, Action`1 postWriteAction, CommandResponseHandling responseHandling, IBsonSerializer`1 resultSerializer, MessageEncoderSettings messageEncoderSettings, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableWriteCommandOperationBase.ExecuteAttempt(RetryableWriteContext context, Int32 attempt, Nullable`1 transactionNumber, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableWriteOperationExecutor.Execute[TResult](IRetryableWriteOperation`1 operation, RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatches(RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.Execute(RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteBatch(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.Execute(IWriteBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteWriteOperation[TResult](IWriteBinding binding, IWriteOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteWriteOperation[TResult](IClientSessionHandle session, IWriteOperation`1 operation, 
CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IClientSessionHandle session, IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSession[TResult](Func`2 func, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.BulkWrite(IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionBase`1.ReplaceOne(FilterDefinition`1 filter, TDocument replacement, ReplaceOptions options, Func`3 bulkWrite)\n at MongoDB.Driver.MongoCollectionBase`1.ReplaceOne(FilterDefinition`1 filter, TDocument replacement, ReplaceOptions options, CancellationToken cancellationToken)\n at SOT.MongoDB.Extensions.MongoCollectionExtension.Update[T](IMongoCollection`1 pT, T pEntity)\n", "text": "Hi,Sometimes when saving a record in the database I get the following error. I can’t figure out what’s causing the error because it only happens sometimes.", "username": "Antonio_Moreira" }, { "code": "\"Attempted to read past the end of the stream\"", "text": "Hi @Antonio_Moreira,Welcome to the MongoDB Community forums The error message \"Attempted to read past the end of the stream\" indicates that the program is trying to read data from a MongoDB server, but the data stream has ended unexpectedly.it only happens sometimes.Can you confirm if you see the pattern - might be related to the load on the database at certain times, like during the day or week. Have you checked your server logs? That might give us some clues as to the specific cause.Please share the specific driver version you are using here. Also, what is the batch size of the documents you are inserting at once in the collection when this error throws?Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi @Kushagra_Kesav ,Thank you.I’m iterating a list and for each iteration I write one documento to the database.\nI’m using the following version Regards,\nAntonio Moreira", "username": "Antonio_Moreira" }, { "code": "", "text": "Hi @Antonio_Moreira,Apologies for the late response, I think you missed putting information in your response. Could you please provide the version details so that the error can be reproduced accurately and reliably?Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "{\"t\":{\"$date\":\"2023-02-27T23:27:42.191+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":20883, \"ctx\":\"conn15697203\",\"msg\":\"Interrupted operation as its client disconnected\",\"attr\":{\"opId\":290227564}}\n", "text": "Hi, regarding the same exact problem I would like to add more information.The application is being run on a AWS container and the MongoDB too.\nWhen this error happens, this is the log that we get from Mongo:This error keeps happening every 10/11 minutes until the end of the process.It is also worth noticing that it never happens when running things locally.Thanks in advance!\nBest Regards,\nGustavo Jorge", "username": "Guga_Jorge" }, { "code": "", "text": "Hi @Guga_Jorge,In many cases, even though the message looks the same, the cause could be different. Can you please open a new thread and provide the required details for a better understanding of the question?Please share the specific driver version you are using here. 
Also, what is the batch size of the documents you are inserting at once in the collection when this error throws?Also, kindly share the MongoDB version you are using.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
Attempted to read past the end of the stream
2023-02-02T15:06:15.938Z
Attempted to read past the end of the stream
2,396
null
[ "aggregation", "compass" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"63e0c7c6acf3efb62bddb066\"\n },\n \"quoteID\": {\n \"$oid\": \"63d0e7af279938cefad3e1fc\"\n },\n \"userID\": 10,\n \"timestamp\": 1675675642.5906088,\n \"outcome\": \"Draw\",\n \"quote\": {\n \"_id\": {\n \"$oid\": \"63d0e7af279938cefad3e1fc\"\n },\n \"league\": {\n \"id\": 78,\n \"name\": \"Bundesliga\",\n \"country\": \"Germany\",\n \"logo\": \"https://media.api-sports.io/football/leagues/78.png\",\n \"flag\": \"https://media.api-sports.io/flags/de.svg\",\n \"season\": 2022\n },\n \"fixture\": {\n \"id\": 871300,\n \"timezone\": \"UTC\",\n \"date\": \"2023-01-20T19:30:00+00:00\",\n \"timestamp\": 1674243000\n },\n \"update\": \"2023-01-20T12:01:00+00:00\",\n \"bookmakers\": [\n {\n \"id\": 6,\n \"name\": \"Bwin\",\n \"bets\": [\n {\n \"id\": 1,\n \"name\": \"Match Winner\",\n \"values\": [\n {\n \"value\": \"Home\",\n \"odd\": \"3.90\"\n },\n {\n \"value\": \"Draw\",\n \"odd\": \"3.90\"\n },\n {\n \"value\": \"Away\",\n \"odd\": \"1.83\"\n }\n ]\n }\n ]\n }\n ],\n \"timestamp\": 1674635183.38935\n },\n \"match\": {\n \"_id\": {\n \"$oid\": \"63cfee5ef80c2c7462b89a84\"\n },\n \"fixture\": {\n \"id\": 871300,\n \"referee\": \"Daniel Siebert, Germany\",\n \"timezone\": \"UTC\",\n \"date\": \"2023-01-20T19:30:00+00:00\",\n \"timestamp\": 1674243000,\n \"periods\": {\n \"first\": 1674243000,\n \"second\": 1674246600\n },\n \"venue\": {\n \"id\": 738,\n \"name\": \"Red Bull Arena\",\n \"city\": \"Leipzig\"\n },\n \"status\": {\n \"long\": \"Match Finished\",\n \"short\": \"FT\",\n \"elapsed\": 90\n }\n },\n \"league\": {\n \"id\": 78,\n \"name\": \"Bundesliga\",\n \"country\": \"Germany\",\n \"logo\": \"https://media-3.api-sports.io/football/leagues/78.png\",\n \"flag\": \"https://media-3.api-sports.io/flags/de.svg\",\n \"season\": 2022,\n \"round\": \"Regular Season - 16\"\n },\n \"teams\": {\n \"home\": {\n \"id\": 173,\n \"name\": \"RB Leipzig\",\n \"logo\": \"https://media-3.api-sports.io/football/teams/173.png\",\n \"winner\": null\n },\n \"away\": {\n \"id\": 157,\n \"name\": \"Bayern Munich\",\n \"logo\": \"https://media-3.api-sports.io/football/teams/157.png\",\n \"winner\": null\n }\n },\n \"goals\": {\n \"home\": 1,\n \"away\": 1\n },\n \"score\": {\n \"halftime\": {\n \"home\": 0,\n \"away\": 1\n },\n \"fulltime\": {\n \"home\": 1,\n \"away\": 1\n },\n \"extratime\": {\n \"home\": null,\n \"away\": null\n },\n \"penalty\": {\n \"home\": null,\n \"away\": null\n }\n },\n \"timestamp\": 1675673856.844521\n },\n \"result\": \"Draw\"\n}\n{\n points: {\n $cond: {\n if: {\n $eq: [\"$outcome\", \"$result\"],\n },\n then: {\n $arrayElemAt: [\n {\n $filter: {\n input:\n \"$quote.bookmakers.bets.values\",\n as: \"value\",\n cond: {\n $eq: [\n \"$$value.value\",\n \"$outcome\",\n ],\n },\n },\n },\n 0,\n ],\n },\n else: 0,\n },\n },\n}\n\"$quote.bookmakers.bets.values\"", "text": "Hey,I have a problem with an $addFields aggregation in a pipeline. And despite it’s great though, even ChatGPT couldn’t solve it In a pipeline, I have a document like this:As you can imagine I want to simplify the document, thus, I want to add another field “points” that reflects the correct odd. So for example when the outcome was predicted correctly, a new field should directly show the odd of the respective outcome (in this case a Draw → points: 3.90).This code ($addFields) returns nothing (same document as above but without the additional “points” field) even tho in the pipeline in the step before there’s a document:Hardcoding “$outcome” to “Draw” also doesn’t change it. 
When I use\n\"$quote.bookmakers.bets.values\"\nin the else: statement and making the if condition above to return false, it does return the correct Array (just for testing purposes to see if it can access the data).Does anyone have an idea? I use MongoDB 6.0.3 in MongoDB Compass.Thanks!!", "username": "Patrick01234" }, { "code": "db.test.aggregate([\n {\n $addFields: {\n points: {\n $cond: {\n if: {\n $eq: [\"$outcome\", \"$result\"],\n },\n then: {\n $first: {\n $map: {\n input: {\n $filter: {\n input: {\n $arrayElemAt: [\n {\n $arrayElemAt: [\n \"$quote.bookmakers.bets.values\",\n 0,\n ],\n },\n 0,\n ],\n },\n cond: {\n $eq: [\n \"$$this.value\",\n \"$outcome\",\n ],\n },\n },\n },\n as: \"val\",\n in: \"$$val.odd\",\n },\n },\n },\n else: 0,\n },\n },\n },\n },\n])\n\"points\"{\n \"_id\": {\n \"$oid\": \"63e0c7c6acf3efb62bddb066\"\n },\n \"quoteID\": {\n \"$oid\": \"63d0e7af279938cefad3e1fc\"\n },\n \"userID\": 10,\n \"timestamp\": 1675675642.5906088,\n \"outcome\": \"Draw\",\n \"quote\": {\n \"_id\": {\n \"$oid\": \"63d0e7af279938cefad3e1fc\"\n },\n \"league\": {\n \"id\": 78,\n \"name\": \"Bundesliga\",\n \"country\": \"Germany\",\n \"logo\": \"https://media.api-sports.io/football/leagues/78.png\",\n \"flag\": \"https://media.api-sports.io/flags/de.svg\",\n \"season\": 2022\n },\n \"fixture\": {\n \"id\": 871300,\n \"timezone\": \"UTC\",\n \"date\": \"2023-01-20T19:30:00+00:00\",\n \"timestamp\": 1674243000\n },\n \"update\": \"2023-01-20T12:01:00+00:00\",\n \"bookmakers\": [\n {\n \"id\": 6,\n \"name\": \"Bwin\",\n \"bets\": [\n {\n \"id\": 1,\n \"name\": \"Match Winner\",\n \"values\": [\n {\n \"value\": \"Home\",\n \"odd\": \"3.90\"\n },\n {\n \"value\": \"Draw\",\n \"odd\": \"3.90\"\n },\n {\n \"value\": \"Away\",\n \"odd\": \"1.83\"\n }\n ]\n }\n ]\n }\n ],\n \"timestamp\": 1674635183.38935\n },\n \"match\": {\n \"_id\": {\n \"$oid\": \"63cfee5ef80c2c7462b89a84\"\n },\n \"fixture\": {\n \"id\": 871300,\n \"referee\": \"Daniel Siebert, Germany\",\n \"timezone\": \"UTC\",\n \"date\": \"2023-01-20T19:30:00+00:00\",\n \"timestamp\": 1674243000,\n \"periods\": {\n \"first\": 1674243000,\n \"second\": 1674246600\n },\n \"venue\": {\n \"id\": 738,\n \"name\": \"Red Bull Arena\",\n \"city\": \"Leipzig\"\n },\n \"status\": {\n \"long\": \"Match Finished\",\n \"short\": \"FT\",\n \"elapsed\": 90\n }\n },\n \"league\": {\n \"id\": 78,\n \"name\": \"Bundesliga\",\n \"country\": \"Germany\",\n \"logo\": \"https://media-3.api-sports.io/football/leagues/78.png\",\n \"flag\": \"https://media-3.api-sports.io/flags/de.svg\",\n \"season\": 2022,\n \"round\": \"Regular Season - 16\"\n },\n \"teams\": {\n \"home\": {\n \"id\": 173,\n \"name\": \"RB Leipzig\",\n \"logo\": \"https://media-3.api-sports.io/football/teams/173.png\",\n \"winner\": null\n },\n \"away\": {\n \"id\": 157,\n \"name\": \"Bayern Munich\",\n \"logo\": \"https://media-3.api-sports.io/football/teams/157.png\",\n \"winner\": null\n }\n },\n \"goals\": {\n \"home\": 1,\n \"away\": 1\n },\n \"score\": {\n \"halftime\": {\n \"home\": 0,\n \"away\": 1\n },\n \"fulltime\": {\n \"home\": 1,\n \"away\": 1\n },\n \"extratime\": {\n \"home\": null,\n \"away\": null\n },\n \"penalty\": {\n \"home\": null,\n \"away\": null\n }\n },\n \"timestamp\": 1675673856.844521\n },\n \"result\": \"Draw\",\n \"points\": \"3.90\"\n}\n", "text": "Hello @Patrick01234,Welcome to the MongoDB Community forum Apologies for the late response!Here your pipeline looks fine and after a few tweaking, it works as per your expectation and returns the desired result. 
Here is the pipeline for your reference:Here I’ve used the $map and $filter operators together to extract and filter the relevant data, and then used $first to select the first element from the resulting array. Also, if the condition in the $cond operator is false, the value of the \"points\" field will be set to 0.The pipeline returns the following output:I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" } ]
$addFields with $arrayElemAt and $filter returns nothing
2023-02-06T12:53:27.212Z
$addFields with $arrayElemAt and $filter returns nothing
992
null
[ "node-js" ]
[ { "code": "MongoServerError: Authentication failed.\n at Connection.onMessage (C:\\SubMAN\\Backend-2\\railway_test\\node_modules\\mongodb\\lib\\cmap\\connection.js:201:30) \n at MessageStream.<anonymous> (C:\\SubMAN\\Backend-2\\railway_test\\node_modules\\mongodb\\lib\\cmap\\connection.js:59:60)\n at MessageStream.emit (node:events:513:28)\n at processIncomingData (C:\\SubMAN\\Backend-2\\railway_test\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:124:16) \n at MessageStream._write (C:\\SubMAN\\Backend-2\\railway_test\\node_modules\\mongodb\\lib\\cmap\\message_stream.js:33:9) \n at writeOrBuffer (node:internal/streams/writable:391:12)\n at _write (node:internal/streams/writable:332:10)\n at MessageStream.Writable.write (node:internal/streams/writable:336:10)\n at Socket.ondata (node:internal/streams/readable:754:22)\n at Socket.emit (node:events:513:28) {\n ok: 0,\n code: 18,\n codeName: 'AuthenticationFailed',\n connectionGeneration: 0,\n [Symbol(errorLabels)]: Set(2) { 'HandshakeError', 'ResetPool' }\n}\n\nimport mongoose from \"mongoose\";\n\n\nconst connect = async () => {\n mongoose.connect('mongodb://mongo:<password>@containers-us-west-85.railway.app:6949')\n .then(() => console.log('Connected to database movieDB'))\n .catch((err: any) => console.log(err));\n}\n\nconnect();\n", "text": "Hello MongoDB people,Im trying to connect to a MongoDB hosted on Railway, but I keep getting this error:Here is my code:The connection uri should be right, since I was able to connect to the DB using MongoDBCompass and the built in IntelliJ Database too.Any help would be awesome,\nThanks in advance!", "username": "Splatted_I0I" }, { "code": "node v18.7.0import mongoose from \"mongoose\";\n\nconst connect = async () => {\n mongoose.connect('mongodb://mongo:<password>@containers-us-west-85.railway.app:6949')\n .then(() => console.log('Connected to database movieDB'))\n .catch((err) => console.log(err));\n}\n\nconnect();\n.catch(...) .catch((err: any) => console.log(err));\n[Running] node \"/node-mongoDB/src/test.js\"\nConnected to database movieDB\ncodeName: 'AuthenticationFailed',\n connectionGeneration: 0,\n", "text": "Hello @Splatted_I0I,Welcome to the MongoDB Community forums I ran the following code on my system using node v18.7.0:Initially, there was an error in the .catch(...) parameter,which I modified to the above code, and it ran successfully with the output:Although the code seems fine, based on the error message you provided, it may be an incorrect combination of user-id/password or might be an issue with the built-in role. Please double-check the code you are running and the role of your database user, if it still does not work as expected, please provide more details about the error and your workflow.For more information, please check the documentation on how to manage users and roleAlso, please note that you should not share the URI string with the user-id and password on public forums or anywhere else to avoid any potential issues.I hope this information helps you!Let me know if you have any further questions.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thanks for the Fix! My connection also works now ", "username": "Splatted_I0I" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
MongoServerError: Authentication Failed, Code 18
2023-03-08T15:03:39.085Z
MongoServerError: Authentication Failed, Code 18
6,749
https://www.mongodb.com/…755a627af66.jpeg
[ "data-modeling", "java" ]
[ { "code": "", "text": "\nimage975×481 64.4 KB\n\nThe IDE is not recognizing my completion of the lab, I had previously logged this issue and got a reply but i was unable to reply so I recreated the log", "username": "Ramone_Granston1" }, { "code": "", "text": "2 posts were merged into an existing topic: Associate Java Developer Path", "username": "Kushagra_Kesav" }, { "code": "Project 0myAtlasClusterEDU MDB_EDUusers", "text": "Hello @Ramone_Granston1,Thanks for sharing the link to the lab.Based on the screenshot you shared, it appears that you have entered the data into the project cluster that you created yourself, named Project 0. However, according to the lab instructions, when you click on the Open External Window within the lab, it will automatically create a new database cluster named myAtlasClusterEDU under the project name MDB_EDU in a new tab within which you have to insert data in the users collection.\nmyAtlasClusterEDU3186×1894 486 KB\n\n\nAnd after insertion of the data, return back to the lab CLI and click on the Check button to verify the progress.If it still does not work as expected, please provide more details about the error and your workflow.I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Associate Java Developer Path(2)
2023-03-09T22:23:46.840Z
Associate Java Developer Path(2)
1,282
https://www.mongodb.com/…a_2_1024x506.png
[ "data-modeling", "java" ]
[ { "code": "", "text": "\nimage1366×675 77.5 KB\n\nThe IDE is not recognizing the changes I made in the lab", "username": "Ramone_Granston1" }, { "code": "", "text": "\nimage1366×660 75.6 KB\n", "username": "Ramone_Granston1" }, { "code": "", "text": "Hello @Ramone_Granston1,Welcome to the MongoDB Community forums Can you please share the link to the lab you are attempting and facing issues with?Also, can you confirm just to ensure that this might not be an issue related to the cache and cookies of your browser? Is this the first instance where you have encountered such an issue with the lab? Have you successfully completed similar lab tasks in the course prior to this one?Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "\nimage975×481 64.4 KB\n\nThe IDE is not recognizing my completion of the lab, I had previously logged this issue and got a reply but i was unable to reply so I recreated the log", "username": "Ramone_Granston1" }, { "code": "", "text": "Hi @Kushagra_KesavPlease assist if you can, I cleared my cache after your initial response:\nimage826×346 32.5 KB\nThis is the link for atlasThis is the link for MongoDB University labYes I have successfully completed similar labs priorThose are my responses to your questions", "username": "Ramone_Granston1" }, { "code": "Project 0myAtlasClusterEDU MDB_EDUusers", "text": "Hello @Ramone_Granston1,Thanks for sharing the link to the lab.Based on the screenshot you shared, it appears that you have entered the data into the project cluster that you created yourself, named Project 0. However, according to the lab instructions, when you click on the Open External Window within the lab, it will automatically create a new database cluster named myAtlasClusterEDU under the project name MDB_EDU in a new tab within which you have to insert data in the users collection.\nmyAtlasClusterEDU3186×1894 486 KB\n\n\nAnd after insertion of the data, return back to the lab CLI and click on the Check button to verify the progress.If it still does not work as expected, please provide more details about the error and your workflow.I hope it helps!Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Associate Java Developer Path
2023-03-08T00:51:13.804Z
Associate Java Developer Path
1,200
null
[ "golang", "containers", "field-encryption" ]
[ { "code": "apt-get -y install libmongocrypt-dev libbson-dev pkg-config\ngo build -mod=vendor -tags cse\n# go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt\nvendor/go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt/mongocrypt.go:296:16: could not determine kind of name for C.mongocrypt_crypt_shared_lib_version\nvendor/go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt/mongocrypt.go:305:20: could not determine kind of name for C.mongocrypt_crypt_shared_lib_version_string\nvendor/go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt/mongocrypt.go:169:11: could not determine kind of name for C.mongocrypt_ctx_rewrap_many_datakey_init\nvendor/go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt/mongocrypt.go:263:12: could not determine kind of name for C.mongocrypt_ctx_setopt_contention_factor\nvendor/go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt/mongocrypt.go:159:11: could not determine kind of name for C.mongocrypt_ctx_setopt_key_material\nvendor/go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt/mongocrypt.go:257:12: could not determine kind of name for C.mongocrypt_ctx_setopt_query_type\nvendor/go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt/mongocrypt.go:72:3: could not determine kind of name for C.mongocrypt_setopt_append_crypt_shared_lib_search_path\nvendor/go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt/mongocrypt.go:64:3: could not determine kind of name for C.mongocrypt_setopt_bypass_query_analysis\nvendor/go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt/mongocrypt.go:399:11: could not determine kind of name for C.mongocrypt_setopt_encrypted_field_config_map\nvendor/go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt/mongocrypt.go:77:4: could not determine kind of name for C.mongocrypt_setopt_set_crypt_shared_lib_path_override\nvendor/go.mongodb.org/mongo-driver/x/mongo/driver/mongocrypt/mongocrypt.go:81:2: could not determine kind of name for C.mongocrypt_setopt_use_need_kms_credentials_state\n", "text": "I am trying to compile an golang application with the mongo go driver and libmongocrypt in a debian bullseye docker container and running into errors.libmongocrypt is installed as described here: GitHub - mongodb/libmongocrypt: Required C library for Client Side and Queryable Encryption in MongoDBlibmongocrypt and libbson do seem to be installed properly based on trying a few pkg-config commands.The failure happens here:Versions:Can you provide some guidance to debug?", "username": "danny_fry" }, { "code": "", "text": "Go driver 1.11.2 requires libmongocrypt 1.5.2 or higher. The required version of libmongocrypt is described here: mongo package - go.mongodb.org/mongo-driver/mongo - Go PackagesTo install a newer version of libmongocrypt, one option is to install from the PPA packages described here: GitHub - mongodb/libmongocrypt: Required C library for Client Side and Queryable Encryption in MongoDB", "username": "Kevin_Albertson" }, { "code": "", "text": "Thanks, this solved the issue.", "username": "danny_fry" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error compiling go app with libmongocrypt on debian 11
2023-03-09T02:22:36.121Z
Error compiling go app with libmongocrypt on debian 11
1,229
null
[ "aggregation", "queries", "node-js", "mongoose-odm" ]
[ { "code": "const filterResults = await Listing.aggregate([\n {\n $geoNear: {\n near: { type: 'Point', coordinates: [parseFloat(lng), parseFloat(lat)] },\n distanceField: 'location.distance',\n maxDistance: 2000, //2km\n spherical: true,\n },\n },\n ...\n])\nlocation: {\n type: {\n type: String,\n enum: ['Point'],\n required: true,\n },\n coordinates: {\n type: [Number],\n required: true,\n },\n },\nListingSchema.index({ location: '2dsphere' })", "text": "Hello Mongo team,I have a $geoNear aggregation query which isn’t working for some reason. To check I was doing it correctly I copied the example code from the docs, and it still gives the following error:“MongoServerError: geo near accepts just one argument when querying for a GeoJSON point. Extra field found: $maxDistance: 2000.0”This is query code:In my Mongoose model/schema the field is defined like this:And indexed like this:ListingSchema.index({ location: '2dsphere' })I am using parseFloat(lng) etc. to ensure I’m passing numbers and not strings, and passing longitude first. In my collection the data is stored as [longitude, latitude] too.Can anyone see what I’m doing wrong, please? Any help very much appreciated!\nCheers, Matt", "username": "Matt_Heslington1" }, { "code": "", "text": "The issue is likely related to the order of the coordinates in the $geoNear aggregation and/or the longitude/latitude inputs being in the wrong order. According to the MongoDB documentation, the longitude should be the first value and the latitude should be the second value when querying for a GeoJSON point. Make sure that the longitude is the first value of the coordinates array when defining the location in the model and also when passing the longitude and latitude arguments in the $geoNear query.\nAssisted by Doc-E.ai\nReference: node.js - mongodb - using $geoNear gives \"geo near accepts just one argument when querying for a GeoJSON point\" when using maxDistance - Stack Overflow", "username": "Deepak_Kumar16" }, { "code": "", "text": "Hi Deepak,\nThanks for your reply. I did write in the question, “I’m passing longitude first. In my collection the data is stored as [longitude, latitude] too”, which is why it’s so confusing. I’ll keep testing and report back when I find a solution.\nCheers,\nMatt", "username": "Matt_Heslington1" }, { "code": "\"MongoServerError: geo near accepts just one argument when querying for a GeoJSON point. Extra field found: $maxDistance: 2000.0\"$geoNear$maxDistance$maxDistance$maxDistance$maxDistance2000sphericaltrue$maxDistance$maxDistancedistance in meters / earth radius in meters$maxDistancemaxDistance: 2000 / 6371.1$maxDistanceconst filterResults = await Listing.aggregate([\n {\n $geoNear: {\n near: { type: 'Point', coordinates: [parseFloat(lng), parseFloat(lat)] },\n distanceField: 'location.distance',\n maxDistance: 0.0311, //2000 meters / 6371.1 earth radius in km = 0.0311 radians\n spherical: true,\n },\n },\n ...\n])\n", "text": "@Matt_Heslington1\nI got busy … spent some time today … let me know hope this is helpful …\nThe error message \"MongoServerError: geo near accepts just one argument when querying for a GeoJSON point. Extra field found: $maxDistance: 2000.0\" suggests that the $geoNear stage only accepts one argument when querying for a GeoJSON point, and that an extra field $maxDistance was found.Looking at the provided query code, it appears that the $maxDistance parameter is being incorrectly used. 
According to the MongoDB documentation, $maxDistance should be specified in meters as a number or in radians when using a legacy coordinate pair. In the given example, $maxDistance is set to 2000, which could be interpreted as 2000 meters. However, since the spherical option is set to true, the $maxDistance parameter should be in radians.To fix the issue, try specifying $maxDistance in radians instead. One way to do this is by converting the distance in meters to radians using the formula distance in meters / earth radius in meters. For example, to set a $maxDistance of 2000 meters, use maxDistance: 2000 / 6371.1.Modified query code with $maxDistance parameter specified in radians:", "username": "Deepak_Kumar16" } ]
$geoNear Aggregation Query Example Causing Strange Error
2023-02-25T07:04:41.142Z
$geoNear Aggregation Query Example Causing Strange Error
1,315
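As a cross-check on the radians discussion above, the $geoNear reference states that when near is a GeoJSON point the distance is interpreted in meters (the radians conversion applies to legacy coordinate pairs). A bare mongosh version of the stage, using the field names from the thread, a placeholder collection name and sample coordinates, would look like this:

db.listings.aggregate([
  {
    $geoNear: {
      near: { type: "Point", coordinates: [ -0.1276, 51.5072 ] },  // [longitude, latitude], sample values
      distanceField: "location.distance",
      maxDistance: 2000,   // meters, since near is a GeoJSON point and the field has a 2dsphere index
      spherical: true
    }
  }
])

If this raw stage works in mongosh but the Mongoose version still errors, that points at how the query is being built by the ODM rather than at the units themselves.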
null
[ "aggregation" ]
[ { "code": "[{\n \"roadVolume\": {\n \"stock\": {\n \"suppliers\": [\n {\n \"name\": \"supplier 1\",\n \"placeOfIncoterms\": [\n {\n \"name\": \"place A\",\n \"total\": {}\n },\n {\n \"name\": \"place B\",\n \"total\": {}\n }\n ]\n },\n {\n \"name\": \"supplier 2\",\n \"placeOfIncoterms\": [\n {\n \"name\": \"place C\",\n \"total\": {\n \"orderVolume\": \"value\",\n \"numberOfTrucks\": \"value\"\n }\n },\n {\n \"name\": \"place D\",\n \"total\": {}\n }\n ]\n },\n ]\n }\n }\n}]\n[{\n \"roadVolume\": {\n \"stock\": {\n \"suppliers\": [\n {\n \"name\": \"supplier 2\",\n \"placeOfIncoterms\": [\n {\n \"name\": \"place C\",\n \"total\": {\n \"orderVolume\": \"value\",\n \"numberOfTrucks\": \"value\"\n }\n }\n ]\n },\n ]\n }\n }\n}]\n", "text": "Hello,Here is the data structure I’m working with:I want to filter out the suppliers for which “total” field renders an empty object. In this example:I found a similar post (Mongodb aggregation remove null values from object with nested properties) but didn’t manage to adapt the solution to my use case. I did some tests with $filter and $reduce, but did not manage to reach a viable outcome. Also, I don’t want to $unwind and then $group back, to save on performance.Your assistance is relly appreciated!", "username": "Antoine_Delequeuche" }, { "code": "$map$filter$mergeObjectsdb.collection.aggregate([\n {\n $addFields: {\n \"roadVolume.stock.suppliers\": {\n $filter: {\n input: {\n $map: {\n input: \"$roadVolume.stock.suppliers\",\n in: {\n $mergeObjects: [\n \"$$this\",\n {\n placeOfIncoterms: {\n $filter: {\n input: \"$$this.placeOfIncoterms\",\n cond: { $ne: [\"$$this.total\", {}] }\n }\n }\n }\n ]\n }\n }\n },\n cond: { $ne: [\"$$this.placeOfIncoterms\", []] }\n }\n }\n }\n }\n])\n", "text": "Hello @Antoine_Delequeuche,You can use $map, $filter and $mergeObjects operators, something like this,", "username": "turivishal" }, { "code": "", "text": "Hello @turivishal,\nYour solution works, thank you so much for helping out!", "username": "Antoine_Delequeuche" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filtering array element depending on subdocument condition
2023-03-05T15:17:10.923Z
Filtering array element depending on subdocument condition
783
null
[ "charts" ]
[ { "code": "", "text": "Our Mongo Charts dashboard has recently (last few days) started acting very wonky, but only for certain charts.We have a chart which displays a field “tsOrderBookUpdated”. This chart is now (seemingly at random) sometimes displaying the correct contents of this field but other times using the value of a different field “tsScreenComplete”. The chart configuration clearly shows the former field is referenced here. The chart hover popup clearly shows that the former field is being referenced. Yet the value is frequently taken from the latter field.Furthermore, examining the data for that particular point (“show data for this item”) indicates that the document has the correct values, yet nevertheless, the value from the latter field is rendered in the chart.What’s really strange about this is that the same chart has both correct and incorrect values in it!This is a private dashboard so I can’t really share many details but I would be more than happy to share additional privately for debugging purposes.", "username": "centos7" }, { "code": "", "text": "HI @centos7, It would be difficult to investigate this without more details. Could you please talk to support and lodge a ticket for us to investigate?", "username": "Avinash_Prasad" } ]
Mongo charts is displaying incorrect data for some fields
2023-03-09T19:31:36.965Z
Mongo charts is displaying incorrect data for some fields
916
null
[ "aggregation", "java", "compass", "mongodb-shell", "atlas-search" ]
[ { "code": "db.people.aggregate([\n{\n \"$search\": {\n \"embeddedDocument\": {\n \"path\": \"names\",\n \"operator\": {\n \"moreLikeThis\": {\n \"like\": {\n \"names.name\": \"tester\"\n }\n }\n }\n }\n }\n }\n])\nMongoServerError: java.lang.AssertionError: unreachable\n at Connection.onMessage (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:3099431)\n at MessageStream.<anonymous> (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:3096954)\n at MessageStream.emit (node:events:394:28)\n at c (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:3118818)\n at MessageStream._write (/Applications/MongoDB Compass.app/Contents/Resources/app.asar.unpacked/node_modules/@mongosh/node-runtime-worker-thread/dist/worker-runtime.js:1917:3117466)\n at writeOrBuffer (node:internal/streams/writable:389:12)\n at _write (node:internal/streams/writable:330:10)\n at MessageStream.Writable.write (node:internal/streams/writable:334:10)\n at TLSSocket.ondata (node:internal/streams/readable:749:22)\n at TLSSocket.emit (node:events:394:28)\n", "text": "There doesn’t seem to be an exclusion listed in the documentation. I am sending the following agg to mongo:Which generates the following error every time:The mappings are correct as well. In the above example “names” is mapped as type: ‘embeddedDocuments’ and ‘name’ as type string. Is this a bug or just unsupported?", "username": "Luke_Snyder" }, { "code": "{\n \"embeddedDocument\": {\n \"path\": \"names\",\n \"operator\": {\n \"text\": {\n \"path\": \"names.name\",\n \"query\": \"tester\"\n }\n }\n }\n}\n", "text": "Hi Luke_Snyder, thanks for your question!Sorry about the confusion here - the moreLikeThis operator is indeed not supported within the embeddedDocument operator, and I have filed a ticket to call that out in our documentation.I am not sure if this is a “real” query/if this is an example designed to highlight this issue - but if this is a real query, you might try using a text operator inside the embeddedDocument operator instead of using moreLikeThis:", "username": "Evan_Nixon" } ]
moreLikeThis unsupported inside embeddedDocument operator?
2023-03-09T19:59:31.395Z
moreLikeThis unsupported inside embeddedDocument operator?
792
null
[ "kotlin" ]
[ { "code": "", "text": "Hello everyone! Does Atlas support OTP authentication?\nAny idea what is the best choice for Phone Authentication verification (OTP) in KMM application (Kotlin + iOS)?\nHave a nice day!", "username": "Ciprian_Gabor" }, { "code": "", "text": "@Ciprian_Gabor: I don’t think Atlas has an in-built OTP authentication system, but you consider calling a third-party API like Twilio from the Cloud function.", "username": "Mohit_Sharma" }, { "code": "", "text": "Thank you, do you have any example for that?", "username": "Ciprian_Gabor" }, { "code": "", "text": "No, I haven’t created anything similar. But you should be able to find an example on the web as it’s a very common use-case.", "username": "Mohit_Sharma" }, { "code": "", "text": "Hello @Ciprian_Gabor ,Thank you for your question.You can use custom function authentication where you can define your custom logic to generate OTP codes and do OTP authentication or you can use an external auth service with OTP that gives you a JWT and login with that.I hope the provided information is helpful.Please feel free to ask any follow-up questions.Cheers, \nHenna", "username": "henna.s" }, { "code": "", "text": "Do you have any video tutorial showing this type of authentication?Have a nice day!", "username": "Ciprian_Gabor" } ]
OTP Authentication
2023-02-21T19:06:43.896Z
OTP Authentication
1,534
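A minimal sketch of the custom function authentication idea mentioned above, written as an App Services function; the database and collection names, the payload shape, and the idea of pre-storing codes with an expiry are all assumptions for illustration, and OTP delivery (for example via an SMS provider) is out of scope here:

exports = async function (payload) {
  // Assumed payload shape sent by the client: { phone: "...", otp: "..." }
  const { phone, otp } = payload;
  const codes = context.services.get("mongodb-atlas")        // assumed linked data source name
                       .db("auth").collection("otp_codes");  // assumed db/collection names
  const match = await codes.findOne({ phone: phone, code: otp, expiresAt: { $gt: new Date() } });
  if (!match) {
    throw new Error("invalid or expired code");
  }
  return phone;  // returned string becomes the unique external ID of the authenticated user
};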
null
[]
[ { "code": "", "text": "Hello everyone! Does Atlas support phone number authentication verification? Or e-mail verification? Like Firebase does.Have a nice day", "username": "Daniel_Gabor" }, { "code": "", "text": "Thanks for creating this thread! It was helpful for me too", "username": "Wostire_Wostire" }, { "code": "", "text": "Hi! Regarding your question, yes, Atlas does support phone number and email authentication verification. In fact, it offers several methods for user authentication, including email and password, Google, Facebook, and phone number. You can configure and customize the authentication methods according to your application’s needs and even use a fake phone number for verification. Good luck!", "username": "Mixoponop_Mixoponop" }, { "code": "", "text": "Atlas does not support phone number authentication verification.\n\nimage2282×1734 154 KB\nAm I missing something?", "username": "Ciprian_Gabor" } ]
Phone and E-mail Authentication Verification
2023-01-24T22:51:13.367Z
Phone and E-mail Authentication Verification
840
https://www.mongodb.com/…5_2_715x1024.png
[ "installation" ]
[ { "code": "", "text": "Hi guys! I have been dealing with this kind of issue for the last 30 hours or so. I have tried pretty much anything I found on the web. Hopefully nothing was messed up!Any idea what this error means and how to deal with it?\nCapture31009×1444 59.6 KB\n \nCapture1364×722 82 KB\n", "username": "Evangelos_Kolimitras" }, { "code": "", "text": "MongoDB error code 48 indicates that the default port of MongoDB is already in useTry to use another port\nWhen you run mogod without any parameters like port it uses default port 27017", "username": "Ramachandra_Tummala" } ]
Failed to set up listener: SocketException: An attempt was made to access a socket in a way forbidden by its access permissions
2020-05-28T07:07:03.584Z
Failed to set up listener: SocketException: An attempt was made to access a socket in a way forbidden by its access permissions
5,470
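Two hedged command-line follow-ups to the answer above; the data path is a placeholder. The first shows which process currently holds the default port on Windows, the second starts mongod on an alternate port:

netstat -ano | findstr :27017
mongod --port 27018 --dbpath "C:\data\db"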
null
[ "aggregation", "compass" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"6407581b1f37c526aa6e0af3\"\n },\n \"field1\": \"12211\",\n \"field2\": \"ABC\",\n \"field3\": \"ABC\",\n \"field4\": \"ABC\",\n \"versions\": [{\n \"version\": \"1\",\n \"version_create_datetime\": \"202303071028\",\n \"version_created_by\": \"ENTITY\",\n \"version_source_file\": \"123.txt\",\n \"version_status\": \"MERGED\",\n \"my_data\": {\n \"field4\": \"01\",\n \"field4\": \"04\",\n \"field4\": \"03\",\n }\n }]\n}\n", "text": "I cannot figure out how to do the aggregation in compass on the following data for which I want:\nI want only these documents where the last element of the ‘versions’ array has ‘version_status’ = ‘MERGED’ and return the entire document.Thanks!", "username": "Brian_Parker" }, { "code": "c.find( { \"$expr\" : { \"$eq\" : [ { \"$last\" : \"$versions.version_status\" } , \"MERGED\" ] } } )\n$last$arrayElemAt{ $arrayElemAt: [ <array expression>, -1 ] }\n", "text": "Sometimes, things are simpler than what we think.From the $last documentation:New in version 4.4.andThe $last operator is an alias for the following $arrayElemAt expression:", "username": "steevej" }, { "code": "[{$project: {\n field1: 1,\n last: {\n $arrayElemAt: [\n '$versions',\n -1\n ]\n }\n}}, {$match: {\n 'last.version_status': 'MERGED',\n last.version_create_datetime: {\n $gt: '2023-03-07',\n $lt: '2023-03-08'\n }\n}}, {$out: 'output_log_collection'}]\n", "text": "Thanks Steeve…I was trying similar with find yesterday before posting this, and cannot get it to work. However this aggregation pipeline does work…I’ll try your find approach again soon to see if I can get it to function.– my pipeline", "username": "Brian_Parker" } ]
Need help with specific aggregation
2023-03-08T19:40:00.595Z
Need help with specific aggregation
502
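Folding the date window from the pipeline above into the single find() suggested earlier is also possible with $expr and $and (MongoDB 4.4+ for $last); the collection name is a placeholder and the date strings are copied from the pipeline, so make sure they match the format actually stored in version_create_datetime:

db.collection.find({
  $expr: {
    $and: [
      { $eq: [ { $last: "$versions.version_status" }, "MERGED" ] },
      { $gt: [ { $last: "$versions.version_create_datetime" }, "2023-03-07" ] },
      { $lt: [ { $last: "$versions.version_create_datetime" }, "2023-03-08" ] }
    ]
  }
})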
https://www.mongodb.com/…9a95274572f7.png
[ "connector-for-bi" ]
[ { "code": "", "text": "I believe we aren’t being correctly charged for our the BI connector. We have a cluster with a dedicated M10 analytics node. I want to enable the BI connector only for that node. The configuration page states one pricing but then I’m getting billed another.\n\n2023-03-03_10-08-29921×299 41.4 KB\n\nAnyone experiencing the same problem?", "username": "Elio_Capella" }, { "code": "", "text": "What I’m being charged:\n\n2023-03-03_10-11-03898×120 13.7 KB\n", "username": "Elio_Capella" }, { "code": "", "text": "Hi @Elio_Capella Thanks for creating this comment, as I too want to make sure no one is being billed incorrectly. For your specific case we should work with the account team and billing to see if we can uncover what is going on. Did this just start when introducing the Analytics Node? Here is my email if you’d like to discuss further: [email protected],\nAlexi", "username": "Alexi_Antonino" } ]
Incorrect billing for our BI Connector
2023-03-09T14:26:54.237Z
Incorrect billing for our BI Connector
905
null
[ "aggregation" ]
[ { "code": "{\n \"_id\": ObjectId (\"5976fd2eb0adec0a32fa9831\"),\n \"People\": [\n {\n \"user_id\": 1, <--- ID\n \"Name\": \"Jane\",\n \"age\": 12\n },\n {\n \"user_id\": 2, <--- ID\n \"Name\": \"Mark\",\n \"age\": 60\n },\n {\n \"user_id\": 3, <--- ID\n \"Name\": \"Tomer\",\n \"age\": 100\n }\n ],\n \"Contents\": [\n {\n \"user_id\": 2, <--- People ID\n \"Text\": \"111\"\n },\n {\n \"user_id\": 1, <--- People ID\n \"Text\": \"Hi\"\n }\n ]\n}\n{\n \"_id\": ObjectId (\"5976fd2eb0adec0a32fa9831\"),\n \"People\": [\n {\n \"user_id\": 1,\n \"Name\" : \"Jane\",\n \"age\": 12\n },\n {\n \"user_id\": 2,\n \"Name\": \"Mark\",\n \"age\": 60\n },\n {\n \"user_id\": 3, <--- ID\n \"Name\": \"Tomer\",\n \"age\": 100\n }\n ],\n \"Contents\": [\n {\n \"user_id\": 2,\n \"Name\": \"Mark\", <-- Adding\n \"Text\": \"111\",\n\n },\n {\n \"user_id\": 1,\n \"Name\": \"Jane\", <-- Adding\n \"Text\": \"Hi\",\n\n }\n ]\n}\n$lookup$unwind.aggregate()", "text": "I would like to combine the data in one collection using the IDs of the two arrays.An example is shown below.and I want to make the above document as below.I have tried various things like $lookup or $unwind of .aggregate() but I cannot get the result.", "username": "Daniel_Tourgman" }, { "code": "", "text": "@Daniel_Tourgman have you tired $replaceRoot? I had a similar situation just come up where I merged docs from one collection with objects of an array in a different collection", "username": "Natac13" }, { "code": " ],\n \n },\n \n }\n }\n}\n", "text": "“$addFields”: {\n“Contents”: {\n“$map”: {\n“input”: “$Contents”,\n“as”: “c”,\n“in”: {\n“user_id”: “$$c.user_id”,\n“Text”: “$$c.Text”,\n“Name”: {\n“$arrayElemAt”: [\n“$People.Name”,\n{\n“$indexOfArray”: [\n“$People.user_id”,\n“$$c.user_id”\n]\n},}you can try this method", "username": "Niveditha_Rai" }, { "code": "", "text": "THanks @Niveditha_Rai your answer helped me resolve a long pending problem ", "username": "Rai_Deepak" } ]
How to merge two matching objects from different array into one object?
2020-05-19T13:21:54.479Z
How to merge two matching objects from different array into one object?
5,520
null
[ "queries" ]
[ { "code": "try {\n // Execute a FindOne in MongoDB \n let id = query.city_id;\n let o_id = new BSON.ObjectId(query.city_id)\n findResult = await collection.find({ \"_id\": o_id });\n\n } catch(err) {\n console.log(\"Error occurred while executing findOne:\", err.message);\n\n return { error: err.message };\n }\n", "text": "How to find document by Object Id in atlas function.I am getting state id in query and I want to get matching city, with other key I am getting state but with object id I am not getting any state", "username": "Zubair_Rajput" }, { "code": "", "text": "Hi,Three quick thoughts looking at your function:Let me know if any of these suggestions help,\nTyler", "username": "Tyler_Kaye" } ]
How to find city document by matching state _id
2023-03-09T14:11:41.944Z
How to find city document by matching state _id
716
null
[ "queries", "node-js", "mongoose-odm", "graphql" ]
[ { "code": "{\n\t\"experiences\": {\n\t\t\"Project0\": [{\n\t\t\t\t\"title\": \"AA\"\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"title\": \"BB\",\n\t\t\t\t\"sub\": \"B\"\n\n\t\t\t},\n\t\t\t{\n\t\t\t\t\"title\": \"CC\",\n\t\t\t\t\"sub\": \"C\"\n\t\t\t}\n\t\t],\n\t\t\"Project1\": [\n\t\t\t{\n\t\t\t\t\"title\": \"AA\",\n\t\t\t\t\"sub\": \"A\"\n\n\t\t\t}\n\t\t]\n\t\t\n\t}\n}\nimport mongoose from \"mongoose\";\n\nconst pastProjectsSchema = new mongoose.Schema({\n experiences:{\n Project0:[\n { \n title: String,\n sub: String\n }\n ]\n }\n})\nexport const Experience = mongoose.model( \"Experience\", pastProjectsSchema , \"experiences\");\n\nimport gql from 'graphql-tag';\n\nexport const typeDefs = gql`\n type Query {\n experiences: [Experiences]\n }\n\n type Experiences{\n experiences:Experience\n }\n\n type Experience {\n Project0: [ProjectDetail]\n }\n \n type ProjectDetail {\n title: String\n sub: String\n }\n \n`;\nimport { Experience} from './models/Book.js'\n\nexport const resolvers = {\n Query: {\n experiences: async() => await Experience.find({}),\n }\n};\n", "text": "I have the following data in a MongoDB.I want to get Project0 and Project1 back from a GraphQL query. My current mongoose.Schema only retrieves data for “Project0”.Book.jstypeDefs.jsresolvers.js", "username": "lindylex" }, { "code": "const pastProjectsSchema = new mongoose.Schema({\n experiences:{\n Project0:[\n { \n title: String,\n sub: String\n }\n ]\n }\n})\nProject 0const pastProjectsSchema = new mongoose.Schema({\n experiences: {\n Project0: [\n { \n title: String,\n sub: String\n }\n ],\n Project1: [\n { \n title: String,\n sub: String\n }\n ]\n }\n})\ntypeDefsresolversProject1", "text": "Hi @lindylex,Welcome to the MongoDB Community forums It appears that the schema code currently only includes the Project 0 field, which will require some modifications to properly align with the data you shared from your collection in MongoDB.Would you mind sharing how you inserted the data into your MongoDB collection, such as whether you used an insert query from the mongo shell or an API call from your application? This information would be helpful in ensuring that the schema accurately reflects the data in your collection.Perhaps, we could update the schema to better match the collection data. Here’s an example of what the schema would look like:Also after modifying the schema, you will need to update your typeDefs and resolvers as well to include Project1.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
How to get all objects of array?
2023-03-06T02:52:48.214Z
How to get all objects of array?
1,503
null
[ "react-native" ]
[ { "code": "", "text": "I have a realm app device sync setup which has a react native client which writes documents in the local database and then realm syncs it with atlas database. However, in a rare scenario the sync write fails with an error OtherSessionError. This deletes the document generate from local database as well. I am unable to debug why this error comes up. The device was successfully writing documents before this error and sync session restarted after this error and continued to write more documents. I did not find any documentation related to OtherSessionError. The error description is as follows:integrating changesets failed: error creating new integration attempt: failed to get latest server version while integrating changesets: context canceled (ProtocolErrorCode=201)", "username": "Aditya_Rathore" }, { "code": "", "text": "Hi, that error should not result in the document being deleted locally. Device Sync is designed to handle these kinds of errors and retry them (both in the server and in the protocol between the client and the server). Normally when we see those errors it just means that the client disconnects, reconnects, re-uploads the changes, and then the data is persisted. Are you sure the document is being “deleted”? If so can you send the “request id” of the error in your “Logs” page of the app services UI and we can help take a look?Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi Tyler,\nThank you for your response. The request id for the error log is 6404eaa54c509266ce223b8f. It looks like it was deleted because we are not able to find the associated document _id in the database.", "username": "Aditya_Rathore" }, { "code": "", "text": "Hi, if you look at this link (it is safe to post, noone else can view other than you and MongoDB employees) you can see the error happen, the client reconnect, and upload the proper document (can see the _id in the write summary section of the log): App ServicesIn fact, if you follow it through, we can see why this document doesnt exist: (link was hard to show the exact events)\n\nScreenshot 2023-03-07 at 9.55.15 AM1343×803 89.1 KB\nHere is a more complete timeline: App ServicesIt looks like someone deleted the document in MongoDB. My hunch is that the trigger you have setup did it, but I can’t be certain without looking in more detail.Let me know if this sounds right to you, but it sounds like the error has nothing to do with the document disappearing as that is just a coincidence of someone/something deleting the document in MongoDB at roughly the same time.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Yes, you are correct. The document write was attempted again and was successful. We are deleting it in the trigger and rewriting into another collection. Thank you for your support.", "username": "Aditya_Rathore" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to handle the OtherSessionError and the consequent loss of data?
2023-03-07T10:36:41.928Z
How to handle the OtherSessionError and the consequent loss of data?
851
null
[ "queries", "performance" ]
[ { "code": "", "text": "Hi,I need to replace hundreds of thousands of documents in a collection every few minutes with a new list of documents. The actual changes between the old documents and the new documents are very small - might be just a few inserts, a few deletes and a few updates. Each document contains a number field, a string field, and an object field.Simply deleting all old documents and inserting all new documents would be slow.I also tried loading all old documents into the client application, and compare the two lists of documents in-app, which is putting a lot of pressure on the client application.Any advice on how to solve this problem? Thanks!", "username": "JesseC" }, { "code": "", "text": "Assume you don’t know which doc has been modified. The only thing you know is that some are changed, some are deleted, and some are inserted.In that case you can try comparing the two lists and just ask mongodb to to the “delta” work. But as you said, this requires some computing resources and memory on a host. If this can not be done, i would simply suggest to remove all old ones and insert all new ones.that being said, if you can change the source data generation side so that you know the “delta”, the work will be much easier.", "username": "Kobe_W" }, { "code": "", "text": "Thanks for your reply! Assume I can’t know the diff from the source data generation, which of these two would require less bandwidth on CPU, network and memory?", "username": "JesseC" }, { "code": "", "text": "Essentially, no.1 and no.3 are the same. As before you can compare, you will have to insert all news anyway.if client host machine can store all those old and new docs in memory, i would then suggest no.2. Using a database for the comparison work can be more challenging since it not only involves memory must also disk, locking, concurrency control, etc.Using ram to store everything and rely on CPU purely to do the work is generally faster", "username": "Kobe_W" }, { "code": "", "text": "No. 3 will be really bad in terms of indexes compared to the others if the you have a lot of indexes and if you do not update indexed fields often.No. 2 is bad in terms of I/O since one list is downloaded and one is uploaded.No. 1 can be achieve somewhat easily by uploading the new list in a temporary collection and then use a $merge stage to perform the updates and inserts.Handling the delete is a different story in all cases, How do you know which document you need to delete? Unless of course you delete all documents in the original collection that are not present in the updated list.", "username": "steevej" }, { "code": "", "text": "The “use a $merge stage to perform the updates and inserts.” approach sounds interesting. Would that be resource intense on Mongodb though if it involves comparing the documents between the two collections?", "username": "JesseC" }, { "code": "", "text": "Would that be resource intenseNo matter where it is done comparing the documents from 2 lists is the same complexity. Usually, your server is better equip to handle load.Anyway downloading the original from the server to compare on the client also involves the server for the download and then for the update upload. Bandwidth might be an issue.What is not clear is how you determine the documents to delete. 
Unless of course, if you assume that documents not present in the updated list are to be deleted. No matter what, if you are not sure of what provides the best performance for your use-case, the best solution is to set up benchmarks and test and then choose the best solution for your use-case. If you are not willing to write benchmarks to compare your different approaches, simply start by implementing the simplest one and fix it only if it is problematic.", "username": "steevej" } ]
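A rough mongosh sketch of the staging-collection + $merge idea from the answers above; the collection names (items, items_staging) and matching on _id are illustrative assumptions:

// 1. Load the new list into a staging collection (insertMany, mongoimport, etc.).

// 2. Upsert new and changed documents into the live collection.
db.items_staging.aggregate([
  { $merge: {
      into: "items",
      on: "_id",
      whenMatched: "replace",    // overwrite the stored version with the incoming one
      whenNotMatched: "insert"   // add documents that are new in this batch
  } }
])

// 3. If "absent from the new list" really means "deleted", remove the leftovers.
const newIds = db.items_staging.distinct("_id")
db.items.deleteMany({ _id: { $nin: newIds } })

With hundreds of thousands of ids the $nin delete can get expensive; tagging each staged batch and deleting documents that do not carry the latest tag is a common alternative for the delete step.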
Replace hundreds of thousands of documents with few changes
2023-03-08T01:19:59.636Z
Replace hundreds of thousands of documents with few changes
1,133
https://www.mongodb.com/…4_2_1024x512.png
[ "queries", "crud" ]
[ { "code": "{\n docId: \"123\",\n outcomes: [\n {\n prizeId: \"test\",\n notifications: [\n {\n action: \"foo\",\n foreignKey: \"abc\"\n }\n ]\n }\n ]\n}\nparams = {\n docId: \"123\",\n prizeId: \"test\",\n action: \"foo\",\n}\nfilter = {docId: params.docId, outcomes: {$elemMatch: {prizeId: params.prizeId}}}\ndb.getCollection('collection-name').find(filter)\nupdate = {\n $pull: {\n \"outcomes.$.notifications\": {\n $elemMatch: {\n action: params.action\n }\n }\n }\n}\ndb.getCollection('collection-name').updateOne(filter, update)\nupdate = {\n $pull: {\n \"outcomes.$[outcome].notifications\": {\n $elemMatch: {\n action: params.action\n }\n }\n }\n}\n\noptions = {\n arrayFilters: [\n { \"outcome.prizeId\" : params.prizeId },\n // { \"notification.action\" : action },\n ]\n}\ndb.getCollection('collection-name').updateOne(filter, update, options)\nupdateOneupdateMany", "text": "I want to manage (append, remove, and update) documents within an array, and the array field is within a document within an array. I’m talking about the “notifications” array visible in the following sample document:Given these parameters provided by a client,I am able to select (find) the document with the following filter:I am not able to modify the document. For “remove”, I tried the following:As well asI’m suprised I cannot find examples in the documentation that deal with removing an array item from an array nested within a document within an array.UPDATE: Actually I did find an example showing how to remove an item from a nested array. Unfortunately, I believe I am doing the same technique for my data, but the results are not the same. The main difference I can see is that I’m using updateOne and the example uses updateMany.UPDATE 2: I believe the example does not do what I need. It updates multiple documents, and potentially multiple (outer) array items, and the syntax I need should constrain to updating one (outer) array item.", "username": "John_Grant1" }, { "code": "params = {\n docId: \"123\",\n prizeId: \"test\",\n action: \"foo\",\n}\ndocId=123prizeId=testaction=foodocId=123prizeId=testaction=foo{\n docId: '123',\n outcomes: [\n {\n prizeId: 'test',\n notifications: [\n { action: 'foo', foreignKey: 'abc' },\n { action: 'bar', foreignKey: 'def' }\n ]\n },\n {\n prizeId: 'test2',\n notifications: [\n { action: 'foo', foreignKey: 'abc' },\n { action: 'aaa', foreignKey: 'ccc' },\n ]\n }\n ]\n}\noutcomes[\n {\n docId: '123',\n outcomes: {\n prizeId: 'test',\n notifications: [\n { action: 'foo', foreignKey: 'abc' },\n { action: 'bar', foreignKey: 'def' }\n ]\n }\n },\n {\n docId: '123',\n outcomes: {\n prizeId: 'test2',\n notifications: [\n { action: 'foo', foreignKey: 'abc' },\n { action: 'aaa', foreignKey: 'ccc' }\n ]\n }\n }\n]\ndb.test.updateOne(\n {docId:'123', 'outcomes.prizeId': 'test'},\n [\n {$addFields: {\n 'outcomes.notifications': {\n $filter: {\n input: '$outcomes.notifications',\n cond: {$ne: ['$$this.action', 'foo']}\n }\n }\n }}\n ]\n)\n[\n {\n _id: ObjectId(\"6409236401dd5b18bd5800f3\"),\n docId: '123',\n outcomes: {\n prizeId: 'test',\n notifications: [ \n { action: 'bar', foreignKey: 'def' }\n ]\n }\n },\n {\n _id: ObjectId(\"6409236401dd5b18bd5800f4\"),\n docId: '123',\n outcomes: {\n prizeId: 'test2',\n notifications: [\n { action: 'foo', foreignKey: 'abc' },\n { action: 'aaa', foreignKey: 'ccc' }\n ]\n }\n }\n]\n", "text": "hi @John_Grant1If I understand correctly, this input:means “Find docId=123, prizeId=test, action=foo and remove it from the document”. 
Is this correct?I think this is a difficult situation to solve, since in my mind, the update you need is to match docId=123 and prizeId=test, but you want the resulting document to not contain action=foo. The schema that contains array inside array is also difficult to work with, as you have observed.Instead, I would suggest you to explore alternative schema design. I think if you remove one array layer would make things much easier. For example, if this is the original schema:How about spreading this across two documents instead by unwinding the outcomes array:Then the update becomes:Note that I’m using an aggregation pipeline to perform the update.Result is:Of course this may not work for your use case, but I think this is much easier to work with by avoiding array-inside-array complexity.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi @kevinadi ,Yes, your understanding is correct. The resulting document should not contain action=foo FOR prizeId=test. The example document you provided is a good one for discussion. prizeId=test2 should not be affected by the update for the input “params”.I agree, my schema is challenging to work with. I am intrigued by the suggestion to spread an array across multiple documents. I might look into that.Another trick I have used in the past is to have 2 arrays at the same level (siblings) within the sole document, and I make sure to keep them in sync with each other. This is my first choice if I cannot find a mechanism to remove an item from a nested array.Can you confirm if an item can be removed from a single nested array? The reference I found makes it seem like the removal will impact multiple outer array items.I am assuming I should use $pull for an implementation to remove an item from an array. If there is a different operator I should consider, please let me know.Thank you.\nJohn", "username": "John_Grant1" }, { "code": "$pull", "text": "Another trick I have used in the past is to have 2 arrays at the same level (siblings) within the sole document, and I make sure to keep them in sync with each other.This sounds like a different layer of complexity, where you need to ensure things are in sync all the time. I’m not sure if this is better or worse than trying to work with array-inside-array Can you confirm if an item can be removed from a single nested array? The reference I found makes it seem like the removal will impact multiple outer array items.Yes you’re correct that given an array-inside-array, the $pull operation will affect multiple array items, and so far I haven’t found a good workaround for this. In my mind, if it requires workarounds, then perhaps it’s not the best way forward.I would definitely consider remodeling the schema though. Not having to fight the query language or the schema to do your work seems quite appealing to me Best regards\nKevin", "username": "kevinadi" } ]
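On the open question of removing an item from one nested array only: a pipeline-style update (MongoDB 4.2+) can do it while keeping the original array-inside-array schema. The sketch below hard-codes the values from the example document and is an illustration rather than a tested recommendation:

db.collection.updateOne(
  { docId: "123", "outcomes.prizeId": "test" },
  [
    { $set: {
        outcomes: {
          $map: {
            input: "$outcomes",
            as: "outcome",
            in: {
              $cond: [
                { $ne: ["$$outcome.prizeId", "test"] },
                "$$outcome",                              // other prizes stay untouched
                { $mergeObjects: [
                    "$$outcome",
                    { notifications: {
                        $filter: {
                          input: "$$outcome.notifications",
                          cond: { $ne: ["$$this.action", "foo"] }   // drop the matching notification
                        }
                    } }
                ] }
              ]
            }
          }
        }
    } }
  ]
)

It rewrites the whole outcomes array on every update, so Kevin's point still stands: the flattened one-document-per-outcome model is simpler to query and to reason about.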
How to manage array of documents nested within an object within an array
2023-03-06T16:54:27.411Z
How to manage array of documents nested within an object within an array
799
null
[ "api" ]
[ { "code": "{\n \"error\": \"EOF\"\n}\n{\n \"error\": \"app not found\"\n}\n", "text": "Hi, I was following this guide to verify a realm-generated access token on a server app. Apparently, the only way I’ve found to do so is to send the client token to this endpoint.\nadmin/v3.0/​groups/​{groupId}/​apps/​{appId}/​users/​verify_tokenIn order to be able to use it, I had to create an API key following this doc. https://www.mongodb.com/docs/atlas/configure-api-access/I added permissions, invited & granted it permissions to the app, and also added IP access to that key.\nHowever every time I hit the endpoint I get:orNot really sure why I started getting a different response at some point.By the way, I’ve been using the “_id” of the app as {appid}, which is correct according to the docs. So that’s unlikely to be the problem.Does anyone have an idea how to fix this or at least an alternative for verifying the access token on the server side?Thanks!", "username": "Lino_Rallo" }, { "code": "", "text": "I have a similar issue. Getting error 401 even though I just created the access_token.Did you manage to figure it out?", "username": "Alexandar_Dimcevski" }, { "code": "group_idapp_idhttps://realm.mongodb.com/groups/{group_id}/apps/{app_id}/dashboard", "text": "For the {appId} make sure you’re using an objectId for the app not the short name. You can find the app id using this Atlas App Services APIor check the your url in the browser when you open an app. Note both group_id and app_id are object ids.https://realm.mongodb.com/groups/{group_id}/apps/{app_id}/dashboard", "username": "Alexandar_Dimcevski" } ]
Verify user Realm access token on server
2022-09-10T20:04:31.897Z
Verify user Realm access token on server
2,207
null
[]
[ { "code": "", "text": "Hello,\nWhat is most optimal way to run a “contains” query on all fields in a collection? The search can be on a partial word or a partial phrase.I have tried using wildcard and regex queries, but they are quite slow and do not meet the expected performance.Thanks,\nPrasad", "username": "Prasad_Kini" }, { "code": "autocomplete", "text": "Have you considered using autocomplete with an ngram?", "username": "Elle_Shwer" }, { "code": "", "text": "Yes, but it doesn’t support wildcard paths. My requirement is to be able to search for a given string in any field.", "username": "Prasad_Kini" }, { "code": "", "text": "Also, I have observed that wildcard queries (contains search in all fields) are slow for small (<500 docs) datasets as well. It seems that other clauses in the same query that is supposed to reduce the number of documents to be searched have no effect when wildcards are specified.@Elle_Shwer any thoughts on why the wildcard (contains) search would be slow on filtered datasets? I have tested it with a dataset with only 8 documents with the query returning in 7+ seconds.Thanks much,\nPrasad", "username": "Prasad_Kini" }, { "code": "", "text": "In general, it is well known that wildcards are slow and computationally expensive. Especially if you are doing both a wildcard query and a wildcard path. That is generally why we recommend users to use autocomplete if they can afford to do so.Without seeing your exact query, it’s hard to know why it took so long. But if you’re doing a contains of very few characters over very large documents, I am not surprised at all.", "username": "Elle_Shwer" }, { "code": "", "text": "The documents are not large. I have been able to get around this for now by limiting the fields for the searches by specifying them explicitly in the regex operator.While this is not optimal, I think that I might be able to get away with for the time being.", "username": "Prasad_Kini" } ]
Atlas fulltext searches - "contains" query
2023-03-03T21:20:05.030Z
Atlas fulltext searches - &ldquo;contains&rdquo; query
879
https://www.mongodb.com/…_2_1024x450.jpeg
[ "app-services-user-auth", "realm-web", "react-js" ]
[ { "code": "", "text": "Hi there,After anonymous login, I am performing linkCredentials with a Google credentials. I am getting following errorserver-error1093×481 32.9 KBCan you please guide me how to fix this error.Thanks\nSudarshan", "username": "Sudarshan_Roy" }, { "code": "", "text": "Which SDK are you using?Can you share the code that you’re using to set this up?", "username": "Andrew_Morgan" }, { "code": "import { App, Credentials } from \"realm-web\";\n\n//Calling initializeRealmApp function from the root react component\nconst initializeRealmApp = () => {\n const app = new App({ id: process.env.REACT_APP_ID });\n return app;\n};\n\n//performing anonymous login to call some realm functions\nconst loginAnonymousRealm = () => {\n const app = App.getApp(process.env.REACT_APP_ID);\n const credentials = Credentials.anonymous();\n return app.logIn(credentials);\n};\n\n\n//Once user chooses to signin with Google\n//Google Sdk callback the handleLogin function with \"response\" object, \n//which contains credential.\n//handleLogin function in turn calls loginGoogleRealm function\nconst loginGoogleRealm = async (response) => {\n const app = App.getApp(process.env.REACT_APP_ID);\n const credential = Credentials.google(response.credential);\n await app.currentUser.linkCredentials(credential);\n};", "text": "Hi Andrew,I am using realm-web sdk in react-js.Following is the code snippet:", "username": "Sudarshan_Roy" }, { "code": "", "text": "Have you checked that everything is set up correctly (with the correct permissions etc.) in your Google app? Any logs on the Google side?", "username": "Andrew_Morgan" }, { "code": "const user = await app.logIn(Realm.Credentials.anonymous());\nawait app.emailPasswordAuth.registerUser(email, password);\nconst credentials = Realm.Credentials.emailPassword(email, password);\nawait user.linkCredentials(credentials);\nlinkCredentialsUnhandled Rejection (Error): Request failed (POST https://stitch.mongodb.com/api/client/v2.0/app/app-id/auth/providers/local-userpass/login?link=true): linking forbidden without first specifying allowed request origins (status 403)", "text": "I’m experiencing the same issue with realm-web 1.3.0. Simplified use case:The linkCredentials method results in Unhandled Rejection (Error): Request failed (POST https://stitch.mongodb.com/api/client/v2.0/app/app-id/auth/providers/local-userpass/login?link=true): linking forbidden without first specifying allowed request origins (status 403)How do we specify the allowed request origin?", "username": "Craig_Phares" }, { "code": "", "text": "Turns out this is due to a setting in Realm. App Settings > Allowed Request Origins > + Add Allowed Request Origin.", "username": "Craig_Phares" }, { "code": "", "text": "Hey,\nWere you able to find a solution to this? I tried what @Craig_Phares mentioned, but as soon as I deploy the application after adding the origins, the added origins get removed automatically by Atlas (Tried doing it both from CLI as well as the UI).Stuck for now.", "username": "Rajeev_R_Sharma" }, { "code": "", "text": "I’m having the same issue.How to reproduce:", "username": "Daniel_Weiss" } ]
linkCredentials: linking forbidden without first specifying allowed request origins
2021-05-10T08:38:52.323Z
linkCredentials: linking forbidden without first specifying allowed request origins
5,306
null
[ "atlas-cluster", "rust" ]
[ { "code": "?clientasync fn index(Form(login): Form<Login>)-> Response<String>{\n\tlet client = Client::with_uri_str(\"mongodb+srv://user:[email protected]/?retryWrites=true&w=majority\").await?;\n\tlet db = client.database(\"db\").collection::<Login>(\"coll\");\n\t//...\n\tOk(Response::builder().status(axum::http::StatusCode::OK)\n .header(\"Content-Type\", \"text/html; charset=utf-8\")\n .body(tera.render(\"index\", &context).unwrap()).unwrap())\n}\n#9 319.5 error[E0277]: the `?` operator can only be used in an async function that returns `Result` or `Option` (or another type that implements `FromResidual`)\n#9 319.5 --> src/main.rs:51:126\n#9 319.5 50 | async fn index(Form(login): Form<Login>)-> Response<String>{\n#9 319.5 | ______________________________________________________________-\n#9 319.5 51 | | let client = Client::with_uri_str(\"***cluster0.um0c2p7.mongodb.net/?retryWrites=true&w=majority\").await?;\n#9 319.5 | | ^ cannot use the `?` operator in an async function that returns `Response<std::string::String>`\n#9 319.5 52 | | let db = client.database(\"db\").collection::<Login>(\"coll\");\n#9 319.5 53 | | let deb: Login = db.find_one(doc!{\"user\":\"test\"},None).await?;\n#9 319.5 65 | | .body(tera.render(\"index\", &context).unwrap()).unwrap()\n#9 319.5 66 | | }\n#9 319.5 | |_- this function should return `Result` or `Option` to accept `?`\n#9 319.5 = help: the trait `FromResidual<Result<Infallible, mongodb::error::Error>>` is not implemented for `Response<std::string::String>`\nresult", "text": "Hi,\nI’m trying to use ? with client with this code:But this error appears:I tried to use result in many ways and it didn’t work either, is there a solution for this problem?", "username": "mmahdi" }, { "code": "async fn index(Form(login): Form<Login>) -> Result<Response<String>, Error> {\n\tlet client = Client::with_uri_str(\"mongodb+srv://user:[email protected]/?retryWrites=true&w=majority\").await.unwrap();\nunwrapResult", "text": "Hi! You have a couple of options here - you can either change the return type of the function to be a Result type, e.g.or you can unwrap the value:Note that using unwrap means your code will panic if there’s an error, which will likely cause the program to terminate. This is fine for test code and proof of concept, but for anything beyond that I recommend using a Result. 
The Rust book has a good chapter on how to handle errors, if you haven’t read it yet I highly recommend it ", "username": "Abraham_Egnor" }, { "code": "async fn signin_form(Form(login): Form<Login>)-> Result<impl IntoResponse, Box<dyn Error>> {\n\tlet db = Client::with_uri_str(\"mongodb+srv://user:[email protected]/?retryWrites=true&w=majority\").await.unwrap().database(\"braq\").collection::<Login>(\"users\");\n\tlet find = db.find_one(doc!{\"user\":&login.user},None).await?;\n\t//..\n}\n#9 187.7 error[E0277]: the trait bound `fn(Form<Login>) -> impl Future<Output = Result<impl IntoResponse, Box<(dyn StdError + 'static)>>> {signin_form}: Handler<_, _, _>` is not satisfied\n#9 187.7 --> src/main.rs:13:39\n#9 187.7 |\n#9 187.7 13 | .route(\"/signin/\", get(signin).post(signin_form))\n#9 187.7 | ---- ^^^^^^^^^^^ the trait `Handler<_, _, _>` is not implemented for fn item `fn(Form<Login>) -> impl Future<Output = Result<impl IntoResponse, Box<(dyn StdError + 'static)>>> {signin_form}`\n#9 187.7 | |\n#9 187.7 | required by a bound introduced by this call\n#9 187.7 |\n#9 187.7 = help: the following other types implement trait `Handler<T, S, B>`:\n#9 187.7 <Layered<L, H, T, S, B, B2> as Handler<T, S, B2>>\n#9 187.7 <MethodRouter<S, B> as Handler<(), S, B>>\n#9 187.7 note: required by a bound in `MethodRouter::<S, B>::post`\n#9 187.7 --> /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.7/src/routing/method_routing.rs:618:5\n#9 187.7 |\n#9 187.7 618 | chained_handler_fn!(post, POST);\n#9 187.7 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ required by this bound in `MethodRouter::<S, B>::post`\n#9 187.7 = note: this error originates in the macro `chained_handler_fn` (in Nightly builds, run with -Z macro-backtrace for more info)\n#9 187.7 \n#9 187.8 For more information about this error, try `rustc --explain E0277`.\n#9 187.8 warning: `hello` (bin \"hello\") generated 1 warning\n#9 187.8 error: could not compile `hello` due to previous error; 1 warning emitted\n#9 ERROR: executor failed running [/bin/sh -c cargo build]: exit code: 101\n", "text": "Hi!,\nI have read the book but only understand when building projects.I’m trying to create a login page using the following code:But this error appears:", "username": "mmahdi" }, { "code": "StatusCodeBox<dyn Error>ClientClient", "text": "I’m not at all familiar with Axum, but as far as I can tell from its documentation the return type needs to use StatusCode as the error type, not Box<dyn Error>; you’ll need to convert errors produced by methods you call within your handler to that.I’d also recommend against creating a new Client for each request; Client construction is heavyweight, and while that will work, it’ll be slow. It looks like Axum provides good support for state shared across handlers, so that sounds like the way to go.", "username": "Abraham_Egnor" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
This function should return `Result` or `Option` to accept `?`
2023-03-06T08:36:23.494Z
This function should return `Result` or `Option` to accept `?`
1,713
null
[ "sharding" ]
[ { "code": "", "text": "how to troubleshoot mongo sharded environment like query slow", "username": "giribabu_venugopal" }, { "code": "", "text": "Hi @giribabu_venugopal and welcome to the MongoDB community!!Could you help with some details on the issues you are observing:Also, for troubleshooting slow queries in sharded cluster the following documentation can be of help.Let us know if you have any further queries.Best Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "i have two shards SH1 and SH2, most of my queries (inserts and update) queries going to SH2.\nBalancer is running.may i know what is reason ?", "username": "giribabu_venugopal" }, { "code": "rs2:PRIMARY> db.serverStatus().globalLock\n{\n \"totalTime\" : NumberLong(\"2535707000\"),\n \"currentQueue\" : {\n \"total\" : 2430,\n \"readers\" : 0,\n \"writers\" : 2430\n },\n \"activeClients\" : {\n \"total\" : 167,\n \"readers\" : 30,\n \"writers\" : 137\n }\n}\nrs2:PRIMARY>\n4:18\nrs1:PRIMARY> db.serverStatus().globalLock\n{\n \"totalTime\" : NumberLong(\"11815012000\"),\n \"currentQueue\" : {\n \"total\" : 0,\n \"readers\" : 0,\n \"writers\" : 0\n },\n \"activeClients\" : {\n \"total\" : 10,\n \"readers\" : 5,\n \"writers\" : 5\n }\n}\nrs1:PRIMARY>\n", "text": "", "username": "giribabu_venugopal" }, { "code": "sh.status()", "text": "i have two shards SH1 and SH2, most of my queries (inserts and update) queries going to SH2.How is your shard key defined? If your key is built on a monotonically increasing value you would definitely have a situation like this.What is the result of running sh.status()?", "username": "Doug_Duncan" }, { "code": "", "text": "no error in sh.statusalso shards are equally distributed", "username": "giribabu_venugopal" }, { "code": "sh.status()", "text": "I wasn’t asking if there were errors while running sh.status() but was hoping to see the results of from the command. That would help us to see what might be going on. Without having any of the information we’ve asked for, helping will be near impossible.", "username": "Doug_Duncan" }, { "code": "", "text": "i shard mongodb to 3 shards 3 replica and 3 config server 2 router server …\nbut insert query for a simple user collection is 30 case per 10 sec\nbefore sharding i have very speed 1000 insert per sec\nwaht happen for performance how can achive high speedwhat is your experience\nthis is not scaling it is like fu…ing scaling", "username": "kube_ctl" } ]
How to troubleshoot mongo sharded environment like query slow
2022-09-27T22:17:24.239Z
How to troubleshoot mongo sharded environment like query slow
3,457
null
[ "compass", "atlas-search" ]
[ { "code": "{\n _id: <whatever>,\n sound: 'Dong'\n}\n{\n mappingType: 'explicit',\n input: ['Ding'],\n synonyms: ['Ding', 'Dong']\n}\n'Ding'soundlucene.standardlucene.englishlucene.keyword \"sound\": {\n \"analyzer\": \"lucene.keyword\",\n \"searchAnalyzer\": \"lucene.keyword\",\n \"type\": \"string\"\n },\n \"synonyms\": [\n {\n \"analyzer\": \"lucene.keyword\",\n \"name\": \"synonym_mapping\",\n \"source\": {\n \"collection\": \"synonyms\"\n }\n }\n ]\nlucene.standardlucene.englishtype: \"DefaultQuery\"\"queryType\": \"SafeTermAutomatonQueryWrapper\"type: \"TermQuery\"", "text": "Say I just have one document in a collectionand a synonyms collection with only one mappingand I want to create a search index which uses those to return the one document when one queries for 'Ding' on the property sound.In this minimal example I can just use the lucene.standard analyzer and all works perfectly (lucene.english works as well). But changing just the analyzer definitions to lucene.keyword (and custom analyszers, but there I might be making another mistake) breaks things, i.e. no document is returned. The definitions are pretty straight-forward; search index field definitionand synonymsUsing MongoDB Compass to explain the query, I can see that for lucene.standard and lucene.english the explain looks slightly different (type: \"DefaultQuery\" and \"queryType\": \"SafeTermAutomatonQueryWrapper\" sounds like a wrapper for synonyms is used, maybe?) than for the not-working analyzers (type: \"TermQuery\"), but there is no documentation on what everything means.At this point, my best guess is that either some analyzers are not supposed to work with synonyms (I couldn’t find anything in the docs though, no error or warning either obviously), or the implementation to handle that case is missing.Am I doing something wrong?", "username": "Oliver_Haas" }, { "code": "", "text": "Hi @Oliver_Haas ! Can you share the query you are running?", "username": "amyjian" }, { "code": "[\n {\n $search: {\n index: \"default\",\n text: {\n query: \"Ding\",\n path: 'sound',\n synonyms: 'synonym_mapping'\n }\n }\n }\n]\n", "text": "Oh yes, sure. @amyjian", "username": "Oliver_Haas" }, { "code": "lucene.keywordsound: 'Ding''Ding'lucene.keyword'ding''ding'{\n mappingType: 'explicit',\n input: ['ding'],\n synonyms: ['Ding', 'Dong']\n}\n'Ding''Dong'lucene.keywordlucene.keywordlucene.keywordlucene.keyword", "text": "I think I somewhat understand the behavior now. The following starts with the use-case of the question with the lucene.keyword analyzer. What I think happens is the following:So if I change my synonyms toI can find documents with 'Ding' or 'Dong', but here the case matters again, because that is lucene.keyword behavior.I guess it maybe makes sense, because I read that lucene (always?) parses queries to lowercase, but since this conflict with the behavior of lucene.keyword this is pretty confusing, to me anyway. Honestly I feel like this is a mistake in the implementation, since lucene.keyword can’t be used case-sensitive with synonyms this way.What I will use in the end is a custom analyzer which behaves like a case-insenstive lucene.keyword, since I don’t care about the case but want to match multi-word-queries otherwise, and use lowercase synonyms. But I won’t start with this today…", "username": "Oliver_Haas" } ]
Synonyms are ignored when using some analyzers
2023-03-08T14:02:31.778Z
Synonyms are ignored when using some analyzers
729
null
[ "aggregation", "dot-net" ]
[ { "code": "var searchBuilder = new SearchDefinitionBuilder<MyModel>();\nvar clauses = new List<SearchDefinition<MyModel>>();\nclauses.Add(searchBuilder.Phrase(\"topic\", \"water\"));\n\nvar compoundSearchDef = Builders<Product>.Search.Compound();\ncompoundSearchDef.Must(clauses);\n\nvar aggPipeline = new EmptyPipelineDefinition<MyModel>()\n .AppendStage(PipelineStageDefinitionBuilder.SearchMeta<MyModel>(searchDefinition: compoundSearchDef, indexName: MySearchIndexName));\n\n var aggResult = await collection.Aggregate(pipeline: aggPipeline).ToListAsync();\n$searchMeta: {\n index: defaults.graphIndex,\n facet:{\n operator: {\n compound:{\n must:defaults.aggregateFilters\n }\n },\n facets: searchMetaFacets\n }\n }\n", "text": "I am lost as to where I add the facet definitions in the searchmeta pipline.Here is a simplified example of my processI have a process the adds clauses based on data passed in, so to simplify the example here is one clause added to the filter with hard coded values:This works for getting the lower bounds count, but the facets return null, which makes sense, since I have not defined any. I just can’t seam to find where I add them in.Here is a working example I am trying to port from atlas to c#This is the part I am stuck on converting to c# facets: searchMetaFacets", "username": "Jeff_VanHorn" }, { "code": "facetcollection().Aggregate().SearchMeta(Builders.Search.Facet(...))\n var result = GetTestCollection().Aggregate()\n .SearchMeta(Builders.Search.Facet(\n Builders.Search.Phrase(x => x.Body, \"life, liberty, and the pursuit of happiness\"),\n Builders.SearchFacet.String(\"string\", x => x.Author, 100),\n Builders.SearchFacet.Number(\"number\", x => x.Index, 0, 100),\n Builders.SearchFacet.Date(\"date\", x => x.Date, DateTime.MinValue, DateTime.MaxValue)))\n .Single();\n", "text": "Hi Jeff. I see you’re having difficulty figuring out how to use the facets in the search meta pipeline. We recognize that our existing documentation is a bit sparse on this use case and our team is already planning to expand documentation to address this question.In the meantime, hopefully I can help.The short answer is that you can include your facet definition using an additional pipeline stage (facet) and that the query should take the form of:Here’s a working example of an aggregation using facets from our driver test cases.You can also reference the API docs for the Facet pipeline stage here.I’m out of office for the next few days, but I’ll check back in on Monday to see how things are looking. Hope this helps!", "username": "Patrick_Gilfether1" }, { "code": "", "text": "That was exactly what I needed!Thanks.", "username": "Jeff_VanHorn" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I add facet definitions to searchMeta in C#
2023-03-07T19:05:52.048Z
How do I add facet definitions to searchMeta in C#
1,251
null
[]
[ { "code": "\"msg\":\"Slow query\"\"msg\":\"serverStatus was very slow\"", "text": "We recently moved to a manual install of Rocketchat which uses MongoDB for it’s database and I am completely new to MongoDB.After running for a weeks without issue today it appears the kernel OOM killer went to work on MongoDB and killed the main server process. Normally when things like this happen, my assumption is that:It would also be nice if the Mongo server restarted itself after such events but that’s another story.As an absolute beginner, can someone please help guide me in getting to the root of what happened here? I see lots of \"msg\":\"Slow query\" and \"msg\":\"serverStatus was very slow\" messages in the logs among other things. Please let me know what further info I can provide here that would be of use debugging this issue.", "username": "billy_noah" }, { "code": "", "text": "Hi @billy_noah,\nI think you need to create index to permorf a faster query:\nI suggest you to read the production note to have the better environment for mongo (and avoid oom killer) :\nAnd finally i suggest you to install mongodb compass:Explore and interact with your data using Compass, the GUI for MongoDB. Query, modify, delete, and more — all from one interface.Best regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Thank you for the resources. I was just wondering if the “slow query” messages could simply be related to server memory issues stemming from some other database problem - or if we can confidently assume that they are the root cause of running out of memory? Coming over from MySQL, I’ve seen a single long running process do things like that and affect many other queries and overall performance.At any rate, is this a place where I can post a few log snippets to get more specific advice?", "username": "billy_noah" }, { "code": "mongodmongodmongod\"msg\":\"Slow query\"\"msg\":\"serverStatus was very slow\"", "text": "Hi @billy_noahSorry you’re having issues with this. OOMkiller is a common issue that typically points to the lack of resources in the server itself for the workload it’s supposed to do. You are correct that it may be caused by query issues. However if you’re only running Rocketchat, there’s probably not much to be done on that front. That is, other than provisioning a larger server, or setting up swap (this may delay but not eliminate the main issue).Are you running Rocketchat and mongod in the same server? If yes, it may benefit both apps if you separate them into their own servers. A single mongod process was designed to wholly take over the server to give you the best possible performance out of the hardware you have. Consequently in a busy server they frequently have resource contention issues with other apps, or with other mongod running on the same machine.I see lots of \"msg\":\"Slow query\" and \"msg\":\"serverStatus was very slow\" messages in the logs among other things.Yes this is typical message you see when a server is being overwhelmed by work. MongoDB calls serverStatus every second for telemetry purposes and the call should be very quick to complete (it was designed to place negligible burden on the server). 
If these calls start to get bogged down, it means the server is really busy.At any rate, is this a place where I can post a few log snippets to get more specific advice?Yes if you have some log snippets that you think may be helpful, please do post them here.Best regards\nKevin", "username": "kevinadi" }, { "code": "db.rocketchat_message.find({ '$text': { '$search': 'type=test' }, t: { '$ne': 'rm' }, _hidden: { '$ne': true }, rid: 'LkgTmX2dCncp5Rxtcx2Hj2YYiyyK49zj9i' })\ndb.rocketchat_message.find({ '$text': { '$search': 'type=www' }, t: { '$ne': 'rm' }, _hidden: { '$ne': true }, rid: 'LkgTmX2dCncp5Rxtcx2Hj2YYiyyK49zj9i' })\nridridriddb.rocketchat_message.createIndex( {\"msg\" : \"text\", \"rid\" : 1 } )\ndb.rocketchat_message.createIndex( { \"rid\" : 1, \"msg\" : \"text\" } )\nriddb.rocketchat_message.find({ '$text': { '$search': 'https://www.example.com' }, t: { '$ne': 'rm' }, _hidden: { '$ne': true }, rid: 'LkgTmX2dCncp5Rxtcx2Hj2YYiyyK49zj9i' }).explain('executionStats')\n{\n explainVersion: '1',\n queryPlanner: {\n namespace: 'rocketchat.rocketchat_message',\n indexFilterSet: false,\n parsedQuery: {\n '$and': [\n { rid: { '$eq': 'LkgTmX2dCncp5Rxtcx2Hj2YYiyyK49zj9i' } },\n { _hidden: { '$not': { '$eq': true } } },\n { t: { '$not': { '$eq': 'rm' } } },\n {\n '$text': {\n '$search': 'https://www.example.com',\n '$language': 'english',\n '$caseSensitive': false,\n '$diacriticSensitive': false\n }\n }\n ]\n },\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { _hidden: { '$not': { '$eq': true } } },\n { t: { '$not': { '$eq': 'rm' } } }\n ]\n },\n inputStage: {\n stage: 'TEXT_MATCH',\n indexPrefix: {},\n indexName: 'msg_text_rid_1',\n parsedTextQuery: {\n terms: [ 'com', 'exampl', 'https', 'www' ],\n negatedTerms: [],\n phrases: [],\n negatedPhrases: []\n },\n textIndexVersion: 3,\n inputStage: {\n stage: 'FETCH',\n inputStage: {\n stage: 'OR',\n filter: { rid: { '$eq': 'LkgTmX2dCncp5Rxtcx2Hj2YYiyyK49zj9i' } },\n inputStages: [\n {\n stage: 'IXSCAN',\n keyPattern: { _fts: 'text', _ftsx: 1, rid: 1 },\n indexName: 'msg_text_rid_1',\n isMultiKey: true,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'backward',\n indexBounds: {}\n },\n {\n stage: 'IXSCAN',\n keyPattern: { _fts: 'text', _ftsx: 1, rid: 1 },\n indexName: 'msg_text_rid_1',\n isMultiKey: true,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'backward',\n indexBounds: {}\n },\n {\n stage: 'IXSCAN',\n keyPattern: { _fts: 'text', _ftsx: 1, rid: 1 },\n indexName: 'msg_text_rid_1',\n isMultiKey: true,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'backward',\n indexBounds: {}\n },\n {\n stage: 'IXSCAN',\n keyPattern: { _fts: 'text', _ftsx: 1, rid: 1 },\n indexName: 'msg_text_rid_1',\n isMultiKey: true,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'backward',\n indexBounds: {}\n }\n ]\n }\n }\n }\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 2,\n executionTimeMillis: 47743,\n totalKeysExamined: 13301404,\n totalDocsExamined: 4,\n executionStages: {\n stage: 'FETCH',\n filter: {\n '$and': [\n { _hidden: { '$not': { '$eq': true } } },\n { t: { '$not': { '$eq': 'rm' } } }\n ]\n },\n nReturned: 2,\n executionTimeMillisEstimate: 15933,\n works: 13301408,\n advanced: 2,\n needTime: 13301405,\n needYield: 0,\n 
saveState: 13308,\n restoreState: 13308,\n isEOF: 1,\n docsExamined: 2,\n alreadyHasObj: 2,\n inputStage: {\n stage: 'TEXT_MATCH',\n nReturned: 2,\n executionTimeMillisEstimate: 15717,\n works: 13301408,\n advanced: 2,\n needTime: 13301405,\n needYield: 0,\n saveState: 13308,\n restoreState: 13308,\n isEOF: 1,\n indexPrefix: {},\n indexName: 'msg_text_rid_1',\n parsedTextQuery: {\n terms: [ 'com', 'exampl', 'https', 'www' ],\n negatedTerms: [],\n phrases: [],\n negatedPhrases: []\n },\n textIndexVersion: 3,\n docsRejected: 0,\n inputStage: {\n stage: 'FETCH',\n nReturned: 2,\n executionTimeMillisEstimate: 15470,\n works: 13301408,\n advanced: 2,\n needTime: 13301405,\n needYield: 0,\n saveState: 13308,\n restoreState: 13308,\n isEOF: 1,\n docsExamined: 2,\n alreadyHasObj: 0,\n inputStage: {\n stage: 'OR',\n filter: { rid: { '$eq': 'LkgTmX2dCncp5Rxtcx2Hj2YYiyyK49zj9i' } },\n nReturned: 2,\n executionTimeMillisEstimate: 15309,\n works: 13301408,\n advanced: 2,\n needTime: 13301405,\n needYield: 0,\n saveState: 13308,\n restoreState: 13308,\n isEOF: 1,\n dupsTested: 13301404,\n dupsDropped: 6824735,\n inputStages: [\n {\n stage: 'IXSCAN',\n nReturned: 351631,\n executionTimeMillisEstimate: 214,\n works: 351632,\n advanced: 351631,\n needTime: 0,\n needYield: 0,\n saveState: 13308,\n restoreState: 13308,\n isEOF: 1,\n keyPattern: { _fts: 'text', _ftsx: 1, rid: 1 },\n indexName: 'msg_text_rid_1',\n isMultiKey: true,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'backward',\n indexBounds: {},\n keysExamined: 351631,\n seeks: 1,\n dupsTested: 351631,\n dupsDropped: 0\n },\n {\n stage: 'IXSCAN',\n nReturned: 271,\n executionTimeMillisEstimate: 0,\n works: 272,\n advanced: 271,\n needTime: 0,\n needYield: 0,\n saveState: 13308,\n restoreState: 13308,\n isEOF: 1,\n keyPattern: { _fts: 'text', _ftsx: 1, rid: 1 },\n indexName: 'msg_text_rid_1',\n isMultiKey: true,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'backward',\n indexBounds: {},\n keysExamined: 271,\n seeks: 1,\n dupsTested: 271,\n dupsDropped: 0\n },\n {\n stage: 'IXSCAN',\n nReturned: 6476186,\n executionTimeMillisEstimate: 5115,\n works: 6476187,\n advanced: 6476186,\n needTime: 0,\n needYield: 0,\n saveState: 13308,\n restoreState: 13308,\n isEOF: 1,\n keyPattern: { _fts: 'text', _ftsx: 1, rid: 1 },\n indexName: 'msg_text_rid_1',\n isMultiKey: true,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'backward',\n indexBounds: {},\n keysExamined: 6476186,\n seeks: 1,\n dupsTested: 6476186,\n dupsDropped: 0\n },\n {\n stage: 'IXSCAN',\n nReturned: 6473316,\n executionTimeMillisEstimate: 4796,\n works: 6473317,\n advanced: 6473316,\n needTime: 0,\n needYield: 0,\n saveState: 13308,\n restoreState: 13308,\n isEOF: 1,\n keyPattern: { _fts: 'text', _ftsx: 1, rid: 1 },\n indexName: 'msg_text_rid_1',\n isMultiKey: true,\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'backward',\n indexBounds: {},\n keysExamined: 6473316,\n seeks: 1,\n dupsTested: 6473316,\n dupsDropped: 0\n }\n ]\n }\n }\n }\n }\n },\n command: {\n find: 'rocketchat_message',\n filter: {\n '$text': { '$search': 'https://www.example.com' },\n t: { '$ne': 'rm' },\n _hidden: { '$ne': true },\n rid: 'LkgTmX2dCncp5Rxtcx2Hj2YYiyyK49zj9i'\n },\n '$db': 'rocketchat'\n },\n serverInfo: {\n host: 'chat',\n port: 27017,\n version: '5.0.15',\n gitVersion: '935639beed3d0c19c2551c93854b831107c0b118'\n },\n serverParameters: 
{\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 104857600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 104857600,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1678285569, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"e07e7a674340cdc236b59f232718bd4694bb0321\", \"hex\"), 0),\n keyId: Long(\"7176963756902055940\")\n }\n },\n operationTime: Timestamp({ t: 1678285569, i: 1 })\n}", "text": "A year late here @kevinadi but hopefully you are still around.After spending a bit of time over the last few days learning more about which specific queries have been causing issues I can offer a bit more here and hopefully get some productive feedback.The problem queries seem to be ones that use a text index and have certain words under 4 characters. For instance, this query performs fine:While this one is very slow:In fact, even searching “www” is very slow but “wwww” is not. Furthermore, despite having created a compound index and added rid to the text index I found the order of the columns makes a big difference. When I created this, it would search text fields first (on 6 million records) and then filter by rid which is silly because rid only matches 257 of those:This led me to recreate the index as such:This is now much faster but obviously fails completely when rid is not included. That makes the index useless for querying the entire table and since Mongodb only allows one text index per table I’m a bit stuck there.At any rate, I’m wondering I can get some specific input into why my initial queries are performing so badly on shorter word lengths. Here is an explain that might help shed some light:", "username": "billy_noah" } ]
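One hedged alternative suggested by the numbers in that explain plan (about 13.3 million index keys examined for 2 returned documents): when rid already narrows the search to a few hundred messages, a plain index on rid plus a regex filter evaluates the pattern only on that room's documents and never touches the text index's term postings. Sketch only, reusing the values from the explain output:

// Ordinary (non-text) index on the room id, if one does not exist already.
db.rocketchat_message.createIndex({ rid: 1 })

// Scoped search: the regex runs only over the documents matched by rid.
db.rocketchat_message.find({
  rid: "LkgTmX2dCncp5Rxtcx2Hj2YYiyyK49zj9i",
  t: { $ne: "rm" },
  _hidden: { $ne: true },
  msg: /www\.example\.com/i
})

A search across all rooms would still need the text index (or Atlas Search on a hosted deployment), so this only helps the per-room case.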
Debug Out of Memory Crash
2022-03-17T20:27:39.580Z
Debug Out of Memory Crash
5,606
null
[]
[ { "code": "", "text": "Is there any plan to support Google Cloud Eventarc integration like Atlas currently does with Amazon EventBridge ?\nAnd if so, when can we see that feature ?Thank you", "username": "Shunei_Hayakawa" }, { "code": "", "text": "This would be a really cool feature! I hope the MongoDB team will consider it!", "username": "Maarten_Baijs" } ]
Will MongoDB Atlas support Google Cloud Eventarc integration?
2022-11-09T00:32:48.709Z
Will MongoDB Atlas support Google Cloud Eventarc integration?
1,473
null
[ "upgrading" ]
[ { "code": "", "text": "Hey community!I cannot upgrade my MongoDB Atlas cluster 4.2 to 4.4, I get the message “We’re sorry, an unexpected error has occurred” with a 500 error code. No other error messages appear and at the top header I see the message “We are deploying your changes (current action: configuring MongoDB)”, but nothing happens.Any ideas?I’m running M10, in GCP Frankfurt", "username": "Arturas_Radzevicius" }, { "code": "", "text": "Hi @Arturas_RadzeviciusThis may be one that only MongoDB support can help with. Open a support case/chat with them.", "username": "chris" }, { "code": "", "text": "Hi @chris, thanks for your reply, I had to increase storage to be able to upgrade, sadly there was no error message indicating that.", "username": "Arturas_Radzevicius" } ]
Cannot upgrade MongoDB Atlas
2023-03-08T07:19:41.444Z
Cannot upgrade MongoDB Atlas
1,061
null
[ "queries" ]
[ { "code": "", "text": "If we save a field like this{\n“state_id”:\"640817d74490c6aa49689306,\n“state”: “State Name”\n}if we don’t want to run join, is it ok\nplease mention pros and cons", "username": "Zubair_Rajput" }, { "code": "\"640817d74490c6aa49689306\"ObjectId(\"640817d74490c6aa49689306\")localFieldforeignField", "text": "Hello @Zubair_Rajput,If I understand correctly, you want to save objectId as string type (hex representation) \"640817d74490c6aa49689306\", not objectId type ObjectId(\"640817d74490c6aa49689306\"),If we save a field like this{\n“state_id”:\"640817d74490c6aa49689306,\n“state”: “State Name”\n}if we don’t want to run join, is it okYes you can save, and there is no connection between join and objectId type, you can $lookup by any type of field, but should match the types of properties provided in localField and foreignField,I would suggest you store objectId in objectId type instead of string (hex representation), there are benefits:You can read more about ObjectID in this doc,", "username": "turivishal" }, { "code": "", "text": "{\n“firstname”:“firstname”,\n“lastname”:“lastname”,\n“state_id”: ObjectId(\"640817d74490c6aa49689306),\n“state”: “State Name”\n}For the sake of understand we can say it is a USER collection.\nActually I am saving with ObjectId, the question was that I want to save both id and it’s value in same document so that I don’t to run lookup. Is there any harm or bad about it.", "username": "Zubair_Rajput" }, { "code": "", "text": "the question was that I want to save both id and it’s value in same document so that I don’t to run lookup. Is there any harm or bad about it.I would say that is not bad, MongoDB’s rule of thumb is “Data that is accessed together should be stored together”.", "username": "turivishal" } ]
What about if we save value as well as it's objectid in the same collection
2023-03-08T07:01:51.008Z
What about if we save value as well as it’s objectid in the same collection
571
null
[ "queries", "java", "transactions" ]
[ { "code": "private void readOplog() {\n\n Publisher<Document> pubDoc = null;\n MongoDBOperationsSubscriber<Document> sub = null;\n\n Document filter = new Document();\n filter.put(\"ns\", namespace);\n \n pubDoc = fromCollection.find(clientSession, filter);\n sub = new MongoDBOperationsSubscriber<Document>();\n pubDoc.subscribe(sub);\n try {\n sub.await();\n sub.onComplete();\n } catch (Throwable e) {\n System.out.println(\n \"Read from Start, Error occurred while subscriber is in wait for messages from MongoDB publisher.\"\n + e);\n }\n fetchedOplogDocs = sub.getData();\n System.out.println(\"Total documents fetched so far are [ \"+ fetchedOplogDocs.size() +\" ].\");\n }\n", "text": "Hi Team,Recently we migrated from 2.6 to 3.6, Storage engine is still MMAVP1.But our code is extremely slow to read data/transactions from “oplog” [ oplog size - 50 GB ]. Queries were taking more than 90 minutes to give an output.After migration, Please let us know if any additional steps to be implemented regarding oplog configuration.Below is the snapshot of code for reading the “oplog”. The test instance where we directly installed 3.6, the same code works without any lag.Please let us know if there is any suggestion which need to be implemented on Mongo for improving the performance of reading the “oplog”.", "username": "Anwesh_Kota" }, { "code": "", "text": "Hi @Anwesh_Kota,Welcome to the MongoDB Community forums Recently we migrated from 2.6 to 3.6, Storage engine is still MMAVP1.Unfortunately, it’s difficult to say what is happening as both MongoDB 2.6 and 3.6 series are out of support, and the MMAPv1 storage engine was removed since MongoDB 4.2. It’s a distinct possibility that the issue you’re seeing is due to the characteristic of MMAPv1.I would recommend you upgrade to a supported version of MongoDB using the WiredTiger storage engine as soon as possible.Let me know if you have any further questions or concerns.Best,\nKushagra", "username": "Kushagra_Kesav" } ]
MongoDB performance degradation in reading "oplog" using java driver after migrating from 2.6 to 3.6
2023-03-07T08:08:58.510Z
MongoDB performance degradation in reading &ldquo;oplog&rdquo; using java driver after migrating from 2.6 to 3.6
836
null
[ "java", "android" ]
[ { "code": "", "text": "With the move to Android Studio 4.2, there is now a warning on builds to move off of jcenter() as a repository. Doing this causes a build failure for me for android-adapters version 4.0.0What other repository has this other than jcenter()? What is the gradle setup to access it?Thanks!", "username": "Tad_Frysinger" }, { "code": "", "text": "@Tad_Frysinger: We have already moved away from jCenter a while back, consider migrating to 10.4.x.For detailed info, you can check our release log", "username": "Mohit_Sharma" }, { "code": "", "text": "I have already upgraded to 10.4.0, but for the Realm recyclerview adapter 4.0.0 this is not on mavenCentral(), only jcenter, as far as I can tell.", "username": "Tad_Frysinger" }, { "code": "", "text": "Btw - here is a screenshot from your docs on 10.4.0 and up, note that it still references jcenter().realm_config753×948 28.5 KB", "username": "Tad_Frysinger" }, { "code": "", "text": "Hi is there an update here? It seems you need to move your 4.0.0 adapters to mavenCentral() as well, correct?", "username": "Tad_Frysinger" }, { "code": "", "text": "@Tad_Frysinger: Thanks for following up. Yes, team is looking into how quickly they can migrate adapters as well.", "username": "Mohit_Sharma" }, { "code": "", "text": "Mohit -Hi, any update?Thanks.Tad", "username": "Tad_Frysinger" }, { "code": "", "text": "@Tad_Frysinger : Sorry for delayed response, I don’t have update on this.But the team has committed on this and you check progress from Migrate android-adapters to MavenCentral · Issue #7486 · realm/realm-java · GitHub", "username": "Mohit_Sharma" }, { "code": "", "text": "Hi Mohit -It’s been over three weeks, this is still not resolved. It really is a pain during day-to-day development (I have noticed sometimes after the error occurs and I try again and it seems to build, the build actually has errors requiring me to do a FULL REBUILD after even the most trivial edits to any file that uses Realm).Can we PLEASE get this escalated and resolved?Thanks!", "username": "Tad_Frysinger" }, { "code": "", "text": "Btw - I am more than happy to do a screen share with a Realm developer to show the error occurring and debug it if necessary.Tad", "username": "Tad_Frysinger" }, { "code": "", "text": "@Tad_Frysinger : Sorry for about the issue you are facing. Let me try to followup on this.", "username": "Mohit_Sharma" }, { "code": "", "text": "Hi @Tad_Frysinger We are currently in the process of doing the migration. You can follow progress here https://github.com/realm/realm-android-adapters/pull/162", "username": "ChristanMelchior" }, { "code": "", "text": "Thanks, I will keep track of it and try it out when it gets finalized!", "username": "Tad_Frysinger" }, { "code": "", "text": "@Mohit_Sharma and @ChristanMelchior - it’s been two months since my original post. Not trying to be a pest, but this issue of not being able to do incremental builds because of Realm is getting to be a real pain. Any word on when we might see a fix? 
Thanks!", "username": "Tad_Frysinger" }, { "code": "", "text": "Btw - there is another issue opened up I think may be related to the realm adapter work, it is here.", "username": "Tad_Frysinger" }, { "code": "", "text": "@Tad_Frysinger: Sorry for the delayed response.I was trying to understand what best can be done to expedite this, considering the current situation we wouldn’t be able to commit to a fixed date but it looks like we can release this early next month.", "username": "Mohit_Sharma" }, { "code": "", "text": "@Tad_Frysinger : Hope you are doing good.I was internally trying to sort out something which can unblock you until official release, and @ChristanMelchior suggested that you can try replacing\nRealmRecyclerViewAdapter.java.Do let me know if this works out for you or not?", "username": "Mohit_Sharma" }, { "code": "", "text": "Do you mean utilize this file in my project directly and completely remove the reference to implementation group: ‘io.realm’, name: ‘android-adapters’, version: ‘4.0.0’ from my gradle file?", "username": "Tad_Frysinger" }, { "code": "", "text": "@ChristanMelchior : Can you please explain in detail ", "username": "Mohit_Sharma" }, { "code": "", "text": "Hello @ChristanMelchior Is there any news concerning this? I still cannot resolve the Realmbase adapter using MavenCentral and the documentation still reflects the same errors.", "username": "Jean-Rene_NSHUTI" } ]
Repository other than jcenter() to get realm adapters 4.0.0?
2021-05-09T13:52:15.936Z
Repository other than jcenter() to get realm adapters 4.0.0?
9,359
null
[ "aggregation", "dot-net", "compass", "atlas-search", "text-search" ]
[ { "code": "{\n $text: {\n $search: \"\\\"searchedquery\\\"\"\n },\n}\nvar stage1 = new BsonDocument(\"$match\", new BsonDocument(\"$text\", \n new BsonDocument(\"$search\", $\"\\\"{searchRequest.Query}\\\"\"\n)));\n", "text": "Hi,I have an aggregation pipeline with a $match $text $search stage, when I execute it in MongoDb Compass, the query returns the results. But the same query returns nothing when executed in dotnet.When I remove the double quotes (phrase) the query in dotnet returns the same result as in Compass.MongoDB Compass:.NET:Please advice.", "username": "Jorn_Kersseboom" }, { "code": "$textmongosh[\n {\n _id: ObjectId(\"64076217e3c19f9160c2fd0e\"),\n words: 'word1 word2 word3'\n },\n {\n _id: ObjectId(\"6407621ce3c19f9160c2fd0f\"),\n words: 'word1 word2 word3 word4'\n },\n {\n _id: ObjectId(\"64076226e3c19f9160c2fd10\"),\n words: 'wordr2 word1 word3'\n }\n]\ntextdb.textsearch.createIndex({words: \"text\"})\nmongoshtest> db.textsearch.aggregate([{ $match: { $text: { $search: \"\\\"word1 word2 word3\\\"\" } } }])\n[\n {\n _id: ObjectId(\"64076217e3c19f9160c2fd0e\"),\n words: 'word1 word2 word3'\n },\n {\n _id: ObjectId(\"6407621ce3c19f9160c2fd0f\"),\n words: 'word1 word2 word3 word4'\n }\n]\nusing System;\nusing MongoDB.Bson;\nusing MongoDB.Driver;\n\nvar client = new MongoClient();\nvar db = client.GetDatabase(\"test\");\nvar coll = db.GetCollection<BsonDocument>(\"textsearch\");\n\nvar searchRequest = new { Query = \"word1 word2 word3\" };\n\nvar stage1 = new BsonDocument(\"$match\", new BsonDocument(\"$text\",\n new BsonDocument(\"$search\", $\"\\\"{searchRequest.Query}\\\"\"\n )));\nvar query = coll.Aggregate().AppendStage<BsonDocument>(stage1);\nforeach (var doc in query.ToList())\n{\n Console.WriteLine(doc);\n}\nConsole.WriteLine(query);\nmongosh{ \"_id\" : ObjectId(\"64076217e3c19f9160c2fd0e\"), \"words\" : \"word1 word2 word3\" }\n{ \"_id\" : ObjectId(\"6407621ce3c19f9160c2fd0f\"), \"words\" : \"word1 word2 word3 word4\" }\naggregate([{ \"$match\" : { \"$text\" : { \"$search\" : \"\\\"word1 word2 word3\\\"\" } } }])\nBuilders<T>.Filter.TextBsonDocumentvar filter = Builders<BsonDocument>.Filter.Text($\"\\\"{searchRequest.Query}\\\"\");\nvar query = coll.Aggregate().Match(filter);\nConsole.WriteLine(query);\n", "text": "Hi, @Jorn_Kersseboom,Welcome to the MongoDB Community Forums.I understand that the MongoDB .NET/C# Driver is not returning the same results as MongoDB Compass for a $text search. I inserted the following 3 documents in a collection via mongosh:and created a text index:Running the text search in mongosh the first two documents are returned, but not the third (as expected):Running the following minimal C# repro:I received back the same two documents. I also print out the MQL that is sent to the server, which is the same as executed in mongosh:I would recommend writing your aggregation pipeline to the console to look for any differences potentially added by other stages.NOTE: You can use Builders<T>.Filter.Text to construct the query stage rather than BsonDocument.This outputs the identical query as displayed above.Hope that helps in your debugging efforts.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Hi,Thanks for the quick reply, I was able to reproduce your usecase and indeed the results were the same. 
I investigated my pipeline further and found that I made a mistake in another $match stage with $or.This helped a lot!\nNice day.", "username": "Jorn_Kersseboom" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB .NET driver aggregation $text $search phrase not returning results
2023-03-07T10:51:18.942Z
MongoDB .NET driver aggregation $text $search phrase not returning results
1,380