Dataset schema:
- image_url: string (length 113-131)
- tags: sequence
- discussion: list
- title: string (length 8-254)
- created_at: string (length 24)
- fancy_title: string (length 8-396)
- views: int64 (73-422k)
null
[ "app-services-cli" ]
[ { "code": "realm-cli push --remote=\"<our app name>\" --include-node-modulesDeployed app is identical to proposed version, nothing to do", "text": "I have an automated deployment setup for my realm app, and sometimes our dependencies change without any changes to our realm app configuration files.However, when I try to push my changes using realm-cli push --remote=\"<our app name>\" --include-node-modules, it won’t push the new dependencies if the app configuration files haven’t changed in any way. Instead, it will give the message Deployed app is identical to proposed version, nothing to doIs there a way to force-update the dependencies, even if the app configuration files haven’t changed?", "username": "Elias_Heffan" }, { "code": "", "text": "Anything new? This is a real issue…", "username": "YuvalW" } ]
Can't push only node_modules through realm-cli when app hasn't changed
2021-11-07T00:12:14.867Z
Can't push only node_modules through realm-cli when app hasn't changed
3,493
null
[ "node-js", "atlas-functions" ]
[ { "code": "context.services.get(\"mongodb-atlas\").db(\"XYZ\").collection(headers.table.toString()).insertOne(JSON.parse(body.text()));TypeError: Cannot access member 'toString' of undefined", "text": "In order to make function dynamic, I am try to read collection names from headers but getting this error. I have create custom header key namely ‘table’ and gave collection name as value.context.services.get(\"mongodb-atlas\").db(\"XYZ\").collection(headers.table.toString()).insertOne(JSON.parse(body.text()));TypeError: Cannot access member 'toString' of undefined", "username": "Rajan_Braiya" }, { "code": "context.services.get(\"mongodb-atlas\").db(\"XYZ\").collection(headers.table.toString()).insertOne(JSON.parse(body.text()));TypeError: Cannot access member 'toString' of undefinedexports = async function (coll_name) {\n const result = context.services.get(\"mongodb-atlas\").db(\"test\").collection(coll_name).findOne();\n return {result}\n}\ncoll_nameTesting Console", "text": "Hey @Rajan_Braiya,Thank you for reaching out to the MongoDB Community forums.I am try to read collection names from headers but getting this error. I have created a custom header key namely ‘table’ and given a collection name as a value.Could you please clarify how you are passing the collection name and provide the full code snippet to better understand the issue?context.services.get(\"mongodb-atlas\").db(\"XYZ\").collection(headers.table.toString()).insertOne(JSON.parse(body.text()));TypeError: Cannot access member 'toString' of undefinedHowever, if you want to make the function dynamic and pass the collection name in the Atlas Function, you can refer to the following code snippet:In this case, you need to pass the coll_name from the Testing Console as shown in the screenshot below:\nimage3030×436 77.3 KB\nI hope it helps!Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "headers.tableheaders.table.toString()", "text": "Hi @Kushagra_Kesav Thank you for your reply.I am trying to test API using Postman with the static collection name its working fine, but I want to read collection name from headers object to make endpoint more dynamic for all table. So I tried creating new key on postman headers namely ‘table’ and in the value I am passing the actual collection name, but when I am read like headers.table or headers.table.toString() getting error.", "username": "Rajan_Braiya" }, { "code": "find(id, collection): Observable<any> {\n const headers = new HttpHeaders({\n 'Content-Type': 'application/json',\n 'Authorization': `civcw ywewe2422`,\n 'Table': collection\n });\n\n const queryParams = new URLSearchParams(id ? { _id: id } : {});\n return this.http.get(`${mongo}/find?${queryParams}`, { headers: headers });\n}\ncontext.services.get(\"mongodb-atlas\").db(\"xyz\").collection(headers.Table.toString());\nAccess to XMLHttpRequest at 'https://eu-west-1.aws.data.mongodb-api.com/app/zyx-xshsy/endpoint/find?' from origin 'http://localhost:4200' has been blocked by the CORS policy. The response to the preflight request doesn't pass the access control check, as there is no 'Access-Control-Allow-Origin' header present on the requested resource.\n\nGET https://eu-west-1.aws.data.mongodb-api.com/app/xyz-xshsy/endpoint/find? net::ERR_FAILED\n\n\nIt seems like the issue lies with CORS (Cross-Origin Resource Sharing) policy blocking your request when it's coming from the web app at 'http://localhost:4200'. 
You may need to configure CORS settings on the server-side to allow requests from this origin.```", "text": "Hello @Kushagra_Kesav,I’m attempting to call MongoDB endpoints from an Angular web application, and everything seems to be working well. However, I need to pass the collection name dynamically with the API request. To achieve this, I’m trying to send it within the headers by adding a custom header named ‘Table’ to hold the collection name.Here is the JavaScript function:In my MongoDB function:When I tested this using Postman, it worked perfectly. However, when I make the same call from the web app, I encounter a CORS policy error. I even tried making the call without the custom header, and it worked fine on the app.Here’s the CORS error message:", "username": "Rajan_Braiya" } ]
How to read custom header key values and use in function?
2023-06-28T09:08:15.657Z
How to read custom header key values and use in function?
868
https://www.mongodb.com/…c35f60135efe.png
[ "replication" ]
[ { "code": "", "text": "\nimage541×550 17.9 KB\nIs there any solution to release the disk space after document deletion? (would be great if the solution does not cause any downtime.)", "username": "Weilin_wu" }, { "code": "", "text": "Hello, welcome to the MongoDB communityAfter deleting the data, it is necessary to run a compact and then reduce the disk size.", "username": "Samuel_84194" }, { "code": "", "text": "same issues i need help please", "username": "Top1_seo_N_A" }, { "code": "", "text": "NoSQL and SQL Database systems never release used disk space to operating system, even when a large amount of data were deleted. While in a SQL database, reorganizing tables or indexes regularly could lower storage requirement, MongoDB also needs “compact” operation to defragment space used.", "username": "Zhen_Qu" }, { "code": "", "text": "Run the compression procedure on the read replicas, failover, and run on the old primary. This will solve your problem.", "username": "Samuel_84194" } ]
Deleted documents but disk space is not released
2023-09-13T14:16:44.585Z
Deleted documents but disk space is not released
489
https://www.mongodb.com/…_2_1024x576.jpeg
[ "node-js", "compass", "connecting", "atlas" ]
[ { "code": "", "text": "Can anyone help me in resolving this issue\n\n169528207280633960954962407666641920×1080 132 KB\n", "username": "Soumyadip_Roy" }, { "code": "", "text": "Hello, welcome to the MongoDB community, it will be our pleasure to help you.Read this forum and let me know if you have any questions.", "username": "Samuel_84194" } ]
I'm facing this error while connecting with the compass with atlas
2023-09-21T07:40:03.573Z
I'm facing this error while connecting with the compass with atlas
544
null
[ "database-tools", "backup", "storage" ]
[ { "code": "{\"t\":{\"$date\":\"2023-09-20T15:45:32.372+07:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":0,\"message\":{\"ts_sec\":1695199532,\"ts_usec\":372664,\"thread\":\"2409:0x7fbb2d1b4b80\",\"session_dhandle_name\":\"file:sizeStorer.wt\",\"session_name\":\"txn-recover\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__wt_block_read_off:226:sizeStorer.wt: potential hardware corruption, read checksum error for 4096B block at offset 36864: block header checksum of 0x92f4a8c5 doesn't match expected checksum of 0xaba5cd27\"}}}", "text": "My mongodb data got delete due to a security hole in my vps so all the table got deleted and i dont have any backup from mongodump just have the database data in /data/db from 1 month ago.\nSo i get that copy of data and cp to the /data/db folder\nand run\nmongod --dbpath /data/dbBut its return these error{\"t\":{\"$date\":\"2023-09-20T15:45:32.372+07:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":0,\"message\":{\"ts_sec\":1695199532,\"ts_usec\":372664,\"thread\":\"2409:0x7fbb2d1b4b80\",\"session_dhandle_name\":\"file:sizeStorer.wt\",\"session_name\":\"txn-recover\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__wt_block_read_off:226:sizeStorer.wt: potential hardware corruption, read checksum error for 4096B block at offset 36864: block header checksum of 0x92f4a8c5 doesn't match expected checksum of 0xaba5cd27\"}}}", "username": "long_van1" }, { "code": "", "text": "Good morning, welcome to the MongoDB community.Run a repair to try to solve the problem.mongod --repair --dbpath /data/db", "username": "Samuel_84194" }, { "code": "{\"t\":{\"$date\":\"2023-09-21T08:42:32.077+07:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22302, \"ctx\":\"initandlisten\",\"msg\":\"Recovering data from the last clean checkpoint.\"}\n{\"t\":{\"$date\":\"2023-09-21T08:42:32.077+07:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=1383M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-09-21T08:42:33.390+07:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":21,\"message\":\"[1695260553:390185][1139:0x7f5794b11b80], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __posix_open_file, 805: /var/lib/mongo/WiredTiger.turtle: handle-open: open: Is a directory\"}}\n{\"t\":{\"$date\":\"2023-09-21T08:42:33.392+07:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":21,\"message\":\"[1695260553:391825][1139:0x7f5794b11b80], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __posix_open_file, 805: /var/lib/mongo/WiredTiger.turtle: 
handle-open: open: Is a directory\"}}\n{\"t\":{\"$date\":\"2023-09-21T08:42:33.393+07:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":21,\"message\":\"[1695260553:393191][1139:0x7f5794b11b80], wiredtiger_open: [WT_VERB_DEFAULT][ERROR]: __posix_open_file, 805: /var/lib/mongo/WiredTiger.turtle: handle-open: open: Is a directory\"}}\n{\"t\":{\"$date\":\"2023-09-21T08:42:33.393+07:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"}\n{\"t\":{\"$date\":\"2023-09-21T08:42:33.393+07:00\"},\"s\":\"F\", \"c\":\"STORAGE\", \"id\":28595, \"ctx\":\"initandlisten\",\"msg\":\"Terminating.\",\"attr\":{\"reason\":\"21: Is a directory\"}}\n{\"t\":{\"$date\":\"2023-09-21T08:42:33.393+07:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23091, \"ctx\":\"initandlisten\",\"msg\":\"Fatal assertion\",\"attr\":{\"msgid\":28595,\"file\":\"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp\",\"line\":702}}\n{\"t\":{\"$date\":\"2023-09-21T08:42:33.393+07:00\"},\"s\":\"F\", \"c\":\"ASSERT\", \"id\":23092, \"ctx\":\"initandlisten\",\"msg\":\"\\n\\n***aborting after fassert() failure\\n\\n\"}\n", "text": "Thanks but after i run\nhere is the logI dont know but somehow in my backup my WiredTiger.turtle is a directory and not a file so its make the mongo error", "username": "long_van1" }, { "code": "[WT_VER", "text": "[WT_VERStrange, the WiredTiger.turtle file is a metadata file, there is no automated format to retrieve it. Can you do an ls -lrt on the folder and put the result here? Did you give the user permission?", "username": "Samuel_84194" }, { "code": "", "text": "Can you list the files within the directory .turtle?", "username": "Samuel_84194" } ]
How to backup standalone instance using vps file from 1 month ago
2023-09-20T08:49:33.753Z
How to backup standalone instance using vps file from 1 month ago
322
null
[ "android" ]
[ { "code": "{\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"id\": {\n \"bsonType\": \"objectId\"\n },\n \"name\": {\n \"bsonType\": \"string\"\n },\n \"online\": {\n \"bsonType\": \"boolean\"\n },\n \"wins\": {\n \"bsonType\": \"int\"\n },\n \"losses\": {\n \"bsonType\": \"int\"\n },\n \"draws\": {\n \"bsonType\": \"int\"\n }\n },\n \"title\": \"Users\",\n \"bsonType\": \"object\",\n \"required\": [\n \"_id\",\n \"id\",\n \"name\"\n ]\n}\nopen class Users: RealmObject {\n @PrimaryKey var _id: ObjectId = ObjectId.create()\n var id: ObjectId = ObjectId.create()\n var name: String = \"\"\n var online: Boolean? = null\n var wins: Long? = null\n var losses: Long? = null\n var draws: Long? = null\n}\n private fun createSync() {\n runBlocking {\n try {\n val user = app.currentUser\n if (user != null) {\n val config = SyncConfiguration.Builder(\n user = user,\n partitionValue = \"123456789\",\n schema = setOf(Users::class)\n ).build()\n val realm = Realm.open(config)\n val currentUser = Users().apply {\n name = \"MyName\"\n }\n realm.write {\n copyToRealm(currentUser)\n }\n } else {\n Log.d(\"PreparationScreen\", \"User Null\")\n }\n } catch (e: Exception) {\n Log.d(\"PreparationScreen\", \"$e\")\n }\n }\n }\n", "text": "I’m building an Android app using a Realm Sync Mongo DB library. The problem is that whenever I try to insert some data, I can see that data inside my local realm database file, however that data is not visible on the backend database on Mongo DB Atlas. I’ve already created a database. I’ve specified the schema for the model class like this:This is the model class inside my project:And this is the code that triggers the data insertion:Also I cannot see any log that prints the error message about the realm sync. How can I achieve that?", "username": "111757" }, { "code": "", "text": "Any help with this issue please?", "username": "111757" }, { "code": "", "text": "HelloHave you found any solution for this problem? Encountering same issue. Would be glad if you can share your solution/fix. Thank you", "username": "michael_villacarlos" } ]
Mongo DB Realm Sync - Can't See the data on Atlas
2022-09-07T07:14:14.394Z
Mongo DB Realm Sync - Can't See the data on Atlas
2,302
https://www.mongodb.com/…d_2_1024x338.png
[]
[ { "code": "", "text": "Hi there,I am trying to remove MongoDB on Almalinux 8. I seem to be stuck on the same error using dnf and yum. Any assistance as to how to resolve the issue?\n\nScreenshot 2023-09-20 at 22.46.452014×666 87 KB\nI have tried --skip-broken and --nobest together with no luck.Problem: package mongodb-org-tools requires mongodb-database-tools problem with installed packageThanks in advance ", "username": "Tahseen_Hisbani" }, { "code": "", "text": "Hi @Tahseen_Hisbani,I think you can remove manually the binary of mongo’ s installed.Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Any idea on how to do this on RHEL?I have found a guide for OSX but not RHEL.Thanks ", "username": "Tahseen_Hisbani" }, { "code": "", "text": "Hi @Tahseen_Hisbani,\nYou can find It with command locate,find etc…\nUsually are located in /usr/bin/lib or something similar.Regards", "username": "Fabio_Ramohitaj" } ]
Removing MongoDB from Almalinux 8 RHEL
2023-09-20T21:49:05.631Z
Removing MongoDB from Almalinux 8 RHEL
217
null
[ "aggregation" ]
[ { "code": "\"errCode\": 279,errMsg\": \"Error in $cursor stage :: caused by :: operation was interrupted because a client disconnected\",\n \"errName\": \"ClientDisconnect\",\n \"errCode\": 279,\n{\"t\":{\"$date\":\"2023-09-20T11:48:21.945+00:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":518, \n\"ctx\":\"conn1990250\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"DBO.sample\",\n\"command\":{\"aggregate\":\"test\",\"pipeline\":[{\"$match\":{\"$and\":[{\"status\":\"Open\"},{\"appDateTime\":{\"$gte\":{\"$date\":\"2023-09-20T11:48:13.031Z\"}}},\n{\"appDateTime\":{\"$lt\":{\"$date\":\"2023-10-20T11:48:13.031Z\"}}}]}},{\"$group\":{\"_id\":\"$storeNumber\",\"totalSlotCount\":{\"$sum\":1},\"minSlotDate\":\n{\"$min\":\"$appDateTime\"}}}],\"cursor\":{},\"allowDiskUse\":false,\"$db\":\"RAVaccineSchedulerPRODDB\",\"lsid\":{\"id\":{\"$uuid\":\"807d5eb6-4067-4051-a89b-38f06aa1bd86\"}}},\n\"planSummary\":\"IXSCAN { status: 1, appTime: 1, reservedTime: 1 }\",\n\"numYields\":1725,\"queryHash\":\"E2C2E097\",\"planCacheKey\":\"6DD10207\",\n\"ok\":0,\"errMsg\":\"Error in $cursor stage :: caused by :: operation was interrupted because a client disconnected\",\"errName\":\"ClientDisconnect\",\n\"errCode\":279,\"reslen\":311,\"locks\":{\"FeatureCompatibilityVersion\":{\"acquireCount\":{\"r\":1782}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":1782}},\n\"Global\":{\"acquireCount\":{\"r\":1782}},\"Database\":{\"acquireCount\":{\"r\":1781}},\"Collection\":{\"acquireCount\":{\"r\":1781}},\"Mutex\":{\"acquireCount\":{\"r\":57}}},\n\"protocol\":\"op_msg\",\"durationMillis\":8050}}\n", "text": "Hi Team,Can you fix the issue?\nWhat is causes \"errCode\": 279, was generated", "username": "Srihari_Mamidala" }, { "code": "\"errMsg\":\"Error in $cursor stage :: caused by :: operation was interrupted because a client disconnected", "text": "Hey @Srihari_Mamidala,\"errMsg\":\"Error in $cursor stage :: caused by :: operation was interrupted because a client disconnectedLooking at the log, it appears that your client code or application is terminating abnormally. Could you please share the logs or screenshot that shows what happened on the client or application side?Additionally, it would be helpful if you could share both the query you executed and the workflow you are following. These additional details can provide insights into the specific events around this issue.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Operation was interrupted because a client disconnected
2023-09-20T12:46:01.319Z
Operation was interrupted because a client disconnected
344
null
[ "graphql", "serverless", "app-services-data-access" ]
[ { "code": "", "text": "We are getting an alert about the connection limit being reached. We are using Mongo Atlas App Service as a GraphQL service to access MongoDB.We have a Serverless Cluster and a 100 connection limit. We are frequently reaching the connection limit in the development phase, Currently, we have only 2-3 users active on the website.Can you please suggest why we are reaching the connection limit and how we can overcome it?Thanks", "username": "Raman_Kumar" }, { "code": "", "text": "Hi Raman_KumarThank you for the question. In order to better assist you can you please answer the following questions:Can you please explain how you configured a connection limit of 100? All MongoDB Atlas Serverless Instances Serverless instances can support up to 500 simultaneous connections.What is your Org name?", "username": "Anurag_Kadasne" }, { "code": "", "text": "Hi Anurag_KadasnaThank you for the reply.\nimage1319×299 24 KB\nPlease let me know if you need more clarification from us.Thanks", "username": "Raman_Kumar" }, { "code": "", "text": "Hi Raman_KumarWe are investigating and will come back to you with a response shortly.Thanks", "username": "Anurag_Kadasne" }, { "code": "", "text": "Hi Anurag_KadasnaThank you for the reply. We are looking forward to you.", "username": "Raman_Kumar" }, { "code": "", "text": "Hi RamanWould you happen to know how many queries you were sending when you noticed the connection spike on August 28th? The number of connections are proportional to how heavy your workload was (in addition to being correlated to the number of users).", "username": "Anurag_Kadasne" }, { "code": "", "text": "Hi Anurag_KadasnaYes, we want to know how many queries we were sending when we noticed the connection spike on August 28th.Also, how can we control the connections as we are using MongoDB app Graphql service and we have no control over making connections and closing connections.Currently, we are in the development stage and we have only 4-5 users active on the website so I don’t think the user workload enabled connectionCan you please let us know the exact reason and give suggestions to overcome this problem?Thanks", "username": "Raman_Kumar" }, { "code": "", "text": "Hi Raman,Atlas GraphQL is a fully-managed, serverless hosted layer with built-in support for data permissioning, authentication, and relationships across collections. With that being said, we manage the connections for you, and will pool connections in order to improve data access performance to your cluster when requests are made through App Services.GraphQL opens a connection count relative to what your cluster can handle, but defaults to a certain number for serverless instances. It would be great to know how many requests you were making when you received the alert – this could let us know whether we should adjust our settings internally for the connection pool size.", "username": "Kaylee_Won" }, { "code": "", "text": "Hi Anurag_KadasnaThanks, but we are not sure about the number of requests and also unable to get logs on 28th August as App Service logs can be searched for the past 10 days so can you please suggest how we can get how many requests were made on 28th August?Thanks", "username": "Raman_Kumar" }, { "code": "", "text": "Hi Raman,We don’t have access to this data as we only save logs within a 10 day timeframe, but if this spike happens again, please let us know asap so we can look into your workload. 
Thank you!", "username": "Kaylee_Won" }, { "code": "", "text": "Hi @Kaylee_Won,Thanks for the support.We will let you know if the connection spike happens again.", "username": "Raman_Kumar" } ]
Alert - Connections % of configured limit has gone above 80%
2023-09-05T05:52:05.600Z
Alert - Connections % of configured limit has gone above 80%
774
https://www.mongodb.com/…_2_1024x531.jpeg
[ "data-modeling", "sharding", "containers" ]
[ { "code": "", "text": "Hello, there is a production in the IoT sector that prints 5 KB of data for 15000 devices every 30 minutes. My servers are approximately 5 16 core 32 GB ram vm linux. We manage all our applications and db’s through the docker swarm cluster under these 5 VMs. 3 masters 2 workers.\nWe want to keep this IoT data in mongodb. Can you guide me for the most optimal performance installation? Should it be in Docker swarm under the VM for Mongo, or should I separate it in a separate VM? We are considering indexing the DeviceId and ts fields. What should our Sharding Cluster structure and numbers be? How should I configure CQRS? Currently, I have a system that has 2 shardings via Mongos and 3 replicas in each sharding, but I send the requests to Mongos and it uses a lot of CPU and RAM, and I’m also having performance problems with CQRS.\n\nCapture1897×985 112 KB\n", "username": "yigit_yalnizca" }, { "code": "", "text": "Hi @yigit_yalnizca and welcome to MongoDB community forums!!Should it be in Docker swarm under the VM for Mongo, or should I separate it in a separate VM?Deploying on an individual VM or on a docker swarm would not make much difference as a docker swarm under the hood converts multiple Docker instances into a single virtual host. Please follow the guidelines for production notes and the hardware and OS configuration to deploy the database successfully.What should our Sharding Cluster structure and numbers be?If you are considering sharding as your deployment, the choice of the shard key would play an important role in this case and hence would suggest you to carefully follow all considerations before you select the shard key.Currently, I have a system that has 2 shardings via Mongos and 3 replicas in each sharding, but I send the requests to Mongos and it uses a lot of CPU and RAM, and I’m also having performance problems with CQRS.In saying so, could you help me understand on how does the deployment structure looks like in your case? What are the number of mongos and the shard chunks you are running on the system ?\nWhat is the data size and and what are the operations you are performing?\nCould you also help me understand the workload and the performance issues with the current deployment?Providing this information would help others to give extra context of your use case.Warm regards\nAasawari", "username": "Aasawari" } ]
What should be the mongodb installation and configuration recommendation in Docker swarm?
2023-09-18T16:05:23.286Z
What should be the mongodb installation and configuration recommendation in Docker swarm?
351
null
[ "atlas-search" ]
[ { "code": "{\n compound: {\n should: [\n {\n text: {\n path: 'i18.en.name', // Score = 10\n query: 'amsterdam'\n }\n },\n {\n text: {\n path: 'i18.fr.name', // Score = 8\n query: 'amsterdam'\n }\n },\n {\n text: {\n path: 'i18.es.name', // Score = 6\n query: 'amsterdam'\n }\n },\n ],\n minimumShouldMatch: 1,\n score: ?,\n }\n}\n", "text": "Is there a way to get the maximum score for compound should clauses instead of the combined score?Say I have the query:Say it has a match in all 3 languages and gives me a score of 24, I only want the score for the match with the highest score (10). So if the i18n.en.name has the highest match score with multiple matches, it would give me the same score if it only had a match for i18n.en.name and no others.In elasticsearch you can use dis_max for this, is there a way to do this in atlas search?Thanks!", "username": "Ruud_van_Buul" }, { "code": "", "text": "Hello @Ruud_van_Buul, and a warm welcome to the MongoDB Community forums!!In order to understand the scenario better, could you please help me with a few details in relation to the above concerns:Warm Regards,\nAasawari", "username": "Aasawari" } ]
Atlas search get maximum should score instead of combined should score
2023-09-19T02:54:00.542Z
Atlas search get maximum should score instead of combined should score
337
null
[]
[ { "code": "", "text": "Hi. I am bulk writing 10k updates to the same collection at a time and it’s quite a slow process because this particular collection has a lot of indexes. I’m guessing because of all the updates to the indexes. If they are being re-written after each write can this be reduced to once per bulk write in any way?\nThanks,\nMatt", "username": "Matthew_Gane" }, { "code": "", "text": "i guess that’s true. indexes are being updated one by one.from my understanding bulk update is no big different from single update except that it saves you some round trip time. you can try running the 10k batch in a non-peak time.If they are being re-written after each write can this be reduced to once per bulk write in any way?i guess no, at least the doc doesn’t mention it.", "username": "Kobe_W" }, { "code": "", "text": "OK, thanks for your time ", "username": "Matthew_Gane" } ]
When bulk writing, are indexes re-written after each write & if so is there a way to reduce this to once per bulk write?
2023-09-20T07:41:07.334Z
When bulk writing, are indexes re-written after each write & if so is there a way to reduce this to once per bulk write?
147
null
[ "node-js", "typescript" ]
[ { "code": "", "text": "Hello everyone. My name is Jamin, and I’m from Lagos, Nigeria. A FullStack developer with over a decade of experience building scalable web solutions using JavaScript, TypeScript, React, Nodejs, and MongoDB technologies. In addition to my technical knowledge, I’m a big fan of fostering developer communities around products I enjoy using I believe it’s a great way for peers to interact, share ideas, and learn from one another.I love exploring new opportunities for connecting people together and I believe communities are one of such ways so I’m building https://twitter.com/communityleads in my free time.I’m super excited to be here.", "username": "Trust_Jamin" }, { "code": "", "text": "Hello, I need help from users of MongoDB in Nigeria. I will appreciate any little time given to me.", "username": "Raymond_Olisa" }, { "code": "", "text": "Hi Raymond, is there a specific issue you need help with? Please elaborate so we can assist. Thank you!", "username": "Karissa_Fuller" } ]
Hello everyone, Jamin from Lagos Nigeria
2023-02-14T17:12:29.081Z
Hello everyone, Jamin from Lagos Nigeria
1,363
null
[ "aggregation" ]
[ { "code": "{\n \"myArray\": [\n {\n \"id\": \"1\",\n \"data_a\": 50,\n \"data_b\": 50\n },\n {\n \"id\": \"2\",\n \"data_a\": 100,\n \"data_b\": 200\n }\n ],\n}\n{\n \"myArray\": [\n {\n \"id\": \"1\",\n \"data_a\": 50,\n\t \"data_b\": 50,\n\t \"max\": 50\n },\n {\n \"id\": \"2\",\n \"data_a\": 100,\n\t \"data_b\": 200,\n\t \"max\": 200\n }\n ],\n}\n[\n {\n $addFields: {\n \"myArray.max\": {\n $max: [\n \"$myArray.data_a\",\n \"$myArray.data_b\",\n ],\n },\n },\n },\n]\n{\n \"interestedBusinessUnits\": [\n {\n \"id\": \"1\",\n \"data_a\": 50,\n \"data_b\": 50,\n \"max\": [50, 200]\n },\n {\n \"id\": \"2\",\n \"data_a\": 100,\n \"data_b\": 200,\n \"max\": [50, 200]\n }\n ],\n}\n", "text": "Hello,I have a collection with the following schema:Using an aggregation, my goal is to obtain the following result:When trying the following aggregation:I get the following result:I tried different techniques but can’t get to the desired result. Please assist!", "username": "Antoine_Delequeuche" }, { "code": "db.getCollection(\"Test\").aggregate([\n{\n $unwind:'$myArray'\n},\n{\n $addFields:{\n 'myArray.maxval':{\n $cond:{\n if:{$lt:['$myArray.data_b', '$myArray.data_a']},\n then:'$myArray.data_a',\n else:'$myArray.data_b'\n }\n }\n }\n},\n{\n $group:{\n _id:'$_id',\n myArray:{$push:'$myArray'}\n }\n}\n\n])\n", "text": "$max is an accumulator operator, so must be used within a $group stage (there is a non-aggregation $max for running updates as well)So an easy way is to unwind the arrays, then work out the max for each item and then re-combine.As an example of an approach:I’m sure there are other ways of doing this, but this is fairly simple, watch out for performance with a lot of data and $unwinding.", "username": "John_Sewell" }, { "code": "", "text": "Hello @John_Sewell,Thank you very much for your prompt answer. The actual collection is very large, so I wish to avoid that method: using $unwind and then $group is very costly performance-wise.By the way, using $max or $cond actually gave me the same output within $addFields stage, even though $max is an accumulator. Interesting!", "username": "Antoine_Delequeuche" }, { "code": "map = { \"$map\" : {\n \"input\" : \"$interestedBusinessUnits\" ,\n \"as\" : \"unit\" ,\n \"in\" : { \"$mergeObjects\" : [\n \"$$unit\" ,\n { \"max\" : { \"$max\" : [\n \"$$unit.data_a\" ,\n \"$$unit.data_b\"\n ] } }\n ] }\n} }\naddFields = { \"$addFields\" : {\n \"_result\" : map\n} }\npipeline = [ addFields ]\n", "text": "My approach would be to use $map with $mergeObjects.Something along the untested code:", "username": "steevej" }, { "code": " {\n $addFields: {\n myArray: {\n $map: {\n input: \"$myArray\",\n in: {\n $mergeObjects: [\n \"$$this\",\n {\n max: {\n $max: [\n \"$$this.data_a\",\n \"$$this.data_b\",\n ],\n },\n },\n ],\n },\n },\n },\n },\n },\n", "text": "Hello @steevej,Thank you very much! Once more, your brains save me! In case it would help someone else, I rewrite your solution as a pipeline, as this was my initial request:", "username": "Antoine_Delequeuche" }, { "code": "db.your_collection.aggregate( pipeline )\nmergeObject = { \"$mergeObjects\" : [\n \"$$unit\" \n { \"max\" : { \"$max\" : [\n \"$$unit.data_a\"\n \"$$unit.data_b\"\n ] } }\n] }\nmap = { \"$map\" : {\n \"input\" : \"$interestedBusinessUnits\" ,\n \"as\" : \"unit\" ,\n \"in\" : mergeObjects\n} }\n", "text": "I rewrite your solution as a pipelinePlease note that my solution is also pipeline. 
You simply run it with:It is written using variables because it is easier to develop, to read, to edit and to understand. Just like when you write a program using multiple small functions rather than having a monolithic main function that is indented past the middle of screen and where you have to scroll to see the end of a block.I even find the map object/variable quite big. I should have wrote instead:This way syntax errors are easier to correct because you edit a much smaller chunk of code.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Create a field in an array of objects, in an aggregation
2023-09-20T13:42:36.259Z
Create a field in an array of objects, in an aggregation
225
null
[]
[ { "code": "", "text": "Hello,I read the OP_MSG specifications in the wire protocol documentation and saw that there is a flag bit called more_to_come. I am trying to learn more about how this bit is toggled and how it would affect the requestID or responseTo fields for request/response messages. Is this bit commonly toggled? An example of a query to run to reproduce the scenario on wireshark would be great.Thanks so much!!", "username": "Kartik_Pattaswamy" }, { "code": "\n \n auto msg = assembleCommandRequest(_client, _ns.dbName(), getMoreRequest.toBSON({}), _readPref);\n \n \n // Set the exhaust flag if needed.\n if (_isExhaust) {\n OpMsg::setFlag(&msg, OpMsg::kExhaustSupported);\n }\n return msg;\n }\n \n \nbool DBClientCursor::init() {\n invariant(!_connectionHasPendingReplies);\n Message toSend = assembleInit();\n MONGO_verify(_client);\n Message reply;\n try {\n reply = _client->call(toSend, &_originalHost);\n } catch (const DBException&) {\n // log msg temp?\n LOGV2(20127, \"DBClientCursor::init call() failed\");\n // We always want to throw on network exceptions.\n throw;\n \n \n \n boost::optional<BSONObj> postBatchResumeToken = boost::none);\n \n \nvirtual ~DBClientCursor();\n \n \n/**\n * If true, safe to call next(). Requests more from server if necessary.\n */\n virtual bool more();\n \n \nbool hasMoreToCome() const {\n invariant(_isInitialized);\n return _connectionHasPendingReplies;\n }\n \n \n/**\n * If true, there is more in our local buffers to be fetched via next(). Returns false when a\n * getMore request back to server would be required. You can use this if you want to exhaust\n * whatever data has been fetched to the client already but then perhaps stop.\n */\n int objsLeftInBatch() const {\n invariant(_isInitialized);\n \n \n \n docs = self.unpack_response(codec_options=codec_options)\n assert self.number_returned == 1\n return docs[0]\n \n \ndef raw_command_response(self) -> NoReturn:\n \"\"\"Return the bytes of the command response.\"\"\"\n # This should never be called on _OpReply.\n raise NotImplementedError\n \n \n@property\n def more_to_come(self) -> bool:\n \"\"\"Is the moreToCome bit set on this response?\"\"\"\n return False\n \n \n@classmethod\n def unpack(cls, msg: bytes) -> _OpReply:\n \"\"\"Construct an _OpReply from raw bytes.\"\"\"\n # PYTHON-945: ignore starting_from field.\n flags, cursor_id, _, number_returned = cls.UNPACK_FROM(msg)\n \n \n documents = msg[20:]\n \n ", "text": "From the server source code looks like this is a trigger for that flag:", "username": "John_Sewell" }, { "code": "moreToCome", "text": "Hi, @Kartik_Pattaswamy,A good source of information is the moreToCome section of the OP_MSG spec. For a high-level overview of OP_MSG, I would recommend this blog post by one of our engineers.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Hello,\nThanks a lot for the response. I realize that the more to come bit is set internally for bulk writes/find operations but what actually makes a bulk operation and causes the bit to be set by the client or server? I am trying to create a query that sets the bit and capture the traffic through wireshark.", "username": "Kartik_Pattaswamy" }, { "code": "moreToComehellogetMore", "text": "The server will set the OP_MSG moreToCome flag on responses to exhaust cursors. Exhaust cursors are only valid for handshake messages (e.g. hello and legacy hello) and getMore. 
The easiest scenario to reproduce is to monitor the initial handshake between a 4.4-compatible driver and a 4.4 or later cluster. Streamable monitoring was introduced in 4.4. The initial handshake will start with OP_QUERY since the driver doesn’t know whether the cluster speaks OP_MSG. It will then upgrade to OP_MSG.Another option is to opt into Stable API, which was introduced in MongoDB 5.0. Because we know the driver is talking to a MongoDB 5.0 cluster, the initial handshake is performed over OP_MSG.I hope that this helps in your repro efforts.", "username": "James_Kovacs" }, { "code": "", "text": "Thanks a lot for the response. I was able to repro the server sending a message with moreToCome flag set. Is there also a way to repro the client also setting that flag on a request? I have tried inserting a document larger than the max BSON size as well sending a find query to receive many documents. Would I be right in assuming that this bit is usually set on server response?", "username": "Kartik_Pattaswamy" }, { "code": "moreToComemoreToComemoreToCome", "text": "A single document cannot exceed the maximum BSON size of 16MB. According to the spec, drivers can set moreToCome on a request for unacknowledged writes, but these are generally discouraged. Reviewing the .NET/C# Driver code, we do not set moreToCome for unacknowledged writes. Typically moreToCome is only set by the server.", "username": "James_Kovacs" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Need information about OP_MSG's more_to_come flag bit
2023-09-18T17:48:11.234Z
Need information about OP_MSG's more_to_come flag bit
309
null
[ "aggregation" ]
[ { "code": "{\n\t$and[\t\n\t\t{\n\t\t “a”:1\n\t\t},\n\t\t{\n\t\t “b”:2\n\t\t},\n\t\t{\n\t\t “c”:\n\t\t\t{$in:[3,4,5]}\n\t\t},\n\t\t{\n\t\t “d”: true // this is always fixed\n\t\t}\n\t],\n\t$or[\n\t\t{\n\t\t e: {$regex: “something”, $options: I}\n\t\t}\n\t]\n\n\t$sort:{\n\t\t{ f: 1}\n\t}\n}\n", "text": "Hi community,I’m working on the index building.I read the mongoDB documentation that talked about best practice on building index.\nFor example, I got that you should follow the ESR rule or that is better put in the first place of the index the attribute that have an higher cardinality.I thought I’ve grasp the concept but when I put in practice there were a few things that did not work as I expected.Let me explain better.The query under-study is the following (do not pay attention to eventually mistakes and how the query is written):I created also the following index (I used as I said the ESR and the cardinality rules):a:1, b:1, f:1, c:1, e:1\nI used also the partial index on the d field that is fixed for all the query and the collation to get better performance with the case insensitive regex.But, if I run the query with the explain I get that the index used is another one that have some fields of the query and other one that is not present, for example:b:1, c:1, a:1, h:1, i:1\n(c is a range and it should go after the sort field, h and i are not even present in the query)So, my questions are:Thanks in advance!", "username": "Luciano_Bigiotti" }, { "code": "", "text": "If there are multiple candidate indexes for a given query, the MongoDB Query Planner will test their relative performances by seeing which index returns 100 documents first. That index will then be used going forward, until another candidate index is added or the MongoDB process is restarted.Your ‘e’ field is queried with a regex, which can only use an index if it is anchored left.Regarding using different indexes: each clause of an $or query can use a different index, perhaps this is what you are seeing.", "username": "Peter_Hubbard" }, { "code": ".explain(\"allPlansExecution\")", "text": "Hi @Luciano_Bigiotti, great questions!Further to what my colleague @Peter_Hubbard mentioned, I’d broadly say that it is the job of any query optimizer to execute all queries as efficiently as possible. If you want to dive into the specifics of why one index/plan was chosen over another then you may wish to review the .explain(\"allPlansExecution\") output for the operations.I spoke about this topic a few years ago in Tips and Tricks for Query Performance: Let Us .explain() Them. 
You can separately find a diagram about how plan caching work presently works in MongoDB on our Query Plans page.Best of luck!\n-Chris", "username": "Christopher_Harris" }, { "code": "{\n\t$and[\t\n\t\t{\n\t\t “a”:1\n\t\t},\n\t\t{\n\t\t “b”:2\n\t\t},\n\t\t{\n\t\t “c”:\n\t\t\t{$in:[3,4,5]}\n\t\t},\n\t\t{\n\t\t “d”: true // this is always fixed\n\t\t}\n\t],\n\t$or[\n\t\t{\n\t\t e: {$regex: “something”, $options: I}\n\t\t}\n\t]\n\n\t$sort:{\n\t\t{ f: 1}\n\t}\n}\n{\n\t$and[\t\n\t\t{\n\t\t “a”:10\n\t\t},\n\t\t{\n\t\t “b”:12\n\t\t},\n\t\t{\n\t\t “c”:\n\t\t\t{$in:[3,4,5]}\n\t\t},\n\t\t{\n\t\t “d”: true // this is always fixed\n\t\t}\n\t],\n\t$or[\n\t\t{\n\t\t e: {$regex: “something”, $options: I}\n\t\t}\n\t]\n\n\t$sort:{\n\t\t{ f: 1}\n\t}\n}\n", "text": "Hi Peter, Christopher,Thanks for your answer and for the video.\nI watched it very carefully and I found it very interesting.But I did not find out the answer to all my questions.I do not understand why, executing the same query but with different values, mongoDB uses different index.For example, if I run:that returns 6 documents, mongoDB uses an index (let me call indexA), but if I run:that returns 200 documents, mongoDB uses another index (let me call indexB).How is it possible that changing the values in the query, mongoDB changes the used index?\nThe index used in a query is selected by query planner at the beginning and must be the same until a new query planner process start, isn’t it?Another question:\nApart from the ESR rule and the cardinality, is there some other best practice to use?\nIs there a way to choose the fields to use in an index?Thanks in advance", "username": "Luciano_Bigiotti" }, { "code": ".explain(\"allPlansExecution\")", "text": "Glad to hear that the video was helpful!The .explain(\"allPlansExecution\") output from the specific queries of interest in your environment would have the direct answers to some of your questions. Without that information we can respond to your questions in general terms, please find such responses below.How is it possible that changing the values in the query, mongoDB changes the used index?This is intentional and by design. I mentioned in my previous response that the job of any query optimizer to execute all queries as efficiently as possible. There are two components of that sentence which are important. The first is that I specifically didn’t make reference to a phrase like “the best index”. This is because it is not always the case that there is a single index that performs optimally for a given query shape. The second is that I mentioned “all queries”. The specific values of the query predicates matter for both of these items and optimizers do their best to account for them when planning and executing queries.The index used in a query is selected by query planner at the beginning and must be the same until a new query planner process start, isn’t it?No, this is not correct. In general the optimizer does attempt to reuse plans via a caching mechanism to minimize the amount of repetitive work that it needs to do. However because the values of the predicates can make a difference when it comes to query efficiency, there are safeguards in place to prevent plans from getting inappropriately used indefinitely for a given query shape. 
The aforementioned Query Plans page contains some details about the plan caching process.Apart from the ESR rule and the cardinality, is there some other best practice to use?Cardinality is typically not nearly as important of a factor when it comes to designing indexes as other things (such as they key ordering and reusability). The ESR Strategy provides easy-to-remember guidance that is effective in a variety of situations. That doesn’t mean that it always has the answer as there are always situations that require further consideration and have additional nuance. That said, I am not personally aware of any other ‘rules of thumb’ that are as generally applicable or that supersede this one.Is there a way to choose the fields to use in an index?I’m not sure what this question means.When preparing an execution plan for a query, the optimizer will take a look at the fields that are used in the query and will bound the index scan as much as possible. Its ability to do so is driven by how the fields are used in the index along with the structure of the index itself. The optimizer will be as aggressive as possible when applying the rules while making sure that the result set will be logically correct.Hope this helps.Best,\nChris", "username": "Christopher_Harris" } ]
MongoDB index, how does it work?
2023-09-18T09:58:01.931Z
MongoDB index, how does it work?
282
null
[]
[ { "code": "", "text": "Hi Team,I am trying to run the below command but the changes are reflecting on replication cluster secondary nodes.mongo --port <secondaty_node_port> --eval “rs.slaveOk()”Please advise on this.Thanks and Regards,\nRamesh", "username": "Ramesh_Audireddy1" }, { "code": "", "text": "Hi @Ramesh_Audireddy1,\ni don’ t understand your question, but if you want to enable the read operation on the secondary members, you can try “rs.secondaryOk()”.Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "For slaveOk vs secondaryOk see\nhttps://www.mongodb.com/community/forums/t/info-rs-slave-is-deprecated-use-rs-secondaryok/108435/2?u=steevejWhat do you mean bythe changes are reflecting on replication cluster secondary nodes", "username": "steevej" }, { "code": "mongo--evalmongo", "text": "Hi @Ramesh_Audireddy1,I think you may be misunderstanding the use case for setting read preferences. You would set this read preference before running commands or queries via the same connection.mongo --port <secondaty_node_port> --eval “rs.slaveOk()”The outcome of this would be mongo shell attempting to connect to your secondary on localhost and the specified port, running the JavaScript code in --eval, and then exiting. Setting the read preference for the current session has no effect on future sessions.Can you share more details on what you are trying to achieve? It sounds like perhaps you are trying to Write Scripts for the mongo Shell.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hi @Stennie_X ,Thank you for the update.I am trying to automate the replication setup via Ansible and want to enable read for all the secondary nodes. As part of the automation, the above mentioned command is executed via ansible on the secondary nodes.Please let me know if you need more details on this.Thanks and Regards,\nRamesh", "username": "Ramesh_Audireddy1" }, { "code": "", "text": "Hi @Fabio_Ramohitaj ,\nThank you for your reply. I am using the MongoDB 4.2 version. Seems the rs.secondaryok() will work from 4.5 or higher versions.Thanks and Regards,\nRamesh", "username": "Ramesh_Audireddy1" }, { "code": "", "text": "Hi @steevej ,I mean the read option is not activated even after running the rs.slaveOk(). Here is the command I executed on secondary nodes.\nmongo --port <secondaty_node_port> --eval “rs.slaveOk()”\nThanks and Regards,\nRamesh", "username": "Ramesh_Audireddy1" }, { "code": "", "text": "Read carefully, Stennie’s answer.Allowing secondary reads is transient for the current connection only.", "username": "steevej" }, { "code": "", "text": "I am trying to automate the replication setup via Ansible and want to enable read for all the secondary nodes. 
As part of the automation, the above mentioned command is executed via ansible on the secondary nodes.Hi @Ramesh_Audireddy1,Read preference is a client/driver option for routing requests, not a replica set configuration option.If you want clients/drivers to use a secondary read preference, they need to specify this in their connection string or using a per-query/command option.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "StennieHi @Stennie_X ,\nThank you for the details.In that case, don’t I need to run the “rs.slaveOk()” on the secondary node?Thanks and Regards,\nRamesh", "username": "Ramesh_Audireddy1" }, { "code": "rs.slaveOk()rs.secondaryOk()db.getMongo().setReadPref('secondary')", "text": "Hi all,rs.slaveOk() and rs.secondaryOk() are now deprecated.Use this instead:db.getMongo().setReadPref('secondary')Cheers,\nMaxime.", "username": "MaBeuLux88" } ]
Set rs.slaveOk() is not working via CLI?
2022-01-03T16:56:46.735Z
Set rs.slaveOk() is not working via CLI?
6,746
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "Hi,trying to take a backup using mongodump utility and using an account that is granted backup role…every time i execute the mongodump command the following error is thrown:could not connect to server: server selection error: server selection timeout, current topology: { Type: Single, Servers: [{ Addr: XXXXXXX:270171, Type: Unknown, Last error: connection() error occured during connection handshake: connection(XXXXXX:27017[-64]) socket was unexpectedly closed: EOF }, ] }Regards,\nEmad", "username": "emad_mousa" }, { "code": "", "text": "looks like connection to the server failed. Did you find any error messages in mongodb log file?are you able to connect to the server with mongo shell?what’s the command you run for mongodump?", "username": "Kobe_W" }, { "code": "", "text": "65535 is the max tcp port. You have 270171", "username": "chris" } ]
Error while backing up MongoDB instance using mongodump utility
2023-09-20T16:56:10.263Z
Error while backing up MongoDB instance using mongodump utility
367
null
[ "sharding" ]
[ { "code": "> db.fs.chunks.getShardDistribution()\nShard shtest at shtest/192.168.82.10:27019,192.168.82.20:27019\n{\n data: '716.95GiB',\n docs: 4038821,\n chunks: 7271,\n 'estimated data per chunk': '100.97MiB',\n 'estimated docs per chunk': 555\n}\n---\nTotals\n{\n data: '716.95GiB',\n docs: 4038821,\n chunks: 7271,\n 'Shard shtest': [\n '100 % data',\n '100 % docs in cluster',\n '186KiB avg obj size on shard'\n ]\n}\n\nshards \n[\n {\n _id: 'shtest', \n host: 'shtest/192.168.82.10:27019,192.168.82.20:27019',\n state: 1, \n topologyTime: Timestamp({ t: 1695174662, i: 4 })\n } \n] \n\n collections: {\n 'dbtest.fs.chunks': {\n shardKey: { files_id: 1, n: 1 },\n unique: false,\n balancing: true,\n chunkMetadata: [ { shard: 'shtest', nChunks: 7271 } ],\n chunks: [\n 'too many chunks to print, use verbose if you want to force print'\n ],\n tags: []\n }\n }\n", "text": "I have made an experimental setup with two sharding nodes by following the “Deploy a Sharded Cluster” tutorial.I intend to distribute accross the nodes a large GridFS chunks collection of over 700 GB.At first, I tried working with the collection already populated, but it appeared that MongoDB would not split it up into chunks. Therefore I exported the collection and imported it again to mongos, this time it turned into 7271 chunks.While this looked great I looked at the disk statistics and figured out that the full collection had been replicated entirely accrross the two nodes instead of being load balanced.From there, I am stuck.I am running MongoDB 6.", "username": "Daniel_Jackson3" }, { "code": "> sh.status()\n { shard: 'shtest', nChunks: 7231 },\n { shard: 'shtest1', nChunks: 40 }\n", "text": "I have resolved my issue.As I have limited knowledge of MongoDB I was a little confused reading documents saying that “shard” must be configured as “replicate” and actually got my two nodes as members of the same shard.I have configured each node with a different shard name and load balancing is now operating as it should.", "username": "Daniel_Jackson3" } ]
Collection replicated entirely instead of being distributed
2023-09-20T14:35:03.474Z
Collection replicated entirely instead of being distributed
254
https://www.mongodb.com/…5_2_1024x260.png
[ "node-js", "mongoose-odm" ]
[ { "code": "", "text": "Hello everyone, I am currently doing a project that simulates planets and launches in nasa. have a problem when I try to implement the SaveLaunch function and save a launch, I get the following error:\nimage1320×336 23.6 KB\nI did required the collection of the planets from my mongo in the current fileI would really appriciate if you could help me understand and solve this problem, unfortunately i didn’t find anything on the internet.Thank you,\nVera", "username": "vera_naroditski" }, { "code": "", "text": "What does the code look like? Looks like you’re calling findOne on the wrong object as whatever you’re calling it on does not have that method available.", "username": "John_Sewell" }, { "code": "", "text": "\nimage1032×718 58.4 KB\n\nthis is how the function looks like", "username": "vera_naroditski" }, { "code": "", "text": "\nimage1160×193 22 KB\nthis is the import", "username": "vera_naroditski" }, { "code": "", "text": "And what is planets? Where is the definition of that?", "username": "John_Sewell" }, { "code": "", "text": "planets only have names and i’ve imported them to mongo succesfully from csv file, I wanted to handle a case if I try to create a launch(that has planets as destinations) with a planet that doesn’t exist, that it won’t let me create the launch. this how the schema of planet and the definition looks like\n\nimage1233×575 42.6 KB\n", "username": "vera_naroditski" }, { "code": "", "text": "I’ve not used Mongoose that much personally but take a look at this SO article which seems to be similar:Something else to try is to set a breakpoint on that line and check what the signatures of the object in question look like and debug it.", "username": "John_Sewell" }, { "code": "", "text": "Also, do you have a circular dependency from that warning?If you can push code to a temporary repo somewhere it may be easier to debug / view for people.", "username": "John_Sewell" }, { "code": "", "text": "Hi vera_naroditski,\nI have faced the same issue with this problem. the issue lies in your launch object, If you rename the property called destination to the target in your launch object it will be resolved.\nThanks", "username": "dixit_joshi" } ]
Issue with findOne function
2023-08-11T07:05:22.964Z
Issue with findOne function
760
null
[ "atlas", "migration" ]
[ { "code": "", "text": "Hi,I’m trying to do a live import from/to MongoDB v7.0.1 instances but am getting the following error message in the pre validation step.We can’t start the migration yet. Live import validation failed. Live Migrations do not support clusters running MongoDB Version 7.0.0 or greater. Source: 7.0.1, Destination: 7.0.1. Learn more about migration troubleshooting. Any help would be appreciated.Thanks,", "username": "Jayride_Devops" }, { "code": "", "text": "Good morning, welcome to the MongoDB community.Currently, live migration does not support version 7, as stated in the documentation, nor does it support serverless instances.", "username": "Samuel_84194" }, { "code": "", "text": "We’ll be releasing mongosync with version 7 support this/next week. Live Migrate will have version 7 support soon after.", "username": "Alexander_Komyagin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Migration - Live Import pre validation failing for MongoDB 7.0.1
2023-09-20T05:24:51.873Z
Migration - Live Import pre validation failing for MongoDB 7.0.1
338
null
[ "time-series" ]
[ { "code": "", "text": "Okay, so I have some (TB sized) collections (observation data) that I’d like to copy over to equivalent time-series collections (currently I’m using 5.X, self-managed). The current collection has a proper UTC date-time as a “string” and time-series of course requires that to be a legit UTC “date”.Option A, do I introduce a new UTC date field (based on the string that exists) into the existing collection that permeates the entire collection? First of all, is that bad? I’m not 100% sure what Mongo would do at the locking level if a command like that was issued (on a production system).Option B: Pre-create the Time-Series collection, then attempt to populate it (from the original) while copy/converting that “string” UTC date to the actual UTC date required by time-series? Perhaps there is an option C that I haven’t thought of (other than \"just retire already!)? I’d greatly appreciate a nudge in the right direction.", "username": "Allan_Chase" }, { "code": "", "text": "Hey @Allan_Chase,Welcome to the MongoDB Community forums!Okay, so I have some (TB sized) collections (observation data) that I’d like to copy over to equivalent time-series collections (currently I’m using 5.X, self-managed).Could you please share the specific use case and the rationale behind opting for the time-series collections? Moreover, it’s essential to note that in time series collections, data insertion occurs in accordance with the timestamp, which ultimately results in the creation of meta-bucket collections.I recommend you consider inserting data in a sorted manner based on the timestamp. By doing so, you can fully leverage the advantages of time series data, ensuring efficient querying and analysis.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hey @Allan_Chase,Option A, do I introduce a new UTC date field (based on the string that exists) into the existing collection that permeates the entire collection? First of all, is that bad? I’m not 100% sure what Mongo would do at the locking level if a command like that was issued (on a production system).Option B: Pre-create the Time-Series collection, then attempt to populate it (from the original) while copy/converting that “string” UTC date to the actual UTC date required by time-series? Perhaps there is an option C that I haven’t thought of (other than \"just retire already!)? I’d greatly appreciate a nudge in the right direction.You can use the MongoDB Kafka connector which will automatically convert the string to BSON Date:Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Great, thank you for responding. The types of use cases is 100% on par with the concept of weather sensors that repeatedly give new readings at well defined intervals. The rationale is that we were using a pre 5.X version of MongoDB that didn’t support Time Series Collections and thought that a handful of our collections fit the mold of Time Series. I will look at inserting/experimenting with sorted order with some of the smaller collections and see how that goes. Thanks again!", "username": "Allan_Chase" }, { "code": "", "text": "Thanks Kushagra, I’ll have a second look at Kafka. At first glance I thought it was too complicated for something I should be able to do with NoSQLBooster or export/import features. I’m totally old school when it comes to doing things like this, so I’ll take your advice and see how we go. Thank again.", "username": "Allan_Chase" } ]
Copy data from standard collection to time-series collection
2023-09-19T23:50:59.565Z
Copy data from standard collection to time-series collection
317
null
[]
[ { "code": "", "text": "Tive que criar um campo de nome: normalized Field, onde no pré save, salvar o valor dos outros campos sem acentos, sem espaço e tudo caixa baixa. Tudo isto, porque não consegui resolver com um simples “find” usando e não usando “index search” e ter uma pesquisa que aceitasse receber parte de um nome sem ou com acento e buscasse no banco.I had to create a field named: normalized Field, where in the pre-save, save the value of the other fields without accents, without spaces and all in lower case. All this, because I couldn’t solve it with a simple “find” using and not using “index search” and have a search that accepted receiving part of a name without or with an accent and searching in the database.", "username": "Marcos_Andre_Gerard_alves" }, { "code": "", "text": "Hi @Marcos_Andre_Gerard_alves , can you share an example of the document you want to search for, the search query and the search index definition? Atlas Search offers many ways to search data, and I think language analyzers may be able to help for your use case.", "username": "amyjian" } ]
Search disregarding accents and lower case
2023-09-20T13:17:18.046Z
Search disregarding accents and lower case
269
null
[ "java", "transactions", "spring-data-odm" ]
[ { "code": "", "text": "Hello All,I have a scenario where i make 2 updates in a @Transactional block of Spring Data.\nWe for sure know that the mongo DB clusters were being restarted during the updates.Now during the execution, the Spring app received the error\n“Could not commit mongo Transaction to session”, but when we see the database after the restart, the updates made were visible.Could someone help in understanding why this might happen?", "username": "Vidyadhar_V_Bhutaki" }, { "code": "", "text": "Good morning!What type of Write Concern were you using?", "username": "Samuel_84194" }, { "code": "", "text": "We have 3 nodes in a cluster and write concern is Majority.\nand we for sure know that it was rolling restarts of the nodes", "username": "Vidyadhar_V_Bhutaki" } ]
Transaction - Received update failure exception from mongo, but could see update in Database
2023-09-19T13:54:40.645Z
Transaction - Received update failure exception from mongo, but could see update in Database
342
null
[ "connector-for-bi" ]
[ { "code": "", "text": "Hey,\nI am loading the data into Power BI using Mongodb Atlas SQL ODBC Driver (since new power BI connector does not allow dataset refresh nor allow live query option for now). I was not getting any column names in power BI. After initial research, I found that I have to execute the following commanddb.runCommand({\nsqlGenerateSchema: 1,\nsampleNamespaces: [‘DB.col1’],\nsampleSize: 100,\nsetSchemas: true\n})Once I executed the above command, It initially executed successfully and columns were visible to the power BI. After that, I have exposed new fields/columns into the same collection however Power BI is not showing the new columns/fields. I am under the impression that I have to execute the above command again to expose new fields to Power BI, However, when I am executing this command again, I am getting the following error.{\nok: 1,\nschemas: ,\nfailedNamespaces: [ { namespace: ‘DB.col1’, error: ‘an internal error occurred’ } ]\n}Could anyone help me out how to resolve this issue or how I can have all fields/columns visible into Power BI?Best\nShuja", "username": "Shuja" }, { "code": "", "text": "hi @Shuja\nYou are correct that if new fields are added to you underlying data source collection, you would need to regenerate the sql schema to pick up those new fields. We are working on a project now that will allow users to manage the SQL schema from within the Atlas UI, I think this would help tremendously.As for the error you are now getting, can you verify this namespace does still exist? The fact that you ran this command to success before, leads me to believe you did this in the past with correct syntax etc.Are you by any chance able to run this command for other virtual collections within your federated db? I just want to rule out that this error maybe caused by some unforeseen data type within the underlying collections or this is a wider spread error.\n\nScreenshot 2023-08-25 at 9.15.00 AM1285×722 152 KB\n", "username": "Alexi_Antonino" }, { "code": "", "text": "Hey Alexi, Thanks for the prompt response. I have also tried wild card (*) and unfortunately, I am getting this error for all collections. Would you please let me know if there is other thing that I need to try?thanks\nshuja", "username": "Shuja" }, { "code": "", "text": "Hey Alexi, We have further investigated the issue that found that the collections, that are using data lakes throwing this error. We are unable to run schema against these collections. Would you please let us know the solution?Thanks\nShuja", "username": "Shuja" }, { "code": "", "text": "Thanks for this information. I will attempt to recreate this on my end and submit a case to the engineers. Please feel free to email me the screen shot error still, just incase I can’t recreate it and it is not a global issue.Best,\nAlexi", "username": "Alexi_Antonino" } ]
An internal error occured on db.runCommand( { sqlGenerateSchema: 1
2023-09-19T17:49:04.881Z
An internal error occured on db.runCommand( { sqlGenerateSchema: 1
333
null
[ "aggregation", "queries", "react-js" ]
[ { "code": "", "text": "I have data in MongoDB with date format 14-May-2023 and May 14, 2023. What I want is to fetch data depending on range provide such as I need data from 01-May-2023 to 31-May-2023 and so on. Can anyone help me on how can I achieve this. I have input type-“date” in frontend side in React.js. Thanks", "username": "Prabjot_Singh" }, { "code": "$gte$ltemoment.jsDate", "text": "Hey @Prabjot_Singh,I have data in MongoDB with date format 14-May-2023 and May 14, 2023. What I want is to fetch data depending on range provide such as I need data from 01-May-2023 to 31-May-2023 and so on.Could you please confirm whether these values are stored as String or Date data types in the MongoDB documents?However, I think that you’ll likely need to convert these date strings into the MongoDB Date format and then use the $gte and $lte operators to filter the documents within the desired date range. Moving forward, I would recommend ensuring consistency in the date format within your MongoDB documents. This will help maintain data integrity and streamline your queries.If you need the date in a different format for the React.js part of your application, consider using a library like moment.js or the built-in JavaScript Date object for easy conversion.I hope it helps!Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Note that you have 2 waysto convert these date strings into the MongoDB Date formatDynamically convert when you queryThis will be slow every time since your query will not be able to use any index.Permanently convert using an aggregation to overwrite the current date fieldsThis will be slow once during the migration to the appropriate format. After the query will be fast as an index could be used.", "username": "steevej" } ]
Fetch data for a period of days
2023-09-19T23:12:48.053Z
Fetch data for a period of days
308
null
[ "jakarta-mug" ]
[ { "code": "", "text": "Apa yang didapatkan dari MongoDB.local?Belajar tentang teknologi, aplikasi, dan best practice yang bisa mempermudah kamu dalam membuat aplikasi yang berbasiskan data di MongoDB.local. Temukan cara membuat aplikasi dengan mengunakan fitur terbaru di MongoDB. Belajar langsung dari expert dan top engineer yang berhasil di industrinya.Detil lebih lanjut : MongodDB Local JakartaEvent Type: In-Person\nLocation: Pullman Hotel Thamrin", "username": "Fajar_Abdul_Karim" }, { "code": "", "text": "Nanya donk untuk tim MongoDB, untuk presentasi dari masing2 speaker, apakah bisa kita dapatkan? Dan untuk dokumentasi seperti foto2 juga, apakah bisa di share?", "username": "Wicaksono_Hari_Prayoga" }, { "code": "", "text": "gw coba tanyain sama tim mongodb ya, kalo. video biasanya akan di upload di youtubenya mongodb https://www.youtube.com/@MongoDB", "username": "Fajar_Abdul_Karim" } ]
Mongodb.local
2023-09-19T14:04:16.057Z
Mongodb.local
550
null
[ "node-js", "mongoose-odm" ]
[ { "code": "let allCount = await Invoice.countDocuments().exec();\nallCount = allCount + 1;\nconst stringID = 'I' + allCount.toString().padStart(6, \"0\");\n\nconst invoice = new Invoice({\n stringID,\n customerName: req.body.customerName,\n});\n\ninvoice\n .save()\n .then(async data => {})\n .catch(err => {})\nstringID", "text": "I use Mongoose ODM. I have a field stringID in Invoice model which is incremental. What I do is, count the documents and add 1 for the next invoice id.But the problem is I get duplicated stringID sometimes. I think concurrent insert operation is the cause but I want to know what is the exact cause and what is the possible solution to this?", "username": "Zahid_Hasan1" }, { "code": "stringIDstringIDstringIDconst invoiceSchema = new mongoose.Schema({\n stringID: {\n type: String,\n unique: true, // This ensures uniqueness\n },\n});\n...\nconst Invoice = mongoose.model('Invoice', invoiceSchema);\nstringIDtry-catchconst stringID = 'I' + allCount.toString().padStart(6, '0');\n\nconst invoice = new Invoice({\n stringID,\n customerName: req.body.customerName,\n});\n\ntry {\n const savedInvoice = await invoice.save();\n // success\n} catch (error) {\n if (error.code === 11000) {\n // Duplicate key error --> handle it appropriately or retry with a new ID\n } else {\n // Handle other errors\n }\n}\nstringIDstringIDstringID", "text": "Hey @Zahid_Hasan1,Welcome to the MongoDB Community forums But the problem is I get duplicated stringID sometimes. I think concurrent insert operation is the cause but I want to know what is the exact cause and what is the possible solution to this?I think this issue is likely due to concurrent insert operations. When multiple requests or processes try to create new Invoice documents simultaneously, they may retrieve the same document count before incrementing it, leading to duplicate stringID values.To overcome this issue, you can define a unique index on the stringID field in your Mongoose schema:Secondly, you can modify your code to handle potential duplicate key errors that may occur when inserting a document with a duplicate stringID. You can use a try-catch block to catch and handle these errors:By defining a unique index on the stringID field and handling duplicate key errors, you can ensure that no two invoices will have the same stringID, even in concurrent insertion scenarios. If a duplicate key error occurs, you can generate a new stringID and attempt to save the document again.However, this is just a general approach and I would recommend you test this behavior with your specific use case, to ensure that the application can handle different scenarios.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you Kushagra for the reply… I will test this approach and let you know if it works perfectly.", "username": "Zahid_Hasan1" } ]
Incremental ID using countDocuments becomes duplicate
2023-09-20T11:10:58.084Z
Incremental ID using countDocuments becomes duplicate
273
null
[ "compass" ]
[ { "code": "", "text": "Mongo DB server is listening at port 27017 inside a VM. Inside this VM, my code connect to this DB using mongodb:localhost:27017/ successfully.Now if I have a Mongo DB GUI tool such as Compass trying to look inside , how do I connect to the Mongo DB running inside the VM?", "username": "Universal_Simplexity" }, { "code": "", "text": "The same way you connected to it without Compass, but inside of compass. Route it through the VM hiarchy to where the server is.Make sure you have network pass through and so on as appropriate to the VM itself, you’ll essentially connect to the VM just like you would any other remote server, and then from the remote server into the MongoDB Database.The hard part in answering this, is this depends all on how you’ve configured your host system, your VM, and MongoDB.", "username": "Brock" }, { "code": "", "text": "Thank you, Brock.\nI am using UTM VM and like you said I need to check what is the configuration of my host, guest, VM and MongoDB.\nI will update here for a resolution.", "username": "Universal_Simplexity" }, { "code": "", "text": "@Universal_Simplexity something you can use to make this all easier is instead of a VM or in addition to your VM (set network as pass through of this is academic, otherwise build a proxy and yada yada) is setup Docker to host MongoDB, and then setup Kubernetes to handle all of the network and routing services to MongoDB.You could also use Mininet as well.", "username": "Brock" }, { "code": "", "text": "Sounds like a good idea because MongoDB failed to start even after thoroughly removed the current 6.0.4 community edition installation and installed the latest 6.0.5 using https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-ubuntu/ instructions.\nhmm…I should perform the installation on my native MacOS?", "username": "Universal_Simplexity" }, { "code": "", "text": "For me I’m a big DevOps guy personally, I use Docker and Kubernetes anytime that I can, because it’s SO easy to setup once you understand it. Can spin up, route and connect anything etc. handle encryptions, firewalls, all of that jazz. I try to avoid VMs unless it’s going to host things like as a server. Then I’ll make a host VM and load it with containers and internetwork the containers.Then route between VM servers using mininet to a similar way you’d network if you had physical racked servers connected to a network switch etc.Even an M1 Mac Mini with 8 GB of RAM can handle a lot of MongoDB, web servers etc. the largest container is my Apollo GraphQL which serves as a replication sync, and data transit node to orchestrate data between an 11 node MongoDB replica set, 5 Redis DB replica sets, and 5 CouchDB replica sets. All in 100% sync with 3GB of RAM allocated to it.", "username": "Brock" }, { "code": "", "text": "I am completely new to containers too and have a deadline to complete an assignment to query MongoDB across collections.\nWhat is my best MVP environment?", "username": "Universal_Simplexity" }, { "code": "", "text": "Honestly just Docker will work, lots of guides to spin it up and run MongoDB as a microservice.I dont know how long you have, but sometime next week I’ll be publishing a Node.JS and Python microservice guide to build a MongoDB cluster/replica set.", "username": "Brock" }, { "code": "", "text": "@Universal_SimplexityYouTube has a bunch of tutorials, but here’s a great one. 
It will have you up and working in 10 mins.", "username": "Brock" }, { "code": "", "text": "By the end of this week.\nNode JS and MongoDB accessed via Mongoose is my minimal requirement to deliver a RESTful API microservice to find favourite movies against the movies dataset from Kaggle, given a userid.", "username": "Universal_Simplexity" }, { "code": "", "text": "", "username": "Brock" }, { "code": "Mar 30 09:59:45 simplexity2204 systemd[1]: Started MongoDB Database Server.\nMar 30 09:59:45 simplexity2204 systemd[12475]: mongod.service: Failed to locate executable /usr/bin/mongod: No such file or directory\nMar 30 09:59:45 simplexity2204 systemd[12475]: mongod.service: Failed at step EXEC spawning /usr/bin/mongod: No such file or directory\n", "text": "Out of curiosity …\nWhy would this happen even after a full installation?", "username": "Universal_Simplexity" }, { "code": "", "text": "That could be several things, from not having collections and a DB built, to a bad installation. Tbh I’ve never seen that error. Typically I just use Linux or Apple. Node and Mongoose connections:", "username": "Brock" }, { "code": "", "text": "\nerror === {message : \"Client must be connected before running operations \"}\nI am facing this type of error many times; I have worked on it but I don't know how to fix this bug.", "username": "Madhesh_Siva" } ]
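The error in the last message ("Client must be connected before running operations") is raised by the Node.js driver when a query runs before the connection has been established, or after the client has been closed, for example when close() is called before pending async work finishes. A minimal sketch of the safe pattern with the native driver; the URI, database and collection names below are placeholders:

```javascript
const { MongoClient } = require('mongodb');

const uri = 'mongodb://127.0.0.1:27017';        // adjust host/port for the VM setup
const client = new MongoClient(uri);

async function main() {
  await client.connect();                       // wait for the connection first
  const movies = client.db('moviesdb').collection('movies');
  console.log(await movies.findOne({}));
}

main()
  .catch(console.error)
  .finally(() => client.close());               // close only after all work is done
```

With Mongoose the equivalent is to await mongoose.connect(...) before any model call and to call mongoose.disconnect() only once all queries have completed.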
GUI access to Mongo DB inside UTM VM
2023-03-28T09:00:59.040Z
GUI access to Mongo DB inside UTM VM
1,635
null
[]
[ { "code": "", "text": "How could we compare 2 documents of the same collection and find which all field changed between the documents.for example if there is a document in a collection like\n{\n“property1” : “A”,\n“property2” : “B”,\n}which gets changed to\n{\n“property1” : “A1”,\n“property2” : “B”,\n}then I wish to produce output like\n{\n“propertyChanged” : “property1”,\n“oldValue”: “A”\n“newValue”: “A1”\n}@MaBeuLux88", "username": "Lakhan_SINGH" }, { "code": "", "text": "Hello, welcome to the MongoDB community.I believe this link can help you understand, let me know your questions.", "username": "Samuel_84194" }, { "code": "", "text": "Hi @Samuel_84194 ,\nThanks for replying , the post has lot of ideation around how to store data so that we get history of changes , but there is no mongo query to produce output like i want .", "username": "Lakhan_SINGH" }, { "code": "", "text": "Hi @Lakhan_SINGH and welcome in the MongoDB Community !I think it’s a use case for Change Streams, no?Cheers,\nMaxime.", "username": "MaBeuLux88" }, { "code": "", "text": "Hi @MaBeuLux88 ,\nthanks for replying.\nWe are limited mongo 4.4 where we only get the updated document and not the changed document.\nIn mongo 6+ we get this out of the box.Is it possible for you to guide me to an aggregate pipeline which could determine the changed properties of 2 documents in a collection ?", "username": "Lakhan_SINGH" }, { "code": "", "text": "Unless you have stored the changes you cant use a pipeline to check a document from yesterday against what it looks like today.\nWe currently store a document history on a document when a data fix is done so we can review any field that changed on any document.\nThe better way now (that we’re moving to) is a change stream to track changes, via an Atlas trigger.If you don’t have anything place currently to track changes, you can’t.We do have scripts in place that can rollup changes for a document, but to re-create a document from history is non-trivial, which is why we were also waiting to upgrade to later versions to get pre-image.I’m not sure how you could do it in an aggregation but it’s not that hard to do in a script.", "username": "John_Sewell" }, { "code": "", "text": "Change Streams where introduced in 3.6 so if you are (still) running in 4.4, you have them available as long as you are running a Replica Set which should be the default for a production environment.As you are reading the docs from the change stream, you can create a side collection and update the documents to keep track of the old value as you update them in the main collection.", "username": "MaBeuLux88" } ]
Compare 2 documents of the same collection in MongoDB
2023-09-10T02:48:26.959Z
Compare 2 documents of the same collection in MongoDB
437
null
[ "node-js", "atlas-cluster" ]
[ { "code": "[nodemon] starting `node app.js`\nserver is listening on port 8000.......\nError: querySrv EREFUSED _mongodb._tcp.cluster1.qm7glq1.mongodb.net\n at QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/callback_resolver:47:19) {\n errno: undefined,\n code: 'EREFUSED',\n syscall: 'querySrv',\n hostname: '_mongodb._tcp.cluster1.qm7glq1.mongodb.net'\n}\n", "text": "", "username": "Anurag_Gupta2" }, { "code": "[nodemon] starting `node app.js`\nserver is listening on port 8000…\nError: querySrv EREFUSED _mongodb._tcp.cluster1.qm7glq1.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/callback_resolver:47:19) {\nerrno: undefined,\ncode: ‘EREFUSED’,\nsyscall: ‘querySrv’,\nhostname: ‘_mongodb._tcp.cluster1.qm7glq1.mongodb.net’\n}\n8.8.8.88.8.4.4", "text": "Hi @Anurag_Gupta2,Welcome to the MongoDB Community It seems like the DNS issue, try using Google’s DNS 8.8.8.8 and 8.8.4.4. Please refer to the Public DNS for more details.Apart from this, please refer to this post and try using the connection string from the connection modal that includes all three hostnames instead of the SRV record.If it returns a different error, please share that error message here.In addition to the above, I would recommend also checking out the Atlas Troubleshoot Connection Issues documentation.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Error - Not able to connect to mongodb server
2023-09-20T10:45:24.416Z
Error - Not able to connect to mongodb server
324
null
[]
[ { "code": "", "text": "My website used a lot of data from Mongodb, sometimes if my database/collection didn’t update for a while, it means that my system is probably broken, therefore I want to get the last update (document) time of a database and collection, how can I get it accurately?", "username": "WONG_TUNG_TUNG" }, { "code": "", "text": "Hey @WONG_TUNG_TUNG,Welcome to the MongoDB Community!if my database/collection didn’t update for a while, it means that my system is probably broken,May I ask what specifically you mean by “system is probably broken”? Could you please clarify this more in order to assist you better?Additionally, also share the MongoDB version you are using and where it is deployed, is it MongoDB Atlas or on-prem?Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "db.currentOp().inprog.forEach(\n function(c) {\n print(\" client: \", c.client);\n print(\" appName: \", c.appName);\n print(\" active: \", c.active);\n print(\" currentOpTime: \", c.currentOpTime);\n print(\" op: \", c.op);\n print(\" ns: \", c.ns);\n print(\" command: \", c.command);\n print(\"= = = = = = = = = = = \\n\");\n }\n)\n", "text": "If you just want to check whether there is any updating operation occasionally, db.currentOp() can help you find the active connections to MongoDB and what they are doing. Since the output of db.currentOp() is long and lengthy, you can define your own function to capture only the content you are interested in:", "username": "Zhen_Qu" } ]
How to find the last update time of a collection and database
2023-09-16T19:55:24.861Z
How to find the last update time of a collection and database
274
null
[ "replication", "mongodb-shell" ]
[ { "code": "{\n \"t\": {\n \"$date\": \"2023-09-18T20:46:10.853+00:00\"\n },\n \"s\": \"F\",\n \"c\": \"-\",\n \"id\": 23095,\n \"ctx\": \"OplogApplier-0\",\n \"msg\": \"Fatal assertion\",\n \"attr\": {\n \"msgid\": 34437,\n \"error\": \"NamespaceNotFound: Failed to apply operation: { op: \\\"i\\\", ns: \\\"admin.system.users\\\", ui: UUID(\\\"d14e0fd3-a568-471f-9e75-4667babdb3ae\\\"), o: { _id: \\\"admin.monitoring\\\", userId: UUID(\\\"6918d85d-0a31-440a-9f18-0c71e79cea8c\\\"), user: \\\"monitoring\\\", db: \\\"admin\\\", credentials: { SCRAM-SHA-1: { iterationCount: 10000, salt: \\\"...==\\\", storedKey: \\\"...=\\\", serverKey: \\\"...=\\\" }, SCRAM-SHA-256: { iterationCount: 15000, salt: \\\"...==\\\", storedKey: \\\"...=\\\", serverKey: \\\"...=\\\" } }, roles: [ { role: \\\"clusterMonitor\\\", db: \\\"admin\\\" } ] }, ts: Timestamp(1695069016, 9), t: 141, v: 2, wall: new Date(1695069016310) } :: caused by :: Unable to resolve d14e0fd3-a568-471f-9e75-4667babdb3ae\",\n \"file\": \"src/mongo/db/repl/oplog_applier_impl.cpp\",\n \"line\": 343\n }\n}\nmongosh -u root -p --host RS_NAME/127.0.0.1 --port 27018 admin \n...\ndb.createUser(\n {\n user: \"monitoring\",\n pwd: 'the-password',\n roles: [ { role: \"clusterMonitor\", db: \"admin\" } ]\n }\n) # create is ok\ndb.auth('monitoring','the-password') # check is ok\n", "text": "I have a 3 node replica set.\nTwice today I’ve had a node fall out of the set after adding a user to the admin DB.The node fails with this error log:The commands I used wasAfter adding the user it syncs from the primary to one of the secondaries (I canuse it locally on the node).\nThe other secondary fails and stops. After restarting it, it simply dies again.What am I doing wrong here?", "username": "Johan_Forssell" }, { "code": "", "text": "If the normal CRUD operations can be applied and synchronized to your secondary nodes except for the “createUser” command, you may need to check the security configuration in the mongod.conf of the failing node, is authentication mode identical across all the nodes ?", "username": "Zhen_Qu" } ]
Adding a new user to admin db in replica set forces one secondary offline
2023-09-18T21:25:52.305Z
Adding a new user to admin db in replica set forces one secondary offline
342
null
[ "queries", "python", "crud" ]
[ { "code": "from datetime import datetime\nimport logging\n\nimport logging.handlers\nfrom pymongo import MongoClient, InsertOne, ReplaceOne, UpdateOne, DeleteOne, ASCENDING, DESCENDING\nfrom pymongo.errors import BulkWriteError\nimport certifi\n\n\n\ndef bulk_write_operations(collection, operations, force):\n if operations == []:\n return True\n\n if not force and (len(operations) < batch_size):\n return True\n\n try:\n # Bulk write the operations\n result = collection.bulk_write(operations, ordered=True)\n print(f'{len(operations)} total records processed. {result.bulk_api_result}')\n\n except BulkWriteError as e:\n print(e.details[\"errmsg\"])\n return False\n\n del operations[:]\n return True\n\n\nbatch_size = 49\n\nuri=\"XXXX?ssl=true&authSource=admin\"\nclient = MongoClient(uri, tlsCAFile=certifi.where())\nmydb = client[\"test\"]\ncoll_main = mydb[\"testReprocess\"]\n\n# create 10 docs with timestamps\nnumRec = 10\noperation_ins = []\nfor i in range(numRec):\n res = {'fileName':\"testReprocess\", \"i\": i, \"log_ts_bv_processing\": datetime.utcnow()}\n operation_ins.append(InsertOne(res))\n bulk_write_operations(coll_main, operation_ins, False)\n\n# create 200 docs without timestamps\nnumRec = 200\nfor i in range(numRec):\n res = {'fileName': \"testReprocess\", \"i\": i}\n operation_ins.append(InsertOne(res))\n bulk_write_operations(coll_main, operation_ins, False)\n\nbulk_write_operations(coll_main, operation_ins, True)\n\n\nfilter = {'fileName':\"testReprocess\", 'log_ts_bv_processing': {'$exists': False}}\n\nnum1 = coll_main.count_documents(filter)\nprint (f\"Number of docs to reprocess: {num1}\")\n\n# workaround: adding sort to find\n#result = coll_main.find(filter).sort('_id', ASCENDING)\n\n# try to delete and insert documents in batches\nresult = coll_main.find(filter)\ncnt = 0\noperation_del = []\noperation_ins = []\nfor res in result:\n delCondition = {\"_id\": res[\"_id\"]}\n # print(f\"_id: {res['_id']}\")\n operation_del.append(DeleteOne(delCondition))\n operation_ins.append(InsertOne(res))\n cnt += 1\n bulk_write_operations(coll_main, operation_del, False)\n bulk_write_operations(coll_main, operation_ins, False)\n if (cnt % 10) == 0:\n print(f\"Reprocessing {cnt} stalled loaded documents...\")\n\nprint(f\"len of operation_del: {len(operation_del)} \")\n\nbulk_write_operations(coll_main, operation_del, True)\nbulk_write_operations(coll_main, operation_ins, True)\nprint(f\"Total: Reprocessed {cnt} stalled loaded documents\")\n==================\nREsults:\nNumber of docs to reprocess: 200\nReprocessing 10 stalled loaded documents...\nReprocessing 20 stalled loaded documents...\nReprocessing 30 stalled loaded documents...\nReprocessing 40 stalled loaded documents...\nReprocessing 50 stalled loaded documents...\nReprocessing 60 stalled loaded documents...\nReprocessing 70 stalled loaded documents...\nReprocessing 80 stalled loaded documents...\nReprocessing 90 stalled loaded documents...\nReprocessing 100 stalled loaded documents...\nReprocessing 110 stalled loaded documents...\nReprocessing 120 stalled loaded documents...\nReprocessing 130 stalled loaded documents...\nReprocessing 140 stalled loaded documents...\nReprocessing 150 stalled loaded documents...\nReprocessing 160 stalled loaded documents...\nReprocessing 170 stalled loaded documents...\nReprocessing 180 stalled loaded documents...\nReprocessing 190 stalled loaded documents...\nReprocessing 200 stalled loaded documents...\nReprocessing 210 stalled loaded documents...\nReprocessing 220 stalled loaded 
documents...\nReprocessing 230 stalled loaded documents...\nReprocessing 240 stalled loaded documents...\nReprocessing 250 stalled loaded documents...\nReprocessing 260 stalled loaded documents...\nReprocessing 270 stalled loaded documents...\nReprocessing 280 stalled loaded documents...\nReprocessing 290 stalled loaded documents...\nlen of operation_del: 4 \nTotal: Reprocessed 298 stalled loaded documents\n", "text": "My code needs to loop through documents without timestamps, read docs, delete them, and reinsert them ( needed for trigger).\nI found that the number of deletes/inserts is higher than the number of docs without timestamps. Some of those docs are processed twice.\nI also found that adding “sort” to “find” is resolving the issue. Also, issues appear with bulk operations size 100 and fewer operations in batch, and total size of data must be bigger than the batch size.Could somebody explain what is going on under the hood?Code to repro the issue:So, it was supposed to reinsert 200 docs but ended with reinserting 298. When a number of docs is in millions, the difference is significant.", "username": "SERGIY_MITIN" }, { "code": "find()bulk_write()sort()", "text": "Hi @SERGIY_MITIN and welcome to MongoDB community forums!!I appreciate your detailed information sharing.As stated in the MongoDB documentation regarding CRUD concepts, it’s important to note that the cursor may, under certain circumstances, return the same document multiple times. Consequently, this behavior can lead to a higher count of returned documents than the initially calculated number. This occurrence could potentially be a contributing factor to the error you are currently encountering.In MongoDB, document processing occurs in the order of their retrieval by the find() method, primarily because this method does not guarantee any specific order for document retrieval.When you employ the bulk_write() method with a list of operations, MongoDB executes these operations sequentially in the order they are listed. However, if you are processing documents without any particular order, there is a possibility that some documents may be processed more than once.I also found that adding “sort” to “find” is resolving the issue.Using sort() , the sequence is getting defined and it does not process the document twice and ensures that documents are processed in a specific order.Please don’t hesitate to reach out if you have any further questions or need additional clarification.Regards\nAasawari", "username": "Aasawari" } ]
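The double processing is most likely the cursor behaviour described in the reply: the re-inserted documents still have no log_ts_bv_processing field, so they still match the filter, and an unsorted collection scan can hand them back a second time, which is also why adding sort('_id') hides the problem. A pattern that avoids relying on cursor ordering is to snapshot the matching _ids first and then work strictly off that list; sketched here in mongosh for brevity, but the same idea translates one-to-one to PyMongo:

```javascript
const filter = { fileName: 'testReprocess', log_ts_bv_processing: { $exists: false } };

// 1. Take a stable snapshot of the _ids that match right now (ObjectIds are small,
//    so a few million of them fit comfortably in memory; page this step if not).
const ids = db.testReprocess
  .find(filter, { _id: 1 })
  .sort({ _id: 1 })
  .toArray()
  .map(d => d._id);

// 2. Delete and re-insert strictly by _id; documents re-inserted along the way
//    can never be picked up again because the id list is fixed.
const BATCH = 100;
for (let i = 0; i < ids.length; i += BATCH) {
  const slice = ids.slice(i, i + BATCH);
  const docs = db.testReprocess.find({ _id: { $in: slice } }).toArray();
  db.testReprocess.deleteMany({ _id: { $in: slice } });
  db.testReprocess.insertMany(docs, { ordered: false });
}
```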
How find works if cursor data is modified?
2023-09-06T21:20:11.092Z
How find works if cursor data is modified?
405
null
[ "aggregation", "node-js", "atlas-search" ]
[ { "code": "const GarmentSchema = new Schema({\n name: {\n type: String,\n required: true,\n },\n brand: {\n type: String,\n required: true,\n },\n colors: {\n type: [String],\n required: true,\n },\n category: {\n type: String,\n required: true,\n },\n image_url: {\n type: String,\n },\n user: {\n type: Schema.Types.ObjectId,\n ref: \"User\",\n },\n embedding: {\n type: [Number],\n required: true,\n },\n createdAt: {\n type: String,\n required: true,\n index: true,\n },\n});\nconst matchingGarments = await collection.aggregate([\n {\n $search: {\n index: \"default\",\n knnBeta: {\n vector: embedding,\n path: \"embedding\",\n k: 5,\n },\n },\n },\n ])\n .toArray();\nconst matchingGarments = await collection\n .aggregate([\n {\n $search: {\n index: \"default\",\n knnBeta: {\n vector: embedding,\n path: \"embedding\",\n k: 5,\n },\n },\n },\n {\n $match: {\n category,\n },\n },\n ])\n .toArray();\n", "text": "I’m trying to perform a vector embedding search with MongoDB Atlas Search.I’m searching for clothes which I store in the Garment collection of my database. This is my Garment Schema:This is based on the example MongoDB gives us for vector search, as it applies to my schema:However, I would like to query by “top” or “bottom” so I added the match field, and Atlas search requires that if using knnBeta the $search query be first (or it returns an error) so I wrote it like this:However, I tested it to realize that it doesn’t actually make a difference because it’s still querying the top 5 results and then splitting them up based on the category so I could end up with 4 tops and 1 bottom regardless of the category I specify. I would like to get the top 5 bottoms that match when I input the category as “bottom” and the top 5 tops when I input “top”.Is there any way to do this? Any input is really appreciated.", "username": "Amal_Sony" }, { "code": "const matchingGarments = await collection\n .aggregate([\n {\n $search: {\n index: \"default\",\n knnBeta: {\n vector: embedding,\n path: \"embedding\",\n k: 5,\n filter: {\n text: {\n query: category,\n path: \"category\",\n },\n },\n },\n },\n },\n ])\n .toArray();\n", "text": "Update: This worked for me", "username": "Amal_Sony" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filtering in vector embedding search with Atlas
2023-09-20T04:46:15.052Z
Filtering in vector embedding search with Atlas
361
null
[ "aggregation", "replication", "sharding", "transactions", "configuration" ]
[ { "code": "mongod --configsvr --replSet rs1 --dbpath /s/node-61/c/nobackup/db --bind_ip node-61{\"t\":{\"$date\":\"2023-09-13T01:17:37.285-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.288-06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":21},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.289-06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.291-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ReshardingCoordinatorService\",\"namespace\":\"config.reshardingOperations\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.291-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ConfigsvrCoordinatorService\",\"namespace\":\"config.sharding_configsvr_coordinators\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.291-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"RenameCollectionParticipantService\",\"namespace\":\"config.localRenameParticipants\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.291-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ShardingDDLCoordinator\",\"namespace\":\"config.system.sharding_ddl_coordinators\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.291-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ReshardingDonorService\",\"namespace\":\"config.localReshardingOperations.donor\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.291-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"ReshardingRecipientService\",\"namespace\":\"config.localReshardingOperations.recipient\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.291-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"namespace\":\"config.tenantMigrationDonors\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.291-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationRecipientService\",\"namespace\":\"config.tenantMigrationRecipients\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.291-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":5945603, \"ctx\":\"main\",\"msg\":\"Multi threading initialized\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.292-06:00\"},\"s\":\"I\", \"c\":\"TENANT_M\", \"id\":7091600, \"ctx\":\"main\",\"msg\":\"Starting 
TenantMigrationAccessBlockerRegistry\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.292-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4615611, \"ctx\":\"initandlisten\",\"msg\":\"MongoDB starting\",\"attr\":{\"pid\":1722894,\"port\":27019,\"dbPath\":\"/s/node-61/c/nobackup/db\",\"architecture\":\"64-bit\",\"host\":\"node-61\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.292-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"initandlisten\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"7.0.1\",\"gitVersion\":\"425a0454d12f2664f9e31002bbe4a386a25345b5\",\"openSSLVersion\":\"OpenSSL 1.1.1k FIPS 25 Mar 2021\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"rhel80\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.292-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"initandlisten\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"AlmaLinux release 8.8 (Sapphire Caracal)\",\"version\":\"Kernel 4.18.0-477.15.1.el8_8.x86_64\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.292-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"net\":{\"bindIp\":\"localhost,node-61\"},\"replication\":{\"replSet\":\"rs1\"},\"sharding\":{\"clusterRole\":\"configsvr\"},\"storage\":{\"dbPath\":\"/s/node-61/c/nobackup/db\"}}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.295-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22297, \"ctx\":\"initandlisten\",\"msg\":\"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-09-13T01:17:37.295-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22315, \"ctx\":\"initandlisten\",\"msg\":\"Opening WiredTiger\",\"attr\":{\"config\":\"create,cache_size=3311M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.563-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":4795906, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger opened\",\"attr\":{\"durationMillis\":1268}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.563-06:00\"},\"s\":\"I\", \"c\":\"RECOVERY\", \"id\":23987, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger recoveryTimestamp\",\"attr\":{\"recoveryTimestamp\":{\"$timestamp\":{\"t\":0,\"i\":0}}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.838-06:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"initandlisten\",\"msg\":\"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.838-06:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22178, \"ctx\":\"initandlisten\",\"msg\":\"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. 
We suggest setting it to 'never'\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.838-06:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22184, \"ctx\":\"initandlisten\",\"msg\":\"Soft rlimits for open file descriptors too low\",\"attr\":{\"currentValue\":1024,\"recommendedMinimum\":64000},\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.838-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5853300, \"ctx\":\"initandlisten\",\"msg\":\"current featureCompatibilityVersion value\",\"attr\":{\"featureCompatibilityVersion\":\"unset\",\"context\":\"startup\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.838-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":5071100, \"ctx\":\"initandlisten\",\"msg\":\"Clearing temp directory\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.839-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20536, \"ctx\":\"initandlisten\",\"msg\":\"Flow Control is enabled on this deployment\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.839-06:00\"},\"s\":\"I\", \"c\":\"FTDC\", \"id\":20625, \"ctx\":\"initandlisten\",\"msg\":\"Initializing full-time diagnostic data capture\",\"attr\":{\"dataDirectory\":\"/s/node-61/c/nobackup/db/diagnostic.data\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.842-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40440, \"ctx\":\"initandlisten\",\"msg\":\"Starting the TopologyVersionObserver\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.842-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":40445, \"ctx\":\"TopologyVersionObserver\",\"msg\":\"Started TopologyVersionObserver\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:38.842-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.startup_log\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"ee40806b-4715-4a4c-adae-41331c3e8116\"}},\"options\":{\"capped\":true,\"size\":10485760}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.000-06:00\"},\"s\":\"W\", \"c\":\"REPL\", \"id\":21533, \"ctx\":\"ftdc\",\"msg\":\"Rollback ID is not initialized yet\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.006-06:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"ee40806b-4715-4a4c-adae-41331c3e8116\"}},\"namespace\":\"local.startup_log\",\"index\":\"_id_\",\"ident\":\"index-1-7962442540314929980\",\"collectionIdent\":\"collection-0-7962442540314929980\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.007-06:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22727, \"ctx\":\"ShardRegistryUpdater\",\"msg\":\"Error running periodic reload of shard registry\",\"attr\":{\"error\":\"NotYetInitialized: Config shard has not been set up yet\",\"shardRegistryReloadIntervalSeconds\":30}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.008-06:00\"},\"s\":\"W\", \"c\":\"SHARDING\", \"id\":7445900, \"ctx\":\"initandlisten\",\"msg\":\"Started with ShardServer role, but no shardIdentity document was found on disk.\",\"attr\":{\"namespace\":\"admin.system.version\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.008-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigStartingUp\",\"oldState\":\"ConfigPreStart\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.008-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key 
cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":200}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.008-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.008-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6005300, \"ctx\":\"initandlisten\",\"msg\":\"Starting up replica set aware services\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.011-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280500, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to create internal replication collections\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.011-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.replset.oplogTruncateAfterPoint\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"e091b3b0-8e85-423c-8aa5-9fcde0c873b5\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.161-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":7360102, \"ctx\":\"initandlisten\",\"msg\":\"Added oplog entry for create to transaction\",\"attr\":{\"namespace\":\"local.$cmd\",\"uuid\":{\"uuid\":{\"$uuid\":\"e091b3b0-8e85-423c-8aa5-9fcde0c873b5\"}},\"object\":{\"create\":\"replset.oplogTruncateAfterPoint\",\"idIndex\":{\"v\":2,\"key\":{\"_id\":1},\"name\":\"_id_\"}}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.161-06:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"e091b3b0-8e85-423c-8aa5-9fcde0c873b5\"}},\"namespace\":\"local.replset.oplogTruncateAfterPoint\",\"index\":\"_id_\",\"ident\":\"index-3-7962442540314929980\",\"collectionIdent\":\"collection-2-7962442540314929980\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.161-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.replset.minvalid\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"a7303326-00e2-4ca6-b007-6b0178b27af3\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.209-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":400}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.345-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":7360102, \"ctx\":\"initandlisten\",\"msg\":\"Added oplog entry for create to transaction\",\"attr\":{\"namespace\":\"local.$cmd\",\"uuid\":{\"uuid\":{\"$uuid\":\"a7303326-00e2-4ca6-b007-6b0178b27af3\"}},\"object\":{\"create\":\"replset.minvalid\",\"idIndex\":{\"v\":2,\"key\":{\"_id\":1},\"name\":\"_id_\"}}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.345-06:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done 
building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"a7303326-00e2-4ca6-b007-6b0178b27af3\"}},\"namespace\":\"local.replset.minvalid\",\"index\":\"_id_\",\"ident\":\"index-5-7962442540314929980\",\"collectionIdent\":\"collection-4-7962442540314929980\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.345-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.replset.election\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"ac41a2e0-7fd2-469a-92f7-96722fa5cf3a\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.511-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":7360102, \"ctx\":\"initandlisten\",\"msg\":\"Added oplog entry for create to transaction\",\"attr\":{\"namespace\":\"local.$cmd\",\"uuid\":{\"uuid\":{\"$uuid\":\"ac41a2e0-7fd2-469a-92f7-96722fa5cf3a\"}},\"object\":{\"create\":\"replset.election\",\"idIndex\":{\"v\":2,\"key\":{\"_id\":1},\"name\":\"_id_\"}}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.511-06:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"ac41a2e0-7fd2-469a-92f7-96722fa5cf3a\"}},\"namespace\":\"local.replset.election\",\"index\":\"_id_\",\"ident\":\"index-7-7962442540314929980\",\"collectionIdent\":\"collection-6-7962442540314929980\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.512-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280501, \"ctx\":\"initandlisten\",\"msg\":\"Attempting to load local voted for document\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.512-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21311, \"ctx\":\"initandlisten\",\"msg\":\"Did not find local initialized voted for document at startup\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.512-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":4280502, \"ctx\":\"initandlisten\",\"msg\":\"Searching for local Rollback ID document\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.512-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21312, \"ctx\":\"initandlisten\",\"msg\":\"Did not find local Rollback ID document at startup. 
Creating one\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.512-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.system.rollback.id\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"8cb38746-f9dc-49ef-93eb-03fa71f7816e\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.610-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":600}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.678-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":7360102, \"ctx\":\"initandlisten\",\"msg\":\"Added oplog entry for create to transaction\",\"attr\":{\"namespace\":\"local.$cmd\",\"uuid\":{\"uuid\":{\"$uuid\":\"8cb38746-f9dc-49ef-93eb-03fa71f7816e\"}},\"object\":{\"create\":\"system.rollback.id\",\"idIndex\":{\"v\":2,\"key\":{\"_id\":1},\"name\":\"_id_\"}}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.678-06:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"8cb38746-f9dc-49ef-93eb-03fa71f7816e\"}},\"namespace\":\"local.system.rollback.id\",\"index\":\"_id_\",\"ident\":\"index-9-7962442540314929980\",\"collectionIdent\":\"collection-8-7962442540314929980\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.678-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21531, \"ctx\":\"initandlisten\",\"msg\":\"Initialized the rollback ID\",\"attr\":{\"rbid\":1}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.678-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":21313, \"ctx\":\"initandlisten\",\"msg\":\"Did not find local replica set configuration document at startup\",\"attr\":{\"error\":{\"code\":47,\"codeName\":\"NoMatchingDocument\",\"errmsg\":\"Did not find replica set configuration document in local.system.replset\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.679-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":6015317, \"ctx\":\"initandlisten\",\"msg\":\"Setting new configuration state\",\"attr\":{\"newState\":\"ConfigUninitialized\",\"oldState\":\"ConfigStartingUp\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.679-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":20320, \"ctx\":\"initandlisten\",\"msg\":\"createCollection\",\"attr\":{\"namespace\":\"local.system.views\",\"uuidDisposition\":\"generated\",\"uuid\":{\"uuid\":{\"$uuid\":\"7003c68c-32fd-44f1-bc3e-bc16dac27ae6\"}},\"options\":{}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.853-06:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":7360102, \"ctx\":\"initandlisten\",\"msg\":\"Added oplog entry for create to transaction\",\"attr\":{\"namespace\":\"local.$cmd\",\"uuid\":{\"uuid\":{\"$uuid\":\"7003c68c-32fd-44f1-bc3e-bc16dac27ae6\"}},\"object\":{\"create\":\"system.views\",\"idIndex\":{\"v\":2,\"key\":{\"_id\":1},\"name\":\"_id_\"}}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.853-06:00\"},\"s\":\"I\", \"c\":\"INDEX\", \"id\":20345, \"ctx\":\"initandlisten\",\"msg\":\"Index build: done 
building\",\"attr\":{\"buildUUID\":null,\"collectionUUID\":{\"uuid\":{\"$uuid\":\"7003c68c-32fd-44f1-bc3e-bc16dac27ae6\"}},\"namespace\":\"local.system.views\",\"index\":\"_id_\",\"ident\":\"index-11-7962442540314929980\",\"collectionIdent\":\"collection-10-7962442540314929980\",\"commitTimestamp\":null}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.854-06:00\"},\"s\":\"I\", \"c\":\"STORAGE\", \"id\":22262, \"ctx\":\"initandlisten\",\"msg\":\"Timestamp monitor starting\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.855-06:00\"},\"s\":\"I\", \"c\":\"QUERY\", \"id\":7080100, \"ctx\":\"ChangeStreamExpiredPreImagesRemover\",\"msg\":\"Starting Change Stream Expired Pre-images Remover thread\"}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.856-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20712, \"ctx\":\"LogicalSessionCacheReap\",\"msg\":\"Sessions collection is not set up; waiting until next sessions reap interval\",\"attr\":{\"error\":\"NotYetInitialized: Config shard has not been set up yet\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.856-06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"/tmp/mongodb-27019.sock\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.856-06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"127.0.0.1\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.856-06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23015, \"ctx\":\"listener\",\"msg\":\"Listening on\",\"attr\":{\"address\":\"129.82.208.71\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.856-06:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23016, \"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27019,\"ssl\":\"off\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:39.857-06:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20710, \"ctx\":\"LogicalSessionCacheRefresh\",\"msg\":\"Failed to refresh session cache, will try again at the next refresh interval\",\"attr\":{\"error\":\"NotYetInitialized: Config shard has not been set up yet\"}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:40.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:40.211-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":800}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:41.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:41.012-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, 
\"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":1000}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:42.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:42.013-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":1200}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:43.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:43.215-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":1400}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:44.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:44.616-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":1600}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:45.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:46.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to 
retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:46.217-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":1800}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:47.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:48.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:48.019-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":2000}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:49.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:50.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:50.020-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":2200}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:51.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor 
error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:52.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:52.223-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":2400}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:53.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:54.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:54.624-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":2600}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:55.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:56.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not 
found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:57.001-06:00\"},\"s\":\"W\", \"c\":\"QUERY\", \"id\":23799, \"ctx\":\"ftdc\",\"msg\":\"Aggregate command executor error\",\"attr\":{\"error\":{\"code\":26,\"codeName\":\"NamespaceNotFound\",\"errmsg\":\"Unable to retrieve storageStats in $collStats stage :: caused by :: Collection [local.oplog.rs] not found.\"},\"stats\":{},\"cmd\":{\"aggregate\":\"oplog.rs\",\"cursor\":{},\"pipeline\":[{\"$collStats\":{\"storageStats\":{\"waitForLock\":false,\"numericOnly\":true}}}],\"$db\":\"local\"}}}\n{\"t\":{\"$date\":\"2023-09-13T01:17:57.227-06:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"ReadConcernMajorityNotAvailableYet: Read concern majority reads are currently not possible.\",\"nextWakeupMillis\":2800}}\n", "text": "RE: MongoDB Community Server version 7.0.1 (Platform: RedHat / CentOS 8.0 x64)I’m attempting to deploy a shared cluster using the documentation provided here.The command mongod --configsvr --replSet rs1 --dbpath /s/node-61/c/nobackup/db --bind_ip node-61 results in the following output.Any advice on how to resolve this is appreciated!", "username": "menuka" }, { "code": "", "text": "Can’t find out any root cause from the above output. It could be more helpful to troubleshoot if you paste the configuration files (config replica set, shards) here.", "username": "Zhen_Qu" } ]
Cannot start a config server -- NamespaceNotFound, Code 26
2023-09-13T07:38:19.653Z
Cannot start a config server &ndash; NamespaceNotFound, Code 26
1,423
null
[]
[ { "code": "mongosh", "text": "Amy,\nI read the recent update claiming that createSearchIndex has been implemented in both Node.js and mongosh, namely (from the changelog):\n\" For Atlas clusters running MongoDB 6.0.8 or later, introduces ability to create and manage Atlas Search indexes from mongosh and NodeJS driver\"However, as reported on the Be able to create Search indexes from Mongo shell – MongoDB Feedback Engine page, I still see “createSearchIndex is not a function” (in mongosh) and “command not found!” (in Node.js).\nI am running respectively:\nmongosh v 1.10.1 (and have readWrite on the target db)\nNodeJS v. 18.17.1 with mongodb driver as per npm installation (it turns out to be 6.0.0 in the package.json that results).Can you please shed some light?\nThanks\n-s", "username": "Stefano_Odoardi" }, { "code": "", "text": "Hey @Stefano_Odoardi,However, as reported on the Be able to create Search indexes from Mongo shell – MongoDB Feedback Engine page, I still see “createSearchIndex is not a function” (in mongosh) and “command not found!” (in Node.js).To further assist you, could you please share the full command you are trying to run and confirm that you are using the MongoDB Atlas M10+ tier?Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "OK I see now, it just escaped me that one needs to have M10 +, also because the notification I received didn’t mention that explicitlyTHanks", "username": "Stefano_Odoardi" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
createSearchIndexes not working
2023-09-05T09:22:54.869Z
createSearchIndexes not working
309
null
[ "python" ]
[ { "code": "", "text": "Python 3.10.2\nDjango==4.1.1\nPyMongo==4.2.0\ndjongo==1.3.6\nwhile running migrate\nNotImplementedError: Database objects do not implement truth value testing or bool(). Please compare with None instead: database is not None\nReinstall pymongo pymongo==3.12.3 resolved this problem.", "username": "John_Wang" }, { "code": "", "text": "I’m using mongodb 6.0.1 Community,maybe should be wating for next version of djongo", "username": "John_Wang" }, { "code": "", "text": "Welcome to the MongoDB Community @John_Wang !The Djongo open source project currently isn’t very actively maintained, so I suspect you’ll be waiting a long while for bug fixes. You might be able to find more advice via the community resources in the Djongo Readme: Djongo Questions and Discussions.Reinstall pymongo pymongo==3.12.3 resolved this problem.Djongo 1.3.6 was released in June 2021 almost six months before the PyMongo 4.0 driver, so presumably needs some updates for full compatibility. Thanks for sharing your workaround.Can you share more details on how you are trying to use MongoDB with Django? For example: are you trying to run everything (including Django admin UI) with MongoDB as a backend database?I would personally be inclined to use an official driver like PyMongo or Motor with one of the Python microframeworks. The Django core is designed around SQL and isn’t a great fit for MongoDB at the moment. Alternatively, you could use a driver like PyMongo in your Django views but would still need a supported relational database for core Django admin features.Regards,\nStennie", "username": "Stennie_X" }, { "code": "Database.__bool__if database:if database is not None:", "text": "To address the original error:NotImplementedError: Database objects do not implement truth value testing or bool(). Please compare with None instead: database is not NoneThis was an intentional minor breaking change we made in PyMongo 4.0 described here:Database.__bool__ raises NotImplementedError¶\nDatabase now raises an error upon evaluating as a Boolean. Code like this:if database:\nCan be changed to this:if database is not None:\nYou must now explicitly compare with None.https://pymongo.readthedocs.io/en/stable/migrate-to-pymongo4.html#database-bool-raises-notimplementederrorThe full trackback of the exception will show which line needs to be changed.", "username": "Shane" }, { "code": "", "text": "it’s resolved\nafter\npip install pymongo==3.12.3", "username": "Chandrashekhar_Pattar" }, { "code": "", "text": "I could fix the line of error in djongo with like “if self.connection ⇒ self.connection is not None”", "username": "jong-hwan_Kim" } ]
Djongo NotImplementedError: Database objects do not implement truth value testing or bool()
2022-09-22T05:42:52.375Z
Djongo NotImplementedError: Database objects do not implement truth value testing or bool()
13,223
null
[]
[ { "code": "", "text": "We are recently considering migrating our self-built mongodb to mongodb atlas for aws. Before migrating, I will learn about the functions of mongodb atlas auto scaling, such as how long it takes to scale up and down each time. Will normal use be affected during this period?", "username": "wu_xu" }, { "code": "", "text": "Hey @wu_xu,Welcome to the MongoDB Community forums! how long it takes to scale up and down each time.MongoDB Atlas effectively manages all the backend resources and scales up as quickly as possible. Our system monitors CPU and RAM utilization, triggering scaling when it surpasses 75% within an hour, and scaling down when it drops below 50% within 24 hours.Although the option to configure custom auto-scaling policies is not available, there is a feedback post related to customizing the duration for auto-scaling evaluation and monitoring, which you can vote for.Will normal use be affected during this period?As per documentation, Auto-scaling works on a rolling basis, meaning the process doesn’t incur any downtime.Please feel free to reach out if you have any further questions.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "\"Scaling up to a greater cluster tier requires enough time to prepare backing resources. \" How long will this take?", "username": "wu_xu" }, { "code": "", "text": "Hey @wu_xu,\"Scaling up to a greater cluster tier requires enough time to prepare backing resources. \" How long will this take?As previously mentioned, the precise timing of this process cannot be predicted with certainty due to its dependence on several factors. However, it occurs in a rolling fashion, meaning the process doesn’t incur any downtime.To illustrate, let’s consider a scenario where you have a PSS node. Initially, it will start the autoscaling on one secondary node, while the Primary, Secondary (PS) continues to operate as usual. Once the first secondary node completes its upgrade, the PS will still be available to handle incoming requests.Subsequently, the system will select another secondary node for upgrading, and during this phase, the Primary and the upgraded Secondary (PS) will remain operational to handle ongoing requests. Following this, the Primary node will undergo the auto-scaling. Meanwhile, one of the secondary nodes will be elected as the new primary node to ensure the continuity of the current system. This transition typically takes only a few seconds for the re-election to occur.I hope this clarifies your doubts.Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you for your reply. Your reply is very useful to me. In addition, I want to know how mongodb atlas backup works and how it can restore data from one minute ago.", "username": "wu_xu" } ]
How long does it take for MongoDB Atlas to scale up each time, for example from M50 to M60 when storage is 1 TB? And how long will it take to scale down?
2023-09-13T09:33:22.538Z
How long does it take for MongoDB Atlas to scale up each time, for example from M50 to M60 when storage is 1 TB? And how long will it take to scale down?
370
null
[ "aggregation", "indexes" ]
[ { "code": " { _id: 2988725, partNumber: '2WFG4' },\n { _id: 3177996, partNumber: '2WFP4' }\n {\n v: 2,\n key: { partNumber: -1, _id: 1 },\n name: 'partNumber_-1__id_1'\n }\npartNumberdb.product_part_number.aggregate( [\n {\n $group: {\n _id: \"$partNumber\",\n count: { $sum: 1 }\n }\n },\n {\n $sort: { count: -1 }\n },\n {\n $limit: 1\n }\n])\nMongoServerError: Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in.\n{ allowDiskUse: true } executionStats: {\n executionSuccess: true,\n nReturned: 2388237,\n executionTimeMillis: 15192,\n totalKeysExamined: 0,\n totalDocsExamined: 2388237,\n .\n .\n .\n", "text": "Good morning! I have a collection, product_part_number, and it contains a little more than 2.3 million documents like the following:(Yes, it only contains _id and partNumber). I also have the following compound index:Now I need to see which partNumber corresponds to the most products using the following aggregation:However, running explain(“executionStats”) results in an error:If I add { allowDiskUse: true } to the expalin() call I can see no any index is used:In my situation, how can I use index? Thanks a lot, in advance!", "username": "Daniel_Li1" }, { "code": "", "text": "You can use hint parameters to force query to use indexcursor.hint() - MongoDB shell method - w3resource.", "username": "Bhavya_Bhatt" }, { "code": "", "text": "@Daniel_Li1 How many distinct ‘partNumber’ values are there in the collection?", "username": "Jack_Yang1" }, { "code": "db.Test.explain().aggregate( [\n{\n $sort:{\n 'partNumber':-1\n }\n},\n {\n $group: {\n _id: \"$partNumber\",\n count: { $sum: 1 }\n }\n },\n {\n $sort: { count: -1 }\n },\n {\n $limit: 1\n }\n])\n winningPlan: {\n queryPlan: {\n stage: 'GROUP',\n planNodeId: 2,\n inputStage: {\n stage: 'COLLSCAN',\n planNodeId: 1,\n filter: {},\n direction: 'forward'\n }\n },\n winningPlan: {\n queryPlan: {\n stage: 'GROUP',\n planNodeId: 3,\n inputStage: {\n stage: 'PROJECTION_COVERED',\n planNodeId: 2,\n transformBy: { partNumber: true, _id: false },\n inputStage: {\n stage: 'IXSCAN',\n planNodeId: 1,\n keyPattern: { partNumber: -1, _id: 1 },\n indexName: 'partNumber_-1__id_1',\n isMultiKey: false,\n multiKeyPaths: { partNumber: [], _id: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: {\n partNumber: [ '[MaxKey, MinKey]' ],\n _id: [ '[MinKey, MaxKey]' ]\n }\n }\n }\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 100,\n executionTimeMillis: 6,\n totalKeysExamined: 0,\n totalDocsExamined: 10000,\n executionStats: {\n executionSuccess: true,\n nReturned: 100,\n executionTimeMillis: 16,\n totalKeysExamined: 10000,\n totalDocsExamined: 0,\n", "text": "Have you tried adding a sort to the query before the group? I tested this locally (v6 Server) and without the sort it’s using a COLSCAN and after it’s using an index scan:Old:New:Old Stats:New Stats:Interestingly with a larger data set the execution time is still faster without the sort and doing a colscan, even though it’s a covered index.", "username": "John_Sewell" }, { "code": "{\n $sort:{\n 'partNumber':-1\n }\n},\n executionStats: {\n executionSuccess: true,\n nReturned: 2388237,\n executionTimeMillis: 15001,\n totalKeysExamined: 2388237,\n totalDocsExamined: 0,\nMongoServerError: Exceeded memory limit for $group, but didn't allow external sort. 
Pass allowDiskUse:true to opt in.{ allowDiskUse: true}", "text": "Addingat the beginning of the aggregation improved a lot and now explain(“executionStats”) returnsNow the only thing I don’t understand is that it still complainsMongoServerError: Exceeded memory limit for $group, but didn't allow external sort. Pass allowDiskUse:true to opt in.if I remove { allowDiskUse: true}. The index is only about 37M. How can sorting need more than 100M?", "username": "Daniel_Li1" }, { "code": "partNumber", "text": "@Jack_Yang1 There are totally 2,287,646 unique partNumber values.", "username": "Daniel_Li1" }, { "code": "", "text": "@Bhavya_Bhatt Without changing the aggregation, simply adding hint(‘partNumber_-1__id_1’) does not change anything.", "username": "Daniel_Li1" }, { "code": "", "text": "I’m sure there is lots of magic and optimisations behind the scenes but from a simplistic view, doing a group is much easier when data is sorted as you know when one group ends and another begins. Without the $sort, it takes a lot more effort, i.e.Grouping: “AABACAABA” is harder than:\nGrouping: “AAAAAABBC”You know a group has finished when the letter changes so you have to keep track of less as you go along. I imagine there are lots of clever sorting optimisations that the server really does.The memory issue you’re still getting is interesting, how large is your collection in size? The actual size and not size on disk? I’m not sure if the index reported size is compressed or not, remember that an index will be a tree structure so data is reduced from the actual source data. When you actually build documents from an index it’ll get bigger.So if you had four documents:\n“ABC”\n“ABD”\n“ABE”\n“ACE”The index only needs to store, AB CDE and ACE, reducing storage from 12 Chars to 8, but if you want to output the data, and it’s a covered index, it will need to re-create the data and so you’re back to 12 in memory storage. You obviously have overhead for the structure…but with lots of data, things tend to go in your favour!(I’m sure someone can correct me if I’d made a terrible error above)To sum up my ramblings, while the index may be 37M, the data that’s processed may get bigger, i.e. the size of your collection if you’re processing it all, although optimisation will be made along the way so I’m sure it’ll be much, much smaller…", "username": "John_Sewell" }, { "code": "", "text": "@Bhavya_Bhatt Appreciate your help, John! You are an expert ", "username": "Daniel_Li1" }, { "code": " { _id: 2988725, partNumber: '2WFG4' },\n { _id: 3177996, partNumber: '2WFP4' },\n { _id: 3072736, partNumber: '2WFP5' },\n", "text": "@Johan_Forssell I exported the collection to my local in text format and the total size is exactly 99,192,344 bytes. The following are a few documents:So still do not understand why sorting needs more than 100M memory. 
Thanks a lot!", "username": "Daniel_Li1" }, { "code": "", "text": "You’re probably on the cusp of in-memory there, if you filter it down a bit (with a $limit or something) how much do you need to reduce to not have the issue?With the $sort added and using allowdiskuse, does that calculate in enough time for your use case?At this point I guess it’s down to if you think each time you call this query, is it reasonable to take the hit of this query, if the data only updates on occasion then you could pre-calculate it when making changes, or just do it every so-often.There have been a few questions about grouping large datasets recently and there gets to a point with mongo where some things just take a little while, it may be quicker on some RDMS servers but mongo gives you other flexibility!You could do a merge update with this, so when you run it, it works out the most used and updates the document that matches.You could store the data as arrays with a document for part number…but you need to be wary in case you have a monster product that has a million components and you’ll blow the doc size limit (16MB).\nI’m sure running a $size and getting the biggest would be pretty quick.", "username": "John_Sewell" }, { "code": "db.product.explain(\"executionStats\").aggregate([\n {\n $sort: {partNumber: -1},\n },\n {\n $group: {\n _id: \"$partNumber\",\n count: { $sum: 1 }\n }\n },\n {\n $match: {count: {$gt: 8}},\n },\n {\n $sort: { count: -1 }\n },\n {\n $limit: 1\n }\n], { allowDiskUse: true })\n", "text": "@Johan_Forssell I added a $match stage as follows:but seems no changes. By the way, now I am very satisfied with this result. We only need to run this aggregation once or twice a year manually. Thanks again for your help and have a nice day, John!", "username": "Daniel_Li1" }, { "code": "{ \"$group\" : {\n \"_id\" : \"$partNumber\"\n} }\nlookup = { \"$lookup\" : {\n \"from\" : \"product_part_number\" ,\n \"as\" : \"count\" ,\n \"localField\" : \"_id\" ,\n \"foreignField\" : \"partNumber\" ,\n \"pipeline\" : [\n \"$count\" : \"_result\"\n ]\n} }\nproject = { \"$project\" : {\n \"count\" : { \"$arrayElemAt\" : [ \"$count._result\" , 0 ] }\n} }\n", "text": "You are sorting on the computed value count. No index can be used when sorting on a computed value. Indexes are used on stored field. A $sort on computed values will always be a memory sort, so it uses a lot of memory.A $group stage is blocking. All incoming documents must be processed before the first resulting document is passed to the next stage, so it uses a lot of memory.One trick I use to reduce the memory used by $group is to $group without an accumulator, then use $lookup to implement the accumulator logic. In your case, it may reduce the amount of memory needed by group by 50%. What I mean is something like:So the above 3 way more complicated states may reduce the memory footprint compared to the single $group. But not for the in memory sort on a computed field.", "username": "steevej" }, { "code": "", "text": "Thanks a lot, @steevej ! I learned something very useful. Have a nice day!", "username": "Daniel_Li1" } ]
How to use index in $group with $sum then $sort
2023-09-18T02:53:34.431Z
How to use index in $group with $sum then $sort
591
null
[ "compass", "sharding", "mongodb-shell" ]
[ { "code": "db.<collection name>.crateSearchIndex(\"test\", {})MongoServerError: command not founddb.<collection name>.crateSearchIndexdb.<collection name>.crateSearchIndex.help()createSearchIndex Creates one search indexes on a collection", "text": "Hi,\nI’m running mongosh (latest/greatest) from my Window 7 Pro PC against an Atlas-deployed db of mine (version 6.0.6 it says), to test the construction of Atlas Search indexes through mongosh.\nI connected to the cluster and entered the shell, then went to the db and showed the collections, just fine.I picked one collection and calleddb.<collection name>.crateSearchIndex(\"test\", {})expecting some syntax errors, but also trying in any other way, I getMongoServerError: command not foundYet, if I try\ndb.<collection name>.crateSearchIndex\nthe function appears to be known, showing:[Function: createSearchIndex] AsyncFunction {\napiVersions: [ 0, 0 ],\nreturnsPromise: true,\nserverVersions: [ ‘6.0.0’, ‘999.999.999’ ],\ntopologies: [ ‘ReplSet’, ‘Sharded’, ‘LoadBalanced’, ‘Standalone’ ],\nreturnType: { type: ‘unknown’, attributes: {} },\ndeprecated: false,\nplatforms: [ ‘Compass’, ‘Browser’, ‘CLI’ ],\nisDirectShellCommand: false,\nacceptsRawInput: false,\nshellCommandCompleter: undefined,\nhelp: [Function (anonymous)] Help\n}As well as it appears in\ndb.<collection name>.crateSearchIndex.help()\nwhich shows, among other things,\ncreateSearchIndex Creates one search indexes on a collectionSo if it’s “there” and in the help() list, why is it reporting a “not known” error when used?\nThe user I connect as has readWrite access to the db.Thanks", "username": "Stefano_Odoardi" }, { "code": "crateSearchIndexcreateSearchIndex", "text": "You have typed several times crateSearchIndex instead of createSearchIndex.\nThe first guess which leaps to mind is that you are misspelling the function name sometimes.", "username": "Jack_Woehr" }, { "code": "Atlas atlas-**** [primary] -**** > db.companies.getSearchIndexes()\nMongoServerError: $listSearchIndexes is not allowed or the syntax is incorrect,\nsee the Atlas documentation for more information\nAtlas atlas-**** [primary] -**** > db.companies.createSearchIndex()\nMongoServerError: command not found\n", "text": "Jack,\nthanks for catching that, but unfortunately that’s not the issue.Here is the latest output from properly spelled commands:Thanks", "username": "Stefano_Odoardi" }, { "code": "", "text": "Does it work if you try to create an index using Compass?", "username": "Jack_Woehr" }, { "code": "", "text": "I am currently on Compass 1.21.1 and I don’t see any way to create an Atlas Search index – native indexes OK, but it doesn’t seem to be there for Atlas Search indexes.\nSo I can’t seem to be able to try if they work from there.", "username": "Stefano_Odoardi" }, { "code": "", "text": "Hmm, I’m not so clear on the difference between the two types of indexes.However, the latest version of Compass is 1.38.0 … you might try updating.", "username": "Jack_Woehr" }, { "code": "", "text": "Hi there, thanks for posting in the MongoDB Community forum! createSearchIndex() is not yet available in MongoDB v6.0. Work is in progress so please hang tight! We will update here when it becomes available.", "username": "amyjian" }, { "code": "", "text": "Hello Amy,\nthanks for the clarification, but… ah! 
It didn’t really come across clearly from the documentation…At any rate, the post you refer to, where updates should appear on the matter is already 4 years old… it doesn’t seem the matter has been MongoDB’s priority…\nCan you share some info on the timeframe when this capability could be available?Alternatively, is there a way to create Atlas Search indexes via a more recent Compass version as Jack hypothesized?Alternatively yet, any way at all to create these indexes in a scripting context, or do I have to resort to other options, say a mini NodeJS app to that end?Thanks\n-stefano", "username": "Stefano_Odoardi" }, { "code": "In Progress", "text": "Hi @Stefano_Odoardi , thanks for your feedback on the documentation. We will update it to make this more clear.The status on the post should show In Progress now, and we expect to be able to deliver support for 6.0 within the next few months. Work to support creating Atlas Search indexes via Compass is also in progress.In the meanwhile, you can use the Atlas Admin API or Atlas CLI to create search indexes in a scripting context - see docs on different methods here. Let me know if that helps!", "username": "amyjian" }, { "code": "", "text": "3 posts were split to a new topic: createSearchIndexes not working", "username": "Kushagra_Kesav" } ]
createSearchIndex() not found in mongosh
2023-07-11T17:00:14.973Z
createSearchIndex() not found in mongosh
1,131
null
[ "aggregation", "atlas-search" ]
[ { "code": " [{\n \"$search\":{\n \"index\":\"vector_index\",\n \"knnBeta\":{\n \"vector\":[\n -0.30345699191093445,\n 0.6833441853523254,\n 1.2565147876739502,\n -0.6364057064056396\n ],\n \"path\":\"embedding\",\n \"k\":10,\n \"filter\":{\n \"compound\":{\n \"filter\":[\n {\n \"text\":{\n \"path\":\"my.field.name\",\n \"query\":[\n \"value1\",\n \"value2\",\n \"value3\",\n \"value4\"\n ]\n },\n {\n \"text\":{\n \"path\":\"my.field.name2\",\n \"query\":\"something_else\",\n }\n }\n ]\n }\n }\n }\n }\n },\n {\n \"$project\":{\n \"score\":{\n \"$meta\":\"searchScore\"\n },\n \"embedding\":0\n }\n }\n \n ]\nmy.field.namevalue1value2...my.field.name2something_elsemustcompound", "text": "I’m building an aggregation pipeline in mongodb and I’m encountering some unexpected behaviour.The pipeline is as follow:The pipeline (should) do a vector search according (vector_index, embedding, vector) (it work correctly it seems. With a filter, in particular the filter should limit the vector search to documents having my.field.name equal to value1 or value2 or ... and my.field.name2 equal to something_else.Instead, only the second filter works, or at least it seems (the value of the second filter is a single letter).I tried using the must clause as well in place of the filter inside the compound clause but the outcome remains the same.I tried also removing the second filtering (the one without the list) and I still get unfiltered results.Am I doing something wrong? how can it correctly?", "username": "Matteo_Villosio" }, { "code": "", "text": "I have the same problem : I’d like to filter the collection of documents before applying the vector search. But according to the documentation, KNN can’t be used in a compound query. It’s very frustrating.There is a sample use case where it could be useful:\nImagine working with sample_mflix datasource, and you’d like to do some recommendations when you select a Movie. Suppose you’ve selected a “Drama” movie, you’d like to focus on Drama movies to get recommendations based on the plot of this movie.\nActually, I cannot filter on genres, so I have recommendations on Comedy, Action, War …etc. That’s not what I want.Hoped filter could work, but it does not seem to be the case regarding @Matteo_Villosio question.", "username": "Frederic_Meriot" }, { "code": "", "text": "Hey @Frederic_MeriotUsing Vector Search via knnBeta allows you to run a approximate nearest neighbor query along with text pre-filtering.You do not need to use a compound statement to achieve pre-filtering. If you go to the docs here, and choose the tab for the “Filter Example” you’ll see how you can use a filter with vector search, and even though that filter is nested inside the knnBeta statement it is doing a “pre-filter”.With this filtering approach you should be able to exactly what you’re looking for in that example.", "username": "Benjamin_Flast" } ]
Filter on MongoDB Vector Search doesn't work as expected
2023-09-17T10:47:25.222Z
Filter on MongoDB Vector Search doesn&rsquo;t work as expected
484
null
[ "python", "php" ]
[ { "code": "pubDateFormatted = pubDate.strftime(\"%Y-%m-%dT00:00:00Z\")\ndocument {\n...\n\"publicationDate\": { \"$date\": pubDateFormatted }\n}\nfilter = {\n \"type\": \"pbook\",\n \"publicationDate\": {\n \"$gte\": {\n \"$date\": \"2022-01-01T00:00:00Z\"\n },\n \"$lte\": {\n \"$date\": \"2022-12-31T23:59:59Z\"\n }\n }\n}\n$filter = array('type' => 'pbook');\nif (!empty ($year))\n $filter['pubDate2'] =\n array(\n '$gte' => array(\n '$date' => \"2022-01-01T00:00:00Z\"\n ),\n '$lte' => array(\n '$date' => \"2022-12-31T23:59:59Z\"\n )\n );\n", "text": "I created a free M0 mongodb cluster in the Atlas Cloud.I added about 400.000 documents with book data using python MongoClient.\nThe documents contain a field “publicationDate” of the type “Date” which i defined as follows in python:Now I am able to query all books from the year 2022 by defining the filter as follows in python and get the correct results:Now the problem:\nI created a small HTML page to display all books. That uses a little php script and the Atlas DATA API to query the data. The API queries work, but as soon as i add the filter for the year it returns zero results.I also get no results when I run the query in the Atlas admin backend, although the field “publicationDate” is clearly marked with the type “Date” there!What am I doing wrong?", "username": "S_F" }, { "code": "$filter['pubDate2'] =$filter['publicationDate'] =", "text": "I think that$filter['pubDate2'] =should be\n$filter['publicationDate'] =\nsincethe field “publicationDate", "username": "steevej" }, { "code": "filter = {\"publicationDate\": {\"$type\": \"date\"}}filter = {\"publicationDate\": {\"$type\": \"date\"}}\nprint(collection.count_documents(filter))\n> 0\n", "text": "Oh, sorry, i have been testing a lot and copied the wrong code.\nI get zero results also when using $filter[‘publicationDate’] = …Furthermore:\nWhen i query all documents where publicationDate is of type “Date”, i get zero results.\nfilter = {\"publicationDate\": {\"$type\": \"date\"}}Even in python, where filtering by date is working, i get zero results for this:im really confused.", "username": "S_F" }, { "code": "filter = {\"publicationDate\": {\"$type\": \"date\"}}\nprint(collection.count_documents(filter))\n", "text": "How many documents if you use an empty filter jn", "username": "steevej" }, { "code": "", "text": "I get 380191 results with empty filter", "username": "S_F" }, { "code": "", "text": "can you print out some of the results, a screenshot of the same document in compass would also be nice", "username": "steevej" }, { "code": "", "text": "Thanks for pointing me to Compass. I’m still very new to mongodb.In Atlast Cloud Backend it says “Date”, in Compass it says “Object”. (Is Date an object? Or did i save it the wrong way?)", "username": "S_F" }, { "code": "{\n \"_id\": {\n \"$oid\": \"64e73f03ed4d31d42c8d7b58\"\n },\n \"title\": \"Wie die Stille vor dem Fall. 
Zweites Buch: Special Edition\",\n \"isbn\": \"9783736321670\",\n \"publicationDate\": {\n \"$date\": \"2023-07-28T00:00:00Z\"\n },\n \"type\": \"pbook\",\n}\n", "text": "And thats the document:", "username": "S_F" }, { "code": "filter = {\"publicationDate\": {\"$type\": \"date\"}}\nprint(collection.count_documents(filter))\n> 0\nfilter = {\"publicationDate.$date\": {\"$type\": \"string\"}}\nprint(collection.count_documents(filter))\nfilter = {\"publicationDate.$date\" : \"2023-07-28T00:00:00Z\" }\nprint(collection.count_documents(filter))\n", "text": "Are you sure you are looking at the same document?I am not.The one in Atlas has an appropriate date that ends with .0000+00:00 while the other one ends with Z.It looks like the document 64e73f03ed4d31d42c8d7b58 really has a field named publicationDate which is an object that has a field named $date with a string value.That would explain that you get:Try the queriesand", "username": "steevej" } ]
Weird Problem with Date field query returning zero results only when using Atlas Data API
2023-09-15T10:08:44.251Z
Weird Problem with Date field query returning zero results only when using Atlas Data API
409
null
[]
[ { "code": "{\"t\":{\"$date\":\"2020-11-22T22:11:36.308+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:61989\",\"connectionId\":1386,\"connectionCount\":62}}\n{\"t\":{\"$date\":\"2020-11-22T22:11:36.314+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1386\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:61989\",\"client\":\"conn1386\",\"doc\":{\"driver\":{\"name\":\"nodejs|Mongoose\",\"version\":\"3.6.2\"},\"os\":{\"type\":\"Windows_NT\",\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.19042\"},\"platform\":\"'Node.js v14.15.0, LE (unified)\",\"version\":\"3.6.2|5.10.12\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:11:36.318+01:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn1386\",\"msg\":\"Successful authentication\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"principalName\":\"tobias.jung\",\"authenticationDatabase\":\"admin\",\"client\":\"127.0.0.1:61989\"}}\n{\"t\":{\"$date\":\"2020-11-22T22:12:09.042+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1384\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:61978\",\"connectionId\":1384,\"connectionCount\":61}}\n{\"t\":{\"$date\":\"2020-11-22T22:16:08.970+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:62016\",\"connectionId\":1387,\"connectionCount\":62}}\n{\"t\":{\"$date\":\"2020-11-22T22:16:08.971+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1387\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:62016\",\"client\":\"conn1387\",\"doc\":{\"driver\":{\"name\":\"nodejs|Mongoose\",\"version\":\"3.6.2\"},\"os\":{\"type\":\"Windows_NT\",\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.19042\"},\"platform\":\"'Node.js v14.15.0, LE (unified)\",\"version\":\"3.6.2|5.10.12\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:16:08.972+01:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn1387\",\"msg\":\"Successful authentication\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"principalName\":\"tobias.jung\",\"authenticationDatabase\":\"admin\",\"client\":\"127.0.0.1:62016\"}}\n{\"t\":{\"$date\":\"2020-11-22T22:16:25.902+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1385\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:61987\",\"connectionId\":1385,\"connectionCount\":61}}\n{\"t\":{\"$date\":\"2020-11-22T22:17:41.357+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1386\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:61989\",\"connectionId\":1386,\"connectionCount\":60}}\n{\"t\":{\"$date\":\"2020-11-22T22:22:08.973+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22944, \"ctx\":\"conn1387\",\"msg\":\"Connection ended\",\"attr\":{\"remote\":\"127.0.0.1:62016\",\"connectionId\":1387,\"connectionCount\":59}}\n{\"t\":{\"$date\":\"2020-11-22T22:24:17.965+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn1278\",\"msg\":\"Slow 
query\",\"attr\":{\"type\":\"command\",\"ns\":\"dns.domain_last_query\",\"command\":{\"insert\":\"domain_last_query\",\"documents\":971,\"ordered\":true,\"lsid\":{\"id\":{\"$uuid\":\"59eaba3a-9239-41cb-b21a-51aff330bb25\"}},\"$db\":\"dns\"},\"ninserted\":971,\"keysInserted\":6988,\"numYields\":0,\"reslen\":45,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":2}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":2}},\"Global\":{\"acquireCount\":{\"w\":2}},\"Database\":{\"acquireCount\":{\"w\":2}},\"Collection\":{\"acquireCount\":{\"w\":2}},\"Mutex\":{\"acquireCount\":{\"r\":2}}},\"flowControl\":{\"acquireCount\":2,\"timeAcquiringMicros\":2},\"storage\":{\"data\":{\"bytesRead\":20026987,\"timeReadingMicros\":331777}},\"protocol\":\"op_msg\",\"durationMillis\":542}}\n{\"t\":{\"$date\":\"2020-11-22T22:24:18.151+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"127.0.0.1:62090\",\"connectionId\":1388,\"connectionCount\":60}}\n{\"t\":{\"$date\":\"2020-11-22T22:24:18.156+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn1388\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"127.0.0.1:62090\",\"client\":\"conn1388\",\"doc\":{\"driver\":{\"name\":\"nodejs|Mongoose\",\"version\":\"3.6.2\"},\"os\":{\"type\":\"Windows_NT\",\"name\":\"win32\",\"architecture\":\"x64\",\"version\":\"10.0.19042\"},\"platform\":\"'Node.js v14.15.0, LE (unified)\",\"version\":\"3.6.2|5.10.12\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:24:18.161+01:00\"},\"s\":\"I\", \"c\":\"ACCESS\", \"id\":20250, \"ctx\":\"conn1388\",\"msg\":\"Successful authentication\",\"attr\":{\"mechanism\":\"SCRAM-SHA-256\",\"principalName\":\"tobias.jung\",\"authenticationDatabase\":\"admin\",\"client\":\"127.0.0.1:62090\"}}\n{\"t\":{\"$date\":\"2020-11-22T22:24:18.219+01:00\"},\"s\":\"I\", \"c\":\"COMMAND\", \"id\":51803, \"ctx\":\"conn1270\",\"msg\":\"Slow query\",\"attr\":{\"type\":\"command\",\"ns\":\"dns.data\",\"command\":{\"insert\":\"data\",\"documents\":4243,\"ordered\":true,\"lsid\":{\"id\":{\"$uuid\":\"c28063d8-9b91-499f-ad7e-f152466bfb51\"}},\"$db\":\"dns\"},\"ninserted\":4243,\"keysInserted\":16972,\"numYields\":0,\"reslen\":45,\"locks\":{\"ParallelBatchWriterMode\":{\"acquireCount\":{\"r\":9}},\"ReplicationStateTransition\":{\"acquireCount\":{\"w\":9}},\"Global\":{\"acquireCount\":{\"w\":9}},\"Database\":{\"acquireCount\":{\"w\":9}},\"Collection\":{\"acquireCount\":{\"w\":9}},\"Mutex\":{\"acquireCount\":{\"r\":9}}},\"flowControl\":{\"acquireCount\":9,\"timeAcquiringMicros\":9},\"storage\":{\"data\":{\"bytesRead\":11295746,\"timeReadingMicros\":520657}},\"protocol\":\"op_msg\",\"durationMillis\":752}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:38.902+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23134, \"ctx\":\"conn1244\",\"msg\":\"Unhandled exception\",\"attr\":{\"exceptionString\":\"(access violation)\",\"addressString\":\"0x000000000000000E\"}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:38.902+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23135, \"ctx\":\"conn1244\",\"msg\":\"Access violation\",\"attr\":{\"accessType\":\"DEP violation at\",\"address\":\" 0xe\"}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:38.902+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23136, \"ctx\":\"conn1244\",\"msg\":\"*** stack trace for unhandled exception:\"}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.515+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31380, \"ctx\":\"conn1244\",\"msg\":\"BACKTRACE: 
{bt}\",\"attr\":{\"bt\":{\"backtrace\":[{\"a\":\"E\"},{\"a\":\"AFC96F8FD0\"},{\"a\":\"AFC96F9080\"},{\"a\":\"7FF70A143401\",\"module\":\"mongod.exe\",\"s\":\"`string'\",\"s+\":\"1\"},{\"a\":\"7FF7098D5847\",\"module\":\"mongod.exe\",\"file\":\".../src/third_party/wiredtiger/src/config/config_check.c\",\"line\":91,\"s\":\"config_check\",\"s+\":\"597\"},{\"a\":\"7FF7098D5296\",\"module\":\"mongod.exe\",\"file\":\".../src/third_party/wiredtiger/src/config/config_check.c\",\"line\":28,\"s\":\"__wt_config_check\",\"s+\":\"26\"},{\"a\":\"7FF7098D9308\",\"module\":\"mongod.exe\",\"file\":\".../src/third_party/wiredtiger/src/session/session_api.c\",\"line\":1605,\"s\":\"__session_begin_transaction\",\"s+\":\"3D8\"},{\"a\":\"7FF708530B8D\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_begin_transaction_block.cpp\",\"line\":72,\"s\":\"mongo::WiredTigerBeginTxnBlock::WiredTigerBeginTxnBlock\",\"s+\":\"19D\"},{\"a\":\"7FF7084F23BA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_recovery_unit.cpp\",\"line\":508,\"s\":\"mongo::WiredTigerRecoveryUnit::_txnOpen\",\"s+\":\"16A\"},{\"a\":\"7FF7084F3899\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_recovery_unit.cpp\",\"line\":306,\"s\":\"mongo::WiredTigerRecoveryUnit::getSession\",\"s+\":\"19\"},{\"a\":\"7FF7085306C6\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_cursor.cpp\",\"line\":48,\"s\":\"mongo::WiredTigerCursor::WiredTigerCursor\",\"s+\":\"56\"},{\"a\":\"7FF7085256D7\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_index.cpp\",\"line\":813,\"s\":\"mongo::`anonymous namespace'::WiredTigerIndexCursorBase::WiredTigerIndexCursorBase\",\"s+\":\"1D7\"},{\"a\":\"7FF70852DDB9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_index.cpp\",\"line\":1364,\"s\":\"mongo::WiredTigerIndexUnique::newCursor\",\"s+\":\"59\"},{\"a\":\"7FF708C7C3A3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/index/index_access_method.cpp\",\"line\":222,\"s\":\"mongo::AbstractIndexAccessMethod::newCursor\",\"s+\":\"13\"},{\"a\":\"7FF708C34DBD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/index_scan.cpp\",\"line\":92,\"s\":\"mongo::IndexScan::initIndexScan\",\"s+\":\"3D\"},{\"a\":\"7FF708C3452D\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/index_scan.cpp\",\"line\":145,\"s\":\"mongo::IndexScan::doWork\",\"s+\":\"34D\"},{\"a\":\"7FF708C2BC00\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/plan_stage.cpp\",\"line\":47,\"s\":\"mongo::PlanStage::work\",\"s+\":\"50\"},{\"a\":\"7FF708C3C5A0\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/fetch.cpp\",\"line\":84,\"s\":\"mongo::FetchStage::doWork\",\"s+\":\"70\"},{\"a\":\"7FF708C2BC00\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/plan_stage.cpp\",\"line\":47,\"s\":\"mongo::PlanStage::work\",\"s+\":\"50\"},{\"a\":\"7FF708C33124\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/limit.cpp\",\"line\":70,\"s\":\"mongo::LimitStage::doWork\",\"s+\":\"44\"},{\"a\":\"7FF708C2BC00\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/plan_stage.cpp\",\"line\":47,\"s\":\"mongo::PlanStage::work\",\"s+\":\"50\"},{\"a\":\"7FF708BED8C7\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/query/plan_executor_impl.cpp\",\"line\":582,\"s\":\"mongo::PlanExecutorImpl::_getNextImpl\",\"s+\":\"557\"},{\"a\":\"7FF708BEF0D4\",\"module\":\"mongod
.exe\",\"file\":\".../src/mongo/db/query/plan_executor_impl.cpp\",\"line\":413,\"s\":\"mongo::PlanExecutorImpl::getNext\",\"s+\":\"44\"},{\"a\":\"7FF70894E540\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/commands/find_cmd.cpp\",\"line\":516,\"s\":\"mongo::`anonymous namespace'::FindCmd::Invocation::run\",\"s+\":\"DF0\"},{\"a\":\"7FF708DEC403\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/commands.cpp\",\"line\":187,\"s\":\"mongo::CommandHelpers::runCommandInvocation\",\"s+\":\"83\"},{\"a\":\"7FF7085753B1\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_common.cpp\",\"line\":855,\"s\":\"mongo::`anonymous namespace'::runCommandImpl\",\"s+\":\"171\"},{\"a\":\"7FF708570068\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_common.cpp\",\"line\":1201,\"s\":\"mongo::`anonymous namespace'::execCommandDatabase\",\"s+\":\"18D8\"},{\"a\":\"7FF70856BFBE\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_common.cpp\",\"line\":1387,\"s\":\"<lambda_c6d7a9996e183ee308520a8a4f9ec84a>::operator()\",\"s+\":\"50E\"},{\"a\":\"7FF708573406\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_common.cpp\",\"line\":1415,\"s\":\"mongo::`anonymous namespace'::receivedCommands\",\"s+\":\"B6\"},{\"a\":\"7FF708571BE8\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_common.cpp\",\"line\":1717,\"s\":\"mongo::ServiceEntryPointCommon::handleRequest\",\"s+\":\"988\"},{\"a\":\"7FF70855F812\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_mongod.cpp\",\"line\":291,\"s\":\"mongo::ServiceEntryPointMongod::handleRequest\",\"s+\":\"32\"},{\"a\":\"7FF708561D5E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":474,\"s\":\"mongo::ServiceStateMachine::_processMessage\",\"s+\":\"1BE\"},{\"a\":\"7FF7085622D1\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":562,\"s\":\"mongo::ServiceStateMachine::_runNextInGuard\",\"s+\":\"C1\"},{\"a\":\"7FF708561454\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/functional\",\"line\":910,\"s\":\"std::_Func_impl_no_alloc<<lambda_b23af5efc3b61ab25bff0c3bcd13382b>,void>::_Do_call\",\"s+\":\"54\"},{\"a\":\"7FF709536406\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_executor_synchronous.cpp\",\"line\":109,\"s\":\"mongo::transport::ServiceExecutorSynchronous::schedule\",\"s+\":\"106\"},{\"a\":\"7FF7085624BA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":609,\"s\":\"mongo::ServiceStateMachine::_scheduleNextWithGuard\",\"s+\":\"CA\"},{\"a\":\"7FF708562C69\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":378,\"s\":\"mongo::ServiceStateMachine::_sourceCallback\",\"s+\":\"A9\"},{\"a\":\"7FF7085601DC\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/future_impl.h\",\"line\":237,\"s\":\"mongo::future_details::call<<lambda_6d4b3db97642441b3cfe124a238ff086> &,mongo::StatusWith<mongo::Message> 
>\",\"s+\":\"BC\"},{\"a\":\"7FF708560F5D\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/future_impl.h\",\"line\":851,\"s\":\"<lambda_645097e44654016d0f66787be9d363e4>::operator()\",\"s+\":\"4D\"},{\"a\":\"7FF708560504\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/future_impl.h\",\"line\":1163,\"s\":\"mongo::future_details::FutureImpl<mongo::Message>::generalImpl<<lambda_645097e44654016d0f66787be9d363e4>,<lambda_23814a3d2ba869424aa848885b1b33f3>,<lambda_812c3ff3f7ad2a89b9a0b078eb4dc8ab> >\",\"s+\":\"34\"},{\"a\":\"7FF7085633AA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":329,\"s\":\"mongo::ServiceStateMachine::_sourceMessage\",\"s+\":\"12A\"},{\"a\":\"7FF7085622FD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":558,\"s\":\"mongo::ServiceStateMachine::_runNextInGuard\",\"s+\":\"ED\"},{\"a\":\"7FF708561454\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/functional\",\"line\":910,\"s\":\"std::_Func_impl_no_alloc<<lambda_b23af5efc3b61ab25bff0c3bcd13382b>,void>::_Do_call\",\"s+\":\"54\"},{\"a\":\"7FF709535FE2\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_executor_synchronous.cpp\",\"line\":127,\"s\":\"<lambda_29546a6698e80e311a5ae805fe7aad67>::operator()\",\"s+\":\"152\"},{\"a\":\"7FF7096B3326\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":43,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_30130984df28a890937aeb9eb32385d9> >,0>\",\"s+\":\"36\"},{\"a\":\"7FFB001614C2\",\"module\":\"ucrtbase.dll\",\"s\":\"configthreadlocale\",\"s+\":\"92\"},{\"a\":\"7FFB01797034\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}]}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.515+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"E\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.515+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"AFC96F8FD0\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.515+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"AFC96F9080\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.515+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF70A143401\",\"module\":\"mongod.exe\",\"s\":\"`string'\",\"s+\":\"1\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.515+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7098D5847\",\"module\":\"mongod.exe\",\"file\":\".../src/third_party/wiredtiger/src/config/config_check.c\",\"line\":91,\"s\":\"config_check\",\"s+\":\"597\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.515+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7098D5296\",\"module\":\"mongod.exe\",\"file\":\".../src/third_party/wiredtiger/src/config/config_check.c\",\"line\":28,\"s\":\"__wt_config_check\",\"s+\":\"26\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, 
\"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7098D9308\",\"module\":\"mongod.exe\",\"file\":\".../src/third_party/wiredtiger/src/session/session_api.c\",\"line\":1605,\"s\":\"__session_begin_transaction\",\"s+\":\"3D8\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708530B8D\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_begin_transaction_block.cpp\",\"line\":72,\"s\":\"mongo::WiredTigerBeginTxnBlock::WiredTigerBeginTxnBlock\",\"s+\":\"19D\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7084F23BA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_recovery_unit.cpp\",\"line\":508,\"s\":\"mongo::WiredTigerRecoveryUnit::_txnOpen\",\"s+\":\"16A\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7084F3899\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_recovery_unit.cpp\",\"line\":306,\"s\":\"mongo::WiredTigerRecoveryUnit::getSession\",\"s+\":\"19\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7085306C6\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_cursor.cpp\",\"line\":48,\"s\":\"mongo::WiredTigerCursor::WiredTigerCursor\",\"s+\":\"56\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7085256D7\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_index.cpp\",\"line\":813,\"s\":\"mongo::`anonymous namespace'::WiredTigerIndexCursorBase::WiredTigerIndexCursorBase\",\"s+\":\"1D7\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF70852DDB9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/storage/wiredtiger/wiredtiger_index.cpp\",\"line\":1364,\"s\":\"mongo::WiredTigerIndexUnique::newCursor\",\"s+\":\"59\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708C7C3A3\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/index/index_access_method.cpp\",\"line\":222,\"s\":\"mongo::AbstractIndexAccessMethod::newCursor\",\"s+\":\"13\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708C34DBD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/index_scan.cpp\",\"line\":92,\"s\":\"mongo::IndexScan::initIndexScan\",\"s+\":\"3D\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708C3452D\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/index_scan.cpp\",\"line\":145,\"s\":\"mongo::IndexScan::doWork\",\"s+\":\"34D\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708C2BC00\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/plan_stage.cpp\",\"line\":47,\"s\":\"mongo::PlanStage::work\",\"s+\":\"50\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708C3C5A0\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/fetch.cpp\",\"line\":84,\"s\":\"mongo::FetchStage::doWork\",\"s+\":\"70\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708C2BC00\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/plan_stage.cpp\",\"line\":47,\"s\":\"mongo::PlanStage::work\",\"s+\":\"50\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708C33124\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/limit.cpp\",\"line\":70,\"s\":\"mongo::LimitStage::doWork\",\"s+\":\"44\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708C2BC00\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/exec/plan_stage.cpp\",\"line\":47,\"s\":\"mongo::PlanStage::work\",\"s+\":\"50\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708BED8C7\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/query/plan_executor_impl.cpp\",\"line\":582,\"s\":\"mongo::PlanExecutorImpl::_getNextImpl\",\"s+\":\"557\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708BEF0D4\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/query/plan_executor_impl.cpp\",\"line\":413,\"s\":\"mongo::PlanExecutorImpl::getNext\",\"s+\":\"44\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF70894E540\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/commands/find_cmd.cpp\",\"line\":516,\"s\":\"mongo::`anonymous namespace'::FindCmd::Invocation::run\",\"s+\":\"DF0\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708DEC403\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/commands.cpp\",\"line\":187,\"s\":\"mongo::CommandHelpers::runCommandInvocation\",\"s+\":\"83\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7085753B1\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_common.cpp\",\"line\":855,\"s\":\"mongo::`anonymous namespace'::runCommandImpl\",\"s+\":\"171\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708570068\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_common.cpp\",\"line\":1201,\"s\":\"mongo::`anonymous namespace'::execCommandDatabase\",\"s+\":\"18D8\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF70856BFBE\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_common.cpp\",\"line\":1387,\"s\":\"<lambda_c6d7a9996e183ee308520a8a4f9ec84a>::operator()\",\"s+\":\"50E\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708573406\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_common.cpp\",\"line\":1415,\"s\":\"mongo::`anonymous namespace'::receivedCommands\",\"s+\":\"B6\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708571BE8\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_common.cpp\",\"line\":1717,\"s\":\"mongo::ServiceEntryPointCommon::handleRequest\",\"s+\":\"988\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF70855F812\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/service_entry_point_mongod.cpp\",\"line\":291,\"s\":\"mongo::ServiceEntryPointMongod::handleRequest\",\"s+\":\"32\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708561D5E\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":474,\"s\":\"mongo::ServiceStateMachine::_processMessage\",\"s+\":\"1BE\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7085622D1\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":562,\"s\":\"mongo::ServiceStateMachine::_runNextInGuard\",\"s+\":\"C1\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708561454\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/functional\",\"line\":910,\"s\":\"std::_Func_impl_no_alloc<<lambda_b23af5efc3b61ab25bff0c3bcd13382b>,void>::_Do_call\",\"s+\":\"54\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: 
{frame}\",\"attr\":{\"frame\":{\"a\":\"7FF709536406\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_executor_synchronous.cpp\",\"line\":109,\"s\":\"mongo::transport::ServiceExecutorSynchronous::schedule\",\"s+\":\"106\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7085624BA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":609,\"s\":\"mongo::ServiceStateMachine::_scheduleNextWithGuard\",\"s+\":\"CA\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708562C69\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":378,\"s\":\"mongo::ServiceStateMachine::_sourceCallback\",\"s+\":\"A9\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7085601DC\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/future_impl.h\",\"line\":237,\"s\":\"mongo::future_details::call<<lambda_6d4b3db97642441b3cfe124a238ff086> &,mongo::StatusWith<mongo::Message> >\",\"s+\":\"BC\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708560F5D\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/future_impl.h\",\"line\":851,\"s\":\"<lambda_645097e44654016d0f66787be9d363e4>::operator()\",\"s+\":\"4D\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708560504\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/future_impl.h\",\"line\":1163,\"s\":\"mongo::future_details::FutureImpl<mongo::Message>::generalImpl<<lambda_645097e44654016d0f66787be9d363e4>,<lambda_23814a3d2ba869424aa848885b1b33f3>,<lambda_812c3ff3f7ad2a89b9a0b078eb4dc8ab> >\",\"s+\":\"34\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7085633AA\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":329,\"s\":\"mongo::ServiceStateMachine::_sourceMessage\",\"s+\":\"12A\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7085622FD\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_state_machine.cpp\",\"line\":558,\"s\":\"mongo::ServiceStateMachine::_runNextInGuard\",\"s+\":\"ED\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF708561454\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/functional\",\"line\":910,\"s\":\"std::_Func_impl_no_alloc<<lambda_b23af5efc3b61ab25bff0c3bcd13382b>,void>::_Do_call\",\"s+\":\"54\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, 
\"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF709535FE2\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/transport/service_executor_synchronous.cpp\",\"line\":127,\"s\":\"<lambda_29546a6698e80e311a5ae805fe7aad67>::operator()\",\"s+\":\"152\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FF7096B3326\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files (x86)/Microsoft Visual Studio/2019/Professional/VC/Tools/MSVC/14.26.28801/include/thread\",\"line\":43,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_30130984df28a890937aeb9eb32385d9> >,0>\",\"s+\":\"36\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFB001614C2\",\"module\":\"ucrtbase.dll\",\"s\":\"configthreadlocale\",\"s+\":\"92\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.516+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"conn1244\",\"msg\":\" Frame: {frame}\",\"attr\":{\"frame\":{\"a\":\"7FFB01797034\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.517+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23131, \"ctx\":\"conn1244\",\"msg\":\"Failed to open minidump file\",\"attr\":{\"dumpName\":\"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\4.4\\\\bin\\\\mongod.2020-11-22T21-25-41.mdmp\",\"error\":\"Zugriff verweigert\"}}\n{\"t\":{\"$date\":\"2020-11-22T22:25:41.517+01:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23137, \"ctx\":\"conn1244\",\"msg\":\"*** immediate exit due to unhandled exception\"}\n", "text": "Hello together!I have a problem, MongoDB crashes nearly every day.Here is the paste:Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.I use Windows 10 Pro 64 Bit.\nMongoDB Community 4.4What is the problem?Kind Regards\nTobias Jung", "username": "Tobias_Jung" }, { "code": "", "text": "Were you able to find a reason for your crashes? I am having the same thing happen to me.", "username": "Tim_D" }, { "code": "", "text": "Was this ever resolved? Experiencing the same issue.", "username": "ryan123" } ]
MongoDB Crashes sometimes
2020-11-22T22:10:25.493Z
MongoDB Crashes sometimes
3,000
null
[ "queries" ]
[ { "code": "const blockCheck = await ctx.db.collection(\"userblocks\").findOne({\n $or: [\n { user: ctx.state.user.id, blocked: ctx.state.comment.forAuthor },\n { user: ctx.state.comment.forAuthor, blocked: ctx.state.user.id }\n ]\n }, { session })\n{ $or: [\n { \n user: ObjectId('6111111af44c3438499b00a0'),\n blocked: ObjectId('6111111af44c3438499b00a0')\n },\n {\n user: ObjectId('6111111af44c3438499b00a0'),\n blocked: ObjectId('6111111af44c3438499b00a0')\n }\n] }\n{\n _id: new ObjectId(\"64fc57379b5ad000ffde000\"),\n user: new ObjectId(\"6111111af44c3438499b00a0\"),\n blocked: new ObjectId(\"65cade111531f6bc014dbb45\"),\n createdAt: 2023-09-09T11:29:59.984Z,\n notes: null\n}\n", "text": "I have a findOne running this query:The issue i’m having is that for one user, when the user ID and the author ID are the same, this OR query returns a block that doesn’t match the query. I can replicate it with this user every single time. I can’t yet replicate it with any other user. I’m concerned there are more users experiencing this random behavior that I can’t yet see.Does anyone have any idea why this would be happening? It shouldn’t be.For example, if I run this in MongoDB Compass:It returns 0 results as expected, for the given user that has the issue when the first code example is run against them and a document is returned. I also had them test by removing all their blocks, and 0 results were returned as expected, when they added them back and created new documents in the collection, this issue started happening again.For further clarification, it returns this object when running the findOne:I’m running on MongoDB Atlas on v7.0.1 in a 3 node replica set.", "username": "boomography" }, { "code": "_id: new ObjectId(\"64fc57379b5ad000ffde000\"), $or: [\n { \"$elemMatch\" : { \"array.user\" : ctx.state.user.id, \"array.blocked\" : ctx.state.comment.forAuthor },\n { \"$elemMatch\" : { \"array.user\" : ctx.state.comment.forAuthor, \"array.blocked\": ctx.state.user.id }\n ]\n", "text": "The query seems correct and works correctly.However, I have notice that_id: new ObjectId(\"64fc57379b5ad000ffde000\"),is not a valid ObjectId since I get the errorArgument passed in must be a string of 12 bytes or a string of 24 hex characters or an integer.What I think is that you have redacted your problem in order to simplify it. But it was over-simplified and the real issue is now hidden. What I suspect is that your fields user and blocked and inside an array rather than at the top level. You have indeed have an array you will need to use $elemMatch like:", "username": "steevej" } ]
FindOne with OR matching wrong data
2023-09-17T01:36:58.270Z
FindOne with OR matching wrong data
244
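A minimal mongosh sketch of the distinction steevej points to above, assuming a hypothetical collection where the user/blocked pairs sit inside an array of subdocuments: a dotted-path query can match when the two conditions are satisfied by different array elements, whereas $elemMatch requires a single element to satisfy both.

```js
// Hypothetical shape: each document holds an array of block entries.
db.userblocks_demo.insertOne({
  blocks: [
    { user: "A", blocked: "B" },
    { user: "C", blocked: "D" }
  ]
});

// Dotted-path query: matches the document above, because "user: 'A'"
// and "blocked: 'D'" are each satisfied somewhere in the array,
// by two different elements.
db.userblocks_demo.find({ "blocks.user": "A", "blocks.blocked": "D" });

// $elemMatch: a single array element must satisfy both conditions,
// so this returns nothing for the document above.
db.userblocks_demo.find({
  blocks: { $elemMatch: { user: "A", blocked: "D" } }
});
```

If the fields really are top level, as in the posted document, the original $or query behaves as expected, so confirming the stored shape with a plain findOne() is the first thing to check.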
null
[ "containers", "storage" ]
[ { "code": "{\"t\":{\"$date\":\"2023-09-17T21:26:24.775+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":1,\"message\":{\"ts_sec\":1694985984,\"ts_usec\":775719,\"thread\":\"1:0xffff8b25d040\",\"session_dhandle_name\":\"file:WiredTiger.wt\",\"session_name\":\"connection\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__posix_open_file:805:/data/db/WiredTiger.wt: handle-open: open\",\"error_str\":\"Operation not permitted\",\"error_code\":1}}}\n{\"t\":{\"$date\":\"2023-09-17T21:26:24.782+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":1,\"message\":{\"ts_sec\":1694985984,\"ts_usec\":782742,\"thread\":\"1:0xffff8b25d040\",\"session_dhandle_name\":\"file:WiredTiger.wt\",\"session_name\":\"connection\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__posix_open_file:805:/data/db/WiredTiger.wt: handle-open: open\",\"error_str\":\"Operation not permitted\",\"error_code\":1}}}\n{\"t\":{\"$date\":\"2023-09-17T21:26:24.789+00:00\"},\"s\":\"E\", \"c\":\"WT\", \"id\":22435, \"ctx\":\"initandlisten\",\"msg\":\"WiredTiger error message\",\"attr\":{\"error\":1,\"message\":{\"ts_sec\":1694985984,\"ts_usec\":789444,\"thread\":\"1:0xffff8b25d040\",\"session_dhandle_name\":\"file:WiredTiger.wt\",\"session_name\":\"connection\",\"category\":\"WT_VERB_DEFAULT\",\"category_id\":9,\"verbose_level\":\"ERROR\",\"verbose_level_id\":-3,\"msg\":\"__posix_open_file:805:/data/db/WiredTiger.wt: handle-open: open\",\"error_str\":\"Operation not permitted\",\"error_code\":1}}}\n{\"t\":{\"$date\":\"2023-09-17T21:26:24.790+00:00\"},\"s\":\"W\", \"c\":\"STORAGE\", \"id\":22347, \"ctx\":\"initandlisten\",\"msg\":\"Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.\"}\n", "text": "I am running mongo inside a docker container. It shut down unexpectedly and I was able to run --repair. However whenever I try to restart the container I get an error saying something about WiredTiger.wt having a handle open._posix_open_file:805:/data/db/WiredTiger.wt: handle-open: open\",“error_str”:“Operation not permitted”,“error_code”:1I’ll paste more of the log output below.Thanks for any help.Bruce", "username": "Bruce_Sherin" }, { "code": "docker run -p 27017:27017 -v mongo-volume:/data/db --name tactic-mongo --network=tactic-net --restart always -d mongo:latest", "text": "This is at least a partial answer to my own question. I discovered that I could start the mongo container with a bind mount to an empty directory and it would start just fine. But if I stopped the container and remounted it I would get the exact same errors as above. And I did try messing with the permissions.However I didn’t mention in my initial post that I am running this in the docker desktop on macOS. And I found this warning on the documentation associated with the mongo image on the docker hub.WARNING (Windows & OS X): When running the Linux-based MongoDB images on Windows and OS X, the file systems used to share between the host system and the Docker container is not compatible with the memory mapped files used by MongoDB (docs.mongodb.org and related jira.mongodb.org bug). This means that it is not possible to run a MongoDB container with the data directory mapped to the host. 
To persist data between container restarts, we recommend using a local named volume instead (see docker volume create). Alternatively you can use the Windows-based images on Windows.So I created a named volume “mongo-volume” and started the mongo container with the linedocker run -p 27017:27017 -v mongo-volume:/data/db --name tactic-mongo --network=tactic-net --restart always -d mongo:latestThis seems to do the trick, except that I wasn’t able to recover some data in the previous version of the database. Also, it’s unclear why my old way of running the container, which I had been using for years, suddenly stopped working.", "username": "Bruce_Sherin" } ]
WiredTiger.wt handle-open preventing server start
2023-09-17T21:34:33.888Z
WiredTiger.wt handle-open preventing server start
551
null
[ "compass", "mongodb-shell" ]
[ { "code": "", "text": "Hello.\nI am using mongosh to connect to my clusters. Sometimes I cannot access to my cluster due to the company restrictions. So, can I use a command line in the web environment?. It is, when I enter the mongodb website I only can add, delete, replace documents in my collections (under a easy interface), but I would need to use the Js Commands to learn and to teach mongodb. Is there any option? I have the same problem using Compass. Thanks", "username": "pedro_montilla" }, { "code": "", "text": "Hello @pedro_montilla and welcome to the forums.The Mongo Web Shell can be used.https://mws.mongodb.com/", "username": "chris" } ]
Command line in web environment
2023-09-19T08:41:04.194Z
Command line in web environment
278
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Hi,We have implemented the aggregation in the app. But the aggregation query is taking 20 sec to load.We have the $match condition $facet stage.Below is the sample query:db.used_vehicles.explain(“executionStats”).aggregate([{“$match”:{“status”:“ACTIVE”,“media_exist”:1,“vehicle_type_id”:1,“listing_status”:“APPROVED”}},{“$facet”:{“categoryByState”:[{“$match”:{“used_vehicle_spec.state.state_name”:{“$ne”:null}}},{“$group”:{“_id”:“$used_vehicle_spec.state.state_id”,“state_identifier”:{“$first”:“$used_vehicle_spec.state.state_identifier”},“state_name”:{“$first”:“$used_vehicle_spec.state.state_name”},“state_count”:{“$sum”:1}}},{“$project”:{“_id”:0,“state_id”:“$_id”,“state_identifier”:“$state_identifier”,“state_name”:“$state_name”,“state_count”:“$state_count”}},{“$sort”:{“state_count”:-1,“state_name”:1}}],“categoryByCity”:[{“$match”:{“used_vehicle_spec.city.city_name”:{“$ne”:“”}}},{“$group”:{“_id”:“$used_vehicle_spec.city.city_id”,“city_identifier”:{“$first”:“$used_vehicle_spec.city.city_identifier”},“city_name”:{“$first”:“$used_vehicle_spec.city.city_name”},“state_id”:{“$first”:“$used_vehicle_spec.city.state_id”},“city_count”:{“$sum”:1}}},{“$project”:{“_id”:0,“city_id”:“$_id”,“city_identifier”:“$city_identifier”,“city_name”:“$city_name”,“state_id”:“$state_id”,“city_count”:“$city_count”}},{“$sort”:{“city_count”:-1,“city_name”:1}}],“categoryByBrand”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“brand.brand_name”:{“$ne”:null}}},{“$group”:{“_id”:“$brand.brand_id”,“brand_name”:{“$first”:“$brand.brand_name”},“count”:{“$sum”:1}}},{“$project”:{“_id”:0,“brand_id”:“$_id”,“brand_name”:“$brand_name”,“brand_count”:“$count”}},{“$sort”:{“brand_count”:-1,“brand_name”:1}}],“categoryByModel”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“model.model_name”:{“$ne”:null}}},{“$group”:{“_id”:“$model.model_id”,“brand_id”:{“$first”:“$brand.brand_id”},“model_name”:{“$first”:“$model.model_name”},“count”:{“$sum”:1}}},{“$project”:{“_id”:0,“model_id”:“$_id”,“model_name”:“$model_name”,“brand_id”:“$brand_id”,“model_count”:“$count”}},{“$sort”:{“model_count”:-1,“model_name”:1}}],“categoryByBody”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“variant.body_type.shape_name”:{“$ne”:null}}},{“$group”:{“_id”:“$variant.body_type.shape_id”,“shape_name”:{“$first”:“$variant.body_type.shape_name”},“body_count”:{“$sum”:1}}},{“$project”:{“_id”:0,“body_type_id”:“$_id”,“body_type_name”:“$shape_name”,“body_count”:“$body_count”}},{“$sort”:{“body_count”:-1,“body_type_name”:1}}],“categoryByFuel”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“variant.fuel_type.fuel_name”:{“$ne”:null}}},{“$group”:{“_id”:“$variant.fuel_type.fuel_id”,“fuel_name”:{“$first”:“$variant.fuel_type.fuel_name”},“fuel_count”:{“$sum”:1}}},{“$project”:{“_id”:0,“fuel_type_id”:“$_id”,“fuel_type_name”:“$fuel_name”,“fuel_type_count”:“$fuel_count”}},{“$sort”:{“fuel_type_count”:-1,“fuel_type_name”:1}}],“categoryBytransmission”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“variant.transmission_type”:{“$ne”:“”}}},{“$group”:{“_id”:“$variant.transmission_type”,“transmission_count”:{“$sum”:1}}},{“$project”:{“_id”:0,“transmission_type”:“$_id”,“transmission_count”:“$transmission_count”}},{“$sort”:{“transmission_count”:-1,“transmission_type”:1}}],“categoryBySeller”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“certified_seller_data.seller_type”:{“$ne”:null}}},{“$group”:{“_id”:“$certified_seller_data.seller_type”,“seller_count”:{“$sum”:1}}},{“$project”:{“_id”:0,“seller_type”:“$_id”,“seller_c
ount”:“$seller_count”}},{“$sort”:{“seller_count”:-1,“seller_type”:1}}],“categoryByOwner”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“used_vehicle_spec.no_of_owners”:{“$ne”:null}}},{“$group”:{“_id”:“$used_vehicle_spec.no_of_owners”,“owner_count”:{“$sum”:1}}},{“$project”:{“_id”:0,“no_of_owners”:“$_id”,“owner_count”:“$owner_count”}},{“$sort”:{“owner_count”:-1,“no_of_owners”:1}}],“categoryByRegister”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“used_vehicle_spec.register_type”:{“$ne”:“”}}},{“$group”:{“_id”:“$used_vehicle_spec.register_type”,“register_count”:{“$sum”:1}}},{“$project”:{“_id”:0,“register_type”:“$_id”,“register_count”:“$register_count”}},{“$sort”:{“register_count”:-1,“register_type”:1}}],“categoryByCertified”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“certified”:{“$ne”:null}}},{“$group”:{“_id”:“$certified”,“certified_count”:{“$sum”:1}}},{“$project”:{“_id”:0,“is_certified”:“$_id”,“certified_count”:“$certified_count”}},{“$sort”:{“certified_count”:-1}}],“categoryByPrice”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“price”:{“$ne”:“”}}},{“$bucket”:{“groupBy”:“$price”,“boundaries”:[0,500000,1000000,2500000,5000000,7500000,10000000],“default”:10000000,“output”:{“count”:{“$sum”:1}}}},{“$addFields”:{“upperBound”:{“$arrayElemAt”:[[0,500000,1000000,2500000,5000000,7500000,10000000],{“$sum”:[{“$indexOfArray”:[[0,500000,1000000,2500000,5000000,7500000,10000000],“$_id”]},1]}]}}}],“categoryByYear”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“used_vehicle_spec.mfg_year”:{“$ne”:null}}},{“$bucket”:{“groupBy”:“$used_vehicle_spec.mfg_year”,“boundaries”:[0,2011,2014,2017,2019,2022],“default”:2023,“output”:{“count”:{“$sum”:1}}}},{“$addFields”:{“upperBound”:{“$arrayElemAt”:[[0,2011,2014,2017,2019,2022],{“$sum”:[{“$indexOfArray”:[[0,2011,2014,2017,2019,2022],“$_id”]},1]}]}}}],“categoryByKms”:[{“$match”:{“used_vehicle_spec.city.city_id”:{“$in”:[349]},“used_vehicle_spec.kms_driven”:{“$ne”:null}}},{“$bucket”:{“groupBy”:“$used_vehicle_spec.kms_driven”,“boundaries”:[0,30000,60000,100000],“default”:100000,“output”:{“count”:{“$sum”:1}}}},{“$addFields”:{“upperBound”:{“$arrayElemAt”:[[0,30000,60000,100000],{“$sum”:[{“$indexOfArray”:[[0,30000,60000,100000],“$_id”]},1]}]}}}]}}])", "username": "subramanian_k1" }, { "code": "", "text": "That’s pretty unreadable, can you paste it in as a code fragment with formatting, use the “Code” formatting on the editbox toolbar.", "username": "John_Sewell" }, { "code": "used_vehicles.aggregate([\n {\n \"$match\": {\n \"status\": \"ACTIVE\",\n \"media_exist\": 1,\n \"vehicle_type_id\": 1,\n \"listing_status\": \"APPROVED\"\n }\n },\n {\n \"$facet\": {\n \"categoryByState\": [\n { \"$match\": { \"used_vehicle_spec.state.state_name\": { \"$ne\": null } } },\n {\n \"$group\": {\n \"_id\": \"$used_vehicle_spec.state.state_id\",\n \"state_identifier\": {\n \"$first\": \"$used_vehicle_spec.state.state_identifier\"\n },\n \"state_name\": { \"$first\": \"$used_vehicle_spec.state.state_name\" },\n \"state_count\": { \"$sum\": 1 }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"state_id\": \"$_id\",\n \"state_identifier\": \"$state_identifier\",\n \"state_name\": \"$state_name\",\n \"state_count\": \"$state_count\"\n }\n },\n { \"$sort\": { \"state_count\": -1, \"state_name\": 1 } }\n ],\n \"categoryByCity\": [\n { \"$match\": { \"used_vehicle_spec.city.city_name\": { \"$ne\": \"\" } } },\n {\n \"$group\": {\n \"_id\": \"$used_vehicle_spec.city.city_id\",\n \"city_identifier\": {\n \"$first\": 
\"$used_vehicle_spec.city.city_identifier\"\n },\n \"city_name\": { \"$first\": \"$used_vehicle_spec.city.city_name\" },\n \"state_id\": { \"$first\": \"$used_vehicle_spec.city.state_id\" },\n \"city_count\": { \"$sum\": 1 }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"city_id\": \"$_id\",\n \"city_identifier\": \"$city_identifier\",\n \"city_name\": \"$city_name\",\n \"state_id\": \"$state_id\",\n \"city_count\": \"$city_count\"\n }\n },\n { \"$sort\": { \"city_count\": -1, \"city_name\": 1 } }\n ],\n \"categoryByBrand\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"brand.brand_name\": { \"$ne\": null }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$brand.brand_id\",\n \"brand_name\": { \"$first\": \"$brand.brand_name\" },\n \"count\": { \"$sum\": 1 }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"brand_id\": \"$_id\",\n \"brand_name\": \"$brand_name\",\n \"brand_count\": \"$count\"\n }\n },\n { \"$sort\": { \"brand_count\": -1, \"brand_name\": 1 } }\n ],\n \"categoryByModel\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"model.model_name\": { \"$ne\": null }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$model.model_id\",\n \"brand_id\": { \"$first\": \"$brand.brand_id\" },\n \"model_name\": { \"$first\": \"$model.model_name\" },\n \"count\": { \"$sum\": 1 }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"model_id\": \"$_id\",\n \"model_name\": \"$model_name\",\n \"brand_id\": \"$brand_id\",\n \"model_count\": \"$count\"\n }\n },\n { \"$sort\": { \"model_count\": -1, \"model_name\": 1 } }\n ],\n \"categoryByBody\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"variant.body_type.shape_name\": { \"$ne\": null }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$variant.body_type.shape_id\",\n \"shape_name\": { \"$first\": \"$variant.body_type.shape_name\" },\n \"body_count\": { \"$sum\": 1 }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"body_type_id\": \"$_id\",\n \"body_type_name\": \"$shape_name\",\n \"body_count\": \"$body_count\"\n }\n },\n { \"$sort\": { \"body_count\": -1, \"body_type_name\": 1 } }\n ],\n \"categoryByFuel\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"variant.fuel_type.fuel_name\": { \"$ne\": null }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$variant.fuel_type.fuel_id\",\n \"fuel_name\": { \"$first\": \"$variant.fuel_type.fuel_name\" },\n \"fuel_count\": { \"$sum\": 1 }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"fuel_type_id\": \"$_id\",\n \"fuel_type_name\": \"$fuel_name\",\n \"fuel_type_count\": \"$fuel_count\"\n }\n },\n { \"$sort\": { \"fuel_type_count\": -1, \"fuel_type_name\": 1 } }\n ],\n \"category transmission\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"variant.transmission_type\": { \"$ne\": \"\" }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$variant.transmission_type\",\n \"transmission_count\": { \"$sum\": 1 }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"transmission_type\": \"$_id\",\n \"transmission_count\": \"$transmission_count\"\n }\n },\n { \"$sort\": { \"transmission_count\": -1, \"transmission_type\": 1 } }\n ],\n \"categoryBySeller\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"certified_seller_data.seller_type\": { \"$ne\": null }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$certified_seller_data.seller_type\",\n \"seller_count\": { \"$sum\": 1 }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"seller_type\": 
\"$_id\",\n \"seller_count\": \"$seller_count\"\n }\n },\n { \"$sort\": { \"seller_count\": -1, \"seller_type\": 1 } }\n ],\n \"categoryByOwner\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"used_vehicle_spec.no_of_owners\": { \"$ne\": null }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$used_vehicle_spec.no_of_owners\",\n \"owner_count\": { \"$sum\": 1 }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"no_of_owners\": \"$_id\",\n \"owner_count\": \"$owner_count\"\n }\n },\n { \"$sort\": { \"owner_count\": -1, \"no_of_owners\": 1 } }\n ],\n \"categoryByRegister\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"used_vehicle_spec.register_type\": { \"$ne\": \"\" }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$used_vehicle_spec.register_type\",\n \"register_count\": { \"$sum\": 1 }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"register_type\": \"$_id\",\n \"register_count\": \"$register_count\"\n }\n },\n { \"$sort\": { \"register_count\": -1, \"register_type\": 1 } }\n ],\n \"categoryByCertified\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"certified\": { \"$ne\": null }\n }\n },\n { \"$group\": { \"_id\": \"$certified\", \"certified_count\": { \"$sum\": 1 } } },\n {\n \"$project\": {\n \"_id\": 0,\n \"is_certified\": \"$_id\",\n \"certified_count\": \"$certified_count\"\n }\n },\n { \"$sort\": { \"certified_count\": -1 } }\n ],\n \"categoryByPrice\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"price\": { \"$ne\": \"\" }\n }\n },\n {\n \"$bucket\": {\n \"groupBy\": \"$price\",\n \"boundaries\": [\n 0, 500000, 1000000, 2500000, 5000000, 7500000, 10000000\n ],\n \"default\": 10000000,\n \"output\": { \"count\": { \"$sum\": 1 } }\n }\n },\n {\n \"$addFields\": {\n \"upperBound\": {\n \"$arrayElemAt\": [\n [0, 500000, 1000000, 2500000, 5000000, 7500000, 10000000],\n {\n \"$sum\": [\n {\n \"$indexOfArray\": [\n [\n 0, 500000, 1000000, 2500000, 5000000, 7500000,\n 10000000\n ],\n \"$_id\"\n ]\n },\n 1\n ]\n }\n ]\n }\n }\n }\n ],\n \"categoryByYear\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"used_vehicle_spec.mfg_year\": { \"$ne\": null }\n }\n },\n {\n \"$bucket\": {\n \"groupBy\": \"$used_vehicle_spec.mfg_year\",\n \"boundaries\": [0, 2011, 2014, 2017, 2019, 2022],\n \"default\": 2023,\n \"output\": { \"count\": { \"$sum\": 1 } }\n }\n },\n {\n \"$addFields\": {\n \"upperBound\": {\n \"$arrayElemAt\": [\n [0, 2011, 2014, 2017, 2019, 2022],\n {\n \"$sum\": [\n {\n \"$indexOfArray\": [\n [0, 2011, 2014, 2017, 2019, 2022],\n \"$_id\"\n ]\n },\n 1\n ]\n }\n ]\n }\n }\n }\n ],\n \"categoryByKms\": [\n {\n \"$match\": {\n \"used_vehicle_spec.city.city_id\": { \"$in\": [349] },\n \"used_vehicle_spec.kms_driven\": { \"$ne\": null }\n }\n },\n {\n \"$bucket\": {\n \"groupBy\": \"$used_vehicle_spec.kms_driven\",\n \"boundaries\": [0, 30000, 60000, 100000],\n \"default\": 100000,\n \"output\": { \"count\": { \"$sum\": 1 } }\n }\n },\n {\n \"$addFields\": {\n \"upperBound\": {\n \"$arrayElemAt\": [\n [0, 30000, 60000, 100000],\n {\n \"$sum\": [\n { \"$indexOfArray\": [[0, 30000, 60000, 100000], \"$_id\"] },\n 1\n ]\n }\n ]\n }\n }\n }\n ]\n }\n }\n])\n", "text": "HI john\nThanks for the update. Here you can find the formatted query code. Please help us to solve this problem.", "username": "subramanian_k1" }, { "code": "", "text": "Thats a monster of a query, what have you done so far to analyse performance? 
Have you looked at an explain of the query?\nHow much data is in your collections and what indexes do you have, what does a document look like?", "username": "John_Sewell" }, { "code": "", "text": "I should probably add, what are you trying to do and how often does this run, is it a key query that’s run? How often does the data update that would affect the output of it?", "username": "John_Sewell" }, { "code": "", "text": "Hi John.Yup.its primary query. the application related used vehicle ecommerce.So we have filter with different components vehicles and need to show the count of vehicles by default.If the user changes the filter combination, need to the count depends on that.So we used facet.Moreover, we have different combinations in this filter.\nSo possible combination also high. We can able to create 64 index …How we can handle these scenarios to handle the compound index also.Thanks in advance.", "username": "subramanian_k1" }, { "code": "", "text": "If this is a prime query and it takes this kind of query to get the data that’s used often in the right format then it kind of indicates that you may want to look at the schema.We still don’t know what a document looks like, so I’ll not comment further on that but lots of the queries above look like they are filtering on the same criteria WITHIN the facet, this means the server has to repeat work for every facet, you also need to watch out for index use within a facet. Example is the filtering on city_id, perhaps split those up into a new pipeline so that you can share the filtering on all the items that use that criteria.I’d start with one facet and try and look at how that’s performing and then work out from that. Don’t get hung up on trying to do everything in one pipeline stage if it’ll cripple your performance.", "username": "John_Sewell" }, { "code": "", "text": "Hi John.Thanks for the update.We tried with each facet in different aggregate pipeline and added index for match condition. Now that latency got reduced.But still I have an doubt on index creation.If we have multiple combination for filter,how do we create index for all combinationsEx. Application have 15 type filter in page. So user can do the different combinations of filter in application.How do we handle all the scenarios to create index", "username": "subramanian_k1" }, { "code": "", "text": "That’s a tricky one, perhaps someone who’s had to do this can comment, but as you say if you have lots of possible filters then indexes could quickly spiral out of control.\nIf each index is sufficiently limiting, then you could just have one on each field and rely on a non-indexed filter to kick in after thatIn our application, we have lots of fields the user can filter on as well but the users mainly use a field that is sufficiently unique to reduce the amount of data to filter without the index to a handful, so it works out well.", "username": "John_Sewell" }, { "code": "", "text": "If we have multiple combination for filter,how do we create index for all combinationsLooks like the Attribute Pattern might be the way to go as 1 index might be able to accommodate the multiple combination.", "username": "steevej" } ]
MongoDB Aggregate / $facet query is too slow. It's taking 20 sec
2023-09-13T12:39:17.154Z
MongoDB Aggregate / $facet query is too slow. It's taking 20 sec
523
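A sketch of the restructuring suggested in the replies above: run the city-scoped facets as their own pipeline so the shared city filter is applied once, before $facet, where an index can still be used (sub-pipelines inside $facet cannot use indexes). Field names are taken from the posted pipeline; only two facets are shown, and the index definition is an assumption to be validated with explain().

```js
// Assumed compound index covering the top-level filter.
db.used_vehicles.createIndex({
  status: 1,
  vehicle_type_id: 1,
  listing_status: 1,
  media_exist: 1,
  "used_vehicle_spec.city.city_id": 1
});

// City-scoped facets in their own aggregation, so the repeated
// city_id condition is evaluated once by the indexed $match.
db.used_vehicles.aggregate([
  { $match: {
      status: "ACTIVE", media_exist: 1, vehicle_type_id: 1,
      listing_status: "APPROVED",
      "used_vehicle_spec.city.city_id": { $in: [349] }
  } },
  { $facet: {
      categoryByBrand: [
        { $match: { "brand.brand_name": { $ne: null } } },
        { $group: { _id: "$brand.brand_id",
                    brand_name: { $first: "$brand.brand_name" },
                    brand_count: { $sum: 1 } } },
        { $sort: { brand_count: -1, brand_name: 1 } }
      ],
      categoryByFuel: [
        { $match: { "variant.fuel_type.fuel_name": { $ne: null } } },
        { $group: { _id: "$variant.fuel_type.fuel_id",
                    fuel_type_name: { $first: "$variant.fuel_type.fuel_name" },
                    fuel_type_count: { $sum: 1 } } },
        { $sort: { fuel_type_count: -1, fuel_type_name: 1 } }
      ]
      // ...remaining city-scoped facets follow the same shape.
  } }
]);
```

The state and city facets that are not filtered by city_id would stay in a separate aggregation, as discussed above.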
null
[ "data-modeling", "java" ]
[ { "code": "", "text": "I am trying to read documents from a mongodb collection that contains a nested object. This nested object contains other objects, let’s call them cars.In my java code, those cars are represented by an abstract class “Car”, with various different classes extending the car class. Furthermore the JSON object containing the Cars is represented by a Map<String, Car>.If I read the object, a BeanInstantiationException is thrown, because Car is abstract. If i declare Car not abstract, the documents get read, but the Map contains a lot of cars with no specific values the inheriting classes would have (the Car class itself has no fields). I know that abstract classes can’t be instantiated, but every JSON object that gets read is of a type of an inheriting class, that is persisted in the attribute “_class” in the document in my DB.mongoDB Version: 4.2.16spring-data-mongodb Version: 3.2.3", "username": "gkoef594" }, { "code": "", "text": "Were you able to find any solution for this @gkoef594 ???", "username": "Shwetabh_Shrivastava" } ]
Instantiate abstract class reading mongoDB
2021-12-17T14:30:06.201Z
Instantiate abstract class reading mongoDB
3,371
null
[]
[ { "code": "{\n \"banana\": [\n {\n \"name\": \"goodBanana\",\n \"ripe\": true\n },\n {\n \"name\": \"badBanana\",\n\"ripe\": false\n }\n ]\n}\n findOne({\"banana.name\":\"goodBanana\"})\n\nreturns the entire document... \n{\n _id: ObjectId(\"6504850c64b81fce975f7e1e\"),\n banana: [\n {\n name: 'goodBanana',\n ripe: true\n },\n {\n name: 'badBanana',\n ripe: false\n }\n ]\n}\n {\n name: 'goodBanana',\n ripe: true\n }\n", "text": "This might be the silliest questions of all, but I can’t get it to work…I am testing with the following to get the “name”:“goodBanana” object . I have tried a few filter options but looks like I am missing something fundamental…All I wanted isTried this with $elemMatch as well and same result.Any help is much appreciated.", "username": "illay_senner" }, { "code": "[\n {\n $match:\n /**\n * query: The query in MQL.\n */\n {\n \"banana.name\": \"goodBanana\",\n },\n },\n {\n $limit:\n /**\n * Provide the number of documents to limit.\n */\n 1,\n },\n {\n $unwind:\n /**\n * path: Path to the array field.\n * includeArrayIndex: Optional name for index.\n * preserveNullAndEmptyArrays: Optional\n * toggle to unwind null and empty values.\n */\n {\n path: \"$banana\",\n },\n },\n {\n $match:\n /**\n * query: The query in MQL.\n */\n {\n \"banana.name\": \"goodBanana\",\n },\n },\n]\n{\n \"_id\": {\n \"$oid\": \"65049795bbe0655ee98b3808\"\n },\n \"banana\": {\n \"name\": \"goodBanana\",\n \"ripe\": true\n }\n}\n", "text": "So when you do a findOne, it returns the entire document, to get an element from an array you need to do the $unwind aggregation state. I added the document and did the aggregation with a match and limit to 1. Then it unwinds it into two documents, then matches those two documents for ‘goodBanana’Output:Aggregation stages in Atlas\n\nimage795×263 17.2 KB\n\n\nimage768×268 16.6 KB\n\n\nimage1109×283 25.5 KB\n\n\nimage931×273 14.9 KB\n", "username": "tapiocaPENGUIN" }, { "code": "", "text": "You can add a $project to remove the _id\nimage824×299 12.6 KB\n", "username": "tapiocaPENGUIN" }, { "code": "", "text": "thank you so much @tapiocaPENGUIN for taking the time to show how to achieve my goal. You have thought me few things there for future…Legend!", "username": "illay_senner" }, { "code": "", "text": "Alternatives that are good to know:Rather than $unwind and $match you may $filter the array in a $project.You may also use $elemMatch in a projection in your findOne.", "username": "steevej" }, { "code": "", "text": "Thanks @steevej .I found out about using $filtering the array in a $project. Good that you mentioned…By the way, just started using MongoDB, so I am a newbie…Tried $elemMatch for few hours today. it only returns the first array item that matches the condition. This is the problem… If you were to add to more bananas with the same values, $elemMatch still returns a single item. Do you know a way to do it with $elemMatch?", "username": "illay_senner" }, { "code": "", "text": "$elemMatch still returns a single itemThis is the documented behaviour:", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filter on findOne
2023-09-15T16:55:58.008Z
Filter on findOne
376
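The two alternatives steevej mentions, sketched in mongosh against a hypothetical fruit collection holding the banana array shown above: an $elemMatch projection returns only the first matching element, while $filter in an aggregation keeps every element that matches.

```js
// Projection with $elemMatch: returns only the FIRST matching
// array element (the documented behaviour linked above).
db.fruit.findOne(
  { "banana.name": "goodBanana" },
  { _id: 0, banana: { $elemMatch: { name: "goodBanana" } } }
);

// $filter inside a $project: returns ALL matching elements, so
// several "goodBanana" entries would all be kept.
db.fruit.aggregate([
  { $match: { "banana.name": "goodBanana" } },
  { $project: {
      _id: 0,
      banana: {
        $filter: {
          input: "$banana",
          as: "b",
          cond: { $eq: ["$$b.name", "goodBanana"] }
        }
      }
  } }
]);
```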
https://www.mongodb.com/…e9c3d199bc26.png
[]
[ { "code": "", "text": "I am having issues sending http request to my local machine when trying to use the trigger + function service in mongo atlas, I have tried using both fetch api and axios but still getting some blocker, here’s the last one I tried writing but still not working:\nbug798×441 16.5 KB\nmaybe there’s something I am missing", "username": "Abolaji_Disu" }, { "code": "", "text": "Hey @Abolaji_Disu,Welcome to the MongoDB Community!I am having issues sending http request to my local machine when trying to use the trigger + function service in mongo atlas, I have tried using both fetch api and axios but still getting some blocker, here’s the last one I tried writing but still not working:From my understanding, it seems that sending a request to the localhost API is not feasible, as it is not accessible in the cloud and cannot be accessed by MongoDB Atlas Functions. I hope this clarifies your doubts.However, to further assist you, may I ask what specifically you are trying to accomplish?Best regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Yeah @Kushagra_Kesav thanks, I finally figured it won’t work, I was basically trying to test mongo trigger functions to a local server however it didn’t work out. I had to deploy my backend code on a remote server and provide the endpoint then it worked just fine,Thanks", "username": "Abolaji_Disu" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Having issues with mongodb trigger over sending http request
2023-09-14T01:46:39.305Z
Having issues with mongodb trigger over sending http request
416
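A small sketch of the working setup described above — an Atlas Function (for example, wired to a database trigger) calling out through the built-in context.http client. The URL is a placeholder for a publicly reachable endpoint, such as the deployed backend or a tunnel pointing at the local machine; localhost itself is never reachable from Atlas.

```js
exports = async function (changeEvent) {
  // "https://example.com/webhook" is a placeholder -- replace it with
  // the public endpoint of the deployed backend.
  const response = await context.http.post({
    url: "https://example.com/webhook",
    headers: { "Content-Type": ["application/json"] },
    body: {
      operationType: changeEvent.operationType,
      documentKey: changeEvent.documentKey
    },
    encodeBodyAsJSON: true
  });
  // The response body, if any, can be read with response.body.text().
  return response;
};
```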
null
[ "flutter" ]
[ { "code": "_user_articles : {\n objectID:,\n createdby: user_id,\n _article. : '_articles ' ( the global one )' (on atlas the object id of that _article is stored)\n _vote : ,\n flag: boolean , \n}\n\"document_filters\": {\n \"write\": {\n \"createdby\": \"%%user.id\"\n },\n \"read\": {\n \"createdby\": \"%%user.id\"\n }\n }\n", "text": "Lets get started with bit of a context.And it is working fine as expected. I can access those _user data whose who is currently logged in.Lets gets to an actual IssueI have two collection.‘_articles’ and ‘_user_articles’ .‘_articles’ contains the global data, like ’ title, description, content .‘_user_articles’ holds user interaction on above _article, such as ’ likes, votes , notes etc’_user_articles is set up as :The document rule for _user_articles is as :such that the user who created the _user_article can only access that _user_Article.For now I am maintaining '_user_list field for ‘_articles’ and defining the sync rules such that if _user_list on ‘_Articles’ contains logged in user id, then that _articles gets accessible on local.I dont know if this is the correct practice.Another thingThe _user_article has flag value boolean, and _article filed which holds’_articles’ . If the 'flag field is true in ‘_user_article’ I want '_article ’ of same ‘_article’ to be accessible locally. Is there any better way to achieve this?", "username": "Shreedhar_Pandey" }, { "code": "_user_articlesuser.idrealm.all<User_articles>", "text": "Hi @Shreedhar_Pandey!\nAs I see you are only missing the subscriptions. Use Realm Flutter SDK and open a Synced Realm. Then subscribe for you data that you need to have locally. Once you subscribe, the data will be downloaded and they will be synced in future if there are new changes. Define your filter criteria in the subscriptions. If you have added already permissions rules on the server for _user_articles by user.id, you don’t need to have a filter in the subscription by user articles. Just add one subscription for realm.all<User_articles>. Then add second subscription for the articles filtered according to your logic.", "username": "Desislava_St_Stefanova" }, { "code": "mutableSubscriptions \n..add(\n realmHelper.query<Articles>(''' user_lists == $loggedInuserID '''),\n)\n", "text": "@Desislava_St_Stefanova\nThanks for Responding.\nI have already opened a synced realm, the data is syncing fine.I am trying to find a way to write subscription for ‘_articles’ based on the value of field in ‘_user_articles’ .Each ‘_user_articles’ has field ‘article’ which references to a ‘_articles’ doc.My current subscription query looks like.So, if the user_lists field on ‘Articles’ contains my userID, it is synced to my local realm .Can I do something like, query to my '_user_articles ’ and each ‘_user_articles’ has ‘_articles’ in them, and a boolean flag value. List out ‘_articles’ form ‘_user_articles’ whose flag value is true and sync those ‘_articles’ only ?", "username": "Shreedhar_Pandey" } ]
I am a bit confused about Document Permissions and Subscription rules
2023-09-19T05:39:20.239Z
I am a bit confused about Document Permissions and Subscription rules
323
null
[]
[ { "code": "", "text": "Dear Team,We are seeing an issue while resynchronizing one of our failure node in the shared cluster. Our Synchronization is getting failed with an error message every time. We could not able to figure out what could be the exact issue. Initially we through the issue with the open file limit on OS. Currently, the server is configured with the value 1048576 but still we are facing the same error.MongoDBVersion: 4.2.8\nDBSize: 370GB\nError:\nFailed to commit collection indexes dbname.tblname: Location16814: error opening file “/shardserver/data/_tmp/extsort-index.218”: errno:24 Too many open files\n2023-06-14T00:08:06.979+0000 E INITSYNC [replication-7] collection clone for ‘dbname.tblname’ failed due to Location16814: Error cloning collection ‘dbname.tblname’ :: caused by :: error opening file “/shardserver/data/_tmp/extsort-index.218”: errno:24 Too many open filesNote: The table size is huge and mentioned below.DocumentCount:367716433\nIndexesCount:12\nTotalIndexSize:43.8 GB\nTotalTableSize:1.3TB\nTableStorageSize:194.6 GBCan anyone provide your valuable feedback about the issue we are facing?Best Regards,\nAshwin", "username": "ashwin_reddy1" }, { "code": "ulimit -a", "text": "Hi @ashwin_reddy1 and welcome to MongoDB community forums!!Based on the details shared, could you help me understand a few more details like:Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Dear Aasawari,Thanks for your response.If you need any more information, please let me know.Best Regards,\nAshwin", "username": "ashwin_reddy1" }, { "code": "systemctl", "text": "Hi @ashwin_reddy1\nThank you for the information shared.As per the response, it seems the value for the limit has been set. Just to make sure and also mentioned in the MongoDB documentation, could you confirm if the system was started using systemctl which uses the ulimit setting.\nPlease refer to the Linux ulimit documentation for further reference.Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Dear Aasawari,Yes, the service is configured with systemctl.Best Regards,\nAshwin", "username": "ashwin_reddy1" }, { "code": "", "text": "Dear Aasawari,Do you have any update about the above issue?Best Regards,\nAshwin", "username": "ashwin_reddy1" } ]
MongoDB Shard Node Synchronization Failure
2023-07-20T08:22:45.167Z
MongoDB Shard Node Synchronization Failure
701
null
[ "sharding" ]
[ { "code": "", "text": "Hello! I have a cluster with the following characteristics:\nHardware: AMD Ryzen 9 7950X3D 16-Core Processor with 128GB of RAM.\ncluster configuration: mongodb community edition 4.4.24,\nthe number of shards is 7, the number of replicas in each shard is 3, the Wired Tiger cache size is 64GB.\nproblem: During operation, RAM consumption increases randomly, which leads to OOM_kill being triggered and the primary replica of the shard stops. The same thing happens with the secondary replica that won the election. The problematic primary shard contains a database containing only non-sharded collections, and its size is many times smaller than the cache, namely about 5 GB. What can cause such an increase in mapi consumption and how to get rid of it?", "username": "u202fr" }, { "code": "", "text": "Do you see anything interesting from mongodb log messages?", "username": "Kobe_W" }, { "code": "", "text": "Nothing interesting. There were no mistakes. Just a few slow query.", "username": "u202fr" } ]
Mongod killed by oom_killer
2023-09-18T12:24:18.828Z
Mongod killed by oom_killer
326
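A mongosh sketch for narrowing down where the memory goes on the node that gets OOM-killed. It only reads serverStatus() counters; the interpretation is left open, but comparing the configured WiredTiger cache with resident memory — for every mongod that shares the host — can show whether a 64 GB cache per member fits inside 128 GB of RAM once all shard members and their non-cache overhead are added up.

```js
// Run against the shard primary (or secondary) that is being killed.
const ss = db.serverStatus();

// WiredTiger cache: configured ceiling vs. what is actually held.
const cache = ss.wiredTiger.cache;
print("cache max (GB):  ", cache["maximum bytes configured"] / 1024 ** 3);
print("cache used (GB): ", cache["bytes currently in the cache"] / 1024 ** 3);

// Total resident memory of this mongod process, in MB. Memory outside
// the cache (connections, in-memory sorts, aggregations) also counts
// toward what the OOM killer sees.
print("resident (MB):   ", ss.mem.resident);

// Open connections each add their own memory cost.
print("connections:     ", ss.connections.current);
```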
null
[ "atlas", "devops", "app-services-cli" ]
[ { "code": "DESCRIBE_APP_SOURCE=$(realm-cli app describe)\nDESCRIBE_APP_SOURCE=$(echo \"app1\" | realm-cli app describe)\nDESCRIBE_APP_SOURCE=$(realm-cli app describe)\nAPP_ID_SOURCE=$(echo ${DESCRIBE_APP_SOURCE} | sed 's/^.*client_app_id\": \"\\([^\"]*\\).*/\\1/')\nREALM_APPLICATION_ID_SOURCE=${APP_ID_SOURCE}\n? Select App [Use arrows to move, type to filter]\n> app1-dsfas (9999999999999999999999)\n app2-asdfs (999999999999999999998)\n app3-dfasd (9999999999999999999997)\n", "text": "Well, I answered my own question using some hints in the code from the docs MongoDB provided. I’m just posting here in case anyone else runs into this issue in the future.The hint was the example code had an echo command in it, which means we can send some text into the command.So when you do “realm-cli describe”, and you’re being asked to manually/interactively select the app from the list, it will take “type to filter”, not just arrows. So if you know the name of your app, or even the first part of it, you can echo in the app name and it will autoselect it for you !In the example below, you’d just change:toVery happy it was this easy.I’m using the following code to determine the appid in MongoDB Atlas. This works great if there’s only 1 app, but if more than 1, it’s asking to select the app manually, which cannot be done through realm-cli scripting.Is there any way around this or some way I can script auto-selecting the app ?", "username": "stack_engineering" }, { "code": "DESCRIBE_APP_SOURCE=$(realm-cli app describe)\nDESCRIBE_APP_SOURCE=$(echo \"app1\" | realm-cli app describe)\n", "text": "Just reposting the solution here so it can be marked as Solution and closed.Well, I answered my own question using some hints in the code from the docs MongoDB provided. I’m just posting here in case anyone else runs into this issue in the future.The hint was the example code had an echo command in it, which means we can send some text into the command.So when you do “realm-cli describe”, and you’re being asked to manually/interactively select the app from the list, it will take “type to filter”, not just arrows. So if you know the name of your app, or even the first part of it, you can echo in the app name and it will autoselect it for you !In the example below, you’d just change:toVery happy it was this easy.", "username": "stack_engineering" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can I autoselect an app in realm-cli describe?
2023-09-14T06:54:22.470Z
Can I autoselect an app in realm-cli describe?
400
null
[ "python", "spark-connector" ]
[ { "code": "", "text": "I’m having trouble trying to figure out how to write a value to MongoDb of type ObjectId using the Spark connector 10.1 using Python (Pyspark).Although I haven’t found much about it online, I have tried the solution in the below link which states to write a StructType column containing a string called “oid”, but this does not work. It instead ends up creating an Object with a “oid” attribute instead of an ObjectId.\npython - Write PySpark dataframe to MongoDB inserting field as ObjectId - Stack OverflowI’ve also tried enabling the convertJson write option mentioned in the below reference, but that doesn’t seem to make any difference either.\nWrite Configuration Options — MongoDB Spark Connector", "username": "Scott_S" }, { "code": "oid$oid$oid", "text": "I’m also facing the same issue. I think if we could somehow write oid to $oid then it would work, although directly naming the column as $oid is giving error.", "username": "Dhruv_S" }, { "code": "oidObjectId", "text": "Yeah this is a problem. The document stored in the database has an object with oid attribute inside of it instead of creating an object of ObjectId.", "username": "temp_hd" }, { "code": "", "text": "did anyone solve this issue ?", "username": "Stephan_Levi" }, { "code": ".config(\"spark.mongodb.write.convertJson\",\"object_Or_Array_Only\")state = \"{ '$oid' : '6240b43c279082371d0e835f' }\"", "text": "Solution:\nSet .config(\"spark.mongodb.write.convertJson\",\"object_Or_Array_Only\") in your SparkSession.\nObjectId fields should receive a value in the following format: state = \"{ '$oid' : '6240b43c279082371d0e835f' }\"MongoSpark connector requires curly brackets to parse the string to bson, while Mongo Java driver parses “$oid” as ObjectId.", "username": "Lior_Harel" }, { "code": "", "text": "I have tried above options:I am using MongoDB Spark Connector 10.0.4 driverReally appreciate if anyone can share exact steps with sample code that works. This is important requirement for us - to at least WRITE back as ObjectId - I can live with READ coming on existing ObjectId as string worst case", "username": "JTBS" }, { "code": ".config(\"spark.mongodb.write.convertJson\",\"object_Or_Array_Only\")", "text": "@JTBS .config(\"spark.mongodb.write.convertJson\",\"object_Or_Array_Only\") is only available in MongoSpark connector V10.2.0. Try updating the connector and let us know.", "username": "Lior_Harel" }, { "code": "", "text": "Yes - I noticed documentation change and switched MongoDB Spark Connector: 10.2.0\nNow it works: If value of column is in this format: ‘{ “$oid” : “xxxxxxxxx”}’But I have two problems:Finally there is also mismatch in Documentation of driver: This post says use “object_Or_Array_Only” but documentation uses “objectOrArrayOnly”In my case I was ONLY able to get above working with “any” option - but when I use “object_Or_Array_Only” - it gave error on converting my DF - due to other data in DF I thinkQuestion\nHow I can convert _id (String) to have value with $oid in the format ‘{“$oid”: “_id value” }’ ?This seems so simple issue - and having latest MongoDB / Spark built for purpose - and having to go through all these workarounds - bit surprising. 
But I need good sample if you can share for above question - greatly appreciate - and can live with whatever workarounds", "username": "JTBS" }, { "code": "", "text": "While I wait to hear any better option - for those having same issue:Below is what worked for me:Open question:I hope either future driver handles this better without breaking existing support OR something I am missing here", "username": "JTBS" }, { "code": "‘{ “$oid” : “xxxxxxxxx”}’object_Or_Array_OnlyobjectOrArrayOnly_id = xxxxx\nformatted_id = \"{ '$oid' : '\" + _id + \"' }\"\nfrom pyspark.sql import SparkSession\n\n_id = \"{ '$oid' : '650898287d503960a631ccac' }\"\n\nspark = (\n SparkSession\n .builder\n .config(\"spark.mongodb.write.connection.uri\",\"your_uri\")\n .config(\"spark.mongodb.write.convertJson\",\"object_Or_Array_Only\")\n .config(\"spark.jars.packages\",\"org.mongodb.spark:mongo-spark-connector_2.12:10.2.0\")\n .getOrCreate())\n\nexpr = f'\"{_id}\" as _id'\n\nquery = f'select {expr}'\n\ndf = spark.sql(query)\n\ndf.write.format(\"mongodb\").mode(\"append\").save()\nconvertJson : \"any\"", "text": "@JTBS, I’ll try to breakdown everything:", "username": "Lior_Harel" }, { "code": "", "text": "Thank you very much for concat tip - Yes no need for struct/json/replace… below worked:However - while this works consistently for me with “any” option ONLY. I don’t want to use any like you suggested. But strangely when I use “object_Or_Array_Only” consistently I get below error: “Cannot cast into a BsonValue. StringType has no matching BsonValue. Error: String index out of range: 0”This error comes out even if I remove above string data - so not sure exactly what column in my DF has issue - but using “any” has no issues.I understand using “any” will get into unpredictable issues that I don’t like either. But it does work for ObjectId.I wish there is some option created just to handle ObjectId which is more common scenario.\nI can’t afford any data-type changes but not sure I have clear path.Thanks", "username": "JTBS" }, { "code": "str==''str==' 'str==null/None", "text": "“Cannot cast into a BsonValue. StringType has no matching BsonValue. Error: String index out of range: 0”Is caused since your dataset contains empty strings and the connector does not know how to handle it. replace all occurrences of str=='' to str==' ' or str==null/None.", "username": "Lior_Harel" }, { "code": "", "text": "Thank you very much. I really appreciate you getting back.\nIn our case I have built a dynamic PySpark engine that works on any incoming data and masks only requested data - saves it back to DB.So I don’t know income schema - other than - just set of fields that will be masked/manipulated.We have contract to retain all other data/data-types as-is once masking is done on requested masked fields - with no changes to rest of schema/data.While PySpark approach proving very performance friendly for large data sets, I have to think through on how to probably detect string type columns and add this expressions - in my dynamic engine.All this I have to do - just to get _id (String) to _id(ObjectId) I only hope authors of MongoDB Spark Connectors will improve this with may be new option - change only ObjectId scenarios etc. But can’t thank you enough for coming forward and helping me on this.You can close this case.", "username": "JTBS" } ]
How to write ObjectId value using Spark connector 10.1 using Pyspark?
2023-05-20T00:20:03.924Z
How to write ObjectId value using Spark connector 10.1 using Pyspark?
1,500
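The thread above resolves to a two-part recipe: enable the convertJson write setting and format the _id column as an Extended JSON $oid string. Below is a condensed PySpark sketch of that workaround; the connection URI, database and collection are placeholders, and it assumes MongoDB Spark Connector 10.2.0 or later.

```python
# Sketch of the ObjectId write workaround discussed above (Spark Connector 10.2.0+).
# The connection URI is a placeholder; convertJson="any" mirrors what worked in the thread,
# while "object_Or_Array_Only" is stricter but rejects empty strings elsewhere in the frame.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, concat, lit

spark = (
    SparkSession.builder
    .config("spark.mongodb.write.connection.uri",
            "mongodb+srv://user:pass@cluster.example.mongodb.net/test.coll")
    .config("spark.mongodb.write.convertJson", "any")
    .config("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.12:10.2.0")
    .getOrCreate()
)

df = spark.createDataFrame([("650898287d503960a631ccac", "Alice")], ["_id", "name"])

# Wrap the hex string so the connector parses it as an ObjectId rather than a plain string.
df = df.withColumn("_id", concat(lit("{ '$oid' : '"), col("_id"), lit("' }")))

df.write.format("mongodb").mode("append").save()
```

Any other ObjectId-typed column can be wrapped with the same concat expression before the write.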
https://www.mongodb.com/…_2_1023x496.jpeg
[ "queries" ]
[ { "code": "", "text": "In the labs teaching queries using comparison operators pasting in the solved code does not return documents that pass the comparison. As an example I’ve attached a sample input asking for items with quantities greater than ten and a sample of its output. Here is a link to the specific lab. Please let me know why this is occurring.Here is an ugly snippet of a sample input and output because this godforsaken forum only allows for ONE media upload for new users.\n\ncombine_images1341×650 84.2 KB\n", "username": "Phoenix_Stout" }, { "code": "items", "text": "Here is a link to the specific labHi Phoenix,Thanks for reaching out!The query returns the documents that has at least 1 item with a quantity of 10 or above. items being a list, if one of the entries matches the criteria, the document will be returned.The document in the screenshot matches the criteria.", "username": "Davenson_Lombard" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Find by comparison operators lab not returning expected values
2023-09-18T17:51:25.410Z
Find by comparison operators lab not returning expected values
310
null
[ "python", "connecting", "atlas-cluster", "atlas" ]
[ { "code": "pymongo.errors.ServerSelectionTimeoutError: ac-0bb81fi-shard-00-01.t4cncq2.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed:\ndnspython ==2.4.2\npymongo==4.5.0\ncertifi==2023.7.22\nbeanie==1.22.5\nmotor ==3.3.1\n", "text": "I’ve been trying to connect my fast API app to the MongoDB atlas but I keep getting this error:By the way, I’m using macOS and I have seen similar questions/solutions but they all seem to be a halfway solution. I followed a solution from this post and went ahead to download a certificate from lets-encrypt but nothing happened.I have these libraries installed:MacOS and xcode are all updated.", "username": "Albert_Frimpong" }, { "code": "pymongo.errors.ServerSelectionTimeoutError:...[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed:\n", "text": "Hey @Albert_Frimpong,Welcome to the MongoDB Community!I would recommend taking a look at our Which certificate authority signs MongoDB Atlas cluster TLS certificates? documentation.Also, refer to the Troubleshooting TLS Errors documentation and Common SSL Issues on Python and How to Fix it.In case the issue persists, please feel free to reach out.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
ServerSelectionTimeoutError Error with FastAPI and MongoDB
2023-09-15T19:23:40.413Z
ServerSelectionTimeoutError Error with FastAPI and MongoDB
406
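For the CERTIFICATE_VERIFY_FAILED error above, the usual fix on macOS is to hand the driver certifi's CA bundle explicitly. A minimal, hedged sketch with a placeholder connection string follows; the same tlsCAFile argument works when constructing a Motor client for FastAPI/Beanie.

```python
# Minimal sketch: point PyMongo at certifi's CA bundle instead of the system trust store.
import certifi
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://user:pass@cluster0.example.mongodb.net/?retryWrites=true&w=majority",  # placeholder
    tlsCAFile=certifi.where(),
)
print(client.admin.command("ping"))  # raises if TLS or server selection still fails
```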
null
[ "aggregation", "indexes", "atlas-cluster", "atlas-search" ]
[ { "code": "db.getCollection('songs').explain('executionStats').aggregate(\n [ \n {\n \"$search\":{\n \"index\": 'lyricsSearch',\n \"phrase\":{\n \"query\": \"love\",\n \"path\" :'lyrics.textLine',\n },\n \"highlight\":{ \n \"path\":'lyrics.textLine'\n }\n },\n },\n {\n $match:{\n languages: {$in:['en', 'sv']} \n }\n },\n {\n $sort : {\n score:-1,\n spotifyPopularity:-1,\n }\n },\n {\n $limit:200\n },\n { \n $project: {\n _id: 1,\n title: 1,\n artist:1,\n lyrics: 1,\n posterImage: 1,\n video:1,\n youtubeLink:1,\n spotifyId:1,\n spotifyPopularity:1,\n score: { $meta: 'searchScore' },\n highlight: { $meta: 'searchHighlights' }\n }\n },\n \n]\n)\n{\n \"explainVersion\": \"1\",\n \"stages\": [\n {\n \"$_internalSearchMongotRemote\": {\n \"mongotQuery\": {\n \"index\": \"lyricsSearch\",\n \"phrase\": {\n \"query\": \"love\",\n \"path\": \"lyrics.textLine\"\n },\n \"highlight\": {\n \"path\": \"lyrics.textLine\"\n }\n },\n \"explain\": {\n \"type\": \"TermQuery\",\n \"args\": {\n \"path\": \"lyrics.textLine\",\n \"value\": \"love\"\n },\n \"stats\": {\n \"context\": {\n \"nanosElapsed\": 7861461,\n \"invocationCounts\": {\n \"createWeight\": 1,\n \"createScorer\": 8\n }\n },\n \"match\": {\n \"nanosElapsed\": 1120063,\n \"invocationCounts\": {\n \"nextDoc\": 11513\n }\n },\n \"score\": {\n \"nanosElapsed\": 3423048,\n \"invocationCounts\": {\n \"setMinCompetitiveScore\": 220,\n \"score\": 11509\n }\n }\n }\n }\n },\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 18\n },\n {\n \"$_internalSearchIdLookup\": {},\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 18\n },\n {\n \"$match\": {\n \"languages\": {\n \"$in\": [\n \"en\",\n \"sv\"\n ]\n }\n },\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 18\n },\n {\n \"$sort\": {\n \"sortKey\": {\n \"score\": -1,\n \"spotifyPopularity\": -1\n },\n \"limit\": 200\n },\n \"totalDataSizeSortedBytesEstimate\": 0,\n \"usedDisk\": false,\n \"spills\": 0,\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 18\n },\n {\n \"$project\": {\n \"_id\": true,\n \"artist\": true,\n \"youtubeLink\": true,\n \"lyrics\": true,\n \"spotifyId\": true,\n \"video\": true,\n \"title\": true,\n \"posterImage\": true,\n \"spotifyPopularity\": true,\n \"score\": {\n \"$meta\": \"searchScore\"\n },\n \"highlight\": {\n \"$meta\": \"searchHighlights\"\n }\n },\n \"nReturned\": 0,\n \"executionTimeMillisEstimate\": 18\n }\n ],\n \"serverInfo\": {\n \"host\": \"atlas-yeynqc-shard-00-01.ywrjg.mongodb.net\",\n \"port\": 27017,\n \"version\": \"6.0.10\",\n \"gitVersion\": \"8e4b5670df9b9fe814e57cb5f3f8ee9407237b5a\"\n },\n \"serverParameters\": {\n \"internalQueryFacetBufferSizeBytes\": 104857600,\n \"internalQueryFacetMaxOutputDocSizeBytes\": 104857600,\n \"internalLookupStageIntermediateDocumentMaxSizeBytes\": 104857600,\n \"internalDocumentSourceGroupMaxMemoryBytes\": 104857600,\n \"internalQueryMaxBlockingSortMemoryUsageBytes\": 104857600,\n \"internalQueryProhibitBlockingMergeOnMongoS\": 0,\n \"internalQueryMaxAddToSetBytes\": 104857600,\n \"internalDocumentSourceSetWindowFieldsMaxMemoryBytes\": 104857600\n },\n \"command\": {\n \"aggregate\": \"songs\",\n \"pipeline\": [\n {\n \"$search\": {\n \"index\": \"lyricsSearch\",\n \"phrase\": {\n \"query\": \"love\",\n \"path\": \"lyrics.textLine\"\n },\n \"highlight\": {\n \"path\": \"lyrics.textLine\"\n }\n }\n },\n {\n \"$match\": {\n \"languages\": {\n \"$in\": [\n \"en\",\n \"sv\"\n ]\n }\n }\n },\n {\n \"$sort\": {\n \"score\": -1,\n \"spotifyPopularity\": -1\n }\n },\n {\n \"$limit\": 200\n },\n {\n 
\"$project\": {\n \"_id\": 1,\n \"title\": 1,\n \"artist\": 1,\n \"lyrics\": 1,\n \"posterImage\": 1,\n \"video\": 1,\n \"youtubeLink\": 1,\n \"spotifyId\": 1,\n \"spotifyPopularity\": 1,\n \"score\": {\n \"$meta\": \"searchScore\"\n },\n \"highlight\": {\n \"$meta\": \"searchHighlights\"\n }\n }\n }\n ],\n \"cursor\": {},\n \"$db\": \"songsay\"\n },\n \"ok\": 1,\n \"$clusterTime\": {\n \"clusterTime\": {\n \"$timestamp\": {\n \"t\": 1695038389,\n \"i\": 3\n }\n },\n \"signature\": {\n \"hash\": {\n \"$binary\": {\n \"base64\": \"KaIV8c6iWtbkQ0D+QMnZ9sl1ef4=\",\n \"subType\": \"00\"\n }\n },\n \"keyId\": 7243340663985537000\n }\n },\n \"operationTime\": {\n \"$timestamp\": {\n \"t\": 1695038389,\n \"i\": 3\n }\n }\n}\n", "text": "Hi\nI have an aggregate pipeline that is taking too long to run, and when I run the aggregate with explain there is not information about Index usage or winningPlan, why is that?\nI have tried to search for this, but I can’t find any information.\nIsn’t this information available for an aggregate?\nI have watched a lot of videos about find and explain, but haven’t found any one that brings up this situation.This is my aggregate pipelines:And here is the explain result:I would be very grateful if someone could point me in the right direction ", "username": "Mats_Rydgren" }, { "code": "explain$search", "text": "@Mats_Rydgren when explaining an Atlas Search operation you won’t see the same details you would with other Aggregation operations. This is due to Atlas search (the $search stage) being executed via a Lucene Query as opposed to the MongoDB query engine.The format of the output in this case is described in greater detail here: “Retrieve Query Plan and Execution Statistics”", "username": "alexbevi" }, { "code": "", "text": "Thank you for the response.\nIs it a good idea to use the $search, or is is it better to use $match?\nAnd is it doable with the search that we have today?We have like 40000 documents, where the documents have an array of maybe 50 songs. So maximum 2 million options.", "username": "Mats_Rydgren" } ]
Why is there no winningPlan or index information in my aggregate
2023-09-18T12:01:06.502Z
Why is there no winningPlan or index information in my aggregate
277
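One follow-up to the question above: rather than a separate $match after $search, the language restriction can usually be pushed into the $search stage itself with a compound query, so mongot returns fewer documents to the rest of the pipeline. This is a sketch, not the poster's exact setup; it assumes the languages field is covered by the lyricsSearch index (for example via dynamic mappings), and the URI is a placeholder.

```python
# Sketch: fold the language filter into the $search stage via a compound query.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")  # placeholder
db = client["songsay"]

pipeline = [
    {
        "$search": {
            "index": "lyricsSearch",
            "compound": {
                "must": [{"phrase": {"query": "love", "path": "lyrics.textLine"}}],
                "filter": [{"text": {"query": ["en", "sv"], "path": "languages"}}],
            },
            "highlight": {"path": "lyrics.textLine"},
        }
    },
    {"$limit": 200},
    {
        "$project": {
            "title": 1,
            "score": {"$meta": "searchScore"},
            "highlight": {"$meta": "searchHighlights"},
        }
    },
]

for doc in db["songs"].aggregate(pipeline):
    print(doc["title"], doc["score"])
```

The filter clause does not affect scoring, and dropping the post-$search $match is usually worth trying before comparing against a plain $match/$text approach.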
null
[ "text-search" ]
[ { "code": "(db.getCollection(\"xyz\").createSearchindex( etc.. )) db.runCommand(\n{\n createSearchIndexes: \"xyz\"\n", "text": "I’m using Studio3T Intellishell. We’ve just updated our M10 MongoDB cluster to MongoDB 7.0. I was under the impression that now creating text search indexes will become simpler and I’ll be able to create them straight from the IDE (previously we’ve used Cloud Portal for creating those).\nI’ve tried examples from the documentation and there are a couple of them, some use the createSearchIndex command (db.getCollection(\"xyz\").createSearchindex( etc.. )), I’m getting error that createSearchIndexfunction does not exist. I’ve also tried the:Something runs, but nothing happens.Can I please get the full syntax with some example. Unless I’ve misunderstood release notes and this is not possible. I don’t want to use the cloud MongoDB portal to create these indexes if there’s another option.", "username": "Szymon_Katanski" }, { "code": "db.getCollection(\"xyz\").createSearchindex( etc.. )createSearchIndexes()mongosh", "text": "Hey @Szymon_Katanski,Welcome to the MongoDB Community!db.getCollection(\"xyz\").createSearchindex( etc.. )Looking at the given command, I notice that you might have misspelled the createSearchIndexes().However, the feature is available for Atlas M10+ clusters running MongoDB 6.0.8 or later introducing the ability to create and manage Atlas Search indexes from mongosh and NodeJS driver. Please see July 10, 2023 changelog and documentation to learn more about it.I was under the impression that now creating text search indexes will become simpler and I’ll be able to create them straight from the IDECould you please attempt to run it from mongosh or MongoDB Compass as it worked successfully for me? If the problem persists, kindly provide the complete command you are attempting to execute.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
How to use MongoDB Search Index from inside of an IDE
2023-09-15T15:41:51.488Z
How to use MongoDB Search Index from inside of an IDE
348
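As an alternative to Studio 3T or the Atlas UI, recent drivers expose the same capability; for example PyMongo 4.5+ has create_search_index. This is a hedged sketch: the index definition is a minimal dynamic mapping, the URI is a placeholder, and it still requires an Atlas M10+ cluster on a supported server version, so verify against your driver and cluster versions.

```python
# Sketch: create an Atlas Search index from PyMongo (4.5+) instead of the Atlas UI.
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")  # placeholder
coll = client["test"]["xyz"]

model = SearchIndexModel(
    definition={"mappings": {"dynamic": True}},  # minimal example definition
    name="default",
)
coll.create_search_index(model)

print(list(coll.list_search_indexes()))  # the index builds asynchronously
```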
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Hey everyone, I’m working on a side project where I display the statistics for football players, and I currently have a database with all their statistics and numbers. I planned to create a spider chart on which different players can be compared. However, I ran into a scaling issue, distance run and shots/90 don’t fit well on the same spider chart. I figured I could make the spider chart percentile-based, which would make them all be set to the same scale (1 to 100). I started working on calculating the percentile based on the player requested, which has gone fine, but I was wondering if there was a way to find the percentile for all fields at once instead of having to do them individually. Thanks for your time.", "username": "Toluwanimi_Emoruwa" }, { "code": "", "text": "Do you have your current query and sample documents? This makes it easier for people to play with a query using your data.\nYou can use the “Preformatted Text” button in the editor to style your code and document fragments so they can be easily copied.", "username": "John_Sewell" }, { "code": "{\n \"_id\": {\n \"$oid\": \"645fffcf1119f5fafce46a24\"\n },\n \"name\": \"Kieran Tierney\",\n \"club\": \"Arsenal\",\n \"season\": \"2021-22\",\n \"position\": \"FB\",\n \"standard_stats\": {\n \"name\": \"Kieran Tierney\",\n \"club\": \"Arsenal\",\n \"season\": \"2021-22\",\n \"position\": \"FB\",\n \"league\": \"Premier League\",\n \"goals90\": 0.05,\n \"assists90\": 0.14,\n \"goalsAndAssits90\": 0.19,\n \"nonPenGoals90\": 0.05,\n \"penScored\": 0,\n \"pensTaken\": 0,\n \"yellow90\": 0,\n \"red90\": 0,\n \"xG90\": 0.03,\n \"nonPenXG90\": 0.03,\n \"xAG\": 0.08,\n \"nonPenXGAG90\": 0.11,\n \"progCarries90\": 2.91,\n \"progressivePass90\": 4.93,\n \"progPassRec90\": 6.02\n },\n \"shooting_stats\": {\n \"name\": \"Kieran Tierney\",\n \"club\": \"Arsenal\",\n \"season\": \"2021-22\",\n \"position\": \"FB\",\n \"league\": \"Premier League\",\n \"shots90\": 0.71,\n \"shotOnTarget90\": 0.24,\n \"shotOnTargetPercent90\": \"33.3%\",\n \"goalPerShot90\": 0.07,\n \"goalPerShotOnTarget90\": 0.2,\n \"avgShotDistance\": 20.8,\n \"shotsFreeKicks\": 0,\n \"nonPenXGPerShot90\": 0.04,\n \"goalsMinusXG\": \"+0.02\",\n \"nonPenGoalsMinusXG\": \"+0.02\"\n },\n \"passing_stats\": {\n \"name\": \"Kieran Tierney\",\n \"club\": \"Arsenal\",\n \"season\": \"2021-22\",\n \"position\": \"FB\",\n \"league\": \"Premier League\",\n \"passesCompleted90\": 42.3,\n \"passesAttempted90\": 54.75,\n \"passCompletionPercent\": 77.3,\n \"totalPassingDistance90\": 765.4,\n \"progressivePassingDistance90\": 244.72,\n \"shortPassesCompleted90\": 17.81,\n \"shortPassesAttempted90\": 20.07,\n \"shortPassesCompletionPercent\": \"88.8%\",\n \"mediumPassesCompleted90\": 20.26,\n \"mediumPassesAttempted90\": 24.11,\n \"mediumPassesCompletionPercent\": \"84.0%\",\n \"longPassesCompleted90\": 3.85,\n \"longPassesAttempted90\": 8.18,\n \"longPassesCompletionPercent\": \"47.1%\",\n \"xA90\": 0.08,\n \"keyPasses90\": 0.94,\n \"passesFinalThird90\": 3.81,\n \"passesPenaltyArea90\": 1.13,\n \"crossesPenaltyArea90\": 0.56\n },\n \"pass_types\": {\n \"name\": \"Kieran Tierney\",\n \"club\": \"Arsenal\",\n \"season\": \"2021-22\",\n \"position\": \"FB\",\n \"league\": \"Premier League\",\n \"liveBallPasses90\": 45.82,\n \"deadBallPasses90\": 8.6,\n \"freeKickPasses\": 0.33,\n \"throughBalls90\": 0,\n \"switches90\": 0.61,\n \"crosses90\": 3.71,\n \"throwIns90\": 8.22,\n \"cornerKicks90\": 0.05,\n \"inswingingCorners90\": 0,\n \"outswingingCorners90\": 0,\n 
\"straightCorners90\": 0,\n \"passesOffside90\": 0.33,\n \"passesBlockedByOpp90\": 1.03\n },\n \"shot_goalCreation\": {\n \"name\": \"Kieran Tierney\",\n \"club\": \"Arsenal\",\n \"season\": \"2021-22\",\n \"position\": \"FB\",\n \"league\": \"Premier League\",\n \"shotCreateAct90\": 2.07,\n \"liveBallShotCreateAct90\": 1.69,\n \"deadBallShotCreateAct90\": 0.24,\n \"takeOnShotCreateAct90\": 0.05,\n \"shotsLeadingToNewShot90\": 0.05,\n \"foulsLeadingToShot90\": 0.05,\n \"defendingActionsLeadingToShot90\": 0,\n \"goalCreateAct90\": 0.09,\n \"liveBallGoalCreateAct90\": 0.09,\n \"deadBallGoalCreateAct90\": 0,\n \"takeOnGoalCreateAct90\": 0,\n \"shotsLeadingToGoal90\": 0,\n \"foulsLeadingToGoal90\": 0,\n \"defendingActionsLeadingToGoal90\": 0\n },\n \"defensive_stats\": {\n \"name\": \"Kieran Tierney\",\n \"club\": \"Arsenal\",\n \"season\": \"2021-22\",\n \"position\": \"FB\",\n \"league\": \"Premier League\",\n \"tacklesAttempted90\": 0.85,\n \"tacklesWon90\": 0.66,\n \"tacklesInDef3rd90\": 0.52,\n \"tacklesInMiddle3rd90\": 0.33,\n \"tacklesInAttack3rd90\": 0,\n \"dribblersTackled90\": 0.47,\n \"dribblersChallenged90\": 0.52,\n \"percentDribblersTackled90\": \"90.9%\",\n \"challengesLost90\": 0.05,\n \"blocks90\": 1.08,\n \"shotsBlocked90\": 0.24,\n \"passesBlocke90\": 0.85,\n \"interceptions90\": 0.71,\n \"tacklesAndInterceptions90\": 1.55,\n \"clearances90\": 2.54,\n \"errors90\": 0\n },\n \"possession_stats\": {\n \"name\": \"Kieran Tierney\",\n \"club\": \"Arsenal\",\n \"season\": \"2021-22\",\n \"position\": \"FB\",\n \"league\": \"Premier League\",\n \"touches90\": 63.31,\n \"touchesInDefPenArea90\": 2.91,\n \"touchesInDef3rd90\": 15.89,\n \"touchesInMiddle3rd90\": 27.31,\n \"touchesInAttack3rd90\": 20.49,\n \"touchesInAttackPenArea90\": 1.64,\n \"liveBallTouches90\": 63.31,\n \"takeOnsAttempted90\": 1.08,\n \"successfulTakeOns90\": 0.42,\n \"successfulTakeOnPercent\": \"39.1%\",\n \"timesTackledInTakeOn90\": 0.66,\n \"tackledInTakeOnPercent\": \"60.9%\",\n \"totalCarryingDistance90\": 176.33,\n \"progressiveCarryingDistance90\": 102.78,\n \"carriesIntoFinal3rd90\": 1.5,\n \"carriesIntoPenArea90\": 0.33,\n \"carries90\": 34.31,\n \"miscontrols90\": 0.85,\n \"dispossessed90\": 0.38,\n \"passesReceived90\": 39.1\n },\n \"other_stats\": {\n \"name\": \"Kieran Tierney\",\n \"club\": \"Arsenal\",\n \"season\": \"2021-22\",\n \"position\": \"FB\",\n \"league\": \"Premier League\",\n \"secondYellow90\": 0,\n \"foulsCommitted90\": 0.24,\n \"foulsDrawn90\": 0.75,\n \"offsides90\": 0.05,\n \"penaltiesWon90\": 0,\n \"penaltiesGivenAway90\": 0,\n \"ownGoals90\": 0,\n \"ballRecoveries90\": 3.99,\n \"aerialsWon90\": 0.56,\n \"aerialsLost90\": 1.08,\n \"percentAerialsWon\": \"34.3%\"\n }\n}\nconst result = await db.collection('PremierLeague2022-21Big6').aggregate([\n {\n $sort: {\"shooting_stats.shots90\": 1}\n },\n {\n $group: {\n _id: null,\n data: { $push: \"$shooting_stats.shots90\" },\n count: { $sum: 1 }\n }\n },]).toArray();\n", "text": "Thanks for the reply, John. Here’s what one of my documents looks like.And here’s what the query in the backend looks like.", "username": "Toluwanimi_Emoruwa" }, { "code": "", "text": "I did have a play and got some $map running and was thinking about $facet but I’m not sure I understand the calcs you want to do exactly.So for the example aggregate you shared, for each player, you want to calculate shots90 / total number of players? 
Or do you want to calculate shots90 for a player compared to the average of all players?", "username": "John_Sewell" }, { "code": "", "text": "The calc I’m trying to do is find what percentile specific stats for players are in based on my database. For example, what percentile is his assists90 in. I have figured out how to do this for individual statistics, but I was wondering if I could do this for all of the statistics. My current implementation returns a sorted list of all the numbers for a specific stat(goals90) and from there I can calculate the percentile. I’m trying to see if I could replicate this for all other statistics without having to use the same lines of code over and over", "username": "Toluwanimi_Emoruwa" }, { "code": "", "text": "What version of Mongo are you running? I just saw this…", "username": "John_Sewell" }, { "code": "", "text": "What’s the current query you’re running to calculate the percentile?", "username": "John_Sewell" }, { "code": "", "text": "Thank you for this. I’ll look into it further, but it seems to require me to specify the percentile instead of me giving it the number and then it telling me the percentile that number is in", "username": "Toluwanimi_Emoruwa" }, { "code": "", "text": "I’m currently finding the percentile in the frontend. So right now I have it returning the array t", "username": "Toluwanimi_Emoruwa" }, { "code": "", "text": "Depending on how many players…it may be worth keeping a collection of percentile ranges for each stat…and just recalculate it as players update, or if you do a bulk update once a day / week, refresh it then, you could then join/lookup the collection onto the player(s) as you get them and work out which of the percentiles the players stat falls into.", "username": "John_Sewell" }, { "code": "", "text": "I figured as much. Do you have any ideas on how I could make a collection of the percentiles quickly?", "username": "Toluwanimi_Emoruwa" }, { "code": "", "text": "Also, I’m running Mongodb version 5.7.0 and mongoose version 7.4.3. I just saw I needed to be on version 7. When you say Mongo are you referring to Mongodb or is mongo a separate thing I need to install?", "username": "Toluwanimi_Emoruwa" }, { "code": "", "text": "I found a SO article that has a few options:Depending on what version you’re running you have a few options.", "username": "John_Sewell" }, { "code": "", "text": "Thanks for these. I’ll test them out and post what works", "username": "Toluwanimi_Emoruwa" } ]
Calculating percentiles
2023-09-15T21:08:18.815Z
Calculating percentiles
425
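One way to get the percentile rank for a stat in a single aggregation, instead of pulling the sorted array to the frontend, is $setWindowFields with $rank. A hedged sketch follows: the database name and URI are placeholders, the collection and field names follow the documents shown above, and it requires MongoDB 5.0+.

```python
# Sketch: percentile rank of every player for one stat, computed server-side.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["test"]["PremierLeague2022-21Big6"]  # database name is a guess

pipeline = [
    {
        "$setWindowFields": {
            "sortBy": {"shooting_stats.shots90": 1},
            "output": {
                "rank": {"$rank": {}},    # position within the sorted stat
                "total": {"$count": {}},  # number of players in the partition
            },
        }
    },
    {
        "$project": {
            "name": 1,
            "shots90": "$shooting_stats.shots90",
            "shots90Percentile": {"$multiply": [{"$divide": ["$rank", "$total"]}, 100]},
        }
    },
]

for doc in coll.aggregate(pipeline):
    print(doc)
```

Repeating the output/project pair per stat field (or generating the pipeline from a list of field names) gives all the percentiles in one round trip; on MongoDB 7.0 the $percentile accumulator mentioned above covers the inverse question (the value at a given percentile).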
null
[ "cxx" ]
[ { "code": "", "text": "Hello,I am new to mongodb. I have started working with mongodb c++ driver. I was wondering is it possible to serialize bsoncxx::document to a binary file and also create bsoncxx::document from a binary file ?Regards", "username": "Ahsan_Iqbal" }, { "code": "bsoncxx::document::view view = ...; // your document view\nstd::string json_str = bsoncxx::to_json(view);\nstd::ofstream ofs(\"document.bin\", std::ios::binary);\nofs.write(json_str.c_str(), json_str.size());\nstd::ifstream ifs(\"document.bin\", std::ios::binary);\nifs.seekg(0, std::ios::end);\nsize_t size = ifs.tellg();\nifs.seekg(0, std::ios::beg);\nstd::string json_str(size, '\\0');\nifs.read(&json_str[0], size);\nbsoncxx::document::value value = bsoncxx::from_json(json_str);\nbsoncxx::document doc(value.view());\n", "text": "@Ahsan_IqbalYes, I used to do this a lot back when I worked with a lot of IoT devices and services.It is possible to serialize bsoncxx::document to a binary file and create bsoncxx::document from a binary file using the MongoDB C++ driver.To serialize a bsoncxx::document to a binary file, you can use the bsoncxx::to_json function to convert the document to a JSON string, and then write the string to a file using standard file I/O functions. For example:To create a bsoncxx::document from a binary file, you can read the JSON string from the file using standard file I/O functions, and then use the bsoncxx::from_json function to parse the JSON string and create a new document. For example:Note: This approach will work only if the BSON document is valid JSON. If the document contains binary data or other non-JSON data types, you may need to use a different serialization format, such as BSON, instead of JSON.", "username": "Brock" }, { "code": "", "text": "Here’s an article that covers this topic - Storing Binary Data with MongoDB and C++ | MongoDB", "username": "Rishabh_Bisht" } ]
Is it possible to serialize bsoncxx::document to a binary file
2023-04-11T20:47:06.714Z
Is it possible to serialize bsoncxx::document to a binary file
1,065
null
[ "storage" ]
[ { "code": "", "text": "Hi,Since upgrading to 3.6 (yes I know!) we have found that dropped collections remain in pending on secondaries.We’re running a primary-secondary-arbiter setup. The primary immediately frees disk space for dropped collections. The secondary however keeps the dropped collections in pending, and doesn’t free disk space.We’re working around the problem by periodically restarting mongod on the secondary as drop-pending collections are removed on shutdown.Could this be a configuration problem? We’re mostly using default settings (WiredTiger with journal enabled).Thanks!Martin.", "username": "Martin_Fido" }, { "code": "", "text": "data pruning is usually done as a background task, so they may sit there for some time.Does this cause any issue to your server?", "username": "Kobe_W" }, { "code": "", "text": "We have some collections that hold daily data that are a few GB in size. A new collection is created each day and we keep the last x days in Mongo. We archive and drop the oldest collection daily. The secondary is therefore growing a few GB every day, until we manually force it to shutdown and restart.", "username": "Martin_Fido" }, { "code": "", "text": "Can anyone help me to restore the collection that is in a pending state after the drop?\nI accidentally deleted a collection. I run db.getCollectionNames({includePendingDrops: true}) and I get the list of the collections that are in a pending state. Is there any way to recover them?", "username": "Preetam_Chakraborty" } ]
Dropped collections on 3.6.23 secondaries stuck pending
2023-02-23T05:31:37.785Z
Dropped collections on 3.6.23 secondaries stuck pending
989
null
[ "cluster-to-cluster-sync" ]
[ { "code": "", "text": "Hi experts,Does anyone know if mongosync (cluster-to-cluster-sync) is working with monmgodb community edition?\nAre there any limitations?Thank you in advance,\nMarios", "username": "pavlidis.marios" }, { "code": "", "text": "Hi Marios,While it may work, as suggested in the docs, it’s best to reach out to your MongoDB Account Executive for assistance with your specific requirements.", "username": "Alexander_Komyagin" }, { "code": "", "text": "Hi Alexander,Thank you for your time. I have read the docs and still it seems abstract.As it says: ‘Cluster-to-Cluster Sync supports a limited number of operations with MongoDB Community Edition.’What is the limitation with community edition? Is it commercial or technical limitation? Has somebody tried it?\nThe license text in mongosync does not imply usage only with Enterprise.", "username": "pavlidis.marios" }, { "code": "1. LICENSE. During the Period (as defined below), subject to Your full and\nongoing compliance with all terms and conditions of this Agreement and subject\nto You having purchased a MongoDB Atlas subscription, Company hereby grants You\na limited, revocable, non-exclusive, non-transferable, non-sublicensable\nlicense to install and use the Software in your internal environment, and\nsolely for the intended purpose of the Software. For clarity, you may only\ninstall and use the Software to migrate from MongoDB Community server to\nMongoDB Atlas or to migrate or sync between your MongoDB Atlas clusters.\n", "text": "based on this:probably we cannot use it for community edition sync. Can somebody confirm it ?", "username": "Balazs_Varga" } ]
Mongosync community edition limitations
2023-06-12T12:37:42.916Z
Mongosync community edition limitations
792
null
[ "queries", "node-js", "connecting", "atlas-cluster" ]
[ { "code": "node:internal/errors:496\n ErrorCaptureStackTrace(err);\n ^\n\nError: querySrv ETIMEOUT _mongodb._tcp.nodetut.hvyffwg.mongodb.net \n at QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:251:17) {\n errno: undefined,\n code: 'ETIMEOUT',\n syscall: 'querySrv',\n hostname: '_mongodb._tcp.nodetut.hvyffwg.mongodb.net'\n}\n\nNode.js v20.5.1\n[nodemon] app crashed - waiting for file changes before starting...\n", "text": "", "username": "sadiq_abdulwahab" }, { "code": "Error: querySrv ETIMEOUT _mongodb._tcp.nodetut.hvyffwg.mongodb.net\n at QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:251:17) {\n errno: undefined,\n code: 'ETIMEOUT',\n syscall: 'querySrv',\n hostname: '_mongodb._tcp.nodetut.hvyffwg.mongodb.net'\n}\n8.8.8.88.8.4.4", "text": "Hey @sadiq_abdulwahab,Welcome to the MongoDB Community! It seems like the DNS issue, try using Google’s DNS 8.8.8.8 and 8.8.4.4. Please refer to the Public DNS for more details.Apart from this, please refer to this post and try using the connection string from the connection modal that includes all three hostnames instead of the SRV record.If it returns a different error, please share that error message here.In addition to the above, I would recommend also checking out the Atlas Troubleshoot Connection Issues documentation.Best regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Error while connecting to mongoDB
2023-09-13T16:24:51.150Z
Error while connecting to mongoDB
600
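The querySrv ETIMEOUT above is a DNS failure, not a MongoDB one. Below is a small diagnostic sketch in Python/dnspython (used here just for brevity; dnspython is also what PyMongo relies on for SRV lookups) that resolves the SRV record directly, with the hostname taken from the error message and Google's resolvers swapped in.

```python
# Diagnostic sketch: resolve the Atlas SRV record yourself to isolate DNS problems.
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8", "8.8.4.4"]  # Google DNS, useful when the default resolver times out

answers = resolver.resolve("_mongodb._tcp.nodetut.hvyffwg.mongodb.net", "SRV")
for record in answers:
    print(record.target, record.port)
```

If this also times out, the fix is on the DNS side (or switch to the non-SRV connection string that lists the three hosts); if it succeeds, the problem lies elsewhere in the driver configuration.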
null
[ "atlas", "api", "graphql-api" ]
[ { "code": "", "text": "Hi guys,Has anyone tried to build an LLM, ChatGPT or similar, on top of MongoDB?\nSo its possible to “chat” with the database.", "username": "Rasmus_Gregersen" }, { "code": "", "text": "Hello Rasmus,Here how you can do thatI would be interested in learning more about what you are exploring", "username": "Prakul_Agarwal" }, { "code": "", "text": "Hi Prakul,Awesome thanks a lot, good read Im my case I have been working with LangChain+MongoDB Atlas+GraphQL to use ChatGPT to write a GQL query and then execute to get an output.The goal is to be able to ask “What is the avg. sales in August?” for structured data in MongoDB.I have been testing with MDB with GQL, but my challenge is that I haven’t found a way/solution to add the auth for the API in LangChain. I have followed the guide in LangChain: GraphQL | 🦜️🔗 LangchainI can send my script in a message for inspo and interest?", "username": "Rasmus_Gregersen" }, { "code": "", "text": "Solution for the auth in the header:", "username": "Rasmus_Gregersen" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
LLM / AI on top of MongoDB with Atlas
2023-09-06T17:49:23.718Z
LLM / AI on top of MongoDB with Atlas
792
https://www.mongodb.com/…d8163ecdeb0a.png
[ "python", "graphql", "api", "graphql-api" ]
[ { "code": "", "text": "Hi guys,I wanna chat with the data in MongoDB I have succeeded with data that is not stored in MongoDB.Does anyone have experience in working with LangChain? and connecting OpenAI/ChatGPT to MongoDB with GraphQL API keys?I have issues connecting to the database because I can’t find a way to add a header/apikey correctly so I don’t get an error 401 “401 Client Error: Unauthorized for url:…”I have tested with a simple connect with requests, and i worked fine.\nrequests.post(url, json={‘query’: query}, headers=headers)LangChain with GQL doc:This Jupyter Notebook demonstrates how to use the BaseGraphQLTool component with an Agent.Code snippet:API_KEY = “bdytnQMOwW…6zMZEY”\nheaders = {“api-key”: API_KEY}tools = load_tools(\n[“graphql”],\ngraphql_endpoint=“App Services”,\nheaders=headers\n)", "username": "Rasmus_Gregersen" }, { "code": "", "text": "Hi Rasmus,Is there a reason you’re aiming to connect via GraphQL to MongoDB Atlas instead of going directly from the MongoDB driver (pymongo) akin to what’s described in the docs here (MongoDB Atlas | 🦜️🔗 LangchainCheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "Hi Andrew,Thanks for your input and link, appreciated \nIm have structured data in MDB (real estate data: sqm, address, sold price etc.), and kind of using LLM to create the GQL query and output the result from that. Prompting “What are the latest sales in x-zipcode?”.Reviewing your link, seems like it’s for text or unstructured data or?\nLLM is optimal for unstructured data, but also great for writing queries which makes it easy for non-code/tech ppl to extract data from db. I haven’t found another way/method to “ask the database” so far ", "username": "Rasmus_Gregersen" }, { "code": "", "text": "Hi @Rasmus_Gregersen\nAll the params in GraphQL tool in Langchain are specified here https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/tools/graphql/tool.pySo you are thinking of generating the GraphQL from natural language using an LLM (GPT4 works pretty well for certain cases for instance ) and then query MongoDB using that?", "username": "Prakul_Agarwal" }, { "code": "", "text": "Awesome, I will have a look, thanks a lot @Prakul_Agarwal Yep, exactly. I have made a small demo using LangChain, ChatGPT and GraphQL API (not MDB). The verbose in the output shows the generated GQL query:Chat with data from GraphQL API with help from LangChain in a Gradio interface - GitHub - ragre/LangChain-with-Gradio-interface-and-data-from-GraphQL: Chat with data from GraphQL API with help fro...", "username": "Rasmus_Gregersen" }, { "code": "", "text": "Found a solution to connect to MongoDB with GraphQL API auth.\nload_tools2114×962 345 KB\n", "username": "Rasmus_Gregersen" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help needed to add GraphQL API keys in LangChain script
2023-08-25T11:04:02.724Z
Help needed to add GraphQL API keys in LangChain script
867
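Since the working solution above is only visible as a screenshot, here is a hedged sketch of the underlying idea: authenticate the Atlas App Services GraphQL endpoint with an API-key header and send the query with plain requests. The endpoint URL, key, and query fields are all placeholders (copy the real endpoint from the App Services UI and confirm the expected header name, typically apiKey, in the App Services auth docs); the resulting function can then be wrapped as a custom tool for the agent.

```python
# Sketch: call an App Services GraphQL endpoint with API-key auth via requests.
import requests

GRAPHQL_ENDPOINT = "https://<region>.aws.realm.mongodb.com/api/client/v2.0/app/<app-id>/graphql"  # placeholder
API_KEY = "your-api-key"  # placeholder

def run_graphql(query: str) -> dict:
    response = requests.post(
        GRAPHQL_ENDPOINT,
        json={"query": query},
        headers={"apiKey": API_KEY},  # App Services API-key header (verify for your app)
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Hypothetical query shape, for illustration only.
print(run_graphql("query { sales(limit: 5) { amount soldAt } }"))
```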
null
[ "node-js" ]
[ { "code": "", "text": "Hello.We save objects that are shared between different users - think chat rooms. The objects have an array containing the usename ids of the participants. To find the rooms, we scan through the arrays finding the rooms that contain the username id of the user.We need a way to index that array for faster querys, but Im also wondering if there’s a better way to do this that doesn’t require an array. Any ideas appreciated.Thanks,\nYohami", "username": "Yohami_Zerpa" }, { "code": "", "text": "You can have indexes on arrays, there are some limitations, see:https://www.mongodb.com/docs/v5.0/core/index-multikey/#:~:text=To%20index%20a%20field%20that,%2C%20numbers)%20and%20nested%20documents.Other than that you could store the user list in the chat room object and the chat rooms in the users as well, that way it’s quick to find users in a room or rooms that a user is in.What do your documents look like and what indexes do you have currently and what query are you running?", "username": "John_Sewell" } ]
How to index an array? searching dynamic lists of users associated to an object
2023-09-18T09:59:50.328Z
How to index an array? searching dynamic lists of users associated to an object
213
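A hedged PyMongo sketch of the pattern discussed above: indexing the participants array (MongoDB builds a multikey index automatically once arrays are indexed) and then finding the rooms that contain a given user id. The database and collection names are made up for the example.

```python
# Sketch: multikey index on an array of participant ids, then query by one id.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
rooms = client["chat"]["rooms"]  # hypothetical names

rooms.create_index([("participants", ASCENDING)])  # becomes multikey once arrays are indexed

rooms.insert_one({"name": "general", "participants": ["user1", "user2"]})

# A scalar match against an array field matches any element and can use the index.
for room in rooms.find({"participants": "user1"}):
    print(room["name"])
```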
null
[ "sharding", "installation", "cxx" ]
[ { "code": "\napt-get --no-install-recommends install libmongoc-dev\n\napt-get --no-install-recommends install cmake\n\n\nsudo apt-get --no-install-recommends install libssl-dev\n\nsudo apt-get --no-install-recommends install libsas12-dev\n\nmkdir downloads/installers\n\ncd downloads/installers\n\nwget https://github.com/mongodb/mongo-c-driver/releases/download/1.24.4/mongo-c-driver-1.24.4.tar.gz\n\ntar xzf mongo-c-driver-1.24.4.tar.gz\n\ncd mongo-c-driver-1.24.4\n\nmkdir cmake-build\n\ncd cmake-build\n\ncmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF ..\n\n\ncurl -OL https://github.com/mongodb/mongo-cxx-driver/releases/download/r3.5.1/mongo-cxx-driver-r3.5.1.tar.gz\n\ntar -xzf mongo-cxx-driver-r3.5.1.tar.gz\n\ncd mongo-cxx-driver-r3.5.1/build\n\ncmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DBSONCXX_POLY_USE_MNMLSTC=1\n\n\nsudo cmake --build . --target EP_mnmlstc_core\n\n\ncmake --build .\n\nsudo cmake --build . --target install\n\n", "text": "Hello,I want to setup a mongo DB inside a Khadas VIM3 board, with Ubuntu 20.04. I want to implement MongoDB as the database for my C++ program.Khadas boards have arm64 architectureThese are the steps I followed:Installation of driversAccording to mongo’s official manual for mongo C++, I started installing drivers for C, then for cxx.I see this warning, judged it not critical (I am no expert, please don’t value my judgement ):/sbin/ldconfig.real: /lib/ is not a symbolic linkI installed libmongocxx driverAs I’m using MNMLSTCPolyfill successfully installed, let’s build and install cxx driver:I have all the logs from this, stored using ubuntu’s “script” command. I can provide them if necessary.Thank you for taking your time to read and maybe help me, you are wonderful. Let’s keep going.Installation of MongoDB 4.4Following https://www.mongodb.com/docs/v6.0/tutorial/install-mongodb-on-ubuntu/´´´sudo apt-get install gnupg curlecho “deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-4.4.gpg ] MongoDB Repositories focal/mongodb-org/4.4 multiverse” | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.listsudo apt-get updatesudo apt-get install -y mongodb-org=4.4.24 mongodb-org-server=4.4.24 mongodb-org-shell=4.4.24 mongodb-org-mongos=4.4.24 mongodb-org-tools=4.4.24echo “mongodb-org hold” | sudo dpkg --set-selectionsecho “mongodb-org-server hold” | sudo dpkg --set-selectionsecho “mongodb-org-shell hold” | sudo dpkg --set-selectionsecho “mongodb-org-mongos hold” | sudo dpkg --set-selectionsecho “mongodb-org-tools hold” | sudo dpkg --set-selections´´´No errors returned (appart from “/sbin/ldconfig.real: /lib/ is not a symbolic link” )My system uses systemctl, so:´´´khadas@Khadas:~$ sudo systemctl status mongod● mongod.service - MongoDB Database ServerLoaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)Active: failed (Result: signal) since Thu 2023-09-14 09:50:34 UTC; 13s agoProcess: 8600 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=killed, signal=ILL)Main PID: 8600 (code=killed, signal=ILL)Sep 14 09:50:34 Khadas systemd[1]: Started MongoDB Database Server.Sep 14 09:50:34 Khadas systemd[1]: mongod.service: Main process exited, code=killed, status=4/ILLSep 14 09:50:34 Khadas systemd[1]: mongod.service: Failed with result ‘signal’.´´´Configuration tweaksI checked “ulimit -a”. It seems fine, but this is not a performance issue.I see there’s no log file created, so I modify /etc/mongod.conf to raise log verbose. 
Still no log file created.If you reached this point, I admire your effort. Thank youMay you have a great day.Gabriel", "username": "MetolaEtra_N_A" }, { "code": "mongodsudo systemctl start mongod", "text": "Hi @MetolaEtra_N_A !\nWelcome to the MongoDB community forums.Thanks for the detailed steps. If I understand correctly, you are not able to run the MongoDB server that you installed. I see that call to start the mongod service is missing in your steps. Could you try executing it?sudo systemctl start mongod", "username": "Rishabh_Bisht" }, { "code": "ILLarm64mongodmongosmongoarm64", "text": "Hi @MetolaEtra_N_AWelcome to the forums.Khadas boards have arm64 architectureCheck that this system is a supported platform, the ILL signal suggests that it does not.MongoDB on arm64 requires the ARMv8.2-A or later microarchitecture.Starting in MongoDB 5.0, mongod, mongos, and the legacy mongo shell no longer support arm64 platforms which do not meet this minimum microarchitecture requirement.", "username": "chris" }, { "code": "", "text": "Thank you both for your answers.Chris was on the right track, the problem was on the ARM microarchitecture.Khadas VIM3 has: A311D big-little architecture. x4 2.2Ghz Cortex-A73 cores, paired with x2 1.8Ghz Cortex-A53 cores. (source)MongoDB requieres ARMv8.2-A or later\nARM Cores Cortex-A73 and Cortex-A53 implement a lower ARM version, the ARMv8-A 64-bit instruction set (source A73, source A53)Thus, MongoDB cannot run in Khadas VIM3.", "username": "MetolaEtra_N_A" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Trying to setup MongoDB in Khadas board
2023-09-15T12:34:37.059Z
Trying to setup MongoDB in Khadas board
374
null
[ "aggregation", "data-api" ]
[ { "code": "Failed to aggregate documents: FunctionError: aggregation stage \"$geoNear\" is not supported in a user context. A pipeline with this stage must be run as a System User\n", "text": "Hi,I am trying to execute a pipeline via the data api with api-key auth and I am getting this error:The api key is User Type:server tho. Has anyone else experience this?", "username": "Michael_Murray4" }, { "code": "", "text": "@Michael_Murray4 Based on the provided log, it appears that the problem may be related to the role or permissions of the user you are currently using…", "username": "Jack_Yang1" } ]
Aggregation pipeline in data api
2023-09-17T22:13:52.061Z
Aggregation pipeline in data api
336
null
[ "queries", "crud", "golang" ]
[ { "code": "[]model\ntype Sample struct {\n\tData string `json:\"data,omitempty\" valid:\"-\"`\n\tExam []Result `json:\"result\" valid:\"-\"`\n}\n\ntype Result struct {\n\n\tfield1 string `json:\"field1,omitempty\"`\n\tfield2 string `json:\"field2,omitempty\"`\n\n}\n\n//what I try to do\n\nvar result m.Result \nerr := db.sampleCollection.UpdateOne(bson.M{\"id\": sampleID}, bson.M{\"$push\": bson.M{\"exam\": result}})\n[] array$pushmgo", "text": "After the code migration from mgo to go-mongo driver the []model are getting updated with null instead of empty array , So i’m not able to do $push operation for a golang struct . How to $push even if the array is null is there a way ?But during insert the field result is set to null instead of empty [] array , So it’s not allowing to do a $push operation . Since that’s not an array and just an object, In the old mgo driver it used to work fine because it was never set to null .", "username": "karthick_d" }, { "code": "UpdateOneresult.Exams = []Exam{}\n", "text": "You can avoid this issue by inserting the line below before calling UpdateOne method:", "username": "yudukikun5120" } ]
How to $push a nil slice in golang
2022-12-02T10:50:54.102Z
How to $push a nil slice in golang
2,491
null
[ "android", "flutter", "dart" ]
[ { "code": "import 'package:realm/realm.dart';\n\nimport 'model/withdrawal.dart';\n\nclass WithdrawalService {\n final Configuration _config = Configuration.local([Withdrawal.schema]);\n late Realm _realm;\n\n WithdrawalService() {\n openRealm();\n }\n\n openRealm() {\n _realm = Realm(_config);\n }\n\n closeRealm() {\n if (!_realm.isClosed) {\n _realm.close();\n }\n }\n\n RealmResults<Withdrawal> getWithdrawals() {\n return _realm.all<Withdrawal>();\n }\n\n Future<bool> addWithdrawal(data) async {\n final withdrawal = _realm.query<Withdrawal>('id == \"${data['id']}\"');\n\n try {\n DateTime createdAt = DateTime.parse(data['created_at']).toLocal();\n DateTime updatedAt = DateTime.parse(data['updated_at']).toLocal();\n if (withdrawal.isEmpty) {\n _realm.write(() {\n _realm.add(\n Withdrawal(\n data['id'],\n data['user_id'],\n data['user']['username'],\n data['amount'] is int\n ? data['amount'].toDouble()\n : (data['amount'] is String\n ? double.tryParse(data['amount'])\n : data['amount']),\n data['user']['balance'] is int\n ? data['user']['balance'].toDouble()\n : (data['user']['balance'] is String\n ? double.tryParse(data['user']['balance'])\n : data['user']['balance']),\n data['confirmed'] == 0 ? false : true,\n data['processed'] == 0 ? false : true,\n updatedAt,\n createdAt,\n ),\n );\n });\n } else {\n _realm.write(() {\n withdrawal[0].confirmed = data['confirmed'] == 0 ? false : true;\n withdrawal[0].processed = data['processed'] == 0 ? false : true;\n withdrawal[0].updatedAt = updatedAt;\n });\n }\n return true;\n } catch (e) {\n print(e);\n return false;\n }\n }\n\n \n}\nvar withdrawals = WithdrawalService().getWithdrawals(); void initState() {\n super.initState();\n subscription.connect();\n withdrawals.changes.listen((event) {\n print(event);\n setState(() {\n withdrawals = WithdrawalService().getWithdrawals();\n });\n });\n getWithdrawals();\n }\nWithdrawalService().addWithdrawal(data)", "text": "I am using flutter realm to store data on an android device.In a widget class, I have var withdrawals = WithdrawalService().getWithdrawals();and the initState is as follows:When I call WithdrawalService().addWithdrawal(data) in another class, there are no changes in the widget. What could be the problem?", "username": "Tim_Kariuki" }, { "code": "withdrawals void initState() {\n super.initState();\n subscription.connect();\n _sub = withdrawals.changes.listen((event) {\n print(event);\n setState(() {});\n });\n }\n\n @override\n void dispose() {\n _sub.cancel();\n super.dispose();\n }\n}\n", "text": "Don’t re-assign withdrawals. It is a live realm results and will always contain the latest commit’ted state.In particular you are overwriting the realm result you are listening to. Since you don’t store the subscription either it becomes eligible for garbage collection, and you stop listening. It is not really related to realm at all.Anyway, of the top of my head, something like this should work:but in general I would suggest using a stream builder in a stateless widget instead. As an example you can see here: Realm Query Pagination · Issue #1058 · realm/realm-dart · GitHub", "username": "Kasper_Nielsen1" } ]
No event fired when data is added
2023-09-18T06:13:18.109Z
No event fired when data is added
338
null
[ "queries", "node-js" ]
[ { "code": "else if (request.method === 'DELETE') {\n const idsToDelete = request.body.selected.map((_id: any) => new ObjectId(_id))\n console.log(idsToDelete); \n //[\n //new ObjectId(\"6507706269daaddf53f7a722\"),\n //new ObjectId(\"6507707069daaddf53f7a723\")\n //]\n await dbProducts.deleteMany({ _id: { in: idsToDelete } }).catch((error) => console.log(error))\n return response.status(200).json({ message: 'Product successfully deleted!' });\n }\nObjectId(\"${_id}\")", "text": "Hey there community \nI am having trouble with the deleteMany of ids.Namely, I am passing via api call an array of string [“6507706269daaddf53f7a722”, “6507707069daaddf53f7a723”]Then, I am handling the api call like so:I get the OK status of the responce, but the ids are not deleted.I have also tried to create the array for the “in” like so:\nconst idsToDelete = request.body.selected.map((_id: any) => ObjectId(\"${_id}\"))but to no avail Can anyone help?anyone ", "username": "Branislav_Damjanovic" }, { "code": "await dbProducts.deleteMany({ _id: { $in: idsToDelete } }).catch((error) => console.log(error))", "text": "Shouldn’t it this be:await dbProducts.deleteMany({ _id: { $in: idsToDelete } }).catch((error) => console.log(error))", "username": "John_Sewell" }, { "code": "", "text": "I need to stop coding after midnight, my eye sight gets worse by the minute Thank you sir, and sorry for the bother, and if my notification woke you ", "username": "Branislav_Damjanovic" }, { "code": "", "text": "Not to worry, I’ve done that many times, the longer you stare at it the less likely you are to see the issue!", "username": "John_Sewell" } ]
deleteMany by _id
2023-09-17T22:24:24.083Z
deleteMany by _id
320
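For reference, the same fix expressed in Python/PyMongo rather than the Node.js driver used above: convert the id strings to ObjectId and use the $in operator (with the $) in delete_many. Connection details and collection names are placeholders.

```python
# Sketch: delete many documents by a list of string ids.
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
products = client["shop"]["products"]  # hypothetical names

selected = ["6507706269daaddf53f7a722", "6507707069daaddf53f7a723"]
ids_to_delete = [ObjectId(s) for s in selected]

result = products.delete_many({"_id": {"$in": ids_to_delete}})
print(result.deleted_count)
```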
null
[ "security" ]
[ { "code": "", "text": "Hey guys, i’ve been given a mongodb cluster to manage, currently implementing PBM for backup/restore strategy. It turns out that guys dont have the admin user for the mongod shard local instances, so, i’m trying to create the shard local user for pbm but i dont have the permissions and apparently someone created a admin user they dont have the username/password… I tried Localhost Exception but it’s not working…anyone been throught this?", "username": "Michael_Coelho" }, { "code": "db.hello()sudo systemctl stop mongodsudo -u mongodb mongod --port 55555 --fork --syslogmongo --port 55555 admindb.changeUserPassword('root','passw0rd')db.createUser({user:'root',pwd:'passw0rd',roles:[...]})db.auth('root','passw0rd')db.shutdownServer()sudo systemctl start mongodmongo --host replicasetName/hostname --port 27018 admindb.changeUserPassword('root','passw0rd')", "text": "I tried Localhost Exception but it’s not working…It won’t work once the first user is created.anyone been throught thisYou’re going to need access to start and stop mongod on the host.\nA change of primary is required during this process.", "username": "chris" }, { "code": "", "text": "Don’t you need to set the local password for the “old primary” node explicitly? Or will this be synced in some way?", "username": "Johan_Forssell" }, { "code": "", "text": "Step 11 will take care of it.This is because all nodes are running as part of the replicaset again and the password update will communicated to all the members in the normal fashion.", "username": "chris" }, { "code": "", "text": "Even though I fumbled the node restart (I was working with docker containers/Nomad allocations) I managed to create a new root user on all my nodes using this technique.Thank you very much, this was a life saver.", "username": "Johan_Forssell" } ]
Create shard local user
2021-08-19T13:47:17.321Z
Create shard local user
1,979
null
[ "replication", "sharding", "mongodb-shell" ]
[ { "code": "", "text": "Hi,We want to setup a mongodb cluster for MongoDB 7.0 community edition on Windows. However, there is no documentation on windows.I also checked the installation and MongoDb Tools. There is no mongosh file. we have “mongos.exe”Please share the detailed documentation on how to setup Mongodb cluster with Mongodb 7.0 on Windows Servers", "username": "Gaurav_Vij" }, { "code": "", "text": "Hi @Gaurav_Vij,From the documentation:Free download for MongoDB tools to do more with your database. MongoDB Shell, Compass, CLI for Cloud, BI Connector and other database tools available.Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "I have read through all of these links. there is no “mongosh” in windows installer. There is “mongos” but it doesn’t work.", "username": "Gaurav_Vij" }, { "code": "", "text": "Hi @Gaurav_Vij,\nIs named mongo Shell.Rergards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "It is a separate download, it is under the MongoDB Tools link that @Fabio_Ramohitaj posted.The MongoDB Shell is a modern command-line experience, full with features to make it easier to work with your database. Free download. Try now!", "username": "chris" }, { "code": "", "text": "Its not even in MongoDB Tools. I have downloaded the latest version 189", "username": "Gaurav_Vij" }, { "code": "", "text": "\nimage930×658 18.2 KB\n", "username": "Gaurav_Vij" }, { "code": "", "text": "Hi @Gaurav_Vij,\nAs mentioned from @chris, Is a separate download.\nAs shown in the following picture:\nScreenshot_20230916-1736061080×2400 206 KB\n", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "May you have not set ENV path in windowsPlease go toWindows Serach → ENV\n\n169488936963761914966951269797901920×1446 141 KB\nOpen Environment Variables → \n169488941351775514996309104643894080×3072 1.03 MB\nOpen Environment Variables → set inside path", "username": "Bhavya_Bhatt" }, { "code": "", "text": "This is closed. the mongoshell exe when downloaded worked.", "username": "Gaurav_Vij" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDb 7.0 ReplicaSet Cluster Setup
2023-09-16T06:34:59.487Z
MongoDb 7.0 ReplicaSet Cluster Setup
375
null
[ "node-js", "crud", "connecting", "next-js" ]
[ { "code": "const {MongoClient} = require('mongodb');\n\n\n// connect to client\nconst uri = process.env.URI\nconst client = await MongoClient.connect(uri, {useNewUrlParser: true, useUnifiedTopology: true}) // this line hangs on Vercel\n\nconst companies = client.db(\"myDatabase\").collection(\"companies\");\n\n// update document \nconst updateDoc = {$set: set};\nconst result = await companies.updateOne(filter, updateDoc)\nconsole.log(`${result.matchedCount} document(s) matched the filter, updated ${result.modifiedCount} document(s): `, updateDoc);\n\n// close connection to client \nawait client.close();\nMongoClient.connect()", "text": "I am building a NextJS web app hosted on Vercel. One of my api routes contains the following code to connect to my cluster:This runs fine on localhost. However, when running on Vercel, the code hangs on the MongoClient.connect() line. There are no errors but the connection doesn’t resolve.I have whitelisted all IP addresses in Network Access in Atlas (“Allow Access from anywhere”) as I’ve seen that Vercel does not have a particular IP address but still getting the same issue.Any idea what could be causing the issue?", "username": "Laekipia" }, { "code": "useNewUrlParseruserUnifiedTopologyconst client = new MongoClient(uri);\nclient.on('serverHeartbeatStarted', (event) => {\n console.log(event);\n});\nclient.on('serverHeartbeatSucceeded', (event) => {\n console.log(event);\n});\nclient.on('serverHeartbeatFailed', (event) => {\n console.log(event);\n});\nawait client.connect();\n", "text": "The code in general is correct - if you are using the 4.x driver you can remove the useNewUrlParser and userUnifiedTopology options from the connect call. I would double check the URI env variable is correct and that the function can actually reach the cluster. Also it’s usually best to cache the connected client in the initialization phase as we describe here: https://www.mongodb.com/docs/atlas/best-practices-connecting-from-vercel/. I’d also check the default timeout of the function and maybe it needs more time for the connection phase. 
Another thing to potentially help debug is to listen to the monitor heartbeats and see if there is a potential issue there.", "username": "Durran_Jordan" }, { "code": "mongodb+srv://<username>:<password>@cluster0.2xyeu.mongodb.net/?retryWrites=true&w=majorityServerHeartbeatStartedEvent {\n connectionId: 'cluster0-shard-00-02.2xyeu.mongodb.net:27017'\n}\nServerHeartbeatStartedEvent {\n connectionId: 'cluster0-shard-00-00.2xyeu.mongodb.net:27017'\n}\nServerHeartbeatStartedEvent {\n connectionId: 'cluster0-shard-00-01.2xyeu.mongodb.net:27017'\n}\nules/mongodb/lib/utils.js:587:9)\n}\n2022-06-05T10:02:01.926Z\tdc5eee5e-9613-42b7-8193-9035eaff8971\tERROR\tMongoServerSelectionError: Server selection timed out after 30000 ms\n at Timeout._onTimeout (/var/task/node_modules/mongodb/lib/sdam/topology.js:312:38)\n at listOnTimeout (internal/timers.js:557:17)\n at processTimers (internal/timers.js:500:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'cluster0-shard-00-00.2xyeu.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-01.2xyeu.mongodb.net:27017' => [ServerDescription],\n 'cluster0-shard-00-02.2xyeu.mongodb.net:27017' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: 'atlas-c5yrqw-shard-0',\n logicalSessionTimeoutMinutes: undefined\n }\n}\n", "text": "Hi @Durran_Jordan, thanks for your reply!I’ve doubled checked the uri and it looks like this (with my username and password):mongodb+srv://<username>:<password>@cluster0.2xyeu.mongodb.net/?retryWrites=true&w=majorityIt’s the same uri I used from my localhost and I was able to connect fine from there so I don’t think that’d be the issue.I’ve added the 3 monitor heartbeats listeners in my code. I get the following console logs:I also sometimes see this error popping up:Does this Timeout message shed any light on the cause of the issue?Many thanks for your help!", "username": "Laekipia" }, { "code": "await client.connect()", "text": "@Durran_Jordan are you able to help at all?Since my last message, I’ve implemented the new Vercel integration but I’m experiencing the same issue. The MONGODB_URI produced automatically through the integration looks fine, but the connection hangs on await client.connect().I’ve been stuck on this for months now and would really appreciate some help in identifying where the issue may come from!Many thanks in advance!", "username": "Laekipia" }, { "code": "", "text": "Have you tried using this tutorial. [How to Integrate MongoDB Into Your Next.js App | MongoDB](https://How to Integrate MongoDB Into Your Next.js App) There is file that makes the connection to the atlas and makes the cache.Good Luck.", "username": "Mariano_Miro" }, { "code": "", "text": "Hey, I’m experiencing the same exact problem on the same exact place.The weird thing in my case is that when I’m doing actions on my Front End that trigger a DB update, the connection is working perfect, I only have a DB connection problem when receiving Stripe’s events to my webhooks endpoint, and trying to connect to Mongo and update my DB.I have tried Vercel’s Mongo integration, and all the tutorials above, same problem every time.If anyone knows a solution for this problem, please help. ", "username": "Asaf_Jacovchak1" }, { "code": "", "text": "Hey @Laekipia , @Durran_Jordan, is there any solution for this problem?", "username": "Asaf_Jacovchak1" } ]
Connecting to MongoDB cluster from NextJS app hosted on Vercel
2022-05-16T14:07:01.683Z
Connecting to MongoDB cluster from NextJS app hosted on Vercel
6,660
null
[]
[ { "code": "", "text": "Can someone tell me if MongoDB provides this type of crash consistency - Suppose there are 3 shards - A, B and C, in a cluster. And we are executing a big batch “atomic” update query that will update few rows on each of the shard - A, B and C. Now, suppose update on shard A is done, but before B and C are executed, both those nodes are crashed. Now, when all 3 nodes comes back, will they have updates available on A only without B and C updates? Or how? In general, how this scenario will be handled?", "username": "Anand_Kayande" }, { "code": "", "text": "I’m not sure how mongodb implements cross shard transaction internally. But if it claims (which it is) the transaction across shards are atomic meaning all or nothing , it has to ensure that eventually all happen or nothing happens.In terms of when the state will be in sync, no idea.You can try reading concept of two/three phase commit.", "username": "Kobe_W" }, { "code": "", "text": "Thanks @Kobe_W. Another question on similar lines - Suppose MongoDB is having a big “batch atomic transaction” - updating 3 shards A, B and C. Now, after A is updated and before B and C are updated, if both shards B and C are down, what would happen? It will roll back updates on A or what? How MongoDB handles such scenario? Any official document supporting it?", "username": "Anand_Kayande" }, { "code": "", "text": "after A is updated and before B and C are updated, if both shards B and C are down, what would happen? It will roll back updates on A or what?If the transaction should succeed, then the transaction coordinator should ensure once B and C are back online, the change will be made.If the transaction should be aborted, then the transaction coordinator should ensure once B and C are back online, any changes are reverted.cross node transaction is difficult. Nodes can crash at any time, and it may take a long time for those nodes to be come in sync again. (e.g. imagine what happens if B and C are never back online? or back after 1 year?).", "username": "Kobe_W" }, { "code": "", "text": "Thanks again @Kobe_W. That sounds perfect.In existing MongoDB implementation i.e. 4.2 onwards for its SSPL version, how this scenario is handled?I mean, did they follow part 1 of what you mentioned (i.e. If the transaction should succeed) or part 2 (i.e. if transaction should be aborted)?Basically, I want to know if MongoDB maintain the “atomicity” (all or none) of distributed batch transaction across the shards? I am asking for MongoDB SSPL and not for Atlas. For Atlas, I imagine they maintain the atomicity.", "username": "Anand_Kayande" }, { "code": "", "text": "Secondly, how do I test this scenario in-house? Though it sounds simple, it is not to reproduce. Any thoughts?Any official document from MongoDB on this? I tried, but couldn’t find any.", "username": "Anand_Kayande" }, { "code": "", "text": "I’m not sure how cross-shard transactions are implemented internally by mongodb. However, if it asserts—as it does—that the transaction between shards is atomic, file meaning all or nothing, then it must make sure that eventually all occur or nothing occurs.\nI have no idea when the state will be synchronized.\nTry reading up on the two- or three-phase commit paradigm.", "username": "Top1_seo_N_A" } ]
Crash Consistency for a batch update query
2023-08-23T06:13:55.276Z
Crash Consistency for a batch update query
625
null
[ "aggregation" ]
[ { "code": "", "text": "Hello,\nI`m working on some aggregation pipeline.\nMy previous step end with two following documents:\n{ x: 1, y: 60, z: 111}\nand\n{ x: 1, y: 90, z: 222}i`m using x as id and wanna combine two document to be like that:\n{ x: 1, y1: { z: 111 }, y2: { z: 222} }any help would be appreciatedI`ve been trying to group them by X value; but at the end still have array; and after unwind step I back to two documents.", "username": "Daniel_Sorkin" }, { "code": "db.aggregate([\n {\n // here are your samples documents\n $documents: [\n { x: 1, y: 60, z: 111},\n { x: 1, y: 90, z: 222}\n ],\n },\n {\n $group: {\n _id: '$x',\n y1: {\n $first: '$z'\n },\n y2: {\n $last: '$z'\n }\n }\n },\n // check the output of the $group stage\n // maybe you won't need $addFields stage\n {\n $addFields: {\n y1: {\n z: '$y1'\n },\n y2: {\n z: '$y2'\n }\n }\n }\n]);\n", "text": "Hello, @Daniel_Sorkin ! It’s been a while since your last post You can try this solution:", "username": "slava" }, { "code": "", "text": "thank you for your help", "username": "Top1_seo_N_A" } ]
Combine two documents
2023-08-28T11:36:46.067Z
Combine two documents
457
null
[ "node-js", "replication", "mongoose-odm", "compass", "mongodb-shell" ]
[ { "code": "mongosh --host \"127.0.0.1:27031\" -u \"m103-admin\" -p \"m103-pass\" --authenticationDatabase \"admin\"\n\nmongosh --host \"127.0.0.1:27032\" -u \"m103-admin\" -p \"m103-pass\" --authenticationDatabase \"admin\"\nmongodb://m103-admin:[email protected]:27031/\n\nmongodb://m103-admin:[email protected]:27032/?replicaSet=m103-example&serverSelectionTimeoutMS=2000&authSource=admin&readPreference=secondary\n", "text": "Hi, basically I have a replica set with 3 nodes (1 Primary and 2 Secondaries).\nI want to make connection to both Primary (port 27031) and 1 Secondary (port 27032) and in express query either of them depending upon use case. Basically 1 express app with multiple connections.I am able to connect to them in mongosh usingI am also able to connect to them in MongoDb Compass usingBut I am struggling to create 2 separate connections in my Express app using Mongoose.\nSort of like 2 connections objects that I can use throughout codebase.I read below Mongoose doc of multiple connections but I am confused as it only provides 1 variable ‘conn’\nAnd I am not able to use multiple createConnection()\nhttps://mongoosejs.com/docs/connections.html#multiple_connectionsIt will be really helpful if someone has some suggestions or experienced something similar.", "username": "Naman_Saxena1" }, { "code": "const express = require('express');\nconst cors = require('cors');\nconst mongoose = require('mongoose');\n\nconst startServer = async () => {\n const app = express();\n \n // Middlewares\n app.use(cors());\n app.options('*', cors());\n app.use(express.urlencoded({ extended: true }));\n app.use(express.json());\n\n // Primary\n await mongoose.connect('mongodb://m103-admin:[email protected]:27031/?directConnection=true');\n\n // Secondary (Delayed by 10 mins)\n // await mongoose.connect('mongodb://m103-admin:[email protected]:27032/?directConnection=true&readPreference=secondary');\n\n // Secondary (No Delay)\n // await mongoose.connect('mongodb://m103-admin:[email protected]:27033/?directConnection=true&readPreference=secondary');\n\n const userRoute = require('./routes/user');\n app.use('/api/user', userRoute);\n\n const server = app.listen(1337, () => {\n console.log(\"Server started at port 1337\");\n });\n};\n\nstartServer();\nconst mongoose = require(\"mongoose\");\n\nconst UserSchema = new mongoose.Schema({\n name: { type: String }\n});\n\n// users is collection name\nconst practicedb = mongoose.connection.useDb('practicedb');\nconst usersModel = practicedb.model('users', UserSchema);\n\nmodule.exports = { usersModel };\nconst express = require('express')\nconst router = express.Router();\nconst User = require('../models/User')\n\n//Get all users\nrouter.get('/allUsers', async (req, res) => {\n console.log(\"Inside allUsers route!\");\n\n try\n {\n const usersInPrimaryReplica = await User.usersModel.find({})\n console.log(\"usersInPrimaryReplica: \",usersInPrimaryReplica)\n\n if(!usersInPrimaryReplica)\n {\n return {status:'error', error: 'Invalid login'}\n }\n\n res.json({status:'ok',allUserDetails: {usersInPrimaryReplica: usersInPrimaryReplica}}) \n }\n catch(err)\n {\n res.json({status:'error',allUserDetails: false,error:err}) \n }\n})\n\n\nmodule.exports = router\n", "text": "To add more context, this is a very simple code of 3 files.\nAnd I wanted to ask if it is possible that for some apis I access primary node, for some delayed node?\nLike multiple connections in 1 express app.\nIt would be really helpful if someone could suggest what modifications I can make to implement 
such thingindex.jsUser.jsuser.js", "username": "Naman_Saxena1" }, { "code": "const express = require('express');\nconst cors = require('cors');\nconst Mongoose = require('mongoose').Mongoose;\n\nlet instance1 = new Mongoose();\nlet instance2 = new Mongoose();\nlet instance3 = new Mongoose();\n\nconst startServer = async () => {\n const app = express();\n \n // Middlewares\n app.use(cors());\n app.options('*', cors());\n app.use(express.urlencoded({ extended: true }));\n app.use(express.json());\n\n // Primary\n await instance1.connect('mongodb://m103-admin:[email protected]:27031/?directConnection=true')\n \n // Secondary (Delayed by 10 mins)\n await instance2.connect('mongodb://m103-admin:[email protected]:27032/?directConnection=true&readPreference=secondary')\n \n // Secondary (No Delay)\n await instance3.connect('mongodb://m103-admin:[email protected]:27033/?directConnection=true&readPreference=secondary')\n\n const userRoute = require('./routes/user');\n app.use('/api/user', userRoute);\n\n const server = app.listen(1337, () => {\n console.log(\"Server started at port 1337\");\n });\n};\n\nstartServer();\n\nmodule.exports = {\n instance1,\n instance2,\n instance3\n}\nconst mongoose = require(\"mongoose\");\nconst { instance1, instance2, instance3 } = require('../index')\n\nconst UserSchema = new mongoose.Schema({\n name: { type: String }\n});\n\n// users is collection name\nconst practicedb = instance1.connection.useDb('practicedb');\nconst usersModel = practicedb.model('users', UserSchema);\n\nmodule.exports = { usersModel };\nconst express = require('express')\nconst router = express.Router();\nconst User = require('../models/User')\n\n//Get all users\nrouter.get('/allUsers', async (req, res) => {\n console.log(\"Inside allUsers route!\");\n\n try\n {\n const usersInReplica = await User.usersModel.find({})\n console.log(\"usersInReplica: \",usersInReplica)\n\n res.json({status:'ok',allUserDetails: {usersInReplica: usersInReplica}}) \n }\n catch(err)\n {\n res.json({status:'error',allUserDetails: false,error:err}) \n }\n})\n\n\nmodule.exports = router\n", "text": "Thanks to this Stack Overflow question I understood it.\nAdding details in case someone gets same doubt in future.These are the changes I made in my code, can now use multiple instances in express-index.jsUser.jsuser.js", "username": "Naman_Saxena1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Make multiple connections in Express using Mongoose
2023-09-15T15:21:06.280Z
Make multiple connections in Express using Mongoose
453
null
[ "python" ]
[ { "code": "", "text": "I had a database created in an Atlas cluster and populated with various collections. When I logged into Atlas, this database had disappeared. However, I can still access it externally (for example, by making a call from a Python code)", "username": "Juan_Moreno_Nadales" }, { "code": "", "text": "", "username": "Bhavya_Bhatt" }, { "code": "", "text": "Hi @Juan_Moreno_Nadales,\nMake sure the database name is spelled correctly (respecting upper and lower case)Regards", "username": "Fabio_Ramohitaj" } ]
Database disappeared from Atlas Cluster
2023-09-15T09:38:48.617Z
Database disappeared from Atlas Cluster
320
null
[ "cxx" ]
[ { "code": "libbson-1.0Config.cmake\nlibbson-1.0-config.cmake\n", "text": "Hi, i’m currently trying to compile mongocxx- driver on Window 10 but I’m having trouble understanding how to resolve some errors. I search in all directory but i don’t found this libbson-1.0. I followed the instruction on this page “Windows” and the c-driver have been successfully installed. Below, I’ve reported the error that is being generated. I’ve seen a topic where a similar issue was discussed, but it was resolved in a Linux environment, I apologize in advance if I have opened another similar topic.‘’’\n– Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.19045.\n– No build type selected, default is Release\n– Auto-configuring bsoncxx to use boost std library polyfills since C++17 is inactive and compiler is MSVC\nbsoncxx version: 3.8.0\nCMake Error at src/bsoncxx/CMakeLists.txt:114 (find_package):\nBy not providing “Findlibbson-1.0.cmake” in CMAKE_MODULE_PATH this project\nhas asked CMake to find a package configuration file provided by\n“libbson-1.0”, but CMake did not find one.Could not find a package configuration file provided by “libbson-1.0”\n(requested version 1.24.0) with any of the following names:Add the installation prefix of “libbson-1.0” to CMAKE_PREFIX_PATH or set\n“libbson-1.0_DIR” to a directory containing one of the above files. If\n“libbson-1.0” provides a separate development package or SDK, be sure it\nhas been installed.– Configuring incomplete, errors occurred!\n‘’’", "username": "walter_trupia" }, { "code": "", "text": "Hi @walter_trupia, could you please try this step by step guide to build and install C/C++ driver on windows - Getting Started with MongoDB and C++ | MongoDB ?", "username": "Rishabh_Bisht" }, { "code": "", "text": "I tried this guide earlier this afternoon, but unfortunately, it didn’t work either. I used the specified versions of each software, but it still generated several errors as soon as I ran this command: “cmake … -G “Visual Studio 17 2022” -A x64 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=”/Zc:__cplusplus /EHsc\" -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driver.\" I will try again this afternoon", "username": "walter_trupia" }, { "code": "libbson-1.0Config.cmake\nlibbson-1.0-config.cmake\n", "text": "I have tried the guide again, following each step carefully. I used the versions of the software specified in the guide, and the only difference is that I am using Windows 10 instead of Windows 11.– Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.19045.\n– No build type selected, default is Release\n– Auto-configuring bsoncxx to use C++17 std library polyfills since C++17 is active and user didn’t specify otherwise\nbsoncxx version: 3.7.0\nCMake Error at src/bsoncxx/CMakeLists.txt:113 (find_package):\nBy not providing “Findlibbson-1.0.cmake” in CMAKE_MODULE_PATH this project\nhas asked CMake to find a package configuration file provided by\n“libbson-1.0”, but CMake did not find one.Could not find a package configuration file provided by “libbson-1.0”\n(requested version 1.13.0) with any of the following names:Add the installation prefix of “libbson-1.0” to CMAKE_PREFIX_PATH or set\n“libbson-1.0_DIR” to a directory containing one of the above files. 
If\n“libbson-1.0” provides a separate development package or SDK, be sure it\nhas been installed.– Configuring incomplete, errors occurred!This is the output of this script: cmake … -G “Visual Studio 17 2022” -A x64 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=“/Zc:__cplusplus /EHsc” -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driver", "username": "walter_trupia" }, { "code": "", "text": "How is the C driver installed? Are you using the same tutorial to build and install C driver as well?", "username": "Rishabh_Bisht" }, { "code": "", "text": "Yes, I followed the same tutorial for the C drivers as well.", "username": "walter_trupia" }, { "code": "C:\\mongo-c-driverCMAKE_PREFIX_PATHlibbson-1.0Config.cmake", "text": "Can you check whether C driver is installed at C:\\mongo-c-driver? This is the path we are providing to CMAKE_PREFIX_PATH variable and should contain the libbson-1.0Config.cmake", "username": "Rishabh_Bisht" }, { "code": "cmake .. -G \"Visual Studio 17 2022\" -A x64 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus /EHsc\" -DCMAKE_PREFIX_PATH=\"C:\\mongo-c-driver\" -DCMAKE_INSTALL_PREFIX=\"C:\\mongo-cxx-driver\"", "text": "You also try encapsulating the prefix and install path in quotes, ie.cmake .. -G \"Visual Studio 17 2022\" -A x64 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus /EHsc\" -DCMAKE_PREFIX_PATH=\"C:\\mongo-c-driver\" -DCMAKE_INSTALL_PREFIX=\"C:\\mongo-cxx-driver\"", "username": "Rishabh_Bisht" }, { "code": "", "text": "In the path C:\\mongo-c-driver-1.23.0\\src\\libbson or \\libmongoc, you can find the files that I listed.", "username": "walter_trupia" }, { "code": "", "text": "I tried running it like this:cmake … -G “Visual Studio 17 2022” -A x64 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=“/Zc:__cplusplus /EHsc” -DCMAKE_PREFIX_PATH=“C:\\mongo-c-driver-1.23.0” -DCMAKE_INSTALL_PREFIX=“C:\\Repos\\mongo-cxx-driver-r3.7.0”And like this:cmake … -G “Visual Studio 17 2022” -A x64 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=“/Zc:__cplusplus /EHsc” -DCMAKE_PREFIX_PATH=“C:\\mongo-c-driver” -DCMAKE_INSTALL_PREFIX=“C:\\Repos\\mongo-cxx-driver”Because the folder containing the C drivers is “mongo-c-driver-1.23.0,” while the folder for the C++ drivers is “C/Repos/mongo-cxx-driver-r3.7.0.”", "username": "walter_trupia" }, { "code": "CMAKE_PREFIX_PATHCMAKE_PREFIX_PATHC:\\mongo-c-driver", "text": "The CMAKE_PREFIX_PATH should point to the directory where C driver is installed. The directory should look something like below, with include, bin, lib and share folder. The needed cmake file is present inside the lib folder:\n\ncDriverInstalled904×552 22.6 KB\nIt seems like you are currently pointing the CMAKE_PREFIX_PATH to the directory where the C driver source code is present, which doesn’t seem right. Please figure out the location where the C driver was installed. Ideally it should be C:\\mongo-c-driver if one followed the steps in the tutorial.", "username": "Rishabh_Bisht" }, { "code": "", "text": "Yes, you were right; I managed to solve it by specifying the path where I had installed the mongo-c-driver. Thank you for the support.", "username": "walter_trupia" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Problems trying to compile MongoCxx on Windows
2023-09-13T17:10:27.767Z
Problems trying to compile MongoCxx on Windows
420
null
[]
[ { "code": "", "text": "Hello Team,We had an issue a few days ago in production where data was stuck in Kafka, and those data were moved to Mongo DB in bulk (in general, the data was limited before to Mongo DB), which caused a change in plan, and it seems like that process took all the write tickets available in Mongo to process.We try to produce the same scenario in our development setup, and after improving the query by removing the nulls and adding a proper index, we still see that the ticket count at the end reaches 0.the job process is basically a k8s job deployment which inserts those kafka records into two collections and updates one field of the collection for new inserted documentscan you provide some guide whats wrong ?Regards\nDash", "username": "Dasharath_Dixit" }, { "code": "", "text": "Connect Altas Support", "username": "Bhavya_Bhatt" } ]
Ticket count decreases to 0 after a job is deployed that updates and modifies all newly inserted records
2023-09-15T10:05:37.294Z
Ticket count decreases to 0 after a job is deployed that updates and modifies all newly inserted records
188
null
[]
[ { "code": "mongodb://{{username}}:{{password}}@<ip-addr>:27017/?tls=false&authSource=<test-authSource>\nnetnet:\n port: 27017\n bindIp: 0.0.0.0\n\n tls:\n mode: disabled\ntlsssl{\"t\":{\"$date\":\"2023-09-17T07:46:06.849+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"initandlisten\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"0.0.0.0\",\"port\":27017,\"tls\":{\"mode\":\"disabled\"}},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"security\":{\"authorization\":\"enabled\"},\"storage\":{\"dbPath\":\"/var/lib/mongodb\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n\n\"ctx\":\"listener\",\"msg\":\"Waiting for connections\",\"attr\":{\"port\":27017,\"ssl\":\"off\"}\n", "text": "Update\nTurns out it was disabling TLS but the fact that I’m not super familiar with Wireshark meant I was misinterpreting some of the data. I had to right click the packets > Decode As > then specifically tell WireShark to decode as MONGO packets. Previously WireShark was looking at this as generic TCP, and I just assumed it wasn’t providing any useful info due to TLS, however after explicitly telling WireShark what to look for in these packets I’m able to deconstruct everything more clearly.Thanks!Solved - Update aboveI’m currently attempting to reproduce some odd client behaviour and would like to validate how the connection is both leaving by client and arriving at my test MongoDB instance. The debug logs I’ve been able to set have not provided much more insight to this, so as a next step I was hoping to observe this in a bidirectional packet capture.The issue is, MongoDB and the client seem to be forcing TLS, rendering the PCAPs fairly useless.The connection string used by the client is as follows:I tried to edit the net parameter to disable TLS as well:I’ve tried the above with both tls and ssl options.However the packet captures show it’s still using TLS/SSL.Mongo’s startup logs do tell me that TLS is disabled:Any ideas what I might be doing wrong here? Is this by design, or am I missing something?", "username": "conor" }, { "code": " # tls:\n # mode: disabled\n", "text": "Do Line Comment in tls in mongod.conf file like below", "username": "Bhavya_Bhatt" } ]
Disabling TLS for testing
2023-09-17T07:59:11.542Z
Disabling TLS for testing
261
null
[ "database-tools", "containers", "backup" ]
[ { "code": "", "text": "Dear friends,\nI migrated my microservice using MongoDB as its permanence layer to a Docker Compose dockerized solution.\nThe architecture has an official MongoDB 6 container running which is using a Volume (not a bind mount to the host filesystem as before).I am reading several approaches to backing up daily but am unsure of the best practice to follow.Ideally I want to setup a daily cron, stop the app and/or freeze the DB, perform a mongodump on a mounted bind directory, then resume normal operations, all of which possibly with a log for failures.Any suggestions or better approaches? thanks a lot in advancePS What I like about archive dump is the ease of transferring them to another host possibily running a non dockerized mongodb and restoring there", "username": "Robert_Alexander" }, { "code": "", "text": "Nothing really changes because it is running in a container.All the backup methods still apply.Filesystem snapshots are one of the easiest to execute and the quickest to restore but that will be reliant on your underlying storage or platform.Mongodump does not scale well as the deployment grows in size as well as the potential performance impact mentioned in the manual. Restoring will build the indexes fresh as they are not included in the mongodump, this can take a considerable amount of time.Whatever you chose make sure you regularly test it to ensure it is valid and to know if it meets your Recovery Time Objective or not.", "username": "chris" } ]
Proper way of backing up a Docker Volume used by MongoDB
2023-09-15T06:06:18.853Z
Proper way of backing up a Docker Volume used by MongoDB
348
null
[ "database-tools", "backup" ]
[ { "code": "--archivemongorestoreFailed: corruption found in archive; 858927154 is neither a valid bson length nor a archive terminator\nmongorestore -h localhost --archive=Dump.mongodump -vvvv\n2023-09-02T18:39:19.230+0000\tarchive prelude quiltmc_modmail.logs\n2023-09-02T18:39:19.230+0000\tarchive prelude quiltmc_modmail.config\n2023-09-02T18:39:19.230+0000\tarchive format version \"0.1\"\n2023-09-02T18:39:19.230+0000\tarchive server version \"6.0.2\"\n2023-09-02T18:39:19.230+0000\tarchive tool version \"100.6.0\"\n2023-09-02T18:39:19.232+0000\tpreparing collections to restore from\n2023-09-02T18:39:19.232+0000\tusing as dump root directory\n2023-09-02T18:39:19.232+0000\treading collections for database quiltmc_modmail in quiltmc_modmail\n2023-09-02T18:39:19.232+0000\tfound collection quiltmc_modmail.logs bson to restore to quiltmc_modmail.logs\n2023-09-02T18:39:19.232+0000\tfound collection metadata from quiltmc_modmail.logs to restore to quiltmc_modmail.logs\n2023-09-02T18:39:19.232+0000\tadding intent for quiltmc_modmail.logs\n2023-09-02T18:39:19.232+0000\tfound collection quiltmc_modmail.config bson to restore to quiltmc_modmail.config\n2023-09-02T18:39:19.232+0000\tfound collection metadata from quiltmc_modmail.config to restore to quiltmc_modmail.config\n2023-09-02T18:39:19.232+0000\tadding intent for quiltmc_modmail.config\n2023-09-02T18:39:19.243+0000\tdemux End\n2023-09-02T18:39:19.243+0000\tdemux finishing (err:corruption found in archive; 858927154 is neither a valid bson length nor a archive terminator)\n2023-09-02T18:39:19.243+0000\treceived from namespaceChan\n2023-09-02T18:39:19.243+0000\treading metadata for quiltmc_modmail.logs from archive 'Dump.mongodump'\n2023-09-02T18:39:19.244+0000\treading metadata for quiltmc_modmail.config from archive 'Dump.mongodump'\n2023-09-02T18:39:19.244+0000\trestoring up to 4 collections in parallel\n2023-09-02T18:39:19.244+0000\tstarting restore routine with id=3\n2023-09-02T18:39:19.244+0000\tending restore routine with id=3, no more work to do\n2023-09-02T18:39:19.244+0000\tstarting restore routine with id=1\n2023-09-02T18:39:19.244+0000\tending restore routine with id=1, no more work to do\n2023-09-02T18:39:19.244+0000\tstarting restore routine with id=0\n2023-09-02T18:39:19.244+0000\tending restore routine with id=0, no more work to do\n2023-09-02T18:39:19.244+0000\tstarting restore routine with id=2\n2023-09-02T18:39:19.244+0000\tending restore routine with id=2, no more work to do\n2023-09-02T18:39:19.244+0000\tbuilding indexes up to 4 collections in parallel\n2023-09-02T18:39:19.244+0000\tstarting index build routine with id=3\n2023-09-02T18:39:19.244+0000\trestoring indexes for collection quiltmc_modmail.logs from metadata\n2023-09-02T18:39:19.244+0000\tindex: &idx.IndexDocument{Options:primitive.M{\"default_language\":\"english\", \"language_override\":\"language\", \"name\":\"messages.content_text_messages.author.name_text_key_text\", \"textIndexVersion\":3, \"v\":2, \"weights\":primitive.M{\"key\":1, \"messages.author.name\":1, \"messages.content\":1}}, Key:primitive.D{primitive.E{Key:\"_fts\", Value:\"text\"}, primitive.E{Key:\"_ftsx\", Value:1}}, PartialFilterExpression:primitive.D(nil)}\n2023-09-02T18:39:19.244+0000\t\trun create Index command for indexes: messages.content_text_messages.author.name_text_key_text\n2023-09-02T18:39:19.245+0000\tstarting index build routine with id=0\n2023-09-02T18:39:19.245+0000\tno indexes to restore for collection quiltmc_modmail.config\n2023-09-02T18:39:19.245+0000\tending index build 
routine with id=0, no more work to do\n2023-09-02T18:39:19.245+0000\tstarting index build routine with id=1\n2023-09-02T18:39:19.245+0000\tending index build routine with id=1, no more work to do\n2023-09-02T18:39:19.245+0000\tstarting index build routine with id=2\n2023-09-02T18:39:19.245+0000\tending index build routine with id=2, no more work to do\n2023-09-02T18:39:19.248+0000\tending index build routine with id=3, no more work to do\n2023-09-02T18:39:19.248+0000\tFailed: corruption found in archive; 858927154 is neither a valid bson length nor a archive terminator\n2023-09-02T18:39:19.248+0000\t0 document(s) restored successfully. 0 document(s) failed to restore.\n8589271540x33323032202332303233 2D30372D 33315431 353A3035 3A30312E 3539322B 30303030 09646F6E 65206475 6D70696E 67207175 696C746D 635F6D6F 646D6169 6C2E6C6F 67732028 37313120 646F6375 6D656E74 73290D0A 45000000 02646200 10000000 7175696C 746D635F 6D6F646D 61696C00 02636F6C 6C656374 696F6E00 05000000 6C6F6773 0008454F 46000112 43524300 AD1DD5DB A30C5FCD 00FFFFFF FF\n2023-07-31T15:05:01.592+0000\tdone dumping quiltmc_modmail.logs (711 documents)\nEdbquiltmc_modmailcollectionlogsEOFCRC­ÕÛ£_Íÿÿÿÿ\n6.0.8100.6.0454F4600 01124352 43000000 00000000 000000FF FFFFFF\nEOF CRC ....454F4600 01124352 4300AD1D D5DBA30C 5FCD00FF FFFFFF\nEOF CRC . ... _. ....", "text": "I have a MongoDB dump (created with --archive) that I’m trying to restore to our database using mongorestore. However, when I try to do this, it fails with the following error:We received the dump from our former cloud provider, so I don’t know much about how it was created. I do know that it was created with Mongodump 100.6.0 and MongoDB 6.0.2, and I have tried restoring with that exact version, but it still fails.This outputted the following data:I have examined the dump in a hex editor, and I have found that the 32-bit integer 858927154 is equivalent to the hex 0x33323032, which, when encoded, translates to 2023. The database contains quite a few dates, so it’s hard to know exactly where the problematic data is in the file. The last date in the file occurs here:Which encodes to:I’m not sure if this is the cause, though, because the file has a similar “done dumping” message for each time the process finishes dumping a table.I did create a dump of another database for testing purposes, which I’ve verified that I can restore. It was created in a newer version of MongoDB (6.0.8 100.6.0), but I noticed that it lacked the “done dumping” messages. I also noticed that the test dump ended with:which showed in the hex editor as EOF CRC ...., meanwhile, the corrupted database shows aswhich showed in the hex editor as EOF CRC . ... _. 
.....I did try and replace the corrupted file’s file end with the working file’s, but this didn’t seem to work.I found absolutely nothing about this error online, so I wondered if anyone had any ideas about what I might try next.", "username": "Southpaw_1496" }, { "code": "2023-07-31T15:05:01.592+0000\tdone dumping quiltmc_modmail.logs (711 documents)(mongodump --archive 2>&1) >dump.archivemongorestore --drop --archive=dump.archive\n2023-09-02T18:47:07.203-0400\tpreparing collections to restore from\n2023-09-02T18:47:07.354-0400\tdemux finishing when there are still outs (1)\n2023-09-02T18:47:07.354-0400\treading metadata for test.foo from archive 'dump.archive'\n2023-09-02T18:47:07.354-0400\tno indexes to restore for collection test.foo\n2023-09-02T18:47:07.354-0400\tFailed: corruption found in archive; 858927154 is neither a valid bson length nor a archive terminator\n2023-09-02T18:47:07.354-0400\t0 document(s) restored successfully. 0 document(s) failed to restore.\n", "text": "Hi @Southpaw_1496Amazingly thorough investigation on your part.2023-07-31T15:05:01.592+0000\tdone dumping quiltmc_modmail.logs (711 documents)If this is encoded in the archive then the vendor has created the dump incorrectly. I’d ask them for a new one.What I think has happened is they managed to redirect stderr to stdout and pipe the whole thing to the output file.Possibly ran with stderr redirection over ssh? I’m just guessing.I could reproduce the situation like so:(mongodump --archive 2>&1) >dump.archive", "username": "chris" }, { "code": "", "text": "It’s almost certain that they will have deleted the database by now since we are no longer using their services, but thank you for confirming where the corruption lies.Do you think it’s feasible to edit these lines out and make a valid file?", "username": "Southpaw_1496" }, { "code": "vimbbe2023-.*\\n\\n", "text": "I tried one yesterday with success using vimbbe might be a good tool for this, I was not able to come up with a valid pattern so far though.Each valid data end with FFFFFFFF before the invalid data. Removing everything 2023-.*\\n and the final \\n at the end of file should see you right.", "username": "chris" }, { "code": "", "text": "Thanks so much!The cloud provider has offered to repair the dump for us even though they no longer have the original; I will send your advice to help them along ", "username": "Southpaw_1496" }, { "code": "2023-.*\\n32 30 32 33 2D 30 37 2D 33 31 54 31 35 3A 30 34 3A 35 39 2E 39 38 39 2B 30 30 30 30 09 77 72 69 74 69 6E 67 20 71 75 69 6C 74 6D 63 5F 6D 6F 64 6D 61 69 6C 2E 63 6F 6E 66 69 67 20 74 6F 20 61 72 63 68 69 76 65 20 6F 6E 20 73 74 64 6F 75 74 0D\nFailed: corruption found in archive; bson (size: 18186, byte: 54) doesn't end with a null byte\n00 00 FF FF FF FF\n", "text": "2023-.*\\nI removed the first occurrence as you recommended:and attempted to import it as a test. I figured that it would at least get the first document imported if it was doing it correctly. However, it failed with a different error:Could this be to do with the six bytes preceding the bytes I removed?", "username": "Southpaw_1496" }, { "code": "\\x32\\x30\\x32\\x33.......\\x0a", "text": "For sure only be removing:\n\\x32\\x30\\x32\\x33.......\\x0aI’m still working on this myself on and off for a repeatable solution.Some refs that may help are the archive format spec and the BSON spec.", "username": "chris" }, { "code": "", "text": "Do I need to remove the line breaks as well? 
The regex you provided didn’t match them when I put it into a regex tester, so I didn’t remove them.", "username": "Southpaw_1496" }, { "code": "\\x0a\\n", "text": "Yes. \\x0a or \\n should match that.", "username": "chris" }, { "code": "2023-09-03T18:57:28.787+0100\tFailed: quiltmc_modmail.config: error restoring from archive '/Users/southpaw/Docker/MongoDB/Working-dump.mongodump': reading bson input: error demultiplexing archive; archive io error\n", "text": "After removing the patterns you suggested in the file, the operation now fails with the following error:I will consult the specs you sent to see if anything sticks out and update with any progress.I don’t suppose there’s any way to know in what part of the file the error occurred?", "username": "Southpaw_1496" }, { "code": "kubectl execkubectl exec6D E2 99 81 66 00 00 00 10 63 6F 6E 63 75 72 72 65 6E 74 5F 63 6F 6C 6C 65 63 74 69 6F 6E 73 00 04 00 00 00 02 76 65 72 73 69 6F 6E 00 04 00 00 00 30 2E 31 00 02 73 65 72 76 65 72 5F 76 65 72 73 69 6F 6E 00 06 00 00 00 36 2E 30 2E 38 00 02 74 6F 6F 6C 5F 76 65 72 73 69 6F 6E 00 08 00 00 00 31 30 30 2E 36 2E 30 00 00 D3 02 00 00\nm♁fconcurrent_collectionsversion0.1server_version6.0.8tool_version100.6.0Ó\n6D E2 99 81 66 00 00 00 10 63 6F 6E 63 75 72 72 65 6E 74 5F 63 6F 6C 6C 65 63 74 69 6F 6E 73 00 04 00 00 00 02 76 65 72 73 69 6F 6E 00 04 00 00 00 30 2E 31 00 02 73 65 72 76 65 72 5F 76 65 72 73 69 6F 6E 00 06 00 00 00 36 2E 30 2E 38 00 02 74 6F 6F 6C 5F 76 65 72 73 69 6F 6E 00 08 00 00 00 31 30 30 2E 36 2E 30 00 00 0E 01 00 00 02 64 62 00 05 00 00 00 74 65 73 74 00 02 63 6F 6C 6C 65 63 74 69 6F 6E 00 0C 00 00 00 62 61 63 6B 75 70 2D 64 75 6D 70 00 02 6D 65 74 61 64 61 74 61 00 B3 00 00 00 7B 22 69 6E 64 65 78 65 73 22 3A 5B 7B 22 76 22 3A 7B 22 24 6E 75 6D 62 65 72 49 6E 74 22 3A 22 32 22 7D 2C 22 6B 65 79 22 3A 7B 22 5F 69 64 22 3A 7B 22 24 6E 75 6D 62 65 72 49 6E 74 22 3A 22 31 22 7D 7D 2C 22 6E 61 6D 65 22 3A 22 5F 69 64 5F 22 7D 5D 2C 22 75 75 69 64 22 3A 22 65 38 37 32 30 34 37 66 64 37 34 39 34 61 39 37 61 63 30 36 61 30 62 39 65 38 35 35 65 35 35 36 22 2C 22 63 6F 6C 6C 65 63 74 69 6F 6E 4E 61 6D 65 22 3A 22 62 61 63 6B 75 70 2D 64 75 6D 70 22 2C 22 74 79 70 65 22 3A 22 63 6F 6C 6C 65 63 74 69 6F 6E 22 7D 00 10 73 69 7A 65 00 00 00 00 00 02 74 79 70 65 00 0B 00 00 00 63 6F 6C 6C 65 63 74 69 6F 6E 00 00\nm♁fconcurrent_collectionsversion0.1server_version6.0.8tool_version100.6.0dbtestcollectionbackup-dumpmetadata³{\"indexes\":[{\"v\":{\"$numberInt\":\"2\"},\"key\":{\"_id\":{\"$numberInt\":\"1\"}},\"name\":\"_id_\"}],\"uuid\":\"e872047fd7494a97ac06a0b9e855e556\",\"collectionName\":\"backup-dump\",\"type\":\"collection\"}sizetypecollection\n", "text": "I have learned that the corrupted file was created using kubectl exec, and so I created a valid backup file of a test database, and also one using kubectl exec to try and compare the two. However, the output of the dump command seems to change even when given the exact same data, making comparing difficult.As a test, I took two dumps of the same database. 
The first dump began as follows:which encodes toHowever, the dump I took immediately after starts like this:which encodes toTo more plainly illustrate, here is a screenshot of the diff from my hex editor\n\nScreenshot 2023-09-08 at 8.39.18 pm1888×1422 685 KB\nI will need to do more investigation as to why this might be happening.", "username": "Southpaw_1496" }, { "code": "kubectl execttyttykubectl exec --tty podname mongodump --archive > yourdump.archivekubectl exec podname mongodump --archive > yourdump.archive", "text": "Cannot reproduce. What are the commands you are running for the dump?I have learned that the corrupted file was created using kubectl execYes. I think they must have run with the tty flag and redirected to the output file. Running without the tty flag will have resulted in a valid archive.kubectl exec --tty podname mongodump --archive > yourdump.archivevskubectl exec podname mongodump --archive > yourdump.archive", "username": "chris" }, { "code": "mongodump -u root -p Fw0NRvwNvgFSyitQYqQS --archive=Test-Dump-1.mongodump\nmongodump -u root -p Fw0NRvwNvgFSyitQYqQS --archive=Test-Dump-2.mongodump\n", "text": "What are the commands you are running for the dump?followed immediately byI tested in a database inside Docker on my local machine that wasn’t connected to anything. It couldn’t have been modified in between the two dumps.Edit: I have rolled the root password of my database, and it was isolated and couldn’t have been connected to externally anyway.", "username": "Southpaw_1496" }, { "code": "#sha256 sum of 10 dumps of the same database.\nsha256sum dump? | sort\n4329c4c7eea4114be68e8d70b5bedc1481a24d46ded445ef62cbf42d0ae1964c dump3\n5bb5299bfce2aeee2ecb144120ff3d3b8ef4cb85bc8ab4a247aaaebaa137b700 dump0\n5bb5299bfce2aeee2ecb144120ff3d3b8ef4cb85bc8ab4a247aaaebaa137b700 dump9\n98c831a6564f11f8d5eeed9544fb1a612135ea217f740920335d35e4f7e5e8ee dump2\n98c831a6564f11f8d5eeed9544fb1a612135ea217f740920335d35e4f7e5e8ee dump4\n98c831a6564f11f8d5eeed9544fb1a612135ea217f740920335d35e4f7e5e8ee dump5\nbd6682c623606a38f0bb9f735638034bfaba147513e7e40f180e908e1ffe9c63 dump1\nec70a7541713c374a24f00d06b0c9fcc5c31442b34c12006412db2176b205a1e dump8\neecaac98a78941f95aa215187515f493c7e44f0be56c715b36a0db01b2ac5dbe dump6\neecaac98a78941f95aa215187515f493c7e44f0be56c715b36a0db01b2ac5dbe dump7\n", "text": "The difference is likely the order the collections are dumped.Collections can be dumped in parallel and their contents can also switch between them during the dump.Don’t expect different dumps of the same data to be the same.", "username": "chris" }, { "code": "-vvvv2023-09-14T07:51:47.243+0000\tusing write concern: &{majority false 0}\n2023-09-14T07:51:47.265+0000\tchecking options\n2023-09-14T07:51:47.265+0000\t\tdumping with object check disabled\n2023-09-14T07:51:47.265+0000\twill listen for SIGTERM, SIGINT, and SIGKILL\n2023-09-14T07:51:47.266+0000\tconnected to node type: standalone\n2023-09-14T07:51:47.282+0000\tarchive prelude quiltmc_modmail.logs\n2023-09-14T07:51:47.282+0000\tarchive prelude quiltmc_modmail.config\n2023-09-14T07:51:47.282+0000\tarchive format version \"0.1\"\n2023-09-14T07:51:47.282+0000\tarchive server version \"6.0.2\"\n2023-09-14T07:51:47.282+0000\tarchive tool version \"100.6.0\"\n2023-09-14T07:51:47.283+0000\tpreparing collections to restore from\n2023-09-14T07:51:47.283+0000\tusing as dump root directory\n2023-09-14T07:51:47.283+0000\treading collections for database quiltmc_modmail in quiltmc_modmail\n2023-09-14T07:51:47.283+0000\tfound collection 
quiltmc_modmail.logs bson to restore to quiltmc_modmail.logs\n2023-09-14T07:51:47.283+0000\tfound collection metadata from quiltmc_modmail.logs to restore to quiltmc_modmail.logs\n2023-09-14T07:51:47.283+0000\tadding intent for quiltmc_modmail.logs\n2023-09-14T07:51:47.283+0000\tfound collection quiltmc_modmail.config bson to restore to quiltmc_modmail.config\n2023-09-14T07:51:47.283+0000\tfound collection metadata from quiltmc_modmail.config to restore to quiltmc_modmail.config\n2023-09-14T07:51:47.283+0000\tadding intent for quiltmc_modmail.config\n2023-09-14T07:51:47.294+0000\tdemux namespaceHeader: {quiltmc_modmail config false 0}\n2023-09-14T07:51:47.294+0000\treceived quiltmc_modmail.config from namespaceChan\n2023-09-14T07:51:47.294+0000\tfirst non special collection quiltmc_modmail.config found. The demultiplexer will handle it and the remainder\n2023-09-14T07:51:47.294+0000\treading metadata for quiltmc_modmail.logs from archive 'backup-dump.mongodump'\n2023-09-14T07:51:47.295+0000\treading metadata for quiltmc_modmail.config from archive 'backup-dump.mongodump'\n2023-09-14T07:51:47.295+0000\trestoring up to 4 collections in parallel\n2023-09-14T07:51:47.295+0000\tstarting restore routine with id=3\n2023-09-14T07:51:47.295+0000\tstarting restore routine with id=1\n2023-09-14T07:51:47.295+0000\tstarting restore routine with id=2\n2023-09-14T07:51:47.295+0000\tstarting restore routine with id=0\n2023-09-14T07:51:47.295+0000\tdemux Open for quiltmc_modmail.config\n2023-09-14T07:51:47.297+0000\tdemux End\n2023-09-14T07:51:47.299+0000\trestoring to existing collection quiltmc_modmail.config without dropping\n2023-09-14T07:51:47.299+0000\tcollection quiltmc_modmail.config already exists - skipping collection create\n2023-09-14T07:51:47.299+0000\trestoring quiltmc_modmail.config from archive 'backup-dump.mongodump'\n2023-09-14T07:51:47.299+0000\tusing 1 insertion workers\n2023-09-14T07:51:47.311+0000\tfinished restoring quiltmc_modmail.config (0 documents, 0 failures)\n2023-09-14T07:51:47.311+0000\tFailed: quiltmc_modmail.config: error restoring from archive 'backup-dump.mongodump': reading bson input: error demultiplexing archive; archive io error\n2023-09-14T07:51:47.311+0000\t0 document(s) restored successfully. 0 document(s) failed to restore.\n(err:corruption found in archive; ParserConsumer.BodyBSON() ( EOF ))", "text": "I’m still struggling with the demultiplexing archive error. I took a -vvvv log to make sure I wasn’t missing anything.The only things I could find online about the error seemed to deal with networked filesystems that were too slow for the restore, but I am running a Docker container locally, so that can’t be the issue. 
I did find TOOLS-2458 on the MongoDB bug tracker, but that also had (err:corruption found in archive; ParserConsumer.BodyBSON() ( EOF )), which my log does not have.I will continue my investigations.", "username": "Southpaw_1496" }, { "code": "2023-09-14T09:34:12.221+0000\tusing write concern: &{majority false 0}\n2023-09-14T09:34:12.265+0000\tchecking options\n2023-09-14T09:34:12.265+0000\t\tdumping with object check disabled\n2023-09-14T09:34:12.265+0000\twill listen for SIGTERM, SIGINT, and SIGKILL\n2023-09-14T09:34:12.266+0000\tconnected to node type: standalone\n2023-09-14T09:34:12.281+0000\tarchive prelude quiltmc_modmail.logs\n2023-09-14T09:34:12.282+0000\tarchive prelude quiltmc_modmail.config\n2023-09-14T09:34:12.282+0000\tarchive format version \"0.1\"\n2023-09-14T09:34:12.282+0000\tarchive server version \"6.0.2\"\n2023-09-14T09:34:12.282+0000\tarchive tool version \"100.6.0\"\n2023-09-14T09:34:12.289+0000\tpreparing collections to restore from\n2023-09-14T09:34:12.290+0000\tusing as dump root directory\n2023-09-14T09:34:12.290+0000\treading collections for database quiltmc_modmail in quiltmc_modmail\n2023-09-14T09:34:12.290+0000\tfound collection quiltmc_modmail.logs bson to restore to quiltmc_modmail.logs\n2023-09-14T09:34:12.290+0000\tfound collection metadata from quiltmc_modmail.logs to restore to quiltmc_modmail.logs\n2023-09-14T09:34:12.290+0000\tadding intent for quiltmc_modmail.logs\n2023-09-14T09:34:12.290+0000\tfound collection quiltmc_modmail.config bson to restore to quiltmc_modmail.config\n2023-09-14T09:34:12.290+0000\tfound collection metadata from quiltmc_modmail.config to restore to quiltmc_modmail.config\n2023-09-14T09:34:12.290+0000\tadding intent for quiltmc_modmail.config\n2023-09-14T09:34:12.311+0000\tdemux namespaceHeader: {quiltmc_modmail config false 0}\n2023-09-14T09:34:12.311+0000\treceived quiltmc_modmail.config from namespaceChan\n2023-09-14T09:34:12.311+0000\tfirst non special collection quiltmc_modmail.config found. 
The demultiplexer will handle it and the remainder\n2023-09-14T09:34:12.311+0000\treading metadata for quiltmc_modmail.logs from archive 'backup-dump.mongodump'\n2023-09-14T09:34:12.312+0000\treading metadata for quiltmc_modmail.config from archive 'backup-dump.mongodump'\n2023-09-14T09:34:12.312+0000\trestoring up to 4 collections in parallel\n2023-09-14T09:34:12.312+0000\tstarting restore routine with id=3\n2023-09-14T09:34:12.312+0000\tstarting restore routine with id=1\n2023-09-14T09:34:12.312+0000\tstarting restore routine with id=0\n2023-09-14T09:34:12.312+0000\tstarting restore routine with id=2\n2023-09-14T09:34:12.312+0000\tdemux Open for quiltmc_modmail.config\n2023-09-14T09:34:12.314+0000\tdemux End\n2023-09-14T09:34:12.317+0000\trestoring to existing collection quiltmc_modmail.config without dropping\n2023-09-14T09:34:12.317+0000\tcollection quiltmc_modmail.config already exists - skipping collection create\n2023-09-14T09:34:12.317+0000\trestoring quiltmc_modmail.config from archive 'backup-dump.mongodump'\n2023-09-14T09:34:12.318+0000\tusing 1 insertion workers\n2023-09-14T09:34:12.328+0000\tfinished restoring quiltmc_modmail.config (0 documents, 0 failures)\n2023-09-14T09:34:12.328+0000\tdemux finishing when there are still outs (1)\n2023-09-14T09:34:12.328+0000\tdemux finishing (err:corruption found in archive; ParserConsumer.BodyBSON() ( corruption found in archive; bson (size: 2458, byte: 103) doesn't end with a null byte ))\n2023-09-14T09:34:12.328+0000\tending restore routine with id=0, no more work to do\n2023-09-14T09:34:12.328+0000\tFailed: quiltmc_modmail.config: error restoring from archive 'backup-dump.mongodump': reading bson input: error demultiplexing archive; archive io error\n2023-09-14T09:34:12.328+0000\t0 document(s) restored successfully. 
0 document(s) failed to restore.\nroot@cec8a2653bef:/# mongorestore -h localhost -u root -p aI73yBJzrItJcZxq2Nr0 --archive=\"backup-dump.mongodump\" -vvvvvvv\n2023-09-14T09:35:40.658+0000\tusing write concern: &{majority false 0}\n2023-09-14T09:35:40.674+0000\tchecking options\n2023-09-14T09:35:40.674+0000\t\tdumping with object check disabled\n2023-09-14T09:35:40.674+0000\twill listen for SIGTERM, SIGINT, and SIGKILL\n2023-09-14T09:35:40.675+0000\tconnected to node type: standalone\n2023-09-14T09:35:40.685+0000\tarchive prelude quiltmc_modmail.logs\n2023-09-14T09:35:40.685+0000\tarchive prelude quiltmc_modmail.config\n2023-09-14T09:35:40.687+0000\tarchive format version \"0.1\"\n2023-09-14T09:35:40.687+0000\tarchive server version \"6.0.2\"\n2023-09-14T09:35:40.687+0000\tarchive tool version \"100.6.0\"\n2023-09-14T09:35:40.689+0000\tpreparing collections to restore from\n2023-09-14T09:35:40.689+0000\tusing as dump root directory\n2023-09-14T09:35:40.689+0000\treading collections for database quiltmc_modmail in quiltmc_modmail\n2023-09-14T09:35:40.690+0000\tfound collection quiltmc_modmail.logs bson to restore to quiltmc_modmail.logs\n2023-09-14T09:35:40.690+0000\tfound collection metadata from quiltmc_modmail.logs to restore to quiltmc_modmail.logs\n2023-09-14T09:35:40.690+0000\tadding intent for quiltmc_modmail.logs\n2023-09-14T09:35:40.690+0000\tfound collection quiltmc_modmail.config bson to restore to quiltmc_modmail.config\n2023-09-14T09:35:40.690+0000\tfound collection metadata from quiltmc_modmail.config to restore to quiltmc_modmail.config\n2023-09-14T09:35:40.690+0000\tadding intent for quiltmc_modmail.config\n2023-09-14T09:35:40.699+0000\tdemux namespaceHeader: {quiltmc_modmail config false 0}\n2023-09-14T09:35:40.699+0000\treceived quiltmc_modmail.config from namespaceChan\n2023-09-14T09:35:40.699+0000\tfirst non special collection quiltmc_modmail.config found. 
The demultiplexer will handle it and the remainder\n2023-09-14T09:35:40.699+0000\treading metadata for quiltmc_modmail.config from archive 'backup-dump.mongodump'\n2023-09-14T09:35:40.699+0000\treading metadata for quiltmc_modmail.logs from archive 'backup-dump.mongodump'\n2023-09-14T09:35:40.699+0000\trestoring up to 4 collections in parallel\n2023-09-14T09:35:40.699+0000\tstarting restore routine with id=3\n2023-09-14T09:35:40.699+0000\tstarting restore routine with id=1\n2023-09-14T09:35:40.699+0000\tdemux Open for quiltmc_modmail.config\n2023-09-14T09:35:40.700+0000\tstarting restore routine with id=2\n2023-09-14T09:35:40.700+0000\tstarting restore routine with id=0\n2023-09-14T09:35:40.701+0000\tdemux End\n2023-09-14T09:35:40.702+0000\trestoring to existing collection quiltmc_modmail.config without dropping\n2023-09-14T09:35:40.703+0000\tcollection quiltmc_modmail.config already exists - skipping collection create\n2023-09-14T09:35:40.703+0000\trestoring quiltmc_modmail.config from archive 'backup-dump.mongodump'\n2023-09-14T09:35:40.703+0000\tusing 1 insertion workers\n2023-09-14T09:35:40.715+0000\tfinished restoring quiltmc_modmail.config (0 documents, 0 failures)\n2023-09-14T09:35:40.715+0000\tdemux finishing when there are still outs (1)\n2023-09-14T09:35:40.715+0000\tending restore routine with id=0, no more work to do\n2023-09-14T09:35:40.715+0000\tdemux finishing (err:corruption found in archive; ParserConsumer.BodyBSON() ( corruption found in archive; bson (size: 2458, byte: 103) doesn't end with a null byte ))\n2023-09-14T09:35:40.715+0000\tending restore routine with id=2, no more work to do\n2023-09-14T09:35:40.715+0000\tFailed: quiltmc_modmail.config: error restoring from archive 'backup-dump.mongodump': reading bson input: error demultiplexing archive; archive io error\n2023-09-14T09:35:40.715+0000\t0 document(s) restored successfully. 0 document(s) failed to restore.\ncorruption found in archive;bson (size: 2458, byte: 103)", "text": "I originally assumed that the maximum logging verbosity was 4, but now I’ve found it goes up to five. Here is the new log:I now see in my logs the same corruption found in archive; error present on the MongoDB issue. I think bson (size: 2458, byte: 103) is telling me where the error is occurring, which is the other thing I couldn’t figure out, however, I don’t know how to interpret it.", "username": "Southpaw_1496" }, { "code": "\\x0d", "text": "I was working with some larger mongodumps and it does appear that the corruption is more that just a log line.In the test file I created the was a \\x0d inserted in the midst of a document. Fixing a dump corrupted in this fashion I think will take a long time and a log of effort.", "username": "chris" }, { "code": "x0d", "text": "The dump I have is around 18.5MB, I’m not sure whether that’s considered to be a large dump or not.I can see a few x0ds in my dump, the hard part is knowing whether they’re supposed to be there or not.I would be prepared to go through the bad ones one by one if I knew where they were. 
Is there any way to tell when it fails at what byte the error occurs?", "username": "Southpaw_1496" }, { "code": "x0d\\xffffffff# pip install pymongo to get the mongo implementation of bson\nimport bson\nimport struct\n\nMAGIC=b'\\x6d\\xe2\\x99\\x81'\n\nwith open('dump.archive','br') as dump:\n # read magic bytes\n magic = dump.read(4)\n if not magic == MAGIC:\n raise SystemExit(\"Not a mongodump archive\")\n while True:\n pos = dump.tell()\n # read document length\n docLen, = struct.unpack_from('<i', dump.read(4))\n if docLen == -1:\n continue\n dump.seek(pos)\n docStart = dump.peek()[:16]\n # read the document\n try:\n bson.decode(dump.read(docLen))\n except bson.errors.InvalidBSON as e:\n print(f'{pos:#08x} {e} {docStart[:16]}')\n raise SystemExit(1)\n\n\\x0a\\x0dmongorestore# remove added \\x0d before \\x0a and remove loglines starting with timestamp 2023-\nbbe -e 's/\\r\\n/\\n/' dump.bad | sed -z 's/2023-.\\+\\n//g' >dump.archive\n#remove added `\\x0a` at EOF\ntruncate -s -1 dump.archive\n", "text": "I can see a few x0ds in my dump, the hard part is knowing whether they’re supposed to be there or not.Yes exactly, it could be valid data or an anomaly. I was writing something in python to try and clean it up when I came across it.After reading the magic bytes the rest of a proper archive is documents with separations of terminator bytes \\xffffffff.The python below can help locate corruption, it could be the document before the byte location reported though.For the dump file I have create I have determined that any \\x0a byte has had a \\x0d prefixed to it in addition to the loglines from the dump.The below steps actually cleaned it up really good and I could mongorestore it.", "username": "chris" }, { "code": "truncate -s -1 dump.archive2023-09-16T18:21:45.563+0000\tpreparing collections to restore from\n2023-09-16T18:21:45.576+0000\treading metadata for quiltmc_modmail.config from archive './dump.archive'\n2023-09-16T18:21:45.576+0000\treading metadata for quiltmc_modmail.logs from archive './dump.archive'\n2023-09-16T18:21:45.579+0000\trestoring to existing collection quiltmc_modmail.config without dropping\n2023-09-16T18:21:45.579+0000\trestoring quiltmc_modmail.config from archive './dump.archive'\n2023-09-16T18:21:45.589+0000\tfinished restoring quiltmc_modmail.config (2 documents, 0 failures)\n2023-09-16T18:21:45.590+0000\trestoring to existing collection quiltmc_modmail.logs without dropping\n2023-09-16T18:21:45.590+0000\trestoring quiltmc_modmail.logs from archive './dump.archive'\n2023-09-16T18:21:45.632+0000\tfinished restoring quiltmc_modmail.logs (0 documents, 0 failures)\n2023-09-16T18:21:45.632+0000\tFailed: quiltmc_modmail.logs: error restoring from archive './dump.archive': cannot transform type bson.Raw to a BSON Document: not enough bytes available to read type. bytes=3 type=string\n2023-09-16T18:21:45.632+0000\t0 document(s) restored successfully. 
0 document(s) failed to restore.\nroot@62c1a0b36a15:/Dumps# mongorestore -h localhost -u root -p aI73yBJzrItJcZxq2Nr0 --archive=\"./dump.archive\" -vvvvv\n2023-09-16T18:22:04.482+0000\tusing write concern: &{majority false 0}\n2023-09-16T18:22:04.495+0000\tchecking options\n2023-09-16T18:22:04.495+0000\t\tdumping with object check disabled\n2023-09-16T18:22:04.495+0000\twill listen for SIGTERM, SIGINT, and SIGKILL\n2023-09-16T18:22:04.496+0000\tconnected to node type: standalone\n2023-09-16T18:22:04.512+0000\tarchive prelude quiltmc_modmail.logs\n2023-09-16T18:22:04.512+0000\tarchive prelude quiltmc_modmail.config\n2023-09-16T18:22:04.512+0000\tarchive format version \"0.1\"\n2023-09-16T18:22:04.513+0000\tarchive server version \"6.0.2\"\n2023-09-16T18:22:04.513+0000\tarchive tool version \"100.6.0\"\n2023-09-16T18:22:04.517+0000\tpreparing collections to restore from\n2023-09-16T18:22:04.517+0000\tusing as dump root directory\n2023-09-16T18:22:04.517+0000\treading collections for database quiltmc_modmail in quiltmc_modmail\n2023-09-16T18:22:04.517+0000\tfound collection quiltmc_modmail.logs bson to restore to quiltmc_modmail.logs\n2023-09-16T18:22:04.518+0000\tfound collection metadata from quiltmc_modmail.logs to restore to quiltmc_modmail.logs\n2023-09-16T18:22:04.519+0000\tadding intent for quiltmc_modmail.logs\n2023-09-16T18:22:04.519+0000\tfound collection quiltmc_modmail.config bson to restore to quiltmc_modmail.config\n2023-09-16T18:22:04.519+0000\tfound collection metadata from quiltmc_modmail.config to restore to quiltmc_modmail.config\n2023-09-16T18:22:04.519+0000\tadding intent for quiltmc_modmail.config\n2023-09-16T18:22:04.524+0000\tdemux namespaceHeader: {quiltmc_modmail config false 0}\n2023-09-16T18:22:04.524+0000\treceived quiltmc_modmail.config from namespaceChan\n2023-09-16T18:22:04.524+0000\tfirst non special collection quiltmc_modmail.config found. 
The demultiplexer will handle it and the remainder\n2023-09-16T18:22:04.524+0000\treading metadata for quiltmc_modmail.logs from archive './dump.archive'\n2023-09-16T18:22:04.524+0000\treading metadata for quiltmc_modmail.config from archive './dump.archive'\n2023-09-16T18:22:04.525+0000\trestoring up to 4 collections in parallel\n2023-09-16T18:22:04.525+0000\tstarting restore routine with id=3\n2023-09-16T18:22:04.525+0000\tstarting restore routine with id=1\n2023-09-16T18:22:04.525+0000\tstarting restore routine with id=2\n2023-09-16T18:22:04.525+0000\tstarting restore routine with id=0\n2023-09-16T18:22:04.525+0000\tdemux Open for quiltmc_modmail.config\n2023-09-16T18:22:04.528+0000\trestoring to existing collection quiltmc_modmail.config without dropping\n2023-09-16T18:22:04.528+0000\tcollection quiltmc_modmail.config already exists - skipping collection create\n2023-09-16T18:22:04.528+0000\trestoring quiltmc_modmail.config from archive './dump.archive'\n2023-09-16T18:22:04.528+0000\tusing 1 insertion workers\n2023-09-16T18:22:04.528+0000\tdemux namespaceHeader: {quiltmc_modmail config true -1616536111820191563}\n2023-09-16T18:22:04.530+0000\tcontinuing through error: E11000 duplicate key error collection: quiltmc_modmail.config index: _id_ dup key: { _id: ObjectId('608acdbda56c97501eda92ea') }\n2023-09-16T18:22:04.530+0000\tcontinuing through error: E11000 duplicate key error collection: quiltmc_modmail.config index: _id_ dup key: { _id: ObjectId('636c3de7b2cd74350b2c0670') }\n2023-09-16T18:22:04.539+0000\tfinished restoring quiltmc_modmail.config (0 documents, 2 failures)\n2023-09-16T18:22:04.539+0000\tdemux checksum for namespace quiltmc_modmail.config is correct (-1616536111820191563), 2496 bytes\n2023-09-16T18:22:04.539+0000\tdemux namespaceHeader: {quiltmc_modmail logs false 0}\n2023-09-16T18:22:04.539+0000\tdemux Open for quiltmc_modmail.logs\n2023-09-16T18:22:04.540+0000\trestoring to existing collection quiltmc_modmail.logs without dropping\n2023-09-16T18:22:04.540+0000\tcollection quiltmc_modmail.logs already exists - skipping collection create\n2023-09-16T18:22:04.540+0000\trestoring quiltmc_modmail.logs from archive './dump.archive'\n2023-09-16T18:22:04.540+0000\tusing 1 insertion workers\n2023-09-16T18:22:04.574+0000\tdemux End\n2023-09-16T18:22:04.579+0000\tfinished restoring quiltmc_modmail.logs (0 documents, 0 failures)\n2023-09-16T18:22:04.579+0000\tFailed: quiltmc_modmail.logs: error restoring from archive './dump.archive': cannot transform type bson.Raw to a BSON Document: not enough bytes available to read type. bytes=3 type=string\n2023-09-16T18:22:04.579+0000\t0 document(s) restored successfully. 0 document(s) failed to restore.\n", "text": "truncate -s -1 dump.archiveI have tried the script you recommended and the error is now different. This was the archive that I already tried modifying and not the original, so I will try with the original and report back.", "username": "Southpaw_1496" } ]
Mongorestore fails with error: `Failed: corruption found in archive; 858927154 is neither a valid bson length nor a archive terminator`
2023-09-02T19:25:47.854Z
Mongorestore fails with error: `Failed: corruption found in archive; 858927154 is neither a valid bson length nor a archive terminator`
1,033
null
[ "aggregation", "views" ]
[ { "code": " $addFields: {\n adjustedUTC: {\n $dateFromParts: {\n year: {\n $year: {\n date: new Date(),\n timezone: \"$timezone\",\n },\n },\n month: {\n $month: {\n date: new Date(),\n timezone: \"$timezone\",\n },\n },\n day: {\n $dayOfMonth: {\n date: new Date(),\n timezone: \"$timezone\",\n },\n },\n hour: {\n $hour: {\n date: \"$startDateUTC\",\n timezone: \"$timezone\",\n },\n },\n minute: {\n $minute: {\n date: \"$startDateUTC\",\n timezone: \"$timezone\",\n },\n },\n timezone: \"$timezone\",\n },\n },\n }\nadjustedUTC$dateFromPartstimezone$dateFromParts$dateFromPartstimezoneadjustedUTC$startDateUTC", "text": "In a View I’m developing, my aggregation includes:The adjustedUTC field is set by a $dateFromParts expression that needs to account for the timezone of the event data. This code is doing what I want, but all of the setting of timezone fields seems redundant when I set it for the entire $dateFromParts expression and each part field. I thought I should be able to remove the latter entries, but then the code fails. Likewise, if I remove the option for the entire $dateFromParts expression, the logic fails.What is the difference between these timezone options that makes all of them required?And second, I’d welcome other ideas to simplify the creation of adjustedUTC. It does need to account for both the current date and the time contained in $startDateUTC.", "username": "Tim_Rohrer" }, { "code": "", "text": "So you’re creating a new date, based off the current year, month and day, but the hours and minutes come from what you pass in within $startDateUTC?That looks suspiciously like you could have an edge case on the scenario when you cross a date boundary if the current time is say 10am but the startDateUTC is at 11pm.", "username": "John_Sewell" }, { "code": "adjustedUTC$$NOWstartDateUTCdateFromParts", "text": "The data is of recurring events that are only captured by weekday, local time, and timezone. This creates problems when regions (aka, timezones) change between DST and standard time.The adjustedUTC is reflects the current UTC time of the event based on $$NOW as the date, and the original time reflected in startDateUTC, thus accounting for DST in the subject timezone.I just don’t understand why every particular field and the overall dateFromParts expression requires the timezone to be set.", "username": "Tim_Rohrer" } ]
What is the purpose of these potentially redundant timezone fields?
2023-09-16T14:56:57.843Z
What is the purpose of these potentially redundant timezone fields?
289
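Editor's note on the $dateFromParts thread above: the inner timezone on each part operator ($year, $hour, …) converts the stored UTC instant to wall-clock parts in the event's zone before extraction, while the outer timezone on $dateFromParts tells it to interpret the assembled wall-clock parts in that zone and convert the result back to a UTC instant — which is why removing either breaks the logic. A minimal mongosh sketch (the "events" collection name is an assumption; the field names come from the thread):

    // Sketch of the semantics discussed above (assumes an "events" collection
    // with startDateUTC and timezone fields, as in the thread).
    db.events.aggregate([
      {
        $addFields: {
          adjustedUTC: {
            $dateFromParts: {
              // Inner timezone: convert the UTC instant to local wall-clock
              // parts in the event's zone before extracting each component.
              year:   { $year:       { date: "$$NOW", timezone: "$timezone" } },
              month:  { $month:      { date: "$$NOW", timezone: "$timezone" } },
              day:    { $dayOfMonth: { date: "$$NOW", timezone: "$timezone" } },
              hour:   { $hour:       { date: "$startDateUTC", timezone: "$timezone" } },
              minute: { $minute:     { date: "$startDateUTC", timezone: "$timezone" } },
              // Outer timezone: interpret the assembled parts as local time in
              // that zone and convert the result back to UTC.
              timezone: "$timezone"
            }
          }
        }
      }
    ])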
null
[ "node-js", "compass", "mongodb-shell", "mongoid-odm", "mongodb-world-2022" ]
[ { "code": "", "text": "Hello,Kindly please help on Auto-delete the whole Databases after a certain amount of time in mongodb compass.Thank you", "username": "raj_codonnier" }, { "code": "", "text": "Compass is just a ui to mongodb. If you want to say delete a database you’ll need to schedule something to run. Either on a server somewhere or if on Atlas should be able to use a scheduled trigger.Whats your use case for this?", "username": "John_Sewell" }, { "code": "", "text": "If what you’re looking for is to auto-delete the contents of a collection, you can Expire Data from Collections by Setting TTL.", "username": "alexbevi" }, { "code": " Created time-series collections to auto-delete.\n", "text": "Hello,OrIf you have custom condition after delete collection then use BATCH scripts to match your conditions and delete operationOrSet cron jobs to run on specific time and delete data", "username": "Bhavya_Bhatt" } ]
Auto-delete the whole Databases after a certain amount of time in mongodb compass
2023-09-16T11:24:22.743Z
Auto-delete the whole Databases after a certain amount of time in mongodb compass
312
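Editor's note on the auto-delete thread above: Compass is only a UI, so the deletion itself has to come from the server (a TTL index) or from a scheduled job/trigger, as the replies say. A minimal mongosh sketch of the TTL approach — the collection and field names here are placeholders, not from the thread:

    // TTL approach: documents expire roughly N seconds after the value in
    // "createdAt" (collection and field names are placeholders).
    db.myCollection.createIndex(
      { createdAt: 1 },
      { expireAfterSeconds: 3600 }   // remove about one hour after createdAt
    )

    // TTL removes documents, not databases. Dropping a whole database on a
    // schedule needs an external job or an Atlas scheduled trigger, e.g.:
    //   db.getSiblingDB("staging_db").dropDatabase()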
null
[ "mongodb-shell", "server", "database-tools" ]
[ { "code": "", "text": "I am using a Centos Server & have installed mongodb Database and it’s working fine means storing data perfectly but i haven’t any admin panel or export function in my project to I want to export my data for checking json or csv format. So, how to export please guide me or have any options then suggest me.", "username": "Test_Code" }, { "code": "", "text": "To export in MongoDB, see the command mongoexport.", "username": "steevej" }, { "code": "", "text": "Command to checkMongosh --helpMongoexport --uri=“database-uri” --collection=''collection-name to export\" --out=“path to our document”", "username": "Bhavya_Bhatt" } ]
How to Export or Import Csv or Json file using command in MongoDB's Database Centos Server or Linux Server?
2023-09-15T09:55:54.496Z
How to Export or Import Csv or Json file using command in MongoDB's Database Centos Server or Linux Server?
342
null
[ "node-js" ]
[ { "code": " exports = function(changeEvent) {\n \n const { PubSub } = require('@google-cloud/pubsub');\n const atob = require('JSON');\n \n // Set up Google Cloud Pub/Sub credentials\n const projectId = 'xxxxs';\n const topicName = 'xxxx8';\n const keyFilename = context.values.get('GOOGLE_APPLICATION_CREDENTIALS');\n // Create a Pub/Sub client with credentials\n const pubsub = new PubSub({ projectId, keyFilename });\n \n // Publish a message to the Pub/Sub topic\n try {\n const message = JSON.stringify(changeEvent.fullDocument);\n const dataBuffer = Buffer.from(message);\n const topic = pubsub.topic(topicName);\n return topic.publish(dataBuffer)\n .then((messageId) => console.log(`Message ${messageId} published.`))\n .catch((err) => console.error(`Error publishing message: ${err}`));\n } catch (err) {\n console.error(`Error converting changeEvent to JSON: ${err}`);\n }\n };\n", "text": "this is the code i am using:Error:\nError publishing message: FunctionError: TypeError: The “path” argument must be of type string. Received type object", "username": "Yeickson_Mendoza" }, { "code": "return topic.publish(dataBuffer)return topic.publish(dataBuffer.toString())", "text": "return topic.publish(dataBuffer)Maybe return topic.publish(dataBuffer.toString())", "username": "Jack_Woehr" }, { "code": "const keyFilename = JSON.parse(context.values.get('GOOGLE_APPLICATION_CREDENTIALS')); const { PubSub } = require('@google-cloud/pubsub');\n const atob = require('JSON');\n \n // Set up Google Cloud Pub/Sub credentials\n const projectId = 'xxx';\n const topicName = 'xxxxx';\n const keyFilename = JSON.parse(context.values.get('GOOGLE_APPLICATION_CREDENTIALS'));\n // Create a Pub/Sub client with credentials\n const pubsub = new PubSub({ projectId, keyFilename });\n \n // Publish a message to the Pub/Sub topic\n try {\n const message = JSON.stringify(changeEvent.fullDocument);\n const dataBuffer = Buffer.from(message);\n const topic = pubsub.topic(topicName);\n return topic.publish(dataBuffer.toString())\n .then((messageId) => console.log(`Message ${messageId} published.`))\n .catch((err) => console.error(`Error publishing message: ${err}`));\n } catch (err) {\n console.error(`Error converting changeEvent to JSON: ${err}`);\n }\n };\n\nDo you know what is the way to connect to pubsub?\n\nThanks\n", "text": "topic.publish(dataBuffer.toString())thanks for your answer, now it gives me another error when it does the const keyFilename = JSON.parse(context.values.get('GOOGLE_APPLICATION_CREDENTIALS')); gives the following error:Mistake:\nSyntaxError: invalid character ‘o’ looking for beginning of value\nexports = function(changeEvent) {new code:", "username": "Yeickson_Mendoza" }, { "code": "Do you know what is the way to connect to pubsub?", "text": "Do you know what is the way to connect to pubsub?I’ve found it very difficult. I use Apache Camel.", "username": "Jack_Woehr" }, { "code": "", "text": "I posted the same question on Stack Overflow hoping it will receive a bit more attention there. @Yeickson_Mendoza did you end up resolving your problem?", "username": "Justus_Voigt" }, { "code": "", "text": "To send event changes of a collection to Google Cloud Pub/Sub, you can use a combination of services and technologies. Here’s a general guide on how to achieve this:", "username": "Jack_Wilson_N_A" }, { "code": "", "text": "Dear @garagedoor_high_land,congratulation for hiding your SPAM in a ChatGPT useless reply. 
I have started following you to make I get notify in your next attempt.", "username": "steevej" }, { "code": "", "text": "I’ve found it very difficult. I use Apache Camel.I’ve found it very difficult. I use Apache Camel.", "username": "John_Harry1" } ]
How do I send the event changes of a collection to pub/sub?
2023-05-08T14:40:09.528Z
How do I send the event changes of a collection to pub/sub?
1,237
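Editor's note on the Pub/Sub thread above: the original TypeError suggests the client expected a key-file path while the function passed the key's JSON contents. One option in environments without a usable filesystem — a sketch only, and not verified inside Atlas Functions, where the gRPC-based client may not run at all — is to pass the parsed service-account JSON through the client's credentials option instead of keyFilename. The environment variable name and topic name below are assumptions:

    // Hypothetical plain Node.js sketch (not confirmed for Atlas Functions):
    // pass the parsed service-account JSON as `credentials`, not a file path.
    const { PubSub } = require("@google-cloud/pubsub");

    const serviceAccount = JSON.parse(process.env.GCP_SERVICE_ACCOUNT_JSON); // assumed env var
    const pubsub = new PubSub({
      projectId: serviceAccount.project_id,
      credentials: {
        client_email: serviceAccount.client_email,
        private_key: serviceAccount.private_key,
      },
    });

    async function publishChange(fullDocument) {
      const data = Buffer.from(JSON.stringify(fullDocument));
      // publishMessage is the current API in newer library versions.
      const messageId = await pubsub.topic("my-topic").publishMessage({ data });
      console.log(`Message ${messageId} published.`);
    }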
null
[ "mongodb-shell" ]
[ { "code": "", "text": "Hi, I installed Mongodb 7. When I try to connect to a different port it is not working. I was wondering if you can help me with this. Here is the example: mongosh --port 28015Current Mongosh Log ID: 6505ba1b299d74abb10b364c\nConnecting to: mongodb://127.0.0.1:28015/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.6\nMongoNetworkError: connect ECONNREFUSED 127.0.0.1:28015", "username": "john_Doe3" }, { "code": "", "text": "Looks fine.The problem appears to be there is no server listening on that address:port.", "username": "chris" }, { "code": "", "text": "many thanks for your quick reply. I first used >mongod --port 27000\nAnd it seems that mongod can not connect to the port. here is part of the output:\ncache for shutdown\"}\n{“t”:{“$date”:“2023-09-16T11:30:44.011-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:20565, “ctx”:“initandlisten”,“msg”:“Now exiting”}\n{“t”:{“$date”:“2023-09-16T11:30:44.012-04:00”},“s”:“I”, “c”:“CONTROL”, “id”:23138, “ctx”:“initandlisten”,“msg”:“Shutting down”,“attr”:{“exitCode”:100}}C:\\Users\\faghihi>", "username": "john_Doe3" }, { "code": "\"s\":\"E\"\"s\":\"F\"\\data\\db", "text": "Not enough there to tell why that exited, it might be further back in the log file.Lines with \"s\":\"E\" or \"s\":\"F\" are good bet.As there is no dbpath on the command line the default will be used(\\data\\db), if there is no directory or adequte permisssions to it that could explain why.", "username": "chris" }, { "code": "", "text": "I did restart the computer and weirdly it is working. Thank you so much for your help!!!", "username": "john_Doe3" } ]
Using different port with mongosh
2023-09-16T14:29:13.955Z
Using different port with mongosh
370
null
[ "aggregation", "mongodb-shell" ]
[ { "code": "// mongosh\nconf = rs.conf();\nconf.members[0].tags = { \"myTag\": \"0\" };\nconf.members[1].tags = { \"myTag\": \"1\" };\nconf.members[2].tags = { \"myTag\": \"2\" };\nrs.reconfig(conf);\n// pseudocode\ni := 0\npipelines := [pipeline1, pipeline2, pipeline3, ... ]\nFOR pipeline IN pipelines\n i := (i + 1) % 3\n async {\n collection.secondaryPreferred.tagSet({ myTag: i }).aggregate(pipeline)\n }\n", "text": "I have a large dataset on which I need to perform aggregation pipelines in batch workflows and I was considering utilizing secondary nodes for these aggregations.I conducted a test by assigning a label to each node, and in the code, I use the label to distribute the load in round-robin fashion on three nodes.Do you think this technique is appropriate?\nAre there any reasons why I shouldn’t do it?Thank you very much.", "username": "Giacomo_Benvenuti" }, { "code": "", "text": "Hi @Giacomo_Benvenuti,\nThe data will have to be consistent across all three nodes if you want to use this method.From documentation:Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Hello Fabio, thank you, certainly the data that the pipelines will work on must be “cold,” meaning already aligned among all the nodes.", "username": "Giacomo_Benvenuti" } ]
Distribute aggregations in round-robin mode across replicas
2023-09-15T09:10:29.001Z
Distribute aggregations in round-robin mode across replicas
329
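Editor's note on the round-robin thread above: the pseudocode maps fairly directly onto the Node.js driver's ReadPreference with tag sets. A sketch under the thread's own caveat (the data must already be replicated to all secondaries); the database and collection names are assumptions:

    // Node.js driver sketch of routing each pipeline to a tagged secondary.
    const { MongoClient, ReadPreference } = require("mongodb");

    async function runAll(pipelines) {
      const client = await MongoClient.connect(process.env.MONGODB_URI);
      const coll = client.db("analytics").collection("events"); // assumed names

      await Promise.all(
        pipelines.map((pipeline, i) => {
          // Send pipeline i to the secondary tagged myTag = "0", "1" or "2".
          const readPreference = new ReadPreference("secondaryPreferred", [
            { myTag: String(i % 3) },
          ]);
          return coll.aggregate(pipeline, { readPreference }).toArray();
        })
      );

      await client.close();
    }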
null
[]
[ { "code": "", "text": "Hello guysLets say you have a document and this document has an array, you want this document to automatically get deleted on the collection as soon as the array runs/gets emptyHow do u do that? Because i have some documents with empty arrays still hanging around but i want them deleted automatically when that happensPleasd help", "username": "Tumelo_waheng" }, { "code": "", "text": "Hi @Tumelo_waheng welcome to the community!Lets say you have a document and this document has an array, you want this document to automatically get deleted on the collection as soon as the array runs/gets emptyI don’t believe we have such a feature at the moment. You can expire documents based on time using a TTL index but not based on arbitrary conditions. Currently you might need a scheduled process using db.collection.deleteMany() or implement the removal in the application side.In the meantime, if this is an important feature for you, please provide a feedback in the MongoDB Feedback Engine.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "In addition to what Kevin has stated above, you could possibly achieve the delete by using Database Triggers (if your deployment is on Atlas) or Change Streams.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Auto-delete the whole Databases after a certain amount of time in mongodb compass please help in.Thank you", "username": "raj_codonnier" } ]
Automatic document deletion
2022-10-15T15:31:28.716Z
Automatic document deletion
2,044
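Editor's note on the empty-array deletion thread above: a sketch of the two approaches the replies mention — a periodic deleteMany and a change-stream listener. Collection and field names are placeholders, and change streams require a replica set or Atlas deployment:

    // Node.js sketch (collection/field names are placeholders).
    const { MongoClient } = require("mongodb");
    const client = await MongoClient.connect(process.env.MONGODB_URI);
    const coll = client.db("shop").collection("orders");

    // 1) Scheduled cleanup: delete documents whose array is now empty.
    await coll.deleteMany({ items: { $size: 0 } });

    // 2) Change stream: delete as soon as an update leaves the array empty.
    const changeStream = coll.watch(
      [{ $match: { operationType: "update" } }],
      { fullDocument: "updateLookup" }
    );
    changeStream.on("change", async (event) => {
      const doc = event.fullDocument;
      if (doc && Array.isArray(doc.items) && doc.items.length === 0) {
        await coll.deleteOne({ _id: doc._id });
      }
    });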
null
[ "node-js", "atlas-triggers" ]
[ { "code": "", "text": "I want to know how to use database triggers on electron app; As if now, I have created the triggers on mongoDB and listening on logs.", "username": "Inderpal_Kaur" }, { "code": "", "text": "I want to know how to use database triggers on electron app; As if now, I have created the triggers on mongoDB and listening on logs.최신 뉴토끼 주소를 찾고 있다면 링크를 클릭하기만 하면 찾을 수 있도록 도와드리겠습니다.", "username": "Rana_Jee_N_A" } ]
How to read database triggers on electron app
2022-10-10T06:12:08.774Z
How to read database triggers on electron app
1,871
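Editor's note on the Electron trigger thread above: Atlas Database Triggers run server-side, so an Electron app would typically open a change stream against the collection and react locally. A minimal Node.js sketch, usable from an Electron main process; the URI, database and collection names are assumptions, and a replica set (or Atlas) is required for change streams:

    // Minimal change-stream listener (names are assumptions).
    const { MongoClient } = require("mongodb");

    async function watchChanges() {
      const client = await MongoClient.connect(process.env.MONGODB_URI);
      const coll = client.db("app").collection("events");

      const changeStream = coll.watch([], { fullDocument: "updateLookup" });
      changeStream.on("change", (change) => {
        // Forward to the renderer process, update local state, etc.
        console.log(change.operationType, change.documentKey);
      });
    }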
null
[ "database-tools", "backup", "atlas" ]
[ { "code": "--eval \"db.getCollectionNames().forEach(collection_name=>db[collection_name].deleteMany({}))\"\nerror: E11000 duplicate key error collection: __realm_sync_63750a5292e8ebc57ad100ee.client_meta_version_counter index: _id_ dup key:\n", "text": "Goal: DevOps pipeline that will drop Collections in target system, then refresh them with an import from a source system.I tried doing mongodump and mongorestore, but those break Device Sync replication, so not an option. Next idea was to just drop the data inside the Collections to preserve the schema, then restore from Source database. This would ensure the import would be a non-breaking change that would not break replication.However, the problem with this is the index still preserves the old values from the original data.Dropping the index would be a Breaking change, and MongoDB doesn’t let you drop an index with ID in it anyway.So I’m not sure what the best path forward here is. If there is a way to “clear” the index without dropping it, that would be great. If that’s not possible, any other suggestions would be appreciated. These are small config tables, so we thought it was a cleaner path forward to move the entire table between environments, instead of trying to piece them together with updates, inserts, etc.Thanks for any help.", "username": "stack_engineering" }, { "code": "", "text": "Answered my own question again lol. I guess the key point I didn’t mention here was I was using mongrestore to import the new data. That process will also build the indexes as well, so that’s why I got the issue.I used mongexport/mongoimport and now good to go !", "username": "stack_engineering" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Any way to clear values in an index without dropping it?
2023-09-16T01:07:02.563Z
Any way to clear values in an index without dropping it?
381
null
[ "queries" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"5ca4bbcea2dd94ee58162a6d\"\n },\n \"username\": \"gregoryharrison\",\n \"name\": \"Natalie Ford\",\n \"address\": \"17677 Mark Crest\\nWalterberg, IA 39017\",\n \"birthdate\": {\n \"$date\": \"1996-09-13T17:14:27.000Z\"\n },\n \"email\": \"[email protected]\",\n \"accounts\": [\n 904260,\n 565468\n ],\n \"tier_and_details\": {\n \"69f8b6a3c39c42edb540499ee2651b75\": {\n \"tier\": \"Bronze\",\n \"benefits\": [\n \"dedicated account representative\",\n \"airline lounge access\"\n ],\n \"active\": true,\n \"id\": \"69f8b6a3c39c42edb540499ee2651b75\"\n },\n \"c85df12c2e394afb82725b16e1cc6789\": {\n \"tier\": \"Bronze\",\n \"benefits\": [\n \"airline lounge access\"\n ],\n \"active\": true,\n \"id\": \"c85df12c2e394afb82725b16e1cc6789\"\n },\n \"07d516cfd7fc4ec6acf175bb78cb98a2\": {\n \"tier\": \"Gold\",\n \"benefits\": [\n \"dedicated account representative\"\n ],\n \"active\": true,\n \"id\": \"07d516cfd7fc4ec6acf175bb78cb98a2\"\n }\n }\n}\n", "text": "Dear All,\nI’m new to MongoDB. I’m trying to built query to access all the customers whose “tier” is “Bronze”. Below is the sample document. Any help is really appreciated.", "username": "faas_nads" }, { "code": "{\n \"_id\": {\n \"$oid\": \"5ca4bbcea2dd94ee58162a6d\"\n },\n \"username\": \"gregoryharrison\",\n \"name\": \"Natalie Ford\",\n \"address\": \"17677 Mark Crest\\nWalterberg, IA 39017\",\n \"birthdate\": {\n \"$date\": \"1996-09-13T17:14:27.000Z\"\n },\n \"email\": \"[email protected]\",\n \"accounts\": [\n 904260,\n 565468\n ],\n \"tier_and_details\": [\n {\n \"tier\": \"Bronze\",\n \"benefits\": [\n \"dedicated account representative\",\n \"airline lounge access\"\n ],\n \"active\": true,\n \"id\": \"69f8b6a3c39c42edb540499ee2651b75\"\n },\n {\n \"tier\": \"Bronze\",\n \"benefits\": [\n \"airline lounge access\"\n ],\n \"active\": true,\n \"id\": \"c85df12c2e394afb82725b16e1cc6789\"\n },\n {\n \"tier\": \"Gold\",\n \"benefits\": [\n \"dedicated account representative\"\n ],\n \"active\": true,\n \"id\": \"07d516cfd7fc4ec6acf175bb78cb98a2\"\n }\n ]\n}\n\ndb.getCollection('test').find({'tier_and_details.tier':'Bronze'})\n", "text": "Your storage of the tier_and_details seems a little strange, are you using the ID of the relationship between the two as a key?\nYou may be better having the tier_and_details as an array of items, each of which has the _id field (which you already have anyway).With the data in this shape it’ll be hard to look for data as the path is different on every document, unless I’m missing something?The you could do:If you wanted to check for people with Bronze that are also active you could then use\n$elemMatch.", "username": "John_Sewell" }, { "code": "db.getCollection('test').find({'tier_and_details.tier':'Bronze'})", "text": "db.getCollection('test').find({'tier_and_details.tier':'Bronze'})Dear John,\nThanks for quick support.\nAs mentioned earlier I’m new to MongoDB. I got the data in json file and I imported it using import utility without any errors. Hence thought this also a valid document structure.I’ll try to fix the document structure by putting all the subdocuments as array.Thanks again.\nBest Regards,\nFN", "username": "faas_nads" } ]
Find all documents based on value in subdocument elements
2023-09-15T07:17:16.233Z
Find all documents based on value in subdocument elements
173
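Editor's note on the subdocument-query thread above: if reshaping the documents into an array (as the reply suggests) is not an option, the dynamic keys can still be queried as they stand with $objectToArray. A sketch against the sample document; the collection name "customers" is an assumption:

    // Query the dynamic-key map without reshaping, via $objectToArray
    // (collection name "customers" is an assumption).
    db.customers.aggregate([
      {
        $match: {
          $expr: {
            $in: [
              "Bronze",
              {
                $map: {
                  // $ifNull guards documents where the map field is missing.
                  input: { $objectToArray: { $ifNull: ["$tier_and_details", {}] } },
                  in: "$$this.v.tier",
                },
              },
            ],
          },
        },
      },
    ])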
null
[]
[ { "code": "import { MongoClient } from 'mongodb'\n\nconst url = process.env.MONGODB_URI \n\nMongoClient.connect(url, { useNewUrlParser: true, useUnifiedTopology: true },(err, db)=>{\n console.log(url)\n db.close()\n})\nimport mongoose from 'mongoose'\n\nconst url = process.env.MONGODB_URI \nmongoose.Promise = global.Promise\nmongoose.connect(url, { useNewUrlParser: true, useCreateIndex: true, useUnifiedTopology: true })\nmongoose.connection.on('error', () => {\n throw new Error(`unable to connect to database: ${url}`)\n})\nwebpack://HappyHourWeb/./server/server.js?:29\n throw new Error(`unable to connect to database: ${_config_config__WEBPACK_IMPORTED_MODULE_0__[\"default\"].mongoUri}`)\n ^\nError: unable to connect to database: my_database_url,\n at NativeConnection.eval (webpack://HappyHourWeb/./server/server.js?:29:9)\n at NativeConnection.emit (node:events:390:28)\n at /Users/Hieudo/Documents/Project/HappyHourWeb/node_modules/mongoose/lib/connection.js:807:30\n at processTicksAndRejections (node:internal/process/task_queues:78:11)\nmongodb+srv://<username>:<password>@<cluster>.vr5kw.mongodb.net/<dbname>?retryWrites=true&w=majority\n", "text": "So when I run my app in deployment, with the backend connecting to MongoDB using MongoClient as follow:everything works fine. But if I change it intoit gives the following error:My URI is in the form:Any help is greatly appreciated!", "username": "Hieu_Do_23" }, { "code": "", "text": "did you find any resolution for above error?", "username": "Rishabh_Majithiya" } ]
MongoDB can be connected with MongoClient but not mongoose
2021-12-23T04:20:53.185Z
MongoDB can be connected with MongoClient but not mongoose
3,332
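Editor's note on the Mongoose thread above: the posted error comes from the thread author's own error handler, which hides the underlying failure. A hedged sketch that surfaces the real error instead — it assumes Mongoose 6 or later, where useNewUrlParser/useUnifiedTopology are no longer needed and useCreateIndex in particular is rejected, so the legacy options can simply be dropped:

    // Log the actual connection error rather than a generic message
    // (assumes Mongoose 6+; legacy connect options removed).
    import mongoose from "mongoose";

    const url = process.env.MONGODB_URI;

    mongoose
      .connect(url)
      .then(() => console.log("mongoose connected"))
      .catch((err) => {
        console.error("unable to connect to database:", err.message);
        process.exit(1);
      });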
https://www.mongodb.com/…9d3e510ea92.jpeg
[ "aggregation" ]
[ { "code": "{\n _id: ObjectId(\"650305a48a9b68934923a0f1\"),\n ename: 'Employee4327067',\n age: 57,\n salary: 18431,\n department: 'Department23'\n}\n {\n $group: {\n _id: \"$department\",\n salary: {\n $sum: \"$salary\",\n },\n },\n }\n", "text": "Hi there,I’m having a performance test between PostgreSQL and MongoDB.\nUsing a collection “employee” with 10 000 000 documents.I use $group stage to group by department0 index keys examined while I created an index department/salary and the query is to slowDo you have an idea ?Regards", "username": "Emmanuel_Bernard1" }, { "code": "", "text": "If you have an index on that, then sort before you group by that field", "username": "John_Sewell" }, { "code": "", "text": "Hi @John_Sewell ,Yes I have an index\ndepartment_1_salaray_11276×101 10.4 KB\n", "username": "Emmanuel_Bernard1" }, { "code": "", "text": "With $sort stage, the query uses the index but it is not really faster\nSlower than Postgres Without the $sort stage, the execution time is nearly the sameDo you think a search index could improve the query ?", "username": "Emmanuel_Bernard1" }, { "code": "", "text": "I tested it locally with a collection of about 10M on a workstation with a slow SSD and the same index, took about 10s.How fast does PostgreSQL do the same operation on your hardware? I assume it’s the same box running both database servers?", "username": "John_Sewell" }, { "code": "", "text": "Hi @Emmanuel_Bernard1This kind of test can cause a huge confusion about databases.You are just putting millions of documents in a collection and doing one operation but we need to go deeper in this comparison. For you to have the benefits from any tool, there are several things that you must do, and MongoDB isn’t different.One of the most important things using MongoDB is modeling your data based on your needs and your model of documents looks terrible to get a sum of salaries by the department.Your query is loading all documents on memory to make the operation and it’s very bad to server do it.\nSo one of the things that you can solve your problem is to create another collection with data modeled based on what you need and solve this query. You will have data duplicated but for MongoDB designs, there is no problem with that.", "username": "Jennysson_Junior" }, { "code": "", "text": "I used Mongo Compass to connect to a M20 cluster.\nIt took about 13s.Does the execution time give a real information or execution statistics of the query plans ?\nDo I need to write some code to execute the query and return the result ?", "username": "Emmanuel_Bernard1" }, { "code": "", "text": "Hi @Jennysson_Junior,I’m agree with you \nI’m trying to explain to a colleague that he can’t use the same modeling approach for NoSQL and SQL.Your solution “create another collection with data modeled based on what you need and solve this query” is the good one.I’m sure MongoDB can’t be faster then PostgreSQL if we use the same “relational” modelThanks a lot", "username": "Emmanuel_Bernard1" }, { "code": "", "text": "A good solution could be using the Computed PatternBuilding with Patterns: The Computed Pattern | MongoDB", "username": "Emmanuel_Bernard1" } ]
Aggregation $group with 10 000 000 documents
2023-09-14T15:37:52.082Z
Aggregation $group with 10 000 000 documents
274
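Editor's note on the $group benchmark thread above: a sketch of the computed-pattern idea the thread converges on — precompute the per-department totals into a summary collection with $merge, so reads hit a handful of documents instead of scanning 10 million. The source collection name comes from the thread; "department_salaries" is an assumed name for the summary collection:

    // Materialize per-department salary totals (run on a schedule or after loads).
    db.employee.aggregate([
      { $group: { _id: "$department", salary: { $sum: "$salary" } } },
      { $merge: { into: "department_salaries", whenMatched: "replace", whenNotMatched: "insert" } }
    ])

    // Reads then go to the small precomputed collection:
    db.department_salaries.find()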
null
[ "node-js" ]
[ { "code": "", "text": "Hello,\nI have an electron app working with the Node Js SDK and now when I turn off the WIFI on my computer,\nmy requests on the SDK don’t answer anymore. The only error I get is “SyncError: Host not found (autoritative)”\nBased on a similar topic from over a year ago, this could be do to an internal issue on Atlas.\nAnyone else experiencing this? Could anyone from Atlas confirm is everything is okay?\nAm I doing anything wrong? it used to work fine offline.\nThanks.", "username": "Benoit_Werner" }, { "code": "", "text": "The reason why my request didn’t answer anymore was was because I’m using the library react-query and they changed a behaviour in their v4 that pauses by default the fetching when offline. It is described here : Network Mode | TanStack Query Docs This was easily fixable by setting a specific option.", "username": "Benoit_Werner" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
SyncError: Host not found (autoritative)
2023-08-29T13:30:59.087Z
SyncError: Host not found (autoritative)
445
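Editor's note on the SyncError thread above: the resolution was a TanStack Query v4 behaviour, where queries pause by default when the environment reports itself offline. A sketch of the option change the author describes — exact placement in the app is an assumption:

    // TanStack Query v4: stop pausing fetches while the browser reports "offline".
    import { QueryClient } from "@tanstack/react-query";

    const queryClient = new QueryClient({
      defaultOptions: {
        queries:   { networkMode: "always" },
        mutations: { networkMode: "always" },
      },
    });
    // Pass queryClient to <QueryClientProvider> as usual.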
null
[ "queries" ]
[ { "code": "", "text": "I have a collection with documents that form a linked list each document has a field parentID that point to the parent document id, the problem is I want to get a sorted array based on this list the first element has parentID equal to null and the next element in the array points to the previous one, can someone guide me if this is possible to do in one query without performing multiple queries", "username": "Ay.Be" }, { "code": "", "text": "Hi @Ay.Be,\nI’m afraid it doesn’t sound like a great idea. Handling a linked list in memory is easy, but it gets complicated in a database, especially when you have to make changes to the links. Maybe you can tell us why you want to do it this way, and we can check if there’s a better option.", "username": "Jack_Yang1" }, { "code": "", "text": "Hi @Jack_Yang1I have a product collection that I want a user to be able to sort. I thought of giving each product a rank to help with sorting but it gets complicated when they try to remove a product or change a product rank because I have to change all the products ranks that comes after it (1,2,3,4) if I want to insert a document between rank 1 and rank 2 the documents 2,3,4 will change to 3,4,5 , and so linked list might be to solve the issue here for example when I remove a document I only need to change the parentID of the next document to link to the document before the one I’d like to delete, or if I’m inserting a document I’ll just update the parentID of 2 documents. I hope you understood.", "username": "Ay.Be" }, { "code": "", "text": "take a look at $graphLookup, it might help", "username": "steevej" } ]
Query a linked list
2023-09-15T00:53:24.069Z
Query a linked list
229
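Editor's note on the linked-list thread above: a sketch of the $graphLookup suggestion — start from the head (parentID null), walk the chain, and order by traversal depth, since $graphLookup itself does not guarantee order. The collection name "products" is an assumption:

    // Rebuild the ordered list from the linked documents.
    db.products.aggregate([
      { $match: { parentID: null } },                 // the head of the list
      {
        $graphLookup: {
          from: "products",
          startWith: "$_id",
          connectFromField: "_id",
          connectToField: "parentID",
          as: "rest",
          depthField: "depth",
        },
      },
      { $unwind: "$rest" },
      { $sort: { "rest.depth": 1 } },
      // Emits the chain after the head, in order; prepend the head document
      // in the application if it is needed in the same result set.
      { $replaceRoot: { newRoot: "$rest" } },
    ])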
https://www.mongodb.com/…fb477a4b814a.png
[ "installation" ]
[ { "code": "", "text": "\nScreenshot from 2022-07-09 12-38-29880×352 61.2 KB\nvenkat@cal-on:~$ sudo apt install mongodb-org\nReading package lists… Done\nBuilding dependency tree… Done\nReading state information… Done\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:The following packages have unmet dependencies:\nmongodb-org-mongos : Depends: libssl1.1 (>= 1.1.0) but it is not installable\nmongodb-org-server : Depends: libssl1.1 (>= 1.1.0) but it is not installable\nmongodb-org-shell : Depends: libssl1.1 (>= 1.1.0) but it is not installable\nE: Unable to correct problems, you have held broken packages.", "username": "Venkat_ch1" }, { "code": "sudo apt install libssl-devsudo apt install libssl1.1", "text": "have you tried sudo apt install libssl-dev and if that isn’t enough try sudo apt install libssl1.1", "username": "Jack_Woehr" }, { "code": "", "text": "installed libssl-dev but when i try to install libssl1.1, displaying the below errorReading package lists… Done\nBuilding dependency tree… Done\nReading state information… Done\nPackage libssl1.1 is not available, but is referred to by another package.\nThis may mean that the package is missing, has been obsoleted, or\nis only available from another sourceE: Package ‘libssl1.1’ has no installation candidate", "username": "charles_dass" } ]
Can't install mongodb community edition on ubuntu
2022-07-09T07:13:16.425Z
Can't install mongodb community edition on ubuntu
6,594